# Reduction (complexity)

About: Reduction (complexity) is a research topic. Over its lifetime, 25,831 publications have been published on this topic, receiving 292,001 citations.

##### Papers published on a yearly basis

##### Papers

[...]

TL;DR: In this article, the problem of capacitor placement on a radial distribution system is formulated and a solution algorithm is proposed that considers the location, type, and size of the capacitors, voltage constraints, and load variations.

Abstract: The problem of capacitor placement on a radial distribution system is formulated and a solution algorithm is proposed. The location, type, and size of capacitors, voltage constraints, and load variations are considered. The objective of capacitor placement is peak power and energy loss reduction, taking into account the cost of the capacitors. The problem is formulated as a mixed integer programming problem. The power flows in the system are explicitly represented, and the voltage constraints are incorporated. A solution method has been implemented that decomposes the problem into a master problem and a slave problem. The master problem is used to determine the location of the capacitors. The slave problem is used by the master problem to determine the type and size of the capacitors placed on the system. In solving the slave problem, an efficient phase I-phase II algorithm is used.
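The master/slave decomposition can be caricatured at toy scale: the master picks capacitor locations, the slave picks the best standard size at each chosen node. The sketch below is illustrative only — the node loads, cost coefficients, and brute-force enumeration are invented stand-ins; the paper uses mixed integer programming with explicit power flows, not exhaustive search.

```python
from itertools import combinations

# Hypothetical per-node peak reactive load (kvar) a local capacitor could offset.
node_kvar = {1: 300, 2: 150, 3: 450, 4: 200}
cap_sizes = [150, 300, 450]      # assumed standard capacitor sizes (kvar)
cap_cost_per_kvar = 0.3          # assumed capacitor cost, $/kvar
loss_value_per_kvar = 1.0        # assumed loss-reduction value, $/kvar

def slave(nodes):
    """Slave problem: for fixed locations, choose the best size per node."""
    total, sizing = 0.0, {}
    for n in nodes:
        # Largest standard size that does not exceed the local load.
        best = max((s for s in cap_sizes if s <= node_kvar[n]), default=None)
        if best is None:
            continue
        total += (loss_value_per_kvar - cap_cost_per_kvar) * best
        sizing[n] = best
    return total, sizing

# Master problem: enumerate candidate location sets (feasible only at toy scale).
candidates = (slave(set(c)) + (set(c),)
              for r in range(len(node_kvar) + 1)
              for c in combinations(node_kvar, r))
net_benefit, sizing, locations = max(candidates, key=lambda t: t[0])
print(locations, sizing, round(net_benefit, 1))
```

At these assumed prices every node is worth compensating, so the master selects all four locations; with real voltage constraints and load curves the trade-off is far less trivial.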

1,610 citations

[...]

TL;DR: A novel algorithm is developed that is inspired by the Pohst enumeration strategy and is shown to offer a significant reduction in complexity compared to the Viterbo-Boutros sphere decoder and is supported by intuitive arguments and simulation results in many relevant scenarios.

Abstract: Maximum-likelihood (ML) decoding algorithms for Gaussian multiple-input multiple-output (MIMO) linear channels are considered. Linearity over the field of real numbers facilitates the design of ML decoders using number-theoretic tools for searching the closest lattice point. These decoders are collectively referred to as sphere decoders in the literature. In this paper, a fresh look at this class of decoding algorithms is taken. In particular, two novel algorithms are developed. The first algorithm is inspired by the Pohst enumeration strategy and is shown to offer a significant reduction in complexity compared to the Viterbo-Boutros sphere decoder. The connection between the proposed algorithm and the stack sequential decoding algorithm is then established. This connection is utilized to construct the second algorithm which can also be viewed as an application of the Schnorr-Euchner strategy to ML decoding. Aided with a detailed study of preprocessing algorithms, a variant of the second algorithm is developed and shown to offer significant reductions in the computational complexity compared to all previously proposed sphere decoders with a near-ML detection performance. This claim is supported by intuitive arguments and simulation results in many relevant scenarios.
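A minimal depth-first sphere decoder in the Schnorr-Euchner spirit, assuming the channel has already been reduced to an upper-triangular factor R by QR preprocessing (the kind of preprocessing the paper studies); the channel values and symbol set below are illustrative, not from the paper.

```python
import math

def sphere_decode(R, y, symbols):
    """Find x over `symbols` minimizing ||y - R x||^2, R upper-triangular."""
    n = len(y)
    best = {"d": math.inf, "x": None}

    def dfs(level, x, dist):
        if dist >= best["d"]:
            return                      # prune: partial cost already outside sphere
        if level < 0:
            best["d"], best["x"] = dist, x[:]
            return
        resid = y[level] - sum(R[level][j] * x[j] for j in range(level + 1, n))
        center = resid / R[level][level]
        # Schnorr-Euchner ordering: try symbols closest to the unconstrained
        # solution first, so the sphere radius shrinks quickly.
        for s in sorted(symbols, key=lambda s: abs(s - center)):
            x[level] = s
            inc = (resid - R[level][level] * s) ** 2
            dfs(level - 1, x, dist + inc)
        x[level] = None

    dfs(n - 1, [None] * n, 0.0)
    return best["x"]

R = [[2.0, 0.3], [0.0, 1.5]]            # illustrative triangular channel factor
true_x = [1, -1]                        # BPSK symbols
y = [sum(R[i][j] * true_x[j] for j in range(2)) for i in range(2)]
print(sphere_decode(R, y, symbols=(-1, 1)))   # → [1, -1] (noise-free case)
```

The enumeration is exact (ML) because the triangular structure makes each partial cost a lower bound on the full cost, so pruning never discards the optimum.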

1,376 citations

[...]

TL;DR: Different OFDM PAPR reduction techniques are reviewed and analyzed with respect to computational complexity, bandwidth expansion, spectral spillage, and performance, including PAPR reduction methods for multiuser OFDM broadband communication systems.

Abstract: One of the challenging issues for Orthogonal Frequency Division Multiplexing (OFDM) systems is their high Peak-to-Average Power Ratio (PAPR). In this paper, we review and analyze different OFDM PAPR reduction techniques based on computational complexity, bandwidth expansion, spectral spillage, and performance. We also discuss some methods of PAPR reduction for multiuser OFDM broadband communication systems.
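The quantity being reduced is easy to make concrete. A pure-Python sketch (naive inverse DFT, no library FFT) shows the worst case: when all N subcarriers carry the same symbol, the time-domain samples add coherently and the PAPR reaches 10 log10(N) dB.

```python
import cmath
import math

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def idft(X):
    """Naive inverse DFT: the OFDM modulator mapping subcarriers to time."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Worst case: identical BPSK symbols on all subcarriers add coherently.
N = 16
x = idft([1.0] * N)
print(round(papr_db(x), 1))   # → 12.0, i.e. 10*log10(16) dB
```

Every technique the paper surveys (clipping, coding, selective mapping, partial transmit sequences, and so on) trades some mix of complexity, bandwidth, and distortion to pull this peak down.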

1,358 citations

[...]

TL;DR: Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC1+ computer.

Abstract: We report on improved practical algorithms for lattice basis reduction. We present a variant of the L3-algorithm with “deep insertions” and a practical algorithm for blockwise Korkine-Zolotarev reduction, a concept extending L3-reduction that was introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC1+ computer.
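In two dimensions, L3-style basis reduction collapses to the classical Lagrange-Gauss algorithm: alternately size-reduce the longer vector against the shorter and swap. The sketch below shows only this two-dimensional core, not the paper's deep-insertion or blockwise Korkine-Zolotarev variants.

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction, the 2-D case of L3 lattice basis reduction.
    Returns a shortest possible basis of the lattice spanned by u and v."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                      # keep u as the shorter vector
        m = round(dot(u, v) / dot(u, u))     # nearest-integer size reduction
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

# A skewed basis of Z^2 reduces back to unit vectors (up to sign).
print(gauss_reduce((1, 1), (3, 4)))   # → ((-1, 0), (0, 1))
```

The same size-reduce-and-swap pattern, generalized to n dimensions with a quality condition on when to swap, is exactly what L3 does; the paper's contribution is making that generalization fast enough to break sizeable subset sum instances.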

1,334 citations

[...]

TL;DR: Of those algorithms that provide substantial storage reduction, the DROP algorithms have the highest average generalization accuracy in these experiments, especially in the presence of uniform class noise.

Abstract: Instance-based learning algorithms are often faced with the problem of deciding which instances to store for use during generalization. Storing too many instances can result in large memory requirements and slow execution speed, and can cause an oversensitivity to noise. This paper has two main purposes. First, it provides a survey of existing algorithms used to reduce storage requirements in instance-based learning algorithms and other exemplar-based algorithms. Second, it proposes six additional reduction algorithms called DROP1–DROP5 and DEL (three of which were first described in Wilson & Martinez, 1997c, as RT1–RT3) that can be used to remove instances from the concept description. These algorithms and 10 algorithms from the survey are compared on 31 classification tasks. Of those algorithms that provide substantial storage reduction, the DROP algorithms have the highest average generalization accuracy in these experiments, especially in the presence of uniform class noise.
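A much-simplified decremental rule in the spirit of the DROP family (not the exact DROP1-DROP5 criteria, which track associates and use k nearest neighbors): discard a stored instance if, without it, every remaining instance is still classified correctly by leave-one-out 1-NN. The toy dataset is invented.

```python
def nn1(store, x):
    """Label of the nearest stored instance (1-NN, squared Euclidean)."""
    return min(store, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

def loo_correct(store, inst):
    """Is `inst` classified correctly by 1-NN over `store` minus itself?"""
    others = [t for t in store if t is not inst]
    return bool(others) and nn1(others, inst[0]) == inst[1]

def reduce_instances(data):
    """Drop an instance when every remaining instance is still classified
    correctly without it (a simplified, DROP-like decremental pass)."""
    kept = list(data)
    for inst in list(data):
        trial = [t for t in kept if t is not inst]
        if trial and all(loo_correct(trial, t) for t in trial):
            kept = trial
    return kept

data = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
        ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
kept = reduce_instances(data)
print(len(kept))   # 6 instances reduced to 4 on this toy set
```

Even this crude rule shows the core trade-off the paper measures: interior points of each cluster can be dropped with no loss, while border points must be kept — and noise-tolerance comes from how carefully the removal criterion distinguishes the two.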

1,168 citations