
Showing papers on "Robustness (computer science) published in 2001"


Journal ArticleDOI
TL;DR: A more robust algorithm called MixtureMCL is developed, which integrates two complementary ways of generating samples in Monte Carlo Localization, and is applied to mobile robots equipped with range finders.

1,945 citations


Book
22 May 2001
TL;DR: This book studies stability and robustness of time-delay systems, covering stability sets and regions, reducible discrete delays, Liapunov's second method with LMIs, and robustness issues in closed loop.
Abstract: Preliminaries.- Examples.- Stability sets and regions.- Reducible discrete delays and LTIs.- Liapunov's second method and LMIs.- Robustness issues in closed-loop.- Applications.

1,825 citations


Journal ArticleDOI
TL;DR: A benchmark resource allocation problem is used to study decision making under model misspecification, leading to two robust control problems, recursive multiplier formulations, and the associated preference orderings.
Abstract: The following sections are included: Introduction.- A Benchmark Resource Allocation Problem.- Model Misspecification.- Two Robust Control Problems.- Recursivity of the Multiplier Formulation.- Two Preference Orderings.- Recursivity of the Preference Orderings.- Concluding Remarks.

1,239 citations


Proceedings ArticleDOI
Charu C. Aggarwal1, Philip S. Yu1
01 May 2001
TL;DR: New techniques for outlier detection which find the outliers by studying the behavior of projections from the data set are discussed.
Abstract: The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are high dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms use concepts of proximity in order to find outliers based on their relationship to the rest of the data. However, in high dimensional space, the data is sparse and the notion of proximity fails to retain its meaningfulness. In fact, the sparsity of high dimensional data implies that every point is an almost equally good outlier from the perspective of proximity-based definitions. Consequently, for high dimensional data, the notion of finding meaningful outliers becomes substantially more complex and non-obvious. In this paper, we discuss new techniques for outlier detection which find the outliers by studying the behavior of projections from the data set.
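As an illustrative sketch of the projection idea (not the paper's evolutionary search over projections), a grid cell in a low-dimensional projection can be scored by the standardized deviation of its count from the count expected under attribute independence; strongly negative scores flag abnormally sparse regions. The grid resolution and data below are hypothetical:

```python
import numpy as np

def sparsity_coefficient(count, n_total, phi, k):
    """Standardized deviation of a k-dimensional cell's count from the
    count expected under independence, with each attribute split into
    phi equi-depth ranges (fraction f = 1/phi each). Strongly negative
    values indicate abnormally sparse cells."""
    f = 1.0 / phi
    expected = n_total * f**k
    return (count - expected) / np.sqrt(n_total * f**k * (1.0 - f**k))

# Equi-depth 2-D grid over a hypothetical data set: count points per cell.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
phi = 5
edges = [np.quantile(X[:, j], np.linspace(0, 1, phi + 1)) for j in range(2)]
ix = np.clip(np.searchsorted(edges[0], X[:, 0]) - 1, 0, phi - 1)
iy = np.clip(np.searchsorted(edges[1], X[:, 1]) - 1, 0, phi - 1)
counts = np.zeros((phi, phi))
np.add.at(counts, (ix, iy), 1)
scores = sparsity_coefficient(counts, len(X), phi, k=2)
```

Cells with the most negative scores are the candidate outlier regions in this projection.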

1,132 citations


Proceedings ArticleDOI
22 Apr 2001
TL;DR: This work uses a previously developed nonlinear dynamic model of TCP to analyze and design active queue management (AQM) control systems using random early detection (RED) and presents guidelines for designing linearly stable systems subject to network parameters like propagation delay and load level.
Abstract: We use a previously developed nonlinear dynamic model of TCP to analyze and design active queue management (AQM) control systems using random early detection (RED). First, we linearize the interconnection of TCP and a bottlenecked queue and discuss its feedback properties in terms of network parameters such as link capacity, load and round-trip time. Using this model, we next design an AQM control system using the RED scheme by relating its free parameters such as the low-pass filter break point and loss probability profile to the network parameters. We present guidelines for designing linearly stable systems subject to network parameters like propagation delay and load level. Robustness to variations in system loads is a prime objective. We present ns simulations to support our analysis.
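The RED loss-probability profile and low-pass filter mentioned above can be sketched as follows; the thresholds and filter weight are illustrative values, not the paper's settings:

```python
def red_drop_prob(avg_q, min_th, max_th, max_p):
    """Piecewise-linear RED marking/drop profile: zero below min_th,
    linear up to max_p at max_th, then 1."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def ewma(avg_q, inst_q, w=0.002):
    """RED's averaged queue: a first-order low-pass filter whose weight
    w sets the break point discussed in the abstract."""
    return (1.0 - w) * avg_q + w * inst_q
```

For example, with min_th=10, max_th=20 and max_p=0.1, an average queue of 15 packets is dropped/marked with probability 0.05.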

974 citations


Book
01 Jan 2001
TL;DR: This book develops probabilistic and randomized methods for the analysis and control of uncertain systems, covering Monte Carlo methods, statistical learning theory, sequential algorithms for probabilistic robust design, and the scenario approach.
Abstract: Elements of Probability Theory.- Uncertain Linear Systems and Robustness.- Linear Robust Control Design.- Some Limits of the Robustness Paradigm.- Probabilistic Methods for Robustness.- Monte Carlo Methods.- Randomized Algorithms in Systems and Control.- Probability Inequalities.- Statistical Learning Theory and Control Design.- Sequential Algorithms for Probabilistic Robust Design.- Sequential Algorithms for LPV Systems.- Scenario Approach for Probabilistic Robust Design.- Random Number and Variate Generation.- Statistical Theory of Radial Random Vectors.- Vector Randomization Methods.- Statistical Theory of Radial Random Matrices.- Matrix Randomization Methods.- Applications of Randomized Algorithms

933 citations


Journal ArticleDOI
F. Zana1, J.-C. Klein1
TL;DR: An algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment is presented and its robustness and its accuracy with respect to noise are evaluated.
Abstract: This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is interesting for the computation of parameters related to blood flow. Its tree-like geometry makes it a usable feature for registration between images that can be of a different nature. In order to define vessel-like patterns, segmentation is performed with respect to a precise model. We define a vessel as a bright pattern, piece-wise connected, and locally linear. Mathematical morphology is very well adapted to this description; however, other patterns fit such a morphological description. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed. They are separated out as they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) linear pattern with Gaussian-like profile improvement; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background and illustrate it on real images of various natures, then evaluate its robustness and its accuracy with respect to noise.
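The morphological part of step (2) rests on the fact that a bright, locally linear pattern survives a grey-level opening along at least one orientation, while isolated bright noise survives none. A minimal sketch of such a supremum-of-linear-openings filter, with an illustrative structuring-element length and only four orientations, assuming SciPy:

```python
import numpy as np
from scipy import ndimage

def linear_footprints(length=9):
    """Flat linear structuring elements at 0, 90, and +/-45 degrees."""
    h = np.ones((1, length), bool)
    v = np.ones((length, 1), bool)
    d1 = np.eye(length, dtype=bool)
    d2 = np.fliplr(np.eye(length, dtype=bool))
    return [h, v, d1, d2]

def sup_of_openings(img, length=9):
    """Pixelwise maximum of grey openings with linear structuring
    elements: keeps bright patterns elongated in at least one tested
    direction, suppresses small isolated bright noise."""
    return np.max([ndimage.grey_opening(img, footprint=fp)
                   for fp in linear_footprints(length)], axis=0)

# Demo: a thin bright "vessel" survives, an isolated bright pixel does not.
img = np.zeros((30, 30))
img[15, 5:25] = 1.0
img[5, 5] = 1.0
enhanced = sup_of_openings(img, length=9)
```

A real implementation would use more orientations and follow this with the curvature evaluation of step (3).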

881 citations


Book
31 Dec 2001
TL;DR: This book examines evolutionary algorithms for optimization in dynamic environments, covering techniques for enabling continuous adaptation, the trade-off between adaptation cost and solution quality, and the search for robust and flexible solutions as a precaution against change.
Abstract: Preface. 1. Brief Introduction to Evolutionary Algorithms. Part I: Enabling Continuous Adaptation. 2. Optimization in Dynamic Environments. 3. Survey: State of the Art. 4. From Memory to Self-Organization. 5. Empirical Evaluation. 6. Summary of Part I. Part II: Considering Adaptation Cost. 7. Adaptation Cost vs. Solution Quality. Part III: Robustness and Flexibility - Precaution against Changes. 8. Searching for Robust Solutions. 9. From Robustness to Flexibility. 10. Summary and Outlook. References. Index.

812 citations


01 Mar 2001

698 citations


Book
30 Nov 2001
TL;DR: Stable Adaptive Neural Network Control offers an in-depth study of stable adaptive control designs using approximation-based techniques and presents rigorous analysis for system stability and control performance.
Abstract: While neural network control has been successfully applied in various practical applications, many important issues, such as stability, robustness, and performance, have not been extensively researched for neural adaptive systems. Motivated by the need for systematic neural control strategies for nonlinear systems, Stable Adaptive Neural Network Control offers an in-depth study of stable adaptive control designs using approximation-based techniques, and presents rigorous analysis for system stability and control performance. Both linearly parameterized and multi-layer neural networks (NN) are discussed and employed in the design of adaptive NN control systems for completeness. Stable adaptive NN control has been thoroughly investigated for several classes of nonlinear systems, including nonlinear systems in Brunovsky form, nonlinear systems in strict-feedback and pure-feedback forms, nonaffine nonlinear systems, and a class of MIMO nonlinear systems. In addition, the developed design methodologies are not only applied to typical example systems, but also to real application-oriented systems, such as the variable length pendulum system, the underactuated inverted pendulum system and nonaffine nonlinear chemical processes (CSTR).

665 citations


Proceedings ArticleDOI
01 Dec 2001
TL;DR: This paper presents a method for combining multiple images of a 3D object into a single model representation that provides for recognition of 3D objects from any viewpoint, the generalization of models to non-rigid changes, and improved robustness through the combination of features acquired under a range of imaging conditions.
Abstract: There have been important recent advances in object recognition through the matching of invariant local image features. However, the existing approaches are based on matching to individual training images. This paper presents a method for combining multiple images of a 3D object into a single model representation. This provides for recognition of 3D objects from any viewpoint, the generalization of models to non-rigid changes, and improved robustness through the combination of features acquired under a range of imaging conditions. The decision of whether to cluster a training image into an existing view representation or to treat it as a new view is based on the geometric accuracy of the match to previous model views. A new probabilistic model is developed to reduce the false positive matches that would otherwise arise due to loosened geometric constraints on matching 3D and non-rigid models. A system has been developed based on these approaches that is able to robustly recognize 3D objects in cluttered natural images in sub-second times.

Journal ArticleDOI
TL;DR: The different steps in a robustness test are discussed and illustrated with examples and recommendations for the different steps are based on approaches found in the literature, several case studies performed by the authors and discussions of the authors within a commission of the French SFSTP.

Journal ArticleDOI
TL;DR: This work presents a classical solution in terms of the parallel connection of a robust stabilizer and an internal model, where the latter is adaptively tuned to the device that reproduces the steady-state control necessary to maintain the output-zeroing condition.
Abstract: We address the problem of output regulation for nonlinear systems driven by a linear, neutrally stable exosystem whose frequencies are not known a priori. We present a classical solution in terms of the parallel connection of a robust stabilizer and an internal model, where the latter is adaptively tuned to the device that reproduces the steady-state control necessary to maintain the output-zeroing condition. We obtain robust regulation (i.e. in the presence of parameter uncertainties) with a semi-global domain of convergence for a significant class of nonlinear minimum-phase systems.

Proceedings ArticleDOI
29 Oct 2001
TL;DR: In this paper, the authors present an acoustic ranging system that performs well in the presence of many types of interference, but can return incorrect measurements in non-line-of-sight conditions.
Abstract: Many applications of robotics and embedded sensor technology can benefit from fine-grained localization. Fine-grained localization can simplify multi-robot collaboration, enable energy efficient multi-hop routing for low-power radio networks, and enable automatic calibration of distributed sensing systems. We focus on range estimation, a critical prerequisite for fine-grained localization. While many mechanisms for range estimation exist, any individual mode of sensing can be blocked or confused by the environment. We present and analyze an acoustic ranging system that performs well in the presence of many types of interference, but can return incorrect measurements in non-line-of-sight conditions. We then suggest how evidence from an orthogonal sensory channel might be used to detect and eliminate these measurements. The work illustrates the more general research theme of combining multiple modalities to obtain robust results.
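The core of acoustic time-of-flight ranging is locating the emitted code in the received signal by cross-correlation; multiplying the lag by the speed of sound gives the range. A minimal sketch with a hypothetical ±1 ranging code and illustrative sample rate:

```python
import numpy as np

def estimate_range(tx, rx, fs, c=343.0):
    """Estimate range (meters) by cross-correlating the received signal
    rx with the transmitted code tx; c is the speed of sound (~20 C)."""
    corr = np.correlate(rx, tx, mode='full')
    lag = np.argmax(corr) - (len(tx) - 1)   # delay in samples
    return lag / fs * c

# Demo: a +/-1 code arrives 480 samples (10 ms at 48 kHz) after emission.
rng = np.random.default_rng(1)
fs = 48000
tx = rng.choice([-1.0, 1.0], size=256)
rx = 0.1 * rng.normal(size=2000)
rx[480:736] += tx
r = estimate_range(tx, rx, fs)
```

This is exactly the kind of estimate that non-line-of-sight propagation corrupts: the first strong correlation peak then corresponds to a reflected, longer path.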

Journal ArticleDOI
TL;DR: Globally convergent observers are designed for a class of systems with monotonic nonlinearities and the observer is combined with control laws that ensure input-to-state stability with respect to the observer error.

Journal ArticleDOI
TL;DR: The theory of latency-insensitive design is presented as the foundation of a new correct-by-construction methodology for designing complex systems, such as large digital integrated circuits in deep-submicrometer technologies, by assembling intellectual property components.
Abstract: The theory of latency-insensitive design is presented as the foundation of a new correct-by-construction methodology to design complex systems by assembling intellectual property components. Latency-insensitive designs are synchronous distributed systems and are realized by composing functional modules that exchange data on communication channels according to an appropriate protocol. The protocol works on the assumption that the modules are stallable, a weak condition to ask them to obey. The goal of the protocol is to guarantee that latency-insensitive designs composed of functionally correct modules behave correctly independently of the channel latencies. This allows us to increase the robustness of a design implementation because any delay variations of a channel can be "recovered" by changing the channel latency while the overall system functionality remains unaffected. As a consequence, an important application of the proposed theory is represented by the latency-insensitive methodology to design large digital integrated circuits by using deep submicrometer technologies.

Proceedings ArticleDOI
01 Dec 2001
TL;DR: A robust approach for super-resolution is presented, which is especially valuable in the presence of outliers, since super-resolution methods are very sensitive to such errors.
Abstract: A robust approach for super-resolution is presented, which is especially valuable in the presence of outliers. Such outliers may be due to motion errors, inaccurate blur models, noise, moving objects, motion blur etc. This robustness is needed since super-resolution methods are very sensitive to such errors. A robust median estimator is combined in an iterative process to achieve a super-resolution algorithm. This process can increase resolution even in regions with outliers, where other super-resolution methods actually degrade the image.
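The median estimator's role can be sketched in a deliberately simplified setting (registered, same-resolution frames, so the resolution-increase step is omitted): updating the estimate with the pixelwise median of per-frame residuals rejects an outlier frame that would bias a mean-based fusion. The scene and outlier below are synthetic:

```python
import numpy as np

def median_error_step(x, frames, step=1.0):
    """One iteration: update the estimate with the pixelwise MEDIAN of
    the per-frame residuals rather than their mean, so frames corrupted
    by outliers (moving objects, bad registration) are rejected."""
    residuals = np.stack([f - x for f in frames])
    return x + step * np.median(residuals, axis=0)

rng = np.random.default_rng(2)
scene = np.ones((16, 16))
frames = [scene + 0.01 * rng.normal(size=scene.shape) for _ in range(4)]
frames.append(np.zeros_like(scene))      # outlier frame (e.g. an occluder)
x0 = np.mean(frames, axis=0)             # mean fusion: biased by the outlier
x1 = median_error_step(x0, frames)       # median step removes the bias
```

With step=1 this update moves the estimate to the pixelwise median of the frames, which ignores the single corrupted frame entirely.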

Journal ArticleDOI
TL;DR: A systematic procedure of fuzzy control system design consisting of fuzzy model construction, rule reduction, and robust compensation for nonlinear systems is presented; the decay-rate controller design guarantees robust stability against the model uncertainties.
Abstract: This paper presents a systematic procedure of fuzzy control system design that consists of fuzzy model construction, rule reduction, and robust compensation for nonlinear systems. The model construction part replaces the nonlinear dynamics of a system with a generalized form of Takagi-Sugeno fuzzy systems, which is newly developed by us. The generalized form has a decomposed structure for each element of A/sub i/ and B/sub i/ matrices in consequent parts. The key feature of this structure is that it is suitable for constructing IF-THEN rules and reducing the number of IF-THEN rules. The rule reduction part provides a successive procedure to reduce the number of IF-THEN rules. Furthermore, we convert the reduction error between reduced fuzzy models and a system to model uncertainties of reduced fuzzy models. The robust compensation part achieves the decay rate controller design guaranteeing robust stability for the model uncertainties. Finally, two examples demonstrate the utility of the systematic procedure developed.
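The Takagi-Sugeno structure described above blends local linear models through normalized membership grades. A minimal sketch of the blended dynamics; the A_i, B_i matrices and membership functions below are illustrative, not taken from the paper:

```python
import numpy as np

# Two local linear models (consequent parts); illustrative values.
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
A2 = np.array([[0.0, 1.0], [-4.0, -0.5]])
B1 = B2 = np.array([[0.0], [1.0]])

def memberships(x1, lim=2.0):
    """Normalized membership grades with h1 + h2 = 1, based on |x1|."""
    h1 = max(0.0, 1.0 - abs(x1) / lim)
    return h1, 1.0 - h1

def ts_dynamics(x, u):
    """Blended T-S dynamics: x_dot = sum_i h_i(z) (A_i x + B_i u)."""
    h1, h2 = memberships(x[0, 0])
    A = h1 * A1 + h2 * A2
    B = h1 * B1 + h2 * B2
    return A @ x + B @ u

# At x1 = 0 only rule 1 fires, so the blend reduces to A1 x + B1 u.
xdot = ts_dynamics(np.array([[0.0], [1.0]]), np.array([[0.0]]))
```

Rule reduction then amounts to merging or dropping rules whose removal changes this blend little, with the residual treated as model uncertainty.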

Journal ArticleDOI
TL;DR: A global analysis of the structure of a conductance-based model neuron finds correlates of this dual robustness and sensitivity, implying that neuromodulators that alter a sensitive set of conductances will have powerful, and possibly state-dependent, effects.
Abstract: The electrical characteristics of many neurons are remarkably robust in the face of changing internal and external conditions. At the same time, neurons can be highly sensitive to neuromodulators. We find correlates of this dual robustness and sensitivity in a global analysis of the structure of a conductance-based model neuron. We vary the maximal conductance parameters of the model neuron and, for each set of parameters tested, characterize the activity pattern generated by the cell as silent, tonically firing, or bursting. Within the parameter space of the five maximal conductances of the model, we find directions, representing concerted changes in multiple conductances, along which the basic pattern of neural activity does not change. In other directions, relatively small concurrent changes in a few conductances can induce transitions between these activity patterns. The global structure of the conductance-space maps implies that neuromodulators that alter a sensitive set of conductances will have powerful, and possibly state-dependent, effects. Other modulators that may have no direct impact on the activity of the neuron may nevertheless change the effects of such direct modulators via this state dependence. Some of the results and predictions arising from the model studies are replicated and verified in recordings of stomatogastric ganglion neurons using the dynamic clamp.

Journal ArticleDOI
TL;DR: This article investigates the relationship between software complexity, reliability, and development resources and presents a forward-recovery approach based on the idea of using simplicity to control complexity as a way to improve the robustness of complex software systems.
Abstract: Improving the reliability and availability of increasingly complex software is a serious challenge, especially in efforts to protect society's critical functions. Is it true that building diverse, redundant systems provides robustness? This article investigates the relationship between software complexity, reliability, and development resources. It also presents a forward-recovery approach based on the idea of using simplicity to control complexity as a way to improve the robustness of complex software systems.

01 Jan 2001
TL;DR: In this paper, the authors review the properties of estimators for parameters, standard errors and model fit under conditions of (non)normality, emphasizing that both model and sample data characteristics affect the statistical behaviour of these estimators.
Abstract: Some robustness questions in structural equation modeling (SEM) are introduced. Factors that affect the occurrence of nonconvergence and improper solutions are reviewed in detail. Recent research on the behaviour of estimators for parameters, standard errors and model fit, under conditions of (non)normality, is summarized. It is emphasized that both model and sample data characteristics affect the statistical behaviour of these estimators. This knowledge may be used to set guidelines for a combined choice of sample size and estimation method. It is concluded that for large models, under a variety of nonnormal conditions, (robust) maximum likelihood estimators have relatively good statistical properties compared to other estimators (GLS, ERLS, ADF or WLS). The cumulative theoretical knowledge about robust (asymptotic) estimators and corrective statistics and the availability of practical guidelines from robustness research together, may enhance statistical practice in SEM and hence lead to more sensible and solid applied research.

Journal ArticleDOI
TL;DR: The distinguishing feature of the new controller architecture is that it shows structurally how the controller design for performance and robustness may be done separately, which has the potential to overcome the conflict between performance and robustness in the traditional feedback framework.
Abstract: We propose a new feedback controller architecture. The distinguishing feature of our new controller architecture is that it shows structurally how the controller design for performance and robustness may be done separately, which has the potential to overcome the conflict between performance and robustness in the traditional feedback framework. The controller architecture includes two parts: one part for performance and the other part for robustness. The controller architecture works in such a way that the feedback control system can be solely controlled by the performance controller when there are no model uncertainties and external disturbances, and the robustification controller only becomes active when there are model uncertainties or external disturbances.

Proceedings ArticleDOI
01 Aug 2001
TL;DR: A practical multi-candidate election scheme that guarantees privacy of voters, public verifiability, and robustness against a coalition of malicious authorities is described, based on the Paillier cryptosystem and on some related zero-knowledge proof techniques.
Abstract: The aim of electronic voting schemes is to provide a set of protocols that allow voters to cast ballots while a group of authorities collect the votes and output the final tally. In this paper we describe a practical multi-candidate election scheme that guarantees privacy of voters, public verifiability, and robustness against a coalition of malicious authorities. Furthermore, we address the problem of receipt-freeness and incoercibility of voters. Our new scheme is based on the Paillier cryptosystem and on some related zero-knowledge proof techniques. The voting schemes are very practical and can be efficiently implemented in a real system.
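The additive homomorphism of the Paillier cryptosystem is what lets the authorities tally without decrypting individual ballots: multiplying ciphertexts adds the underlying votes. A toy sketch with insecurely small primes (illustration only; a real system needs large primes, threshold decryption, and the zero-knowledge proofs mentioned above):

```python
import math
import random

# Toy parameters -- far too small to be secure; illustration only.
p, q = 47, 59
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                       # standard generator choice
mu = pow(lam, -1, n)            # valid since L(g^lam mod n^2) = lam mod n

def L(u):
    return (u - 1) // n

def encrypt(m, rng):
    """Paillier encryption: c = g^m * r^n mod n^2, random r in Z_n^*."""
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic tally: the product of ballot ciphertexts decrypts to the sum.
rng = random.Random(0)
votes = [1, 0, 1, 1, 0, 1]
tally_ct = 1
for v in votes:
    tally_ct = (tally_ct * encrypt(v, rng)) % n2
```

Only the final tally ciphertext ever needs decryption, which is what enables vote privacy.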

Proceedings ArticleDOI
08 Dec 2001
TL;DR: Experiments on some challenging monocular sequences show that robust cost modelling, joint and self-intersection constraints, and informed sampling are all essential for reliable monocular 3D body tracking.
Abstract: We present a method for recovering 3D human body motion from monocular video sequences using robust image matching, joint limits and non-self-intersection constraints, and a new sample-and-refine search strategy guided by rescaled cost-function covariances. Monocular 3D body tracking is challenging: for reliable tracking at least 30 joint parameters need to be estimated, subject to highly nonlinear physical constraints; the problem is chronically ill conditioned as about 1/3 of the d.o.f. (the depth-related ones) are almost unobservable in any given monocular image; and matching an imperfect, highly flexible self-occluding model to cluttered image features is intrinsically hard. To reduce correspondence ambiguities we use a carefully designed robust matching-cost metric that combines robust optical flow, edge energy, and motion boundaries. Even so, the ambiguity, nonlinearity and non-observability make the parameter-space cost surface multi-modal, unpredictable and ill conditioned, so minimizing it is difficult. We discuss the limitations of CONDENSATION-like samplers, and introduce a novel hybrid search algorithm that combines inflated-covariance-scaled sampling and continuous optimization subject to physical constraints. Experiments on some challenging monocular sequences show that robust cost modelling, joint and self-intersection constraints, and informed sampling are all essential for reliable monocular 3D body tracking.

Journal ArticleDOI
TL;DR: In this article, a new adaptive robust filtering method based on the robust M (maximum-likelihood-type) estimation is proposed, which can not only resist the influence of outlying kinematic model errors, but also control the effects of measurement outliers.
Abstract: The Kalman filter has been applied extensively in the area of kinematic geodetic positioning. The reliability of the linear filtering results, however, is reduced when the kinematic model noise is not accurately modeled in filtering or the measurement noises at any measurement epoch are not normally distributed. A new adaptively robust filter is proposed based on the robust M (maximum-likelihood-type) estimation. It consists in weighting the influence of the updated parameters in accordance with the magnitude of discrepancy between the updated parameters and the robust estimates obtained from the kinematic measurements, and in weighting individual measurements at each discrete epoch. The new procedure is different from functional model-error compensation; it changes the covariance matrix or, equivalently, the weight matrix of the predicted parameters to cover the model errors. A general estimator for an adaptively robust filter is developed, which includes the estimators of the classical Kalman filter, adaptive Kalman filter, robust filter, sequential least-squares adjustment and robust sequential adjustment. The procedure can not only resist the influence of outlying kinematic model errors, but also control the effects of measurement outliers. In addition to the robustness, the feasibility of implementing the new filter is achieved by using the equivalent weights of the measurements and the predicted state parameters. A numerical example is given to demonstrate the ideas involved.
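A minimal 1-D sketch of the equivalent-weight idea: scale the Kalman gain with a Huber-type weight when the standardized innovation is large, so a gross measurement outlier is down-weighted instead of absorbed. Constants are illustrative, and the paper's scheme additionally reweights the predicted state covariance:

```python
import math

def kalman_step(x, P, z, Q=0.01, R=1.0, robust=True, k=1.5):
    """One predict/update cycle for a 1-D random-walk state. With
    robust=True the gain is scaled by a Huber-type equivalent weight,
    down-weighting measurements with large standardized innovations."""
    P = P + Q                       # predict
    S = P + R                       # innovation variance
    v = z - x                       # innovation
    w = 1.0
    if robust:
        t = abs(v) / math.sqrt(S)
        if t > k:
            w = k / t               # equivalent weight for an outlier
    K = w * P / S
    return x + K * v, (1.0 - K) * P

# Truth is 0; the third measurement is a gross outlier.
zs = [0.1, -0.05, 100.0, 0.02]
xr = xn = 0.0
Pr = Pn = 1.0
for z in zs:
    xr, Pr = kalman_step(xr, Pr, z, robust=True)
    xn, Pn = kalman_step(xn, Pn, z, robust=False)
```

After the sequence, the robust estimate stays near the truth while the classical filter is dragged far off by the single outlier.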

Journal ArticleDOI
TL;DR: A rigorous approach to optimizing the performance and choosing the correct parameter settings is followed by developing a statistical model for the watermarking algorithm, which can be used for maximizing the robustness against re-encoding and for selecting adequate error-correcting codes for the label bit string.
Abstract: This paper proposes the differential energy watermarking (DEW) algorithm for JPEG/MPEG streams. The DEW algorithm embeds label bits by selectively discarding high frequency discrete cosine transform (DCT) coefficients in certain image regions. The performance of the proposed watermarking algorithm is evaluated by the robustness of the watermark, the size of the watermark, and the visual degradation the watermark introduces. These performance factors are controlled by three parameters, namely the maximal coarseness of the quantizer used in pre-encoding, the number of DCT blocks used to embed a single watermark bit, and the lowest DCT coefficient that we permit to be discarded. We follow a rigorous approach to optimizing the performance and choosing the correct parameter settings by developing a statistical model for the watermarking algorithm. Using this model, we can derive the probability that a label bit cannot be embedded. The resulting model can be used, for instance, for maximizing the robustness against re-encoding and for selecting adequate error correcting codes for the label bit string.
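The energy-difference principle behind DEW can be sketched on a single pair of 8x8 blocks: discard the high-frequency DCT coefficients of one block so that the sign of the energy difference encodes the bit. The cutoff and the one-block-per-region pairing are illustrative; the real algorithm operates on groups of JPEG/MPEG blocks with the three parameters described above:

```python
import numpy as np
from scipy.fft import dctn, idctn

def hf_energy(block, cutoff=4):
    """Energy of DCT coefficients with index sum >= cutoff (8x8 block).
    Returns (energy, coefficients, high-frequency mask)."""
    d = dctn(block, norm='ortho')
    mask = np.add.outer(np.arange(8), np.arange(8)) >= cutoff
    return np.sum(d[mask] ** 2), d, mask

def embed_bit(blockA, blockB, bit, cutoff=4):
    """Zero the high-frequency coefficients of one block so that
    E_A > E_B encodes bit 1 and E_A < E_B encodes bit 0."""
    _, dA, m = hf_energy(blockA, cutoff)
    _, dB, _ = hf_energy(blockB, cutoff)
    if bit:
        dB[m] = 0.0
    else:
        dA[m] = 0.0
    return idctn(dA, norm='ortho'), idctn(dB, norm='ortho')

def extract_bit(blockA, blockB, cutoff=4):
    return hf_energy(blockA, cutoff)[0] > hf_energy(blockB, cutoff)[0]

# Demo on random blocks (stand-ins for image data).
rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8))
B = rng.normal(size=(8, 8))
A1, B1 = embed_bit(A, B, 1)
A0, B0 = embed_bit(A, B, 0)
```

The statistical model in the paper estimates how often this energy difference survives re-encoding, i.e. the label-bit error probability.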

Proceedings ArticleDOI
01 Dec 2001
TL;DR: This paper proposes biased discriminant analysis and transforms specifically designed to address the asymmetry between the positive and negative examples, and to trade off generalization for robustness under a small training sample.
Abstract: All positive examples are alike; each negative example is negative in its own way. During interactive multimedia information retrieval, the number of training samples fed-back by the user is usually small; furthermore, they are not representative for the true distributions-especially the negative examples. Adding to the difficulties is the nonlinearity in real-world distributions. Existing solutions fail to address these problems in a principled way. This paper proposes biased discriminant analysis and transforms specifically designed to address the asymmetry between the positive and negative examples, and to trade off generalization for robustness under a small training sample. The kernel version, namely "BiasMap ", is derived to facilitate nonlinear biased discrimination. Extensive experiments are carried out for performance evaluation as compared to the state-of-the-art methods.
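The asymmetry described above ("all positives alike, each negative negative in its own way") can be sketched as a generalized eigenproblem: find directions that maximize the scatter of negatives about the positive centroid relative to the within-positive scatter. The synthetic data below is illustrative, and this linear sketch omits the kernelized "BiasMap" version:

```python
import numpy as np
from scipy.linalg import eigh

def biased_discriminant(pos, neg, reg=1e-6):
    """Directions maximizing negative-example scatter about the POSITIVE
    centroid, relative to within-positive scatter (regularized)."""
    mx = pos.mean(axis=0)
    Sy = (neg - mx).T @ (neg - mx)          # negatives w.r.t. positive mean
    Sx = (pos - mx).T @ (pos - mx) + reg * np.eye(pos.shape[1])
    evals, evecs = eigh(Sy, Sx)             # generalized eigenproblem
    return evecs[:, ::-1], evals[::-1]      # descending eigenvalue order

# Tight positive cluster; negatives spread only along the x-axis.
rng = np.random.default_rng(4)
pos = rng.normal(scale=0.1, size=(100, 2))
neg = np.column_stack([rng.choice([-5.0, 5.0], 50) + 0.1 * rng.normal(size=50),
                       0.1 * rng.normal(size=50)])
W, lams = biased_discriminant(pos, neg)
```

The leading direction recovers the axis along which negatives scatter away from the positives, which is exactly the biased discriminating direction.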

Proceedings ArticleDOI
21 May 2001
TL;DR: This work considers algorithms that evaluate and synthesize controllers under distributions of Markovian models and demonstrates the presented learning control algorithm by flying an autonomous helicopter and shows that the controller learned is robust and delivers good performance in this real-world domain.
Abstract: Many control problems in the robotics field can be cast as partially observed Markovian decision problems (POMDPs), an optimal control formalism. Finding optimal solutions to such problems in general, however, is known to be intractable. It has often been observed that in practice, simple structured controllers suffice for good sub-optimal control, and recent research in the artificial intelligence community has focused on policy search methods as techniques for finding sub-optimal controllers when such structured controllers do exist. Traditional model-based reinforcement learning algorithms make a certainty equivalence assumption on their learned models and calculate optimal policies for a maximum-likelihood Markovian model. We consider algorithms that evaluate and synthesize controllers under distributions of Markovian models. Previous work has demonstrated that algorithms that maximize mean reward with respect to model uncertainty lead to safer and more robust controllers. We briefly consider other performance criteria that emphasize robustness and exploration in the search for controllers, and note the relation with experiment design and active learning. To validate the power of the approach on a robotic application, we demonstrate the presented learning control algorithm by flying an autonomous helicopter. We show that the controller learned is robust and delivers good performance in this real-world domain.

Proceedings ArticleDOI
25 Aug 2001
TL;DR: This paper examines the robustness of current longitudinal controller designs to communication delays and finds that string stability is seriously compromised by communication delays introduced by the network when the controllers are triggered by the receipt of either the lead vehicle information or the preceding vehicle information.
Abstract: The throughput of vehicles on highways can be greatly increased by forming vehicle platoons. The control law that maintains stable operation of a platoon is dependent on the lead and preceding vehicle's position, velocity and acceleration profiles. These profiles guarantee string stability of a platoon and are transmitted via wireless communication networks. Communication networks generally introduce delays and drop packets. However, these communication faults are not typically taken into account in controller designs. In this paper, we examine the robustness of current longitudinal controller designs to communication delays. The results show that string stability is seriously compromised by communication delays introduced by the network when the controllers are triggered by the receipt of either the lead vehicle information or the preceding vehicle information. We find that when all the vehicles are synchronized to update their controllers at the same time, string stability can be maintained if the delay in preceding vehicle information is small. An upper bound on the preceding vehicle information delay is derived through a simple partial fraction expansion approach. We also point out a potential problem due to the clock jitters associated with the synchronization among vehicles.

Proceedings Article
01 Jan 2001
TL;DR: In this paper, two subthreshold logic families are proposed: 1) variable threshold voltage subthreshold CMOS (VT-Sub-CMOS) and 2) subthreshold dynamic threshold voltage MOS (Sub-DTMOS) logic.
Abstract: Digital subthreshold logic circuits can be used for applications in the ultra-low power end of the design spectrum, where performance is of secondary importance. In this paper, we propose two different subthreshold logic families: 1) variable threshold voltage subthreshold CMOS (VT-Sub-CMOS) and 2) subthreshold dynamic threshold voltage MOS (Sub-DTMOS) logic. Both logic families have power consumption comparable to regular subthreshold CMOS logic (which is up to six orders of magnitude lower than that of normal strong-inversion circuits), with superior robustness and tolerance to process and temperature variations compared to regular subthreshold CMOS logic.