
Showing papers in "IEEE Signal Processing Magazine in 2008"


Journal ArticleDOI
TL;DR: The theory of compressive sampling, also known as compressed sensing or CS, is surveyed, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
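
As a concrete toy instance of the CS claim above (our construction, not the article's own examples, which center on l1 minimization), the sketch below recovers a K-sparse signal of length N exactly from M << N random measurements using orthogonal matching pursuit. All sizes are illustrative.

```python
import numpy as np

# Toy compressive sampling demo: recover a K-sparse x from M << N
# random linear measurements via orthogonal matching pursuit (OMP).
rng = np.random.default_rng(1)
N, M, K = 256, 64, 5

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix
y = Phi @ x                                      # M linear measurements

# OMP: pick the column most correlated with the residual, then
# re-fit the coefficients on the support chosen so far.
residual, support = y.copy(), []
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(N)
x_hat[support] = coef
print(np.allclose(x, x_hat))   # exact recovery, with high probability
```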

9,686 citations


Journal ArticleDOI
TL;DR: A new camera architecture that fuses a digital micromirror device with the new mathematical theory and algorithms of compressive sampling is presented; the resulting cameras can operate efficiently across a broader spectral range than conventional silicon-based cameras.
Abstract: In this article, the authors present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a broader spectral range than conventional silicon-based cameras. The approach fuses a new camera architecture based on a digital micromirror device with the new mathematical theory and algorithms of compressive sampling.

3,316 citations


Journal ArticleDOI
TL;DR: The authors emphasize an intuitive understanding of CS by describing CS reconstruction as a process of interference cancellation, with an additional emphasis on understanding the driving factors in applications.
Abstract: This article reviews the requirements for successful compressed sensing (CS), describes their natural fit to MRI, and gives examples of four interesting applications of CS in MRI. The authors emphasize an intuitive understanding of CS by describing CS reconstruction as a process of interference cancellation. There is also an emphasis on understanding the driving factors in applications, including limitations imposed by MRI hardware, by the characteristics of different types of images, and by clinical concerns.

2,134 citations


Journal ArticleDOI
TL;DR: It is shown that with noncoherent processing, a target's RCS spatial variations can be exploited to obtain a diversity gain for target detection and for estimation of various parameters, such as angle of arrival and Doppler.
Abstract: MIMO (multiple-input multiple-output) radar refers to an architecture that employs multiple, spatially distributed transmitters and receivers. While, in a general sense, MIMO radar can be viewed as a type of multistatic radar, the separate nomenclature suggests unique features that set MIMO radar apart from the multistatic radar literature and that have a close relation to MIMO communications. This article reviews some recent work on MIMO radar with widely separated antennas. Widely separated transmit/receive antennas capture the spatial diversity of the target's radar cross section (RCS). Unique features of MIMO radar are explained and illustrated by examples. It is shown that with noncoherent processing, a target's RCS spatial variations can be exploited to obtain a diversity gain for target detection and for estimation of various parameters, such as angle of arrival and Doppler. For target location, it is shown that coherent processing can provide a resolution far exceeding that supported by the radar's waveform.

1,927 citations


Journal ArticleDOI
TL;DR: The theoretical background of the common spatial pattern (CSP) algorithm, a popular method in brain-computer interface (BCI) research, is elucidated, and tricks of the trade for achieving a powerful CSP performance are revealed.
Abstract: Due to volume conduction, multichannel electroencephalogram (EEG) recordings give a rather blurred image of brain activity. Therefore spatial filters are extremely useful in single-trial analysis in order to improve the signal-to-noise ratio. There are powerful methods from machine learning and signal processing that permit the optimization of spatio-temporal filters for each subject in a data-dependent fashion, beyond the fixed filters based on the sensor geometry, e.g., Laplacians. Here we elucidate the theoretical background of the common spatial pattern (CSP) algorithm, a popular method in brain-computer interface (BCI) research. Apart from reviewing several variants of the basic algorithm, we reveal tricks of the trade for achieving a powerful CSP performance, briefly elaborate on theoretical aspects of CSP, and demonstrate the application of CSP-type preprocessing in our studies of the Berlin BCI (BBCI) project.
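
For readers unfamiliar with the CSP mechanics, the following is a minimal sketch of the basic computation only (not the authors' full BBCI pipeline): spatial filters come from the generalized eigenproblem C1 w = lambda (C1 + C2) w on the two class-conditional covariances, and log-variance of the filtered signals serves as features. Shapes and data are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """X1, X2: trial arrays of shape (trials, channels, samples)."""
    def avg_cov(X):
        # trace-normalized spatial covariance, averaged over trials
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    _, evecs = eigh(C1, C1 + C2)        # eigenvalues in ascending order
    # extreme eigenvalues give the filters whose output variance
    # best discriminates the two classes
    W = np.hstack([evecs[:, :n_pairs], evecs[:, -n_pairs:]])
    return W.T                          # (2 * n_pairs, channels)

rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 16, 200))        # toy "class 1" EEG epochs
X2 = 2.0 * rng.standard_normal((30, 16, 200))  # toy "class 2" EEG epochs
W = csp_filters(X1, X2)
feats = np.log(np.var(W @ X1[0], axis=1))      # standard log-variance features
```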

1,799 citations


Journal ArticleDOI
TL;DR: This article introduces compressive sampling and recovery using convex programming; image compression algorithms convert high-resolution images into relatively small bit streams, in effect turning a large digital data set into a substantially smaller one.
Abstract: Image compression algorithms convert high-resolution images into relatively small bit streams, in effect turning a large digital data set into a substantially smaller one. This article introduces compressive sampling and recovery using convex programming.

1,025 citations


Journal ArticleDOI
TL;DR: It was from here that "Bayesian" ideas first spread through the mathematical world, as Bayes's own article was ignored until 1780 and played no important role in scientific debate until the 20th century.
Abstract: The influence of this work was immense. It was from here that "Bayesian" ideas first spread through the mathematical world, as Bayes's own article was ignored until 1780 and played no important role in scientific debate until the 20th century. It was also this article of Laplace's that introduced the mathematical techniques for the asymptotic analysis of posterior distributions that are still employed today. And it was here that the earliest example of optimum estimation can be found: the derivation and characterization of an estimator that minimized a particular measure of posterior expected loss. After more than two centuries, we mathematicians and statisticians can not only recognize our roots in this masterpiece of our science, we can still learn from it.

774 citations


Journal ArticleDOI
TL;DR: This article describes a very different approach to the decentralized compression of networked data, considering a particularly salient aspect of this struggle that revolves around large-scale distributed sources of data and their storage, transmission, and retrieval.
Abstract: This article describes a very different approach to the decentralized compression of networked data. We consider a particularly salient aspect of this struggle, one that revolves around large-scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well-developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems.

575 citations


Journal ArticleDOI
TL;DR: This article highlights some of the recent information theoretic limits, models, and design of these promising networks of intelligent, adaptive wireless devices called cognitive radios.
Abstract: In recent years, the development of intelligent, adaptive wireless devices called cognitive radios, together with the introduction of secondary spectrum licensing, has led to a new paradigm in communications: cognitive networks. Cognitive networks are wireless networks that consist of several types of users: often a primary user (the primary license-holder of a spectrum band) and secondary users (cognitive radios). These cognitive users employ their cognitive abilities to communicate without harming the primary users. The study of cognitive networks is relatively new and many questions are yet to be answered. In this article we highlight some of the recent information theoretic limits, models, and design of these promising networks.

502 citations


Journal ArticleDOI
TL;DR: It is shown that sampling at the rate of innovation is possible, in some sense applying Occam's razor to the sampling of sparse signals, which should lead to further research in sparse sampling, as well as new applications.
Abstract: Sparse sampling of continuous-time sparse signals is addressed. In particular, it is shown that sampling at the rate of innovation is possible, in some sense applying Occam's razor to the sampling of sparse signals. The noisy case is analyzed and solved, with proposed methods reaching the optimal performance given by the Cramer-Rao bounds. Finally, a number of applications are discussed in which sparsity can be taken advantage of. The comprehensive coverage given in this article should lead to further research in sparse sampling, as well as new applications. One main application of the theory presented in this article is ultra-wideband (UWB) communications.
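
A core tool behind sampling at the rate of innovation is the annihilating filter. The toy sketch below is our own minimal rendering for the noiseless case, not the article's treatment: K Diracs on [0, tau) are recovered from 2K+1 Fourier series coefficients via a Prony-style nullspace computation.

```python
import numpy as np
from scipy.linalg import toeplitz

# Noiseless toy: recover K Diracs (locations t_k, amplitudes a_k)
# from Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k/tau).
rng = np.random.default_rng(0)
K, tau = 3, 1.0
t_true = np.sort(rng.uniform(0, tau, K))
a_true = rng.uniform(1, 2, K)

m = np.arange(-K, K + 1)                       # 2K+1 coefficients
u = np.exp(-2j * np.pi * t_true / tau)
X = (u[None, :] ** m[:, None]) @ a_true        # X[m] = sum_k a_k u_k^m

# Annihilating filter h (length K+1): sum_l h[l] X[m-l] = 0 for all m.
# Stack those equations and take a nullspace vector via the SVD.
A = toeplitz(X[K:], X[K::-1])
h = np.linalg.svd(A)[2][-1].conj()

# Roots of h are the u_k, which encode the Dirac locations t_k.
t_est = np.sort(-np.angle(np.roots(h)) * tau / (2 * np.pi) % tau)

# Amplitudes follow from a Vandermonde least-squares fit.
V = np.exp(-2j * np.pi * np.outer(m, t_est) / tau)
a_est = np.linalg.lstsq(V, X, rcond=None)[0].real

print(np.allclose(t_true, t_est, atol=1e-6), np.round(a_est, 3))
```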

481 citations


Journal ArticleDOI
TL;DR: This paper presents a promising alternative approach to ISI mitigation: the use of single-carrier (SC) modulation combined with frequency-domain equalization (FDE).
Abstract: This paper presents a promising alternative approach to intersymbol interference (ISI) mitigation: the use of single-carrier (SC) modulation combined with frequency-domain equalization (FDE).
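
A minimal sketch of the SC-FDE idea, under toy parameters of our choosing: a cyclic prefix turns the ISI channel into a circular convolution, which a one-tap MMSE equalizer inverts per frequency bin.

```python
import numpy as np

rng = np.random.default_rng(0)
N, snr = 64, 100.0                       # block length, linear SNR
h = np.array([1.0, 0.5, 0.2])            # toy multipath (ISI) channel
x = rng.choice([-1.0, 1.0], N)           # one block of BPSK symbols

# Cyclic prefix of length len(h)-1 makes the channel act circularly.
tx = np.concatenate([x[-len(h) + 1:], x])
rx = np.convolve(tx, h)[len(h) - 1:len(h) - 1 + N]   # strip prefix/tail
rx += rng.standard_normal(N) * np.sqrt(1 / snr)      # receiver noise

# One-tap MMSE frequency-domain equalizer, then back to time domain.
H = np.fft.fft(h, N)
X_eq = np.fft.fft(rx) * H.conj() / (np.abs(H) ** 2 + 1 / snr)
x_hat = np.sign(np.fft.ifft(X_eq).real)
print((x_hat == x).mean())               # symbol accuracy
```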

Journal ArticleDOI
TL;DR: This article's goal is to provide an in-depth understanding of the principles of FSMC modeling of fading channels with its applications in wireless communication systems, and to introduce both FSMC models and flat-fading channels.
Abstract: This article's goal is to provide an in-depth understanding of the principles of finite-state Markov channel (FSMC) modeling of fading channels and its applications in wireless communication systems. While the emphasis is on frequency-nonselective or flat-fading channels, this understanding will be useful for future generalizations of FSMC models for frequency-selective fading channels. The target audience of this article includes both theory- and practice-oriented researchers who would like to design accurate channel models for evaluating the performance of wireless communication systems in the physical or media access control layers, or those who would like to develop more efficient and reliable transceivers that take advantage of the inherent memory in fading channels. Both FSMC models and flat-fading channels will be formally introduced. FSMC models are particularly well suited to representing and estimating the relatively fast flat-fading channel gain in each subcarrier.
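
As a minimal concrete FSMC (not one of the article's calibrated models), the sketch below simulates the classic two-state Gilbert-Elliott channel; sojourns in the bad state produce the bursty errors that memoryless channel models miss. All probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05],     # good -> good/bad transition probs
              [0.10, 0.90]])    # bad  -> good/bad transition probs
p_err = np.array([1e-4, 1e-1])  # bit error probability in each state

def simulate(n_bits, state=0):
    errors = np.empty(n_bits, dtype=bool)
    for i in range(n_bits):
        errors[i] = rng.random() < p_err[state]   # error in current state
        state = rng.choice(2, p=P[state])         # Markov state transition
    return errors

e = simulate(100_000)
print(e.mean())  # average BER; error bursts come from bad-state sojourns
```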

Journal ArticleDOI
TL;DR: It is argued that collaborative spectrum sensing can make use of signal processing gains at the physical layer to mitigate strict requirements on the radio frequency front-end and to exploit spatial diversity through network cooperation to significantly improve sensing reliability.
Abstract: Cognitive radio (CR) has recently emerged as a promising technology to revolutionize spectrum utilization in wireless communications. In a CR network, secondary users continuously sense the spectral environment and adapt transmission parameters to opportunistically use the available spectrum. A fundamental problem for CRs is spectrum sensing; secondary users need to reliably detect weak primary signals of possibly different types over a targeted wide frequency band in order to identify spectral holes for opportunistic communications. Conceptually and practically, there is growing awareness that collaboration among several CRs can achieve considerable performance gains. This article provides an overview of the challenges and possible solutions for the design of collaborative wideband sensing in CR networks. It is argued that collaborative spectrum sensing can make use of signal processing gains at the physical layer to mitigate strict requirements on the radio frequency front-end and to exploit spatial diversity through network cooperation to significantly improve sensing reliability.
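
To make the collaboration gain concrete, here is a toy simulation of our own construction (illustrative parameters, false-alarm control omitted for brevity): each CR runs an energy detector and a fusion center combines the hard decisions with an OR rule, lifting the detection probability well above any single user's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cr, n_samp, snr = 5, 200, 0.3                # users, samples, linear SNR
threshold = n_samp + 3 * np.sqrt(2 * n_samp)   # ~3 sigma above the H0 mean

def decisions(signal_present):
    y = rng.standard_normal((n_cr, n_samp))    # receiver noise
    if signal_present:                         # independent fading paths
        y += np.sqrt(snr) * rng.standard_normal((n_cr, n_samp))
    return (y ** 2).sum(axis=1) > threshold    # per-CR hard decisions

trials = [decisions(True) for _ in range(2000)]
print("single CR:", np.mean([d[0] for d in trials]))
print("OR fusion:", np.mean([d.any() for d in trials]))
```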

Journal ArticleDOI
TL;DR: This lecture note describes a technique known as locality-sensitive hashing (LSH) that allows one to quickly find similar entries in large databases using a novel and interesting class of algorithms that are known as randomized algorithms.
Abstract: This lecture note describes a technique known as locality-sensitive hashing (LSH) that allows one to quickly find similar entries in large databases. This approach belongs to a novel and interesting class of algorithms that are known as randomized algorithms. A randomized algorithm does not guarantee an exact answer but instead provides a high probability guarantee that it will return the correct answer or one close to it. By investing additional computational effort, the probability can be pushed as high as desired.
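
A minimal sketch of one standard LSH family (random-hyperplane hashing for cosine similarity; the lecture note's own examples may differ): each hash bit is the sign of a projection onto a random hyperplane, so nearby vectors share a signature with high probability and a query only scans its own bucket. Data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 128, 16
planes = rng.standard_normal((n_bits, dim))    # random hyperplanes

def lsh_key(v):
    # one sign bit per hyperplane -> a hashable bucket key
    return tuple((planes @ v > 0).astype(int))

db = rng.standard_normal((10_000, dim))
buckets = {}
for i, v in enumerate(db):
    buckets.setdefault(lsh_key(v), []).append(i)

query = db[42] + 0.01 * rng.standard_normal(dim)   # a near-duplicate
candidates = buckets.get(lsh_key(query), [])
print(42 in candidates)   # True with high probability; several
                          # independent tables push it higher
```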

Journal ArticleDOI
TL;DR: The approaches to brushwork analysis and artist identification developed by three research groups are described within the framework of this data set of 101 high-resolution gray-scale scans of paintings within the Van Gogh and Kröller-Müller museums.
Abstract: A survey of the literature reveals that image processing tools aimed at supplementing the art historian's toolbox are currently in the earliest stages of development. To jump-start the development of such methods, the Van Gogh and Kröller-Müller museums in The Netherlands agreed to make a data set of 101 high-resolution gray-scale scans of paintings within their collections available to groups of image processing researchers from several different universities. This article describes the approaches to brushwork analysis and artist identification developed by three research groups, within the framework of this data set.

Journal ArticleDOI
TL;DR: Through both theoretical and experimental results, it is shown that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal.
Abstract: Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.

Journal ArticleDOI
TL;DR: The history, recent advances, and challenges in distributed synchronization for distributed wireless systems are explored, and insight is provided on the open issues and available analytical tools that could inspire further research within the signal processing community.
Abstract: This article has explored the history, recent advances, and challenges in distributed synchronization for distributed wireless systems. It focuses on synchronization schemes based on the exchange of signals at the physical layer and the corresponding baseband processing, wherein analysis and design can be performed using known tools from signal processing. Emphasis has also been placed on the synergy between distributed synchronization and distributed estimation/detection problems. Finally, we have touched upon synchronization of nonperiodic (chaotic) signals. Overall, we hope to have conveyed the relevance of the subject and to have provided insight on the open issues and available analytical tools that could inspire further research within the signal processing community.

Journal ArticleDOI
TL;DR: Spoken language understanding and natural language understanding share the goal of obtaining a conceptual representation of natural language sentences and computational semantics performs a conceptualization of the world using computational processes for composing a meaning representation structure from available signs.
Abstract: Semantics deals with the organization of meanings and the relations between sensory signs or symbols and what they denote or mean. Computational semantics performs a conceptualization of the world using computational processes for composing a meaning representation structure from available signs and their features present, for example, in words and sentences. Spoken language understanding (SLU) is the interpretation of signs conveyed by a speech signal. SLU and natural language understanding (NLU) share the goal of obtaining a conceptual representation of natural language sentences. Specific to SLU is the fact that signs to be used for interpretation are coded into signals along with other information such as speaker identity. Furthermore, spoken sentences often do not follow the grammar of a language; they exhibit self-corrections, hesitations, repetitions, and other irregular phenomena. SLU systems contain an automatic speech recognition (ASR) component and must be robust to noise due to the spontaneous nature of spoken language and the errors introduced by ASR. Moreover, ASR components output a stream of words with no structure information like punctuation and sentence boundaries. Therefore, SLU systems cannot rely on such markers and must perform text segmentation and understanding at the same time.

Journal ArticleDOI
TL;DR: A comparative study of widely used ICA algorithms in the BCI community, conducted on simulated electroencephalography (EEG) data, shows that an appropriate selection of an ICA algorithm may significantly improve the capabilities of BCI systems.
Abstract: Several studies dealing with independent component analysis (ICA)-based brain-computer interface (BCI) systems have been reported. Most of them have only explored a limited number of ICA methods, mainly FastICA and INFOMAX. The aim of this article is to help the BCI community researchers, especially those who are not familiar with ICA techniques, to choose an appropriate ICA method. For this purpose, the concept of ICA is reviewed and different measures of statistical independence are reported. Then, the application of these measures is illustrated through a brief description of the widely used algorithms in the ICA community, namely SOBI, COM2, JADE, ICAR, FastICA, and INFOMAX. The implementation of these techniques in the BCI field is also explained. Finally, a comparative study of these algorithms, conducted on simulated electroencephalography (EEG) data, shows that an appropriate selection of an ICA algorithm may significantly improve the capabilities of BCI systems.
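
As a usage illustration for one of the compared algorithms, the sketch below separates two toy sources with scikit-learn's FastICA; the library choice, toy sources, and parameters are ours, not the article's benchmark setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent sources
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                         # mixing matrix
X = S @ A.T                                        # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)   # sources recovered up to scale/permutation
```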

Journal ArticleDOI
TL;DR: The changes in the brain that occur during different styles of meditation practice are examined as an instance of neuroplasticity: the brain changes that occur in response to experience.
Abstract: The term neuroplasticity is used to describe the brain changes that occur in response to experience. This article examines the changes in the brain that occur during different styles of meditation practice.

Journal ArticleDOI
TL;DR: Radio regulatory bodies are recognizing that the rigid spectrum assignment granting exclusive use to licensed services is highly inefficient, due to the high variability of the traffic statistics across time, space, and frequency, and this inefficiency is motivating a flurry of research activities in the engineering, economics, and regulation communities in an effort to find more efficient spectrum management policies.
Abstract: Radio regulatory bodies are recognizing that the rigid spectrum assignment granting exclusive use to licensed services is highly inefficient, due to the high variability of the traffic statistics across time, space, and frequency. Recent Federal Communications Commission (FCC) measurements show that, in fact, the spectrum usage is typically concentrated over certain portions of the spectrum, while a significant amount of the licensed bands (or idle slots in static time division multiple access (TDMA) systems with bursty traffic) remains unused or underutilized for 90% of the time [1]. It is not surprising, then, that this inefficiency is motivating a flurry of research activities in the engineering, economics, and regulation communities in an effort to find more efficient spectrum management policies.

Journal ArticleDOI
TL;DR: The main goal of this article is to provide an underlying foundation for MMI, MCE, and MPE/MWE at the objective function level, to facilitate the development of new parameter optimization techniques and to incorporate other pattern recognition concepts, e.g., discriminative margins [66], into the current discriminative learning paradigm.
Abstract: In this article, we studied the objective functions of MMI, MCE, and MPE/MWE for discriminative learning in sequential pattern recognition. We presented an approach that unifies the objective functions of MMI, MCE, and MPE/MWE in a common rational-function form of (25). The exact structure of the rational-function form for each discriminative criterion was derived and studied. While the rational-function form of MMI has been known in the past, we provided the theoretical proof that a similar rational-function form exists for the objective functions of MCE and MPE/MWE. Moreover, we showed that the rational-function forms for the objective functions of MMI, MCE, and MPE/MWE differ in the constant weighting factors C_DT(s_1, ..., s_R), and that these weighting factors depend only on the labeled sequence s_1, ..., s_R and are independent of the parameter set Λ to be optimized. The derived rational-function form for MMI, MCE, and MPE/MWE allows the GT/EBW-based parameter optimization framework to be applied directly in discriminative learning. In the past, the lack of an appropriate rational-function form was a difficulty for MCE and MPE/MWE, because without this form the GT/EBW-based parameter optimization framework cannot be directly applied. Based on the unified rational-function form, in a tutorial style, we derived the GT/EBW-based parameter optimization formulas for both discrete HMMs and CDHMMs in discriminative learning using the MMI, MCE, and MPE/MWE criteria. The unifying review provided in this article has been based upon a large number of earlier contributions that have been cited and discussed throughout the article. Here we provide a brief summary of such background work. Extension to large-scale speech recognition tasks was accomplished in the work of [59] and [60]. The dissertation of [47] further improved the MMI criterion to that of MPE/MWE. In a parallel vein, the work of [20] provided an alternative approach to that of [41], with an attempt to more rigorously provide a CDHMM model re-estimation formula that gives positive growth of the MMI objective function. A crucial error of this attempt was corrected in [2], establishing an existence proof of such positive growth. The main goal of this article is to provide an underlying foundation for MMI, MCE, and MPE/MWE at the objective function level, to facilitate the development of new parameter optimization techniques and to incorporate other pattern recognition concepts, e.g., discriminative margins [66], into the current discriminative learning paradigm.

Journal ArticleDOI
TL;DR: The issue of handling the errorful or incomplete output provided by ASR systems for spoken audio documents is focused on, with attention to the usage case where a user enters search terms into a search engine and is returned a collection of spoken document hits.
Abstract: Ever-increasing computing power and connectivity bandwidth, together with falling storage costs, are resulting in an overwhelming amount of data of various types being produced, exchanged, and stored. Consequently, information search and retrieval has emerged as a key application area. Text-based search is the most active area, with applications that range from Web and local network search to searching for personal information residing on one's own hard drive. Speech search has received less attention, perhaps because large collections of spoken material have previously not been available. However, with cheaper storage and increased broadband access, there has been a subsequent increase in the availability of online spoken audio content such as news broadcasts, podcasts, and academic lectures. A variety of personal and commercial uses also exist. As data availability increases, the lack of adequate technology for processing spoken documents becomes the limiting factor to large-scale access to spoken content. In this article, we strive to discuss the technical issues involved in the development of information retrieval systems for spoken audio documents, concentrating on the issue of handling the errorful or incomplete output provided by automatic speech recognition (ASR) systems. We focus on the usage case where a user enters search terms into a search engine and is returned a collection of spoken document hits.

Journal ArticleDOI
TL;DR: The properties of auxetic materials (materials with a negative Poisson's ratio) and their biomedical (bioprostheses) and signal processing applications are discussed.
Abstract: In this paper, the properties of auxetic materials, their biomedical (bioprostheses) and signal processing applications are discussed. Materials with negative values of the Poisson's ratio (NPR) are referred to as auxetic.

Journal ArticleDOI
TL;DR: This article brings together the various elements that constitute the signal processing challenges presented by a hemodynamics-driven functional near-infrared spectroscopy (fNIRS) based brain-computer interface (BCI).
Abstract: This article brings together the various elements that constitute the signal processing challenges presented by a hemodynamics-driven functional near-infrared spectroscopy (fNIRS) based brain-computer interface (BCI). We discuss the use of optically derived measures of cortical hemodynamics as control signals for next generation BCIs. To this end we present a suitable introduction to the underlying measurement principle, we describe appropriate instrumentation and highlight how and where performance improvements can be made to current and future embodiments of such devices. Key design elements of a simple fNIRS-BCI system are highlighted while in the process identifying signal processing problems requiring improved solutions and suggesting methods by which this might be accomplished.

Journal ArticleDOI
TL;DR: The aim was to show how classical techniques for solving linear inverse problems are applied in current state-of-the-art imaging systems, and to provide a classification of the techniques into four families: FT-based, direct reconstruction, indirect reconstruction, and interpolation.
Abstract: Classical techniques for solving linear inverse problems have been presented. Our aim was to show how these classical techniques are applied in current state-of-the-art imaging systems. Moreover, we have provided a classification of the techniques into four families: FT-based, direct reconstruction, indirect reconstruction, and interpolation. We hope that this classification will guide the curious reader into a discipline with a rich bibliography and sometimes sophisticated mathematics. In this survey, we have skipped more complicated methods for solving inverse problems. Through our examples, we have tried to emphasize the large variety of applications of linear inverse problems in imaging. Two main examples have been examined more deeply in this survey. We hope they have helped the reader to understand the application of the general techniques in two interesting contexts: multispectral imaging and magnetic resonance imaging.
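
As one concrete instance of the classical machinery surveyed here, the sketch below solves a small linear inverse problem y = Ax + noise by Tikhonov-regularized least squares; the sizes, the regularization weight, and the generic random operator A (rather than a true imaging operator) are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 120, 100, 1e-2
A = rng.standard_normal((m, n))          # stand-in for an imaging operator
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# x_hat = argmin_x ||A x - y||^2 + lam * ||x||^2, solved in closed form
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```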

Journal ArticleDOI
TL;DR: Multiplicative algorithms are not necessarily the best approaches for NMF, especially if data representations are not very redundant or sparse; much better performance can be achieved using the FP-ALS, IPC, and QN methods.
Abstract: In these lecture notes, the authors have outlined several approaches to solving an NMF/NTF problem. The following main conclusions can be drawn: 1) Multiplicative algorithms are not necessarily the best approaches for NMF, especially if data representations are not very redundant or sparse. 2) Much better performance can be achieved using the FP-ALS (especially for large-scale problems), IPC, and QN methods. 3) To achieve high performance it is quite important to use the multilayer structure with multistart initialization conditions. 4) To estimate physically meaningful nonnegative components it is often necessary to use some a priori knowledge and impose additional constraints or regularization terms (to control sparsity, boundedness, continuity, or smoothness of the estimated nonnegative components).
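
For reference, the multiplicative baseline the notes caution about looks like this: Lee-Seung multiplicative updates for the Frobenius-norm NMF objective, shown as a minimal NumPy sketch with illustrative sizes (the better-performing FP-ALS, IPC, and QN variants are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.random((50, 40))          # nonnegative data matrix
r = 5                             # target rank
A = rng.random((50, r))           # nonnegative factor initializations
X = rng.random((r, 40))

eps = 1e-9                        # guards against division by zero
for _ in range(200):
    # Lee-Seung updates for min ||Y - A X||_F^2; elementwise ratios
    # keep both factors nonnegative at every iteration.
    X *= (A.T @ Y) / (A.T @ A @ X + eps)
    A *= (Y @ X.T) / (A @ X @ X.T + eps)

print(np.linalg.norm(Y - A @ X))  # residual after the fixed iteration budget
```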

Journal ArticleDOI
TL;DR: To identify materials on a painting's surface in a reliable manner, currently the most popular and trustworthy method is the analysis of microsamples of the paint layer; such chemical analyses, although reliable, have a number of drawbacks.
Abstract: Identifying the materials of a painting is a crucial step in any conservation process. When the objective is to prepare an intervention plan, it is absolutely necessary to understand the materials the restorer is going to encounter. Also, when the aim is a better understanding of the artwork, and perhaps an authenticity check, it is highly relevant to know which materials were employed, since they may differ depending on the period of execution and on the specific artist as well. To identify materials on a painting's surface in a reliable manner, currently the most popular and trustworthy method is the analysis of microsamples of the paint layer. However, chemical analyses, although reliable, have a number of drawbacks. The first is linked to the fact that this is an invasive method: the samples need to be detached from the painting and will not be put back in place. Moreover, the achieved results are in principle - and often also in practice - valid only for that specific specimen and cannot generally be extended to neighboring areas.

Journal ArticleDOI
TL;DR: This review summarizes linear spatiotemporal signal analysis methods that derive their power from careful consideration of spatial and temporal features of skull surface potentials from signal processing and machine learning.
Abstract: This review summarizes linear spatiotemporal signal analysis methods that derive their power from careful consideration of spatial and temporal features of skull surface potentials. BCIs offer tremendous potential for improving the quality of life for those with severe neurological disabilities. At the same time, it is now possible to use noninvasive systems to improve performance for time-demanding tasks. Signal processing and machine learning are playing a fundamental role in enabling applications of BCI and in many respects, advances in signal processing and computation have helped to lead the way to real utility of noninvasive BCI.

Journal ArticleDOI
TL;DR: A multistage image-based modeling approach that requires only a limited amount of human interactivity and is capable of capturing fine geometric details with accuracy similar to that of close-range active range sensors is proposed.
Abstract: In this article, developments in and a performance analysis of image matching for detailed surface reconstruction of heritage objects are discussed. Three-dimensional image-based modeling of heritage objects is a very interesting topic with many possible applications. In this article we propose a multistage image-based modeling approach that requires only a limited amount of human interactivity and is capable of capturing fine geometric details with accuracy similar to that of close-range active range sensors. It can also cope with wide baselines using several advancements over standard stereo matching techniques. Our approach is sequential, starting from a sparse basic segmented model created with a small number of interactively measured points. This model, specifically the equation of each surface, is then used as a guide to automatically add the fine details. The following three techniques are used, each where best suited, to retrieve the details: 1) for regularly shaped patches such as planes, cylinders, or quadrics, we apply a fast relative stereo matching technique; 2) for more complex or irregular segments with unknown shape, we use a global multi-image geometrically constrained technique; 3) for segments unsuited for stereo matching, we employ depth from shading (DFS). The goal is not the development of a fully automated procedure for 3D object reconstruction from image data or a sparse stereo approach; rather, we aim at the digital reconstruction of detailed and accurate surfaces from calibrated and oriented images for the practical daily documentation and digital conservation of a wide variety of heritage objects.