scispace - formally typeset
Author

John R. Buck

Bio: John R. Buck is an academic researcher at the University of Massachusetts Dartmouth. He has contributed to research in the topics of covariance matrices and beamforming, has an h-index of 21, and has co-authored 113 publications receiving 3,469 citations. His previous affiliations include Dartmouth College and the Massachusetts Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: This analysis demonstrates that there is a strong structural constraint in the generation of the songs, and the structural constraints exhibit periodicities with periods of 6-8 and 180-400 units, implying that no empirical Markov model is capable of representing the songs' structure.
Abstract: Many theories of nonhuman animal communication posit a first-order Markov model in which the next signal depends only on the current one. Such a model precludes a hierarchical structure to the communication signal. Information theory and signal processing provide quantitative techniques to estimate the underlying complexity of an arbitrary signal or symbol sequence. These techniques are applied to humpback whale songs and demonstrate that any first-order Markov model fails to attain the underlying bound of complexity in these songs. Humpback songs are symbolized into alphabet sequences using spectrograms and self-organizing neural nets [Walker, unpublished]. The entropy of the song sequence is measured with a first-order parametric Markov model, and with a nonparametric sliding-window method [Kontoyiannis et al., IEEE Trans. Inf. Theory 44, 1319–1327 (1998)]. Preliminary analyses suggest that the entropy of the first-order Markov model is significantly higher than that of the nonparametric model, meaning that any first-order Markov source cannot reasonably model humpback songs. Furthermore, it is found that the symbolized song statistics are locally but not globally stationary, implying that these songs possess a hierarchical structure. [Work supported by NSF Ocean Sciences.]
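The two entropy estimates contrasted in the abstract can be sketched for an arbitrary symbol sequence. A minimal sketch, not the study's code: the sliding-window estimator below is a simplified variant of the Kontoyiannis et al. match-length form, and the example sequences are made up.

```python
import math
import random
from collections import Counter

def markov_entropy_rate(seq):
    """Entropy rate (bits/symbol) of a first-order Markov model fit to seq."""
    bigrams = Counter(zip(seq, seq[1:]))
    unigrams = Counter(seq[:-1])
    n = len(seq) - 1
    h = 0.0
    for (a, b), c in bigrams.items():
        p_joint = c / n             # estimated P(a, b)
        p_cond = c / unigrams[a]    # estimated P(b | a)
        h -= p_joint * math.log2(p_cond)
    return h

def match_length_entropy(s):
    """Nonparametric estimate from match lengths, a simplified variant of
    the Kontoyiannis et al. estimator: H ~ n*log2(n) / sum(lam_i), where
    lam_i is 1 + the longest match of s[i:] into the past s[:i]."""
    n = len(s)
    lam_sum = 0
    for i in range(1, n):
        l = 1
        while i + l <= n and s[i:i + l] in s[:i]:
            l += 1
        lam_sum += l
    return n * math.log2(n) / lam_sum

random.seed(0)
iid = "".join(random.choice("abcd") for _ in range(2000))  # ~2 bits/symbol
periodic = "ab" * 200                                      # ~0 bits/symbol
```

A highly structured (periodic) sequence drives both estimates toward zero, while an i.i.d. four-symbol sequence sits near 2 bits/symbol; the paper's point is that for humpback songs the Markov estimate stays well above the nonparametric one.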

154 citations

Journal ArticleDOI
TL;DR: The three-year development of the Signals and Systems Concept Inventory is summarized and results obtained from testing with a pool of over 900 students from seven schools are presented.
Abstract: The signal processing community needs quantitative standardized tools to assess student learning in order to improve teaching methods and satisfy accreditation requirements. The Signals and Systems Concept Inventory (SSCI) is a 25-question multiple-choice exam designed to measure students' understanding of fundamental concepts taught in standard signals and systems curricula. When administered as a pre- and postcourse assessment, the SSCI measures the gain in conceptual understanding as a result of instruction. This paper summarizes the three-year development of this new assessment instrument and presents results obtained from testing with a pool of over 900 students from seven schools. Initial findings from the SSCI study show that students in traditional lecture courses master approximately 20% of the concepts they do not know prior to the start of the course. Other results highlight the most common student misconceptions and quantify the correlation between signals and systems and prerequisite courses.
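The reported figure that students "master approximately 20% of the concepts they do not know" is a normalized gain in the Hake sense. A minimal sketch of that computation, with hypothetical pre/post scores rather than the SSCI study's own data:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake-style normalized gain: the fraction of initially unknown
    material mastered during the course, g = (post - pre) / (100 - pre)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical pre/post SSCI averages: a class entering at 40% and
# leaving at 52% has mastered 20% of what it did not know at the start.
g = normalized_gain(40.0, 52.0)
```

Because the denominator is the room left for improvement, the same measure can compare courses whose students start at different levels.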

141 citations

Journal ArticleDOI
TL;DR: The authors specifically consider the two-sensor signal enhancement problem in which the desired signal is modeled as a Gaussian autoregressive (AR) process, the noise is modeled as a white Gaussian process, and the coupling systems are modeled as linear time-invariant finite impulse response (FIR) filters.
Abstract: In problems of enhancing a desired signal in the presence of noise, multiple sensor measurements will typically have components from both the signal and the noise sources. When the systems that couple the signal and the noise to the sensors are unknown, the problem becomes one of joint signal estimation and system identification. The authors specifically consider the two-sensor signal enhancement problem in which the desired signal is modeled as a Gaussian autoregressive (AR) process, the noise is modeled as a white Gaussian process, and the coupling systems are modeled as linear time-invariant finite impulse response (FIR) filters. The main approach consists of modeling the observed signals as outputs of a stochastic dynamic linear system, and the authors apply the estimate-maximize (EM) algorithm for jointly estimating the desired signal, the coupling systems, and the unknown signal and noise spectral parameters. The resulting algorithm can be viewed as the time-domain version of the frequency-domain approach of Feder et al. (1989), where the Kalman smoother is used instead of the noncausal frequency-domain Wiener filter. This approach leads naturally to a sequential/adaptive algorithm by replacing the Kalman smoother with the Kalman filter; in place of successive iterations on each data block, the algorithm proceeds sequentially through the data with exponential weighting applied to allow adaptation to nonstationary changes in the structure of the data. A computationally efficient implementation of the algorithm is developed. An expression for the log-likelihood gradient based on the Kalman smoother/filter output is also developed and used to incorporate efficient gradient-based algorithms in the estimation process.
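The Kalman filter that replaces the smoother in the sequential version of the algorithm can be illustrated in its simplest setting: a single AR(1) signal observed in white noise. This is only the single-channel building block with parameters assumed known, not the authors' two-sensor EM algorithm, and all values below are illustrative:

```python
import numpy as np

def kalman_filter_ar1(y, a, q, r):
    """Causal Kalman filter for x[t] = a*x[t-1] + w[t] (Var w = q),
    y[t] = x[t] + v[t] (Var v = r). Returns filtered state estimates
    and posterior variances."""
    n = len(y)
    xhat = np.zeros(n)
    P = np.zeros(n)
    xp, Pp = 0.0, q / (1.0 - a * a)     # stationary prior on x[0]
    for t in range(n):
        K = Pp / (Pp + r)               # Kalman gain
        xhat[t] = xp + K * (y[t] - xp)  # measurement update
        P[t] = (1.0 - K) * Pp
        xp = a * xhat[t]                # time update (prediction)
        Pp = a * a * P[t] + q
    return xhat, P

# Illustrative simulation: AR(1) signal in white observation noise.
rng = np.random.default_rng(1)
a, q, r, n = 0.9, 1.0, 1.0, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=n)
xhat, P = kalman_filter_ar1(y, a, q, r)
mse_raw = float(np.mean((y - x) ** 2))      # ~ r
mse_filt = float(np.mean((xhat - x) ** 2))  # ~ steady-state P < r
```

In the EM setting these filtered (or smoothed) estimates form the E-step, while the M-step re-estimates a, q, and r from them.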

120 citations


Cited by
Journal ArticleDOI
TL;DR: The analysis supports the claim that calls to increase the number of students receiving STEM degrees could be answered, at least in part, by abandoning traditional lecturing in favor of active learning, and it supports active learning as the preferred, empirically validated teaching practice in regular classrooms.
Abstract: Average student performance on examinations and concept inventories increased by 0.47 SDs under active learning (n = 158 studies), and the odds ratio for failing was 1.95 under traditional lecturing (n = 67 studies). These results indicate that average examination scores improved by about 6% in active learning sections, and that students in classes with traditional lecturing were 1.5 times more likely to fail than were students in classes with active learning. Heterogeneity analyses indicated that both results hold across the STEM disciplines, that active learning increases scores on concept inventories more than on course examinations, and that active learning appears effective across all class sizes, although the greatest effects are in small (n ≤ 50) classes. Trim-and-fill analyses and fail-safe n calculations suggest that the results are not due to publication bias. The results also appear robust to variation in the methodological rigor of the included studies, based on the quality of controls over student quality and instructor identity. This is the largest and most comprehensive meta-analysis of undergraduate STEM education published to date. The results raise questions about the continued use of traditional lecturing as a control in research studies, and support active learning as the preferred, empirically validated teaching practice in regular classrooms.
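The abstract quotes an odds ratio (1.95) but summarizes it as a risk ratio ("1.5 times more likely to fail"); the two differ, and converting one to the other requires a baseline failure rate. A sketch with a hypothetical baseline, not the meta-analysis's pooled data:

```python
def risk_ratio_from_odds_ratio(odds_ratio, baseline_risk):
    """Convert an odds ratio into a risk ratio, given the failure
    probability in the reference (here: active learning) group."""
    odds_base = baseline_risk / (1.0 - baseline_risk)
    odds_other = odds_ratio * odds_base
    risk_other = odds_other / (1.0 + odds_other)
    return risk_other / baseline_risk

# With a hypothetical baseline failure rate of 20%, an odds ratio of 1.95
# corresponds to a risk ratio of about 1.6; for rare outcomes the two
# measures converge.
rr_common = risk_ratio_from_odds_ratio(1.95, 0.20)
rr_rare = risk_ratio_from_odds_ratio(1.95, 0.01)
```

The risk ratio is always closer to 1 than the odds ratio when the outcome is common, which is why the paper's headline "1.5 times" is smaller than its odds ratio of 1.95.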

5,474 citations

Journal ArticleDOI
24 Jan 2005
TL;DR: It is shown that such an approach can yield an implementation of the discrete Fourier transform that is competitive with hand-optimized libraries, and the software structure that makes the current FFTW3 version flexible and adaptive is described.
Abstract: FFTW is an implementation of the discrete Fourier transform (DFT) that adapts to the hardware in order to maximize performance. This paper shows that such an approach can yield an implementation that is competitive with hand-optimized libraries, and describes the software structure that makes our current FFTW3 version flexible and adaptive. We further discuss a new algorithm for real-data DFTs of prime size, a new way of implementing DFTs by means of machine-specific single-instruction, multiple-data (SIMD) instructions, and how a special-purpose compiler can derive optimized implementations of the discrete cosine and sine transforms automatically from a DFT algorithm.
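The radix-2 split that FFTW generalizes can be written in a few lines. A textbook Cooley-Tukey sketch, not FFTW's adaptive planner:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey DFT; len(x) must be a power of two.
    FFTW generalizes this decomposition and chooses among many such
    factorizations adaptively at plan time."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Demo input (arbitrary values, length a power of two)
x = [0.5 + 0.1j, -1.0, 2.0 + 1.0j, 0.25, 1.5 - 0.5j, 0.0, -0.75j, 3.0]
X = fft(x)
```

Each level of recursion halves the problem, giving the familiar O(n log n) cost; FFTW's contribution is searching the space of such decompositions for the fastest one on the actual hardware.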

5,172 citations

01 Jan 2016
TL;DR: Citation entry for the standard mathematical reference work Table of Integrals, Series, and Products (no abstract available).

4,085 citations

Journal ArticleDOI
T.K. Moon
TL;DR: The EM (expectation-maximization) algorithm is ideally suited to problems of parameter estimation, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation.
Abstract: A common task in signal processing is the estimation of the parameters of a probability distribution function. Perhaps the most frequently encountered estimation problem is the estimation of the mean of a signal in noise. In many parameter estimation problems the situation is more complicated because direct access to the data necessary to estimate the parameters is impossible, or some of the data are missing. Such difficulties arise when an outcome is a result of an accumulation of simpler outcomes, or when outcomes are clumped together, for example, in a binning or histogram operation. There may also be data dropouts or clustering in such a way that the number of underlying data points is unknown (censoring and/or truncation). The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation. The EM algorithm is presented at a level suitable for signal processing practitioners who have had some exposure to estimation theory.
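The many-to-one mapping the abstract describes is concrete in a Gaussian mixture: each observation's hidden component label is the missing data. A minimal EM sketch for a two-component, unit-variance 1-D mixture; this is an illustrative textbook case, not code from the tutorial itself:

```python
import math
import random

def em_gmm_1d(data, iters=200):
    """EM for a two-component, unit-variance 1-D Gaussian mixture.
    Each observed value is a many-to-one image of the hidden pair
    (component label, value)."""
    mu = [min(data), max(data)]   # crude but sufficient initialization
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: responsibility-weighted ML updates of means and weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            pi[k] = nk / len(data)
    return mu, pi

# Synthetic data: equal-weight mixture of N(-2, 1) and N(3, 1)
random.seed(0)
data = ([random.gauss(-2.0, 1.0) for _ in range(400)]
        + [random.gauss(3.0, 1.0) for _ in range(400)])
mu, pi = em_gmm_1d(data)
```

Each iteration provably does not decrease the likelihood, which is the property that makes EM attractive when the complete data are unobservable.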

2,573 citations

Proceedings ArticleDOI
27 Aug 2013
TL;DR: The design and implementation of the first in-band full-duplex WiFi radios that can simultaneously transmit and receive on the same channel using standard WiFi 802.11ac PHYs are presented; the design achieves close to the theoretical doubling of throughput in all practical deployment scenarios.
Abstract: This paper presents the design and implementation of the first in-band full duplex WiFi radios that can simultaneously transmit and receive on the same channel using standard WiFi 802.11ac PHYs and achieves close to the theoretical doubling of throughput in all practical deployment scenarios. Our design uses a single antenna for simultaneous TX/RX (i.e., the same resources as a standard half duplex system). We also propose novel analog and digital cancellation techniques that cancel the self interference to the receiver noise floor, and therefore ensure that there is no degradation to the received signal. We prototype our design by building our own analog circuit boards and integrating them with a fully WiFi-PHY compatible software radio implementation. We show experimentally that our design works robustly in noisy indoor environments, and provides close to the expected theoretical doubling of throughput in practice.
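The digital-cancellation stage can be illustrated with a purely linear model: estimate the self-interference channel by least squares from the known transmit samples, reconstruct the self-interference, and subtract it. This omits the paper's analog cancellation and nonlinear digital stages, and the channel taps and signal levels below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known transmitted baseband samples and a hypothetical 3-tap linear
# self-interference (SI) channel (all values made up for illustration).
N, L = 5000, 5                      # samples; assumed max channel length
tx = rng.standard_normal(N)
h_si = np.array([1.0, 0.4, -0.15])
self_interf = np.convolve(tx, h_si)[:N]
desired = 0.01 * rng.standard_normal(N)   # weak desired signal + noise
rx = self_interf + desired

# Digital cancellation: least-squares estimate of the SI channel from the
# known tx samples, then subtract the reconstructed self-interference.
rows = N - (L - 1)
X = np.zeros((rows, L))
for k in range(L):
    X[:, k] = tx[L - 1 - k : N - k]       # X[i, k] = tx[(L-1+i) - k]
h_hat, *_ = np.linalg.lstsq(X, rx[L - 1:], rcond=None)
residual = rx[L - 1:] - X @ h_hat

suppression_db = 10 * np.log10(np.mean(rx ** 2) / np.mean(residual ** 2))
```

In this idealized linear setting the residual is dominated by the desired signal, so tens of dB of suppression are achievable; the paper's hard part is reaching the receiver noise floor in the presence of transmitter nonlinearities, which pure linear cancellation cannot do.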

2,084 citations