Author

Aggelos K. Katsaggelos

Bio: Aggelos K. Katsaggelos is an academic researcher from Northwestern University. The author has contributed to research in topics: Image restoration & Image processing. The author has an h-index of 76 and has co-authored 946 publications receiving 26,196 citations. Previous affiliations of Aggelos K. Katsaggelos include the University of Stavanger & Delft University of Technology.


Papers
Posted Content
TL;DR: In this article, an adaptive host-chip modular architecture for video acquisition is presented to optimize an overall objective task constrained under a given bit rate, where the host performs objective-task-specific computations and also intelligently guides the chip to optimize (compress) the data sent to the host.
Abstract: We present a novel adaptive host-chip modular architecture for video acquisition that optimizes an overall objective task under a given bit-rate constraint. The chip is a high-resolution imaging sensor, such as a gigapixel focal plane array (FPA), with low computational power deployed remotely in the field, while the host is a server with high computational power. The data bandwidth of the communication channel between the chip and the host is constrained and cannot accommodate the transfer of all captured data from the chip. The host performs objective-task-specific computations and also intelligently guides the chip to optimize (compress) the data sent to the host. The proposed system is modular and highly versatile in terms of flexibility in re-orienting the objective task. In this work, object tracking is the objective task. While our architecture supports any form of compression/distortion, in this paper we use quadtree (QT)-segmented video frames. We use the Viterbi (dynamic programming) algorithm to minimize the area-normalized, weighted rate-distortion allocation of resources. The host receives only these degraded frames for analysis. An object detector is used to detect objects, and a Kalman-filter-based tracker is used to track those objects. System performance is evaluated in terms of the Multiple Object Tracking Accuracy (MOTA) metric. In the proposed architecture, gains in MOTA are obtained by training the object detector twice with different system-generated distortions, as a novel two-step process. Additionally, the object detector is assisted by the tracker to upscore the region proposals in the detector and further improve performance.
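
As a rough illustration of the tracking component mentioned in the abstract, the Python sketch below implements a generic constant-velocity Kalman filter for 2-D object positions. The state model, noise levels, and class name are illustrative assumptions, not the authors' implementation.

import numpy as np

# Minimal constant-velocity Kalman filter for 2-D object tracking
# (illustrative sketch; matrices and noise levels are assumptions).
class KalmanTracker2D:
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        # State: [x, y, vx, vy]; measurements: [x, y]
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)   # process noise (assumed)
        self.R = r * np.eye(2)   # measurement noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]

# Usage: predict, then correct with each detection from a degraded frame.
trk = KalmanTracker2D(10.0, 20.0)
trk.predict()
print(trk.update([11.0, 20.5]))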
01 Jan 2003
TL;DR: This work presents a fast retrieval algorithm based on matched filtering of the video sequence trace characteristics in the principal component space, resulting in a robust and fast content-based video shot retrieval solution.
Abstract: Content-based video retrieval technology holds the key to the efficient management and sharing of video content from different sources, across different platforms and over different communication channels. In this work we present a fast retrieval algorithm based on matched filtering of the video sequence trace characteristics in the principal component space. Techniques to combat scale variance, noise, and distortion are also investigated, resulting in a robust and fast content-based video shot retrieval solution.
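
To make the retrieval idea concrete, here is a small Python sketch of matched filtering of sequence "traces" in a principal-component space. The trace dimensions, the normalized-correlation score, and all function names are assumptions rather than the paper's actual algorithm.

import numpy as np

def pca_trace(frames, basis, mean):
    """Project each flattened frame onto the leading principal components."""
    X = frames.reshape(len(frames), -1) - mean
    return X @ basis                      # shape: (num_frames, num_components)

def matched_filter_score(query, candidate):
    """Normalized cross-correlation of two equal-length PCA traces."""
    q = (query - query.mean(0)) / (query.std(0) + 1e-8)
    c = (candidate - candidate.mean(0)) / (candidate.std(0) + 1e-8)
    return float(np.mean(np.sum(q * c, axis=1)) / q.shape[1])

# Toy example: fit a PCA basis on random "frames" and score two overlapping traces.
rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 16, 16))
X = frames.reshape(30, -1)
mean = X.mean(0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:8].T                          # 8 leading components (assumed)
trace = pca_trace(frames, basis, mean)
print(matched_filter_score(trace[:20], trace[5:25]))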
Journal ArticleDOI
01 Dec 2022-Sensors
TL;DR: In this article, the authors explore the continual learning (CL) paradigm for accurately estimating the characteristics of fluid flow in pipelines, compressing the distributed sensor data with a compressive learning algorithm to increase the capacity of the CL memory bank.
Abstract: A robust–accurate estimation of fluid flow is the main building block of a distributed virtual flow meter. Unfortunately, a big leap in algorithm development would be required for this objective to come to fruition, mainly due to the inability of current machine learning algorithms to make predictions outside the training data distribution. To improve predictions outside the training distribution, we explore the continual learning (CL) paradigm for accurately estimating the characteristics of fluid flow in pipelines. A significant challenge facing CL is the concept of catastrophic forgetting. In this paper, we provide a novel approach for how to address the forgetting problem via compressing the distributed sensor data to increase the capacity of the CL memory bank using a compressive learning algorithm. Through extensive experiments, we show that our approach provides around 8% accuracy improvement compared to other CL algorithms when applied to a real-world distributed sensor dataset collected from an oilfield. Noticeable accuracy improvement is also achieved when using our proposed approach with the CL benchmark datasets, achieving state-of-the-art accuracies for the CIFAR-10 dataset on blurry10 and blurry30 settings of 80.83% and 88.91%, respectively.
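
The following Python sketch illustrates the core idea of compressing replay samples so that a fixed-size continual-learning memory bank can hold more of them. The random-projection compressor, bank sizes, and replacement policy are illustrative assumptions, not the paper's compressive learning algorithm.

import numpy as np

class CompressedMemoryBank:
    def __init__(self, input_dim, compressed_dim, budget_floats):
        rng = np.random.default_rng(0)
        # Fixed random projection acts as the "compressor" (assumed).
        self.proj = rng.normal(size=(input_dim, compressed_dim)) / np.sqrt(compressed_dim)
        # Compressed samples take fewer floats, so the same budget holds more of them.
        self.capacity = budget_floats // compressed_dim
        self.samples, self.labels = [], []

    def add(self, x, y):
        z = x @ self.proj                     # compressed representation
        if len(self.samples) < self.capacity:
            self.samples.append(z); self.labels.append(y)
        else:                                 # simple random replacement (assumed)
            i = np.random.randint(len(self.samples))
            self.samples[i], self.labels[i] = z, y

    def replay(self, k):
        idx = np.random.choice(len(self.samples), size=min(k, len(self.samples)), replace=False)
        return np.stack([self.samples[i] for i in idx]), [self.labels[i] for i in idx]

bank = CompressedMemoryBank(input_dim=512, compressed_dim=64, budget_floats=64_000)
bank.add(np.random.randn(512), 0)
print(bank.capacity, bank.replay(1)[0].shape)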
Proceedings ArticleDOI
01 Jan 2022
TL;DR: In this paper, the authors compare two standard optical characterization methods for analyzing the material properties of amorphous silicon thin films obtained from their transmission spectra.
Abstract: In this investigation, we compare two standard optical characterization methods to analyze the material properties of amorphous silicon thin films obtained from their transmission spectra.
Journal ArticleDOI
01 Nov 2008
TL;DR: In this paper, the pansharpening of multispectral images based on the use of a total variation (TV) image prior is proposed; the sensor characteristics are used to model the observation process of both panchromatic and multispectral images.
Abstract: In this paper we propose a novel algorithm for the pansharpening of multispectral images based on the use of a Total Variation (TV) image prior. Within the Bayesian formulation, the proposed methodology incorporates prior knowledge on the expected characteristics of multispectral images, and uses the sensor characteristics to model the observation process of both panchromatic and multispectral images. The pansharpened multispectral images are compared with the images obtained by other pansharpening methods and their quality is assessed both qualitatively and quantitatively.
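
A toy Python sketch of a total-variation-regularized fusion step is given below to illustrate the flavor of the approach; the data terms, weights, and plain gradient-descent scheme are assumptions and do not reproduce the paper's Bayesian formulation.

import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed isotropic total-variation penalty on image x."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def pansharpen_band(ms_up, pan, lam=0.1, step=0.2, iters=100):
    """Minimize ||x - ms_up||^2 + ||x - pan||^2 + lam * TV(x) for one band (toy objective)."""
    x = ms_up.copy()
    for _ in range(iters):
        grad = 2 * (x - ms_up) + 2 * (x - pan) + lam * tv_grad(x)
        x -= step * grad
    return x

# Toy example: fuse a random "upsampled multispectral band" with a random "panchromatic" image.
pan = np.random.rand(64, 64)
ms_up = np.random.rand(64, 64)
print(pansharpen_band(ms_up, pan).shape)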

Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered in this book, along with neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a cloud-centric vision for the worldwide implementation of the Internet of Things (IoT), describe a Cloud implementation using Aneka, which is based on the interaction of private and public Clouds, and conclude their IoT vision by expanding on the need for convergence of WSN, the Internet, and distributed computing, directed at the technological research community.

9,593 citations

Journal ArticleDOI
TL;DR: In this article, a deep learning method for single image super-resolution (SR) is proposed that directly learns an end-to-end mapping between low- and high-resolution images.
Abstract: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.
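
For context, a minimal PyTorch sketch of an SRCNN-style three-layer network is shown below, mirroring the three-stage mapping described in the abstract (patch extraction, non-linear mapping, reconstruction). The 9-1-5 filter configuration and channel counts follow the commonly cited setup but should be treated as assumptions here.

import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        # Input is the low-resolution image upscaled (e.g. bicubic) to the
        # target size; output is the restored high-resolution estimate.
        return self.body(x)

model = SRCNN()
lr_upscaled = torch.randn(1, 3, 64, 64)
print(model(lr_upscaled).shape)   # torch.Size([1, 3, 64, 64])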

6,122 citations