Author

Thomas Arildsen

Bio: Thomas Arildsen is an academic researcher from Aalborg University. The author has contributed to research in the topics of compressed sensing and signal reconstruction, has an h-index of 11, and has co-authored 42 publications receiving 329 citations.

Papers
Journal Article
18 Dec 2017 - PeerJ
TL;DR: ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description.
Abstract: Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and may feel confident their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer review. Existing journals have been slow to adapt: source code is rarely requested and hardly ever actually executed to check that it produces the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of traditional scientific journals. ReScience resides on GitHub, where each new implementation of a computational study is made available together with comments, explanations, and software tests.
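
The abstract mentions that each ReScience entry ships with software tests. As a minimal, purely hypothetical sketch (the function names, the "published" value, and the tolerance are all illustrative, not taken from any actual ReScience submission), such a test might recompute one headline quantity and assert it matches the original article:

```python
# Hypothetical replication test in the spirit of a ReScience submission:
# recompute one published quantity and check it against the original
# article within a stated tolerance. All names and values are illustrative.
import numpy as np

def replicate_headline_result():
    """Stand-in for a replicated computation; a real entry would rerun
    the reimplemented model with a fixed random seed."""
    rng = np.random.default_rng(42)   # fixed seed for reproducibility
    return float(np.mean(rng.standard_normal(100_000) ** 2))

def test_matches_published_value():
    published_value = 1.0             # value claimed in the original paper
    assert abs(replicate_headline_result() - published_value) < 0.02
```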

104 citations

Journal Article
TL;DR: The experiments show that the proposed algorithm improves over established iterative thresholding algorithms: it can reconstruct AFM images of comparable quality from fewer measurements or, equivalently, obtain a more detailed reconstruction for a fixed number of measurements.
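
The paper's own algorithm is not reproduced here, but the "established iterative thresholding" baseline it improves upon can be sketched generically. The following is a minimal ISTA (iterative soft-thresholding) implementation for the sparse reconstruction problem; the matrix, data, and parameters are placeholders, not the paper's setup:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Generic iterative soft-thresholding (ISTA) for
    min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```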

29 citations

Proceedings Article
18 Oct 2012
TL;DR: In this paper, compressive signal processing is applied to demodulate the received signal in a Direct Sequence Spread Spectrum (DSSS) communication system, lowering the sampling rate; this may decrease the power consumption or the manufacturing price of wireless receivers using spread spectrum technology.
Abstract: We show that to lower the sampling rate in a spread spectrum communication system using Direct Sequence Spread Spectrum (DSSS), compressive signal processing can be applied to demodulate the received signal. This may lead to a decrease in the power consumption or the manufacturing price of wireless receivers using spread spectrum technology. The main novelty of this paper is the discovery that in spread spectrum systems it is possible to apply compressive sensing with a much simpler hardware architecture than in other systems, making the implementation both simpler and more energy efficient. Our theoretical work is exemplified with a numerical experiment using the IEEE 802.15.4 standard's 2.4 GHz band specification. The numerical results support our theoretical findings and indicate that compressive sensing may be used successfully in spread spectrum communication systems. The results obtained here may also be applicable in other spread spectrum technologies, such as Code Division Multiple Access (CDMA) systems.
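
As a toy illustration of the idea (not the paper's receiver architecture or its IEEE 802.15.4 experiment; the spreading factor, measurement count, and noise level below are arbitrary), BPSK symbols spread by a known code can be detected by correlating in the compressed domain, using far fewer samples per symbol than chips:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 8                                  # chips per symbol, compressive measurements
code = rng.choice([-1.0, 1.0], size=N)        # known spreading sequence (illustrative)
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)  # random measurement matrix
template = Phi @ code                         # compressed reference waveform

bits = rng.choice([-1.0, 1.0], size=1000)     # BPSK data symbols
errors = 0
for b in bits:
    r = b * code + 0.5 * rng.standard_normal(N)  # received chip sequence plus noise
    y = Phi @ r                                  # compressive front end: M << N samples
    errors += np.sign(template @ y) != b         # despread by compressed-domain correlation
print("bit error rate:", errors / bits.size)
```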

22 citations

Journal Article
TL;DR: This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation; it reveals that a simple raster scanning pattern combined with conventional image interpolation performs very well.
Abstract: This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reasons for using undersampling are that it reduces the path length, and thereby the scanning time, as well as the amount of interaction between the AFM probe and the specimen; it can also easily be applied on conventional AFM hardware. Due to the undersampling, the acquired image must subsequently be processed in order to reconstruct an approximation of the original image. Based on real AFM cell images, our simulations reveal that a simple raster scanning pattern in combination with conventional image interpolation performs very well. Moreover, this combination enables a reduction of the scanning time by a factor of 10 while retaining an average reconstruction quality of around 36 dB PSNR on the tested cell images.
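
A minimal sketch of the reconstruction step, assuming a synthetic stand-in for an AFM height image (the study itself used real AFM cell images, and its scanning patterns and interpolation settings may differ): keep every 10th raster line, interpolate the rest, and score the result with PSNR:

```python
import numpy as np
from scipy.interpolate import griddata

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB, with the image's dynamic range as peak."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(np.ptp(ref) ** 2 / mse)

# Synthetic stand-in for an AFM height image.
u, v = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
img = np.sin(6 * np.pi * u) * np.cos(4 * np.pi * v)

keep = np.zeros(img.shape, dtype=bool)
keep[::10, :] = True                          # factor-10 undersampling: every 10th scan line

pts = np.argwhere(keep)                       # sampled (row, col) positions
full = np.argwhere(np.ones_like(keep))        # all pixel positions
recon = griddata(pts, img[keep], full, method='cubic').reshape(img.shape)
recon = np.nan_to_num(recon, nan=img[keep].mean())  # rows beyond the last kept scan line
print(f"PSNR: {psnr(img, recon):.1f} dB")
```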

22 citations


Cited by
Journal Article
TL;DR: To bridge the gap between theory and practicality of CS, different CS acquisition strategies and reconstruction approaches are elaborated systematically in this paper.
Abstract: Compressive Sensing (CS) is a new sensing modality which compresses the signal being acquired at the time of sensing. Signals can have a sparse or compressible representation either in the original domain or in some transform domain. Relying on the sparsity of the signals, CS allows us to sample the signal at a rate much below the Nyquist sampling rate, and the varied reconstruction algorithms of CS can faithfully reconstruct the original signal from these few compressive measurements. This fact has stimulated research interest toward the use of CS in several fields, such as magnetic resonance imaging, high-speed video acquisition, and ultrawideband communication. This paper reviews the basic theoretical concepts underlying CS. To bridge the gap between theory and practicality of CS, different CS acquisition strategies and reconstruction approaches are elaborated systematically. The major application areas where CS is currently being used are reviewed, and some of the challenges and research directions in this field are highlighted.
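
As a concrete instance of the sub-Nyquist sampling and reconstruction the abstract describes (a generic textbook setup, not drawn from this survey's experiments), a k-sparse signal can be recovered from m << n random measurements with a greedy method such as orthogonal matching pursuit:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef                     # project out the support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                          # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # m << n sub-Nyquist measurements
print("recovery error:", np.linalg.norm(omp(A, y, k) - x_true))
```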

334 citations

Journal Article
20 Aug 2019 - eLife
TL;DR: Brian 2 allows scientists to simply and efficiently simulate spiking neural network models by transforming simple and concise high-level model descriptions into efficient low-level code that runs interleaved with the user's own code.
Abstract: Simulating the brain starts with understanding the activity of a single neuron. From there, it quickly gets very complicated. To reconstruct the brain with computers, neuroscientists have to first understand how one brain cell communicates with another using electrical and chemical signals, and then describe these events using code. At this point, neuroscientists can begin to build digital copies of complex neural networks to learn more about how those networks interpret and process information. To do this, computational neuroscientists have developed simulators that take models for how the brain works to simulate neural networks. These simulators need to be able to express many different models, simulate these models accurately, and be relatively easy to use. Unfortunately, simulators that can express a wide range of models tend to require technical expertise from users, or perform poorly; while those capable of simulating models efficiently can only do so for a limited number of models. An approach to increase the range of models simulators can express is to use so-called ‘model description languages’. These languages describe each element within a model and the relationships between them, but only among a limited set of possibilities, which does not include the environment. This is a problem when attempting to simulate the brain, because a brain is precisely supposed to interact with the outside world. Stimberg et al. set out to develop a simulator that allows neuroscientists to express several neural models in a simple way, while preserving high performance, without using model description languages. Instead of describing each element within a specific model, the simulator generates code derived from equations provided in the model. This code is then inserted into the computational experiments. This means that the simulator generates code specific to each model, allowing it to perform well across a range of models. The result, Brian 2, is a neural simulator designed to overcome the rigidity of other simulators while maintaining performance. Stimberg et al. illustrate the performance of Brian 2 with a series of computational experiments, showing how Brian 2 can test unconventional models, and demonstrating how users can extend the code to use Brian 2 beyond its built-in capabilities.
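
The equation-oriented workflow the digest describes looks like this in practice; the following is a minimal leaky integrate-and-fire example following Brian 2's documented API (the specific equation and parameter values are illustrative):

```python
from brian2 import NeuronGroup, SpikeMonitor, run, ms

# The model is written as plain equation strings; Brian 2 generates the
# low-level simulation code from these descriptions at runtime.
eqs = 'dv/dt = (1.1 - v) / (10*ms) : 1'
group = NeuronGroup(1, eqs, threshold='v > 1', reset='v = 0', method='exact')
spikes = SpikeMonitor(group)
run(100*ms)
print(spikes.t)   # spike times of the single neuron
```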

319 citations

Journal Article
Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, Anshul Kanakia
23 Jan 2020
TL;DR: The design, schema, and technical and business motivations behind MAG are described and how MAG can be used in analytics, search, and recommendation scenarios are elaborated.
Abstract: An ongoing project explores the extent to which artificial intelligence (AI), specifically in the areas of natural language processing and semantic reasoning, can be exploited to facilitate the stu...

310 citations

Posted Content
01 Apr 2019 - bioRxiv
TL;DR: Brian 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural, equation-oriented approach, enabling scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models.
Abstract: To be maximally useful for neuroscience research, neural simulators must make it possible to define original models. This is especially important because a computational experiment might need not only descriptions of neurons and synapses, but also models of interactions with the environment (e.g. muscles), or of the environment itself. To preserve high performance when defining new models, current simulators offer two options: low-level programming, or mark-up languages (and other domain-specific languages). The first option requires time and expertise, is prone to errors, and contributes to problems with reproducibility and replicability. The second option has limited scope, since it can only describe the range of neural models covered by the ontology; other aspects of a computational experiment, such as the stimulation protocol, cannot be expressed within this framework. Brian 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural, equation-oriented approach. Brian 2 enables scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models, while the technique of runtime code generation automatically transforms high-level descriptions of models into efficient low-level code tailored to different hardware (e.g. CPU or GPU). We illustrate it with several challenging examples: a plastic model of the pyloric network of crustaceans, a closed-loop sensorimotor model, programmatic exploration of a neuron model, and an auditory model with real-time input from a microphone.
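
The abstract's point about tailoring generated code to different hardware corresponds to Brian 2's selectable code-generation backends. A brief sketch, assuming only what Brian 2's documentation describes (the model itself is a placeholder):

```python
from brian2 import NeuronGroup, run, ms, prefs

# Runtime code generation: the same high-level model is compiled to the
# selected target ('numpy' or 'cython' at runtime; set_device('cpp_standalone')
# would instead generate a complete standalone C++ project).
prefs.codegen.target = 'numpy'

group = NeuronGroup(100, 'dv/dt = -v / (20*ms) : 1', method='exact')
group.v = 1
run(10*ms)
print(group.v[:3])   # membrane values, decayed toward zero
```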

178 citations