Author

Timo Hämäläinen

Other affiliations: Dalian Medical University, Nokia, Dublin Institute of Technology

Bio: Timo Hämäläinen is an academic researcher from the University of Jyväskylä. The author has contributed to research in the topics of Quality of service & Encoder. The author has an h-index of 38, has co-authored 560 publications, and has received 7,648 citations. Previous affiliations of Timo Hämäläinen include Dalian Medical University & Nokia.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors discuss the resource optimization challenges of tiny machine learning and different methods, such as quantization, pruning, and clustering, that can be used to overcome these resource difficulties.
Abstract: We use 250 billion microcontrollers daily in electronic devices that are capable of running machine learning models inside them. Unfortunately, most of these microcontrollers are highly constrained in terms of computational resources, such as memory usage or clock speed. These are exactly the same resources that play a key role in teaching and running a machine learning model on a basic computer. However, in a microcontroller environment, constrained resources make a critical difference. Therefore, a new paradigm known as tiny machine learning had to be created to meet the constrained requirements of embedded devices. In this review, we discuss the resource optimization challenges of tiny machine learning and different methods, such as quantization, pruning, and clustering, that can be used to overcome these resource difficulties. Furthermore, we summarize the present state of tiny machine learning frameworks, libraries, development environments, and tools. The benchmarking of tiny machine learning devices is another concern; the same microcontroller constraints, together with the diversity of hardware and software, turn into benchmarking challenges that must be resolved before performance differences between embedded devices can be measured reliably. We also discuss emerging techniques and approaches to boost and expand the tiny machine learning process and to improve data privacy and security. Finally, we draw conclusions about tiny machine learning and its future development.
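To make one of the named methods concrete, here is a minimal sketch of post-training 8-bit quantization in plain NumPy. The symmetric single-scale scheme and the function names are illustrative assumptions, not taken from the paper, which surveys such techniques rather than prescribing one implementation.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Naive symmetric post-training quantization to int8.

    Maps float weights into [-127, 127] with a single scale factor,
    a toy instance of the quantization idea discussed in the review.
    """
    scale = max(np.max(np.abs(weights)) / 127.0, 1e-12)  # avoid div-by-zero
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

# Toy usage: quantize a random weight matrix and check the error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```

Storing int8 values instead of float32 cuts weight memory by roughly 4x, which is exactly the kind of saving that matters on a constrained microcontroller.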

4 citations

Proceedings ArticleDOI
08 Apr 2002
TL;DR: Results show that a nine-fold improvement can be obtained in H.26L decoding speed in terms of frames per second with video quality equivalent to a non-optimized implementation.
Abstract: A unified method for optimization of video coding algorithms on general-purpose processors is presented. The method consists of algorithmic, code, compiler, and SIMD (Single Instruction Multiple Data) media Instruction Set Architecture (ISA) optimizations. H.263, H.263+, and the emerging H.26L are used as example cases. For the realization of the unified method, the coding elements in all the codecs are analyzed, and optimization techniques suitable for one or several of the coding elements are presented. Results show that a nine-fold improvement can be obtained in H.26L decoding speed in terms of frames per second, with video quality equivalent to a non-optimized implementation.
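The paper targets SIMD media instructions on real processors; as a loose analogue only, the sketch below contrasts a scalar per-pixel loop with a whole-block NumPy version of one common video coding element, the sum of absolute differences (SAD) used in motion estimation. The choice of SAD and the function names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def sad_scalar(block_a, block_b):
    """Scalar SAD: one pixel at a time, like unoptimized C."""
    total = 0
    for a, b in zip(block_a.ravel(), block_b.ravel()):
        total += abs(int(a) - int(b))
    return total

def sad_vectorized(block_a, block_b):
    """Whole-block SAD: NumPy operates on many pixels per call,
    roughly analogous to what SIMD ISA extensions do in hand-tuned C."""
    return int(np.sum(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32))))

a = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
b = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
assert sad_scalar(a, b) == sad_vectorized(a, b)
```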

4 citations

Proceedings ArticleDOI
06 Jul 2003
TL;DR: A novel architecture for real-time betting is presented that removes the up-front effort and enables frequent bet announcements and placements during an ongoing event by broadcasting announcements, time-stamping and storing the placements locally, and collecting them after the event has finished.
Abstract: Traditional betting systems do not enable betting in real time during an event and require decisions and preparations beforehand. In this paper a novel architecture for real-time betting is presented. It removes the up-front effort and enables frequent bet announcements and placements during an ongoing event. This is achieved by broadcasting announcements, time-stamping and storing the placements locally, and collecting them after the event has finished. While solving the processing problems, the architecture requires reliable cryptographic and physical protection. Currently, technologies such as DVB and LAN offer potential platforms for providing the service. The implemented LAN demonstrator has shown that user interfaces have to be simple and that bets should not be announced too often. It has also shown that real-time operation makes betting more inspiring.
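A minimal sketch of the local time-stamping idea described above, assuming an HMAC as the cryptographic protection: each placement is recorded locally with a timestamp and a keyed tag so that altered or back-dated bets can be detected when placements are collected after the event. The record format and key handling are hypothetical simplifications, not the paper's protocol.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"terminal-secret"  # hypothetical per-terminal key

def place_bet(store: list, announcement_id: str, selection: str) -> dict:
    """Time-stamp a bet locally and protect it with an HMAC tag."""
    record = {
        "announcement": announcement_id,
        "selection": selection,
        "timestamp": time.time(),  # local placement time
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    store.append(record)
    return record

def verify_bet(record: dict) -> bool:
    """After the event, the collector re-checks each record's tag."""
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "tag"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

local_store = []
bet = place_bet(local_store, "race-42-winner", "horse-7")
assert verify_bet(bet)
```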

4 citations

Proceedings ArticleDOI
01 Sep 2020
TL;DR: An improved variant of the linear regression-based method, called RT-LRbTC, is tested and proves to be a promising method for wireless sensor nodes, offering fixed and predictable latency.
Abstract: The proliferation of Internet of Things applications has drawn attention to different sensor data processing methods. Sensor data compression is one of the fundamental methods for reducing the amount of data that must be transmitted from a sensor node, which is often battery-powered and operates wirelessly. Reducing the amount of data in wireless transmission is an effective way to reduce the overall energy consumption of wireless sensor nodes. The methods presented and tested are suitable for constrained sensor nodes with limited computational power and limited energy resources. The methods are compared with each other using compression ratio and inherent latency; latency is an important parameter in online applications. An improved variant of the linear regression-based method, called RT-LRbTC, is tested and proves to be a promising method for use in a wireless sensor node, offering fixed and predictable latency. The compression efficiency of the algorithms is tested with real measurement data sets.
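As a hedged illustration of the general idea (not the paper's exact RT-LRbTC algorithm, whose details are not given here), the sketch below fits a line to each fixed-size window of samples and transmits only the line parameters when the fit is within a tolerance. The window size and error bound are placeholder values; the fixed window is what gives the fixed, predictable latency mentioned in the abstract.

```python
import numpy as np

def lr_compress(samples, window=8, max_err=0.5):
    """Generic linear-regression-based temporal compression sketch.

    Each fixed window is replaced by a fitted line (2 numbers) when
    every sample lies within max_err of the line; otherwise the raw
    window is kept so the reconstruction error stays bounded.
    """
    out = []
    for i in range(0, len(samples), window):
        chunk = np.asarray(samples[i:i + window], dtype=float)
        if len(chunk) < 2:
            out.append(("raw", chunk))
            continue
        t = np.arange(len(chunk))
        slope, intercept = np.polyfit(t, chunk, 1)
        if np.max(np.abs(chunk - (slope * t + intercept))) <= max_err:
            out.append(("line", slope, intercept, len(chunk)))
        else:
            out.append(("raw", chunk))
    return out

# Toy usage: a slowly drifting reading compresses to line segments.
signal = [20.0 + 0.01 * k for k in range(64)]
print(lr_compress(signal))
```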

4 citations

Proceedings ArticleDOI
08 Dec 2008
TL;DR: This paper presents an execution monitor, a versatile monitoring tool implemented in Java for multi-processor systems-on-chip (MPSoCs), which allows monitoring both the application and the underlying platform in real time, as well as viewing a previously recorded execution trace.
Abstract: In system-level design, design space exploration (DSE) produces large amounts of data when exploring the myriad alternatives for application mapping and the underlying platform. Visualization of the essential execution data makes the right design decisions considerably easier. This paper presents an execution monitor, a versatile monitoring tool implemented in Java for multi-processor systems-on-chip (MPSoCs). It allows monitoring both the application and the underlying platform in real time, as well as viewing a previously recorded execution trace. The execution monitor can be used both during simulation and during prototyping. Moreover, the designer can rapidly evaluate the performance of multiple application mappings at run time via an intuitive drag-and-drop mechanism. The case study shows that visualization of the monitored execution data significantly eases optimizing the performance of a video codec after the addition of new application functionality.
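The tool itself is a Java application; purely to illustrate the recording-and-replay idea behind such a monitor, here is a tiny Python sketch in which timestamped events are stored per processing element so they can be viewed live or replayed later. The class and method names are invented for this example.

```python
import time
from collections import defaultdict

class ExecutionTrace:
    """Toy trace store behind a monitoring tool: timestamped events
    are kept per processing element (PE) so a front-end could show
    them in real time or replay a previously recorded run."""

    def __init__(self):
        self.events = defaultdict(list)  # PE name -> list of events

    def record(self, pe: str, task: str, state: str):
        """Log one (time, task, state) event for a processing element."""
        self.events[pe].append((time.monotonic(), task, state))

    def replay(self, pe: str):
        """Yield recorded events in order, for offline viewing."""
        yield from self.events[pe]

trace = ExecutionTrace()
trace.record("cpu0", "idct", "start")
trace.record("cpu0", "idct", "done")
for event in trace.replay("cpu0"):
    print(event)
```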

4 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
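The mail-filtering example in the fourth category lends itself to a short sketch. Below, a smoothed naive Bayes word-count classifier learns from messages a hypothetical user has labeled; the training data and labels are made up for illustration, and naive Bayes is one standard choice, not one the abstract prescribes.

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies and message totals per label."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Laplace-smoothed naive Bayes: pick the higher-scoring label."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    def score(label):
        n = sum(counts[label].values()) + len(vocab)
        s = math.log(totals[label] / sum(totals.values()))  # prior
        for w in text.lower().split():
            s += math.log((counts[label][w] + 1) / n)       # likelihood
        return s
    return max(("spam", "ham"), key=score)

# Made-up training mail: (text, user's verdict) pairs.
mail = [("win money now", "spam"), ("meeting at noon", "ham"),
        ("cheap money offer", "spam"), ("lunch at noon?", "ham")]
counts, totals = train(mail)
print(classify("free money", counts, totals))  # likely "spam"
```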

13,246 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes, triangulation, scene analysis, and proximity, are analyzed. We also discuss location fingerprinting in detail since it is used in most current systems and solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
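Location fingerprinting, which the survey treats in detail, reduces in its simplest form to a nearest-neighbor search in signal space. The sketch below matches a measured RSSI vector against a hypothetical offline radio map; the locations, access-point count, and dBm values are placeholders.

```python
import numpy as np

# Hypothetical offline radio map: location -> mean RSSI (dBm) from 3 APs.
RADIO_MAP = {
    "lobby":   np.array([-40.0, -70.0, -80.0]),
    "office":  np.array([-75.0, -45.0, -65.0]),
    "hallway": np.array([-60.0, -60.0, -50.0]),
}

def locate(rssi):
    """Nearest-neighbor fingerprinting: return the location whose
    stored fingerprint is closest (Euclidean) to the measurement."""
    rssi = np.asarray(rssi, dtype=float)
    return min(RADIO_MAP, key=lambda loc: np.linalg.norm(RADIO_MAP[loc] - rssi))

print(locate([-42.0, -68.0, -79.0]))  # -> "lobby"
```

Real systems refine this with k-nearest-neighbor averaging or probabilistic models, which is where the survey's accuracy and complexity comparisons come in.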

4,123 citations

01 Jan 2006

3,012 citations

01 Jan 1990
TL;DR: An overview is given of the self-organizing map algorithm, on which the papers in this issue are based.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
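To make the algorithm concrete, here is a bare-bones self-organizing map training loop in NumPy, following the standard formulation (best-matching unit plus a Gaussian neighborhood update). The grid size, learning-rate schedule, and random training data are placeholder choices, not parameters from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
grid, dim = 10, 3                     # 10x10 map of 3-D weight vectors
weights = rng.random((grid, grid, dim))
data = rng.random((500, dim))         # placeholder training vectors

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                      # decaying learning rate
    sigma = max(1.0, grid / 2 * (1 - t / len(data)))    # shrinking radius
    # Best-matching unit: the node whose weight vector is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighborhood pulls the BMU and nearby nodes toward x.
    ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)
```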

2,933 citations