Author

Timo Hämäläinen

Other affiliations: Dalian Medical University, Nokia, Dublin Institute of Technology
Bio: Timo Hämäläinen is an academic researcher from the University of Jyväskylä. The author has contributed to research in the topics of quality of service and encoders. The author has an h-index of 38 and has co-authored 560 publications receiving 7,648 citations. Previous affiliations of Timo Hämäläinen include Dalian Medical University and Nokia.


Papers
Proceedings ArticleDOI
01 Aug 2006
TL;DR: This paper presents a multiprocessor GALS implementation on a standard commercial FPGA with standard development tools; the key building block is a novel, reliable RTL mixed-clock FIFO.
Abstract: Globally Asynchronous Locally Synchronous (GALS) is a paradigm for complexity management and re-use of large System-on-Chip (SoC) architectures. GALS is most often based on specific ASIC design components or special FPGA platforms with custom development tools. In this paper we present a multiprocessor GALS implementation on a standard commercial FPGA with standard development tools. The key building block is a novel, reliable RTL mixed-clock FIFO. A complete MPEG-4 video encoder with four processors is implemented as a proof of concept. The area overhead compared to a fully synchronous design is shown to be only 2%, and the performance overhead is 3%. This is negligible compared to the benefits: much better flexibility, ASIC and FPGA vendor independence, and reduced design time. Furthermore, the mixed-clock interfaces allow easy re-use, since the RTL-level blocks do not need to be re-verified in design iterations.
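The core of a mixed-clock FIFO is a pair of counters compared across clock domains via Gray codes, so that pointer bits never change more than one at a time while being synchronized. The following is a minimal behavioral sketch of that idea in Python; the class name, depth, and the software-level full/empty tests are illustrative assumptions, not the paper's RTL design.

```python
# Hedged sketch: a behavioral model of a dual-clock (mixed-clock) FIFO.
# In hardware, wr/rd would live in different clock domains and only their
# Gray-coded values would cross domains; here we model the pointer logic.

def bin_to_gray(n: int) -> int:
    """Convert a binary count to its Gray-code equivalent (one bit flips per step)."""
    return n ^ (n >> 1)

class MixedClockFifo:
    """Depth must be a power of two; pointers count modulo 2*depth so
    that full and empty states can be distinguished."""

    def __init__(self, depth: int = 8):
        assert depth & (depth - 1) == 0, "depth must be a power of two"
        self.depth = depth
        self.mem = [None] * depth
        self.wr = 0  # write pointer, advanced in the producer clock domain
        self.rd = 0  # read pointer, advanced in the consumer clock domain

    def empty(self) -> bool:
        # In RTL this compares synchronized Gray-coded pointers
        return bin_to_gray(self.wr) == bin_to_gray(self.rd)

    def full(self) -> bool:
        # Full when the pointers are exactly one wrap (depth entries) apart
        return (self.wr - self.rd) % (2 * self.depth) == self.depth

    def push(self, item) -> bool:
        if self.full():
            return False
        self.mem[self.wr % self.depth] = item
        self.wr = (self.wr + 1) % (2 * self.depth)
        return True

    def pop(self):
        if self.empty():
            return None
        item = self.mem[self.rd % self.depth]
        self.rd = (self.rd + 1) % (2 * self.depth)
        return item
```

The extra pointer bit (counting modulo 2×depth) is the standard trick that lets one comparison distinguish "same position, empty" from "same position after a full wrap, full".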

10 citations

Proceedings ArticleDOI
01 Jan 2005
TL;DR: The presented benchmarking method uses a traffic generator driven by dataflow models of the applications, which allows an approximately 200× speedup with on average 10% error in estimated runtime w.r.t. cycle-accurate HW/SW cosimulation, without exposing the exact internal functionality of the application.
Abstract: This work presents the motivation, basic concepts, and requirements for benchmarking a network-on-chip (NoC). Currently there are practically no benchmark sets for NoCs, and the tools that have been presented do not meet the requirements. The presented benchmarking method uses a traffic generator with dataflow models of the applications. Combined with a transaction-level NoC, the abstract application model allows an approximately 200× speedup and on average 10% error in estimated runtime w.r.t. cycle-accurate HW/SW cosimulation, without exposing the exact internal functionality of the application.
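The abstract can be read as: replace the real application with a timed token stream (compute for N cycles, then emit M bytes onto the NoC) and sum the costs at transaction level. A minimal sketch of that estimation style, with an invented linear latency model and toy numbers that are not from the paper's tool:

```python
# Hedged sketch: runtime estimation from an abstract dataflow model of an
# application, instead of cycle-accurate cosimulation. The latency model
# (a flat cost per transferred byte) and all numbers are illustrative.

def estimate_runtime(tasks, link_latency_per_byte=0.01):
    """tasks: list of (compute_cycles, bytes_to_send) tuples, executed in
    sequence on one processing element. Returns estimated total cycles."""
    total = 0.0
    for compute_cycles, payload_bytes in tasks:
        total += compute_cycles                        # local computation
        total += payload_bytes * link_latency_per_byte # transaction-level transfer
    return total

# A toy dataflow model: three firings of a task, each ending in a transfer
model = [(1000, 256), (1500, 512), (800, 128)]
runtime = estimate_runtime(model)
```

Because only (compute time, transfer size) pairs are needed, the application's internal functionality stays hidden, which matches the benchmarking requirement stated in the abstract.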

10 citations

Proceedings ArticleDOI
16 Nov 2004
TL;DR: The admission control method rejects new multicast join requests that would otherwise decrease the quality experienced by existing receivers, and maintains delays, jitter, and losses within the desired constraints.
Abstract: Multicast admission control in differentiated services networks is an important but little-researched subject. Our admission control method rejects new multicast join requests that would otherwise decrease the quality experienced by existing receivers. Edge nodes filter join requests and generate new ones. The method was developed as an extension to the DSMCast protocol but could also be adapted to other protocols. It decreases delays, jitter, and losses and maintains them within the desired constraints.
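The filtering decision at an edge node can be sketched as a simple admission test: accept a join only if the added traffic still leaves headroom for the existing receivers. The utilization-bound model below is an illustrative stand-in for "would degrade delay, jitter, or loss", not the actual DSMCast extension.

```python
# Hedged sketch of edge-node admission control for multicast joins.
# A join is admitted only if the resulting load stays under a utilization
# bound; the bound and the capacity model are illustrative assumptions.

def admit_join(current_load_mbps: float,
               flow_rate_mbps: float,
               link_capacity_mbps: float,
               utilization_bound: float = 0.8) -> bool:
    """Reject joins that would push link utilization past the bound,
    protecting the quality experienced by existing receivers."""
    projected = current_load_mbps + flow_rate_mbps
    return projected <= utilization_bound * link_capacity_mbps
```

In this style the edge node can silently drop (filter) a join request, or generate a new request upstream when it admits one, as the abstract describes.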

10 citations

Journal ArticleDOI
TL;DR: The results show that an FPGA implementation can downscale 16VGA and HDTV video in real time with a complexity of less than half of the reference implementations.

10 citations

Proceedings ArticleDOI
16 Jun 2008
TL;DR: Simulation results reveal that a low ARQ feedback intensity yields only a marginal improvement; it is reasonable to rely upon more frequent ARQ feedback messages, as they do not result in performance degradation.
Abstract: The IEEE 802.16 standard defines the ARQ mechanism as part of the MAC layer. The functioning of the ARQ mechanism depends on a number of parameters; the IEEE 802.16 specification defines them but does not provide concrete values or solutions. We ran simulation scenarios to study how the ARQ feedback intensity impacts the performance of application protocols. The simulation results reveal that a low ARQ feedback intensity yields only a marginal improvement. Though it is possible to optimize the ARQ feedback intensity, it is reasonable to rely upon more frequent ARQ feedback messages, as they do not result in performance degradation. At the same time, ARQ connections that run on top of HARQ can delay the ARQ feedbacks up to the ARQ retry timeout to optimize performance.
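The trade-off the paper studies can be illustrated with a toy delay model: a feedback message is sent every `interval` blocks, and a lost block is retransmitted only after the next feedback, so sparser feedback adds waiting time. This is an illustrative model with invented parameters, not the paper's IEEE 802.16 simulation setup.

```python
import random

# Hedged sketch: a toy model of how ARQ feedback frequency affects mean
# block delivery delay. Loss rate, slot costs, and the feedback schedule
# are illustrative assumptions only.

def mean_delivery_delay(n_blocks: int, interval: int, loss_rate: float,
                        seed: int = 1) -> float:
    rng = random.Random(seed)
    total = 0.0
    for i in range(n_blocks):
        delay = 1.0                          # first transmission slot
        while rng.random() < loss_rate:      # lost: wait for the next feedback
            slots_to_feedback = interval - (i % interval)
            delay += slots_to_feedback + 1   # feedback wait + retransmission
        total += delay
    return total / n_blocks

# More frequent feedback (interval=1) vs. sparse feedback (interval=16)
frequent = mean_delivery_delay(10_000, interval=1, loss_rate=0.1)
sparse = mean_delivery_delay(10_000, interval=16, loss_rate=0.1)
```

With the same random seed, both runs see identical loss patterns, so the comparison isolates the effect of feedback spacing: frequent feedback never increases delay in this model, echoing the abstract's conclusion that relying on more frequent ARQ feedback is reasonable.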

10 citations


Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
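The fourth category's mail-filtering example can be made concrete with a minimal per-user filter that learns from which messages the user rejects. This is a toy word-count classifier written for illustration; it is not any particular ML library or the systems the article surveys.

```python
from collections import Counter

# Hedged sketch: a per-user mail filter that learns filtering rules
# automatically from the user's accept/reject decisions. A real system
# would use a probabilistic model; this toy scores raw word counts.

class MailFilter:
    def __init__(self):
        self.spam_words = Counter()  # words seen in rejected mail
        self.ham_words = Counter()   # words seen in accepted mail

    def train(self, message: str, rejected: bool) -> None:
        words = message.lower().split()
        (self.spam_words if rejected else self.ham_words).update(words)

    def is_spam(self, message: str) -> bool:
        # Positive score: the message's words occur more in rejected mail
        score = 0
        for word in message.lower().split():
            score += self.spam_words[word] - self.ham_words[word]
        return score > 0

f = MailFilter()
f.train("buy cheap pills now", rejected=True)
f.train("meeting agenda for monday", rejected=False)
```

Each user trains their own instance, so different users get different filters without anyone hand-writing rules, which is exactly the point the passage makes.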

13,246 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current system or solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
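Of the three location estimation schemes the survey analyzes, triangulation (more precisely, lateration from distance measurements) has a short closed form in 2D: subtracting one circle equation from the other two yields a 2×2 linear system. A sketch under that textbook formulation, with illustrative anchor coordinates; this is not code from any surveyed system.

```python
# Hedged sketch: 2D lateration from distances to three anchors.
# Subtracting the first circle equation (x-x1)^2+(y-y1)^2=d1^2 from the
# other two cancels the quadratic terms, leaving a linear system A p = b.

def trilaterate(anchors, dists):
    """anchors: three (x, y) points; dists: measured distances to each.
    Returns the estimated (x, y) position."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero when the anchors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With noisy real-world range measurements the three circles rarely intersect in a point, which is why the surveyed systems fall back on least-squares fits or fingerprinting rather than this exact solve.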

4,123 citations

01 Jan 2006

3,012 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.

2,933 citations