Author

Timo Hämäläinen

Other affiliations: Dalian Medical University, Nokia, Dublin Institute of Technology
Bio: Timo Hämäläinen is an academic researcher from the University of Jyväskylä. The author has contributed to research in topics: Quality of service & Encoder. The author has an h-index of 38 and has co-authored 560 publications receiving 7,648 citations. Previous affiliations of Timo Hämäläinen include Dalian Medical University & Nokia.


Papers
Proceedings ArticleDOI
01 Jan 2003
TL;DR: This paper studies IEEE 802.11e at the access point of the network for 3G traffic classes and analyzes how well its QoS properties perform under different simulation scenarios in which packet sizes and channel error rates are varied.
Abstract: Seamless interconnection between WLAN 802.11 and 3G/4G technology is essential for the future wireless environment. Supporting quality of service is one of the most important issues in 3G/4G. In this paper, the draft IEEE 802.11e WLAN QoS standard is studied at the access point of the network for 3G traffic classes. 802.11e is compatible with the layer-2 802.1p standard and resembles the DiffServ QoS functionality that is proposed to be one of the applied QoS standards in 3G/4G networks. We analyze how well 802.11e's QoS properties perform under different simulation scenarios in which packet sizes and channel error rates are varied.

11 citations
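The 802.11e amendment studied above differentiates traffic through four EDCA access categories. As a rough sketch of how 3G (UMTS) traffic classes are commonly mapped onto those categories, with the default EDCA contention parameters (assuming aCWmin = 15, aCWmax = 1023) — the mapping is a typical choice, not necessarily the paper's exact simulation setup:

```python
# Illustrative mapping of UMTS (3G) traffic classes to IEEE 802.11e EDCA
# access categories, with default EDCA parameters per access category.
# Treat the mapping as one reasonable choice, not the paper's exact setup.

EDCA_PARAMS = {
    # AC: (AIFSN, CWmin, CWmax)
    "AC_VO": (2, 3, 7),      # voice
    "AC_VI": (2, 7, 15),     # video
    "AC_BE": (3, 15, 1023),  # best effort
    "AC_BK": (7, 15, 1023),  # background
}

UMTS_TO_EDCA = {
    "conversational": "AC_VO",
    "streaming": "AC_VI",
    "interactive": "AC_BE",
    "background": "AC_BK",
}

def edca_for(traffic_class: str):
    """Return (AC, AIFSN, CWmin, CWmax) for a 3G traffic class."""
    ac = UMTS_TO_EDCA[traffic_class]
    return (ac, *EDCA_PARAMS[ac])

print(edca_for("conversational"))  # ('AC_VO', 2, 3, 7)
```

Smaller AIFSN and contention-window bounds give the voice category statistically earlier channel access, which is the mechanism behind the QoS differentiation the paper measures.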

Proceedings ArticleDOI
11 Sep 2005
TL;DR: Evaluation of the performance of IEEE 802.11b WLAN for supporting multihop voice over IP (VoIP) service using the NS-2 network simulator and the mean opinion score (MOS) shows that the mean number of hops between a VoIP transmitter and receiver has the main effect on the number of calls with acceptable quality.
Abstract: This paper evaluates the performance of IEEE 802.11b WLAN for supporting multihop voice over IP (VoIP) service. The evaluation is carried out using the NS-2 network simulator, with the mean opinion score (MOS) as the criterion for measuring the quality of a VoIP connection. The results show that the mean number of hops between a VoIP transmitter and receiver has the main effect on the number of calls with acceptable quality. On a small network where connections interfere with each other, as few as three hops already cause problems. The mean number of hops can be decreased with a supporting access point (AP) infrastructure. The type of interfering traffic also has an effect: voice quality in VoIP is sensitive to transmission losses, and VoIP cannot compete equally with high-data-rate applications.

11 citations
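The MOS criterion used above is typically derived from the ITU-T E-model: network impairments (delay, loss) reduce an R-factor, which is then converted to a 1.0–4.5 MOS score. A minimal sketch of the standard R-to-MOS conversion (G.107 Annex B); the impairment terms themselves are omitted here and would depend on the simulation setup:

```python
# Sketch of the ITU-T E-model R-factor to MOS conversion commonly used
# when scoring simulated VoIP quality. Only the final conversion is shown;
# computing R from delay/loss impairments is scenario-specific.

def r_to_mos(r: float) -> float:
    """Map an E-model R-factor to a Mean Opinion Score (G.107 Annex B)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# R = 93.2 is the default base factor with no impairments;
# R around 70 is roughly the floor of "acceptable" call quality.
print(round(r_to_mos(93.2), 2))
print(round(r_to_mos(70.0), 2))
```

Each extra wireless hop adds delay and loss, lowering R and hence MOS, which is consistent with the paper's finding that hop count dominates the number of acceptable calls.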

Proceedings Article
15 Nov 2011
TL;DR: This research investigates how to build an Always Best Connected application that works with EPS standard and what kind of user preferences should be taken into account when making the network selection more user centric.
Abstract: The mobile Internet has evolved rapidly in recent years, with an ever-increasing number of novel technologies and services, and a variety of access technologies deployed alongside them. Today's mobile devices often support multiple communication technologies for accessing Internet services. However, they do not all tap the full potential of these capabilities, as users often have to select networks manually and only one network is used at a time. The concept of Always Best Connected (ABC) allows a person to connect applications using the devices and access technologies that best suit his or her needs, thereby combining the features of access technologies to provide an enhanced user experience for the future Internet. However, an Always Best Connected scenario generates great complexity and a number of requirements, not only for the technical solutions, but also in terms of business relationships between operators and service providers, and in subscription handling. The Third Generation Partnership Project (3GPP) standardized the Evolved Packet System (EPS), one of whose key features is support for access system selection based on a combination of operator policies, user preferences, and access network conditions. Still, the standard focuses mainly on the operator's point of view; the user is assumed to follow this approach. Yet the standard offers a number of possibilities where a user-centric approach can improve the ABC scenario over the one predefined by the operator. This research investigates how to build an Always Best Connected application that works with the EPS standard, and what kind of user preferences should be taken into account when making network selection more user centric. Additionally, a proof-of-concept implementation of this user-centric approach to the Always Best Connected model is presented.

11 citations
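The user-centric selection the paper investigates can be pictured as a preference-weighted ranking over the currently available access networks. The sketch below is hypothetical: the network names, attributes, and weights are illustrative placeholders, not the paper's actual model or data:

```python
# Hypothetical sketch of user-centric "Always Best Connected" selection:
# rank available access networks by a weighted score over user
# preferences (cost, bandwidth, battery impact). All values illustrative.

NETWORKS = {
    # name: {attribute: normalized 0..1, higher is better for the user}
    "wlan": {"cost": 0.9, "bandwidth": 0.8, "battery": 0.6},
    "lte":  {"cost": 0.4, "bandwidth": 0.9, "battery": 0.5},
    "hspa": {"cost": 0.5, "bandwidth": 0.5, "battery": 0.7},
}

def best_connected(weights):
    """Pick the network maximizing the preference-weighted score."""
    def score(net):
        attrs = NETWORKS[net]
        return sum(weights[k] * attrs[k] for k in weights)
    return max(NETWORKS, key=score)

# A cost-sensitive user ends up on WLAN; a throughput-hungry one on LTE.
print(best_connected({"cost": 0.6, "bandwidth": 0.2, "battery": 0.2}))
print(best_connected({"cost": 0.1, "bandwidth": 0.8, "battery": 0.1}))
```

Changing the weight vector changes the winner, which is exactly the gap between operator-predefined policy and user-centric selection that the paper explores.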

Journal ArticleDOI
TL;DR: This study explores the cybersecurity posture of various MCIS setups for both types of ADS-B technology, 1090ES and UAT978, against radio-link-based attacks by a transmission-capable software-defined radio (SDR).
Abstract: Automatic dependent surveillance-broadcast (ADS-B) is a key air surveillance technology and a critical component of next-generation air transportation systems. It significantly simplifies aircraft surveillance technology and improves airborne traffic situational awareness. Many types of mobile cockpit information systems (MCISs) are based on ADS-B technology. An MCIS gives pilots the flight and traffic-related information they need. It has two parts: an ADS-B transceiver and an electronic flight bag (EFB) application. The ADS-B transceivers transmit and receive the ADS-B radio signals, while the EFB applications hosted on mobile phones display the data. Because they are cheap, lightweight, and easy to install, MCISs have become very popular. However, because ADS-B technology lacks basic security measures, it is vulnerable to cyberattacks, which leaves the MCIS inherently exposed; this is all the more concerning because these devices are constrained in power, memory, and computation. This study explores the cybersecurity posture of various MCIS setups for both types of ADS-B technology, 1090ES and UAT978. A total of six portable MCIS devices and 21 EFB applications were tested against radio-link-based attacks by a transmission-capable software-defined radio (SDR). Packet-level denial-of-service (DoS) attacks affected approximately 63% and 37% of 1090ES and UAT978 setups, respectively, and many of them experienced a system crash. Our experiments show that DoS attacks on the reception side can meaningfully reduce transmission capacity. Our coordinated attack and fuzz tests also revealed worrying issues in the MCIS. The consistency of our results across a very broad range of hardware and software configurations indicates the reliability of our proposed methodology as well as the effectiveness and efficiency of our platform.

11 citations
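The 1090ES frames targeted by the paper's SDR-based attacks are 112-bit Mode S extended squitters. A minimal sketch of decoding their header fields; CRC verification and the ME payload are omitted, and the sample frame is a widely used public test message, not data from the paper:

```python
# Minimal sketch of parsing the header of a 1090ES ADS-B (Mode S extended
# squitter) frame. Only the downlink format and ICAO address are decoded;
# CRC checking and the 56-bit ME payload are omitted for brevity.

def parse_adsb_header(hexframe: str):
    raw = bytes.fromhex(hexframe)
    df = raw[0] >> 3               # downlink format: 5 high bits of byte 0
    icao = raw[1:4].hex().upper()  # 24-bit ICAO aircraft address
    return df, icao

df, icao = parse_adsb_header("8D4840D6202CC371C32CE0576098")
print(df, icao)  # 17 4840D6 -> DF17 marks an ADS-B extended squitter
```

Because nothing in the frame is authenticated, an attacker with a transmit-capable SDR can craft arbitrary well-formed frames of this shape, which is the root cause of the vulnerabilities the study measures.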

Book ChapterDOI
18 Jul 2005
TL;DR: Generic synthesizable 2-dimensional mesh and hierarchical bus, which is an extended version of a single bus, are benchmarked in a SoC context with five parameterizable test cases and the results show that the hierarchical bus offers a good performance and area trade-off.
Abstract: A simulation-based comparison scheme for on-chip communication networks is presented. The performance of the network depends heavily on the application, and therefore several test cases are required. In this paper, a generic synthesizable 2-dimensional mesh and a hierarchical bus, which is an extended version of a single bus, are benchmarked in a SoC context with five parameterizable test cases. The results show that the hierarchical bus offers a good performance and area trade-off. In the presented test cases, a 2-dimensional mesh offers a speedup of 1.1x–3.3x over the hierarchical bus, but the area overhead is 2.3x–3.4x, which is larger than the performance improvement.

11 citations
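The trade-off claim can be made concrete as a performance-per-area ratio. The ranges below are taken from the abstract; note the paper pairs speedup and area per test case (where the overhead exceeded the speedup), while the cross-range extremes here merely bound the ratio:

```python
# Back-of-the-envelope view of the mesh-vs-hierarchical-bus trade-off,
# using the speedup (1.1x-3.3x) and area overhead (2.3x-3.4x) ranges
# quoted in the abstract. Ratios below 1 mean the bus wins per unit area.

def perf_per_area(speedup: float, area_overhead: float) -> float:
    return speedup / area_overhead

worst = perf_per_area(1.1, 3.4)  # least favourable extreme for the mesh
best = perf_per_area(3.3, 2.3)   # most favourable extreme for the mesh
print(f"mesh perf/area vs bus: {worst:.2f}x .. {best:.2f}x")
```

That the per-test-case ratio stayed below 1 in the paper's experiments is what supports the conclusion that the hierarchical bus offers the better performance/area trade-off.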


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
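The mail-filtering scenario in the fourth category above can be illustrated with a deliberately tiny learner: record which words occur in messages the user rejects versus keeps, then flag new messages containing enough rejection-associated words. This is a toy frequency-ratio rule for illustration, not a production classifier:

```python
# Toy illustration of the personalized mail-filtering example: learn
# word-level rejection scores from the user's past decisions, then flag
# new messages. Minimal by design; real filters use proper classifiers.

from collections import Counter

def train(rejected, kept):
    bad = Counter(w for msg in rejected for w in msg.lower().split())
    good = Counter(w for msg in kept for w in msg.lower().split())
    # score each word by how much more often it appears in rejected mail
    return {w: bad[w] / (bad[w] + good[w]) for w in bad}

def is_unwanted(msg, scores, threshold=0.7):
    flagged = [w for w in msg.lower().split() if scores.get(w, 0.0) > threshold]
    return len(flagged) >= 2  # require at least two suspicious words

scores = train(
    rejected=["win a free prize now", "free money claim now"],
    kept=["meeting notes for monday", "project status update"],
)
print(is_unwanted("claim your free prize", scores))  # True
print(is_unwanted("monday meeting update", scores))  # False
```

The key point from the passage survives even in this toy: the rules are maintained automatically from each user's own behavior, with no programmer in the loop.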

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current system or solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.

4,123 citations
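Location fingerprinting, which the survey discusses in detail, has a simple core: record RSSI vectors at known spots offline, then match a measured vector to the nearest fingerprint in signal space. A minimal 1-nearest-neighbor sketch with made-up values:

```python
# Minimal sketch of RSSI location fingerprinting: offline, signal-strength
# vectors are recorded at known locations; online, a measurement is
# matched to the closest fingerprint (1-NN, Euclidean distance in dBm
# space). Locations, APs, and values are illustrative.

import math

FINGERPRINTS = {
    # location: RSSI (dBm) observed from access points (AP1, AP2, AP3)
    "lobby":  (-40, -70, -80),
    "office": (-75, -45, -60),
    "lab":    (-80, -65, -42),
}

def locate(rssi):
    """Return the fingerprinted location nearest to the measured vector."""
    return min(FINGERPRINTS, key=lambda fp: math.dist(rssi, FINGERPRINTS[fp]))

print(locate((-42, -68, -78)))  # "lobby"
```

The survey's evaluation axes map directly onto this scheme: accuracy and precision depend on fingerprint density, complexity on the search, and robustness on how stable the RSSI vectors are over time.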

01 Jan 2006

3,012 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.

2,933 citations
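The self-organizing map the cited overview describes reduces to a simple iterated step: find the best-matching unit (BMU) for an input, then pull the BMU and its grid neighbours toward that input. A plain-Python sketch of one step on a 1-D map for brevity (real SOMs use 2-D grids and decaying learning rate and radius):

```python
# One training step of Kohonen's self-organizing map on a 1-D map:
# locate the best-matching unit, then move it and its neighbours within
# the given radius toward the input sample.

def som_step(weights, x, lr=0.5, radius=1):
    """weights: list of floats (1-D map); x: input sample."""
    bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    for i in range(len(weights)):
        if abs(i - bmu) <= radius:               # grid neighbourhood
            weights[i] += lr * (x - weights[i])  # move toward the input
    return bmu, weights

bmu, w = som_step([0.0, 0.5, 1.0, 1.5], x=1.1)
print(bmu, w)  # unit 2 is the BMU; units 1..3 shift toward 1.1
```

Repeating this over many samples, while shrinking `lr` and `radius`, is what makes nearby map units end up representing nearby inputs, the topology-preserving property the SOM is known for.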