Author

Timo Hämäläinen

Other affiliations: Dalian Medical University, Nokia, Dublin Institute of Technology
Bio: Timo Hämäläinen is an academic researcher from University of Jyväskylä. The author has contributed to research in topics: Quality of service & Encoder. The author has an h-index of 38, co-authored 560 publications receiving 7648 citations. Previous affiliations of Timo Hämäläinen include Dalian Medical University & Nokia.


Papers
Journal ArticleDOI
TL;DR: GDL-90 protocol fuzzing options are researched, practical Denial-of-Service (DoS) attacks on popular Electronic Flight Bag (EFB) software running on mobile devices are demonstrated, and a worrying lack of security is shown in many EFB applications whose security directly affects safe aircraft navigation.
Abstract: As the core part of next-generation air transportation systems, the Automatic Dependent Surveillance-Broadcast (ADS-B) is becoming very popular. However, many (if not most) ADS-B devices and implementations support and rely on Garmin's GDL-90 protocol for data exchange and encapsulation. In this paper, we research GDL-90 protocol fuzzing options and demonstrate practical Denial-of-Service (DoS) attacks on popular Electronic Flight Bag (EFB) software operating on mobile devices. For this purpose, we specifically configured our own avionics pentesting platform and targeted the popular Garmin GDL-90 protocol, as the industry-leading devices operate on it. We captured legitimate traffic from ADS-B avionics devices, ran our samples through a state-of-the-art fuzzing platform (AFL), and fed AFL's output to the EFB apps and GDL-90 decoding software over the network in the same manner as legitimate GDL-90 traffic is sent from ADS-B and other avionics devices. The results show a worrying and critical lack of security in many EFB applications whose security directly affects safe aircraft navigation. Out of 16 tested configurations, our avionics pentesting platform managed to crash or otherwise impact 9 (56%) of them. The observed problems manifested as crashes, hangs, and abnormal behaviour of the EFB apps and GDL-90 decoders during the fuzzing tests. Attacks on core sub-system availability (such as DoS) pose high risks to safety-critical and mission-critical systems such as avionics and aerospace. Our work aims at developing and proposing a systematic pentesting methodology for such devices, protocols, and software, and at discovering and reporting such vulnerabilities as early as possible.
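As background for the fuzzing setup: GDL-90 messages are framed with 0x7E flag bytes, byte stuffing (0x7D escape, XOR 0x20), and an appended CRC-16-CCITT, so mutated payloads must be re-framed to survive the decoder's integrity checks. Below is a minimal sketch of that framing based on my reading of the public GDL-90 specification, not the paper's own tooling; the heartbeat message bytes are purely illustrative.

```python
# Sketch of GDL-90 link-layer framing: flag bytes, byte stuffing, CRC.
# Framing constants follow the public GDL-90 spec; message bytes are invented.
FLAG, ESC, XOR = 0x7E, 0x7D, 0x20

def crc16_ccitt(data: bytes) -> int:
    """CRC-16-CCITT (poly 0x1021, init 0), as GDL-90 applies to message bytes."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def frame(msg: bytes) -> bytes:
    """Wrap a message (ID + payload) into a stuffed, CRC-protected frame."""
    crc = crc16_ccitt(msg)
    payload = msg + bytes([crc & 0xFF, crc >> 8])  # CRC appended LSB first
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):                       # escape flag/escape bytes
            stuffed += bytes([ESC, b ^ XOR])
        else:
            stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

# Example: a heartbeat-style message (ID 0x00) with illustrative status bytes.
print(frame(bytes([0x00, 0x81, 0x41, 0x00, 0x00, 0x00, 0x00])).hex())
```

A fuzzer can mutate the inner message bytes and re-frame them, so only the decoder's message parsing, not the link-layer framing, gets to reject the input.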

6 citations

Proceedings ArticleDOI
01 Aug 2018
TL;DR: It is pointed out that for future data centers it is beneficial to rely on HW acceleration in terms of speed and energy efficiency for applications like IPsec.
Abstract: Line-rate speed requirements for performance-hungry network applications like IPsec are becoming problematic due to the virtualization trend: a single virtual network application can hardly provide 40 Gbps operation. This research considers offloading IPsec packet processing, without IKE, onto an FPGA in the network. We propose an IPsec accelerator in an FPGA and explain the details that need to be considered for a production-ready design. Based on our evaluation, an Intel Arria 10 FPGA can provide 10 Gbps line-rate operation for the IPsec accelerator while being responsible for 1000 IPsec tunnels. The research points out that for future data centers it is beneficial to rely on HW acceleration, in terms of both speed and energy efficiency, for applications like IPsec.
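To see why hardware offload matters at these rates, a back-of-the-envelope budget shows how few clock cycles a packet-at-a-time design would have per minimum-size packet at 10 Gbps. The clock rate and framing overheads below are illustrative assumptions, not figures from the paper.

```python
# Worst-case cycle budget for 10 Gbps line rate (illustrative numbers).
LINE_RATE = 10e9            # bits per second
MIN_FRAME = (64 + 20) * 8   # min Ethernet frame + preamble/IFG overhead, bits
CLOCK_HZ = 200e6            # an assumed, typical FPGA fabric clock

pps = LINE_RATE / MIN_FRAME        # worst-case packets per second (~14.88 M)
cycles_per_pkt = CLOCK_HZ / pps    # cycles available per packet (~13.4)
print(f"{pps/1e6:.2f} Mpps worst case, {cycles_per_pkt:.1f} cycles/packet")
```

Roughly 13 cycles per 64-byte packet at a 200 MHz fabric clock leaves no room for software-style sequential processing, which is why pipelined, wide-datapath FPGA designs are attractive here.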

6 citations

01 Jan 2015
TL;DR: This paper presents a new log file analyzing framework, LOGDIG, for checking expected system behavior from log files, a generic framework motivated by logs that include temporal data (timestamps) and system-specific data (e.g. spatial data with coordinates of moving objects).
Abstract: Log files are often the only way to identify and locate errors in a deployed system. This paper presents a new log file analyzing framework, LOGDIG, for checking expected system behavior from log files. LOGDIG is a generic framework, but it is motivated by logs that include temporal data (timestamps) and system-specific data (e.g. spatial data with coordinates of moving objects), which are present e.g. in Real Time Passenger Information Systems (RTPIS). The behavior mining in LOGDIG is state-machine-based, where a search algorithm in each state tries to find desired events (with a certain accuracy) in the log files. This differs from related work, in which transitions are bound directly to lines of log files. LOGDIG reads arbitrary log files and uses metadata to interpret the input data. The output is static behavioral knowledge and a human-friendly composite log for reporting results in legacy tools. Field data from a commercial RTPIS called ELMI is used as a proof-of-concept case study. LOGDIG can also be configured to analyze other systems' log files through its flexible metadata formats and a new behavior mining language.
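The state-machine idea can be illustrated with a toy analyzer in the spirit of LOGDIG, where each state searches the log for its desired event within a time tolerance instead of binding transitions directly to log lines. Event names, fields, and tolerances below are invented for illustration.

```python
# Toy state-machine log checker: each expected state scans forward through
# the timestamped log for a matching event near its nominal time.
from dataclasses import dataclass

@dataclass
class Event:
    ts: float   # timestamp (seconds)
    text: str   # raw log line

def run_states(events, expected, tolerance=2.0):
    """expected: list of (pattern, nominal_ts); returns per-state verdicts."""
    verdicts, i = [], 0
    for pattern, nominal in expected:
        found = None
        while i < len(events):          # resume scanning where the last state stopped
            e = events[i]; i += 1
            if pattern in e.text and abs(e.ts - nominal) <= tolerance:
                found = e
                break
        verdicts.append((pattern, found is not None))
    return verdicts

# Invented RTPIS-style log: a bus arriving at and departing from a stop.
log = [Event(10.0, "bus 42 arrived stop A"),
       Event(11.2, "display updated stop A"),
       Event(30.5, "bus 42 departed stop A")]
print(run_states(log, [("arrived", 10.0), ("departed", 30.0)]))
```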

6 citations

Proceedings ArticleDOI
29 Aug 2004
TL;DR: Simulation results demonstrated the effectiveness of the proposed optimal resource allocation scheme for maximizing the revenue of service providers in a network node by optimally allocating a given amount of node resources among multiple service classes.
Abstract: In future IP networks, a wide range of different service classes must be supported in a network node, and different classes of customers pay different prices for the node resources they use, based on their service-level agreements. We link the resource allocation issue with pricing strategies and explore the problem of maximizing the revenue of service providers in a network node by optimally allocating a given amount of node resources among multiple service classes. Under a linear pricing strategy, the optimal resource allocation scheme is derived from the revenue target function via a Lagrangian optimization approach. Simulation results demonstrate the effectiveness of the proposed optimal resource allocation scheme for maximizing the revenue of service providers.
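The flavour of such a derivation can be illustrated with a toy model, which is not the paper's exact formulation: maximize a concave per-class revenue sum p_i*ln(1 + c_i) subject to a resource budget sum c_i = C. The Lagrangian stationarity condition p_i/(1 + c_i) = lam yields a water-filling allocation, with the multiplier lam found by bisection.

```python
# Toy Lagrangian resource allocation (illustrative model, not the paper's):
# maximize sum_i p_i*ln(1+c_i) s.t. sum_i c_i = C, c_i >= 0.
# Stationarity p_i/(1+c_i) = lam gives c_i = p_i/lam - 1, clipped at 0.
def allocate(prices, C, iters=60):
    lo, hi = 1e-9, max(prices)          # bracket for the multiplier lam
    alloc = []
    for _ in range(iters):
        lam = (lo + hi) / 2
        alloc = [max(p / lam - 1, 0.0) for p in prices]
        if sum(alloc) > C:
            lo = lam                    # allocated too much: raise lam
        else:
            hi = lam                    # allocated too little: lower lam
    return alloc

# Three service classes with prices 3, 2, 1 sharing 6 units of resource.
alloc = allocate([3.0, 2.0, 1.0], C=6.0)
print([round(a, 2) for a in alloc])     # higher-priced classes get more
```

With these numbers the closed-form multiplier is lam = 2/3, giving the allocation [3.5, 2.0, 0.5]: revenue-weighted water-filling, exactly the structure a Lagrangian treatment of a concave revenue function produces.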

6 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
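The mail-filtering example can be made concrete with a tiny naive Bayes classifier, one standard choice among many, that learns from which messages the user kept or rejected. The messages and labels below are invented for illustration.

```python
# Minimal naive Bayes mail filter: learns word statistics per label and
# classifies by the higher (Laplace-smoothed) log-likelihood.
from collections import Counter
import math

class MailFilter:
    def __init__(self):
        self.counts = {"keep": Counter(), "reject": Counter()}
        self.totals = {"keep": 0, "reject": 0}

    def learn(self, words, label):
        """Record one message's words under the user's keep/reject decision."""
        self.counts[label].update(words)
        self.totals[label] += 1

    def score(self, words, label):
        # log P(label) + sum_w log P(w | label), with add-one smoothing
        n = sum(self.counts[label].values()) + len(self.counts[label]) + 1
        s = math.log(self.totals[label] + 1)
        for w in words:
            s += math.log((self.counts[label][w] + 1) / n)
        return s

    def classify(self, words):
        return max(("keep", "reject"), key=lambda l: self.score(words, l))

f = MailFilter()
f.learn(["meeting", "agenda"], "keep")
f.learn(["free", "winner", "prize"], "reject")
print(f.classify(["winner", "free"]))  # → reject
```

Each new keep/reject decision from the user is another training example, so the filter's rules stay current without anyone hand-writing them, which is exactly the point of the fourth category.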

13,246 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current systems and solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
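Fingerprinting, the technique the survey discusses in most detail, reduces in its simplest form to nearest-neighbour search in signal space: an offline phase stores RSSI vectors at known positions, and the online phase returns the position whose stored fingerprint is closest to the observed one. A minimal sketch, with an invented radio map:

```python
# Nearest-neighbour RSSI fingerprinting (toy radio map, values invented).
import math

# Offline phase: position -> RSSI (dBm) from three access points.
radio_map = {
    (0, 0): [-40, -70, -80],
    (5, 0): [-55, -50, -75],
    (0, 5): [-60, -72, -45],
}

def locate(rssi):
    """Online phase: return the position with the nearest stored fingerprint."""
    return min(radio_map, key=lambda pos: math.dist(radio_map[pos], rssi))

print(locate([-54, -52, -74]))  # → (5, 0), the nearest fingerprint
```

Real systems refine this with k-nearest-neighbour averaging or probabilistic models, but the accuracy/precision trade-offs the survey compares all start from this signal-space distance.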

4,123 citations

01 Jan 2006

3,012 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
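A minimal sketch of the algorithm, with illustrative parameters and data: each input is matched to its best-matching unit (BMU), and the BMU and its map neighbours are pulled toward the input, so nearby map units come to respond to similar inputs.

```python
# Tiny 1-D self-organizing map: competitive BMU search plus a
# neighbourhood-weighted update. All parameters are illustrative.
import random

random.seed(0)
weights = [[random.random(), random.random()] for _ in range(10)]  # 10 units, 2-D inputs

def bmu(x):
    """Index of the best-matching unit (smallest squared distance) for input x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train(data, epochs=50, lr=0.3, radius=2):
    for _ in range(epochs):
        for x in data:
            b = bmu(x)
            for i in range(len(weights)):
                d = abs(i - b)                       # distance on the map grid
                if d <= radius:
                    h = lr * (1 - d / (radius + 1))  # linear neighbourhood falloff
                    weights[i] = [w + h * (v - w) for w, v in zip(weights[i], x)]

train([[0.1, 0.1], [0.9, 0.9]])
print(bmu([0.1, 0.1]), bmu([0.9, 0.9]))  # the two clusters map to different units
```

Production SOMs decay the learning rate and radius over time and use a 2-D grid with a Gaussian neighbourhood, but the BMU-plus-neighbourhood update above is the core of the algorithm.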

2,933 citations