Author

Timo Hämäläinen

Other affiliations: Dalian Medical University, Nokia, Dublin Institute of Technology
Bio: Timo Hämäläinen is an academic researcher from the University of Jyväskylä. The author has contributed to research in the topics of Quality of service and Encoder, has an h-index of 38, and has co-authored 560 publications receiving 7,648 citations. Previous affiliations of Timo Hämäläinen include Dalian Medical University and Nokia.


Papers
Proceedings ArticleDOI
01 Dec 2017
TL;DR: Kvazzup is the first HEVC-based end-to-end video call system with a user-friendly Graphical User Interface for call management and it validates the feasibility of HEVC in different types of video calls.
Abstract: This paper introduces an open-source HEVC video call application called Kvazzup. This academic proposal is the first HEVC-based end-to-end video call system with a user-friendly Graphical User Interface for call management. Kvazzup is built on the Qt framework and makes use of four open-source tools: Kvazaar for HEVC encoding, OpenHEVC for HEVC decoding, the Opus codec for audio coding, and Live555 for managing RTP/RTCP traffic. In our experiments, Kvazzup is prototyped with low-complexity VGA and high-quality 720p video calls between two desktops. On an Intel 4-core i5 processor, the VGA call accounts for 17% of the total CPU time. On average, it requires a bit rate of 0.31 Mbit/s, of which 0.26 Mbit/s is taken by video and 0.05 Mbit/s by audio. In the 720p call, the respective figures are 46%, 1.13 Mbit/s, 1.08 Mbit/s, and 0.05 Mbit/s. These test cases also validate the feasibility of HEVC in different types of video calls. HEVC coding is shown to account for around 34% of the Kvazzup processing time in the VGA call and 45% in the 720p call.
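The per-call bit rates quoted above decompose additively into a video and an audio stream. A minimal sketch reproducing the reported totals from the per-stream figures (the helper name `total_bitrate` is ours, not part of Kvazzup):

```python
# Sketch: the total call bit rate is the sum of the video and audio streams.
# The component figures (Mbit/s) come from the abstract above; the function
# name is illustrative only.

def total_bitrate(video_mbps: float, audio_mbps: float) -> float:
    """Total call bit rate as the sum of the video and audio components."""
    return video_mbps + audio_mbps

vga = total_bitrate(0.26, 0.05)   # VGA call -> 0.31 Mbit/s
hd = total_bitrate(1.08, 0.05)    # 720p call -> 1.13 Mbit/s
print(f"VGA: {vga:.2f} Mbit/s, 720p: {hd:.2f} Mbit/s")
```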

2 citations

Proceedings ArticleDOI
11 Jul 2005
TL;DR: Two distinct methods for multicast admission control in a differentiated services network are proposed and compared against each other and against the situation without any admission control.
Abstract: Multicast admission control in a differentiated services network is an important but lightly researched subject. We propose two distinct admission control methods. The methods reject new multicast join requests that would otherwise decrease the quality experienced by the existing receivers. Edge nodes filter join requests and generate new ones. The proposed methods are developed as an extension to the DSMCast protocol but could also be adapted to other protocols. In this paper, the methods are compared against each other and against the situation without any admission control.
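The rejection rule described above can be sketched as a capacity check at an edge node: a join is admitted only if serving the new receiver would not degrade the existing ones. This is an illustrative guess at the idea, not the actual DSMCast extension; the names and the single-link capacity model are our assumptions:

```python
# Hypothetical edge-node filter: admit a multicast join request only if the
# extra load still fits within the link capacity reserved for the class, so
# existing receivers keep their quality. Single-link model assumed.

def admit_join(current_load_mbps: float, request_mbps: float,
               link_capacity_mbps: float) -> bool:
    """Return True if the new receiver can be served without overload."""
    return current_load_mbps + request_mbps <= link_capacity_mbps

print(admit_join(80.0, 10.0, 100.0))  # True: request fits, admitted
print(admit_join(95.0, 10.0, 100.0))  # False: would overload, rejected
```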

2 citations

Proceedings ArticleDOI
30 Dec 2010
TL;DR: The main results of research on an enhanced modulation technique, Orthogonal Time-Frequency Division Multiplexing (OFTDM), are presented.
Abstract: The use of well-localized bases considerably improves the efficiency and robustness against intercarrier interference (ICI) of existing wireless digital communication systems based on Orthogonal Frequency Division Multiplexing (OFDM). This paper considers an efficient algorithm for synthesizing orthogonal, well-localized, finite-dimensional generalized Weyl-Heisenberg (WH) bases. Optimal basis parameters are proposed, and computationally efficient modulation and demodulation algorithms for signals constructed from WH bases are described. The presented modeling results confirm the bases' good localization characteristics and robustness against Doppler shift. The article summarizes the main results of research on an enhanced modulation technique, Orthogonal Time-Frequency Division Multiplexing (OFTDM).

2 citations

Proceedings ArticleDOI
01 Jun 2022
TL;DR: This paper determines the sum squared position error bound (SPEB) as the localization accuracy metric for the presented localization-communication system and proposes an iterative algorithm that obtains a suboptimal solution by utilizing Lagrange duality and penalty-based optimization methods.
Abstract: Joint localization and communication systems have drawn significant attention due to their high resource utilization. In this paper, we consider a reconfigurable intelligent surface (RIS)-aided simultaneous localization and communication system. We first determine the sum squared position error bound (SPEB) as the localization accuracy metric for the presented localization-communication system. Then, a joint RIS discrete phase shifts design and subcarrier assignment problem is formulated to minimize the SPEB while guaranteeing each user's achievable data rate requirement. For the presented non-convex mixed-integer problem, we propose an iterative algorithm that obtains a suboptimal solution by utilizing Lagrange duality and penalty-based optimization methods. Simulation results are provided to validate the performance of the proposed algorithm.

2 citations

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This paper describes how the proposed framework for eye tracking data collection can be used in practice with videos up to 4K resolution and the data collected during a sample experiment are made publicly available.
Abstract: Eye tracking is currently the primary method for collecting training data for neural networks in Human Visual System modelling. Our recommendation is to collect eye tracking data from videos with eye tracking glasses, which are more affordable and applicable to diverse test conditions than the conventionally used screen-based eye trackers. Eye tracking glasses are prone to moving during gaze data collection, but our experiments show that the observed displacement error accumulates fairly linearly and can be compensated automatically by the proposed framework. This paper describes how our framework can be used in practice with videos up to 4K resolution. The proposed framework and the data collected during our sample experiment are made publicly available.
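Since the abstract reports that the displacement error accumulates fairly linearly, the compensation can be sketched as fitting a per-axis linear drift from calibration samples and subtracting it. The names and the one-dimensional setup are illustrative, not the authors' actual framework:

```python
# Sketch: correct a linearly accumulating gaze displacement (e.g. from
# glasses slippage) by least-squares fitting error-vs-time and subtracting
# the fitted drift from later gaze samples. One axis shown for brevity.

def fit_linear_drift(times, errors):
    """Least-squares slope and intercept of displacement error vs. time."""
    n = len(times)
    mt = sum(times) / n
    me = sum(errors) / n
    slope = (sum((t - mt) * (e - me) for t, e in zip(times, errors))
             / sum((t - mt) ** 2 for t in times))
    return slope, me - slope * mt

def compensate(t, gaze, drift):
    """Subtract the predicted drift at time t from a raw gaze coordinate."""
    slope, intercept = drift
    return gaze - (slope * t + intercept)

# Calibration points showing 0.2 px/s of drift; correct a sample at t = 15 s.
drift = fit_linear_drift([0, 10, 20], [0.0, 2.0, 4.0])
print(compensate(15, 100.0, drift))  # 97.0
```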

2 citations


Cited by
Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at the time: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current systems and solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
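One of the triangulation-style schemes such surveys classify is lateration: estimating a position from distance measurements to known anchors. A minimal 2D sketch, where the anchor positions and the circle-linearization approach are our own illustration rather than anything from the survey:

```python
import math

# Illustrative 2D trilateration: recover (x, y) from distances to three
# known anchors by subtracting the first circle equation from the other
# two, which yields two linear equations in x and y.

def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    # Circle 2 minus circle 1, and circle 3 minus circle 1.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve the 2x2 linear system by Cramer's rule.
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]  # noise-free ranges
print(trilaterate(anchors, dists))  # approximately (3.0, 4.0)
```

With noisy range measurements the linear system becomes overdetermined and is typically solved by least squares instead.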

4,123 citations

01 Jan 2006

3,012 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.

2,933 citations