scispace - formally typeset
Author

Timo Hämäläinen

Other affiliations: Dalian Medical University, Nokia, Dublin Institute of Technology
Bio: Timo Hämäläinen is an academic researcher from the University of Jyväskylä. The author has contributed to research in topics: Quality of service & Encoder. The author has an h-index of 38, co-authored 560 publications receiving 7,648 citations. Previous affiliations of Timo Hämäläinen include Dalian Medical University & Nokia.


Papers
Proceedings ArticleDOI
01 Oct 2019
TL;DR: This work presents an instruction extension flow for a RISC-V processor core modeled in IP-XACT and proposes extending the workflow to cover adding dedicated hardware in IP-XACT for improved re-usability and design consistency.
Abstract: Short time-to-market and cost considerations of hardware design promote reuse of ever more complex intellectual property, even up to processors. In processor design, the instruction set architecture (ISA) selection is a major design decision driven largely by application requirements. Extendable ISAs enable application-specific adjustments and improved performance at the cost of more complex design. Adding a custom instruction introduces a choice of either utilizing existing hardware or adding new dedicated hardware. This work presents an instruction extension flow for a RISC-V processor core modeled in IP-XACT. We demonstrate the workflow by adding three bit manipulation instructions, “popcnt”, “parity” and “bswap”, to the instruction set that executes on an extended processor platform and evaluate their performance in simulation. The simulated instruction count and performance are used to evaluate the benefit of adding dedicated hardware. The effort analysis of the design flow shows approximately 110 minutes of work for adding a new instruction to the RISC-V core. This suggests a straightforward and easy-to-follow approach that can be extended to other instructions as well. In addition, we propose extending the workflow to cover adding dedicated hardware in IP-XACT for improved re-usability and design consistency.
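The three bit-manipulation instructions added in the flow can be sketched in plain software to show what the dedicated hardware would compute; this is an illustrative model of the instruction semantics, not the paper's RTL or toolflow:

```python
def popcnt(x: int, width: int = 32) -> int:
    """Count the number of set bits in a width-bit word."""
    return bin(x & ((1 << width) - 1)).count("1")

def parity(x: int, width: int = 32) -> int:
    """Return 1 if the word has an odd number of set bits, else 0."""
    return popcnt(x, width) & 1

def bswap(x: int, width: int = 32) -> int:
    """Reverse the byte order of a width-bit word."""
    nbytes = width // 8
    masked = x & ((1 << width) - 1)
    return int.from_bytes(masked.to_bytes(nbytes, "big"), "little")
```

In hardware, popcnt and parity reduce to an adder/XOR tree and bswap to pure wiring, which is why such instructions are attractive candidates for dedicated logic.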

4 citations

Proceedings ArticleDOI
02 Dec 2013
TL;DR: A novel tool for file dependency and change analysis and visualization, implemented in the Kactus2 design environment (GPL2), capable of sorting source files into IP-XACT file sets, extracting and visualizing file dependencies, and keeping track of changed files.
Abstract: Large-scale HW and SW projects contain thousands of source files, which requires proper file management in order to keep track of changes and keep the code in compilable state. Different parts of the system depend on each other, and even a small change in a certain part of the code may break the other parts. Dependency analysis can be used to prevent such problems by visualizing the SW structure so that dependencies are easily seen by the developer. This paper presents a novel tool for file dependency and change analysis and visualization that was implemented into our IP-XACT based Kactus2 design environment (GPL2). The tool is capable of sorting source files into IP-XACT file sets, extracting and visualizing file dependencies, and keeping track of changed files. It also offers the ability to create manual dependencies, e.g., between source code and documentation. The dependency and change analysis of 1k source code files containing 140k lines of code is performed in less than two minutes.
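The core of such a dependency scan can be sketched in a few lines; the file patterns and the `#include` regex below are illustrative assumptions for C sources, not Kactus2's actual implementation:

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches local includes of the form: #include "header.h"
INCLUDE_RE = re.compile(r'#\s*include\s*"([^"]+)"')

def extract_dependencies(root: str) -> dict[str, set[str]]:
    """Map each C source/header file under `root` to the local
    headers it #includes (one edge set per file)."""
    deps: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.[ch]"):
        text = path.read_text(errors="ignore")
        for header in INCLUDE_RE.findall(text):
            deps[path.name].add(header)
    return dict(deps)
```

A change tracker can then hash each file and re-run the scan only for files whose hash differs, which is how a 140k-line code base stays analyzable in minutes.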

4 citations

Journal Article
TL;DR: This work argues in favor of the inter-cell approach and applies a degradation detection method that is able to detect a sleeping cell that could be difficult to observe using traditional intra-cell methods.
Abstract: Fault management is a crucial part of cellular network management systems. The status of the base stations is usually monitored by well-defined key performance indicators (KPIs). The approaches for cell degradation detection are based on either intra-cell or inter-cell analysis of the KPIs. In intra-cell analysis, KPI profiles are built based on their local history data whereas in inter-cell analysis, KPIs of one cell are compared with the corresponding KPIs of the other cells. In this work, we argue in favor of the inter-cell approach and apply a degradation detection method that is able to detect a sleeping cell that could be difficult to observe using traditional intra-cell methods. We demonstrate its use for detecting emulated degradations among performance data recorded from a live LTE network. The method can be integrated in current systems because it can operate using existing KPIs without any major modification to the network infrastructure.
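A minimal sketch of the inter-cell idea, assuming a single KPI value per cell and a simple z-score test against the peer-cell distribution; the paper's actual detection method is more elaborate:

```python
import statistics

def degraded_cells(kpi_by_cell: dict[str, float],
                   threshold: float = 3.0) -> list[str]:
    """Inter-cell analysis sketch: flag cells whose KPI deviates
    from the peer-cell mean by more than `threshold` standard
    deviations. A 'sleeping' cell shows up as an outlier against
    its peers even if its own history looks flat."""
    values = list(kpi_by_cell.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [cell for cell, v in kpi_by_cell.items()
            if stdev > 0 and abs(v - mean) / stdev > threshold]
```

Because this operates only on KPIs the network already reports, it matches the paper's point that inter-cell detection needs no infrastructure changes.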

4 citations

Journal Article
TL;DR: An algorithm that provides Class-of-Service-based differentiated access to server clusters, and offers a better playground for QoS mechanisms in client-server environments, is described.
Abstract: The swift growth of the Internet has boosted the use of Web-based services and in some practical cases has led to overwhelming request bursts to servers. Relational database queries, image storage/retrieval and other new types of application transactions have become increasingly popular. Their coexistence in commercial parallel and distributed systems has generated some uniquely new loading problems. For example, the constant increase of the request rate finally leads to a processing power requirement exceeding that of the accessed server. As a consequence, the response times increase and some portion of the requests are lost. Clustering of servers to meet the growing demand for server processing capacity, especially in web-based service supply, has created the need for intelligent switching at front-end devices. As a consequence of clustering, multilayer switching schemes have been developed to enable optimum loading of the individual servers in a cluster. In this paper, we formulate the load balancing problem taking QoS into consideration and introduce a QoS-aware load balancing algorithm (QoS-LB). The performance of the algorithm is simulated and results indicating the load balancing capability of the algorithm are presented. The overall idea of this paper is to describe an algorithm that actually provides Class of Service based differentiated access to server clusters, and offers a better playground for QoS mechanisms in client-server environments. The engineering task of offering QoS guarantees with such a differentiation tool is out of the scope of this paper.

4 citations

Journal Article
TL;DR: The Flow-based Fast Handover for Mobile IPv6 (FFHMIPv6) is found to be an efficient and simple way to speed up the downstream handover delay.
Abstract: In this paper we present the Flow-based Fast Handover for Mobile IPv6 (FFHMIPv6) to speed up the handover process of the standardized Mobile IPv6 (MIPv6) protocol. The location registration processes of MIPv6 cause delays dependent on the distance to the correspondent nodes, so the delays can be considerable. FFHMIPv6 uses flow state information and IPv6-in-IPv6 encapsulation to enable the reception of flows simultaneously with the location update processes. We analyze the method mainly with Network Simulator 2 simulations, but a glimpse of Mobile IPv6 for Linux experiments is also presented. FFHMIPv6 is found to be an efficient and simple way to reduce the downstream handover delay. Some ideas about the upstream traffic are also presented.
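A back-of-envelope model of why tunneling during registration helps, assuming downstream packets are dropped for the whole registration delay under plain MIPv6 but only for a short tunnel-setup gap under flow-based tunneling; the function name, default, and numbers are illustrative, not from the paper's simulations:

```python
def downstream_loss(pkt_rate_pps: float, reg_delay_ms: float,
                    tunneled: bool, tunnel_setup_ms: float = 5.0) -> float:
    """Estimate downstream packets lost during a handover.
    Plain MIPv6: packets are lost for the full registration delay,
    which grows with the distance to the correspondent node.
    Flow-based tunneling: only the tunnel-setup gap causes loss,
    since flows are forwarded while registration completes."""
    gap_ms = tunnel_setup_ms if tunneled else reg_delay_ms
    return pkt_rate_pps * gap_ms / 1000.0
```

The model captures the abstract's key claim: the loss under plain MIPv6 scales with the correspondent-node round trip, while the tunneled gap is fixed and local.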

4 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
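The mail-filtering example in the fourth category can be made concrete with a toy per-user filter that learns from the messages a user rejects; this is a minimal word-count sketch, not a production spam classifier:

```python
from collections import Counter

class LearnedMailFilter:
    """Toy per-user filter: learns which words occur in messages the
    user rejects versus keeps, and blocks future messages whose words
    are dominated by the rejected vocabulary."""

    def __init__(self) -> None:
        self.spam_words: Counter = Counter()
        self.ham_words: Counter = Counter()

    def train(self, message: str, rejected: bool) -> None:
        """Update word counts from one user decision."""
        target = self.spam_words if rejected else self.ham_words
        target.update(message.lower().split())

    def is_spam(self, message: str) -> bool:
        """Score a new message against both learned vocabularies."""
        words = message.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score
```

This is exactly the customization argument above: the rules are never written by hand; they emerge from each user's own accept/reject behavior.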

13,246 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current systems and solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
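Location fingerprinting, the scheme the survey discusses in most detail, reduces in its simplest form to nearest-neighbor matching of received-signal-strength (RSS) vectors against a surveyed radio map; the data layout here is an assumption for illustration:

```python
import math

def locate(fingerprints: dict[tuple[float, float], list[float]],
           observed_rss: list[float]) -> tuple[float, float]:
    """Fingerprinting sketch: `fingerprints` maps each surveyed (x, y)
    position to the RSS vector (one entry per access point) recorded
    there. Return the surveyed position whose stored vector is nearest
    (Euclidean) to the currently observed RSS vector."""
    return min(fingerprints,
               key=lambda pos: math.dist(fingerprints[pos], observed_rss))
```

Accuracy and precision in the survey's comparison then depend on the radio-map density and on averaging over the k nearest fingerprints rather than taking only the single best match.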

4,123 citations

01 Jan 2006

3,012 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
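A minimal one-dimensional self-organizing map can be sketched as follows; the unit count, learning-rate and radius decay schedules are illustrative choices, not Kohonen's exact formulation:

```python
import math
import random

def train_som(data, n_units=10, dim=2, epochs=100,
              lr=0.5, radius=2.0, seed=0):
    """Minimal 1-D self-organizing map: for each sample, find the
    best-matching unit (BMU) and pull it and its grid neighbors
    toward the sample, with learning rate and neighborhood radius
    shrinking over the epochs."""
    rng = random.Random(seed)
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        cur_lr = lr * (1 - frac)                  # decaying step size
        cur_rad = max(radius * (1 - frac), 0.5)   # shrinking neighborhood
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: math.dist(weights[i], x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * cur_rad ** 2))
                weights[i] = [w + cur_lr * h * (xj - w)
                              for w, xj in zip(weights[i], x)]
    return weights
```

The neighborhood term `h` is what distinguishes a SOM from plain k-means: neighboring units on the 1-D grid move together, so the trained map preserves the topology of the input data.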

2,933 citations