
Papers by Timo Hämäläinen published in 2013


Journal ArticleDOI
TL;DR: A new uncertain optimal control model based on the Hurwicz criterion is introduced; using the method of dynamic programming, an equation of optimality is derived to solve the proposed model.
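
For background, the classical Hurwicz criterion blends the best- and worst-case outcomes of an action through an optimism parameter α; the paper adapts this idea to uncertain optimal control, and the details of its optimality equation are not reproduced here:

```latex
% Classical Hurwicz criterion with optimism parameter \alpha (background only).
H(a) = \alpha \max_{\theta} u(a, \theta) + (1 - \alpha) \min_{\theta} u(a, \theta),
\qquad \alpha \in [0, 1]
```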

29 citations


Proceedings ArticleDOI
14 Nov 2013
TL;DR: A new real-time unsupervised NIDS detects new and complex attacks within normal and encrypted communications; its second engine conducts a deeper analysis, correlating the traffic and behaviour of bots during DDoS attacks to find the botmaster.
Abstract: Previously, Network Intrusion Detection Systems (NIDSs) detected intrusions by comparing the behaviour of the network to pre-defined rules or pre-observed network traffic, which was expensive in terms of both cost and time. Unsupervised machine-learning techniques have overcome these issues and can detect unknown and complex attacks within normal or encrypted communication without any prior knowledge. An NIDS monitors bytes, packets and network flows to detect intrusions. It is nearly impossible to monitor the payload of all packets in a high-speed network; on the other hand, the content of packets does not carry sufficient information to detect a complex attack. Since the rate of attacks within encrypted communication is increasing and the content of encrypted packets is not accessible to an NIDS, monitoring network flows has been suggested instead. As most network intrusions spread through the network very quickly, in this paper we propose a new real-time unsupervised NIDS for detecting new and complex attacks within normal and encrypted communications. To operate in real time, the proposed model captures live network traffic from different sensors and analyses specific metrics — the numbers of bytes, packets and network flows, and their timing, both explicit and implicit — at different resolutions. The NIDS flags a time slot as anomalous if any of those metrics passes its threshold, and sends the time slot to the first engine. The first engine clusters different layers and dimensions of the network's behaviour and correlates the outliers to separate intrusions from normal traffic; it is aimed at detecting attacks that produce a huge amount of network traffic (e.g. DoS, DDoS, scanning). Analysing the statistics of network flows increases the feasibility of detecting intrusions within encrypted communications. The second engine conducts a deeper analysis, correlating the traffic and behaviour of bots (the current attackers) during DDoS attacks to find the botmaster.
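
As a minimal sketch of the flagging stage described above (slot length, metric names and thresholds are hypothetical; flagged slots would then be handed to the first, clustering engine):

```python
# Hypothetical per-slot metric thresholds; the paper does not give exact values.
from collections import defaultdict

THRESHOLDS = {"bytes": 1e8, "packets": 1e5, "flows": 1e4}

def flag_slots(records, slot_seconds=60):
    """Aggregate traffic metrics per time slot and flag slots exceeding any threshold.

    records: iterable of (timestamp, n_bytes, n_packets), one entry per network flow.
    """
    slots = defaultdict(lambda: {"bytes": 0, "packets": 0, "flows": 0})
    for ts, n_bytes, n_packets in records:
        slot = int(ts // slot_seconds)
        slots[slot]["bytes"] += n_bytes
        slots[slot]["packets"] += n_packets
        slots[slot]["flows"] += 1          # one record = one observed flow
    return [slot for slot, m in sorted(slots.items())
            if any(m[k] > THRESHOLDS[k] for k in THRESHOLDS)]
```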

23 citations


Proceedings ArticleDOI
02 Dec 2013
TL;DR: This paper presents an IP-XACT based design flow that reduces design time to one third compared to the conventional FPGA flow, doubles the number of automated design phases, and completely avoids manual, error-prone data transfer between HW and SW tools.
Abstract: Typical MPSoC FPGA product design is a rigid waterfall process proceeding one way from HW to SW design. Any change to HW triggers re-creation of the SW project from the beginning. When several product variations or speculative development-time exploration are required, the disk easily bloats with hundreds of Board Support Package (BSP), configuration and SW project files. In this paper, we present an IP-XACT based design flow that solves these problems through agile re-use of HW and SW components, automation, and a single golden reference source for information. We also present new extensions to IP-XACT, since the standard lacks SW-related features. Three use cases demonstrate how the BSP is changed, an application is moved to another processor, and a function is moved from a SW implementation to a HW accelerator. Our flow reduces design time to one third compared to the conventional FPGA flow, doubles the number of automated design phases, and completely avoids manual, error-prone data transfer between HW and SW tools.

12 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: The problem of malware detection and classification is solved by applying a data-mining approach that relies on supervised machine learning; a game-theoretic approach is applied to combine the classifiers.
Abstract: In the modern world, the rapid growth of malicious software production has become one of the most significant threats to network security. Unfortunately, widespread signature-based anti-malware strategies cannot detect previously unseen malware, nor deal with the code obfuscation techniques employed by malware designers. In our study, the problem of malware detection and classification is solved by applying a data-mining approach that relies on supervised machine learning. Executable files are represented in the form of byte and opcode sequences, and n-gram models are employed to extract essential features from these sequences. The resulting feature vectors are classified with support vector classifiers integrated with a genetic algorithm that selects the most essential features, and a game-theoretic approach is applied to combine the classifiers. The proposed algorithm, ZSGSVM, is tested using byte and opcode sequences obtained from a set of executable files of benign software and malware. As a result, almost all malicious files are detected while the number of false alarms remains very low.
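
A minimal sketch of the byte n-gram and support-vector stage, assuming scikit-learn; the genetic feature selection and the game-theoretic classifier combination that make up ZSGSVM proper are omitted:

```python
# Sketch only: byte n-grams + linear SVM, without ZSGSVM's genetic feature
# selection and game-theoretic classifier combination.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def bytes_to_tokens(blob: bytes) -> str:
    """Represent an executable as space-separated hex byte tokens."""
    return " ".join(f"{b:02x}" for b in blob)

vectorizer = CountVectorizer(ngram_range=(2, 3))  # byte 2- and 3-grams
clf = LinearSVC()

def train(executables, labels):
    """executables: list of raw file contents; labels: 0 = benign, 1 = malware."""
    X = vectorizer.fit_transform(bytes_to_tokens(b) for b in executables)
    clf.fit(X, labels)

def is_malware(blob: bytes) -> bool:
    return bool(clf.predict(vectorizer.transform([bytes_to_tokens(blob)]))[0])
```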

12 citations


Journal ArticleDOI
TL;DR: Results show that SA offers a 4–6 orders of magnitude reduction in optimization time compared to brute force while achieving high-quality solutions, along with 7 guidelines for obtaining a good trade-off between solution quality and the algorithm's execution time.
Abstract: A Multiprocessor System-on-Chip (MPSoC) may contain hundreds of processing elements (PEs) and thousands of tasks, but design productivity is lagging behind the evolution of HW platforms. One problem is application task mapping, which tries to find a placement of tasks onto PEs that optimizes several criteria such as application runtime, inter-task communication, memory usage, energy consumption and real-time constraints, as well as area in case PE selection or buffer sizing is combined with the mapping procedure. Among optimization algorithms for task mapping, we focus in this paper on Simulated Annealing (SA) heuristics. We present a literature survey and 5 general recommendations for reporting heuristics that should allow disciplined comparisons and reproduction by other researchers. Most importantly, we present our findings about SA parameter selection and 7 guidelines for obtaining a good trade-off between solution quality and the algorithm's execution time. Notably, SA is compared against the global optimum. Thorough experiments were performed with 2–8 PEs, 11–32 tasks, 10 graphs per system, and 1000 independent runs, totaling over 500 CPU days of computation. Results show that SA offers a 4–6 orders of magnitude reduction in optimization time compared to brute force while achieving high-quality solutions. In fact, the globally optimal solution was achieved with a 1.6–90% probability when the problem size is around 1e9–4e9 possibilities. There is an approximately 90% probability of finding a solution that is at most 18% worse than the optimum.
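
A generic simulated-annealing task-mapping sketch under assumed move and cooling choices (the cost function and the paper's surveyed parameter guidelines are not reproduced here):

```python
# Generic SA for task mapping: single-task move, geometric cooling.
import math
import random

def anneal(n_tasks, n_pes, cost, T0=1.0, alpha=0.95, moves_per_T=100, T_min=1e-3):
    """Map n_tasks onto n_pes, minimizing the user-supplied cost(mapping)."""
    mapping = [random.randrange(n_pes) for _ in range(n_tasks)]
    cur_cost = cost(mapping)
    best, best_cost = mapping[:], cur_cost
    T = T0
    while T > T_min:
        for _ in range(moves_per_T):
            task = random.randrange(n_tasks)           # move one random task
            old_pe = mapping[task]
            mapping[task] = random.randrange(n_pes)
            new_cost = cost(mapping)
            # Accept improvements always, worsenings with Boltzmann probability.
            if new_cost <= cur_cost or random.random() < math.exp((cur_cost - new_cost) / T):
                cur_cost = new_cost
                if new_cost < best_cost:
                    best, best_cost = mapping[:], new_cost
            else:
                mapping[task] = old_pe                 # reject the move
        T *= alpha                                     # geometric cooling
    return best, best_cost
```

In practice the cost function would combine the criteria listed above, e.g. estimated application runtime plus weighted inter-task communication.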

9 citations


Journal ArticleDOI
TL;DR: This work presents a freely available toolset for detailed performance analysis at the level of the network instead of at the processing elements, demonstrated on an MPEG-4 encoder utilizing the NoC paradigm.

8 citations


Book ChapterDOI
28 Aug 2013
TL;DR: The proposed method detects HTTP intrusions in continuously updated web applications and does not require an attack-free set of HTTP requests to build the normal user-behaviour model.
Abstract: Nowadays, HTTP servers and applications are among the most popular targets of network attacks. In this research, we consider an algorithm for HTTP intrusion detection based on simple clustering algorithms and advanced processing of HTTP requests, which allows all queries to be analysed at once without separating them by resource. The proposed method detects HTTP intrusions in continuously updated web applications and does not require an attack-free set of HTTP requests to build the normal user-behaviour model. The algorithm is tested using logs acquired from a large real-life web service; as a result, all attacks in these logs are detected while the number of false alarms remains zero.
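
A minimal clustering-based screening sketch, assuming character n-gram features and a distance-percentile cut-off (both are illustrative choices, not the paper's exact request processing):

```python
# Cluster raw HTTP request lines; requests far from every cluster centre are
# reported as suspicious. Feature extraction and cut-off are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def suspicious_requests(requests, n_clusters=10, percentile=99):
    X = TfidfVectorizer(analyzer="char", ngram_range=(1, 3)).fit_transform(requests)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    dist = km.transform(X).min(axis=1)            # distance to nearest centroid
    cutoff = np.percentile(dist, percentile)      # assumed outlier threshold
    return [r for r, d in zip(requests, dist) if d > cutoff]
```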

8 citations


Proceedings ArticleDOI
23 Mar 2013
TL;DR: A solution is proposed based on the analysis of spatial-temporal trajectories with the help of several classification algorithms, and the simulation results show that next user locations can be predicted with a high accuracy rate.
Abstract: Predicting the next user location is one of the most interesting and important mobile data mining tasks. Potential applications of the ability to predict a user's moves range from improving the relevance of location-based recommendations and mobile advertising to network traffic planning and coordination support for disaster relief. In this research, a solution is proposed based on the analysis of spatial-temporal trajectories with the help of several classification algorithms. The algorithm is tested using real data collected from the mobile phones of several unrelated users over periods of time varying from a few weeks to two years. The simulation results show that next user locations can be predicted with a high accuracy rate.
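
A minimal next-location classifier sketch with assumed features (hour of day, weekday, current cell) and scikit-learn's random forest; the paper evaluates several classification algorithms on such trajectories:

```python
# Predict the next visited cell from simple temporal/spatial features (assumed).
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100)

def train(trajectory):
    """trajectory: time-ordered list of (hour, weekday, cell_id) visits."""
    X = [list(trajectory[i]) for i in range(len(trajectory) - 1)]
    y = [trajectory[i + 1][2] for i in range(len(trajectory) - 1)]  # next cell
    clf.fit(X, y)

def predict_next(hour, weekday, cell_id):
    return clf.predict([[hour, weekday, cell_id]])[0]
```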

8 citations


Proceedings ArticleDOI
23 Mar 2013
TL;DR: A new Real-Time Unsupervised Network Intrusion Detection System (RTUNIDS) is proposed, which monitors network flows in two windows of different sizes and detects network attacks by correlating outliers from multiple clusters.
Abstract: The most traditional technique for Network Intrusion Detection Systems (NIDSs) is misuse detection, which only detects well-known attacks by matching the current behaviour of the network against pre-defined attack signatures. Providing attack signatures is costly and time-consuming, and with the explosively growing number of zero-day attacks, misuse detection is not an efficient solution. Other techniques applied in NIDSs are supervised and semi-supervised anomaly detection, which can detect novel attacks by comparing the current behaviour of the network to a training sample; however, producing labeled or attack-free datasets for training the engine is difficult. Current NIDS solutions monitor bytes, packet payloads or network flows to detect intrusions. Today it is difficult to monitor packet payloads in high-speed networks (1–10 Gbps), and recent network attacks are becoming more complex, so analysing only packet payloads does not give the detection engine enough information. In this paper we propose a new Real-Time Unsupervised Network Intrusion Detection System (RTUNIDS) which monitors network flows in two windows of different sizes and detects network attacks by correlating outliers from multiple clusters. The proposed solution can detect different types of intrusion in real time, such as DoS, DDoS, scanning, worm distribution and any other network attack that produces a huge amount of network traffic; meanwhile, it identifies the botmaster if the detected attack was launched by bots.
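
A sketch of the two-window idea under assumed window sizes and rate factor: a short window that reacts quickly is compared against a long window serving as the baseline:

```python
# Two sliding windows over per-second flow counts (sizes and factor assumed).
from collections import deque

class TwoWindowMonitor:
    def __init__(self, short_len=60, long_len=3600, factor=5.0):
        self.short = deque(maxlen=short_len)   # e.g. last minute
        self.long = deque(maxlen=long_len)     # e.g. last hour (baseline)
        self.factor = factor

    def update(self, flows_this_second):
        """Feed one per-second flow count; returns True if the slot looks anomalous."""
        self.short.append(flows_this_second)
        self.long.append(flows_this_second)
        short_rate = sum(self.short) / len(self.short)
        long_rate = sum(self.long) / len(self.long)
        return short_rate > self.factor * max(long_rate, 1.0)
```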

6 citations


Proceedings ArticleDOI
02 Dec 2013
TL;DR: Recent development of the workflow and tools for the Programmable Control and Communication Platform (PCCP) is presented; PCCP can be upgraded in a controlled way during its life cycle, which extends to 2026.
Abstract: Industrial machines like cranes have very long lifetimes and high safety and reliability requirements. Embedded systems, including HW boards and SW stacks, are used to control the machines. The challenge is how to manage several different HW/SW combinations and versions over the life cycle. This paper presents recent development of the workflow and tools for our Programmable Control and Communication Platform (PCCP). The cornerstone is the Yocto Project, which handles building the Linux-based control SW and includes a new layer for PCCP. Setting up the new workflow took 180 person-hours, excluding initial studying and training. After deployment, typical changes in HW and SW configurations take only hours, and the quality of the process is significantly improved. Based on this work, PCCP can be upgraded in a controlled way during its life cycle, which extends to 2026.

5 citations


Proceedings ArticleDOI
10 Sep 2013
TL;DR: Schedule slippages were reduced, although both teachers and students still underestimate the required time and effort; 15 student projects that also used the FPGA platform are introduced.
Abstract: This paper presents our experiences in using FPGAs to teach System-on-Chip design at Tampere University of Technology. We carried out a major reform of our courses and, most notably, chose a common HW platform which is used in 11 courses. Making most exercises mandatory and awarding bonus points for good work has proved effective. In order to manage schedules, larger projects have been partitioned by the teachers into smaller tasks, and pairwork is allowed. Automated testbenches, reuse and startup examples were very useful. As a result, we observed increased motivation among students and better learning outcomes. Schedule slippages were reduced, although both teachers and students still underestimate the required time and effort. Moreover, we introduce 15 student projects where the FPGA platform was also used. Some of the most innovative topics, such as games, were suggested by the students themselves. In the future, more effort is needed in finalizing the project works for easier reuse and in setting up a common repository.

Proceedings ArticleDOI
25 Mar 2013
TL;DR: A pair of algorithms derived with the aid of BSS schemes is presented in this paper; they are important mechanisms due to their simplicity and applicability to High-Speed Packet Access (HSPA) based systems.
Abstract: Employing a Blind Source Separation (BSS) algorithm is one of the mechanisms used for extracting unobserved signals from observed mixtures in signal processing. Direct-Sequence Code Division Multiple Access (DS-CDMA) is a mature and prominent spreading-code-assisted, spread-spectrum multiple-access communication technique. The aim is to mitigate deteriorative effects within the DS-CDMA air interface by removing the jamming signal. A pair of algorithms derived with the aid of BSS schemes is presented in this paper. In the short-code model, the time-correlation properties of the channel are exploited for BSS. Two energy functions of the received signal are used with an iterative fixed-point rule to determine the filter coefficients. The methods are tested in a downlink channel, and Equal Gain Combining (EGC) is used to treat the channel parameter values. The algorithms are important mechanisms due to their simplicity and applicability to High-Speed Packet Access (HSPA) based systems.
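
For orientation, a widely used fixed-point BSS update (of the FastICA type) is shown below; the paper's two energy functions differ in detail and are not reproduced here:

```latex
% Generic FastICA-style fixed-point iteration for an unmixing vector w,
% with whitened observations x and nonlinearity g.
w^{+} = \mathbb{E}\{\, x \, g(w^{\mathsf{T}} x) \,\} - \mathbb{E}\{\, g'(w^{\mathsf{T}} x) \,\}\, w,
\qquad w \leftarrow w^{+} / \lVert w^{+} \rVert
```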

Proceedings ArticleDOI
02 Dec 2013
TL;DR: A novel tool for file dependency and change analysis and visualization, implemented in the Kactus2 design environment (GPL2), is capable of sorting source files into IP-XACT file sets, extracting and visualizing file dependencies, and keeping track of changed files.
Abstract: Large-scale HW and SW projects contain thousands of source files, which requires proper file management in order to keep track of changes and keep the code in a compilable state. Different parts of the system depend on each other, and even a small change in one part of the code may break the others. Dependency analysis can be used to prevent such problems by visualizing the SW structure so that dependencies are easily seen by the developer. This paper presents a novel tool for file dependency and change analysis and visualization, implemented in our IP-XACT based Kactus2 design environment (GPL2). The tool is capable of sorting source files into IP-XACT file sets, extracting and visualizing file dependencies, and keeping track of changed files. It also offers the ability to create manual dependencies, e.g., between source code and documentation. The dependency and change analysis of 1k source code files containing 140k lines of code is performed in less than two minutes.
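
An illustrative dependency-and-change scan in the spirit of the tool (Kactus2 itself is a C++/IP-XACT application; the file patterns and hashing below are assumptions):

```python
# Extract #include dependencies from C/C++ sources and detect changed files
# by content hash. Illustrative only; not Kactus2's implementation.
import hashlib
import pathlib
import re

INCLUDE_RE = re.compile(r'#include\s*"([^"]+)"')

def scan(root):
    """Return ({file: [included files]}, {file: content hash}) for sources under root."""
    deps, hashes = {}, {}
    for path in pathlib.Path(root).rglob("*.[ch]*"):   # .c/.h/.cpp/.hpp, roughly
        text = path.read_text(errors="ignore")
        deps[str(path)] = INCLUDE_RE.findall(text)
        hashes[str(path)] = hashlib.sha1(text.encode()).hexdigest()
    return deps, hashes

def changed_files(old_hashes, new_hashes):
    return [f for f, h in new_hashes.items() if old_hashes.get(f) != h]
```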

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter describes low-power WSN as a platform for signal processing by presenting the WSN services that can be used as building blocks for the applications and explains the implications of resource constraints and expected performance in terms of throughput, reliability and latency.
Abstract: A wireless sensor network (WSN) is a technology comprising up to thousands of autonomous, self-organizing nodes that combine environmental sensing, data processing, and wireless multihop ad-hoc networking. The features of WSNs enable monitoring, object tracking, and control functionality. Potential applications include environmental and condition monitoring, home automation, security and alarm systems, industrial monitoring and control, military reconnaissance and targeting, and interactive games. This chapter describes low-power WSNs as a platform for signal processing by presenting the WSN services that can be used as building blocks for applications. It explains the implications of resource constraints and the expected performance in terms of throughput, reliability and latency.

Book ChapterDOI
28 Aug 2013
TL;DR: The analysis and measurement results show that, with the modifications presented, IPv6 and PMIPv6 can be supported and utilized for localized mobility management and seamless handovers.
Abstract: A prospective next-generation wireless network is expected to integrate harmoniously into an IP-based core network. It is widely anticipated that IP-layer handover is a feasible solution to global mobility. However, IP-layer handover based on basic Mobile IP (MIP) cannot support real-time services very well due to its long handover delay. The Internet Engineering Task Force (IETF) Network-based Localized Mobility Management (NETLMM) working group developed a network-based localized mobility management protocol called Proxy Mobile IPv6 (PMIPv6) to reduce the handoff latency of MIPv6. Moreover, PMIPv6 provides IP mobility support to User Equipments (UEs) without requiring them to participate in any mobility-related signaling. This was one of the reasons why the 3rd Generation Partnership Project (3GPP) chose PMIPv6 as one of the mobility management protocols when defining the Evolved Packet System (EPS). One of the key features of the standard is its support for access system selection based on a combination of operator policies, user preference and access network conditions. Although Android, one of the most popular mobile operating systems, does not officially support IPv6 or PMIPv6, this paper analyzes the challenges of IPv6 and PMIPv6 usage and handover performance with real-time services. The analysis and measurement results show that, with the modifications presented, IPv6 and PMIPv6 can be supported and utilized for localized mobility management and seamless handovers.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: An information-theoretic approach to variable selection for the prediction of laboratory measurements of paper quality is presented, based on classical Shannon mutual information and the novel Maximal Information Coefficient.
Abstract: This paper presents an information-theoretic approach to variable selection for the prediction of laboratory measurements of paper quality. Along with the well-known Principal Component Analysis, we consider techniques for variable selection based on classical Shannon mutual information and the novel Maximal Information Coefficient. A multilayer perceptron neural model was used to predict the quality measurements and to compare the feature selection techniques. The suggested approach was tested on real industrial data obtained from a pilot paper machine. The presented results show that the information-theoretic techniques perform better than Principal Component Analysis, providing higher accuracy.
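
A minimal sketch of the Shannon-MI ranking step using scikit-learn's estimator (the Maximal Information Coefficient would require a separate library such as minepy; the variable names are placeholders, not the paper's process signals):

```python
# Rank candidate input variables by estimated mutual information with the target.
from sklearn.feature_selection import mutual_info_regression

def rank_variables(X, y, names):
    """X: (n_samples, n_variables) process data; y: laboratory quality measurement."""
    mi = mutual_info_regression(X, y)
    return sorted(zip(names, mi), key=lambda pair: -pair[1])
```

The top-ranked variables would then feed a predictor such as the multilayer perceptron used in the paper.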

Proceedings ArticleDOI
17 Jul 2013
TL;DR: It is shown that a robust controller tracking/rejecting signals generated by an exosystem can be decomposed into a servocompensator and a stabilizing controller that stabilizes the infinite-dimensional closed-loop system.
Abstract: Starting from a very general formulation of the Internal Model Principle, it is shown that a robust controller tracking/rejecting signals generated by an exosystem can be decomposed into a servocompensator and a stabilizing controller. The servocompensator contains an internal model of the exosystem generating the reference and disturbance signals, and the stabilizing controller stabilizes the infinite-dimensional closed-loop system.
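
In standard finite-dimensional notation (for orientation only; the paper works in an infinite-dimensional setting), the exosystem and a servocompensator carrying its internal model take the form:

```latex
% Exosystem generating reference/disturbance signals, and a servocompensator
% driven by the tracking error e(t); G_1 contains a copy (internal model) of S.
\dot{v}(t) = S v(t), \qquad \dot{z}(t) = G_1 z(t) + G_2 e(t)
```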

Journal ArticleDOI
TL;DR: A set of remedial solutions for mitigating deteriorative effects within the air interface of an OFDM transmission with the aid of BSS schemes is presented; these solutions are notable for their quite low computational complexity.
Abstract: One of the premier mechanisms for extracting unobserved signals from observed mixtures in signal processing is employing a blind source separation (BSS) algorithm or technique. Orthogonal frequency division multiplexing (OFDM) techniques play a prominent role in the sphere of multicarrier communication. A set of remedial solutions for mitigating deteriorative effects within the air interface of an OFDM transmission with the aid of BSS schemes is presented. Two energy functions, combined with an iterative fixed-point rule applied to the received signal, are used to determine the filter coefficients; these functions are optimized and their performance is justified. The time-correlation properties of the channel are exploited for BSS, with the aim of removing coloured noise and interference components from the signal mixture at the receiver. The method is tested in a slow-fading channel with a receiver using equal gain combining to treat the channel state information values. Importantly, these solutions are quite low-complexity mechanisms.

Proceedings ArticleDOI
23 Mar 2013
TL;DR: The results presented show that cooperative transmission from several cells can improve user throughput when the number of UEs sharing the same resources within an SFN is fairly low.
Abstract: Rapid growth in the number of wireless subscribers, coupled with the necessity to provide the desired level of broadband user experience, stimulates mobile operators to improve the efficiency of their limited air resources through different technologies. This paper discusses a study on a Single Frequency Network (SFN) based multipoint-to-point transmission scheme as a possible way to improve the usage of the air resources of a High-Speed Downlink Packet Access (HSDPA) network. The results presented show that cooperative transmission from several cells can improve user throughput when the number of UEs sharing the same resources within an SFN is fairly low. For this reason, a suitable size for the SFN cluster is strongly dictated by the load level of the network.