
Showing papers by "Timo Hämäläinen published in 2021"


Proceedings ArticleDOI
30 Sep 2021
TL;DR: In this article, the authors discuss the open research challenges in achieving a holistic design space exploration for HW/SW co-design of edge AI systems and review the current state with three flows under development: one design flow for systems with tightly-coupled accelerator architectures based on RISC-V, one approach using loosely-coupled, application-specific accelerators, and one framework that integrates software and hardware optimization techniques to build efficient Deep Neural Network (DNN) systems.
Abstract: Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT), and Smart Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world using edge sensors and actuators. For IoT systems, there is now a strong trend to move the intelligence from the cloud to the edge or the extreme edge (known as TinyML). Yet, this shift to edge AI systems requires designing powerful machine learning systems under very strict resource constraints. This poses a difficult design task that needs to take into account the complete system stack, from the machine learning algorithm, to model optimization and compression, to software implementation, to the hardware platform and ML accelerator design. This paper discusses the open research challenges in achieving such a holistic Design Space Exploration for HW/SW Co-design of Edge AI Systems and reviews the current state with three flows under development: one design flow for systems with tightly-coupled accelerator architectures based on RISC-V, one approach using loosely-coupled, application-specific accelerators, as well as one framework that integrates software and hardware optimization techniques to build efficient Deep Neural Network (DNN) systems.
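Such a holistic design space exploration can be pictured, at a toy level, as enumerating configurations across the stack and keeping only the Pareto-optimal ones. The sketch below is purely illustrative: the bit-widths, accelerator labels, and the cost model are hypothetical placeholders, not values from the paper.

```python
from itertools import product

# Hypothetical design points: quantization bit-width and accelerator style.
# The accuracy/energy numbers come from a placeholder cost model, not data.
BITWIDTHS = [8, 4, 2]
ACCELERATORS = {"tightly-coupled": 1.0, "loosely-coupled": 0.7}

def evaluate(bits, accel):
    """Toy cost model: lower bit-width saves energy but costs accuracy."""
    accuracy = 0.95 - 0.02 * (8 - bits)   # placeholder accuracy model
    energy = bits * ACCELERATORS[accel]   # placeholder energy model
    return accuracy, energy

def pareto_front(points):
    """Keep points not dominated in (accuracy up, energy down)."""
    front = []
    for p in points:
        dominated = any(q["acc"] >= p["acc"] and q["energy"] <= p["energy"]
                        and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

points = [{"bits": b, "accel": a,
           "acc": evaluate(b, a)[0], "energy": evaluate(b, a)[1]}
          for b, a in product(BITWIDTHS, ACCELERATORS)]
front = pareto_front(points)
```

With this toy model every Pareto-optimal point uses the cheaper accelerator style, illustrating how a DSE loop prunes dominated configurations.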

12 citations


Journal ArticleDOI
TL;DR: In this paper, a joint radio resource allocation and offloading decision optimization problem is presented under the explicit consideration of capacity constraints of fronthaul and backhaul links, and the original problem is divided into several sub-problems and addressed accordingly to find the optimal solution.
Abstract: Edge computing is able to provide proximity solutions for future wireless networks to accommodate different types of devices with various computing service demands. Meanwhile, in order to provide ubiquitous connectivity to massive numbers of devices over a relatively large area, densely deploying remote radio heads (RRHs) is considered a cost-efficient solution. In this work, we consider a vertical and heterogeneous multi-access edge computing system. In the system, the RRHs are deployed to provide wireless access for the users, and the edge node with computing capability can process the computation requests from the users. With the objective of minimizing the total energy consumption for processing the computation task, a joint radio resource allocation and offloading decision optimization problem is presented under explicit consideration of the capacity constraints of the fronthaul and backhaul links. Due to the non-convexity of the formulated problem, we divide the original problem into several sub-problems and solve them in turn to find the optimal solution. Extensive simulation studies are conducted to evaluate the advantages of the proposed scheme.
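The divide-and-solve strategy described in the abstract is an instance of block-coordinate (alternating) optimization: fix one block of variables, solve the resulting sub-problem, and iterate. The sketch below illustrates only this general pattern, with a made-up two-variable objective standing in for the paper's actual energy model.

```python
# Alternating minimization of f(x, y) = (x + y - 3)^2 + (x - 1)^2.
# Each sub-problem has a closed-form minimizer, mirroring how a joint
# non-convex problem can be split into tractable sub-problems.

def argmin_x(y):
    # d/dx [(x + y - 3)^2 + (x - 1)^2] = 0  =>  x = (4 - y) / 2
    return (4 - y) / 2

def argmin_y(x):
    # d/dy [(x + y - 3)^2] = 0  =>  y = 3 - x
    return 3 - x

x, y = 0.0, 0.0
for _ in range(50):
    x = argmin_x(y)   # sub-problem 1: optimize x with y fixed
    y = argmin_y(x)   # sub-problem 2: optimize y with x fixed
```

The iterates converge to the joint optimum (x, y) = (1, 2), where the objective is zero; the same fix-and-alternate structure underlies splitting resource allocation from offloading decisions.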

9 citations


Journal ArticleDOI
09 Sep 2021
TL;DR: In this paper, a coupled nonnegative tensor decomposition algorithm was applied to two adjacency tensors with the dimensions time × frequency × connectivity × subject, imposing double-coupled constraints on the spatial and spectral modes.
Abstract: Previous research demonstrates that major depressive disorder (MDD) is associated with widespread network dysconnectivity, and the dynamics of functional connectivity networks are important to delineate the neural mechanisms of MDD. Neural oscillations play a key role in coordinating the activity of remote brain regions, and various assemblies of oscillations can modulate different networks to support different cognitive tasks. Studies have demonstrated that the dysconnectivity of electroencephalography (EEG) oscillatory networks is related to MDD. In this study, we investigated the oscillatory hyperconnectivity and hypoconnectivity networks in MDD under a naturalistic and continuous stimulus condition of music listening. With the assumption that the healthy group and the MDD group share similar brain topology from the same stimuli while also retaining individual brain topology for group differences, we applied the coupled nonnegative tensor decomposition algorithm to two adjacency tensors with the dimensions time × frequency × connectivity × subject, and imposed double-coupled constraints on the spatial and spectral modes. The music-induced oscillatory networks were identified by a correlation analysis approach based on a permutation test between extracted temporal factors and musical features. We obtained three hyperconnectivity networks from the individual features of MDD and three hypoconnectivity networks from the common features. The results demonstrated that the dysfunction of oscillatory networks could affect MDD patients' involvement in music perception. These oscillatory dysconnectivity networks may provide promising references to reveal the pathoconnectomics of MDD and potential biomarkers for the diagnosis of MDD.
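The correlation analysis with a permutation test described above can be sketched generically: correlate an extracted temporal factor with a musical feature, then estimate significance by re-computing the correlation under random shuffles. The vectors and the procedure below are illustrative stand-ins, not the study's actual data pipeline.

```python
import random

def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def permutation_pvalue(factor, feature, n_perm=1000, seed=0):
    """P-value: fraction of shuffles with |r| at least the observed |r|."""
    rng = random.Random(seed)
    observed = abs(pearson(factor, feature))
    shuffled = list(feature)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(pearson(factor, shuffled)) >= observed:
            hits += 1
    return hits / n_perm
```

A strongly correlated factor/feature pair yields a small p-value, marking that temporal factor as music-induced; constant inputs are not handled and would need a guard in practice.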

5 citations


Proceedings ArticleDOI
07 Jun 2021
TL;DR: In this paper, three lightweight compression algorithms are implemented on an embedded LoRa platform to compress sensor data online, and the overall energy consumption is measured. The results show that a simple compression algorithm is an effective way to improve battery-powered sensor node lifetime.
Abstract: This paper evaluates and measures the efficiency of simple temporal compression algorithms in reducing the energy consumption of LoRa-based sensor nodes. It is known that radio transmission is the most energy-consuming operation in a wireless sensor node. Three lightweight compression algorithms are implemented on an embedded LoRa platform to compress sensor data online, and the overall energy consumption is measured and compared to the situation without any compression algorithm. The results show that a simple compression algorithm is an effective method to improve battery-powered sensor node lifetime. Despite the radio transmission's high energy consumption, the sleep current is a significant factor in the device's overall lifetime if the measurement interval is long.
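The abstract does not name the three algorithms, but delta encoding is a typical example of such a lightweight temporal compression scheme for slowly changing sensor readings. The sketch below is a generic illustration, not necessarily one of the evaluated algorithms.

```python
def delta_encode(samples):
    """Store the first sample plus successive differences; slowly changing
    sensor data yields many small deltas that pack into fewer bits on air."""
    if not samples:
        return []
    out = [samples[0]]
    out.extend(b - a for a, b in zip(samples, samples[1:]))
    return out

def delta_decode(deltas):
    """Rebuild the original samples by accumulating the deltas."""
    out = []
    acc = 0
    for i, d in enumerate(deltas):
        acc = d if i == 0 else acc + d
        out.append(acc)
    return out
```

For a temperature-like trace such as [100, 101, 101, 103, 102] the encoder emits one full sample and four small deltas, which is what makes the subsequent radio payload shorter.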

3 citations


Proceedings ArticleDOI
13 Oct 2021
TL;DR: A survey of functional programming languages in System-on-a-Chip (SoC) design can be found in this paper, where the authors focus on Chisel, one of the most promising High-Level Language (HLL) based design frameworks.
Abstract: This paper presents a survey of functional programming languages in System-on-a-Chip (SoC) design. The motivation is to improve design productivity through better source code expressiveness, an increased abstraction level in design entry, or improved automation. The survey focuses on Chisel, which is one of the most promising High-Level Language (HLL) based design frameworks. We include 26 papers that report implementations ranging from IP blocks to complete chips. The result is that functional programming languages are viable for SoC design and can also be deployed in production use. However, Chisel does not increase the abstraction level in the same way as High-Level Synthesis (HLS), since it is used to create circuit generators instead of direct descriptions. An additional benefit is that Chisel offloads user effort from control and connectivity structures, and improves reusability and configurability compared with traditional Hardware Description Language (HDL) designs.

3 citations


Posted ContentDOI
23 Apr 2021-bioRxiv
TL;DR: In this article, a coupled nonnegative tensor decomposition algorithm was applied to two adjacency tensors with the dimensions time × frequency × connectivity × subject, imposing double-coupled constraints on the spatial and spectral modes.
Abstract: Previous research demonstrates that major depressive disorder (MDD) is associated with widespread network dysconnectivity, and the dynamics of functional connectivity networks are important to delineate the neural mechanisms of MDD. Cortical electroencephalography (EEG) oscillations act as coordinators to connect different brain regions, and various assemblies of oscillations can form different networks to support different cognitive tasks. Studies have demonstrated that the dysconnectivity of EEG oscillatory networks is related to MDD. In this study, we investigated the oscillatory hyperconnectivity and hypoconnectivity networks in MDD under a naturalistic and continuous stimulus condition of music listening. With the assumption that the healthy group and the MDD group share similar brain topology from the same stimuli while also retaining individual brain topology for group differences, we applied the coupled nonnegative tensor decomposition algorithm to two adjacency tensors with the dimensions time × frequency × connectivity × subject, and imposed double-coupled constraints on the spatial and spectral modes. The music-induced oscillatory networks were identified by a correlation analysis approach based on a permutation test between extracted temporal factors and musical features. We obtained three hyperconnectivity networks from the individual features of MDD and three hypoconnectivity networks from the common features. The results demonstrated that the dysfunction of oscillation-modulated networks could affect MDD patients' involvement in music perception. These oscillatory dysconnectivity networks may provide promising references to reveal the pathoconnectomics of MDD and potential biomarkers for the diagnosis of MDD.

2 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this paper, the authors explore energy efficiency optimization for a solar-powered unmanned aerial vehicle (UAV) communications system and propose to dynamically adjust the UAV trajectory and attitude by optimizing its velocity, acceleration, heading angle, and ground user transmission power.
Abstract: In this work, we explore energy efficiency optimization for a solar-powered unmanned aerial vehicle (UAV) communications system. We consider a scenario where a number of ground users (GUs) connect with a solar-powered multi-antenna UAV over a wireless link. First, we derive the relations between the uplink data rate, the UAV's heading angle, and the transmission power of the GUs. In addition, the energy harvested from sunlight is also affected by the UAV's angle. Accordingly, with the objective of maximizing the energy efficiency, which depends on the uplink data rate and the energy consumption, we propose to dynamically adjust the UAV trajectory and attitude by optimizing its velocity, acceleration, heading angle, and the ground users' transmission power. Performance evaluations demonstrate the advantages of the presented scheme for the design of solar-powered UAV communications systems.

2 citations


Proceedings ArticleDOI
25 Apr 2021
TL;DR: In this paper, the authors propose a covert communication scheme against a mobile warden, which maximizes the connection throughput between a multi-antenna transmitter and a full-duplex jamming receiver subject to a covert outage probability (COP) limit.
Abstract: Covert communication can hide the information transmission process from a warden to prevent adversarial eavesdropping. However, this becomes challenging when the warden can move. In this paper, we propose a covert communication scheme against a mobile warden, which maximizes the connection throughput between a multi-antenna transmitter and a full-duplex jamming receiver subject to a covert outage probability (COP) limit. First, we analyze the monotonicity of the COP to obtain the optimal location to which the warden can move. Then, under this worst-case situation, we optimize the transmission rate, the transmit power, and the jamming power of the covert communication to maximize the connection throughput. This problem is solved in two stages: we first maximize the connection probability over the transmit-to-jamming power ratio within the maximum allowed COP for a fixed transmission rate; then, Newton's method is applied to maximize the connection throughput by optimizing the transmission rate iteratively. Simulation results are presented to evaluate the effectiveness of the proposed scheme.
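The second stage, one-dimensional maximization of throughput over the transmission rate via Newton's method, can be sketched generically. The concave rate-times-connection-probability objective below is a hypothetical stand-in for the paper's actual expressions.

```python
import math

def throughput(rate, a=0.5):
    """Toy stand-in: throughput = rate x connection probability exp(-a*rate).
    The true optimum of this model is at rate = 1/a."""
    return rate * math.exp(-a * rate)

def newton_max(a=0.5, r0=0.1, iters=20):
    """Newton's method on the stationarity condition d(throughput)/dR = 0."""
    r = r0
    for _ in range(iters):
        g = math.exp(-a * r) * (1 - a * r)         # first derivative
        gp = -a * math.exp(-a * r) * (2 - a * r)   # second derivative
        r -= g / gp                                # Newton update
    return r
```

With a = 0.5 the iteration converges to the analytic maximizer R* = 2; in the paper's setting the derivatives would come from the connection-probability expression instead of this placeholder.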

2 citations



Posted Content
TL;DR: The OSRM-CCTV as mentioned in this paper is the first and only CCTV-aware routing and navigation system designed and built for privacy, anonymity and safety applications, which provides both privacy and safety options for areas where cameras are known to be present.
Abstract: For the last several decades, the increased, widespread, unwarranted, and unaccountable use of Closed-Circuit TeleVision (CCTV) cameras globally has raised concerns about privacy risks. Additional recent features of many CCTV cameras, such as Internet of Things (IoT) connectivity and Artificial Intelligence (AI)-based facial recognition, only increase concerns among privacy advocates. Therefore, CCTV-aware solutions must exist that provide privacy, safety, and cybersecurity features. We argue that an important step forward is to develop solutions addressing privacy concerns via routing and navigation systems (e.g., OpenStreetMap, Google Maps) that provide both privacy and safety options for areas where cameras are known to be present. However, at present no routing and navigation system, whether online or offline, provides corresponding CCTV-aware functionality. In this paper we introduce OSRM-CCTV -- the first and only CCTV-aware routing and navigation system designed and built for privacy, anonymity, and safety applications. We validate and demonstrate the effectiveness and usability of the system on a handful of synthetic and real-world examples. To help validate our work as well as to further encourage the development and wide adoption of the system, we release OSRM-CCTV as open source.
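CCTV-aware routing of this general kind can be implemented by penalizing road-graph edges that pass known cameras and then running an ordinary shortest-path search, so a camera-free detour wins whenever one exists. The sketch below uses Dijkstra's algorithm with a hypothetical penalty constant; it is an illustration of the idea, not OSRM-CCTV's actual implementation.

```python
import heapq

CAMERA_PENALTY = 1000.0  # hypothetical privacy penalty per camera edge

def cctv_aware_route(graph, cameras, start, goal):
    """graph: dict node -> list of (neighbor, length);
    cameras: set of (u, v) edges with a known camera."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, length in graph[u]:
            w = length + (CAMERA_PENALTY if (u, v) in cameras else 0.0)
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk back from goal to start to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

On a toy graph where the geometrically shortest path passes a camera, the penalized search returns the slightly longer but camera-free route instead.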

Proceedings ArticleDOI
06 Sep 2021
TL;DR: In this paper, a joint radio resource allocation and offloading decision optimization problem is presented under the explicit consideration of capacity constraints of fronthaul and backhaul links, and the original problem is divided into several sub-problems and addressed accordingly to find the optimal solution.
Abstract: Edge computing is able to provide proximity solutions for future wireless networks to accommodate different types of devices with various computing service demands. Meanwhile, in order to provide ubiquitous connectivity to massive numbers of devices over a relatively large area, densely deploying remote radio heads (RRHs) is considered a cost-efficient solution. In this work, we consider a vertical and heterogeneous multi-access edge computing system. In the system, the RRHs are deployed to provide wireless access for the users, and the edge node with computing capability can process the computation requests from the users. With the objective of minimizing the total energy consumption for processing the computation task, a joint radio resource allocation and offloading decision optimization problem is presented under explicit consideration of the capacity constraints of the fronthaul and backhaul links. Due to the non-convexity of the formulated problem, we divide the original problem into several sub-problems and solve them in turn to find the optimal solution. Extensive simulation studies are conducted to evaluate the advantages of the proposed scheme.

Proceedings ArticleDOI
29 Jun 2021
TL;DR: In this paper, the authors investigated how security-related information can be shared online as efficiently as possible by building a security information sharing topology based on the two most widely used network optimization algorithms.
Abstract: Digitized environments are particularly vulnerable to various attacks. In the event of a security attack, detecting and responding to it requires effective action. One of the most significant ways to improve resilience to security attacks is to obtain an accurate and timely situational picture for security awareness. Efficient production and utilization of situation information is achieved by sharing information with other actors in the information sharing network quickly and reliably, without compromising the confidential information of one's own organization. At the same time, it should also be possible to avoid a flood of irrelevant information in the sharing network, which wastes resources and slows down the implementation of security measures. In our study, we have investigated how security-related information can be shared online as efficiently as possible by building a security information sharing topology based on the two most widely used network optimization algorithms. In the article, we present a model of an information sharing network in which three parameters are used to optimize the network topology: the organization's activity level, the similarity of information systems between actors, and the organization's general requirement for the level of information privacy.
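The abstract does not name the two algorithms, but minimum spanning tree construction and shortest-path search are the common candidates for building such a sharing topology. Below is a sketch of the MST half (Kruskal's algorithm) over edge weights folded from the three parameters; the weighting function is a hypothetical illustration, not the paper's model.

```python
def edge_weight(activity, similarity, privacy):
    """Hypothetical fold of the three parameters (each in [0, 1]) into one
    edge weight: active, similar, low-privacy-barrier pairs link cheaply."""
    return (1 - activity) + (1 - similarity) + privacy

def kruskal(n, edges):
    """edges: (weight, u, v) over nodes 0..n-1; returns MST edge list."""
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return tree
```

The resulting tree connects every organization while using only the cheapest links, which matches the goal of fast, reliable sharing without flooding the network.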

Proceedings Article
01 Oct 2021
TL;DR: In this article, the authors discuss the open research challenges in achieving a holistic design space exploration for HW/SW co-design of edge AI systems and review the current state with three flows under development: one design flow for systems with tightly-coupled accelerator architectures based on RISC-V, one approach using loosely-coupled, application-specific accelerators, and one framework that integrates software and hardware optimization techniques to build efficient Deep Neural Network (DNN) systems.
Abstract: Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT), and Smart Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world using edge sensors and actuators. For IoT systems, there is now a strong trend to move the intelligence from the cloud to the edge or the extreme edge (known as TinyML). Yet, this shift to edge AI systems requires designing powerful machine learning systems under very strict resource constraints. This poses a difficult design task that needs to take into account the complete system stack, from the machine learning algorithm, to model optimization and compression, to software implementation, to the hardware platform and ML accelerator design. This paper discusses the open research challenges in achieving such a holistic Design Space Exploration for HW/SW Co-design of Edge AI Systems and reviews the current state with three flows under development: one design flow for systems with tightly-coupled accelerator architectures based on RISC-V, one approach using loosely-coupled, application-specific accelerators, as well as one framework that integrates software and hardware optimization techniques to build efficient Deep Neural Network (DNN) systems.

Proceedings ArticleDOI
02 Dec 2021
TL;DR: The RelAA Framework as discussed by the authors is a bottom-up approach for monitoring product-focused software development, which is created in an industrial setup that currently includes around 350 persons in different phases of the software life cycle.
Abstract: The development of software for modern products with many interfaces, layers, and stakeholders has become very complex, increasing the risk of inefficiency. Key Performance Indicators (KPIs) can be used to identify bottlenecks and problems, but the challenge is how to create KPI models, processes, and dashboards that help improve the development processes and can be adopted by all stakeholders. We introduce the RelAA Framework - a bottom-up approach for monitoring product-focused software development. The RelAA (Relevant, Accessible and Adoptable) Framework is created in an industrial setup that currently includes around 350 persons in different phases of the software life cycle. The RelAA Framework is formed by analyzing existing KPIs and tools; gathering feedback from development teams, management, business representatives, and other stakeholders; and creating intuitive ways to share information related to KPIs. The RelAA Framework itself does not define exact KPIs for the organization to adopt, but it provides a process and model for how to create, document, and utilize KPIs. The RelAA Framework ensures relevance, accessibility, and adoption of KPIs across stakeholders and the organization.

Proceedings ArticleDOI
11 Oct 2021
TL;DR: In this paper, an energy optimization model that jointly considers power allocation, user association, and dynamic beam ON/OFF operation is proposed to minimize the onboard power under QoS requirements; simulations show it can effectively reduce system energy consumption.
Abstract: In Low Earth Orbit (LEO) satellites, which run in polar orbits, the area of overlap among beams widens as the latitude of the satellite increases, leading to intolerable interference and extra energy consumption. To minimize the onboard power under QoS requirements, we propose an energy optimization model that jointly considers power allocation, user association, and dynamic beam ON/OFF operation. Moreover, frequent beam ON/OFF operations lead to a large number of user handovers, so the handover cost is also considered in the model. The original problem is decomposed into two levels due to the high coupling of variables, and successive convex approximation is employed. A low-complexity greedy ON/OFF iteration is proposed to adapt to the dynamic topology of LEO constellations. Simulation results show that the proposed scheme can effectively reduce system energy consumption.
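A greedy beam ON/OFF iteration of the general kind described can be sketched as follows; the acceptance rule, data layout, and handover-cost accounting here are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative greedy beam ON/OFF sketch. A beam is switched off only if
# every user it serves can be re-associated to another active beam AND the
# power saved exceeds the handover cost those re-associations incur.

def greedy_beam_off(beam_power, users_per_beam, coverage, handover_cost):
    """
    beam_power:     dict beam -> ON power cost
    users_per_beam: dict beam -> list of users currently associated
    coverage:       dict user -> set of beams able to serve the user
    Returns (active beams, final user association).
    """
    active = set(beam_power)
    assoc = {u: b for b, us in users_per_beam.items() for u in us}
    # Try the most power-hungry beams first.
    for b in sorted(beam_power, key=beam_power.get, reverse=True):
        movers = [u for u, bb in assoc.items() if bb == b]
        targets = {}
        for u in movers:
            alternatives = [a for a in coverage[u] if a in active and a != b]
            if not alternatives:
                break  # some user would lose service: keep the beam on
            targets[u] = alternatives[0]
        else:  # every user of b has an alternative beam
            if beam_power[b] > handover_cost * len(movers):
                active.discard(b)
                assoc.update(targets)
    return active, assoc
```

In a two-beam example where the expensive beam's only user is also covered by the cheap beam, the iteration turns the expensive beam off and hands its user over, mirroring how the greedy step trades handover cost against saved onboard power.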