Author

Georgia Psychou

Bio: Georgia Psychou is an academic researcher from RWTH Aachen University. The author has contributed to research on the topics of Reliability (statistics) and Communication systems. The author has an h-index of 3 and has co-authored 6 publications that have received 125 citations.

Papers
Journal ArticleDOI
TL;DR: A systematic classification of approaches that increase system resilience in the presence of functional hardware (HW)-induced errors is presented, dealing with higher system abstractions, such as the (micro)architecture, the mapping, and platform software (SW).
Abstract: Nanoscale technology nodes bring reliability concerns back to the center stage of digital system design. A systematic classification of approaches that increase system resilience in the presence of functional hardware (HW)-induced errors is presented, dealing with higher system abstractions, such as the (micro)architecture, the mapping, and platform software (SW). The field is surveyed in a systematic way based on nonoverlapping categories, which add insight into the ongoing work by exposing similarities and differences. HW and SW solutions are discussed in a similar fashion so that interrelationships become apparent. The presented categories are illustrated with representative examples from the literature that demonstrate their properties. Moreover, it is demonstrated how hybrid schemes can be decomposed into their primitive components.

103 citations

Journal ArticleDOI
TL;DR: A systematic classification of the state of the art on the analysis and modeling of reliability threats caused by physical mechanisms in digital systems is presented, providing a classification tool that aids navigation across the entire landscape of reliability analysis and modeling.
Abstract: Technology downscaling is expected to amplify a variety of reliability concerns in future digital systems. A good understanding of reliability threats is crucial for the creation of efficient mitigation techniques. This survey performs a systematic classification of the state of the art on the analysis and modeling of such threats to digital systems, which are caused by physical mechanisms. The purpose of this article is to provide a classification tool that can aid navigation across the entire landscape of reliability analysis and modeling. A classification framework is constructed in a top-down fashion from complementary categories, each one addressing an approach to reliability analysis and modeling. In comparison to other classifications, the proposed methodology approaches the target research domain in a complete way, without suppressing hybrid works that fall under multiple categories. To substantiate the usability of the classification framework, representative works from the state of the art are mapped to each appropriate category and are briefly analyzed. Thus, research trends and opportunities for novel approaches can be identified.

11 citations

Proceedings Article
21 Jun 2012
TL;DR: In this paper, the authors focus on the broader solution space of selecting sub-word lengths at design time, especially including hybrid configurations, so that the mapping onto data-parallel single- and multi-core processors is more energy-efficient.
Abstract: Data-parallel processing is a widely applicable technique, which can be implemented on different processor styles with varying capabilities. Here we address single- or multi-core data-parallel instruction-set processors. Often, handling and reorganisation of the parallel data is needed because of diverse requirements during the execution of the application code. Signal word-length considerations are crucial to incorporate because they strongly influence the outcome. This paper focuses on the broader solution space of selecting sub-word lengths (at design time), especially including hybrids, so that the mapping onto these data-parallel single/multi-core processors is more energy-efficient. Our goal is to introduce systematic exploration techniques so that part of the designers' effort is removed. The methodology is evaluated on a representative application driver for a number of data-path variants, and the most promising trade-off points are indicated. The range of throughput-energy ratios among the different mapping implementations spans a factor of 2.2.
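As a rough illustration of the kind of design-time exploration described above, the following Python sketch enumerates per-kernel sub-word lengths (including hybrid combinations) and keeps the Pareto-optimal cycle/energy points. The datapath width, cost model, and reorganisation penalty are invented placeholders, not the cost models used in the paper.

# Minimal sketch of a design-time sub-word-length exploration loop.
# All numbers below are hypothetical placeholders; only the structure
# of the search (enumerate, constrain, keep Pareto points) matters.
from itertools import product

DATAPATH_BITS = 64                        # assumed SIMD datapath width
CANDIDATE_SUBWORDS = [8, 16, 32]          # candidate sub-word lengths (bits)
REORG_CYCLES = 64                         # assumed cost of repacking data when
                                          # neighbouring kernels use different widths

def kernel_cost(subword_bits, n_samples):
    """Toy cost model: more lanes mean fewer cycles, but per-cycle energy is
    assumed to grow faster than linearly with the number of active lanes."""
    lanes = DATAPATH_BITS // subword_bits
    cycles = -(-n_samples // lanes)                      # ceiling division
    energy = cycles * (1.0 + 0.05 * lanes ** 2)
    return cycles, energy

def explore(kernels):
    """Enumerate per-kernel sub-word choices (hybrids included), respect each
    kernel's minimum word length, and keep the Pareto-optimal points."""
    results = []
    for choice in product(CANDIDATE_SUBWORDS, repeat=len(kernels)):
        if any(w < k["min_bits"] for w, k in zip(choice, kernels)):
            continue                                     # insufficient precision
        cyc = eng = 0.0
        for i, (w, k) in enumerate(zip(choice, kernels)):
            c, e = kernel_cost(w, k["samples"])
            cyc, eng = cyc + c, eng + e
            if i and w != choice[i - 1]:                 # data reorganisation penalty
                cyc += REORG_CYCLES
                eng += REORG_CYCLES * 1.5
        results.append((choice, cyc, eng))
    return sorted(r for r in results
                  if not any(o[1] <= r[1] and o[2] < r[2] for o in results))

if __name__ == "__main__":
    kernels = [{"samples": 4096, "min_bits": 8},         # hypothetical application kernels
               {"samples": 4096, "min_bits": 16}]
    for choice, cyc, eng in explore(kernels):
        print("sub-words:", choice, "cycles:", cyc, "energy:", round(eng, 1))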

9 citations

Journal ArticleDOI
TL;DR: This article employs the new IBM INC-3000 prototype FPGA-based neural supercomputer to implement a widely used model of the cortical microcircuit; the achieved speed-up factor turns out to be essentially limited by the latency of the INC-3000 communication system.
Abstract: This article employs the new IBM INC-3000 prototype FPGA-based neural supercomputer to implement a widely used model of the cortical microcircuit. With approximately 80,000 neurons and 300 million synapses, this model has become a benchmark network for comparing simulation architectures with regard to performance. To the best of our knowledge, the achieved speed-up factor is 2.4 times larger than the highest speed-up factor reported in the literature and four times larger than biological real time, demonstrating the potential of FPGA systems for neural modeling. The work was performed at Jülich Research Centre in Germany, and the INC-3000 was built at the IBM Almaden Research Center in San Jose, CA, United States. For the simulation of the microcircuit, only the programmable logic part of the FPGA nodes is used. All arithmetic is implemented with single-precision floating point. The original microcircuit network with linear LIF neurons and current-based exponential-decay-, alpha-function-, as well as beta-function-shaped synapses was simulated using exact exponential integration as the ODE solver method. In order to demonstrate the flexibility of the approach, networks with non-linear neuron models (AdEx, Izhikevich) and conductance-based synapses were additionally simulated, applying Runge–Kutta and Parker–Sochacki solver methods. In all cases, the simulation-time speed-up factor did not decrease by more than a few percent. It turns out that the speed-up factor is essentially limited by the latency of the INC-3000 communication system.
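The following minimal Python sketch illustrates the exact exponential integration idea mentioned in the abstract for a single current-based LIF neuron with an exponential-decay synapse. All parameter values and the input weight are illustrative assumptions; this is not the FPGA implementation or the microcircuit parameterization.

# Minimal sketch of exact exponential integration for one current-based
# LIF neuron with an exponential-decay synapse. Parameters are assumed
# illustrative values, not those of the cortical microcircuit model.
import math

h     = 0.1      # time step (ms)
tau_m = 10.0     # membrane time constant (ms)
tau_s = 0.5      # synaptic time constant (ms)
C_m   = 250.0    # membrane capacitance (pF)
V_th  = 15.0     # spike threshold relative to rest (mV)
V_res = 0.0      # reset potential relative to rest (mV)

# Propagators of the linear subthreshold dynamics over one step h
# (closed-form solution of the coupled linear ODEs, i.e. "exact" integration).
P_II = math.exp(-h / tau_s)
P_VV = math.exp(-h / tau_m)
P_VI = (tau_m * tau_s / (C_m * (tau_m - tau_s))) * (P_VV - P_II)

def step(V, I, spike_input):
    """Advance (V, I) exactly over one step, then add incoming weighted spikes."""
    V_new = P_VV * V + P_VI * I
    I_new = P_II * I + spike_input
    if V_new >= V_th:                    # threshold crossing: emit spike, reset
        return V_res, I_new, True
    return V_new, I_new, False

V, I = 0.0, 0.0
for t in range(1000):
    # Exaggerated input weight (pA) chosen only so that the demo actually spikes.
    V, I, spiked = step(V, I, spike_input=10000.0 if t % 100 == 0 else 0.0)
    if spiked:
        print("spike at step", t)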

7 citations

DOI
01 Jan 2017
TL;DR: A framework is proposed that exploits the repetitive nature of fault injection experiments to speed up the evaluation of LTI blocks; propagating error statistics through each subsystem further improves the speed-up and flexibility of the reliability evaluation of complex systems.
Abstract: Today's nano-scale technology nodes are bringing reliability concerns back to the center stage of digital system design because of issues like process variability, noise effects, and radiation particles, as well as increasing variability at run time. Alleviating these effects can become very costly, and the benefits of technology scaling can be significantly reduced or even lost. In order to build more robust digital systems, their behavior in the presence of hardware-induced bit errors must first be analyzed. In many systems, certain types of errors can be tolerated, and such cases can be revealed through this analysis, so that overhead can be avoided and remedy measures applied only where needed. Communication systems are an interesting domain for such explorations: first, they have high societal relevance due to their ubiquity; second, they can potentially tolerate hardware-induced errors thanks to the built-in redundancy present to cope with channel noise. This work focuses on analyzing the impact of such errors on the behavior of communication systems. Typically, error propagation studies are performed through time-consuming fault injection campaigns. These approaches do not scale well with growing system sizes. Stochastic experiments allow a more time-efficient approach. On top of that, breaking down the system into subsystems and propagating error statistics through each of these subsystems further improves the speed-up and flexibility in the reliability evaluation of complex systems. As an initial step in this thesis, statistical moments are propagated through the signal flows of Linear-Time-Invariant (LTI) blocks. Such a scheme, although fast, can only be applied when the signal lacks autocorrelation. However, autocorrelation can be introduced into the signal for various reasons, for example by signal processing blocks. In that case, other approaches are available to reduce the computational cost of the necessary (repetitive) experiments, such as Principal Component Analysis (PCA). The benefits of such a technique depend on several parameters, and therefore a more broadly usable technique is required. To address this need, a framework is proposed that exploits the repetitive nature of fault injection experiments for speed-up in LTI blocks. Two cases are distinguished: one in which all operators of the LTI block act in a linear time-invariant way, and one in which non-linear operations due to finite wordlengths are present. To complement the subject matter, the broad range of hardware-based mitigation techniques at the higher system levels is explored and characterized. In this way, the main properties of each mitigation category are identified, and suitable choices can be made according to the application needs.
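As a small illustration of the moment-propagation idea described in the abstract, the sketch below propagates the mean and variance of a white (autocorrelation-free) error signal through an FIR-style LTI block analytically and cross-checks the result with a brute-force injection run. The filter taps and error statistics are assumed values, not taken from the thesis.

# Minimal sketch of moment propagation through an LTI (FIR) block for a
# white, uncorrelated error signal, compared against a Monte Carlo style
# explicit injection run. Taps and statistics are illustrative assumptions.
import numpy as np

h = np.array([0.25, 0.5, 0.25])        # impulse response of the LTI block
mu_e, var_e = 0.01, 1e-4               # mean/variance of the injected error

# Analytical propagation (valid only when the error lacks autocorrelation):
# mean scales with the sum of taps, variance with the sum of squared taps.
mu_out = mu_e * h.sum()
var_out = var_e * np.sum(h ** 2)

# Brute-force cross-check by explicit error injection and filtering.
rng = np.random.default_rng(0)
e = rng.normal(mu_e, np.sqrt(var_e), size=200_000)
y = np.convolve(e, h, mode="valid")

print("analytical :", mu_out, var_out)
print("monte carlo:", y.mean(), y.var())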

2 citations


Cited by
Journal ArticleDOI
01 Apr 1988 - Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England), which reveal a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly sorted, chaotic, mud-supported floatstones.

9,929 citations

01 Jan 2013
TL;DR: Today all of the electronics is concentrated on a single chip, and the simulation tools used are more of the VHDL type, or even mixed continuous/event-driven, but most of them are inspired by SPICE.
Abstract: SPICE is a piece of software that greatly impressed me when I was a student, because of its power and its simplicity of use. It is true that we did a lot of practical lab work in electronics, and simply being able to put a circuit into list form, called a netlist, and simulate it was a fascinating discovery. SPICE served as a model for many other simulation programs, in universities and in industry, thanks to its early Open Source model. Today all of the electronics is concentrated on a single chip, and the simulation tools used are more of the VHDL type [1], or even mixed continuous/event-driven [2], but most of them are inspired by SPICE. Nowadays, one edits the schematic graphically and launches the simulation directly from the graphical interface. What should be kept in mind is that many of these graphical interfaces build a netlist, and it is an adapted SPICE that performs the simulation behind the scenes. The first version of SPICE dates from 1972 and was written in Fortran, quickly followed by the second version in 1975. It was not until 1989 that the third (and definitive) version of SPICE appeared, written in C. It is remarkable that this third version was the last, with only minor updates designated by letters, the latest being 3f5. Today, and in this article, we will use Ngspice [3], which is based on the latest version of SPICE and is released under the GNU GPL2 license. It is provided by the gEDA development group [4], which has taken up the torch for the development of all electronic circuit simulation and design software.
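To illustrate the netlist-driven workflow described above, the following Python sketch builds a small RC netlist and hands it to Ngspice in batch mode, much as graphical front ends do behind the scenes. Component values and file names are arbitrary, and an ngspice executable is assumed to be installed and on the PATH.

# Minimal sketch: build a SPICE netlist in code and run it through Ngspice
# in batch mode. Values and file names are arbitrary illustrations.
import subprocess

netlist = """* Simple RC low-pass step response
V1 in 0 PULSE(0 1 0 1u 1u 1m 2m)
R1 in out 1k
C1 out 0 1u
.tran 10u 5m
.control
run
wrdata rc_out.txt v(out)
quit
.endc
.end
"""

with open("rc.cir", "w") as f:
    f.write(netlist)

# -b runs ngspice non-interactively; the .control block drives the simulation
# and writes v(out) to rc_out.txt.
subprocess.run(["ngspice", "-b", "rc.cir"], check=True)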

159 citations

Journal ArticleDOI
TL;DR: Canonical correlation analysis is a prototypical family of methods that is useful in identifying the links between variable sets from different modalities and so is well suited to the analysis of big neuroscience datasets.
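As a small illustration of the idea, the sketch below applies scikit-learn's CCA to two synthetic "modalities" driven by a shared latent factor; the data dimensions and noise levels are arbitrary assumptions, not taken from the cited work.

# Minimal sketch of canonical correlation analysis linking two synthetic
# "modalities" that share one latent factor. Dimensions and noise levels
# are arbitrary choices for illustration.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(42)
n = 500
latent = rng.normal(size=(n, 1))                                         # shared signal

X = latent @ rng.normal(size=(1, 10)) + 0.5 * rng.normal(size=(n, 10))   # modality 1 (e.g. imaging)
Y = latent @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(n, 5))     # modality 2 (e.g. behaviour)

cca = CCA(n_components=1)
X_c, Y_c = cca.fit_transform(X, Y)

# The first canonical variate pair should correlate strongly because both
# variable sets are driven by the same latent factor.
print("first canonical correlation:", np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1])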

133 citations

Journal ArticleDOI
TL;DR: This study includes the collection of data from experimental work and the application of ML techniques to predict the compressive strength (CS) of concrete containing fly ash; the model shows high prediction accuracy, as indicated by its high correlation coefficient (R2) value.
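The following sketch mimics the reported workflow in a generic way: fit a regressor to mix-design features and score it with R². The synthetic data and RandomForest model are stand-ins, not the study's dataset or models.

# Minimal sketch of the reported workflow: fit an ML regressor to mix-design
# features and judge it by R^2. The synthetic data below stands in for the
# experimental dataset, which is not available here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical mix features: cement, fly ash, water, age (arbitrary units).
X = rng.uniform(size=(n, 4))
# Synthetic "compressive strength" with noise, only for demonstration.
y = 40 * X[:, 0] + 25 * X[:, 1] - 30 * X[:, 2] + 15 * np.log1p(10 * X[:, 3])
y += rng.normal(scale=2.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))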

103 citations