
Showing papers by "Technische Universität Darmstadt" published in 2014


Proceedings ArticleDOI
09 Jun 2014
TL;DR: FlowDroid is presented, a novel and highly precise static taint analysis for Android applications that successfully finds leaks in a subset of 500 apps from Google Play and about 1,000 malware apps from the VirusShare project.
Abstract: Today's smartphones are a ubiquitous source of private and confidential data. At the same time, smartphone users are plagued by carelessly programmed apps that leak important data by accident, and by malicious apps that exploit their given privileges to copy such data intentionally. While existing static taint-analysis approaches have the potential of detecting such data leaks ahead of time, all approaches for Android use a number of coarse-grain approximations that can yield high numbers of missed leaks and false alarms. In this work we thus present FlowDroid, a novel and highly precise static taint analysis for Android applications. A precise model of Android's lifecycle allows the analysis to properly handle callbacks invoked by the Android framework, while context, flow, field and object-sensitivity allows the analysis to reduce the number of false alarms. Novel on-demand algorithms help FlowDroid maintain high efficiency and precision at the same time. We also propose DroidBench, an open test suite for evaluating the effectiveness and accuracy of taint-analysis tools specifically for Android apps. As we show through a set of experiments using SecuriBench Micro, DroidBench, and a set of well-known Android test applications, FlowDroid finds a very high fraction of data leaks while keeping the rate of false positives low. On DroidBench, FlowDroid achieves 93% recall and 86% precision, greatly outperforming the commercial tools IBM AppScan Source and Fortify SCA. FlowDroid successfully finds leaks in a subset of 500 apps from Google Play and about 1,000 malware apps from the VirusShare project.
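At its core, a static taint analysis like this tracks whether values derived from sensitive sources can reach sinks. The toy Python sketch below illustrates that idea as a fixed-point propagation over a made-up straight-line program; the variable names and the getDeviceId/sendTextMessage source/sink pair are purely illustrative, and none of FlowDroid's lifecycle modeling or context/flow/field/object sensitivity is reproduced here.

```python
# Toy static taint analysis: propagate taint from sources to sinks over a
# list of assignments until a fixed point is reached. Purely illustrative;
# FlowDroid itself is far more precise than this sketch.

statements = [
    ("id", ["SOURCE:getDeviceId"]),      # id = getDeviceId()  (taint source)
    ("msg", ["id"]),                     # msg = "IMEI: " + id
    ("copy", ["msg"]),
    ("SINK:sendTextMessage", ["copy"]),  # leak if a tainted value reaches it
]

def find_leaks(statements):
    tainted, leaks = set(), []
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for lhs, rhs in statements:
            rhs_tainted = any(v.startswith("SOURCE:") or v in tainted for v in rhs)
            if lhs.startswith("SINK:"):
                if rhs_tainted and lhs not in leaks:
                    leaks.append(lhs)
            elif rhs_tainted and lhs not in tainted:
                tainted.add(lhs)
                changed = True
    return leaks

print(find_leaks(statements))  # ['SINK:sendTextMessage']
```

On this toy input the analysis reports the sink as leaking, because the device identifier flows through two assignments before reaching it.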

1,730 citations


Journal ArticleDOI
24 Oct 2014
TL;DR: This contribution provides a review of fundamental goals, development and future perspectives of driver assistance systems, and examines the progress driven by the use of exteroceptive sensors such as radar, video, or lidar in automated driving in urban traffic and in cooperative driving.
Abstract: This contribution provides a review of fundamental goals, development and future perspectives of driver assistance systems. Mobility is a fundamental desire of mankind. Virtually any society strives for safe and efficient mobility at low ecological and economic costs. Nevertheless, its technical implementation significantly differs among societies, depending on their culture and their degree of industrialization. A potential evolutionary roadmap for driver assistance systems is discussed. Emerging from systems based on proprioceptive sensors, such as ABS or ESC, we review the progress driven by the use of exteroceptive sensors such as radar, video, or lidar. While the ultimate goal of automated and cooperative traffic still remains a vision of the future, intermediate steps towards that aim can be realized through systems that mitigate or avoid collisions in selected driving situations. Research extends the state-of-the-art in automated driving in urban traffic and in cooperative driving, the latter addressing communication and collaboration between different vehicles, as well as cooperative vehicle operation by its driver and its machine intelligence. These steps are considered important for the interim period, until reliable unsupervised automated driving for all conceivable traffic situations becomes available. The prospective evolution of driver assistance systems will be stimulated by several technological, societal and market trends. The paper closes with a view on current research fields.

716 citations


Journal ArticleDOI
TL;DR: This work introduces the electric vehicle-routing problem with time windows and recharging stations (E-VRPTW), which incorporates the possibility of recharging at any of the available stations using an appropriate recharging scheme, and presents a hybrid heuristic that combines a variable neighborhood search algorithm with a tabu search heuristic.
Abstract: Driven by new laws and regulations concerning the emission of greenhouse gases, carriers are starting to use electric vehicles for last-mile deliveries. The limited battery capacities of these vehicles necessitate visits to recharging stations during delivery tours of industry-typical length, which have to be considered in the route planning to avoid inefficient vehicle routes with long detours. We introduce the electric vehicle-routing problem with time windows and recharging stations E-VRPTW, which incorporates the possibility of recharging at any of the available stations using an appropriate recharging scheme. Furthermore, we consider limited vehicle freight capacities as well as customer time windows, which are the most important constraints in real-world logistics applications. As a solution method, we present a hybrid heuristic that combines a variable neighborhood search algorithm with a tabu search heuristic. Tests performed on newly designed instances for the E-VRPTW as well as on benchmark instances of related problems demonstrate the high performance of the heuristic proposed as well as the positive effect of the hybridization.
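To make the battery constraint concrete, here is a minimal sketch, under assumed simplifications (full recharges at stations, linear consumption, no time windows or freight capacities), of the feasibility check a route evaluation in such a heuristic must perform; all identifiers and values are hypothetical.

```python
# Illustrative battery-feasibility check for a single route in an
# E-VRPTW-like setting. The paper's recharging scheme, customer time
# windows and vehicle freight capacities are deliberately not modeled.

def route_feasible(route, dist, battery_capacity, consumption_rate, stations):
    """route: ordered list of node ids starting and ending at the depot."""
    charge = battery_capacity
    for a, b in zip(route, route[1:]):
        charge -= consumption_rate * dist[(a, b)]
        if charge < 0:
            return False               # ran out of energy on arc (a, b)
        if b in stations:
            charge = battery_capacity  # assumed full recharge at a station
    return True

dist = {("D", "C1"): 40.0, ("C1", "S1"): 30.0,
        ("S1", "C2"): 50.0, ("C2", "D"): 35.0}
print(route_feasible(["D", "C1", "S1", "C2", "D"], dist, 100.0, 1.0, {"S1"}))  # True
```

A real E-VRPTW evaluation would additionally track arrival times against customer time windows, vehicle load, and the time spent recharging.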

695 citations


Journal ArticleDOI
TL;DR: It is discovered that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques, and a new objective function is derived that formalizes the median filtering heuristic, leading to a method that better preserves motion details.
Abstract: The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that "classical" flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. One key implementation detail is the median filtering of intermediate flow fields during optimization. While this improves the robustness of classical methods it actually leads to higher energy solutions, meaning that these methods are not optimizing the original objective function. To understand the principles behind this phenomenon, we derive a new objective function that formalizes the median filtering heuristic. This objective function includes a non-local smoothness term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that can better preserve motion details. To take advantage of the trend towards video in wide-screen format, we further introduce an asymmetric pyramid downsampling scheme that enables the estimation of longer range horizontal motions. The methods are evaluated on the Middlebury, MPI Sintel, and KITTI datasets using the same parameter settings.
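For orientation, the "classical" objective in the Horn and Schunck tradition that the paper revisits can be written schematically as

    E(\mathbf{u},\mathbf{v}) = \sum_{i,j} \Big\{ \rho_D\big(I_1(i,j) - I_2(i+u_{i,j},\, j+v_{i,j})\big) + \lambda \big[ \rho_S(u_{i,j}-u_{i+1,j}) + \rho_S(u_{i,j}-u_{i,j+1}) + \rho_S(v_{i,j}-v_{i+1,j}) + \rho_S(v_{i,j}-v_{i,j+1}) \big] \Big\}

where ρ_D and ρ_S are data and spatial penalty functions (quadratic in the original Horn and Schunck model, robust in later variants) and λ weights the regularization. This is a schematic reconstruction of the standard formulation rather than the paper's exact notation; the paper's new objective augments such an energy with a non-local smoothness term.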

623 citations


Journal ArticleDOI
TL;DR: It has been determined that the recombination rate is mainly governed by the selective contacts, which has important implications for the future optimization of perovskite solar cells.
Abstract: The effect of electron- and hole-selective contacts in the final cell performance of hybrid lead halide perovskite, CH3NH3PbI3, solar cells has been systematically analyzed by impedance spectroscopy. Complete cells with compact TiO2 and spiro-OMeTAD as electron- and hole-selective contacts have been compared with incomplete cells without one or both selective contacts to highlight the specific role of each contact. It has been described how selective contacts contribute to enhance the cell FF and how the hole-selective contact is mainly responsible for the high Voc in this kind of device. We have determined that the recombination rate is mainly governed by the selective contacts. This fact has important implications for the future optimization of perovskite solar cells. Finally, we have developed a method to analyze the results obtained, and it has been applied for three different electron-selecting materials: TiO2, ZnO, and CdS.

576 citations



Journal ArticleDOI
TL;DR: In this article, a detailed, three-dimensional hydrodynamic study of the neutrino-driven winds that emerge from the remnant of a neutron star merger is presented, and a lower limit on the expelled mass of 3.5 × 10⁻³ M⊙, large enough to be relevant for heavy element nucleosynthesis, is derived.
Abstract: We present a detailed, three-dimensional hydrodynamic study of the neutrino-driven winds that emerge from the remnant of a neutron star merger. Our simulations are performed with the Newtonian, Eulerian code FISH, augmented by a detailed, spectral neutrino leakage scheme that accounts for heating due to neutrino absorption in optically thin conditions. Consistent with the earlier, two-dimensional study of Dessart et al. (2009), we find that a strong baryonic wind is blown out along the original binary rotation axis within 100 milliseconds after the merger. We compute a lower limit on the expelled mass of 3.5 × 10⁻³ M⊙, large enough to be relevant for heavy element nucleosynthesis. The physical properties vary significantly between different wind regions. For example, due to stronger neutrino irradiation, the polar regions show substantially larger electron fractions than those at lower latitudes. This has its bearings on the nucleosynthesis: the polar ejecta produce interesting r-process contributions from A ≈ 80 to about 130, while the more neutron-rich, lower-latitude parts produce in addition also elements up to the third r-process peak near A ≈ 195. We also calculate the properties of electromagnetic transients that are powered by the radioactivity in the wind, in addition to the "macronova" transient that stems from the dynamic ejecta. The high-latitude (polar) regions produce UV/optical transients reaching luminosities up to 10⁴¹ erg s⁻¹, which peak around 1 day in optical and 0.3 days in bolometric luminosity. The lower-latitude regions, due to their contamination with high-opacity heavy elements, produce dimmer and more red signals, peaking after 2 days in optical and infrared. Our numerical experiments indicate that it will be difficult to infer the collapse time-scale of the hypermassive neutron star to a black hole based on the wind electromagnetic transient, at least for collapse time-scales larger than the wind production time-scale.

471 citations


Journal ArticleDOI
TL;DR: This work integrates a Random Forest classifier into a Conditional Random Field framework, a flexible approach for obtaining a reliable classification result even in complex urban scenes, and investigates the relevance of different features for the LiDAR points as well as for the interaction of neighbouring points.
Abstract: In this work we address the task of the contextual classification of an airborne LiDAR point cloud. For that purpose, we integrate a Random Forest classifier into a Conditional Random Field (CRF) framework. It is a flexible approach for obtaining a reliable classification result even in complex urban scenes. In this way, we benefit from the consideration of context on the one hand and from the opportunity to use a large amount of features on the other hand. Considering the interactions in our experiments increases the overall accuracy by 2%, though a larger improvement becomes apparent in the completeness and correctness of some of the seven classes discerned in our experiments. We compare the Random Forest approach to linear models for the computation of unary and pairwise potentials of the CRF, and investigate the relevance of different features for the LiDAR points as well as for the interaction of neighbouring points. In a second step, building objects are detected based on the classified point cloud. For that purpose, the CRF probabilities for the classes are plugged into a Markov Random Field as unary potentials, in which the pairwise potentials are based on a Potts model. The 2D binary building object masks are extracted and evaluated by the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction. The evaluation shows that the main buildings (larger than 50 m²) can be detected very reliably with a correctness larger than 96% and a completeness of 100%.
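Schematically, the two stages combine a CRF energy whose unary potentials come from the Random Forest class probabilities with a Potts-type pairwise term in the building-detection MRF (a standard reconstruction for orientation, not the paper's exact parameterization):

    E(\mathbf{y} \mid \mathbf{x}) = \sum_i \psi_i(y_i, \mathbf{x}) + \sum_{(i,j) \in \mathcal{N}} \psi_{ij}(y_i, y_j, \mathbf{x}), \qquad \psi_{ij}^{\text{Potts}}(y_i, y_j) = \gamma \,[\, y_i \neq y_j \,]

where N is the set of neighbouring points, [·] is the Iverson bracket, and γ controls the smoothing strength.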

455 citations


Proceedings Article
S. Chatrchyan, Khachatryan, Albert M. Sirunyan, Armen Tumasyan, +2,179 more · Institutions (201)
30 Jul 2014

409 citations


Journal ArticleDOI
TL;DR: This paper highlights the two awarded research contributions, which investigated different approaches for the fusion of hyperspectral and LiDAR data, including a combined unsupervised and supervised classification scheme, and a graph-based method for the fusion of spectral, spatial, and elevation information.
Abstract: The 2013 Data Fusion Contest organized by the Data Fusion Technical Committee (DFTC) of the IEEE Geoscience and Remote Sensing Society aimed at investigating the synergistic use of hyperspectral and Light Detection And Ranging (LiDAR) data. The data sets distributed to the participants during the Contest, a hyperspectral imagery and the corresponding LiDAR-derived digital surface model (DSM), were acquired by the NSF-funded Center for Airborne Laser Mapping over the University of Houston campus and its neighboring area in the summer of 2012. This paper highlights the two awarded research contributions, which investigated different approaches for the fusion of hyperspectral and LiDAR data, including a combined unsupervised and supervised classification scheme, and a graph-based method for the fusion of spectral, spatial, and elevation information.

379 citations


Journal ArticleDOI
01 Mar 2014-JOM
TL;DR: In this article, the authors discuss computational analysis methods typically used in atomistic modeling of crystalline materials and highlight recent developments that can provide better insights into processes at the atomic scale, including the classification of local atomic structures, the transition from atomistics to mesoscale and continuum-scale descriptions, and the automated identification of dislocations.
Abstract: This article discusses computational analysis methods typically used in atomistic modeling of crystalline materials and highlights recent developments that can provide better insights into processes at the atomic scale. Topics include the classification of local atomic structures, the transition from atomistics to mesoscale and continuum-scale descriptions, and the automated identification of dislocations in atomistic simulation data.
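As one concrete example of classifying local atomic structures, the centrosymmetry parameter is a widely used descriptor that is near zero in a perfect centrosymmetric lattice and grows near defects. The sketch below implements a simplified version with a greedy neighbor-pairing heuristic; production analysis tools use more careful pairing schemes, and nothing here is taken from the article's own code.

```python
# Simplified centrosymmetry parameter (CSP): sum |r_i + r_j|^2 over
# neighbor pairs chosen to be as nearly opposite as possible. Near zero
# for an undistorted centrosymmetric environment, large near defects.
import numpy as np

def centrosymmetry(neighbor_vectors):
    """neighbor_vectors: list of vectors to the nearest neighbors of an atom."""
    vecs = [np.asarray(v, float) for v in neighbor_vectors]
    csp = 0.0
    while len(vecs) > 1:
        v = vecs.pop(0)
        # greedily pair v with the remaining neighbor that is most opposite
        j = int(np.argmin([np.dot(v, w) for w in vecs]))
        w = vecs.pop(j)
        csp += float(np.dot(v + w, v + w))
    return csp

# Perfect simple-cubic-like environment: six neighbors along +/- x, y, z
perfect = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(centrosymmetry(perfect))  # 0.0 for the undistorted environment
```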

Proceedings ArticleDOI
01 Oct 2014
TL;DR: A novel approach for identifying argumentative discourse structures in persuasive essays by evaluating several classifiers and proposing novel feature sets including structural, lexical, syntactic and contextual features.
Abstract: In this paper, we present a novel approach for identifying argumentative discourse structures in persuasive essays. The structure of argumentation consists of several components (i.e. claims and premises) that are connected with argumentative relations. We consider this task in two consecutive steps. First, we identify the components of arguments using multiclass classification. Second, we classify a pair of argument components as either support or non-support for identifying the structure of argumentative discourse. For both tasks, we evaluate several classifiers and propose novel feature sets including structural, lexical, syntactic and contextual features. In our experiments, we obtain a macro F1-score of 0.726 for identifying argument components and 0.722 for argumentative relations.
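As a rough illustration of the first step (multiclass classification of argument components), the following hypothetical scikit-learn sketch uses only lexical n-gram features; the paper's actual feature sets (structural, lexical, syntactic and contextual) and training corpus are substantially richer, as is its set of component types.

```python
# Hypothetical sketch of argument component classification with a
# lexical-features-only pipeline; data and labels are made up.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_clauses = ["Cloning should be banned", "because it harms human dignity",
                 "Museums are vital", "since they preserve culture"]
train_labels  = ["claim", "premise", "claim", "premise"]  # toy labels

model = Pipeline([
    ("lex", TfidfVectorizer(ngram_range=(1, 2))),  # unigram/bigram features
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(train_clauses, train_labels)
print(model.predict(["Homework is useless", "because it causes stress"]))
```

The scores the paper reports are macro-averaged F1, i.e. sklearn.metrics.f1_score(y_true, y_pred, average="macro") computed over the component (or relation) classes.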

Posted Content
TL;DR: It is argued that the BISE community offers distinct and unique competencies that can be harnessed for significant research contributions to this field, and within this research gap three distinct streams are delineated.
Abstract: The business model concept, although a relatively new topic for research, has garnered growing attention over the past decade. Whilst it has been robustly defined, the concept has so far attracted very little substantive research. In the context of the wide-spread digitization of businesses and society at large, the logic inherent in a business model has become critical for business success and, hence, a focus for academic inquiry. The business model concept is identified as the missing link between business strategy, processes, and Information Technology (IT). The authors argue that the BISE community offers distinct and unique competencies (e.g., translating business strategies into IT systems, managing business and IT processes, etc.) that can be harnessed for significant research contributions to this field. Within this research gap three distinct streams are delineated, namely, business models in IT industries, IT enabled or digital business models, and IT support for developing and managing business models. For these streams, the authors outline the current state of the art, suggest critical research questions, and propose suitable research methodologies.

Journal Article
TL;DR: Natural Evolution Strategies (NES) as mentioned in this paper is a family of black-box optimization algorithms that use the natural gradient to update a parameterized search distribution in the direction of higher expected fitness.
Abstract: This paper presents Natural Evolution Strategies (NES), a recent family of black-box optimization algorithms that use the natural gradient to update a parameterized search distribution in the direction of higher expected fitness. We introduce a collection of techniques that address issues of convergence, robustness, sample complexity, computational complexity and sensitivity to hyperparameters. This paper explores a number of implementations of the NES family, such as general-purpose multi-variate normal distributions and separable distributions tailored towards search in high dimensional spaces. Experimental results show best published performance on various standard benchmarks, as well as competitive performance on others.
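A minimal sketch of the separable variant (SNES) on a toy sphere function conveys the core update: sample from the search distribution, compute rank-based utilities, and ascend the natural gradient with respect to the distribution parameters. The learning rates and utility scheme below are simplified choices, not the paper's exact recommendations.

```python
# Minimal separable NES (SNES) sketch: natural-gradient updates of the
# mean and per-dimension standard deviation of a Gaussian search
# distribution, guided by rank-based utilities.
import numpy as np

def snes(f, dim, iters=300, pop=20, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    eta_mu = 1.0
    eta_sigma = (3 + np.log(dim)) / (5 * np.sqrt(dim))
    for _ in range(iters):
        s = rng.standard_normal((pop, dim))    # standard-normal samples
        z = mu + sigma * s                     # candidate solutions
        order = np.argsort([f(x) for x in z])  # best (lowest) first
        u = np.zeros(pop)                      # rank-based utilities
        u[order] = np.linspace(1.0, -1.0, pop) / pop
        grad_mu = u @ s                        # natural gradient wrt mu
        grad_sigma = u @ (s**2 - 1.0)          # ... and wrt log(sigma)
        mu += eta_mu * sigma * grad_mu
        sigma *= np.exp(0.5 * eta_sigma * grad_sigma)
    return mu

print(snes(lambda x: np.sum(x**2), dim=10))    # should approach the origin
```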

Proceedings ArticleDOI
01 Jan 2014
TL;DR: SUSI, a novel machine-learning guided approach for identifying sources and sinks directly from the code of any Android API, is proposed, and it is shown that SUSI can reliably classify sources and sinks even in new, previously unseen Android versions and components like Google Glass or the Chromecast API.
Abstract: Today’s smartphone users face a security dilemma: many apps they install operate on privacy-sensitive data, although they might originate from developers whose trustworthiness is hard to judge. Researchers have addressed the problem with more and more sophisticated static and dynamic analysis tools as an aid to assess how apps use private user data. Those tools, however, rely on the manual configuration of lists of sources of sensitive data as well as sinks which might leak data to untrusted observers. Such lists are hard to come by. We thus propose SUSI, a novel machine-learning guided approach for identifying sources and sinks directly from the code of any Android API. Given a training set of hand-annotated sources and sinks, SUSI identifies other sources and sinks in the entire API. To provide more fine-grained information, SUSI further categorizes the sources (e.g., unique identifier, location information, etc.) and sinks (e.g., network, file, etc.). For Android 4.2, SUSI identifies hundreds of sources and sinks with over 92% accuracy, many of which are missed by current information-flow tracking tools. An evaluation of about 11,000 malware samples confirms that many of these sources and sinks are indeed used. We furthermore show that SUSI can reliably classify sources and sinks even in new, previously unseen Android versions and components like Google Glass or the Chromecast API.

Proceedings ArticleDOI
14 Apr 2014
TL;DR: This work describes mechanisms for secure exception handling and communication between protected modules, enabling seamless interoperability with untrusted operating systems and tasks, and presents the TrustLite security architecture for flexible, hardware-enforced isolation of software modules.
Abstract: Embedded systems are increasingly pervasive, interdependent and in many cases critical to our everyday life and safety. Tiny devices that cannot afford sophisticated hardware security mechanisms are embedded in complex control infrastructures, medical support systems and entertainment products [51]. As such devices are increasingly subject to attacks, new hardware protection mechanisms are needed to provide the required resilience and dependability at low cost. In this work, we present the TrustLite security architecture for flexible, hardware-enforced isolation of software modules. We describe mechanisms for secure exception handling and communication between protected modules, enabling seamless interoperability with untrusted operating systems and tasks. TrustLite scales from providing a simple protected firmware runtime to advanced functionality such as attestation and trusted execution of userspace tasks. Our FPGA prototype shows that these capabilities are achievable even on low-cost embedded systems.

Proceedings Article
20 Aug 2014
TL;DR: This paper provides the first comprehensive security analysis of various CFI solutions, and shows that with bare minimum assumptions, Turing-complete and real-world ROP attacks can still be launched even when the strictest of enforcement policies is in use.
Abstract: Return-oriented programming (ROP) offers a robust attack technique that has, not surprisingly, been extensively used to exploit bugs in modern software programs (e.g., web browsers and PDF readers). ROP attacks require no code injection, and have already been shown to be powerful enough to bypass fine-grained memory randomization (ASLR) defenses. To counter this ingenious attack strategy, several proposals for enforcement of (coarse-grained) control-flow integrity (CFI) have emerged. The key argument put forth by these works is that coarse-grained CFI policies are sufficient to prevent ROP attacks. As this reasoning has gained traction, ideas put forth in these proposals have even been incorporated into coarse-grained CFI defenses in widely adopted tools (e.g., Microsoft's EMET framework). In this paper, we provide the first comprehensive security analysis of various CFI solutions (covering kBouncer, ROPecker, CFI for COTS binaries, ROPGuard, and Microsoft EMET 4.1). A key contribution is in demonstrating that these techniques can be effectively undermined, even under weak adversarial assumptions. More specifically, we show that with bare minimum assumptions, Turing-complete and real-world ROP attacks can still be launched even when the strictest of enforcement policies is in use. To do so, we introduce several new ROP attack primitives, and demonstrate the practicality of our approach by transforming existing real-world exploits into more stealthy attacks that bypass coarse-grained CFI defenses.

Posted Content
TL;DR: A review of the literature on variants and extensions of the standard location-routing problem published since the last survey by Nagy and Salhi appeared in 2006 can be found in this article.
Abstract: This is a review of the literature on variants and extensions of the standard location-routing problem published since the last survey, by Nagy and Salhi, appeared in 2006. We propose a classification of problem variants, provide concise paper excerpts that convey the central ideas of each work, discuss recent developments in the field, and list promising topics for further research.

Journal ArticleDOI
TL;DR: The International Axion Observatory (IAXO), a fourth-generation axion helioscope described in this paper, will be about 4-5 orders of magnitude more sensitive than CAST, currently the most powerful axion helioscope, reaching sensitivity to axion-photon couplings down to a few × 10⁻¹² GeV⁻¹ and thus probing a large fraction of the currently unexplored axion and ALP parameter space.
Abstract: The International Axion Observatory (IAXO) will be a fourth-generation axion helioscope. As its primary physics goal, IAXO will look for axions or axion-like particles (ALPs) originating in the Sun via the Primakoff conversion of the solar plasma photons. In terms of signal-to-noise ratio, IAXO will be about 4-5 orders of magnitude more sensitive than CAST, currently the most powerful axion helioscope, reaching sensitivity to axion-photon couplings down to a few × 10⁻¹² GeV⁻¹ and thus probing a large fraction of the currently unexplored axion and ALP parameter space. IAXO will also be sensitive to solar axions produced by mechanisms mediated by the axion-electron coupling gae, with sensitivity, for the first time, to values of gae not previously excluded by astrophysics. With several other possible physics cases, IAXO has the potential to serve as a multi-purpose facility for generic axion and ALP research in the next decade. In this paper we present the conceptual design of IAXO, which follows the layout of an enhanced axion helioscope, based on a purpose-built 20 m-long 8-coil toroidal superconducting magnet. All eight 60 cm-diameter magnet bores are equipped with focusing x-ray optics, able to focus the signal photons into ~0.2 cm² spots that are imaged by ultra-low-background Micromegas x-ray detectors. The magnet is built into a structure with elevation and azimuth drives that will allow for solar tracking for ~12 h each day.
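For intuition, detection rests on coherent Primakoff conversion of solar axions into x-ray photons inside the magnet bores. In the massless-axion limit the conversion probability, and hence the signal rate, scale roughly as

    P_{a\to\gamma} \approx \left( \frac{g_{a\gamma} B L}{2} \right)^2, \qquad \text{signal rate} \propto g_{a\gamma}^4 \, B^2 L^2 A

(natural units, with B the magnetic field, L the bore length and A the aperture area), which is why a long, strong, large-aperture magnet dominates the design. These are standard helioscope scaling relations reconstructed here for context, not formulas quoted from the paper.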

Book ChapterDOI
06 Sep 2014
TL;DR: This work presents the first comprehensive texturing framework for large-scale, real-world 3D reconstructions, and addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders.
Abstract: 3D reconstruction pipelines using structure-from-motion and multi-view stereo techniques are today able to reconstruct impressive, large-scale geometry models from images but do not yield textured results. Current texture creation methods are unable to handle the complexity and scale of these models. We therefore present the first comprehensive texturing framework for large-scale, real-world 3D reconstructions. Our method addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders (e.g., moving plants or pedestrians). Using the proposed technique, we are able to texture datasets that are several orders of magnitude larger and far more challenging than shown in related work.
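A central ingredient of such pipelines is selecting, per mesh face, the input view to texture it from. The hypothetical sketch below scores candidate views by projected triangle area times an image sharpness factor, a plausible stand-in for the paper's actual data term, which is not reproduced here.

```python
# Hypothetical per-face view selection: prefer views in which the face
# projects to a large, sharp image region. All data is made up.
import numpy as np

def projected_area(tri):
    """Area of a triangle projected into image space (2D vertices)."""
    a, b, c = (np.asarray(p, float) for p in tri)
    ab, ac = b - a, c - a
    return 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])

def select_view(face_projections, sharpness):
    """face_projections: view id -> projected 2D triangle;
       sharpness: view id -> scalar image sharpness score."""
    scores = {v: projected_area(t) * sharpness[v]
              for v, t in face_projections.items()}
    return max(scores, key=scores.get)

projs = {"img0": [(0, 0), (10, 0), (0, 8)],  # large, frontal projection
         "img1": [(0, 0), (2, 0), (0, 1)]}   # small, grazing projection
print(select_view(projs, {"img0": 0.9, "img1": 1.0}))  # -> "img0"
```

Scoring by projected area naturally penalizes grazing views and low-resolution images, while the sharpness factor can demote out-of-focus or blurred photographs.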

Proceedings ArticleDOI
03 Nov 2014
TL;DR: This paper describes the design and implementation of a framework that significantly eases the evaluation of NILM algorithms using different data sets and parameter configurations, and demonstrates the use of the presented framework and data set through an extensive performance evaluation of four selected NILM algorithms.
Abstract: Non-intrusive load monitoring (NILM) is a popular approach to estimate appliance-level electricity consumption from aggregate consumption data of households. Assessing the suitability of NILM algorithms to be used in real scenarios is however still cumbersome, mainly because there exists no standardized evaluation procedure for NILM algorithms and the availability of comprehensive electricity consumption data sets on which to run such a procedure is still limited. This paper contributes to the solution of this problem by: (1) outlining the key dimensions of the design space of NILM algorithms; (2) presenting a novel, comprehensive data set to evaluate the performance of NILM algorithms; (3) describing the design and implementation of a framework that significantly eases the evaluation of NILM algorithms using different data sets and parameter configurations; (4) demonstrating the use of the presented framework and data set through an extensive performance evaluation of four selected NILM algorithms. Both the presented data set and the evaluation framework are made publicly available.
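As a sketch of what such an evaluation framework computes per appliance, the following shows two commonly used NILM performance measures, RMSE of the estimated power trace and F1-score of inferred on/off states; the threshold and data are made up, and the paper's framework may use a different metric set.

```python
# Two common per-appliance NILM metrics: RMSE of the estimated power
# trace and F1-score of on/off state detection above a power threshold.
import numpy as np

def nilm_metrics(true_power, est_power, on_threshold=5.0):
    true_power = np.asarray(true_power, float)
    est_power = np.asarray(est_power, float)
    rmse = float(np.sqrt(np.mean((true_power - est_power) ** 2)))
    t_on, e_on = true_power > on_threshold, est_power > on_threshold
    tp = np.sum(t_on & e_on)            # correctly detected "on" samples
    fp = np.sum(~t_on & e_on)
    fn = np.sum(t_on & ~e_on)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return rmse, f1

print(nilm_metrics([0, 0, 60, 62, 0], [0, 8, 55, 60, 0]))  # (~4.31, 0.8)
```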


Journal ArticleDOI
TL;DR: In this paper, the long-term evolution of the dynamic ejecta of neutron star mergers for up to 100 years and over a density range of roughly 40 orders of magnitude was studied.
Abstract: We follow the long-term evolution of the dynamic ejecta of neutron star mergers for up to 100 years and over a density range of roughly 40 orders of magnitude. We include the nuclear energy input from the freshly synthesized, radioactively decaying nuclei in our simulations and study its effects on the remnant dynamics. Although the nuclear heating substantially alters the long-term evolution, we find that running nuclear networks over purely hydrodynamic simulations (i.e. without heating) yields actually acceptable nucleosynthesis results. The main dynamic effect of the radioactive heating is to quickly smooth out inhomogeneities in the initial mass distribution, subsequently the evolution proceeds self-similarly and after 100 years the remnant still carries the memory of the initial binary mass ratio. We also explore the nucleosynthetic yields for two mass ejection channels. The dynamic ejecta very robustly produce 'strong' r-process elements with A > 130 with a pattern that is essentially independent of the details of the merging system. From a simple model we find that neutrino-driven winds yield 'weak' r-process contributions with 50 < A < 130 whose abundance patterns vary substantially between different merger cases. This is because their electron fraction, set by the ratio of neutrino luminosities, varies considerably from case to case. Such winds do not produce any Ni-56, but a range of radioactive isotopes that are long-lived enough to produce a second, radioactively powered electromagnetic transient in addition to the 'macronova' from the dynamic ejecta. While our wind model is very simple, it nevertheless demonstrates the potential of such neutrino-driven winds for electromagnetic transients and it motivates further, more detailed neutrino-hydrodynamic studies. The properties of the mentioned transients are discussed in more detail in a companion paper.

Journal ArticleDOI
Betty Abelev, Jaroslav Adam, Dagmar Adamová, Madan M. Aggarwal, +989 more · Institutions (101)
TL;DR: In this paper, the authors measured the transverse momentum spectra of π±, K± and p(p̄) up to p_T = 20 GeV/c at mid-rapidity in pp, peripheral (60-80%) and central (0-5%) Pb-Pb collisions.

Proceedings Article
01 Aug 2014
TL;DR: An annotation scheme that includes the annotation of claims and premises as well as support and attack relations for capturing the structure of argumentative discourse is proposed.
Abstract: In this paper, we present a novel approach to model arguments, their components and relations in persuasive essays in English. We propose an annotation scheme that includes the annotation of claims and premises as well as support and attack relations for capturing the structure of argumentative discourse. We further conduct a manual annotation study with three annotators on 90 persuasive essays. The obtained inter-rater agreement of α_U = 0.72 for argument components and α = 0.81 for argumentative relations indicates that the proposed annotation scheme successfully guides annotators to substantial agreement. The final corpus and the annotation guidelines are freely available to encourage future research in argument recognition.
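For context, Krippendorff-style agreement coefficients as used here have the general form

    \alpha = 1 - \frac{D_o}{D_e}

the ratio of observed to expected disagreement, with α_U the unitized variant for freely annotated spans; values above roughly 0.67 are often taken as acceptable agreement. This is the standard definition, supplied for orientation rather than quoted from the paper.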

Journal ArticleDOI
12 Dec 2014-EPL
TL;DR: The anomalous Hall effect is investigated theoretically by employing density functional calculations for the non-collinear antiferromagnetic order of the hexagonal compounds Mn3Ge and Mn3Sn using various planar triangular magnetic configurations as well as unexpected non-planar configurations as mentioned in this paper.
Abstract: The anomalous Hall effect is investigated theoretically by employing density functional calculations for the non-collinear antiferromagnetic order of the hexagonal compounds Mn3Ge and Mn3Sn using various planar triangular magnetic configurations as well as unexpected non-planar configurations. The former give rise to anomalous Hall conductivities (AHC) that are found to be extremely anisotropic. For the planar cases the AHC is connected with Weyl points in the energy-band structure. If this case were observable in Mn3Ge, a large AHC of about should be expected. However, in Mn3Ge it is the non-planar configuration that is energetically favored, in which case it gives rise to an AHC of . The non-planar configuration allows a quantitative evaluation of the topological Hall effect that is seen to determine this value of to a large extent. For Mn3Sn it is the planar configurations that are predicted to be observable. In this case the AHC can be as large as .
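For context, the intrinsic anomalous Hall conductivity obtained from such density functional calculations is the Brillouin-zone integral of the Berry curvature over the occupied bands, schematically

    \sigma_{\alpha\beta} = -\frac{e^2}{\hbar} \sum_n \int_{\mathrm{BZ}} \frac{d^3k}{(2\pi)^3} \, f_n(\mathbf{k}) \, \Omega_n^{\alpha\beta}(\mathbf{k})

with f_n the occupation of band n and Ω_n its Berry curvature. This is the standard formula of the field, reproduced for orientation, not an equation taken from the paper.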

Journal ArticleDOI
Betty Abelev, Jaroslav Adam, Dagmar Adamová, Madan M. Aggarwal, +1,065 more · Institutions (103)
TL;DR: In this paper, the authors proposed an ultra-light, high-resolution Inner Tracking System (ITS) based on monolithic CMOS pixel detectors for detection of heavy-flavour hadrons, and of thermal photons and low-mass di-electrons emitted by the Quark-Gluon Plasma (QGP) at the CERN LHC (Large Hadron Collider).
Abstract: ALICE (A Large Ion Collider Experiment) is studying the physics of strongly interacting matter, and in particular the properties of the Quark–Gluon Plasma (QGP), using proton–proton, proton–nucleus and nucleus–nucleus collisions at the CERN LHC (Large Hadron Collider). The ALICE Collaboration is preparing a major upgrade of the experimental apparatus, planned for installation in the second long LHC shutdown in the years 2018–2019. A key element of the ALICE upgrade is the construction of a new, ultra-light, high-resolution Inner Tracking System (ITS) based on monolithic CMOS pixel detectors. The primary focus of the ITS upgrade is on improving the performance for detection of heavy-flavour hadrons, and of thermal photons and low-mass di-electrons emitted by the QGP. With respect to the current detector, the new Inner Tracking System will significantly enhance the determination of the distance of closest approach to the primary vertex, the tracking efficiency at low transverse momenta, and the read-out rate capabilities. This will be obtained by seven concentric detector layers based on a 50 μm thick CMOS pixel sensor with a pixel pitch of about 30 × 30 μm². This document, submitted to the LHCC (LHC experiments Committee) in September 2013, presents the design goals, a summary of the R&D activities, with focus on the technical implementation of the main detector components, and the projected detector and physics performance.

Journal ArticleDOI
TL;DR: It is suggested that local active information storage (LAIS) will be a useful quantity to test theories of cortical function, such as predictive coding, when measured on a local scale in time and space in voltage sensitive dye imaging data from area 18 of the cat.
Abstract: Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing was hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of local active information storage in neural data has been made to date. Here we measure local active information storage on a local scale in time and space in voltage sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity to test theories of cortical function, such as predictive coding.
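For orientation, local active information storage in this line of work quantifies how much of the next observation of a process X is predictable from its own length-k past,

    a_X(n+1) = \log_2 \frac{p\!\left(x_{n+1} \mid x_n^{(k)}\right)}{p(x_{n+1})}

where x_n^{(k)} denotes the k most recent samples; positive values indicate that the local past is actively in use for predicting the next sample. This is the usual definition from the information-dynamics framework the paper builds on; the paper's exact estimator and notation may differ.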

Journal ArticleDOI
15 Dec 2014-Energy
TL;DR: A system that uses supervised machine learning techniques to automatically estimate specific “characteristics” of a household from its electricity consumption, which paves the way for targeted energy efficiency programs and other services that benefit from improved customer insights is developed.

Journal ArticleDOI
TL;DR: In this article, the basic plastic joining principles for force- and form-closed joints as well as for solid state welds are discussed along with their specific potentials and limitations, and future trends in joining by forming based upon current research developments are highlighted.