Author

Martin Bichler

Bio: Martin Bichler is an academic researcher from Technische Universität München. The author has contributed to research on topics including common value auctions and quantum dots. The author has an h-index of 44 and has co-authored 334 publications receiving 12,141 citations. Previous affiliations of Martin Bichler include IBM and the University of California, Berkeley.


Papers
Journal ArticleDOI
08 Aug 2002-Nature
TL;DR: It is demonstrated that coherent optical excitations in the quantum-dot two-level system can be converted into deterministic photocurrents, and it is found that this device can function as an optically triggered single-electron turnstile.
Abstract: Present-day information technology is based mainly on incoherent processes in conventional semiconductor devices. To realize concepts for future quantum information technologies, which are based on coherent phenomena, a new type of 'hardware' is required. Semiconductor quantum dots are promising candidates for the basic device units for quantum information processing. One approach is to exploit optical excitations (excitons) in quantum dots. It has already been demonstrated that coherent manipulation between two excitonic energy levels--via so-called Rabi oscillations--can be achieved in single quantum dots by applying electromagnetic fields. Here we make use of this effect by placing an InGaAs quantum dot in a photodiode, which essentially connects it to an electric circuit. We demonstrate that coherent optical excitations in the quantum-dot two-level system can be converted into deterministic photocurrents. For optical excitation with so-called pi-pulses, which completely invert the two-level system, the current is given by I = fe, where f is the repetition frequency of the experiment and e is the elementary charge. We find that this device can function as an optically triggered single-electron turnstile.
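A minimal numerical sketch of the I = fe relation described in the abstract; the repetition frequency used here is an assumed, illustrative value, not one reported in the paper.

```python
# Illustrative sketch of the turnstile relation I = f * e from the abstract.
# The repetition frequency below is an assumption for illustration only.
ELEMENTARY_CHARGE = 1.602176634e-19  # elementary charge e, in coulombs

def turnstile_current(repetition_frequency_hz: float) -> float:
    """Photocurrent when each pi-pulse injects exactly one electron: I = f * e."""
    return repetition_frequency_hz * ELEMENTARY_CHARGE

f_assumed = 80e6  # Hz; a typical pulsed-laser repetition rate (assumed value)
print(f"I = {turnstile_current(f_assumed) * 1e12:.2f} pA")  # about 12.8 pA
```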

702 citations

Journal ArticleDOI
01 Aug 2018
TL;DR: Robotic Process Automation is an umbrella term for tools that operate on the user interface of other computer systems in the way a human would, aiming to replace people with automation done in an “outside-in” manner.
Abstract: A foundational question for many BISE (Business & Information Systems Engineering) authors and readers is “What should be automated and what should be done by humans?” This question is not new. However, developments in data science, machine learning, and artificial intelligence force us to revisit this question continuously. Robotic Process Automation (RPA) is one of these developments. RPA is an umbrella term for tools that operate on the user interface of other computer systems in the way a human would. RPA aims to replace people with automation done in an “outside-in” manner. This differs from the classical “inside-out” approach to improving information systems. Unlike traditional workflow technology, the information system remains unchanged. Gartner defines RPA as follows: “RPA tools perform [if, then, else] statements on structured data, typically using a combination of user interface interactions, or by connecting to APIs to drive client servers, mainframes or HTML code. An RPA tool operates by mapping a process in the RPA tool language for the software robot to follow, with runtime allocated to execute the script by a control dashboard.” [9]. Hence, RPA tools aim to reduce the burden of repetitive, simple tasks on employees.
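As a hedged illustration of the quoted definition (if/then/else statements applied to structured data), the toy rule below routes invoice records the way a software robot might; the field names, thresholds, and routing labels are hypothetical and are not taken from any real RPA tool or from the paper.

```python
# Hedged sketch only: a toy "[if, then, else] on structured data" rule in the
# spirit of the Gartner definition quoted above. All fields are hypothetical.
Invoice = dict  # a structured record, e.g. one row of a spreadsheet export

def route_invoice(invoice: Invoice) -> str:
    """Mimic a software robot's decision rule for a single record."""
    if invoice["amount"] <= 1000 and invoice["vendor_known"]:
        return "auto-approve"         # then-branch: no human needed
    elif invoice["amount"] <= 10000:
        return "queue-for-clerk"      # else-if: routine manual check
    else:
        return "escalate-to-manager"  # else: exception handling

invoices = [
    {"id": 1, "amount": 250, "vendor_known": True},
    {"id": 2, "amount": 4800, "vendor_known": False},
    {"id": 3, "amount": 25000, "vendor_known": True},
]
for inv in invoices:
    print(inv["id"], route_invoice(inv))
```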

417 citations

Journal ArticleDOI
01 Dec 2008
TL;DR: A significant lift is found when using central customers in message diffusion, but differences among the various centrality measures are also found, depending on the underlying network topology and diffusion process.
Abstract: Viral marketing refers to marketing techniques that use social networks to produce increases in brand awareness through self-replicating viral diffusion of messages, analogous to the spread of pathological and computer viruses. The idea has successfully been used by marketers to reach a large number of customers rapidly. If data about the customer network is available, centrality measures provide a structural measure that can be used in decision support systems to select influencers and spread viral marketing campaigns in a customer network. Usage stimulation and churn management are examples of DSS applications, where centrality of customers does play a role. The literature on network theory describes a large number of such centrality measures. A critical question is which of these measures is best to select an initial set of customers for a marketing campaign, in order to achieve a maximum dissemination of messages. In this paper, we present the results of computational experiments based on call data from a telecom company to compare different centrality measures for the diffusion of marketing messages. We found a significant lift when using central customers in message diffusion, but also found differences in the various centrality measures depending on the underlying network topology and diffusion process.
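A hedged sketch of the seed-selection step described above, using standard centrality measures from the networkx library on a synthetic graph; the paper itself used real telecom call data, so the graph model, sizes, and k are assumptions made only for illustration.

```python
# Hedged sketch, not the authors' implementation: rank customers by different
# centrality measures and pick the top-k as seeds for a viral campaign.
import networkx as nx

# Synthetic stand-in for a customer call network (assumption, illustration only)
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

measures = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def top_k_seeds(scores: dict, k: int = 10) -> list:
    """Return the k most central customers as campaign seeds."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

for name, scores in measures.items():
    print(name, top_k_seeds(scores))
```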

398 citations

Journal ArticleDOI
TL;DR: This paper presents decision models to optimally allocate source servers to physical target servers while considering real-world constraints and presents a heuristic to address large-scale server consolidation projects.
Abstract: Today's data centers offer IT services mostly hosted on dedicated physical servers. Server virtualization provides a technical means for server consolidation. Thus, multiple virtual servers can be hosted on a single server. Server consolidation describes the process of combining the workloads of several different servers on a set of target servers. We focus on server consolidation with dozens or hundreds of servers, which can regularly be found in enterprise data centers. Cost saving is among the key drivers for such projects. This paper presents decision models to optimally allocate source servers to physical target servers while considering real-world constraints. Our central model is proven to be NP-hard; therefore, besides an exact solution method, a heuristic is presented to address large-scale server consolidation projects. In addition, a preprocessing method for server load data is introduced that allows quality-of-service levels to be taken into account. Extensive experiments were conducted based on a large set of server load data from a data center provider, focusing on managerial concerns about which types of problems can be solved. Results show that, on average, server savings of 31 percent can be achieved merely by taking cycles in the server workload into account.
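The allocation problem described above is a bin-packing-style assignment. The sketch below shows a generic first-fit-decreasing heuristic under assumed single-dimension loads; it is not the paper's actual decision models, heuristic, or workload preprocessing.

```python
# Hedged sketch: first-fit-decreasing assignment of source servers to
# capacity-limited target servers (a generic bin-packing heuristic).
def consolidate(source_loads: list[float], target_capacity: float) -> list[list[int]]:
    """Return a list of target servers, each holding indices of assigned sources."""
    order = sorted(range(len(source_loads)), key=lambda i: source_loads[i], reverse=True)
    targets: list[list[int]] = []   # assigned source indices per target server
    remaining: list[float] = []     # remaining capacity per target server
    for i in order:
        load = source_loads[i]
        for t, free in enumerate(remaining):
            if load <= free:                  # first target with enough headroom
                targets[t].append(i)
                remaining[t] -= load
                break
        else:                                 # no target fits: open a new one
            targets.append([i])
            remaining.append(target_capacity - load)
    return targets

# Example: peak CPU loads (cores) of source servers, 8-core targets (assumed numbers)
print(consolidate([3.2, 1.1, 5.0, 2.4, 0.8, 4.5, 2.0], target_capacity=8.0))
```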

359 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
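A hedged sketch of the mail-filtering example from the abstract, using a simple bag-of-words classifier from scikit-learn; the tiny message set and labels are invented purely for illustration, whereas a real filter would learn from the user's own accept/reject decisions.

```python
# Hedged sketch: a minimal learned mail filter in the spirit of the example above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "limited offer buy now cheap meds",       # rejected by the user
    "win a free prize click here",            # rejected by the user
    "meeting moved to 3pm see agenda",        # kept by the user
    "draft of the quarterly report attached", # kept by the user
]
labels = ["reject", "reject", "keep", "keep"]

# Bag-of-words features fed into a naive Bayes classifier
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free prize offer now"]))    # likely 'reject'
print(filter_model.predict(["agenda for the meeting"]))  # likely 'keep'
```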

13,246 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England), revealing a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly sorted, chaotic, mud-supported floatstones. These

9,929 citations

Book
01 Jan 2009

8,216 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.
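For readers who want to experiment alongside the survey, the snippet below generates instances of the classical G(n, p) model and a preferential-attachment ("WWW-like") model with networkx; the sizes and parameters are arbitrary illustrative choices, not taken from the paper.

```python
# Hedged sketch: instantiating two standard random-graph models with networkx.
import networkx as nx

gnp = nx.gnp_random_graph(n=500, p=0.01, seed=1)    # Erdos-Renyi G(n, p)
ba = nx.barabasi_albert_graph(n=500, m=2, seed=1)   # preferential attachment, web-like degrees

print("G(n,p): nodes", gnp.number_of_nodes(), "edges", gnp.number_of_edges())
print("BA:     nodes", ba.number_of_nodes(), "edges", ba.number_of_edges())
```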

7,116 citations