
Showing papers on "Systems architecture published in 2012"


Journal ArticleDOI
TL;DR: A general, unifying model is proposed to capture the different aspects of an IFP system and use it to provide a complete and precise classification of the systems and mechanisms proposed so far.
Abstract: A large number of distributed applications require continuous and timely processing of information as it flows from the periphery to the center of the system. Examples include intrusion detection systems, which analyze network traffic in real time to identify possible attacks; environmental monitoring applications, which process raw data coming from sensor networks to identify critical situations; and applications performing online analysis of stock prices to identify trends and forecast future values. Traditional DBMSs, which need to store and index data before processing it, can hardly fulfill the requirements of timeliness coming from such domains. Accordingly, during the last decade, different research communities developed a number of tools, which we collectively call information flow processing (IFP) systems, to support these scenarios. They differ in their system architecture, data model, rule model, and rule language. In this article, we survey these systems to help researchers, who often come from different backgrounds, understand how the various approaches they adopt may complement each other. In particular, we propose a general, unifying model to capture the different aspects of an IFP system and use it to provide a complete and precise classification of the systems and mechanisms proposed so far.
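The timeliness requirement can be illustrated with a toy IFP-style operator that processes each item as it arrives rather than storing and indexing it first. The sketch below is invented for illustration (the class, names, and threshold are not from any surveyed system): a rolling average over a sliding window, the kind of primitive on which stream-processing rules are built.

```python
from collections import deque

class SlidingWindowAverage:
    """Toy stream operator: consumes readings as they flow in and
    emits a rolling average over the last `size` values."""
    def __init__(self, size):
        self.window = deque(maxlen=size)

    def push(self, value):
        # Process in-flight: no storage or indexing beyond the window.
        self.window.append(value)
        return sum(self.window) / len(self.window)

def alert(op, value, threshold):
    """A rule fires when the rolling average crosses a threshold,
    e.g. flagging anomalous traffic volume in real time."""
    return op.push(value) > threshold
```

A hypothetical intrusion-detection rule would then be a single predicate evaluated on every arriving measurement, rather than a query over stored data.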

918 citations


Journal ArticleDOI
TL;DR: This paper presents a generic system architecture for the proposed knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes, and describes the underlying ontology-based recognition process.
Abstract: This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL), upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved, and the average recognition runtime per recognition operation was measured as 2.5 seconds.
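Subsumption-based classification of this kind can be sketched very roughly: if each activity class is defined by the sensor observations it requires, an observed situation matches every class whose definition it covers, and the largest matching definition gives the fine-grained answer while smaller ones give coarse-grained answers. The ontology fragment and all names below are invented for illustration; they are not the paper's ontology or algorithm.

```python
# Hypothetical ADL ontology fragment: activity class -> required observations.
ADL_ONTOLOGY = {
    "MakeDrink":  {"kitchen", "cup"},
    "MakeTea":    {"kitchen", "cup", "teabag", "kettle"},
    "MakeCoffee": {"kitchen", "cup", "coffee", "kettle"},
}

def classify(observations):
    """Return all activities whose definitions are subsumed by the
    observations, most specific (largest definition) first."""
    matches = [a for a, required in ADL_ONTOLOGY.items()
               if required <= observations]
    return sorted(matches, key=lambda a: len(ADL_ONTOLOGY[a]), reverse=True)
```

With full observations the most specific class wins (fine-grained recognition); with partial observations only the coarser class matches.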

558 citations


Journal ArticleDOI
TL;DR: An eco-routing navigation system that determines the most eco-friendly route between a trip origin and a destination is presented, and example results are presented to prove the validity of the eco-routing concept and to demonstrate the operability of the developed eco-routing navigation system.
Abstract: Due to increased public awareness on global climate change and other energy and environmental problems, a variety of strategies are being developed and used to reduce the energy consumption and environmental impact of roadway travel. In advanced traveler information systems, recent efforts have been made in developing a new navigation concept called “eco-routing,” which finds a route that requires the least amount of fuel and/or produces the least amount of emissions. This paper presents an eco-routing navigation system that determines the most eco-friendly route between a trip origin and a destination. It consists of the following four components: 1) a Dynamic Roadway Network database, which is a digital map of a roadway network that integrates historical and real-time traffic information from multiple data sources through an embedded data fusion algorithm; 2) an energy/emissions operational parameter set, which is a compilation of energy/emission factors for a variety of vehicle types under various roadway characteristics and traffic conditions; 3) a routing engine, which contains shortest path algorithms used for optimal route calculation; and 4) user interfaces that receive origin-destination inputs from users and display route maps to the users. Each of the system components and the system architecture are described. Example results are also presented to prove the validity of the eco-routing concept and to demonstrate the operability of the developed eco-routing navigation system. In addition, current limitations of the system and areas for future improvements are discussed.
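The routing engine's role can be pictured as an ordinary shortest-path search in which edge weights are fuel-cost estimates (drawn from an energy/emissions parameter set) rather than distances or travel times. The following minimal sketch uses Dijkstra's algorithm; the graph, node names, and costs are invented for illustration and this is not the paper's implementation.

```python
import heapq

def eco_route(graph, origin, dest):
    """Dijkstra over a road graph whose edge weights are estimated
    fuel costs. `graph` maps node -> list of (neighbor, fuel_cost)."""
    dist = {origin: 0.0}
    prev = {}
    pq = [(0.0, origin)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the eco-friendly path from dest back to origin.
    path, node = [], dest
    while node != origin:
        path.append(node)
        node = prev[node]
    path.append(origin)
    return list(reversed(path)), dist[dest]
```

Note that the fuel-optimal route may differ from the distance-optimal one: a longer route through free-flowing traffic can cost less fuel than a shorter congested one, which is exactly what the dynamic roadway network database would capture in the edge weights.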

281 citations


Posted Content
TL;DR: Vertica demonstrates a modern commercial RDBMS system that presents a classical relational interface while at the same time achieving the high performance expected from modern "web scale" analytic systems by making appropriate architectural choices.
Abstract: This paper describes the system architecture of the Vertica Analytic Database (Vertica), a commercialization of the design of the C-Store research prototype. Vertica demonstrates a modern commercial RDBMS system that presents a classical relational interface while at the same time achieving the high performance expected from modern "web scale" analytic systems by making appropriate architectural choices. Vertica is also an instructive lesson in how academic systems research can be directly commercialized into a successful product.

272 citations


Journal ArticleDOI
TL;DR: This architecture can encapsulate the system functionality, assure interoperability between various components, allow the integration of different energy sources, and ease maintenance and upgrading; it also allows seamless integration of diverse techniques for online operation control, optimal scheduling, and dynamic pricing.
Abstract: This paper presents a system architecture for load management in smart buildings which enables autonomous demand side load management in the smart grid. Being of a layered structure composed of three main modules for admission control, load balancing, and demand response management, this architecture can encapsulate the system functionality, assure the interoperability between various components, allow the integration of different energy sources, and ease maintenance and upgrading. Hence it is capable of handling autonomous energy consumption management for systems with heterogeneous dynamics in multiple time-scales and allows seamless integration of diverse techniques for online operation control, optimal scheduling, and dynamic pricing. The design of a home energy manager based on this architecture is illustrated and the simulation results with Matlab/Simulink confirm the viability and efficiency of the proposed framework.
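The interplay of the admission-control and load-balancing layers can be sketched in a few lines. The class, appliance names, and capacity below are invented for illustration; this is a toy stand-in for the paper's home energy manager, not its implementation.

```python
class HomeEnergyManager:
    """Toy layered load manager: the admission-control layer admits a
    load only if total demand stays within capacity; the
    load-balancing layer defers deferrable loads instead of
    rejecting them outright (e.g. to an off-peak slot)."""
    def __init__(self, capacity_kw):
        self.capacity_kw = capacity_kw
        self.running_kw = 0.0
        self.deferred = []

    def request(self, appliance, power_kw, deferrable=False):
        # Admission control layer.
        if self.running_kw + power_kw <= self.capacity_kw:
            self.running_kw += power_kw
            return "admitted"
        # Load balancing layer: defer if the load tolerates delay.
        if deferrable:
            self.deferred.append((appliance, power_kw))
            return "deferred"
        return "rejected"
```

A demand-response layer would sit above this, shrinking `capacity_kw` or re-pricing slots when the grid signals peak conditions.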

243 citations


BookDOI
08 May 2012
TL;DR: The first technical book emerging from a standards perspective to respond to this highly specific technology/business segment covers the main challenges facing the M2M industry today, and proposes early roll-out scenarios and potential optimization solutions.
Abstract: A comprehensive introduction to M2M standards and systems architecture, from concept to implementation. Focusing on the latest technological developments, M2M Communications: A Systems Approach is an advanced introduction to this important and rapidly evolving topic. It provides a systems perspective on machine-to-machine services and the major relevant telecommunications technologies, with a focus on the latest standards currently in progress at ETSI and 3GPP, the leading standards entities for telecommunication networks and solutions. The structure of the book is inspired by ongoing standards developments and uses a systems-based approach for describing the problems which may be encountered when considering M2M, as well as offering proposed solutions from the latest developments in industry and standardization. The authors provide comprehensive technical information on M2M architecture, protocols and applications, especially examining M2M service architecture, access and core network optimizations, and M2M area network technologies. The book also considers dominant M2M application domains such as Smart Metering, Smart Grid, and eHealth.
Aimed as an advanced introduction to this complex technical field, the book will provide an essential end-to-end overview of M2M for professionals working in the industry and advanced students.

Key features:
- First technical book emerging from a standards perspective to respond to this highly specific technology/business segment
- Covers the main challenges facing the M2M industry today, and proposes early roll-out scenarios and potential optimization solutions
- Examines the system-level architecture and clearly defines the methodology and interfaces to be considered
- Includes important information presented in a logical manner, essential for any engineer or business manager involved in the field of M2M and the Internet of Things
- Provides a cross-over between vertical and horizontal M2M concepts and a possible evolution path between the two
- Written by experts involved at the cutting edge of M2M developments

200 citations



Journal ArticleDOI
TL;DR: The main components and the key technologies in each component are discussed; the core functions of the system include Digital Assembly Modeling, Assembly Sequence Planning, Path Planning, Visualization, and Simulation.
Abstract: To automate assembly planning for complex products such as aircraft components, an assembly planning and simulation system called AutoAssem has been developed. In this paper, its system architecture is presented; the main components and the key technologies in each component are discussed. The core functions of the system include Digital Assembly Modeling, Assembly Sequence Planning (ASP), Path Planning, Visualization, and Simulation. In contrast to existing assembly planning systems, one of the novelties of the system is that it allows assembly plans to be automatically generated from a CAD assembly model with minimal manual intervention. Within the system, new methodologies have been developed to: (i) create Assembly Relationship Matrices; (ii) plan assembly sequences; (iii) generate assembly paths; and (iv) visualize and simulate assembly plans. To illustrate the application of the system, the assembly of a worm gear reducer is used as an example throughout this paper for demonstration purposes. AutoAssem has so far been successfully applied to virtual assembly design for various complex products.
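Once an assembly relationship matrix has been reduced to precedence constraints (which parts must be in place before each part can be mounted), sequence planning becomes a topological sort. The sketch below illustrates that reduction only; the part names are loosely inspired by the worm gear reducer example and are not the paper's actual data or algorithm.

```python
from collections import deque

def assembly_sequence(precedence):
    """Kahn's topological sort over precedence constraints:
    `precedence` maps part -> set of parts that must be assembled
    first. Returns a feasible sequence, or None if constraints cycle."""
    indeg = {p: len(pre) for p, pre in precedence.items()}
    dependents = {p: [] for p in precedence}
    for p, pre in precedence.items():
        for q in pre:
            dependents[q].append(p)
    ready = deque(sorted(p for p, d in indeg.items() if d == 0))
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for q in sorted(dependents[p]):
            indeg[q] -= 1
            if indeg[q] == 0:
                ready.append(q)
    return order if len(order) == len(precedence) else None
```

Real ASP must additionally check geometric feasibility (collision-free insertion paths), which is where the path-planning component comes in.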

188 citations


Journal ArticleDOI
TL;DR: This paper presents a survey of techniques and technologies proposed over the years either to prevent architecture erosion or to detect and restore architectures that have been eroded, and argues that no single strategy can address the problem of erosion.

175 citations


01 Jan 2012
TL;DR: The experimental results demonstrate that the proposed system can deal with various software faults for server applications in a cloud virtualized environment.
Abstract: Fault tolerance is a major concern to guarantee availability and reliability of critical services as well as application execution. In order to minimize failure impact on the system and application execution, failures should be anticipated and proactively handled. Fault tolerance techniques are used to predict these failures and take an appropriate action before failures actually occur. This paper discusses the existing fault tolerance techniques in cloud computing based on their policies, the tools used, and research challenges. A cloud virtualized system architecture is proposed, in which autonomic fault tolerance is implemented. The experimental results demonstrate that the proposed system can deal with various software faults for server applications in a cloud virtualized environment.
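The proactive part of such a scheme amounts to acting on early warning signs before a hard failure propagates. A minimal sketch, with all names and the latency threshold invented for illustration (not the paper's system): poll each service, and restart any that fails its health check or responds suspiciously slowly.

```python
import time

def monitor(services, check, restart, max_latency=1.0):
    """Proactive fault handling sketch: `check(svc)` returns whether
    the service is healthy; a failed or too-slow check triggers
    `restart(svc)` before clients observe a hard failure.
    Returns the list of services that were restarted this pass."""
    actions = []
    for svc in services:
        start = time.monotonic()
        healthy = check(svc)
        latency = time.monotonic() - start
        if not healthy or latency > max_latency:
            restart(svc)
            actions.append(svc)
    return actions
```

In a virtualized environment the `restart` action would typically be a hypervisor operation (respawning a VM or migrating it), which is what makes this style of autonomic recovery practical in clouds.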

156 citations


Journal ArticleDOI
TL;DR: A mature architecture for typical-case reasoning tasks is provided in RacerPro, a description logic reasoner that goes well beyond standard inference services provided by other OWL reasoners.
Abstract: RacerPro is a software system for building applications based on ontologies. The backbone of RacerPro is a description logic reasoner. It provides inference services for terminological knowledge as well as for representations of knowledge about individuals. Based on new optimization techniques and techniques that have been developed in the research field of description logics throughout the years, a mature architecture for typical-case reasoning tasks is provided. The system has been used in hundreds of research projects and industrial contexts throughout the last twelve years. W3C standards as well as detailed feedback reports from numerous users have influenced the design of the system architecture in general, and have also shaped the RacerPro knowledge representation and interface languages. With its query and rule languages, RacerPro goes well beyond standard inference services provided by other OWL reasoners.

Journal ArticleDOI
TL;DR: The Halmstad University entry in the Grand Cooperative Driving Challenge, which is a competition in vehicle platooning, develops a longitudinal controller that uses information exchanged via wireless communication with other cooperative vehicles to achieve string-stable platooning.
Abstract: This paper describes the Halmstad University entry in the Grand Cooperative Driving Challenge, which is a competition in vehicle platooning. Cooperative platooning has the potential to improve traffic flow by mitigating shock wave effects, which otherwise may occur in dense traffic. A longitudinal controller that uses information exchanged via wireless communication with other cooperative vehicles to achieve string-stable platooning is developed. The controller is integrated into a production vehicle, together with a positioning system, communication system, and human-machine interface (HMI). A highly modular system architecture enabled rapid development and testing of the various subsystems. In the competition, which took place in May 2011 on a closed-off highway in The Netherlands, the Halmstad University team finished second among nine competing teams.

Proceedings ArticleDOI
03 Dec 2012
TL;DR: BodyCloud is presented, a system architecture based on Cloud Computing for the management and monitoring of body sensor data streams that incorporates key concepts such as scalability and flexibility of resources, sensor heterogeneity, and the dynamic deployment and management of user and community applications.
Abstract: Spatially distributed sensor nodes can be used to monitor systems and human conditions in a wide range of application domains. A network of body sensors in a community of people generates large amounts of contextual data that requires a scalable approach for storage and processing. Cloud computing can provide a powerful, scalable storage and processing infrastructure to perform both online and offline analysis and mining of body sensor data streams. This paper presents BodyCloud, a system architecture based on Cloud Computing for the management and monitoring of body sensor data streams. It incorporates key concepts such as scalability and flexibility of resources, sensor heterogeneity, and the dynamic deployment and management of user and community applications.

Book ChapterDOI
03 Apr 2012
TL;DR: A design flow and supporting tools to significantly improve the design and verification of complex cyber-physical systems and the compositional reasoning framework that is developed for proving the correctness of a system design are described.
Abstract: This paper describes a design flow and supporting tools to significantly improve the design and verification of complex cyber-physical systems. We focus on system architecture models composed from libraries of components and complexity-reducing design patterns having formally verified properties. This allows new system designs to be developed rapidly using patterns that have been shown to reduce unnecessary complexity and coupling between components. Components and patterns are annotated with formal contracts describing their guaranteed behaviors and the contextual assumptions that must be satisfied for their correct operation. We describe the compositional reasoning framework that we have developed for proving the correctness of a system design, and provide a proof of the soundness of our compositional reasoning approach. An example based on an aircraft flight control system is provided to illustrate the method and supporting analysis tools.

Journal ArticleDOI
TL;DR: A control system based on wide area measurements and the interaction of a dc microgrid involving sustainable energy sources with the main ac grid have been implemented and presented.
Abstract: Wide area monitoring (WAM), wide area protection (WAP), and wide area control (WAC) systems will enhance the future of smart grid operation in terms of reliability and security. In part I of this paper, a proposed architecture for a hybrid ac/dc smart grid hardware test-bed system was presented, and design details of the various components and their connectivity in the overall system architecture were identified. In part II of the paper, the focus is on the design of monitoring, control, and protection systems and their integrated real-time operation. Various control scenarios for system startup and continuous operation are examined. We have developed a control system based on wide area measurements. An advanced measurement system based on synchrophasors was also implemented using real-time synchronous data from DAQs. The developed system features a wide variety of capabilities, such as online system parameter calculation and online voltage stability monitoring, which are implemented as an experimental case to enhance wide area monitoring systems. Moreover, the protection system was designed inside the real-time software environment to monitor the real-time wide area data and provide comprehensive and reliable coordination for the whole system. Ideas related to the interaction of a dc microgrid involving sustainable energy sources with the main ac grid have also been implemented and presented. The implemented system is explicit and achievable in any research laboratory and for real-time, real-world smart grid applications.

Proceedings ArticleDOI
07 May 2012
TL;DR: A new architecture is proposed with the aim of improving on-site handling and transfer optimization in the waste management process; it is based on sensor nodes and makes use of Data Transfer Nodes (DTNs) to deliver measurements of the garbage bins' filling levels to a remote server.

Abstract: In many application fields, such as home, industry, environment and health, different Wireless Sensor Network (WSN) applications have been developed to solve management problems with smart implementations. This approach can be applied in the field of solid waste management. In this paper a new architecture is proposed with the aim of improving on-site handling and transfer optimization in the waste management process. The system architecture is based on sensor nodes and makes use of Data Transfer Nodes (DTNs) in order to deliver to a remote server the measurements retrieved from the garbage bins' filling levels. A remote monitoring solution has been implemented, providing users the possibility to interact with the system by using a web browser. Several activities have been started with the aim of providing a Decision Support System (DSS) able to find solutions for resource organization problems linked to solid waste management.

Journal ArticleDOI
TL;DR: This work proposes a system architecture, based on post-processing design-oriented building models, to facilitate analysis and feedback in architectural design; it is applicable to both API-based direct interfaces and open-standard building models.

Proceedings ArticleDOI
23 May 2012
TL;DR: A multi-layered agent-based architecture for the development of proactive, cooperating and context-aware smart objects that takes into account a wide variety of smart objects, from reactive to proactive, from small to very large, from stand-alone to social.
Abstract: The Internet of Things (IoT) term is recently emerging to envision a global infrastructure of networked physical objects. As different definitions of IoT currently exist, we specifically refer to IoT as a loosely coupled, decentralized system of smart objects (SOs), which are autonomous physical/digital objects augmented with sensing/actuating, processing, and networking capabilities. SOs are able to sense, log, and interpret information generated within themselves and around the neighboring external world where they are situated, act on their own, cooperate with each other, and exchange information with humans. The development of an IoT based on SOs raises many issues involving hw/sw system architecture and application development methodology. A few approaches (e.g. FedNet, UbiComp, Smart Products) have to date been proposed to support the vision of an SO-based IoT infrastructure. In this paper we first discuss the suitability of the agent paradigm and technology to effectively support the development of such an IoT infrastructure, and then propose a multi-layered agent-based architecture for the development of proactive, cooperating and context-aware smart objects. Our architecture takes into account a wide variety of smart objects, from reactive to proactive, from small to very large, from stand-alone to social. The implementation phase can be based on multiple agent languages and platforms (JADE, JADEX, LEAP, MAPS) atop heterogeneous computing systems (computers, smartphones, and sensor nodes).

Journal ArticleDOI
TL;DR: In this article, the authors present and evaluate a framework aimed at finding the most appropriate deployment architecture for a distributed software system with respect to multiple, possibly conflicting QoS dimensions, and provide a set of tailorable algorithms for improving a system's deployment.
Abstract: A distributed system's allocation of software components to hardware nodes (i.e., deployment architecture) can have a significant impact on its quality of service (QoS). For a given system, there may be many deployment architectures that provide the same functionality, but with different levels of QoS. The parameters that influence the quality of a system's deployment architecture are often not known before the system's initial deployment and may change at runtime. This means that redeployment of the software system may be necessary to improve the system's QoS properties. This paper presents and evaluates a framework aimed at finding the most appropriate deployment architecture for a distributed software system with respect to multiple, possibly conflicting QoS dimensions. The framework supports formal modeling of the problem and provides a set of tailorable algorithms for improving a system's deployment. We have realized the framework on top of a visual deployment architecture modeling and analysis environment. The framework has been evaluated for precision and execution-time complexity on a large number of simulated distributed system scenarios, as well as in the context of two third-party families of distributed applications.
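The redeployment problem can be made concrete with a toy improvement loop: given a QoS utility function over component-to-host assignments, repeatedly apply the single relocation that most improves it. This greedy hill-climbing sketch is invented for illustration and merely stands in for the paper's family of tailorable algorithms; all names are hypothetical.

```python
import itertools

def improve_deployment(components, hosts, qos, deployment):
    """Greedy deployment improvement: move one component at a time
    to the host that most increases the overall utility
    `qos(deployment)`, until no single move helps.
    `deployment` maps component -> host."""
    best = dict(deployment)
    best_score = qos(best)
    improved = True
    while improved:
        improved = False
        for comp, host in itertools.product(components, hosts):
            if best[comp] == host:
                continue
            trial = dict(best)
            trial[comp] = host  # tentative relocation
            score = qos(trial)
            if score > best_score:
                best, best_score = trial, score
                improved = True
    return best, best_score
```

Because `qos` may encode multiple conflicting dimensions (latency, availability, energy), a realistic framework would combine them into a weighted utility or search for Pareto-optimal deployments; greedy search also only finds a local optimum.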

Proceedings ArticleDOI
02 Jun 2012
TL;DR: Active code completion is described, an architecture that allows library developers to introduce interactive and highly-specialized code generation interfaces, called palettes, directly into the editor, and one such system is designed, named Graphite, for the Eclipse Java development environment.
Abstract: Code completion menus have replaced standalone API browsers for most developers because they are more tightly integrated into the development workflow. Refinements to the code completion menu that incorporate additional sources of information have similarly been shown to be valuable, even relative to standalone counterparts offering similar functionality. In this paper, we describe active code completion, an architecture that allows library developers to introduce interactive and highly-specialized code generation interfaces, called palettes, directly into the editor. Using several empirical methods, we examine the contexts in which such a system could be useful, describe the design constraints governing the system architecture as well as particular code completion interfaces, and design one such system, named Graphite, for the Eclipse Java development environment. Using Graphite, we implement a palette for writing regular expressions as our primary example and conduct a small pilot study. In addition to showing the feasibility of this approach, it provides further evidence in support of the claim that integrating specialized code completion interfaces directly into the editor is valuable to professional developers.

Proceedings ArticleDOI
05 Sep 2012
TL;DR: This paper presents a novel, fully formal contract framework, which relies on an expressive property specification language, conceived for the formalization of embedded system requirements, and is supported by a verification engine based on automated SMT techniques.
Abstract: Contract-based design is an emerging paradigm for the design of complex systems, where each component is associated with a contract, i.e., a clear description of the expected behaviour. Contracts specify the input-output behaviour of a component by defining what the component guarantees, provided that its environment obeys some given assumptions. The ultimate goal of contract-based design is to allow for compositional reasoning, stepwise refinement, and a principled reuse of components that are already pre-designed, or designed independently. In this paper, we present a novel, fully formal contract framework. The decomposition of the system architecture is complemented with the corresponding decomposition of component contracts. The framework exploits such decomposition to automatically generate a set of proof obligations, which, once verified, allow concluding the correctness of the top-level system properties. The framework relies on an expressive property specification language, conceived for the formalization of embedded system requirements. The proof system reduces the correctness of contracts refinement to entailment of temporal logic formulas, and is supported by a verification engine based on automated SMT techniques.
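The shape of a refinement proof obligation can be illustrated with a deliberately tiny model: a contract as an (assumption, guarantee) pair of predicates, and entailment checked by brute-force enumeration over boolean variables, standing in for the SMT-based entailment the paper uses. Everything below (the predicates, variable names, and the finite-domain check) is an invented toy, not the paper's specification language or proof system.

```python
import itertools

def refines(abstract, concrete, variables):
    """Toy refinement check. A contract is (assumption, guarantee),
    each a predicate over a dict of boolean variable values.
    Two obligations, checked exhaustively:
      1. the concrete assumption is weaker (A implies A');
      2. under the abstract assumption, the concrete guarantee
         implies the abstract one (A and G' implies G)."""
    A, G = abstract
    A2, G2 = concrete
    for values in itertools.product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if A(env) and not A2(env):
            return False  # concrete component assumes too much
        if A(env) and G2(env) and not G(env):
            return False  # concrete guarantee is too weak
    return True
```

In the paper's framework the analogous obligations are temporal-logic entailments generated from the architecture decomposition and discharged by an SMT-backed engine, rather than enumerated.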

Proceedings ArticleDOI
22 Oct 2012
TL;DR: This paper surveys the promising cloud computing technology: its architecture, advantages, platforms, issues and challenges, applications, and future research options.

Abstract: Cloud computing, one of the emerging topics in the field of information technology, is the development of parallel computing, distributed computing and grid computing. By using the internet and central remote services it maintains data, applications, etc., offering much more efficient computing by centralizing storage, memory, processing, bandwidth and so on. It can also concentrate all computation resources and manage them automatically through software without human intervention. Present cloud computing architecture comprises several layers, service models, platforms, types, and issues such as security, privacy, reliability, and open standards. This paper presents the promising cloud computing technology: its architecture, advantages, platforms, issues and challenges, applications, and future research options. There have been four generations of computing: mainframe-based computing, personal computing, client-server-based computing, and web-server-based computing. As cloud computing offers several advantages over the present generation of web-server-based computing, such as fast microprocessors, huge memory, high-speed networks, and reliable system architecture, we can say that cloud computing will provide the next generation of computing services.

Journal ArticleDOI
TL;DR: An automatic, simple, compact, wireless, personalized and cost-efficient pervasive architecture for the evaluation of the stress state of individual subjects, suitable for prolonged stress monitoring during normal activity, is described.

Journal ArticleDOI
TL;DR: This work presents a highly parallel architecture for motion estimation that implements the well-known Lucas and Kanade algorithm with a multi-scale extension for the computation of large motions on a dedicated device (field-programmable gate array, FPGA).

Abstract: The proposed work presents a highly parallel architecture for motion estimation. Our system implements the well-known Lucas and Kanade algorithm with the multi-scale extension for the computation of large motion estimations in a dedicated device (a field-programmable gate array, FPGA). Our system achieves 270 frames per second for a 640 × 480 resolution in the best case of the mono-scale implementation and 32 frames per second for the multi-scale one, fulfilling the requirements for a real-time system. We describe the system architecture, address the evaluation of the accuracy with well-known benchmark sequences (including a comparative study), and show the main hardware resources used.
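The per-window computation that such hardware parallelizes is small: Lucas-Kanade assumes constant flow within a window and solves a 2x2 least-squares system built from image gradients. The sketch below shows that solve for a single window (mono-scale); it is a reference illustration of the algorithm, not the paper's FPGA design, and the input gradients in the usage are synthetic.

```python
def lucas_kanade(Ix, Iy, It):
    """Solve the Lucas-Kanade normal equations for one window.
    Ix, Iy: spatial gradients and It: temporal differences, each a
    list sampled over the window. Returns the flow (u, v), or None
    when the system is singular (the aperture problem)."""
    sxx = sum(ix * ix for ix in Ix)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    syy = sum(iy * iy for iy in Iy)
    sxt = sum(ix * it for ix, it in zip(Ix, It))
    syt = sum(iy * it for iy, it in zip(Iy, It))
    # Normal equations: [sxx sxy; sxy syy] [u; v] = [-sxt; -syt]
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return None
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```

The multi-scale extension runs this solve on a coarse-to-fine image pyramid, warping by the coarse estimate before refining, which is how large displacements stay within the window's linearization range.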

Proceedings ArticleDOI
13 Aug 2012
TL;DR: This work discusses how to design a network architecture where choices at different layers of the protocol stack are explicitly exposed to users, which ensures that innovative technical solutions can be used and rewarded, which is essential to encourage wide deployment of this architecture.
Abstract: There has been a great interest in defining a new network architecture that can meet the needs of a future Internet. One of the main challenges in this context is how to realize the many different technical solutions that have developed in recent years in a single coherent architecture. In addition, it is necessary to consider how to ensure economic viability of architecture solutions. In this work, we discuss how to design a network architecture where choices at different layers of the protocol stack are explicitly exposed to users. This approach ensures that innovative technical solutions can be used and rewarded, which is essential to encourage wide deployment of this architecture.

Journal ArticleDOI
TL;DR: This conceptual design utilises a collaborative decision-support model that effectively interacts with the decision-makers and the management information systems/tools existing in the network, and provides appropriate support to all necessary decision-making steps towards the attainment of the network's strategic goals, while making full use of the network's resources.

Journal ArticleDOI
TL;DR: The aim of the proposed architecture is to enable the realization of scalable, flexible, adaptive, energy-efficient, and trust-aware VSN platforms, focusing on the reduction of deployment complexity and management cost, and on advanced interoperability mechanisms.
Abstract: The majority of research and development efforts in the area of Wireless Sensor Networks (WSNs) focus on WSN systems that are dedicated for a specific application. However, this trend is currently being replaced by resource-rich WSN deployments that are expected to provide capabilities in excess of any application's requirements. In this regard, the concept of virtual sensor networking is an emerging approach that enables the decoupling of the physical sensor deployment from the applications running on top of it, allowing in this way the dynamic collaboration of a subset of sensor nodes and helping the proliferation of new services and applications beyond the scope of the original deployment. In this context, the article presents the architecture of a system for the realization of Virtual Sensor Networks (VSNs). The aim of the proposed architecture is to enable the realization of scalable, flexible, adaptive, energy-efficient, and trust-aware VSN platforms, focusing on the reduction of deployment complexity and management cost, and on advanced interoperability mechanisms. The efforts have been put towards specifying a service provisioning architecture and mechanisms for advanced sensor and middleware design.

Journal ArticleDOI
TL;DR: The simulation results show that, compared with three traditional algorithms based on different architectures, the new hybrid navigation algorithm proposed in this paper performs more reliably in terms of escaping from traps, resolving conflicts between layers, and decreasing computational time to avoid timing out of the control cycle.

Abstract: Focusing on the navigation problem of mobile robots in environments with incomplete knowledge, a new hybrid navigation algorithm is proposed. The novel system architecture in the proposed algorithm is the main contribution of this paper. Unlike most existing hybrid navigation systems, whose deliberative layers usually play the dominant role while the reactive layers are only simple executors, a more independent reactive layer that can guarantee convergence without the assistance of a deliberative layer is pursued in the proposed architecture, which brings two benefits. First, the burden of the deliberative layer is reduced, which is beneficial for guaranteeing the real-time property and decreasing resource requirements. Second, some possible layer conflicts in the traditional architecture can be resolved, which improves system stability. The convergence of the new algorithm has been proved. The simulation results show that, compared with three traditional algorithms based on different architectures, the new hybrid navigation algorithm proposed in this paper performs more reliably in terms of escaping from traps, resolving conflicts between layers, and decreasing computational time to avoid timing out of the control cycle. The experiments on a real robot further verify the validity and applicability of the new algorithm.

Journal ArticleDOI
TL;DR: This work proposes a control-theoretic self-tuning method that can dynamically tune the preferences of different quality requirements, and can autonomously make tradeoff decisions through the authors' Preference-Based Goal Reasoning procedure.

Proceedings ArticleDOI
01 Aug 2012
TL;DR: This article consists of a collection of slides from the author's conference presentation on Ivy Bridge, covering power management applications of the third-generation Intel Core microarchitecture.

Abstract: This article consists of a collection of slides from the author's conference presentation on Ivy Bridge, covering power management applications of the third-generation Intel Core microarchitecture. Some of the specific topics discussed include: an overview of the Ivy Bridge architecture and supported applications; power scaling and management facilities; core product features; power efficiency; voltage control and optimization techniques; power sharing capabilities; and system architecture.