
Showing papers on "System integration published in 2012"


Posted Content
01 Jan 2012
TL;DR: Design structure matrix (DSM) is a straightforward and flexible modeling technique that can be used for designing, developing, and managing complex systems, and, as mentioned in this paper, it is increasingly being applied to complex issues in health care management, financial systems, public policy, natural sciences, and social systems.
Abstract: Design structure matrix (DSM) is a straightforward and flexible modeling technique that can be used for designing, developing, and managing complex systems. DSM offers network modeling tools that represent the elements of a system and their interactions, thereby highlighting the system’s architecture (or designed structure). Its advantages include compact format, visual nature, intuitive representation, powerful analytical capacity, and flexibility. Used primarily so far in the area of engineering management, DSM is increasingly being applied to complex issues in health care management, financial systems, public policy, natural sciences, and social systems. This book offers a clear and concise explanation of DSM methods for practitioners and researchers. The book’s four sections correspond to the four primary types of DSM models, offering tools for representing product architectures, organization architectures, process architectures, and multidomain architectures (which combine different types of DSM models to represent multiple domains simultaneously). In each section, a chapter introducing the technique is followed by a chapter of examples showing a variety of applications of that DSM type. The forty-four applications represent a wide range of industries (including automotive, aerospace, electronics, building, and pharmaceutical), countries (among them Australia, Germany, Japan, Turkey, and the United States), and problems addressed (modularity, outsourcing, system integration, knowledge management, and others).
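As a minimal sketch of what a DSM is (illustrative only; the element names and dependencies below are hypothetical, not examples from the book), a binary DSM is simply a square matrix whose rows and columns are the system's elements, with a nonzero entry wherever one element depends on another:

# Minimal binary design structure matrix (DSM) sketch; elements are hypothetical.
# dsm[i][j] == 1 means element i depends on (receives an input from) element j.
elements = ["engine", "transmission", "chassis", "electronics"]

dsm = [
    [0, 1, 0, 1],  # engine depends on transmission and electronics
    [1, 0, 1, 0],  # transmission depends on engine and chassis
    [0, 1, 0, 0],  # chassis depends on transmission
    [1, 0, 0, 0],  # electronics depends on engine
]

for i, name in enumerate(elements):
    deps = [elements[j] for j in range(len(elements)) if dsm[i][j]]
    print(f"{name:12s} -> {', '.join(deps) if deps else '(none)'}")

Reordering and clustering the rows and columns of such a matrix is what exposes the modular structure that the book's product, organization, process, and multidomain models build on.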

542 citations


Book
09 Jul 2012
TL;DR: This book provides an extensive introduction to the theory and concepts underlying today's data integration techniques, with detailed instructions for their application, using concrete examples throughout to explain the concepts.
Abstract: How do you approach answering queries when your data is stored in multiple databases that were designed independently by different people? This is the first comprehensive book on data integration, written by three of the most respected experts in the field. This book provides an extensive introduction to the theory and concepts underlying today's data integration techniques, with detailed instructions for their application, using concrete examples throughout to explain the concepts. Data integration is the problem of answering queries that span multiple data sources (e.g., databases, web pages). Data integration problems surface in multiple contexts, including enterprise information integration, query processing on the Web, coordination between government agencies, and collaboration between scientists. In some cases, data integration is the key bottleneck to making progress in a field. The authors provide a working knowledge of data integration concepts and techniques, giving you the tools you need to develop a complete and concise package of algorithms and applications. *Offers a range of data integration solutions, enabling you to focus on what is most relevant to the problem at hand. *Enables you to build your own algorithms and implement your own data integration applications. *Companion website with numerous project-based exercises and solutions and slides. Links to commercially available software allowing readers to build their own algorithms and implement their own data integration applications. Facebook page for reader input during and after publication.
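As a hedged illustration of the core problem (the schemas and rows below are invented for this sketch, not taken from the book), a mediator answers queries posed over one virtual schema by translating records from independently designed sources:

# Toy mediator sketch: two independently designed sources mapped to one
# mediated schema (name, department). Schemas and data are hypothetical.
source_a = [{"emp": "Ada", "dept_id": 7}]            # source A's own schema
source_b = [{"name": "Bob", "department": "Sales"}]  # source B's own schema

def mediated_employees():
    """Yield rows of the mediated relation, translated from each source."""
    for row in source_a:
        yield {"name": row["emp"], "department": f"dept-{row['dept_id']}"}
    for row in source_b:
        yield {"name": row["name"], "department": row["department"]}

# A query over the mediated schema, oblivious to where the data lives:
print([r for r in mediated_employees() if r["department"] == "Sales"])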

494 citations


Journal ArticleDOI
TL;DR: Brain-computer interaction has already moved from assistive care to applications such as gaming, but improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.
Abstract: Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.

332 citations


Journal ArticleDOI
01 Jan 2012
TL;DR: A passivity-based design approach that decouples stability from timing uncertainties caused by networking and computation is presented, and cross-domain abstractions that provide an effective solution for model-based fully automated software synthesis and high-fidelity performance analysis are described.
Abstract: System integration is the elephant in the china store of large-scale cyber-physical system (CPS) design. It would be hard to find any other technology that is more undervalued scientifically and at the same time has a bigger impact on the present and future of engineered systems. The unique challenges in CPS integration emerge from the heterogeneity of components and interactions. This heterogeneity drives the need for modeling and analyzing cross-domain interactions among physical and computational/networking domains and demands deep understanding of the effects of heterogeneous abstraction layers in the design flow. To address the challenges of CPS integration, significant progress needs to be made toward a new science and technology foundation that is model based, precise, and predictable. This paper presents a theory of composition for heterogeneous systems focusing on stability. Specifically, the paper presents a passivity-based design approach that decouples stability from timing uncertainties caused by networking and computation. In addition, the paper describes cross-domain abstractions that provide an effective solution for model-based fully automated software synthesis and high-fidelity performance analysis. The design objectives demonstrated using the techniques presented in the paper are group coordination for networked unmanned air vehicles (UAVs) and high-confidence embedded control software design for a quadrotor UAV. Open problems in the area are also discussed, including the extension of the theory of compositional design to guarantee properties beyond stability, such as safety and performance.
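For orientation, the standard passivity inequality that this style of design builds on (a textbook definition, not a formula reproduced from the paper) says that a system with input u, output y, state x, and nonnegative storage function S never stores more energy than is supplied to it:

\[
\dot S(x(t)) \;\le\; u(t)^{\mathsf T} y(t)
\qquad\Longrightarrow\qquad
S(x(T)) - S(x(0)) \;\le\; \int_0^T u(t)^{\mathsf T} y(t)\,dt .
\]

Because suitable feedback interconnections of passive components remain passive, stability arguments built on this inequality are what the paper exploits to decouple stability from network- and computation-induced timing uncertainties.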

307 citations


01 Jan 2012
TL;DR: All-Programmable devices enable designers to go beyond programmable logic to programmable systems integration, to incorporate more system functions into fewer parts, increase system performance, reduce system power, and lower BOM cost.
Abstract: All-Programmable devices go beyond programmable logic and I/O, integrating various combinations of 3D stacked silicon interconnect technology, software programmable ARM® processing systems, programmable Analog Mixed Signal (AMS), and a significant number of intellectual-property (IP) cores. These next generation devices enable designers to go beyond programmable logic to programmable systems integration, to incorporate more system functions into fewer parts, increase system performance, reduce system power, and lower BOM cost.

244 citations


Journal ArticleDOI
TL;DR: The objective of this article is to survey current visual sensor platforms according to in-network processing and compression/coding techniques together with their targeted applications.
Abstract: Recent developments in low-cost CMOS cameras have created the opportunity of bringing imaging capabilities to sensor networks. Various visual sensor platforms have been developed with the aim of integrating visual data to wireless sensor applications. The objective of this article is to survey current visual sensor platforms according to in-network processing and compression/coding techniques together with their targeted applications. Characteristics of these platforms such as level of integration, data processing hardware, energy dissipation, radios and operating systems are also explored and discussed.

147 citations


Journal ArticleDOI
TL;DR: The Substitutable Medical Applications, Reusable Technologies (SMART) Platforms project as mentioned in this paper aims to develop a health information technology platform with substitutable applications (apps) constructed around core services.

143 citations


Journal ArticleDOI
TL;DR: Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation and enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows.
Abstract: Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org .
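To make the idea of pattern-based composition concrete (a generic sketch only; this is not Tavaxy's actual workflow language or API), sub-workflows can be treated as callables that reusable patterns such as sequence and parallel split combine into a hybrid pipeline:

# Generic workflow-pattern sketch (hypothetical steps; not Tavaxy's format).
def sequence(*steps):
    """Run steps one after another, feeding each output to the next step."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

def parallel_split(*branches):
    """Run independent branches on the same input and collect their results."""
    return lambda data: [branch(data) for branch in branches]

# Hypothetical sub-workflows standing in for imported Taverna/Galaxy fragments.
trim = lambda reads: [r[:50] for r in reads]
count = lambda reads: len(reads)
total_len = lambda reads: sum(len(r) for r in reads)

hybrid = sequence(trim, parallel_split(count, total_len))
print(hybrid(["ACGT" * 20, "GGCC" * 30]))   # -> [2, 100]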

143 citations


Journal ArticleDOI
TL;DR: This paper describes the Halmstad University entry in the Grand Cooperative Driving Challenge, a competition in vehicle platooning, and develops a longitudinal controller that uses information exchanged via wireless communication with other cooperative vehicles to achieve string-stable platooning.
Abstract: This paper describes the Halmstad University entry in the Grand Cooperative Driving Challenge, which is a competition in vehicle platooning. Cooperative platooning has the potential to improve traffic flow by mitigating shock wave effects, which otherwise may occur in dense traffic. A longitudinal controller that uses information exchanged via wireless communication with other cooperative vehicles to achieve string-stable platooning is developed. The controller is integrated into a production vehicle, together with a positioning system, communication system, and human-machine interface (HMI). A highly modular system architecture enabled rapid development and testing of the various subsystems. In the competition, which took place in May 2011 on a closed-off highway in The Netherlands, the Halmstad University team finished second among nine competing teams.
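For illustration only, a cooperative longitudinal controller of the general kind described here combines a spacing-policy error, a speed error, and feedforward of the predecessor's acceleration received over the wireless link; the constant time-gap policy and all gains below are assumptions for the sketch, not the Halmstad team's published design:

# Generic cooperative platooning control law (illustrative values only).
def platoon_accel(gap, ego_speed, pred_speed, pred_accel,
                  standstill_gap=5.0, time_gap=0.7,
                  kp=0.2, kd=0.7, kff=0.6):
    """Desired ego acceleration from spacing error, speed error, and the
    predecessor's acceleration received via V2V communication."""
    desired_gap = standstill_gap + time_gap * ego_speed   # constant time-gap policy
    spacing_error = gap - desired_gap
    speed_error = pred_speed - ego_speed
    return kp * spacing_error + kd * speed_error + kff * pred_accel

print(platoon_accel(gap=18.0, ego_speed=20.0, pred_speed=19.0, pred_accel=-0.5))

The feedforward term carried over the wireless link is typically what allows tighter gaps without amplifying disturbances along the platoon, which is the string-stability property the competition targets.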

122 citations


Book ChapterDOI
11 Nov 2012
TL;DR: The LOD2 Stack is an integrated distribution of aligned tools which support the whole life cycle of Linked Data from extraction, authoring/creation via enrichment, interlinking, fusing to maintenance.
Abstract: The LOD2 Stack is an integrated distribution of aligned tools which support the whole life cycle of Linked Data from extraction, authoring/creation via enrichment, interlinking, fusing to maintenance. The LOD2 Stack comprises new and substantially extended existing tools from the LOD2 project partners and third parties. The stack is designed to be versatile; for all functionality we define clear interfaces, which enable the plugging in of alternative third-party implementations. The architecture of the LOD2 Stack is based on three pillars: (1) Software integration and deployment using the Debian packaging system. (2) Use of a central SPARQL endpoint and standardized vocabularies for knowledge base access and integration between the different tools of the LOD2 Stack. (3) Integration of the LOD2 Stack user interfaces based on REST-enabled Web Applications. These three pillars comprise the methodological and technological framework for integrating the very heterogeneous LOD2 Stack components into a consistent framework. In this article we describe these pillars in more detail and give an overview of the individual LOD2 Stack components. The article also includes a description of a real-world usage scenario in the publishing domain.
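As a small, hedged example of pillar (2), any stack component can read the shared knowledge base through the central SPARQL endpoint; the sketch below assumes the third-party Python package SPARQLWrapper and a hypothetical local endpoint URL:

# Querying a central SPARQL endpoint (endpoint URL is hypothetical).
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:8890/sparql")
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?s ?label WHERE { ?s rdfs:label ?label } LIMIT 5
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], "-", row["label"]["value"])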

116 citations


Journal ArticleDOI
TL;DR: In this article, the authors empirically analyse whether there is a relationship between the difficulties found in the integration process and the level of system integration achieved, and demonstrate that organisations with three implemented management systems face difficulties in the integration process, while this relationship is not significant for those organisations with two management systems.

Journal ArticleDOI
TL;DR: An agent-based, service-oriented approach is presented for integrating data, information, and knowledge captured and accumulated during the entire facility lifecycle, from project planning, design, construction, and material/component/equipment procurement to operation and maintenance.

Journal ArticleDOI
TL;DR: A collaborative environment called distributed interoperable manufacturing platform is introduced, which is based on a module-based, service-oriented architecture (SOA) and the STEP-NC data model is used to facilitate data-exchange among heterogeneous CAD/CAM/CNC systems.
Abstract: Today, globalisation has become one of the main trends of manufacturing business that has led to a world-wide decentralisation of resources amongst not only individual departments within one company but also business partners. However, despite the development and improvement in the last few decades, difficulties in information exchange and sharing still exist in heterogeneous applications environments. This article is divided into two parts. In the first part, related research work and integrating solutions are reviewed and discussed. The second part introduces a collaborative environment called distributed interoperable manufacturing platform, which is based on a module-based, service-oriented architecture (SOA). In the platform, the STEP-NC data model is used to facilitate data-exchange among heterogeneous CAD/CAM/CNC systems.

Journal ArticleDOI
TL;DR: A technical framework and roadmap of embedded diagnostics and prognostics (ED/EP) for complex mechanical systems is presented based on the methodology of system integration and parallel design, which includes six key elements (embedded sensors, embedded sensing design, embedded sensors placement, embedded signals transmission, ED/EP algorithms, and embedded self-power).
Abstract: Prognostics and Health Management (PHM) technologies have emerged as a key enabler to provide early indications of system faults and perform predictive maintenance actions. Implementation of a PHM system depends on accurately acquiring, in real time, the present and estimated future health states of a system. For electronic systems, built-in-test (BIT) makes these goals relatively easy to achieve. However, reliable prognostics capability is still a bottleneck problem for mechanical systems due to a lack of proper on-line sensors. Recent advancements in sensor and microelectronics technologies have opened a novel way forward for complex mechanical systems, called embedded diagnostics and prognostics (ED/EP). ED/EP can provide real-time present-condition information and future health states by integrating micro-sensors into mechanical structures during design and manufacturing, so ED/EP represents revolutionary progress compared to traditional mechanical fault diagnostic and prognostic approaches. How to realize ED/EP for complex mechanical systems, however, has received little focused study so far. This paper explores the challenges and the efforts needed to implement ED/EP technologies. In particular, this paper presents a technical framework and roadmap of ED/EP for complex mechanical systems. The framework is based on the methodology of system integration and parallel design, which includes six key elements (embedded sensors, embedded sensing design, embedded sensors placement, embedded signals transmission, ED/EP algorithms, and embedded self-power). Relationships among these key elements are outlined, and they should be considered simultaneously when designing a complex mechanical system. Technical challenges of each key element are emphasized, and the corresponding existing or potential solutions are summarized in detail. A suggested roadmap of ED/EP for complex mechanical systems is then put forward according to potential advancements in related areas, and can be divided into three stages: embedded diagnostics, embedded prognostics, and system integration. In the end, the presented framework is exemplified with a gearbox.

Journal ArticleDOI
TL;DR: In this article, a system integration scheme relevant for smart packaging applications is presented, which analyzes the requirements on hybridization technologies suitable for packaging applications and provides design examples on integration of intrusion surveillance solutions for cellulose-based packaging applications.
Abstract: A system integration scheme relevant for smart packaging applications is presented. Recent advances in printed electronics, radio frequency identification tag production, and standardization of communication protocols are factors that increase the design freedom for new applications. As in all new technology fields, the first products are expected to appear in the high-cost segment, attracting early adopters in the form of niche products. A reasonable assumption is that these products will come from hybridization of different types of technologies. Such a scenario is likely since no available technology solution can provide all the features that these types of applications demand. There is a need for standard solutions for hybridization of silicon devices and printed (or foil-type) components. Conductive ink technology is a powerful tool for hybridization and customization of large-area electronics, providing 3-D integration and large-area customization. However, high-performance communication and advanced processing demand the use of silicon. Smart hybridization solutions allow a combination of the best of both worlds. This paper analyzes the requirements on hybridization technologies suitable for smart packaging applications and provides design examples on the integration of intrusion surveillance solutions for cellulose-based packaging applications. It shows that even though the current hybridization technologies are far from optimal, they can provide considerable design freedom and system performance.


Book
09 May 2012
TL;DR: This book on engineering systems integration theory, metrics, and methods prepares systems managers and systems engineers to consider their decisions in light of systems integration metrics, tying General Systems Theory with practice.
Abstract: The first book to address the underlying premises of systems integration and how to exposit them in a practical and productive manner, this book prepares systems managers and systems engineers to consider their decisions in light of systems integration metrics. The book addresses two questions: Is there a way to express the interplay of human actions and the result of system interactions of a product with its environment, and are there methods that combine to improve the integration of systems? The systems integration theory and integration frameworks proposed in the book tie General Systems Theory with practice.

Book
10 Apr 2012
TL;DR: This book sets forth tested and proven electromagnetic modeling and simulation methods for analyzing signal and power integrity as well as electromagnetic interference in large complex electronic interconnects, multilayered package structures, integrated circuits, and printed circuit boards.
Abstract: Based on the author's extensive research, this book sets forth tested and proven electromagnetic modeling and simulation methods for analyzing signal and power integrity as well as electromagnetic interference in large complex electronic interconnects, multilayered package structures, integrated circuits, and printed circuit boards. Readers will discover the state of the technology in electronic package integration and printed circuit board simulation and modeling. In addition to popular full-wave electromagnetic computational methods, the book presents new, more sophisticated modeling methods, offering readers the most advanced tools for analyzing and designing large complex electronic structures.

Journal ArticleDOI
TL;DR: This article reports the creation and contents of the first set of standard specifications developed during the first 2 years; named the Device Control and Data Interface Specification, Common Command Dictionary, and Data Capture Specification, they describe how devices are connected to software controlling the interplay of the devices, the command sets for various device classes, and the structure of result data.
Abstract: A standard (SiLA, Standardization in Lab Automation) that focuses on the connection between sample processing devices and a software system for automation is gaining acceptance. This article reports on the creation and contents of the first set of standard specifications developed during the first 2 years. These specifications, named the Device Control and Data Interface Specification, Common Command Dictionary, and Data Capture Specification, describe how devices are connected to software controlling the interplay of the devices, the command sets for various device classes, and the structure of result data such as data generated by microtiter plate readers. A section about SiLA-compliant products and pilot projects using SiLA-compatible devices for system integration gives an idea of the acceptance of the standard in the marketplace.
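To illustrate the general idea of driving lab devices through a small, uniform command interface (a deliberately generic sketch: the class, commands, and address below are hypothetical and do not reproduce the SiLA specifications):

# Generic device-control sketch; names and commands are illustrative only.
class PlateReader:
    """A lab device driven through a small, uniform command set."""
    def __init__(self, address):
        self.address = address
        self.state = "standby"

    def initialize(self):
        self.state = "idle"

    def execute(self, command, **params):
        if self.state != "idle":
            raise RuntimeError("device not ready")
        return {"command": command, "params": params, "result": "ok"}

reader = PlateReader("tcp://192.0.2.10:5000")   # documentation-range address
reader.initialize()
print(reader.execute("ReadAbsorbance", wavelength_nm=450))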

Proceedings ArticleDOI
03 Jul 2012
TL;DR: The hardware and software integration aspects that were necessary in order to address the problem of autonomously searching and recovering a black-box mock-up that was previously thrown to an unknown position are presented.
Abstract: Nowadays, autonomous intervention is getting more attention in the underwater robotics community. Few research projects on this matter are currently under development. In this context, and after a first successful experience in the RAUVI Spanish project (2009–2011), the authors are currently involved in the TRIDENT project (2010–2013), funded by the European Commission. To succeed in autonomous intervention, an AUV endowed with a manipulator and with a high degree of autonomy is essential. The complexity of the required robotic system is very high, and the system integration process becomes critical. This paper presents the problems being solved in TRIDENT from a systems integration perspective. As a case study, some results achieved during the last experiments, carried out in the Roses harbor (Girona) in October 2011, will be presented to demonstrate the capabilities exhibited by the AUV for Intervention under development. The experiments were focused on the problem of autonomously searching and recovering a black-box mock-up that was previously thrown to an unknown position. This paper presents the hardware and software integration aspects that were necessary in order to address such a challenging problem.

Proceedings ArticleDOI
13 Mar 2012
TL;DR: A scenario for the integration of an electronic ticketing system into an existing public transport system based on NFC is introduced and the main focus is its realisation in accordance with the VDV Core Application.
Abstract: A key application of Near Field Communication (NFC) can be found in the field of Electronic Fare Management. It can radically change existing systems of isolated applications in public transport by providing new approaches for national or international interoperable fare management. In this paper, a scenario for the integration of an electronic ticketing system into an existing public transport system based on NFC is introduced. The main focus is its realisation in accordance with the VDV Core Application. Electronic fare management systems consist of sophisticated structures and processes. Therefore, at the current stage of development only a selected subset of features which is essential for prototypical implementation is presented in this paper. First, the technology, electronic ticketing and previous field trials in this application area are introduced. Next, a set of relevant use cases is outlined and the existing system architecture is presented as the basis for the description of the chosen system integration scenario. Finally, the adopted and newly implemented system components and their interfaces are described in detail before concluding with the challenges faced and some future prospects.

Journal ArticleDOI
TL;DR: An overview of the process of implementing and integrating PACS in a comprehensive health system comprising an academic core hospital and numerous community hospitals is provided, touching all stages from planning to operation and training.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: A fractionated spacecraft is a cluster of independent modules that interact wirelessly to maintain cluster flight and realize the functions usually performed by a monolithic satellite; the software solution proposed here is based on a layered architecture consisting of a novel operating system, a middleware layer, and component-structured applications.
Abstract: A fractionated spacecraft is a cluster of independent modules that interact wirelessly to maintain cluster flight and realize the functions usually performed by a monolithic satellite. This spacecraft architecture poses novel software challenges because the hardware platform is inherently distributed, with highly fluctuating connectivity among the modules. It is critical for mission success to support autonomous fault management and to satisfy real-time performance requirements. It is also both critical and challenging to support multiple organizations and users whose diverse software applications have changing demands for computational and communication resources, while operating on different levels and in separate domains of security. The solution proposed in this paper is based on a layered architecture consisting of a novel operating system, a middleware layer, and component-structured applications. The operating system provides primitives for concurrency, synchronization, and secure information flows; it also enforces application separation and resource management policies. The middleware provides higher-level services supporting request/response and publish/subscribe interactions for distributed software. The component model facilitates the creation of software applications from modular and reusable components that are deployed in the distributed system and interact only through well-defined mechanisms. Two cross-cutting aspects — multi-level security and multi-layered fault management — are addressed at all levels of the architecture. The complexity of creating applications and performing system integration is mitigated through the use of a domain-specific model-driven development process that relies on a dedicated modeling language and its accompanying graphical modeling tools, software generators for synthesizing infrastructure code, and the extensive use of model-based analysis for verification and validation.
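As a toy illustration of the publish/subscribe interaction style provided by the middleware layer (the broker and topic names below are hypothetical, not the flight software's actual API):

# Simplified publish/subscribe broker; topics and payloads are invented.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subs[topic]:
            callback(message)

bus = Broker()
bus.subscribe("cluster/health", lambda m: print("fault manager saw:", m))
bus.subscribe("cluster/health", lambda m: print("ground logger saw:", m))
bus.publish("cluster/health", {"module": "B", "battery_v": 27.4})

Decoupling publishers from subscribers in this way is one reason publish/subscribe suits a cluster whose connectivity fluctuates.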

Journal ArticleDOI
TL;DR: In this paper, four graduate student teams completed a distributed, complex system design task, and their design histories suggest three categories of suboptimal approaches: global rather than local searches, optimizing individual design parameters separately, and sequential rather than concurrent optimization strategies.
Abstract: Large-scale engineering systems require design teams to balance complex sets of considerations using a wide range of design and decision-making skills. Formal, computational approaches for optimizing complex systems offer strategies for arriving at optimal solutions in situations where system integration and design optimization are well-formulated. However, observation of design practice suggests engineers may be poorly prepared for this type of design. Four graduate student teams completed a distributed, complex system design task. Analysis of the teams’ design histories suggests three categories of suboptimal approaches: global rather than local searches, optimizing individual design parameters separately, and sequential rather than concurrent optimization strategies. Teams focused strongly on individual subsystems rather than system-level optimization, and did not use the provided system gradient indicator to understand how changes in individual subsystems impacted the overall system. This suggests the need for curriculum to teach engineering students how to appropriately integrate systems as a whole. [DOI: 10.1115/1.4007840]
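A small numerical example makes the second and third failure modes concrete (the objective below is a toy function invented for illustration, not data from the study): when two design variables are strongly coupled, a single pass that tunes them one at a time stalls far from the optimum that a concurrent search over both variables finds:

# Toy coupled objective: sequential vs. concurrent optimization (illustrative).
def f(x, y):
    return (x - y) ** 2 + 0.1 * (x + y - 10) ** 2   # coupling term + system-level term

grid = [i / 10 for i in range(0, 101)]              # candidate values in [0, 10]

# Sequential: tune one variable at a time, starting from (0, 0).
x, y = 0.0, 0.0
x = min(grid, key=lambda v: f(v, y))                # tune x while y is frozen
y = min(grid, key=lambda v: f(x, v))                # then tune y while x is frozen
print("sequential pass: ", (x, y), "objective:", round(f(x, y), 3))

# Concurrent: search over both variables together.
x2, y2 = min(((a, b) for a in grid for b in grid), key=lambda p: f(*p))
print("concurrent search:", (x2, y2), "objective:", round(f(x2, y2), 3))

Here the sequential pass ends near an objective value of 6, while the concurrent search reaches the true optimum at (5, 5) with objective 0, mirroring the coupling that is missed when subsystem parameters are optimized separately.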

Journal ArticleDOI
TL;DR: An evaluation of European smart grid projects showed that it was very difficult to grasp technological and non-technological key characteristics of this complex system.
Abstract: The electric power grid is a crucial part of societal infrastructure and needs constant attention to maintain its performance and reliability. European grid project investments are currently valued at over 5 billion Euros and are estimated to reach 56 billion by 2020 [2]. Successful smart grid deployment will require a holistic analysis and design process if it is to function properly and in an environmentally sustainable way. The entire system must be treated as integrated, not as isolated parts [3]. System integration and full customer engagement are crucial elements of this development. An evaluation of European smart grid projects showed that it was very difficult to grasp technological and non-technological key characteristics of this complex system [4].

Journal ArticleDOI
TL;DR: A novel integrated monitoring architecture based on Web services is proposed, which offers a universal client for accessing different monitoring systems and thus facilitates system integration in heterogeneous environments.
Abstract: Integrated monitoring has become an important approach for investigation, detection, and policy decisions in many fields. Unfortunately, current monitoring systems are commonly developed by different organizations using specific technologies and platforms, bringing a lot of difficulties for seamless integration and unified access. To address this problem, a novel integrated monitoring architecture based on Web services is proposed, which offers a universal client for accessing different monitoring systems and thus facilitates system integration in a heterogeneous environment. By analyzing the characteristics of sensor-network-based monitoring applications, this paper presents the whole architecture design, which consists of the standardized Web services, management subsystem, configuration subsystem, local monitoring subsystem, and integration monitoring subsystem. Through the integration architecture, the distributed heterogeneous monitoring systems can be accessed through a unified user interface, given the corresponding access ranking. In order to validate the proposed architecture, three different monitoring systems are constructed and integrated. The results show that seamless system integration is achieved and the supervisory efficiency is improved remarkably.
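As a minimal sketch of the unification idea (the service names, readings, and units below are invented for illustration, not the paper's actual Web service interfaces), heterogeneous monitoring back ends can be wrapped behind one standardized operation that a universal client calls without knowing their internals:

# Heterogeneous monitoring back ends behind one standardized interface (hypothetical).
class MonitoringService:
    def get_readings(self):
        raise NotImplementedError          # the one operation the universal client uses

class LegacyHydrologyAdapter(MonitoringService):
    def get_readings(self):
        raw = {"stage_cm": 312}            # stands in for a proprietary protocol
        return [{"sensor": "river-stage", "value": raw["stage_cm"] / 100, "unit": "m"}]

class AirQualityAdapter(MonitoringService):
    def get_readings(self):
        return [{"sensor": "pm25", "value": 18.0, "unit": "ug/m3"}]

# The universal client iterates over services without knowing their internals.
for service in (LegacyHydrologyAdapter(), AirQualityAdapter()):
    print(service.get_readings())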

Journal ArticleDOI
TL;DR: In this article, the authors present an integral telematic platform that enhances tracking and tracing capabilities for vehicles and goods, gives a secure solution to the problem of installing wireless processing units in truck trailers and cabs, identifies drivers and journey itineraries, and assures freight environmental parameters during the journey.

Journal ArticleDOI
Guojian Wang, Linmi Tao, Huijun Di, Xiyong Ye, Yuanchun Shi
TL;DR: An application-oriented service share model for the generalization of vision processing is presented, together with a vision system architecture that can readily integrate computer vision processing and let application modules share services and exchange messages transparently.
Abstract: The complexity of intelligent computer vision systems demands novel system architectures that are capable of integrating various computer vision algorithms into a working system with high scalability. The real-time applications of human-centered computing are based on multiple cameras in current systems, which require a transparent distributed architecture. This paper presents an application-oriented service share model for the generalization of vision processing. Based on the model, a vision system architecture is presented that can readily integrate computer vision processing and make application modules share services and exchange messages transparently. The architecture provides a standard interface for loading various modules and a mechanism for modules to acquire inputs and publish processing results that can be used as inputs by others. Using this architecture, a system can load specific applications without considering the common low-layer data processing. We have implemented a prototype vision system based on the proposed architecture. The latency performance and 3-D track function were tested with the prototype system. The architecture is scalable and open, so it will be useful for supporting the development of an intelligent vision system, as well as a distributed sensor system.

Book
31 Dec 2012
TL;DR: John Wang has published over 100 refereed papers and seven books, has developed several computer software programs based on his research findings, and is the Editor of Data Warehousing and Mining: Concepts, Methodologies, Tools, and Applications.
Abstract: John Wang is a professor in the Department of Information & Operations Management at Montclair State University, USA. Having received a scholarship award, he came to the USA and completed his PhD in operations research from Temple University. Due to his extraordinary contributions beyond a tenured full professor, Dr. Wang has been honored with a special range adjustment in 2006. He has published over 100 refereed papers and seven books. He has also developed several computer software programs based on his research findings. He is the Editor-in-Chief of International Journal of Applied Management Science, International Journal of Operations Research and Information Systems, and International Journal of Information Systems and Supply Chain Management. He is the Editor of Data Warehousing and Mining: Concepts, Methodologies, Tools, and Applications (six-volume) and the Editor of the Encyclopedia of Data Warehousing and Mining, 1st (two-volume) and 2nd (four-volume). His long-term research goal is on the synergy of operations research, data mining and cybernetics. John Wang (Montclair State University, USA)

Book
03 Dec 2012
TL;DR: In this book, the authors present a broad overview of WSN technology, including an introduction to sensor and sensing technologies; an extensive section of case studies detailing the development of a number of wireless sensor network applications; frameworks for WSN systems integration; real-world applications in medical and vehicular sensor networks; and a Foreword by Nobel Laureate Professor Martin Perl of Stanford University.
Abstract: It is a general trend in computing that computers are becoming ever smaller and ever more interconnected. Sensor networks, large networks of small, simple devices, are a logical extreme of this trend. Wireless sensor networks (WSNs) are attracting an increasing degree of research interest, with a growing number of industrial applications starting to emerge. Two of these applications, personal health monitoring and emergency/disaster recovery, are the focus of the European Commission project ProSense: Promote, Mobilize, Reinforce and Integrate Wireless Sensor Networking Research and Researchers. This hands-on introduction to WSN systems development presents broad coverage of topics in the field, contributed by researchers involved in the ProSense project. An emphasis is placed on the practical knowledge required for the successful implementation of WSNs. Divided into four parts, the first part covers basic issues of sensors, software, and position-based routing protocols. Part two focuses on multidisciplinary issues, including sensor network integration, mobility aspects, georouting, medical applications, and vehicular sensor networks. The remaining two parts present case studies and further applications. Topics and features: presents a broad overview of WSN technology, including an introduction to sensor and sensing technologies; contains an extensive section on case studies, providing details of the development of a number of WSN applications; discusses frameworks for WSN systems integration, through which WSN technology will become fundamental to the Future Internet concept; investigates real-world applications of WSN systems in medical and vehicular sensor networks; includes a Foreword by Nobel Laureate Professor Martin Perl of Stanford University. Providing holistic coverage of WSN technology, this text/reference will enable graduate students of computer science, electrical engineering and telecommunications to master the specific domains of this emerging area. The book will also be a valuable resource for researchers and practitioners interested in entering the field.