Author
Dimitrios Georgakopoulos
Other affiliations: Telcordia Technologies, Monroe Community College, Australian National University
Bio: Dimitrios Georgakopoulos is an academic researcher from Swinburne University of Technology. The author has contributed to research in topics: Wireless sensor network & Middleware. The author has an h-index of 18, co-authored 56 publications receiving 1904 citations. Previous affiliations of Dimitrios Georgakopoulos include Telcordia Technologies & Monroe Community College.
Papers published on a yearly basis
Papers
914 citations
05 Jun 2000
TL;DR: This paper introduces a Polymorphic Process Model (PPM) that supports both reference process-based and service-based MEPs, and illustrates that these key PPM capabilities permit the late binding and use of multiple activity implementations within a MEP without modifying the MEP at run time or enumerating the alternative implementations at specification time.
Abstract: Multi-enterprise processes (MEPs) are workflows consisting of a set of activities that are implemented by different enterprises. Tightly coupled Virtual Enterprises (VEs) typically agree on abstract MEPs (reference MEPs), to which each enterprise contributes single-enterprise processes (SEPs) that implement and refine the activities in the reference MEP. On the other end of the spectrum, loosely coupled VEs use service-based MEPs that fuse together heterogeneous services implemented and provided by different enterprises. Existing process models usually couple activities with their implementation; therefore, they cannot effectively support such MEPs. In this paper, we introduce a Polymorphic Process Model (PPM) that supports both reference process-based and service-based MEPs. To accomplish this, PPM decouples the activity interface from the activity implementation, and provides process polymorphism to support their mapping. In particular, PPM determines activity types from the activity interfaces, permits activity interface subtyping, and provides for the mapping of MEP activity types to concrete implementations via interface matching. We illustrate that these key PPM capabilities permit the late binding and use of multiple activity implementations within a MEP without modifying the MEP at run time or enumerating the alternative implementations at specification time.
172 citations
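The abstract above describes decoupling activity interfaces from implementations and binding them late via interface matching. The following is a minimal, hypothetical Python sketch of that idea; all class and activity names (`ActivityInterface`, `Registry`, "Ship") are illustrative and not from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActivityInterface:
    name: str
    inputs: frozenset   # parameter names the activity requires
    outputs: frozenset  # parameter names the activity guarantees

    def subtype_of(self, other):
        # Substitutable if it demands no more inputs than the required
        # interface supplies and produces at least the required outputs.
        return self.inputs <= other.inputs and self.outputs >= other.outputs

class Registry:
    def __init__(self):
        self._impls = []  # (interface, callable) pairs contributed by SEPs

    def register(self, interface, impl):
        self._impls.append((interface, impl))

    def bind(self, required):
        # Late binding: pick any implementation whose interface is a
        # subtype of the MEP activity's interface, at run time.
        for iface, impl in self._impls:
            if iface.subtype_of(required):
                return impl
        raise LookupError(f"no implementation matches {required.name}")

# A reference-MEP activity type, and one enterprise's implementation of it:
ship = ActivityInterface("Ship", frozenset({"order"}), frozenset({"tracking_id"}))
reg = Registry()
reg.register(
    ActivityInterface("ShipFast", frozenset({"order"}),
                      frozenset({"tracking_id", "eta"})),
    lambda order: {"tracking_id": "F-1", "eta": "1d"},
)

# The MEP names only the abstract activity type, never the implementation.
print(reg.bind(ship)({"order": 42}))
```

New implementations can be registered at any time without editing the MEP, which is the point of the polymorphic mapping: the MEP enumerates activity types, not implementations.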
12 Dec 2013
TL;DR: In this article, the authors present a collaborative mobile sensing framework called Mobile Sensor Data EngiNe (MOSDEN) that can operate on smartphones, capturing and sharing sensed data between multiple distributed applications and users.
Abstract: Mobile devices are rapidly becoming the primary computing devices in people's lives. Application delivery platforms like Google Play and the Apple App Store have transformed mobile phones into intelligent computing devices by means of applications that can be downloaded and installed instantly. Many of these applications take advantage of the plethora of sensors installed on the mobile device to deliver an enhanced user experience. The sensors on the smartphone provide the opportunity to develop innovative mobile opportunistic sensing applications in many sectors, including healthcare, environmental monitoring and transportation. In this paper, we present a collaborative mobile sensing framework named Mobile Sensor Data EngiNe (MOSDEN) that can operate on smartphones, capturing and sharing sensed data between multiple distributed applications and users. MOSDEN follows a component-based design philosophy, promoting reuse for easy and quick opportunistic sensing application deployments. MOSDEN separates application-specific processing from sensing, storing and sharing. MOSDEN is scalable and requires minimal development effort from the application developer. We have implemented our framework on Android-based mobile platforms and evaluated its performance to validate the feasibility and efficiency of MOSDEN operating collaboratively in mobile opportunistic sensing applications. Experimental outcomes and lessons learnt conclude the paper.
72 citations
09 Apr 1992
TL;DR: In this article, a ticket is used to ensure global serializability by preventing multidatabase transactions from being serialized in different ways at the participating local database systems (LDBS).
Abstract: Our invention guarantees global serializability by preventing multidatabase transactions from being serialized in different ways at the participating local database systems (LDBS). In one embodiment, tickets are used to inform the MDBS of the relative serialization order of the subtransactions of each global transaction at each LDBS. A ticket is a (logical) timestamp whose value is stored as a regular data item in each LDBS. Each subtransaction of a global transaction is required to issue the take-a-ticket operations, which consist of reading the value of the ticket (i.e., read ticket) and incrementing it (i.e., write (ticket+1)) through regular data manipulation operations. Only the subtransactions of global transactions take tickets. When different global transactions issue subtransactions at a local database, each subtransaction will include the take-a-ticket operations. Therefore, the ticket values associated with each global subtransaction at the MDBS reflect the local serialization order at each LDBS. The MDBS in accordance with our invention examines the ticket values to determine the local serialization order at the different LDBSs and only authorizes the transactions to commit if the serialization order of the global transactions is the same at each LDBS. In another embodiment, the LDBSs employ rigorous schedulers and the prepared-to-commit messages for each subtransaction are used by the MDBS to ensure global serializability.
71 citations
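The take-a-ticket mechanism in the abstract above is concrete enough to sketch. The following is a minimal Python illustration under the stated scheme (read the ticket, write ticket+1, then compare per-LDBS orders at the MDBS); the class and transaction names are illustrative, not from the patent:

```python
class LocalDB:
    """A local database system holding a ticket as a regular data item."""
    def __init__(self):
        self.ticket = 0
        self.orders = {}  # global transaction id -> ticket value taken here

    def take_a_ticket(self, txn_id):
        # The take-a-ticket operations: read the ticket, write ticket + 1.
        value = self.ticket
        self.ticket = value + 1
        self.orders[txn_id] = value
        return value

def globally_serializable(ldbs, txn_ids):
    # The MDBS authorizes commit only if the relative ticket order of the
    # global transactions is the same at every participating LDBS.
    orders = [sorted(txn_ids, key=lambda t: db.orders[t]) for db in ldbs]
    return all(order == orders[0] for order in orders)

db1, db2 = LocalDB(), LocalDB()
db1.take_a_ticket("G1"); db1.take_a_ticket("G2")  # G1 before G2 at db1
db2.take_a_ticket("G2"); db2.take_a_ticket("G1")  # G2 before G1 at db2

# Conflicting local serialization orders: the MDBS must abort one transaction.
print(globally_serializable([db1, db2], ["G1", "G2"]))  # False
```

Because the ticket is read and written through regular data operations, each LDBS's own concurrency control serializes the take-a-ticket steps, which is what makes the ticket values a faithful witness of the local order.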
29 Jun 2001
TL;DR: In this article, a workflow management method and system including a process definition tool enabling a user to model a workflow process definition is presented, and a workflow engine is configured to interpret the process definition to perform workflow management tasks.
Abstract: A workflow management method and system including a process definition tool enabling a user to model a workflow process definition. A workflow management engine is configured to interpret the workflow process definition to perform workflow management tasks. The definition tool and workflow management engine are configured to support primitives including an inhibitor primitive that enables modeling processes having mutually exclusive interdependencies, an option primitive that may be instantiated zero or more times, a group assignment primitive that supports group activity, and an activity placeholder that enables the specification of activities whose concrete types may be unknown at process definition time.
55 citations
Cited by
TL;DR: This paper surveys context awareness from an IoT perspective and addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT.
Abstract: As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown a significant growth of sensor deployments over the past decade and has predicted a significant increase in the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data plays a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing conducted over the last decade (2001-2011) based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.
2,542 citations
01 Jul 2007
TL;DR: Technologies and approaches that unify the principles and concepts of SOA with those of event-based programming are reviewed, and an approach is proposed to extend the conventional SOA to cater for essential ESB requirements, including capabilities such as service orchestration, "intelligent" routing, provisioning, integrity and security of messages, as well as service management.
Abstract: Service-oriented architecture (SOA) is an emerging approach that addresses the requirements of loosely coupled, standards-based, and protocol-independent distributed computing. Typically, business operations running in an SOA comprise a number of invocations of these different components, often in an event-driven or asynchronous fashion that reflects the underlying business process needs. To build an SOA, a highly distributable communications and integration backbone is required. This functionality is provided by the Enterprise Service Bus (ESB), an integration platform that utilizes Web services standards to support a wide variety of communications patterns over multiple transport protocols and deliver value-added capabilities for SOA applications. This paper reviews technologies and approaches that unify the principles and concepts of SOA with those of event-based programming. The paper also focuses on the ESB and describes a range of functions that are designed to offer a manageable, standards-based SOA backbone that extends middleware functionality throughout by connecting heterogeneous components and systems, and offers integration services. Finally, the paper proposes an approach to extend the conventional SOA to cater for essential ESB requirements, including capabilities such as service orchestration, "intelligent" routing, provisioning, and integrity and security of messages, as well as service management. The layers in this extended SOA, in short xSOA, are used to classify research issues and current research activities.
2,035 citations
06 Jul 2004
TL;DR: An overview of recent research efforts on automatic Web service composition from both the workflow and AI planning research communities is given.
Abstract: In today's Web, Web services are created and updated on the fly. It is already beyond human ability to analyze them and generate composition plans manually. A number of approaches have been proposed to tackle this problem; most of them are inspired by research in cross-enterprise workflow and AI planning. This paper gives an overview of recent research efforts on automatic Web service composition from both the workflow and AI planning research communities.
1,216 citations
TL;DR: This keynote argues that there is in fact an even more profound change that the authors are facing: the programmability aspect that is intimately associated with all IoT systems.
1,171 citations
19 May 2004
TL;DR: This paper presents an open, fair and dynamic QoS computation model for web services selection through implementation of and experimentation with a QoS registry in a hypothetical phone service provisioning marketplace application.
Abstract: The emerging Service-Oriented Computing (SOC) paradigm promises to enable businesses and organizations to collaborate in an unprecedented way by means of standard web services. To support rapid and dynamic composition of services in this paradigm, web services that meet requesters' functional requirements must be able to be located and bound dynamically from a large and constantly changing number of service providers based on their Quality of Service (QoS). In order to enable quality-driven web service selection, we need an open, fair, dynamic and secure framework to evaluate the QoS of a vast number of web services. The fair computation and enforcement of the QoS of web services should have minimal overhead yet be able to achieve sufficient trust from both service requesters and providers. In this paper, we present our open, fair and dynamic QoS computation model for web services selection through implementation of and experimentation with a QoS registry in a hypothetical phone service provisioning marketplace application.
969 citations
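The abstract above describes ranking candidate services by their QoS. A common way to do this, sketched below in Python, is to normalize each attribute across the candidates and take a weighted sum; the attribute names, weights, and service names are illustrative assumptions, not the paper's actual computation model:

```python
def qos_score(services, weights):
    # services: {name: {attribute: raw value}}. Higher raw values are
    # assumed better for every attribute (invert cost-like ones first).
    attrs = weights.keys()
    lo = {a: min(s[a] for s in services.values()) for a in attrs}
    hi = {a: max(s[a] for s in services.values()) for a in attrs}

    def norm(a, v):
        # Min-max normalize to [0, 1]; constant attributes score 1.
        return 1.0 if hi[a] == lo[a] else (v - lo[a]) / (hi[a] - lo[a])

    return {name: sum(weights[a] * norm(a, s[a]) for a in attrs)
            for name, s in services.items()}

candidates = {
    "svcA": {"availability": 0.99, "throughput": 120},
    "svcB": {"availability": 0.95, "throughput": 300},
}
scores = qos_score(candidates, {"availability": 0.7, "throughput": 0.3})
print(max(scores, key=scores.get))  # svcA: availability dominates at these weights
```

In a registry setting, the raw attribute values would come from monitored or requester-reported measurements, which is where the paper's concern with fair and trusted QoS computation enters.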