
Showing papers on "Service provider" published in 2014


Posted Content
TL;DR: In this article, the authors discuss existing shared mobility business models in an effort to unveil the optimal relationship between service providers (agents) and the local governments (principals) to achieve the common objective of sustainable mobility.
Abstract: The public perception of shared goods has changed substantially in the past few years. While co-owning properties has been widely accepted for a while (e.g., timeshares), the notion of sharing bikes, cars, or even rides on an on-demand basis is just now starting to gain widespread popularity. The emerging “sharing economy” is particularly interesting in the context of cities that struggle with population growth and increasing density. While sharing vehicles promises to reduce inner-city traffic, congestion, and pollution problems, the associated business models are not without problems themselves. Using agency theory, in this article we discuss existing shared mobility business models in an effort to unveil the optimal relationship between service providers (agents) and the local governments (principals) to achieve the common objective of sustainable mobility. Our findings show private or public models are fraught with conflicts, and point to a merit model as the most promising alignment of the strengths of agents and principals.

698 citations


BookDOI
25 Feb 2014
TL;DR: This chapter discusses goal setting, follow-up, and goal monitoring in the context of Goal Attainment Scaling, with a focus on the second half of the 1990s.
Abstract: There is an extensive literature on Goal Attainment Scaling (GAS), but the publications are widely scattered and often inaccessible, covering several foreign countries and many professional disciplines and fields of application. This book provides both a user manual and a complete reference work on GAS, including a comprehensive account of what the method is, what its strengths and limitations are, how it can be used, and what it can offer. The book is designed to be of interest to service providers, program directors and administrators, service and business organizations, program evaluators, researchers, and students in a variety of fields. No previous account of GAS has provided an up-to-date, comprehensive description and explanation of the technique. The chapters include a basic "how to do it" handbook, step-by-step implementation instructions, frequently occurring problems and what should be done about them, methods for monitoring the quality of the goal setting process, and a discussion of policy and administration issues. There are many illustrations from actual applications including examples of goals scaled for the individual, the specific program, the agency, or the total system. Procedures for training and estimates of training costs are also provided.

637 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss existing shared mobility business models in an effort to unveil the optimal relationship between service providers (agents) and the local governments (principals) to achieve the common objective of sustainable mobility.
Abstract: The public perception of shared goods has changed substantially in the past few years. While co-owning properties has been widely accepted for a while (e.g., timeshares), the notion of sharing bikes, cars, or even rides on an on-demand basis is just now starting to gain widespread popularity. The emerging “sharing economy” is particularly interesting in the context of cities that struggle with population growth and increasing density. While sharing vehicles promises to reduce inner-city traffic, congestion, and pollution problems, the associated business models are not without problems themselves. Using agency theory, in this article we discuss existing shared mobility business models in an effort to unveil the optimal relationship between service providers (agents) and the local governments (principals) to achieve the common objective of sustainable mobility. Our findings show private or public models are fraught with conflicts, and point to a merit model as the most promising alignment of the strengths of agents and principals.

576 citations


Journal ArticleDOI
TL;DR: In order to reduce signaling traffic and achieve better performance, this article proposes a criterion to bundle multiple functions of a virtualized evolved packet core in a single physical device or a group of adjacent devices.
Abstract: As mobile network users look forward to the connectivity speeds of 5G networks, service providers are facing challenges in complying with connectivity demands without substantial financial investments. Network function virtualization (NFV) is introduced as a new methodology that offers a way out of this bottleneck. NFV is poised to change the core structure of telecommunications infrastructure to be more cost-efficient. In this article, we introduce an NFV framework, and discuss the challenges and requirements of its use in mobile networks. In particular, an NFV framework in the virtual environment is proposed. Moreover, in order to reduce signaling traffic and achieve better performance, this article proposes a criterion to bundle multiple functions of a virtualized evolved packet core in a single physical device or a group of adjacent devices. The analysis shows that the proposed grouping can reduce the network control traffic by 70 percent.

462 citations


Proceedings ArticleDOI
01 Nov 2014
TL;DR: This paper presents a formal model for resource allocation of virtualized network functions within NFV environments, a problem it refers to as Virtual Network Function Placement (VNF-P), and evaluates its execution speed.
Abstract: Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.
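The paper formulates VNF-P as a formal resource-allocation model; as a rough illustration of the placement decision it addresses, the sketch below shows a naive greedy first-fit assignment of VNF building blocks to servers with limited CPU capacity. This is only an assumed baseline for intuition, not the paper's model; the function and variable names are hypothetical.

```python
# Illustrative sketch only: a naive first-fit placement of VNF building blocks
# onto servers with limited CPU capacity. The paper's VNF-P model is a formal
# optimization model; this greedy baseline merely shows the flavor of the
# placement decision (names and numbers are hypothetical).

def place_vnfs(vnf_demands, server_capacities):
    """Assign each VNF (CPU demand) to the first server with enough headroom."""
    remaining = list(server_capacities)      # CPU units left on each server
    placement = {}                           # vnf index -> server index
    for vnf, demand in enumerate(vnf_demands):
        for srv, free in enumerate(remaining):
            if free >= demand:
                placement[vnf] = srv
                remaining[srv] -= demand
                break
        else:
            raise RuntimeError(f"No capacity left for VNF {vnf} (demand {demand})")
    return placement

if __name__ == "__main__":
    # A toy service chain of four virtualized functions and two servers.
    print(place_vnfs(vnf_demands=[4, 2, 3, 1], server_capacities=[6, 5]))
    # -> {0: 0, 1: 0, 2: 1, 3: 1}
```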

461 citations


Journal ArticleDOI
TL;DR: This work defines two models for trustworthiness management starting from the solutions proposed for P2P and social networks and shows how the proposed models can effectively isolate almost any malicious nodes in the network at the expense of an increase in the network traffic for feedback exchange.
Abstract: The integration of social networking concepts into the Internet of things has led to the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way with respect to their owners with the benefits of improving the network scalability in information/service discovery. Within this scenario, we focus on the problem of understanding how the information provided by members of the social IoT has to be processed so as to build a reliable system on the basis of the behavior of the objects. We define two models for trustworthiness management starting from the solutions proposed for P2P and social networks. In the subjective model each node computes the trustworthiness of its friends on the basis of its own experience and on the opinion of the friends in common with the potential service providers. In the objective model, the information about each node is distributed and stored making use of a distributed hash table structure so that any node can make use of the same information. Simulations show how the proposed models can effectively isolate almost any malicious nodes in the network at the expense of an increase in the network traffic for feedback exchange.
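To make the subjective model concrete, the sketch below blends a node's own experience with the opinions of friends it has in common with a potential service provider. The weighting factor and function names are hypothetical, not the paper's exact formulation.

```python
# Illustrative sketch of the *idea* behind the subjective trust model: a node
# blends its own experience with a provider against the opinions of friends it
# has in common with that provider. The weighting (alpha) and function names
# are hypothetical, not the paper's exact formulation.

def subjective_trust(own_experience, common_friend_opinions, alpha=0.6):
    """own_experience and opinions are scores in [0, 1]; higher means more trusted."""
    if not common_friend_opinions:
        return own_experience
    indirect = sum(common_friend_opinions) / len(common_friend_opinions)
    return alpha * own_experience + (1 - alpha) * indirect

# A node that had good direct interactions (0.9) but whose common friends
# report poor behavior (0.2, 0.3) ends up with a moderate trust score.
print(round(subjective_trust(0.9, [0.2, 0.3]), 3))   # 0.6*0.9 + 0.4*0.25 = 0.64
```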

427 citations


Proceedings ArticleDOI
02 Apr 2014
TL;DR: The design and implementation of a network virtualization solution for multi-tenant datacenters is presented, which aims to meet the needs of tenants without operator intervention while preserving the service providers' own operational flexibility and efficiency.
Abstract: Multi-tenant datacenters represent an extremely challenging networking environment. Tenants want the ability to migrate unmodified workloads from their enterprise networks to service provider datacenters, retaining the same networking configurations of their home network. The service providers must meet these needs without operator intervention while preserving their own operational flexibility and efficiency. Traditional networking approaches have failed to meet these tenant and provider requirements. Responding to this need, we present the design and implementation of a network virtualization solution for multi-tenant datacenters.

407 citations


Journal ArticleDOI
TL;DR: In this paper, a conceptual analysis of two approaches to understanding service perspectives, service logic (SL) and service-dominant logic (SDL), reveals direct and indirect marketing implications.
Abstract: Purpose – The purpose of this conceptual paper is to analyse the implications generated by a service perspective. Design/methodology/approach – A conceptual analysis of two approaches to understanding service perspectives, service logic (SL) and service-dominant logic (SDL), reveals direct and indirect marketing implications. Findings – The SDL is based on a metaphorical view of co-creation and value co-creation, in which the firm, customers and other actors participate in the process that leads to value for customers. The approach is firm-driven; the service provider drives value creation. The managerial implications are not service perspective-based, and co-creation may be imprisoned by its metaphor. In contrast, SL takes an analytical approach, with co-creation concepts that can significantly reinvent marketing from a service perspective. Value gets created in customer processes, and value creation is customer driven. Ten managerial SL principles derived from these analyses offer theoretical and practi...

404 citations


Journal ArticleDOI
TL;DR: This work proposes a new Secure Outsourced ABE system, which supports both secure outsourced key-issuing and decryption and proposes an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way.
Abstract: Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing and decryption are getting prohibitively high. Although the existing Outsourced ABE solutions are able to offload some intensive computing tasks to a third party, the verifiability of results returned from the third party has yet to be addressed. Aiming at tackling the challenge above, we propose a new Secure Outsourced ABE system, which supports both secure outsourced key-issuing and decryption. Our new method offloads all access policy and attribute related operations in the key-issuing process or decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the attribute authority and eligible users to perform locally. In addition, for the first time, we propose an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way. Extensive security and performance analysis show that the proposed schemes are proven secure and practical.

403 citations


Journal ArticleDOI
TL;DR: By dividing the research into four main groups based on the problem-solving approaches and identifying the investigated quality of service parameters, intended objectives, and developing environments, beneficial results and statistics are obtained that can contribute to future research.
Abstract: The increasing tendency of network service users to use cloud computing encourages web service vendors to supply services that have different functional and nonfunctional (quality of service) features and provide them in a service pool. Based on supply and demand rules and because of the exuberant growth of the services that are offered, cloud service brokers face tough competition against each other in providing quality of service enhancements. Such competition leads to a difficult and complicated process to provide simple service selection and composition in supplying composite services in the cloud, which should be considered an NP-hard problem. How to select appropriate services from the service pool, overcome composition restrictions, determine the importance of different quality of service parameters, focus on the dynamic characteristics of the problem, and address rapid changes in the properties of the services and network appear to be among the most important issues that must be investigated and addressed. In this paper, utilizing a systematic literature review, important questions that can be raised about the research performed in addressing the above-mentioned problem have been extracted and put forth. Then, by dividing the research into four main groups based on the problem-solving approaches and identifying the investigated quality of service parameters, intended objectives, and developing environments, beneficial results and statistics are obtained that can contribute to future research.
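Since the review centers on QoS-driven service selection, the sketch below shows one of the simplest baselines such surveys typically cover: weighted-sum scoring over normalized QoS attributes. The candidate services, attribute values, and weights are hypothetical, and this is not a specific method from the review.

```python
# Minimal sketch of weighted-sum QoS scoring, one of the simplest baselines that
# surveys of service selection/composition typically cover. Attribute names,
# weights, and candidate values are hypothetical.

candidates = {
    "svcA": {"response_time": 120, "cost": 0.05, "availability": 0.99},
    "svcB": {"response_time": 300, "cost": 0.02, "availability": 0.95},
    "svcC": {"response_time": 180, "cost": 0.04, "availability": 0.999},
}
weights = {"response_time": 0.5, "cost": 0.2, "availability": 0.3}
lower_is_better = {"response_time", "cost"}

def normalize(attr, value):
    vals = [c[attr] for c in candidates.values()]
    lo, hi = min(vals), max(vals)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1 - score if attr in lower_is_better else score

def utility(service):
    return sum(weights[a] * normalize(a, v) for a, v in candidates[service].items())

best = max(candidates, key=utility)
print(best, {s: round(utility(s), 3) for s in candidates})
```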

367 citations


Journal ArticleDOI
TL;DR: A pair of efficient and light-weight authentication protocols is presented to enable remote WBAN users to anonymously enjoy healthcare service; the protocols outperform the existing schemes in terms of a better trade-off between desirable security properties and computational overhead, nicely meeting the needs of WBANs.
Abstract: Wireless body area network (WBAN) has been recognized as one of the promising wireless sensor technologies for improving healthcare service, thanks to its capability of seamlessly and continuously exchanging medical information in real time. However, the lack of a clear in-depth defense line in such a new networking paradigm would make its potential users worry about the leakage of their private information, especially to those unauthenticated or even malicious adversaries. In this paper, we present a pair of efficient and light-weight authentication protocols to enable remote WBAN users to anonymously enjoy healthcare service. In particular, our authentication protocols are rooted with a novel certificateless signature (CLS) scheme, which is computationally efficient and provably secure against existential forgery on adaptively chosen message attack in the random oracle model. Also, our designs ensure that application or service providers have no privilege to disclose the real identities of users. Even the network manager, which serves as private key generator in the authentication protocols, is prevented from impersonating legitimate users. The performance of our designs is evaluated through both theoretic analysis and experimental simulations, and the comparative studies demonstrate that they outperform the existing schemes in terms of better trade-off between desirable security properties and computational overhead, nicely meeting the needs of WBANs.

Journal ArticleDOI
TL;DR: A survey of state-of-the-art Cloud service selection approaches, which are analyzed from the following five perspectives: decision-making techniques; data representation models; parameters and characteristics of Cloud services; contexts; and purposes.

Journal ArticleDOI
TL;DR: A novel hybrid MCDM model that combines fuzzy Decision Making Trial and Evaluation Laboratory Model (DEMATEL), fuzzy Analytical Network Process (ANP) and fuzzy Visekriterijumska Optimizacija i kompromisno Resenje (VIKOR) methods is developed and successfully performed in this paper for the City of Belgrade.
Abstract: City logistics (CL) tends to increase efficiency and mitigate the negative effects of logistics processes and activities and at the same time to support the sustainable development of urban areas. Accordingly, various measures and initiatives are being applied and various conceptual solutions are being defined. The effects vary depending on the characteristics of the city. This paper proposes a framework for the selection of the CL concept which would be most appropriate for different participants, stakeholders, and which would comply with attributes of the surroundings. CL participants have different, usually conflicting goals and interests, so it is necessary to define a large number of criteria for concepts evaluation. On the other hand, the importance of the criteria is dependent on the specific situation, i.e., a large number of factors describing the surroundings. In situations like this, selecting the best alternative is a complex multi-criteria decision-making (MCDM) problem consisting of conflicting and uncertain elements. A novel hybrid MCDM model that combines fuzzy Decision Making Trial and Evaluation Laboratory Model (DEMATEL), fuzzy Analytical Network Process (ANP) and fuzzy Visekriterijumska Optimizacija i kompromisno Resenje (VIKOR) methods is developed in this paper. The model provides support to decision makers (planners, city administration, logistics service providers, users, etc.) when selecting the CL concept, which is successfully performed in this paper for the City of Belgrade.
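For readers unfamiliar with the final ranking step of such hybrid models, the sketch below runs a plain (crisp, non-fuzzy) VIKOR compromise ranking over a tiny hypothetical decision matrix. The paper's model is a fuzzy DEMATEL-ANP-VIKOR hybrid, so the fuzzification and the DEMATEL/ANP weighting are deliberately omitted here, and the city-logistics concepts and scores are made up.

```python
# Sketch of the crisp VIKOR ranking step only (the paper's model is a *fuzzy*
# DEMATEL-ANP-VIKOR hybrid; fuzzification and DEMATEL/ANP weighting are omitted).
# The city-logistics concepts and scores below are hypothetical.

def vikor(scores, weights, v=0.5):
    """scores: {alternative: [criterion values]}, all criteria treated as benefit-type."""
    names = list(scores)
    m = len(weights)
    best = [max(scores[a][j] for a in names) for j in range(m)]
    worst = [min(scores[a][j] for a in names) for j in range(m)]
    S, R = {}, {}
    for a in names:
        terms = [weights[j] * (best[j] - scores[a][j]) / (best[j] - worst[j])
                 for j in range(m)]
        S[a], R[a] = sum(terms), max(terms)
    s_min, s_max = min(S.values()), max(S.values())
    r_min, r_max = min(R.values()), max(R.values())
    Q = {a: v * (S[a] - s_min) / (s_max - s_min)
            + (1 - v) * (R[a] - r_min) / (r_max - r_min) for a in names}
    return sorted(Q, key=Q.get)          # lower Q = better compromise

concepts = {"CL-concept-1": [7, 5, 8], "CL-concept-2": [6, 9, 6], "CL-concept-3": [8, 6, 5]}
print(vikor(concepts, weights=[0.5, 0.3, 0.2]))
```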

Journal ArticleDOI
TL;DR: The moderate level of evidence indicates that permanent supportive housing is promising, but research is needed to clarify the model and determine the most effective elements for various subpopulations.
Abstract: Permanent supportive housing for individuals with mental illness and substance abuse is predicated on the simple idea that housing has an impact on health and should be part of the treatment and recovery process. Service providers offer ongoing support and collaborate with property managers to preserve tenancy and help individuals resolve crisis situations and other issues. This comprehensive literature review (1995–2012) found substantial research evidence demonstrating that this approach has been successful, but more rigorous studies are needed to clarify which components of the model are most effective and for which subpopulations.

Journal ArticleDOI
TL;DR: In this article, the authors compare the alternatives to home delivery developed by French and German parcel delivery operators, namely pick-up points in stores and automated locker networks, with reference to the strategies of service providers and e-commerce firms as well as consumer preferences.
Abstract: In Europe, shopping habits have changed fast during the last decade and a high percentage of consumers now shop online. E-commerce for physical goods generates a significant demand for dedicated delivery services, and results in increasingly difficult last mile logistics. In particular, home delivery services, which are usually the preferred option of online consumers, contribute to the atomization of parcel flows, thus causing particular problems within the urban areas. However, alternative delivery solutions are growing fast, especially in metropolitan areas. The purpose of this article is to compare the alternatives to home delivery that have been developed by French and German parcel delivery operators, namely pick-up points in stores and automated locker networks. The paper includes an analysis of the key drivers of the development of the two emblematic delivery services (pick-up points and lockers), with reference to the strategies of service providers and e-commerce firms as well as consumer preferences.

Journal ArticleDOI
TL;DR: In this article, the authors explore the forms that combinations of digital manufacturing, logistics and equipment use are likely to take and how these novel combinations may affect the relationship among logistics service providers (LSPs), users and manufacturers of equipment.
Abstract: Purpose – The purpose of this paper is to explore the forms that combinations of digital manufacturing, logistics and equipment use are likely to take and how these novel combinations may affect the relationship among logistics service providers (LSPs), users and manufacturers of equipment. Design/methodology/approach – Brian Arthur’s theory of combinatorial technological evolution is applied to examine possible digital manufacturing-driven transformations. The F-18 Super Hornet is used as an illustrative example of a service supply chain for a complex product. Findings – The introduction of digital manufacturing will likely result in hybrid solutions, combining conventional logistics, digital manufacturing and user operations. Direct benefits can be identified in the forms of life cycle extension and the increased availability of parts in challenging locations. Furthermore, there are also opportunities for both equipment manufacturers and LSPs to adopt new roles, thereby supporting the efficient and sust...

Patent
18 Feb 2014
TL;DR: In this paper, a method and a system for connecting a service provider and a user at a remote location relative to the service provider, via a network based telecommunications device, are provided.
Abstract: A method and a system for connecting a service provider and a user at a remote location relative to the service provider, via a network based telecommunications device, are provided. The method includes utilizing a network operable terminal for transmitting communications between the service provider and the user, employing a display screen depicting a user selectable options menu, corresponding with service functions offered by the service provider, and enabling the user to choose an option from the user selectable options menu to initiate a corresponding communication to the service provider. The system includes a user operable terminal including a user interface display screen with user selectable menu options that are changeable in accordance with differing modes of operation, an internal processing unit configured for providing at least one selectable menu option; and a gateway service platform configured for transmitting an option selected from the at least one selectable menu option to and from one of the user or the provider.

Journal ArticleDOI
TL;DR: TrustedDB is an outsourced database prototype that allows clients to execute SQL queries with privacy and under regulatory compliance constraints by leveraging server-hosted, tamper-proof trusted hardware in critical query processing stages, thereby removing any limitations on the type of supported queries.
Abstract: Traditionally, as soon as confidentiality becomes a concern, data are encrypted before outsourcing to a service provider. Any software-based cryptographic constructs then deployed, for server-side query processing on the encrypted data, inherently limit query expressiveness. Here, we introduce TrustedDB, an outsourced database prototype that allows clients to execute SQL queries with privacy and under regulatory compliance constraints by leveraging server-hosted, tamper-proof trusted hardware in critical query processing stages, thereby removing any limitations on the type of supported queries. Despite the cost overhead and performance limitations of trusted hardware, we show that the costs per query are orders of magnitude lower than any (existing or) potential future software-only mechanisms. TrustedDB is built and runs on actual hardware, and its performance and costs are evaluated here.

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results demonstrate that the proposed scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.
Abstract: Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using the cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of a certain file, which potentially puts the quality of the so-called ‘auditing-as-a-service’ at risk. Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-size blocks as the basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.
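The coarse- versus fine-grained argument can be seen with plain hashing: with large fixed-size blocks, a one-byte edit forces the whole block to be re-authenticated, while smaller blocks shrink the affected range. The sketch below uses SHA-256 only as a stand-in; the paper's scheme uses BLS-based authenticators and a richer update model, and the file sizes are hypothetical.

```python
# Illustration of the coarse- vs fine-grained update argument using plain SHA-256
# over fixed-size blocks (the paper's scheme uses BLS-based authenticators; this
# sketch only shows how block granularity determines how much data must be
# re-authenticated for a tiny edit).

import hashlib

def block_digests(data: bytes, block_size: int):
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

data = bytearray(b"x" * 4096)                      # a 4 KiB outsourced file (toy example)
before_coarse = block_digests(bytes(data), 2048)   # 2 blocks of 2 KiB
before_fine   = block_digests(bytes(data), 256)    # 16 blocks of 256 B

data[10] = ord(b"y")                               # a single-byte update

after_coarse = block_digests(bytes(data), 2048)
after_fine   = block_digests(bytes(data), 256)

changed_coarse = sum(a != b for a, b in zip(before_coarse, after_coarse))
changed_fine   = sum(a != b for a, b in zip(before_fine, after_fine))
print(f"coarse: recompute {changed_coarse * 2048} bytes of authenticated data")
print(f"fine:   recompute {changed_fine * 256} bytes of authenticated data")
# -> 2048 bytes vs 256 bytes for the same one-byte change
```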

Journal ArticleDOI
TL;DR: A Hybrid Manufacturing Cloud is proposed that allows companies to deploy different cloud modes for their periodic business goals and enables companies to set self-defined access rules for each resource so that unauthorised companies will not have access to the resource.

Proceedings ArticleDOI
01 Feb 2014
TL;DR: This study aims to identify an efficient resource allocation strategy that utilizes resources effectively in the resource constrained environment of cloud computing.
Abstract: Cloud computing provides user-requested services that are reliable, dynamic, flexible and efficient. In order to offer such guaranteed services to cloud users, effective resource allocation strategies must be implemented. The methodology used should also conform to the Service Level Agreement (SLA) drawn between the customer and the service provider. This work presents a study of such resource allocation strategies in cloud computing. The strategies include resource requirements prediction algorithms and resource allocation algorithms. This work studies the various resource allocation techniques utilized in cloud computing and makes a comparative study of the merits and demerits of each technique. This study aims to identify an efficient resource allocation strategy that utilizes resources effectively in the resource constrained environment of cloud computing.

Posted Content
TL;DR: In this paper, the authors examine the challenges of regulating innovation from an 'innovation law' perspective, i.e., they qualify these practices as innovations that should not be stifled by regulations but should not be left unregulated either.
Abstract: Sharing economy practices have become increasingly popular in the past years. From swapping systems, network transportation to private kitchens, sharing with strangers appears to be the new urban trend. Although Uber, Airbnb, and other online platforms have democratized the access to a number of services and facilities, multiple concerns have been raised as to the public safety, health and limited liability of these sharing economy practices. In addition, these innovative activities have been contested by professionals offering similar services that claim that sharing economy is opening the door to unfair competition. Regulators are at a crossroads: on the one hand, innovation in sharing economy should not be stifled by excessive and outdated regulation; on the other, there is a real need to protect the users of these services from fraud, liability and unskilled service providers. This dilemma is far more complex than it seems since regulators are confronted here with an array of challenging questions: firstly, can these sharing economy practices be qualified as "innovations" worth protecting and encouraging? Secondly, should the regulation of these practices serve the same goals as the existing rules for the equivalent commercial services (e.g. taxi regulations)? Thirdly, how can regulation keep up with the evolving nature of these innovative practices? All these questions come down to one simple problem: too little is known about the most socially effective ways of consistently regulating and promoting innovation. The solution of these problems implies analyzing two fields of study which still seem to be at an embryonic stage in the legal literature: the study of sharing economy practices and the relationship between innovation and law in this area. In this article, I analyze the challenges of regulating sharing economy from an 'innovation law perspective', i.e., I qualify these practices as innovations that should not be stifled by regulations but should not be left unregulated either. I start at an abstract level by defining the concept of innovation and explaining its characteristics. The "innovation law" perspective adopted in this article to analyze sharing economy implies an overarching study of the relationship between law and innovation. This perspective elects innovation as the ultimate policy and regulatory goal and argues that law should be shaped according to this goal. In this context, I examine the multiple features of the innovation process in the specific case of sharing economy and the role played by different fields of law. Electing innovation as the ultimate policy target may however be devoid of meaning in a world where law is expected to pursue many other — and often conflicting — values. In this article, I examine the challenges of regulating innovation from the lens of sharing economy. This field offers us a solid case study to explore the concept of "innovation", think about how regulators should look at the innovation process, how inadequate rules may have a negative impact on innovation, and how regulators should fine-tune regulations to ensure that the advancement of innovation is balanced with other values such as public health or safety.
I argue that the regulation of innovative sharing economy practices requires regulatory "openness": fewer, but broader rules that do not stifle innovation while imposing a minimum of legal requirements that take into account the characteristics of innovative sharing economy practices, but that are open to future developments.

Journal ArticleDOI
TL;DR: This paper proposes a suite of measurement techniques to evaluate the QoS of cloud gaming systems and shows that OnLive performs better, because it provides adaptable frame rates, better graphic quality, and shorter server processing delays, while consuming less network bandwidth.
Abstract: Cloud gaming, i.e., real-time game playing via thin clients, relieves users from being forced to upgrade their computers and resolve the incompatibility issues between games and computers. As a result, cloud gaming is generating a great deal of interest among entrepreneurs, venture capitalists, the general public, and researchers. However, given the large design space, it is not yet known which cloud gaming system delivers the best user-perceived Quality of Service (QoS) and what design elements constitute a good cloud gaming system. This study is motivated by the question: How good is the QoS of current cloud gaming systems? Answering the question is challenging because most cloud gaming systems are proprietary and closed, and thus their internal mechanisms are not accessible for the research community. In this paper, we propose a suite of measurement techniques to evaluate the QoS of cloud gaming systems and prove the effectiveness of our schemes using a case study comprising two well-known cloud gaming systems: OnLive and StreamMyGame. Our results show that OnLive performs better, because it provides adaptable frame rates, better graphic quality, and shorter server processing delays, while consuming less network bandwidth. Our measurement techniques are general and can be applied to any cloud gaming system, so that researchers, users, and service providers may systematically quantify the QoS of these systems. To the best of our knowledge, the proposed suite of measurement techniques has never been presented in the literature.
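A back-of-the-envelope way to reason about the QoS dimensions mentioned above is to decompose end-to-end response delay into network, server processing, and client playout components. The decomposition and all numbers below are assumed for illustration; this is not the paper's measurement code or data.

```python
# Back-of-the-envelope sketch of the kind of delay decomposition used when
# reasoning about cloud gaming QoS: response delay = network delay + server
# processing delay + client playout delay. All numbers are hypothetical, not
# the paper's measurements.

def response_delay_ms(network_rtt_ms, processing_ms, playout_ms):
    return network_rtt_ms + processing_ms + playout_ms

systems = {
    "system-A": dict(network_rtt_ms=30, processing_ms=90, playout_ms=25),
    "system-B": dict(network_rtt_ms=30, processing_ms=160, playout_ms=40),
}
for name, d in systems.items():
    print(name, response_delay_ms(**d), "ms end-to-end response delay")
```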

Journal ArticleDOI
TL;DR: This work develops appointment scheduling models that take into account the patient preferences regarding when they would like to be seen and proposes a heuristic solution procedure to maximize the expected net “profit” per day.
Abstract: Motivated by the rising popularity of electronic appointment booking systems, we develop appointment scheduling models that take into account the patient preferences regarding when they would like to be seen. The service provider dynamically decides which appointment days to make available for the patients. Patients arriving with appointment requests may choose one of the days offered to them or leave without an appointment. Patients with scheduled appointments may cancel or not show up for the service. The service provider collects a “revenue” from each patient who shows up and incurs a “service cost” that depends on the number of scheduled appointments. The objective is to maximize the expected net “profit” per day. We begin by developing a static model that does not consider the current state of the scheduled appointments. We give a characterization of the optimal policy under the static model and bound its optimality gap. Building on the static model, we develop a dynamic model that considers the current state of the scheduled appointments, and we propose a heuristic solution procedure. In our computational experiments, we test the performance of our models under the patient preferences estimated through a discrete choice experiment that we conduct in a large community health center. Our computational experiments reveal that the policies we propose perform well under a variety of conditions.
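The revenue/service-cost trade-off driving these models can be illustrated with a tiny static calculation: each booked patient shows up with some probability and yields revenue, while the service cost grows with the number of scheduled appointments. All parameters below (show-up probability, revenue, capacity, costs) are hypothetical, and the sketch is far simpler than the paper's static and dynamic models.

```python
# Tiny numeric sketch of the profit trade-off behind the scheduling models,
# with a steeper "overtime" cost beyond regular capacity. All parameters are
# hypothetical; the paper's static and dynamic models are far richer.

def expected_daily_profit(booked, show_prob=0.8, revenue=60.0,
                          regular_capacity=10, regular_cost=30.0, overtime_cost=80.0):
    expected_revenue = booked * show_prob * revenue
    overtime_slots = max(0, booked - regular_capacity)
    service_cost = min(booked, regular_capacity) * regular_cost + overtime_slots * overtime_cost
    return expected_revenue - service_cost

for booked in range(8, 14):
    print(booked, "booked ->", round(expected_daily_profit(booked), 2))
# Expected profit peaks at the regular capacity (10 slots) with these numbers,
# illustrating why the provider must weigh revenue against service cost when
# deciding how many appointments to make available.
```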

Journal ArticleDOI
TL;DR: The limits of machine type communication traffic coexisting with human communication traffic in LTE-A networks are investigated such that human customer churn is minimized; under proper design, the outage probability of human communication is only marginally impacted.
Abstract: Machine-to-machine wireless systems are being standardized to provide ubiquitous connectivity between machines without the need for human intervention. A natural concern of cellular operators and service providers is the impact that these machine type communications will have on current human type communications. Given the exponential growth of machine type communication traffic, it is of utmost importance to ensure that current voice and data traffic is not jeopardized. This article investigates the limits of machine type communication traffic coexisting with human communication traffic in LTE-A networks, such that human customer churn is minimized. We show that under proper design, the outage probability of human communication is marginally impacted whilst duty cycle and access delay of machine type communications are reasonably bounded to ensure viable M2M operations.

Journal ArticleDOI
TL;DR: In this article, the authors propose a conceptual model based on mental accounting principles derived from prospect theory and develop a series of research propositions to explicate the links between distribution patterns of service failures/delights and service quality perceptions.
Abstract: Service delivery often involves a series of events or stages of exchange between a service provider and its customer. At each stage, performance can meet, exceed, or fall below the customer's expectations. This article contributes to the literature by examining how the patterns of distribution (frequency, timing, proximity, and sequence) of service failures and delights affect customers' perceptions of service quality. The authors propose a conceptual model based on mental accounting principles derived from prospect theory and develop a series of research propositions to explicate the links between distribution patterns of service failures/delights and service quality perceptions. The study integrates prospect theory with service encounter research and provides a comprehensive theory-driven platform for exploring the impact of various service failure and delight distribution patterns. In addition, it offers important managerial implications for service design and resource allocation regarding when, how of...
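The mental-accounting intuition can be made concrete with the standard prospect-theory value function. The parameters below are the commonly cited Tversky-Kahneman estimates, not values taken from this article, and the numbers are purely illustrative: they show why two separate service failures can "feel" worse than one failure of the same combined magnitude, which is exactly why the distribution of failures matters.

```python
# Sketch of the mental-accounting intuition using the standard prospect-theory
# value function (commonly cited Tversky & Kahneman parameters, not values from
# this article): losses loom larger than gains, and the curve is concave for
# gains / convex for losses, so segregated losses feel worse than one
# aggregated loss of the same total size.

def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

two_separate_failures = value(-10) + value(-10)
one_combined_failure = value(-20)
print(round(two_separate_failures, 2), "vs", round(one_combined_failure, 2))
# Two segregated losses sum to a lower (more negative) value than the single
# aggregated loss, so the pattern of failures across a service encounter
# matters, not just their total size.
```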

Proceedings ArticleDOI
27 Oct 2014
TL;DR: This work investigates the causes of latency inflation in the Internet across the network stack, and proposes a grand challenge for the networking research community: a speed-of-light Internet.
Abstract: For many Internet services, reducing latency improves the user experience and increases revenue for the service provider. While in principle latencies could nearly match the speed of light, we find that infrastructural inefficiencies and protocol overheads cause today's Internet to be much slower than this bound: typically by more than one, and often, by more than two orders of magnitude. Bridging this large gap would not only add value to today's Internet applications, but could also open the door to exciting new applications. Thus, we propose a grand challenge for the networking research community: a speed-of-light Internet. To inform this research agenda, we investigate the causes of latency inflation in the Internet across the network stack. We also discuss a few broad avenues for latency improvement.
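The inflation factor the authors discuss can be computed directly: compare a measured RTT with the round-trip time light would need over the same great-circle distance. The distance and measured RTT in the sketch below are hypothetical examples, not measurements from the paper.

```python
# Quick calculation of the latency-inflation idea: compare a measured RTT with
# the round-trip time light would need over the same great-circle distance.
# The distance and measured RTT below are hypothetical examples.

C_KM_PER_MS = 299792.458 / 1000           # speed of light in vacuum, km per ms

def c_bound_rtt_ms(distance_km):
    return 2 * distance_km / C_KM_PER_MS  # round trip at the speed of light

def inflation(measured_rtt_ms, distance_km):
    return measured_rtt_ms / c_bound_rtt_ms(distance_km)

distance_km = 4000        # e.g. roughly a coast-to-coast US path
measured_rtt_ms = 80      # a plausible measured round-trip time
print(f"c-bound RTT: {c_bound_rtt_ms(distance_km):.1f} ms")
print(f"inflation factor: {inflation(measured_rtt_ms, distance_km):.1f}x")
# ~26.7 ms at the speed of light vs 80 ms measured -> ~3x inflation; fiber's
# lower propagation speed (~2/3 c) and protocol overheads account for the gap.
```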

Patent
21 May 2014
TL;DR: In this paper, the authors present a system and methods for providing personal computing and service provider platforms for enabling a consumer to control and monetize their personal data while managing their online privacy.
Abstract: The present invention relates to systems and methods for providing personal computing and service provider platforms for enabling a consumer to control and monetize their personal data while managing their online privacy. Business methods utilizing the systems and methods of the present invention resemble those of profit-sharing and asset-sharing paradigms such as cooperatives, and they comprise means for enabling a diverse array of individual subscriber shareholders to receive dividends, share profits and assets, pool resources, and otherwise participate in the ownership of the personal and behavioral data and other content that they generate.

Journal ArticleDOI
TL;DR: This article advocates an all-SDN network architecture with hierarchical network control capabilities to allow for different grades of performance and complexity in offering core network services and provide service differentiation for 5G systems.
Abstract: The tremendous growth in wireless Internet use is showing no signs of slowing down. Existing cellular networks are starting to be insufficient in meeting this demand, in part due to their inflexible and expensive equipment as well as complex and non-agile control plane. Software-defined networking is emerging as a natural solution for next generation cellular networks as it enables further network function virtualization opportunities and network programmability. In this article, we advocate an all-SDN network architecture with hierarchical network control capabilities to allow for different grades of performance and complexity in offering core network services and provide service differentiation for 5G systems. As a showcase of this architecture, we introduce a unified approach to mobility, handoff, and routing management and offer connectivity management as a service (CMaaS). CMaaS is offered to application developers and over-the-top service providers to provide a range of options in protecting their flows against subscriber mobility at different price levels.

Book
09 Aug 2014
TL;DR: This book is written as a textbook on Internet of Things for educational programs at colleges and universities, and also for IoT vendors and service providers who may be interested in offering a broader perspective of Internet of things to accompany their own customer and developer training programs.
Abstract: Internet of Things (IoT) refers to physical and virtual objects that have unique identities and are connected to the internet to facilitate intelligent applications that make energy, logistics, industrial control, retail, agriculture and many other domains "smarter". Internet of Things is a new revolution of the Internet that is rapidly gathering momentum driven by the advancements in sensor networks, mobile devices, wireless communications, networking and cloud technologies. Experts forecast that by the year 2020 there will be a total of 50 billion devices/things connected to the internet. This book is written as a textbook on Internet of Things for educational programs at colleges and universities, and also for IoT vendors and service providers who may be interested in offering a broader perspective of Internet of Things to accompany their own customer and developer training programs. The typical reader is expected to have completed a couple of courses in programming using traditional high-level languages at the college-level, and is either a senior or a beginning graduate student in one of the science, technology, engineering or mathematics (STEM) fields. Like our companion book on Cloud Computing, we have tried to write a comprehensive book that transfers knowledge through an immersive "hands on" approach, where the reader is provided the necessary guidance and knowledge to develop working code for real-world IoT applications. Additional support is available at the book's website: www.internet-of-things-book.com. The book is organized into 3 main parts, comprising a total of 11 chapters. Part I covers the building blocks of Internet of Things (IoTs) and their characteristics. A taxonomy of IoT systems is proposed, comprising various IoT levels with increasing levels of complexity. Domain specific Internet of Things and their real-world applications are described. A generic design methodology for IoT is proposed. An IoT system management approach using NETCONF-YANG is described. Part II introduces the reader to the programming aspects of Internet of Things with a view towards rapid prototyping of complex IoT applications. We chose Python as the primary programming language for this book, and an introduction to Python is also included within the text to bring readers to a common level of expertise. We describe packages, frameworks and cloud services including the WAMP-AutoBahn, Xively cloud and Amazon Web Services which can be used for developing IoT systems. We chose the Raspberry Pi device for the examples in this book. Reference architectures for different levels of IoT applications are examined in detail. Case studies with complete source code for various IoT domains including home automation, smart environment, smart cities, logistics, retail, smart energy, smart agriculture, industrial control and smart health, are described. Part III introduces the reader to advanced topics on IoT including IoT data analytics and Tools for IoT. Case studies on collecting and analyzing data generated by Internet of Things in the cloud are described.
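In the spirit of the book's hands-on, Python-based approach, the sketch below reads a (simulated) sensor value and posts it as JSON to a cloud endpoint using only the standard library. The endpoint URL, payload fields, and sensor stand-in are hypothetical placeholders, not examples from the book, whose case studies use services such as Xively and AWS from a Raspberry Pi.

```python
# Minimal sketch in the spirit of the book's hands-on approach: read a sensor
# value (simulated here) and push it to a cloud endpoint as JSON. The endpoint
# URL and payload fields are hypothetical placeholders, not the book's examples
# (which use services such as Xively and AWS from a Raspberry Pi).

import json
import random
import time
import urllib.request

ENDPOINT = "https://example.com/api/readings"   # placeholder cloud endpoint

def read_temperature_c():
    """Stand-in for a real sensor driver on a Raspberry Pi."""
    return round(20 + random.random() * 5, 2)

def post_reading(value):
    payload = json.dumps({"sensor": "temp-01", "celsius": value,
                          "ts": int(time.time())}).encode()
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    print(post_reading(read_temperature_c()))
```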