
Showing papers in "IEEE Internet Computing in 2010"


Journal ArticleDOI
TL;DR: The authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity for smart objects, describing activity-, policy-, and process-aware smart objects and demonstrating how the respective architectural abstractions support increasingly complex applications.
Abstract: The combination of the Internet and emerging technologies such as near-field communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex applications.

1,459 citations


Journal ArticleDOI
TL;DR: A framework for developing high-performance, concurrent programs that don't rely on the mainstream multithreading approach but use asynchronous I/O with an event-driven programming model.
Abstract: One of the more interesting developments recently gaining popularity in the server-side JavaScript space is Node.js. It's a framework for developing high-performance, concurrent programs that don't rely on the mainstream multithreading approach but use asynchronous I/O with an event-driven programming model.
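
For illustration, here is a minimal sketch of the event-driven, non-blocking style the abstract describes: a single-threaded Node.js HTTP server (written in TypeScript) that performs file I/O asynchronously, leaving the event loop free to serve other requests in the meantime. The file name and port are illustrative.

```typescript
import * as http from "http";
import * as fs from "fs";

const server = http.createServer((req, res) => {
  // Non-blocking read: the callback runs when the I/O completes,
  // so this single thread keeps accepting other connections meanwhile.
  fs.readFile("greeting.txt", "utf8", (err, data) => {
    if (err) {
      res.writeHead(500);
      res.end("read failed");
      return;
    }
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(data);
  });
});

server.listen(8080); // one thread, many concurrent connections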

515 citations


Journal ArticleDOI
TL;DR: Although service-oriented computing in cloud computing environments presents a new set of research challenges, the authors believe the combination also provides potentially transformative opportunities.
Abstract: Service-oriented computing and cloud computing have a reciprocal relationship: one provides computing of services, and the other provides services of computing. Although service-oriented computing in cloud computing environments presents a new set of research challenges, the authors believe the combination also provides potentially transformative opportunities.

342 citations


Journal ArticleDOI
TL;DR: The authors suggest using a trust-overlay network over multiple data centers to implement a reputation system for establishing trust between service providers and data owners.
Abstract: Trust and security have prevented businesses from fully accepting cloud platforms. To protect clouds, providers must first secure virtualized data center resources, uphold user privacy, and preserve data integrity. The authors suggest using a trust-overlay network over multiple data centers to implement a reputation system for establishing trust between service providers and data owners. Data coloring and software watermarking techniques protect shared data objects and massively distributed software modules. These techniques safeguard multi-way authentications, enable single sign-on in the cloud, and tighten access control for sensitive data in both public and private clouds.
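
The abstract doesn't detail the coloring mechanism, so as a loose, hypothetical sketch of the idea, the following derives an owner-specific "color" from a secret and attaches it to a shared data object for later ownership verification. The HMAC construction is an illustrative stand-in, not the paper's scheme.

```typescript
// Hypothetical sketch of "data coloring": tag a shared data object with
// an owner-specific color and verify it later to establish ownership.
// HMAC is an illustrative choice here, not the authors' actual technique.
import { createHmac } from "crypto";

interface ColoredObject { data: string; color: string; }

function colorOf(ownerSecret: string, data: string): string {
  return createHmac("sha256", ownerSecret).update(data).digest("hex");
}

function colorize(ownerSecret: string, data: string): ColoredObject {
  return { data, color: colorOf(ownerSecret, data) };
}

function verifyOwnership(ownerSecret: string, obj: ColoredObject): boolean {
  // Only the holder of the secret can reproduce the color.
  return colorOf(ownerSecret, obj.data) === obj.color;
}

const shared = colorize("owner-secret", "quarterly-report.csv");
console.log(verifyOwnership("owner-secret", shared)); // true
```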

324 citations


Journal ArticleDOI
TL;DR: The authors developed various prototypes to explore novel ways for human-computer interaction enabled by the Internet of Things and related technologies and derive a set of guidelines for embedding interfaces into people's daily lives.
Abstract: The Internet of Things assumes that objects have digital functionality and can be identified and tracked automatically. The main goal of embedded interaction is to look at new opportunities that arise for interactive systems and the immediate value users gain. The authors developed various prototypes to explore novel ways for human-computer interaction (HCI), enabled by the Internet of Things and related technologies. Based on these experiences, they derive a set of guidelines for embedding interfaces into people's daily lives.

312 citations


Journal ArticleDOI
TL;DR: Cloud computing is a new field in Internet computing that provides novel perspectives in internetworking technologies and raises issues in the architecture, design, and implementation of existing networks and data centers.
Abstract: Cloud computing is a new field in Internet computing that provides novel perspectives in internetworking technologies and raises issues in the architecture, design, and implementation of existing networks and data centers. The relevant research has just recently gained momentum, and the space of potential ideas and solutions is still far from being widely explored.

305 citations


Journal ArticleDOI
TL;DR: The cloud computing model - especially the public cloud - is unsuited to many business applications and is likely to remain so for many years due to fundamental limitations in architecture and design, but private clouds offer benefits such as scale and virtualization with fewer drawbacks.
Abstract: The cloud computing model - especially the public cloud - is unsuited to many business applications and is likely to remain so for many years due to fundamental limitations in architecture and design. Enterprises that move their IT to the cloud are likely to encounter challenges such as security, interoperability, and limits on their ability to tailor their ERP to their business processes. The cloud can be a revolutionary technology, especially for small startups, but the benefits wane for larger enterprises with more complex IT needs. Utility computing still cannot match the "plug-and-play" simplicity of electricity. On the other hand, private clouds offer benefits such as scale and virtualization with fewer drawbacks.

239 citations


Journal Article
TL;DR: Participatory sensing can serve as a powerful "make a case" technology to support advocacy and civic engagement and provide a framework in which citizens can bring to light a civic bottleneck, hazard, personal-safety concern, cultural asset, or other data relevant to urban and natural-resources planning and services.
Abstract: Participatory sensing is the process whereby individuals and communities use ever-more-capable mobile phones and cloud services to collect and analyze systematic data for use in discovery. The convergence of technology and analytical innovation with a citizenry that is increasingly comfortable using mobile phones and online social networking sets the stage for this technology to dramatically impact many aspects of our daily lives. Participatory sensing can serve as a powerful "make a case" technology to support advocacy and civic engagement. It can provide a framework in which citizens can bring to light a civic bottleneck, hazard, personal-safety concern, cultural asset, or other data relevant to urban and natural-resources planning and services, all using data that are systematic and can be validated (http://whatsinvasive.com). The same systems can be used as tools for sustainability. For example, individuals and communities can explore their transportation and consumption habits, and corporations can promote more sustainable practices among employees (http://peir.cens.ucla.edu and http://biketastic.com).

145 citations


Journal ArticleDOI
TL;DR: This special issue briefly introduces the main characteristics and benefits of RIAs and highlights the research challenges in their development, and features two articles that address some of these open problems.
Abstract: Modern Web solutions resemble desktop applications, enabling sophisticated user interactions, client-side processing, asynchronous communications, and multimedia. A pure HTTP/HTML architecture fails to support these required capabilities in several respects. The "network as platform computing" idea, strengthened by Web 2.0's emergence, has accentuated HTML/HTTP's limits. This is why many developers are switching to novel technologies, known under the collective name of rich Internet applications (RIAs). RIAs combine the Web's lightweight distribution architecture with desktop applications' interface interactivity and computation power, and the resulting combination improves all the elements of a Web application (data, business logic, communication, and presentation). This special issue briefly introduces the main characteristics and benefits of RIAs and highlights the research challenges in their development. The issue features two articles that address some of these open problems. One focuses on language and architecture issues, whereas the other deals with the methodological principles at the base of a model-driven approach to RIA development. Despite these efforts, the research community must continue investigating novel methods and tools to make RIA development more systematic and efficient.

130 citations


Journal ArticleDOI
TL;DR: The authors discuss the architecture, usage models, and applications of participatory sensing, as well as the essential components of these emerging systems.
Abstract: Participatory sensing is the process whereby individuals and communities use ever-more-capable mobile phones and cloud services to collect and analyze systematic data for use in discovery. The convergence of technology and analytical innovation with a citizenry that is increasingly comfortable using mobile phones and online social networking sets the stage for this technology to dramatically impact many aspects of our daily lives. Ubiquitous data capture, leveraged data processing, and the personal data vault are the essential components of these emerging systems. This paper discusses the architecture, usage models, and applications of participatory sensing.
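
As an illustration of the three components the abstract names, here is a hedged sketch in which captured samples land in a personal data vault controlled by the individual, and only policy-approved samples are released for leveraged processing. All types and names are hypothetical.

```typescript
// Sketch of: ubiquitous data capture (timestamped, geotagged samples),
// a personal data vault, and policy-gated release to shared processing.
interface Sample { sensor: string; value: number; lat: number; lon: number; t: number; }

class PersonalDataVault {
  private samples: Sample[] = [];
  constructor(private sharePolicy: (s: Sample) => boolean) {}

  capture(s: Sample) { this.samples.push(s); }

  // Leveraged processing only ever sees samples the owner agreed to share.
  share(): Sample[] { return this.samples.filter(this.sharePolicy); }
}

const vault = new PersonalDataVault(s => s.sensor !== "microphone");
vault.capture({ sensor: "gps", value: 0, lat: 34.07, lon: -118.44, t: Date.now() });
const forAnalysis = vault.share(); // microphone data never leaves the vault
```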

122 citations


Journal ArticleDOI
Avishai Wool
TL;DR: The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point Firewall-1 rule sets; that survey indicated that corporate firewalls often enforced poorly written rule sets. This article revisits and extends that survey.
Abstract: The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point Firewall-1 rule sets. In general, that survey indicated that corporate firewalls often enforced poorly written rule sets. This article revisits the first survey. In addition to being larger, the current study includes configurations from two major vendors. It also introduces a firewall complexity measure. The study's findings validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule set's complexity is (still) positively correlated with the number of detected configuration errors. However, unlike the 2004 study, the current study doesn't suggest that later software versions have fewer errors.
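
To make the "complexity measure" idea concrete, here is a hedged sketch of a rule-set complexity metric in the spirit of the 2004 study. The exact formula below is our assumption for illustration, not a quotation from the article.

```typescript
// Illustrative rule-set complexity: more rules, more referenced network
// objects, and more interface pairs all make a configuration harder to
// get right (interface pairs approximate distinct traffic paths).
interface FirewallConfig {
  rules: number;       // filtering rules in the rule set
  objects: number;     // named network objects the rules reference
  interfaces: number;  // physical/virtual interfaces on the firewall
}

function ruleComplexity(cfg: FirewallConfig): number {
  const interfacePairs = (cfg.interfaces * (cfg.interfaces - 1)) / 2;
  return cfg.rules + cfg.objects + interfacePairs;
}

// Example: even a modest firewall scores fairly high.
console.log(ruleComplexity({ rules: 120, objects: 45, interfaces: 4 })); // 171
```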

Journal ArticleDOI
TL;DR: Recent standardization efforts in Network Address Translation and Network Address and Port Translation are surveyed.
Abstract: Network Address Translation (NAT) and Network Address and Port Translation (NAPT) are widely used to separate networks and share IPv4 addresses. They're valuable tools for network administrators and help with the imminent exhaustion of IPv4 address space and the transition to IPv6. This article surveys recent standardization efforts in this area.
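
As a quick illustration of the mechanism being standardized, the following sketch shows the core of NAPT: each outbound private (IP, port) flow is mapped to a distinct port on one shared public address, and replies are translated back through the same table. Class and field names, and the port range, are illustrative.

```typescript
// Minimal NAPT table: private endpoint <-> public port on a shared IP.
interface Endpoint { ip: string; port: number; }

class Napt {
  private nextPort = 40000; // illustrative start of the ephemeral range
  private outbound = new Map<string, number>();   // "ip:port" -> public port
  private inbound = new Map<number, Endpoint>();  // public port -> private endpoint

  constructor(private publicIp: string) {}

  translateOutbound(src: Endpoint): Endpoint {
    const key = `${src.ip}:${src.port}`;
    let pubPort = this.outbound.get(key);
    if (pubPort === undefined) {
      pubPort = this.nextPort++;
      this.outbound.set(key, pubPort);
      this.inbound.set(pubPort, src);
    }
    return { ip: this.publicIp, port: pubPort };
  }

  translateInbound(pubPort: number): Endpoint | undefined {
    return this.inbound.get(pubPort); // packets to unknown ports are dropped
  }
}

const nat = new Napt("203.0.113.7");
console.log(nat.translateOutbound({ ip: "192.168.1.10", port: 51000 }));
```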

Journal ArticleDOI
TL;DR: An initial evaluation shows that the additional savings in the scenarios considered range from 5 to 70 percent for conventional users and approximately 50 percent for large data centers.
Abstract: The proposed Energy-Efficient Ethernet (EEE) standard reduces energy consumption by defining two operation modes for transmitters and receivers: active and low power. Burst transmission can provide additional energy savings when EEE is used. Collecting data frames into large-sized data bursts for back-to-back transmission maximizes the time an EEE device spends in low power, thus making its consumption nearly proportional to its traffic load. An initial evaluation shows that the additional savings in the scenarios considered range from 5 to 70 percent for conventional users and approximately 50 percent for large data centers.
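
A back-of-the-envelope model shows why bursting helps: every wake-up from low-power idle pays a fixed transition overhead, so sending n frames in one burst pays that overhead once rather than n times. All constants below are illustrative, not values from the standard.

```typescript
// Toy EEE energy model over a fixed time window: energy is active time
// at full power plus remaining time in low-power idle. Coalescing frames
// into bursts cuts the number of wake-up transitions.
const P_ACTIVE = 1.0; // relative power while transmitting
const P_LOW = 0.1;    // relative power in low-power idle
const T_WAKE = 4.5;   // overhead per wake-up, microseconds (illustrative)
const T_FRAME = 1.2;  // time to transmit one frame, microseconds (illustrative)

function energy(frames: number, framesPerBurst: number, windowUs: number): number {
  const bursts = Math.ceil(frames / framesPerBurst);
  const activeUs = bursts * T_WAKE + frames * T_FRAME;
  return activeUs * P_ACTIVE + Math.max(0, windowUs - activeUs) * P_LOW;
}

console.log(energy(100, 1, 1000));  // one frame per wake-up: ~613 units
console.log(energy(100, 10, 1000)); // ten-frame bursts: ~248 units
```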

Journal ArticleDOI
TL;DR: The areas in which semantic models can support cloud computing are discussed, including functional and nonfunctional definitions, data modeling, and service description enhancement.
Abstract: This article discusses the areas in which semantic models can support cloud computing. Semantic models are helpful in three aspects of cloud computing. The first is functional and nonfunctional definitions. The ability to define application functionality and quality-of-service details in a platform-agnostic manner can immensely benefit the cloud community. The second aspect is data modeling. Semantic modeling of data to provide a platform-independent data representation would be a major advantage in the cloud space. The third aspect is service description enhancement.

Journal ArticleDOI
TL;DR: The cloud computing landscape is still evolving, and the main challenge to overcome in fostering widespread adoption of clouds is interoperability.
Abstract: Cloud computing has lately become the attention grabber in both academia and industry. The promise of seemingly unlimited, readily available utility-type computing has opened many doors previously considered difficult, if not impossible, to open. The cloud computing landscape, however, is still evolving, and we must overcome many challenges to foster widespread adoption of clouds. The main challenge is interoperability.

Journal ArticleDOI
TL;DR: The authors present an approach to a formal-semantics-based calculus of trust, from conceptualization to logical formalization, from logic model to quantification of uncertainties, and from quantified trust to trust decision-making.
Abstract: Building trust models based on a well-defined semantics of trust is important so that we can avoid misinterpretation, misuse, or inconsistent use of trust in Internet-based distributed computing. The authors present an approach to a formal-semantics-based calculus of trust, from conceptualization to logical formalization, from logic model to quantification of uncertainties, and from quantified trust to trust decision-making. They also explore how to apply a formal trust model to a PGP (Pretty Good Privacy) system to develop decentralized public-key certification and verification.
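
The abstract stays at the conceptual level; as a hedged illustration of what quantified trust can look like, here is a toy calculus using two common textbook operators (multiplicative weakening along a chain, noisy-or fusion across independent paths). These operators are our assumption for illustration, not the authors' formalization.

```typescript
// Toy trust quantification: trust degrees in [0, 1], weakened along a
// recommendation chain and combined across independent paths.
type Trust = number;

const chain = (...ts: Trust[]): Trust => ts.reduce((a, b) => a * b, 1);
const fuse = (a: Trust, b: Trust): Trust => a + b - a * b; // noisy-or

// PGP-style example: two certification paths vouch for the same key.
const viaIntroducer = chain(0.9, 0.8); // you -> introducer -> key owner
const direct = chain(0.7);             // you -> key owner directly
console.log(fuse(viaIntroducer, direct)); // combined confidence ~0.916
```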

Journal ArticleDOI
TL;DR: The author discusses some of the most important tipping points that he believes will make CHE a reality within a decade.
Abstract: People are on the verge of an era in which the human experience can be enriched in ways they couldn't have imagined two decades ago. Rather than depending on a single technology, people have progressed with several technologies whose semantics-empowered convergence and integration will enable us to capture, understand, and reapply human knowledge and intellect. Such capabilities will consequently elevate our technological ability to deal with the abstractions, concepts, and actions that characterize human experiences. This will herald computing for human experience (CHE). The CHE vision is built on a suite of technologies that serves, assists, and cooperates with humans to nondestructively and unobtrusively complement and enrich normal activities, with minimal explicit concern or effort on the humans' part. CHE will anticipate when to gather and apply relevant knowledge and intelligence. It will enable human experiences that are intertwined with the physical, conceptual, and experiential worlds (emotions, sentiments, and so on), rather than immerse humans in cyber worlds for a specific task. Instead of focusing on humans interacting with a technology or system, CHE will feature technology-rich human surroundings that often initiate interactions. Interaction will be more sophisticated and seamless compared to today's precursors such as automotive accident-avoidance systems. Many components of and ideas associated with the CHE vision have been around for a while. Here, the author discusses some of the most important tipping points that he believes will make CHE a reality within a decade.

Journal ArticleDOI
TL;DR: Although a priori there's no reason to believe that two users with common interests are more likely to be friends, the authors use data from LiveJournal to show that the answer to both questions is yes.
Abstract: Are two users more likely to be friends if they share common interests? Are two users more likely to share common interests if they're friends? The authors study the phenomenon of homophily in the digital world by answering these central questions. Unlike the physical world, the digital world doesn't impose any geographic or organizational constraints on friendships. So, although online friends might share common interests, a priori there's no reason to believe that two users with common interests are more likely to be friends. Using data from LiveJournal, the authors show that the answer to both questions is yes.
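
As a sketch of how one could test these questions on friendship and interest data, the following compares the mean interest overlap (Jaccard similarity) of friend pairs against non-friend pairs. This is only the shape of the measurement, not the authors' exact methodology, and the data are toy values.

```typescript
// Homophily check: do friend pairs share more interests than non-friends?
function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter(x => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

const interests = new Map<string, Set<string>>([
  ["alice", new Set(["knitting", "sci-fi", "jazz"])],
  ["bob", new Set(["sci-fi", "jazz", "chess"])],
  ["carol", new Set(["cooking"])],
]);

function meanOverlap(pairs: [string, string][]): number {
  const sims = pairs.map(([u, v]) => jaccard(interests.get(u)!, interests.get(v)!));
  return sims.reduce((s, x) => s + x, 0) / sims.length;
}

// Homophily would show up as a higher mean for friend pairs.
console.log(meanOverlap([["alice", "bob"]]));   // friends (toy data)
console.log(meanOverlap([["alice", "carol"]])); // non-friends (toy data)
```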

Journal ArticleDOI
TL;DR: The Service-Centric Monitoring Language (SECMOL), a general monitoring specification language, clearly separates concerns between data collection, data computation, and data analysis, allowing for high flexibility and scalability.
Abstract: Service-oriented systems' distributed ownership has led to an increasing focus on runtime management solutions. Service-oriented systems can change greatly after deployment, hampering their quality and reliability. Their service bindings can change, and providers can modify the internals of their services. Monitoring is critical for these systems to keep track of behavior and discover whether anomalies have occurred. The Service-Centric Monitoring Language (SECMOL), a general monitoring specification language, clearly separates concerns between data collection, data computation, and data analysis, allowing for high flexibility and scalability. The authors also present a concrete projection of the model onto three monitoring frameworks.
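
As a sketch of the separation of concerns described above, the following TypeScript interfaces model collection, computation, and analysis as independently pluggable stages. The type and member names are illustrative; they are not SECMOL syntax.

```typescript
// Three independently replaceable monitoring stages.
interface MonitoringEvent { service: string; latencyMs: number; t: number; }

interface Collector { collect(): MonitoringEvent[]; }               // data collection
interface Computation { apply(events: MonitoringEvent[]): number; } // data computation
interface Analysis { violates(value: number): boolean; }            // data analysis

function monitor(c: Collector, comp: Computation, a: Analysis): boolean {
  // Each stage can be swapped (different probes, aggregations, or
  // thresholds) without touching the other two.
  return a.violates(comp.apply(c.collect()));
}
```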

Journal Article
TL;DR: The Synergistic TLB differs from a per-core private TLB organization in three ways: (i) it provides capacity sharing by storing victim translations from one TLB in another, emulating a distributed shared TLB (DST); (ii) it supports translation migration to maximize the utilization of TLB capacity; and (iii) it supports translation replication to avoid excess latency for remote TLB accesses.
Abstract: Translation Look-aside Buffers (TLBs) are vital hardware support for virtual memory management in high performance computer systems and have a momentous influence on overall system performance. Numerous techniques to reduce TLB miss latencies including the impact of TLB size, associativity, multilevel hierarchies, super pages, and prefetching have been well studied in the context of uniprocessors. However, with Chip Multiprocessors (CMPs) becoming the standard design point of processor architectures, it is imperative that we review the design and organization of TLBs in the context of CMPs. In this paper, we propose to improve system performance by means of a novel way of organizing TLBs called Synergistic TLBs. Synergistic TLB is different from per-core private TLB organization in three ways: (i) it provides capacity sharing of TLBs by facilitating storing of victim translations from one TLB in another to emulate a distributed shared TLB (DST), (ii) it supports translation migration for maximizing the utilization of TLB capacity, and (iii) it supports translation replication to avoid excess latency for remote TLB accesses. We explore all the design points in this design space and find that an optimal point exists for high performance address translation. Our evaluation with both multiprogrammed (SPEC 2006 applications) and multithreaded workloads (PARSEC applications) shows that Synergistic TLBs can eliminate, respectively, 44.3% and 31.2% of the TLB misses, on average. It also improves the weighted speedup of multiprogrammed application mixes by 25.1% and performance of multithreaded applications by 27.3%, on average.

Journal ArticleDOI
TL;DR: Assessing a mashup's quality requires understanding how the mashup has been developed, what its components look like, and how quality propagates from basic components to the final mashup application.
Abstract: Modern Web 2.0 applications are characterized by high user involvement: users receive support for creating content and annotations as well as "composing" applications using content and functions from third parties. This latter phenomenon is known as Web mashups and is gaining popularity even with users who have few programming skills, raising a set of peculiar information quality issues. Assessing a mashup's quality, especially the quality of the information it provides, requires understanding how the mashup has been developed, what its components look like, and how quality propagates from basic components to the final mashup application.
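
As a toy illustration of quality propagation, the sketch below aggregates component quality scores into a mashup-level score with a weighted mean. The actual propagation model is the article's subject; this formula is only an assumption for illustration.

```typescript
// Each component carries a quality score in [0, 1]; the mashup's score
// is a weighted mean of its components' scores.
interface Component { name: string; quality: number; weight: number; }

function mashupQuality(components: Component[]): number {
  const totalWeight = components.reduce((s, c) => s + c.weight, 0);
  return components.reduce((s, c) => s + c.quality * c.weight, 0) / totalWeight;
}

console.log(mashupQuality([
  { name: "map API", quality: 0.9, weight: 2 },   // illustrative scores
  { name: "user feed", quality: 0.6, weight: 1 },
])); // 0.8
```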

Journal Article
TL;DR: Dynamically Adaptable HTM (DynTM) is the first fully flexible HTM system that permits the simultaneous execution of transactions using complementary version and conflict management strategies.
Abstract: Most Hardware Transactional Memory (HTM) implementations choose fixed version and conflict management policies at design time. While eager HTM systems store transactional state in place in memory and resolve conflicts as they are produced, lazy HTM systems buffer the transactional state in specialized hardware and defer the resolution of conflicts until commit time. Each scheme has its strengths and weaknesses, but, unfortunately, both approaches are too inflexible in the way they manage data versioning and transactional contention. Thus, fixed HTM systems may incur a significant performance opportunity loss when they execute complex transactional applications. In this paper, we present DynTM (Dynamically Adaptable HTM), the first fully flexible HTM system that permits the simultaneous execution of transactions using complementary version and conflict management strategies. At the heart of DynTM is a novel coherence protocol that allows tracking conflicts among eager and lazy transactions. Both the eager and the lazy execution modes of DynTM exhibit very high performance compared to modern HTM systems. For example, the DynTM lazy execution mode implements local commits to improve on previous proposals. In addition, lazy transactions share the majority of hardware support with eager transactions, substantially reducing the hardware cost compared to other lazy HTM systems. By utilizing a simple predictor to decide the best execution mode for each transaction at runtime, DynTM obtains an average speedup of 34% over HTM systems that employ fixed version and conflict management policies.

Journal ArticleDOI
TL;DR: The authors describe the most common errors they've found in real WSDL documents, explain how these errors impact service discovery, and present some guidelines for revising them.
Abstract: Although Web service technologies promote reuse, Web Services Description Language (WSDL) documents that are supposed to describe the API that services offer often fail to do so properly. Therefore, finding services, understanding what they do, and reusing them are challenging tasks. The authors describe the most common errors they've found in real WSDL documents, explain how these errors impact service discovery, and present some guidelines for revising them.

Journal ArticleDOI
TL;DR: An overview of the IETF standards for call setup and media transport is provided and how they work together to provide a complete, Internet-based, real-time communications capability is shown.
Abstract: Over the past decade, voice communications have moved from the public-switched telephone network to the Internet. The IETF has enabled this transition by developing stable standards for call setup and media transport. The authors provide an overview of these standards and show how they work together to provide a complete, Internet-based, real-time communications capability.

Journal ArticleDOI
TL;DR: The traditional semantic approach has difficulties dealing with the dynamic domains involved in social, mobile, and sensor webs and continuous semantics, which captures changing conceptualizations and relevant knowledge, can help model those domains and analyze the related real-time data.
Abstract: The traditional semantic approach has difficulties dealing with the dynamic domains involved in social, mobile, and sensor webs. Continuous semantics, which captures changing conceptualizations and relevant knowledge, can help us model those domains and analyze the related real-time data.

Journal ArticleDOI
TL;DR: This special issue addresses key issues in the field, such as representation, recommendation aggregation, and attack-resilient reputation systems.
Abstract: Trust and reputation management research is highly interdisciplinary, involving researchers from networking and communication, data management and information systems, e-commerce and service computing, artificial intelligence, and game theory, as well as the social sciences and evolutionary biology. Trust and reputation management has played and will continue to play an important role in Internet and social computing systems and applications. This special issue addresses key issues in the field, such as representation, recommendation aggregation, and attack-resilient reputation systems.

Journal ArticleDOI
TL;DR: The authors extended the OOH4RIA approach to generative RIA development, which introduces architectural and technological aspects at the design phase and provides a closer match between the modeled system and the final implementation.
Abstract: The advent of rich Internet applications (RIAs) has evolved into an authentic technological revolution, providing Web information systems with advanced requirements similar to desktop applications. At the same time, RIAs have multiplied the possible architectural and technological options, complicating development and increasing risks. The real challenge is selecting the right alternatives among the existing RIA variability, thus creating an optimal solution that satisfies most user requirements. To face this challenge, the authors extended the OOH4RIA approach to generative RIA development, which introduces architectural and technological aspects at the design phase and provides a closer match between the modeled system and the final implementation.

Journal ArticleDOI
TL;DR: The Active project addresses the challenge of information overload through an integrated knowledge management workspace that significantly improves the mechanisms for creating, managing, and using information.
Abstract: Knowledge workers are central to an organization's success, yet their information management tools often hamper their productivity. This has major implications for businesses across the globe because their commercial advantage relies on the optimal exploitation of their own enterprise information, the huge volumes of online information, and the productivity of the required knowledge work. The Active project addresses this challenge through an integrated knowledge management workspace that reduces information overload by significantly improving the mechanisms for creating, managing, and using information. The project's approach follows three themes: sharing information through tagging, wikis, and ontologies; prioritizing information delivery by understanding users' current-task context; and leveraging informal processes that are learned from user behavior.

Journal ArticleDOI
TL;DR: The authors categorize, analyze, and compare a range of incentive mechanisms proposed to counter free riding in P2P streaming systems, which can otherwise cause scalability issues and service degradation.
Abstract: Free riding, whereby a peer utilizes network resources but doesn't contribute services, could have a huge impact on the efficacy of peer-to-peer media streaming systems, leading to scalability issues and service degradation. Here, the authors categorize, analyze, and compare a range of incentive mechanisms proposed for P2P streaming systems.
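
To give a flavor of one family of mechanisms such surveys cover, here is a toy contribution-based incentive: a peer's allowed download rate is throttled by its upload/download ratio, so free riders get only a trickle. The formula is illustrative, not from any specific system.

```typescript
// Contribution-based throttling: contributors get full rate, free riders
// (upload ratio near 0) are limited to a small baseline.
interface Peer { uploaded: number; downloaded: number; }

function allowedRate(peer: Peer, baseRate: number): number {
  const ratio = peer.uploaded / Math.max(1, peer.downloaded);
  return baseRate * Math.min(1, 0.1 + ratio);
}

console.log(allowedRate({ uploaded: 0, downloaded: 500 }, 100));   // ~10: free rider
console.log(allowedRate({ uploaded: 450, downloaded: 500 }, 100)); // 100: contributor
```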

Journal ArticleDOI
TL;DR: This article analyzes current trends in the evolution of e-learning architectures and describes a new architecture that captures the needs of both formal (instructor- led) and informal (student-led) learning environments.
Abstract: The social component of Web 2.0-related services is providing a new, open, and personal approach to how we expect to solve problems in our information-driven world. In particular, students' learning needs require open, personal e-learning systems adapted to life-long learning needs in a rapidly changing environment. It therefore shouldn't be surprising that a new wave of ideas centered on pervasive systems has drawn so much attention. This article analyzes current trends in the evolution of e-learning architectures and describes a new architecture that captures the needs of both formal (instructor-led) and informal (student-led) learning environments.