
Showing papers by "IBM" published in 2008


Journal ArticleDOI
11 Apr 2008-Science
TL;DR: The racetrack memory described in this review comprises an array of magnetic nanowires arranged horizontally or vertically on a silicon chip and is an example of the move toward innately three-dimensional microelectronic devices.
Abstract: Recent developments in the controlled movement of domain walls in magnetic nanowires by short pulses of spin-polarized current give promise of a nonvolatile memory device with the high performance and reliability of conventional solid-state memory but at the low cost of conventional magnetic disk drive storage. The racetrack memory described in this review comprises an array of magnetic nanowires arranged horizontally or vertically on a silicon chip. Individual spintronic reading and writing nanodevices are used to modify or read a train of ∼10 to 100 domain walls, which store a series of data bits in each nanowire. This racetrack memory is an example of the move toward innately three-dimensional microelectronic devices.

4,052 citations


Journal ArticleDOI
TL;DR: The authors argue that value is fundamentally derived and determined in use (the integration and application of resources in a specific context) rather than in exchange (embedded in firm output and captured by price).

2,861 citations


Proceedings ArticleDOI
Jie Tang1, Jing Zhang1, Limin Yao1, Juanzi Li1, Li Zhang2, Zhong Su2 
24 Aug 2008
TL;DR: The architecture and main features of the ArnetMiner system, which aims at extracting and mining academic social networks, are described and a unified modeling approach to simultaneously model topical aspects of papers, authors, and publication venues is proposed.
Abstract: This paper addresses several key issues in the ArnetMiner system, which aims at extracting and mining academic social networks. Specifically, the system focuses on: 1) Extracting researcher profiles automatically from the Web; 2) Integrating the publication data into the network from existing digital libraries; 3) Modeling the entire academic network; and 4) Providing search services for the academic network. So far, 448,470 researcher profiles have been extracted using a unified tagging approach. We integrate publications from online Web databases and propose a probabilistic framework to deal with the name ambiguity problem. Furthermore, we propose a unified modeling approach to simultaneously model topical aspects of papers, authors, and publication venues. Search services such as expertise search and people association search have been provided based on the modeling results. In this paper, we describe the architecture and main features of the system. We also present the empirical evaluation of the proposed methods.

2,058 citations


Journal ArticleDOI
23 Oct 2008-Nature
TL;DR: The findings demonstrate the abundance of CDS-located miRNA targets, some of which can be species-specific, and support an augmented model whereby animal miRNAs exercise their control on mRNAs through targets that can reside beyond the 3′ untranslated region.
Abstract: MicroRNAs (miRNAs) are short RNAs that direct messenger RNA degradation or disrupt mRNA translation in a sequence-dependent manner. For more than a decade, attempts to study the interaction of miRNAs with their targets were confined to the 3' untranslated regions of mRNAs, fuelling an underlying assumption that these regions are the principal recipients of miRNA activity. Here we focus on the mouse Nanog, Oct4 (also known as Pou5f1) and Sox2 genes and demonstrate the existence of many naturally occurring miRNA targets in their amino acid coding sequence (CDS). Some of the mouse targets analysed do not contain the miRNA seed, whereas others span exon-exon junctions or are not conserved in the human and rhesus genomes. miR-134, miR-296 and miR-470, upregulated on retinoic-acid-induced differentiation of mouse embryonic stem cells, target the CDS of each transcription factor in various combinations, leading to transcriptional and morphological changes characteristic of differentiating mouse embryonic stem cells, and resulting in a new phenotype. Silent mutations at the predicted targets abolish miRNA activity, prevent the downregulation of the corresponding genes and delay the induced phenotype. Our findings demonstrate the abundance of CDS-located miRNA targets, some of which can be species-specific, and support an augmented model whereby animal miRNAs exercise their control on mRNAs through targets that can reside beyond the 3' untranslated region.

1,329 citations


Journal ArticleDOI
Paul P. Maglio1, James C. Spohrer1
TL;DR: Service-dominant logic may be the philosophical foundation of service science, and the service system may be its basic theoretical construct, according to this paper.
Abstract: Service systems are value-co-creation configurations of people, technology, value propositions connecting internal and external service systems, and shared information (e.g., language, laws, measures, and methods). Service science is the study of service systems, aiming to create a basis for systematic service innovation. Service science combines organization and human understanding with business and technological understanding to categorize and explain the many types of service systems that exist as well as how service systems interact and evolve to co-create value. The goal is to apply scientific understanding to advance our ability to design, improve, and scale service systems. To make progress, we think service-dominant logic provides just the right perspective, vocabulary, and assumptions on which to build a theory of service systems, their configurations, and their modes of interaction. Simply put, service-dominant logic may be the philosophical foundation of service science, and the service system may be its basic theoretical construct.

1,274 citations


Journal ArticleDOI
TL;DR: In this article, the fundamental optical behavior of carbon nanotubes is described, together with the opportunities these materials offer for light generation and detection and for photovoltaic energy generation.
Abstract: Carbon nanotubes possess unique properties that make them potentially useful in many applications in optoelectronics. This review describes the fundamental optical behaviour of carbon nanotubes as well as their opportunities for light generation and detection, and photovoltaic energy generation.

1,084 citations


Journal ArticleDOI
TL;DR: This work discusses the critical aspects that may affect the scaling of PCRAM, including materials properties, power consumption during programming and read operations, thermal cross-talk between memory cells, and failure mechanisms, and discusses experiments that directly address the scaling properties of the phase-change materials themselves.
Abstract: Nonvolatile RAM using resistance contrast in phase-change materials [or phase-change RAM (PCRAM)] is a promising technology for future storage-class memory. However, such a technology can succeed only if it can scale smaller in size, given the increasingly tiny memory cells that are projected for future technology nodes (i.e., generations). We first discuss the critical aspects that may affect the scaling of PCRAM, including materials properties, power consumption during programming and read operations, thermal cross-talk between memory cells, and failure mechanisms. We then discuss experiments that directly address the scaling properties of the phase-change materials themselves, including studies of phase transitions in both nanoparticles and ultrathin films as a function of particle size and film thickness. This work in materials directly motivated the successful creation of a series of prototype PCRAM devices, which have been fabricated and tested at phase-change material cross-sections with extremely small dimensions as low as 3 nm × 20 nm. These device measurements provide a clear demonstration of the excellent scaling potential offered by this technology, and they are also consistent with the scaling behavior predicted by extensive device simulations. Finally, we discuss issues of device integration and cell design, manufacturability, and reliability.

1,018 citations


Proceedings ArticleDOI
01 Nov 2008
TL;DR: An ontology of this area is proposed which demonstrates a dissection of the cloud into five main layers, and illustrates their interrelations as well as their inter-dependency on preceding technologies.
Abstract: Progress of research efforts in a novel technology is contingent on having a rigorous organization of its knowledge domain and a comprehensive understanding of all the relevant components of this technology and their relationships. Cloud computing is one contemporary technology on which the research community has recently embarked. Manifesting itself as the descendant of several other computing research areas such as service-oriented architecture, distributed and grid computing, and virtualization, cloud computing inherits their advancements and limitations. Towards the end-goal of a thorough comprehension of the field of cloud computing, and a more rapid adoption from the scientific community, we propose in this paper an ontology of this area which demonstrates a dissection of the cloud into five main layers, and illustrates their interrelations as well as their inter-dependency on preceding technologies. The contribution of this paper lies in being one of the first attempts to establish a detailed ontology of the cloud. Better comprehension of the technology would enable the community to design more efficient portals and gateways for the cloud, and facilitate the adoption of this novel computing approach in scientific environments. In turn, this will assist the scientific community to expedite its contributions and insights into this evolving computing field.

1,014 citations


Proceedings ArticleDOI
21 Apr 2008
TL;DR: This paper objectifies the WS-* vs. REST debate by giving a quantitative technical comparison based on architectural principles and decisions, and shows that the two approaches differ in the number of architectural decisions that must be made and in the number of available alternatives.
Abstract: Recent technology trends in the Web Services (WS) domain indicate that a solution eliminating the presumed complexity of the WS-* standards may be in sight: advocates of REpresentational State Transfer (REST) have come to believe that their ideas explaining why the World Wide Web works are just as applicable to solve enterprise application integration problems and to simplify the plumbing required to build service-oriented architectures. In this paper we objectify the WS-* vs. REST debate by giving a quantitative technical comparison based on architectural principles and decisions. We show that the two approaches differ in the number of architectural decisions that must be made and in the number of available alternatives. This discrepancy between freedom-from-choice and freedom-of-choice explains the complexity difference perceived. However, we also show that there are significant differences in the consequences of certain decisions in terms of resulting development and maintenance costs. Our comparison helps technical decision makers to assess the two integration styles and technologies more objectively and select the one that best fits their needs: REST is well suited for basic, ad hoc integration scenarios, while WS-* is more flexible and addresses advanced quality of service requirements commonly occurring in enterprise computing.

1,000 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: This work investigates the design, implementation, and evaluation of a power-aware application placement controller in the context of an environment with heterogeneous virtualized server clusters, and presents the pMapper architecture and placement algorithms to solve one practical formulation of the problem: minimizing power subject to a fixed performance requirement.
Abstract: Workload placement on servers has been traditionally driven by mainly performance objectives. In this work, we investigate the design, implementation, and evaluation of a power-aware application placement controller in the context of an environment with heterogeneous virtualized server clusters. The placement component of the application management middleware takes into account the power and migration costs in addition to the performance benefit while placing the application containers on the physical servers. The contribution of this work is two-fold: first, we present multiple ways to capture the cost-aware application placement problem that may be applied to various settings. For each formulation, we provide details on the kind of information required to solve the problems, the model assumptions, and the practicality of the assumptions on real servers. In the second part of our study, we present the pMapper architecture and placement algorithms to solve one practical formulation of the problem: minimizing power subject to a fixed performance requirement. We present comprehensive theoretical and experimental evidence to establish the efficacy of pMapper.
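
As a rough illustration of the "minimize power subject to a fixed performance requirement" formulation described above, the sketch below packs application demands onto the most power-efficient servers first. It is only a sketch, not the pMapper placement algorithms; the application names, capacities, and power model are hypothetical.

```python
# Minimal sketch of power-aware placement: pack application demands onto the
# most power-efficient servers first, powering on servers only as needed.
# Illustrates the "minimize power subject to performance" formulation only;
# not the actual pMapper algorithms. All numbers below are hypothetical.

def place(apps, servers):
    """apps: {name: cpu_demand}; servers: {name: (capacity, idle_w, peak_w)}."""
    # Rank servers by marginal power per unit of capacity (most efficient first).
    order = sorted(servers, key=lambda s: (servers[s][2] - servers[s][1]) / servers[s][0])
    placement, load = {}, {s: 0.0 for s in servers}
    for app, demand in sorted(apps.items(), key=lambda kv: -kv[1]):
        for s in order:
            cap = servers[s][0]
            if load[s] + demand <= cap:      # performance constraint: no overcommit
                placement[app] = s
                load[s] += demand
                break
        else:
            raise RuntimeError(f"no server can host {app}")
    power = sum(idle + (peak - idle) * load[s] / cap
                for s, (cap, idle, peak) in servers.items() if load[s] > 0)
    return placement, power

apps = {"web": 30, "db": 50, "batch": 20}                    # CPU units (hypothetical)
servers = {"s1": (100, 120.0, 220.0), "s2": (80, 150.0, 300.0)}
print(place(apps, servers))
```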

938 citations


Journal ArticleDOI
TL;DR: A class of hybrid algorithms, of which branch-and-bound and polyhedral outer approximation are the two extreme cases, is proposed and implemented, and computational results that demonstrate the effectiveness of this framework are reported.

Journal ArticleDOI
TL;DR: A scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs is described, including the development of a personalized location anonymization model and a suite of location perturbation algorithms.
Abstract: Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.
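
A minimal sketch of the spatial-cloaking step described above: a trusted anonymizer widens a box around the requesting client until it covers at least k-1 other recent requesters, and refuses the request if the client's maximum spatial tolerance would be exceeded. This is only an illustration of location k-anonymity, not the paper's personalized perturbation engine; the coordinates and tolerance are hypothetical.

```python
# Minimal sketch of spatial cloaking for location k-anonymity: expand a box
# around the requesting user until it covers at least k users, but refuse the
# request if that requires exceeding the user's maximum spatial tolerance.
# Illustrative only; not the paper's personalized perturbation engine.

def cloak(user_xy, other_xys, k, max_half_width):
    ux, uy = user_xy
    # Chebyshev distances from the requesting user to all other recent users.
    dists = sorted(max(abs(x - ux), abs(y - uy)) for x, y in other_xys)
    if len(dists) < k - 1:
        return None                      # not enough peers to hide among
    half = dists[k - 2]                  # smallest box covering k-1 neighbours
    if half > max_half_width:
        return None                      # would violate the spatial tolerance
    return (ux - half, uy - half, ux + half, uy + half)   # cloaking box

peers = [(1.0, 1.2), (0.4, 0.8), (3.0, 0.1), (0.9, 0.9)]
print(cloak((1.0, 1.0), peers, k=3, max_half_width=0.5))
```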

Journal ArticleDOI
TL;DR: In this article, top-gated graphene transistors operating at high frequencies (GHz) have been fabricated and their characteristics analyzed, and the measured intrinsic current gain shows an ideal 1/f frequency dependence, indicating an FET-like behavior for graphene transistors.
Abstract: Top-gated graphene transistors operating at high frequencies (GHz) have been fabricated and their characteristics analyzed. The measured intrinsic current gain shows an ideal 1/f frequency dependence, indicating an FET-like behavior for graphene transistors. The cutoff frequency fT is found to be proportional to the dc transconductance gm of the device. The peak fT increases with a reduced gate length, and fT as high as 26 GHz is measured for a graphene transistor with a gate length of 150 nm. The work represents a significant step towards the realization of graphene-based electronics for high-frequency applications.
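
The proportionality between cutoff frequency and dc transconductance reported above is consistent with the standard FET small-signal relation, shown here for reference (assuming the gate capacitance C_G dominates the input capacitance; the paper's extraction details may differ):

```latex
f_T \;=\; \frac{g_m}{2\pi C_G}
\qquad\Longrightarrow\qquad
f_T \propto g_m \quad \text{for fixed gate geometry (fixed } C_G\text{)}
```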

Proceedings ArticleDOI
Kun Liu1, Evimaria Terzi1
09 Jun 2008
TL;DR: This work formally defines the graph-anonymization problem that, given a graph G, asks for the k-degree anonymous graph that stems from G with the minimum number of graph-modification operations, and devise simple and efficient algorithms for solving this problem.
Abstract: The proliferation of network data in various application domains has raised privacy concerns for the individuals involved. Recent studies show that simply removing the identities of the nodes before publishing the graph/social network data does not guarantee privacy. The structure of the graph itself, and in its basic form the degree of the nodes, can reveal the identities of individuals. To address this issue, we study a specific graph-anonymization problem. We call a graph k-degree anonymous if for every node v, there exist at least k-1 other nodes in the graph with the same degree as v. This definition of anonymity prevents the re-identification of individuals by adversaries with a priori knowledge of the degree of certain nodes. We formally define the graph-anonymization problem that, given a graph G, asks for the k-degree anonymous graph that stems from G with the minimum number of graph-modification operations. We devise simple and efficient algorithms for solving this problem. Our algorithms are based on principles related to the realizability of degree sequences. We apply our methods to a large spectrum of synthetic and real datasets and demonstrate their efficiency and practical utility.
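
The definition above is easy to state operationally: every degree value must be shared by at least k nodes. The sketch below checks this on a degree sequence; it is only an illustration of the definition, not the paper's degree-sequence-realizability algorithms, and the example graph is hypothetical.

```python
# Minimal sketch of k-degree anonymity: a graph is k-degree anonymous if every
# node shares its degree with at least k-1 other nodes. This only checks the
# definition on a degree sequence; the paper's algorithms additionally modify
# the graph (via realizable degree sequences) to reach anonymity.
from collections import Counter

def is_k_degree_anonymous(degrees, k):
    counts = Counter(degrees)
    return all(c >= k for c in counts.values())

# Hypothetical 5-node graph given as an adjacency list.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
degrees = [len(nbrs) for nbrs in graph.values()]

print(degrees)                                 # [2, 2, 3, 2, 1]
print(is_k_degree_anonymous(degrees, 2))       # False: degrees 3 and 1 are unique
print(is_k_degree_anonymous([2, 2, 2, 2], 2))  # True
```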

Journal ArticleDOI
17 Aug 2008
TL;DR: The experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications.
Abstract: As peer-to-peer (P2P) emerges as a major paradigm for scalable network application design, it also exposes significant new challenges in achieving efficient and fair utilization of Internet network resources. Being largely network-oblivious, many P2P applications may lead to inefficient network resource usage and/or low application performance. In this paper, we propose a simple architecture called P4P to allow for more effective cooperative traffic control between applications and network providers. We conducted extensive simulations and real-life experiments on the Internet to demonstrate the feasibility and effectiveness of P4P. Our experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications.
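
The kind of cooperation P4P enables can be illustrated with a simple network-aware peer-selection rule: given provider-supplied costs between network locations, a tracker prefers candidate peers whose paths are cheap for the provider instead of picking peers at random. This sketch is illustrative only; the PIDs, costs, and function names are hypothetical and do not reflect the actual P4P interfaces.

```python
# Minimal sketch of network-aware peer selection in the spirit of P4P:
# a tracker ranks candidate peers by a provider-supplied cost between the
# requesting peer's network location (PID) and each candidate's location,
# rather than choosing peers at random. Costs and PIDs are hypothetical.
import random

# Provider-supplied cost between network locations (lower = preferred).
pid_cost = {("pidA", "pidA"): 1, ("pidA", "pidB"): 5, ("pidA", "pidC"): 10,
            ("pidB", "pidB"): 1, ("pidB", "pidA"): 5, ("pidB", "pidC"): 4,
            ("pidC", "pidC"): 1, ("pidC", "pidA"): 10, ("pidC", "pidB"): 4}

def select_peers(my_pid, candidates, n):
    """candidates: list of (peer_id, pid). Return n peers, cheapest paths first."""
    ranked = sorted(candidates,
                    key=lambda c: (pid_cost.get((my_pid, c[1]), float("inf")),
                                   random.random()))   # random tie-break
    return [peer for peer, _ in ranked[:n]]

candidates = [("p1", "pidC"), ("p2", "pidA"), ("p3", "pidB"), ("p4", "pidA")]
print(select_peers("pidA", candidates, n=2))   # prefers same-location peers p2/p4
```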

Proceedings ArticleDOI
08 Nov 2008
TL;DR: This analysis of user behavior and interviews presents the case that professionals use internal social networking to build stronger bonds with their weak ties and to reach out to employees they do not know.
Abstract: The introduction of a social networking site inside of a large enterprise enables a new method of communication between colleagues, encouraging both personal and professional sharing inside the protected walls of a company intranet. Our analysis of user behavior and interviews presents the case that professionals use internal social networking to build stronger bonds with their weak ties and to reach out to employees they do not know. Their motivations in doing this include connecting on a personal level with coworkers, advancing their career with the company, and campaigning for their projects.

Proceedings ArticleDOI
27 Jun 2008
TL;DR: MQTT-S is designed in such a way that it can be run on low-end and battery-operated sensor/actuator devices and operate over bandwidth-constrained WSNs such as ZigBee-based networks.
Abstract: Wireless sensor networks (WSNs) pose novel challenges compared with traditional networks. To answer such challenges a new communication paradigm, data-centric communication, is emerging. One form of data-centric communication is the publish/subscribe messaging system. Compared with other data-centric variants, publish/subscribe systems are common and wide-spread in distributed computing. Thus, extending publish/subscribe systems into WSNs will simplify the integration of sensor applications with other distributed applications. This paper describes MQTT-S [1], an extension of the open publish/subscribe protocol message queuing telemetry transport (MQTT) [2] to WSNs. MQTT-S is designed in such a way that it can be run on low-end and battery-operated sensor/actuator devices and operate over bandwidth-constrained WSNs such as ZigBee-based networks. Various protocol design points are discussed and compared. MQTT-S has been implemented and is currently being tested on the IBM wireless sensor networking testbed [3]. Implementation aspects, open challenges and future work are also presented.
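
For readers unfamiliar with the publish/subscribe pattern that MQTT-S extends to WSNs, the sketch below shows a minimal in-memory topic-based dispatcher. It is not MQTT-S itself, which additionally handles topic-ID registration, gateways to an MQTT broker, and sleeping battery-powered devices; the topic names and handlers are hypothetical.

```python
# Minimal in-memory publish/subscribe dispatcher illustrating the data-centric
# messaging pattern that MQTT-S brings to sensor networks. Real MQTT-S adds
# topic-ID registration, gateways to an MQTT broker, and support for sleeping
# low-power devices, none of which is modeled here.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
broker.subscribe("sensors/room1/temperature",
                 lambda t, p: print(f"actuator saw {p} on {t}"))
broker.publish("sensors/room1/temperature", "21.5C")
```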

Journal ArticleDOI
James C. Spohrer1, Paul P. Maglio1
TL;DR: In this article, the authors describe the emergence of service science, a new interdisciplinary area of study that aims to address the challenge of becoming more systematic about innovating in service.
Abstract: The current growth of the service sector in global economies is unparalleled in human history—by scale and speed of labor migration. Even large manufacturing firms are seeing dramatic shifts in percent revenue derived from services. The need for service innovations to fuel further economic growth and to raise the quality and productivity levels of services has never been greater. Services are moving to center stage in the global arena, especially knowledge-intensive business services aimed at business performance transformation. One challenge to systematic service innovation is the interdisciplinary nature of service, integrating technology, business, social, and client (demand) innovations. This paper describes the emergence of service science, a new interdisciplinary area of study that aims to address the challenge of becoming more systematic about innovating in service.

Journal ArticleDOI
Masamitsu Hayashi1, Luc Thomas1, Rai Moriya1, Charles T. Rettner1, Stuart S. P. Parkin1 
11 Apr 2008-Science
TL;DR: Using permalloy nanowires, the successive creation, motion, and detection of domain walls are achieved by using sequences of properly timed, nanosecond-long, spin-polarized current pulses.
Abstract: The controlled motion of a series of domain walls along magnetic nanowires using spin-polarized current pulses is the essential ingredient of the proposed magnetic racetrack memory, a new class of potential non-volatile storage-class memories. Using permalloy nanowires, we achieved the successive creation, motion, and detection of domain walls by using sequences of properly timed, nanosecond-long, spin-polarized current pulses. The cycle time for the writing and shifting of the domain walls was a few tens of nanoseconds. Our results illustrate the basic concept of a magnetic shift register that relies on the phenomenon of spin-momentum transfer to move series of closely spaced domain walls.

Journal ArticleDOI
Geoffrey W. Burr1, B. N. Kurdi1, J. C. Scott1, Chung H. Lam1, Kailash Gopalakrishnan1, R. S. Shenoy1 
TL;DR: In this article, the authors review the candidate solid-state nonvolatile memory technologies that potentially could be used to construct a storage-class memory (SCM) and compare the potential for practical scaling to ultrahigh effective areal density for each of these candidate technologies.
Abstract: Storage-class memory (SCM) combines the benefits of a solid-state memory, such as high performance and robustness, with the archival capabilities and low cost of conventional hard-disk magnetic storage. Such a device would require a solid-state nonvolatile memory technology that could be manufactured at an extremely high effective areal density using some combination of sublithographic patterning techniques, multiple bits per cell, and multiple layers of devices. We review the candidate solid-state nonvolatile memory technologies that potentially could be used to construct such an SCM. We discuss evolutionary extensions of conventional flash memory, such as SONOS (silicon-oxide-nitride-oxide-silicon) and nanotraps, as well as a number of revolutionary new memory technologies. We review the capabilities of ferroelectric, magnetic, phase-change, and resistive random-access memories, including perovskites and solid electrolytes, and finally organic and polymeric memory. The potential for practical scaling to ultrahigh effective areal density for each of these candidate technologies is then compared.

Journal ArticleDOI
TL;DR: In this article, the spin angular momentum from a spin-polarized current to a ferromagnet can generate sufficient torque to reorient the magnet's moment, which could enable the development of efficient electrically actuated magnetic memories and nanoscale microwave oscillators.
Abstract: The transfer of spin angular momentum from a spin-polarized current to a ferromagnet can generate sufficient torque to reorient the magnet’s moment. This torque could enable the development of efficient electrically actuated magnetic memories and nanoscale microwave oscillators. Yet difficulties in making quantitative measurements of the spin-torque vector have hampered understanding. Here we present direct measurements of both the magnitude and direction of the spin torque in magnetic tunnel junctions, the type of device of primary interest for applications. At low bias V, the differential torque dτ/dV lies in the plane defined by the electrode magnetizations, and its magnitude is in excellent agreement with recent predictions for near-perfect spin-polarized tunnelling. We find that the strength of the in-plane differential torque remains almost constant with increasing bias, despite a substantial decrease in the device magnetoresistance, and that with bias the torque vector also rotates out of the plane.

Journal ArticleDOI
TL;DR: This work studies approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample to obtain a lower bound to the true optimal value.
Abstract: We study approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample. We show that such a sample approximation problem with a risk level larger than the required risk level will yield a lower bound to the true optimal value with probability approaching one exponentially fast. This leads to an a priori estimate of the sample size required to have high confidence that the sample approximation will yield a lower bound. We then provide conditions under which solving a sample approximation problem with a risk level smaller than the required risk level will yield feasible solutions to the original problem with high probability. Once again, we obtain a priori estimates on the sample size required to obtain high confidence that the sample approximation problem will yield a feasible solution to the original problem. Finally, we present numerical illustrations of how these results can be used to obtain feasible solutions and optimality bounds for optimization problems with probabilistic constraints.
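
One common way to write the sample approximation described above is shown below; the notation is generic rather than the paper's exact formulation. The chance constraint with risk level epsilon is replaced by a constraint on the fraction of sampled scenarios that are satisfied, at a possibly different risk level gamma.

```latex
% Chance-constrained problem with risk level \epsilon:
\min_{x \in X} \; c^{\top} x
\quad \text{s.t.} \quad
\Pr\bigl[G(x,\xi) \le 0\bigr] \ge 1 - \epsilon

% Sample approximation with scenarios \xi^{1},\dots,\xi^{N} and risk level \gamma:
\min_{x \in X} \; c^{\top} x
\quad \text{s.t.} \quad
\frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\bigl[G(x,\xi^{i}) \le 0\bigr] \ge 1 - \gamma
```

As the abstract states, taking gamma larger than epsilon yields, with high probability, a lower bound on the true optimal value, while taking gamma smaller than epsilon yields, with high probability, solutions feasible for the original problem.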

Journal ArticleDOI
Rusty Russell1
TL;DR: The virtio API layer is described as implemented in Linux, then the vring implementation, and finally its embodiment in a PCI device for simple adoption on otherwise fully-virtualized guests.
Abstract: The Linux Kernel currently supports at least 8 distinct virtualization systems: Xen, KVM, VMware's VMI, IBM's System p, IBM's System z, User Mode Linux, lguest and IBM's legacy iSeries. It seems likely that more such systems will appear, and until recently each of these had its own block, network, console and other drivers with varying features and optimizations. The attempt to address this is virtio: a series of efficient, well-maintained Linux drivers which can be adapted for various different hypervisor implementations using a shim layer. This includes a simple extensible feature mechanism for each driver. We also provide an obvious ring buffer transport implementation called vring, which is currently used by KVM and lguest. This has the subtle effect of providing a path of least resistance for any new hypervisors: supporting this efficient transport mechanism will immediately reduce the amount of work which needs to be done. Finally, we provide an implementation which presents the vring transport and device configuration as a PCI device: this means guest operating systems merely need a new PCI driver, and hypervisors need only add vring support to the virtual devices they implement (currently only KVM does this). This paper will describe the virtio API layer as implemented in Linux, then the vring implementation, and finally its embodiment in a PCI device for simple adoption on otherwise fully-virtualized guests. We'll wrap up with some of the preliminary work to integrate this I/O mechanism deeper into the Linux host kernel.
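
As a rough analogy for the shared-ring transport pattern that vring implements between guest and host, the sketch below shows a single-producer/single-consumer ring buffer. It deliberately does not reproduce the real vring layout (a descriptor table plus separate available and used rings in shared memory, with notification handling) or the virtio PCI configuration.

```python
# Minimal single-producer/single-consumer ring buffer illustrating the general
# shared-ring transport pattern used by vring (guest posts buffers, host
# consumes them). This is NOT the actual vring layout; it is only an analogy
# for how a fixed-size ring decouples the two sides.

class Ring:
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size
        self.head = 0          # next slot the producer will fill
        self.tail = 0          # next slot the consumer will drain

    def put(self, item):
        if self.head - self.tail == self.size:
            return False       # ring full; producer must wait and retry later
        self.slots[self.head % self.size] = item
        self.head += 1
        return True

    def get(self):
        if self.tail == self.head:
            return None        # ring empty
        item = self.slots[self.tail % self.size]
        self.tail += 1
        return item

ring = Ring(4)
for pkt in ["pkt0", "pkt1", "pkt2"]:
    ring.put(pkt)              # producer side: post buffers
print(ring.get(), ring.get())  # consumer side: drain them in order
```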

Proceedings ArticleDOI
07 Jan 2008
TL;DR: This paper shows how the service-system abstraction can be used to understand how value is created, in the process unifying concepts from many disciplines and creating the foundation for an integrated science of service.
Abstract: Abstraction is a powerful thing. During the 19th century, the industrial revolution was built on many powerful abstractions, such as mass, energy, work, and power. During the 20th century, the information revolution was built on many powerful abstractions, such as binary digit or bit, binary coding, and algorithmic complexity. Here, we propose an abstraction that will be important to the service revolution of the 21st century: the service system, which is a configuration of people, technologies, and other resources that interact with other service systems to create mutual value. Many systems can be viewed as service systems, including families, cities, and companies, among many others. In this paper, we show how the service-system abstraction can be used to understand how value is created, in the process unifying concepts from many disciplines and creating the foundation for an integrated science of service.

Journal ArticleDOI
09 Dec 2008-ACS Nano
TL;DR: This work reports the self-assembly and self-alignment of CNTs from solution into micron-wide strips that form regular arrays of dense and highly aligned CNT films covering the entire chip, which is ideally suitable for device fabrication.
Abstract: Thin film transistors (TFTs) are now poised to revolutionize the display, sensor, and flexible electronics markets. However, there is a limited choice of channel materials compatible with low-temperature processing. This has inhibited the fabrication of high electrical performance TFTs. Single-walled carbon nanotubes (CNTs) have very high mobilities and can be solution-processed, making thin film CNT-based TFTs a natural direction for exploration. The two main challenges facing CNT-TFTs are the difficulty of placing and aligning CNTs over large areas and low on/off current ratios due to admixture of metallic nanotubes. Here, we report the self-assembly and self-alignment of CNTs from solution into micron-wide strips that form regular arrays of dense and highly aligned CNT films covering the entire chip, which is ideally suitable for device fabrication. The films are formed from pre-separated, 99% purely semiconducting CNTs and, as a result, the CNT-TFTs exhibit simultaneously high drive currents and large...

Proceedings ArticleDOI
09 Jun 2008
TL;DR: Spade is the System S declarative stream processing engine that allows developers to construct their applications with fine granular stream operators without worrying about the performance implications that might exist, even in a distributed system.
Abstract: In this paper, we present Spade - the System S declarative stream processing engine. System S is a large-scale, distributed data stream processing middleware under development at IBM T. J. Watson Research Center. As a front-end for rapid application development for System S, Spade provides (1) an intermediate language for flexible composition of parallel and distributed data-flow graphs, (2) a toolkit of type-generic, built-in stream processing operators, that support scalar as well as vectorized processing and can seamlessly inter-operate with user-defined operators, and (3) a rich set of stream adapters to ingest/publish data from/to outside sources. More importantly, Spade automatically brings performance optimization and scalability to System S applications. To that end, Spade employs a code generation framework to create highly-optimized applications that run natively on the Stream Processing Core (SPC), the execution and communication substrate of System S, and take full advantage of other System S services. Spade allows developers to construct their applications with fine granular stream operators without worrying about the performance implications that might exist, even in a distributed system. Spade's optimizing compiler automatically maps applications into appropriately sized execution units in order to minimize communication overhead, while at the same time exploiting available parallelism. By virtue of the scalability of the System S runtime and Spade's effective code generation and optimization, we can scale applications to a large number of nodes. Currently, we can run Spade jobs on ≈ 500 processors within more than 100 physical nodes in a tightly connected cluster environment. Spade has been in use at IBM Research to create real-world streaming applications, ranging from monitoring financial market feeds to radio telescopes to semiconductor fabrication lines.
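
The flavor of composing a data-flow graph from fine-grained, generic stream operators can be sketched with plain Python generators, as below. This is not Spade's intermediate language or operator toolkit, and it ignores the compiler, code-generation, and distribution aspects; the operator names and the sample feed are hypothetical.

```python
# Sketch of composing a stream-processing data-flow graph from small generic
# operators, in the spirit of an operator toolkit. Plain Python generators
# stand in for a real intermediate language; the "trades" feed is hypothetical.

def source(records):                       # ingest adapter
    for rec in records:
        yield rec

def filt(stream, predicate):               # selection operator
    return (rec for rec in stream if predicate(rec))

def mapper(stream, fn):                    # per-tuple transformation operator
    return (fn(rec) for rec in stream)

def sink(stream):                          # publish adapter
    for rec in stream:
        print(rec)

trades = [{"sym": "IBM", "px": 101.0}, {"sym": "XYZ", "px": 9.5},
          {"sym": "IBM", "px": 102.3}]

# Compose the graph: source -> filter -> map -> sink
graph = mapper(filt(source(trades), lambda r: r["sym"] == "IBM"),
               lambda r: (r["sym"], r["px"]))
sink(graph)
```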

Journal ArticleDOI
TL;DR: Using SCM as a disk drive replacement, storage system products will have random and sequential I/O performance that is orders of magnitude better than that of comparable disk-based systems and require much less space and power in the data center.
Abstract: The dream of replacing rotating mechanical storage, the disk drive, with solid-state, nonvolatile RAM may become a reality in the near future. Approximately ten new technologies--collectively called storage-class memory (SCM)--are currently under development and promise to be fast, inexpensive, and power efficient. Using SCM as a disk drive replacement, storage system products will have random and sequential I/O performance that is orders of magnitude better than that of comparable disk-based systems and require much less space and power in the data center. In this paper, we extrapolate disk and SCM technology trends to 2020 and analyze the impact on storage systems. The result is a 100- to 1,000-fold advantage for SCM in terms of the data center space and power required.

Journal ArticleDOI
G. I. Meijer1
21 Mar 2008-Science
TL;DR: New memory concepts may lead to computer systems that do not require a lengthy start-up process when the system is turned on.
Abstract: New memory concepts may lead to computer systems that do not require a lengthy start-up process when turned on.

Book ChapterDOI
01 Jan 2008
TL;DR: This paper provides a review of the state-of-the-art methods for privacy, including methods for randomization, k-anonymization, and distributed privacy-preserving data mining, and the computational and theoretical limits associated with privacy-preservation over high dimensional data sets.
Abstract: In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. A number of algorithmic techniques have been designed for privacy-preserving data mining. In this paper, we provide a review of the state-of-the-art methods for privacy. We discuss methods for randomization, k-anonymization, and distributed privacy-preserving data mining. We also discuss cases in which the output of data mining applications needs to be sanitized for privacy-preservation purposes. We discuss the computational and theoretical limits associated with privacy-preservation over high dimensional data sets.
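
A minimal sketch of the randomization approach mentioned above: each sensitive numeric value is perturbed with independent noise before release, while simple aggregates remain approximately estimable. The data and noise scale are hypothetical, and the survey covers much stronger distribution-reconstruction and privacy analyses than this illustrates.

```python
# Minimal sketch of the randomization (noise-addition) approach to
# privacy-preserving data mining: perturb each sensitive value with
# independent noise before release; aggregates such as the mean remain
# approximately estimable. Data and noise scale are hypothetical.
import random

random.seed(0)
salaries = [52_000, 61_000, 48_500, 75_000, 66_000]             # sensitive values
noise_scale = 10_000

released = [s + random.gauss(0, noise_scale) for s in salaries]  # perturbed copy

true_mean = sum(salaries) / len(salaries)
released_mean = sum(released) / len(released)   # noisy but unbiased estimate
print(round(true_mean), round(released_mean))
```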

Journal ArticleDOI
TL;DR: In this paper, an ultracompact switch that is insensitive to wavelength and temperature is demonstrated for multiple 40 Gbit s⁻¹ optical channels and is suitable for scalable networks.
Abstract: Silicon photonics is deemed to be the solution for dense on-chip optical networks. Now, by using cascaded silicon microring resonators, scientists demonstrate an ultracompact switch that is insensitive to wavelength and temperature. The switch also has fast error-free operation in multiple 40 Gbit s⁻¹ optical channels and is suitable for scalable networks.