scispace - formally typeset

Showing papers by "IBM" published in 2011


Journal ArticleDOI
17 Nov 2011-Nature
TL;DR: Tunnel FETs based on ultrathin semiconducting films or nanowires could achieve a 100-fold power reduction over complementary metal–oxide–semiconductor transistors, so integrating tunnel FETs with CMOS technology could improve low-power integrated circuits.
Abstract: Power dissipation is a fundamental problem for nanoelectronic circuits. Scaling the supply voltage reduces the energy needed for switching, but the field-effect transistors (FETs) in today's integrated circuits require at least 60 mV of gate voltage to increase the current by one order of magnitude at room temperature. Tunnel FETs avoid this limit by using quantum-mechanical band-to-band tunnelling, rather than thermal injection, to inject charge carriers into the device channel. Tunnel FETs based on ultrathin semiconducting films or nanowires could achieve a 100-fold power reduction over complementary metal-oxide-semiconductor (CMOS) transistors, so integrating tunnel FETs with CMOS technology could improve low-power integrated circuits.

2,390 citations
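The "at least 60 mV of gate voltage" per decade quoted in the abstract is the room-temperature thermionic limit on subthreshold swing, S = (kT/q)·ln 10, which tunnel FETs circumvent. A quick check using standard physical constants:

```python
import math

# Thermionic (Boltzmann) limit on subthreshold swing at room temperature:
# S = (kT/q) * ln(10) volts per decade of drain current.
k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

swing_mV = (k * T / q) * math.log(10) * 1000  # mV per decade
print(round(swing_mV, 1))  # ~59.5 mV/decade, hence the "at least 60 mV" limit
```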


Posted Content
TL;DR: In this article, the authors proposed a framework for fair classification comprising a task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Abstract: We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.

2,003 citations
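The paper's fairness constraint is a Lipschitz condition: a randomized classifier M maps each individual to a distribution over outcomes, and fairness requires D(M(x), M(y)) ≤ d(x, y) for all pairs, with D a distance on distributions such as total variation. The toy metric and classifier below are hypothetical illustrations, not from the paper:

```python
# Sketch of the Lipschitz fairness condition from the abstract: similar
# individuals (small d) must receive similar outcome distributions (small D).
# The individuals, metric d, and classifier M below are made-up examples.

def total_variation(p, q):
    """Total variation distance between two finite distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def is_lipschitz_fair(individuals, M, d):
    """Check D(M(x), M(y)) <= d(x, y) for every pair of individuals."""
    return all(
        total_variation(M[x], M[y]) <= d(x, y)
        for x in individuals for y in individuals
    )

# Two similar applicants (task-specific distance 0.1) must get similar
# admission probabilities; here TV = 0.05 <= 0.1, so the constraint holds.
M = {"alice": [0.7, 0.3], "bob": [0.65, 0.35]}  # [P(admit), P(reject)]
d = lambda x, y: 0.0 if x == y else 0.1
print(is_lipschitz_fair(["alice", "bob"], M, d))  # True
```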


Journal ArticleDOI
Raghu K. Ganti1, Fan Ye1, Hui Lei1
TL;DR: The need for a unified architecture for mobile crowdsensing is argued and the requirements it must satisfy are envisioned.
Abstract: An emerging category of devices at the edge of the Internet are consumer-centric mobile sensing and computing devices, such as smartphones, music players, and in-vehicle sensors. These devices will fuel the evolution of the Internet of Things as they feed sensor data to the Internet at a societal scale. In this article, we examine a category of applications that we term mobile crowdsensing, where individuals with sensing and computing devices collectively share data and extract information to measure and map phenomena of common interest. We present a brief overview of existing mobile crowdsensing applications, explain their unique characteristics, illustrate various research challenges, and discuss possible solutions. Finally, we argue the need for a unified architecture and envision the requirements it must satisfy.

1,833 citations


Journal ArticleDOI
TL;DR: Heusler compounds as discussed by the authors are a remarkable class of intermetallic materials with 1:1:1 or 2:1:1 composition comprising more than 1,500 members, whose properties can easily be predicted by the valence electron count.

1,675 citations
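The valence-electron-count (VEC) rule the TL;DR refers to can be illustrated with a trivial counter: half-Heusler (1:1:1) phases with 18 valence electrons, such as TiNiSn, are typically semiconducting, as are 24-electron full-Heusler (2:1:1) phases such as Fe2VAl. The group-derived valence counts below are standard values; this is an illustration of the counting rule, not code from the paper:

```python
# Valence electron count (VEC) rule for Heusler compounds: 18-electron
# half-Heusler and 24-electron full-Heusler phases tend to be semiconducting.
VALENCE = {"Ti": 4, "Ni": 10, "Sn": 4, "Fe": 8, "V": 5, "Al": 3}

def vec(formula):
    """Sum valence electrons over a composition given as (element, count) pairs."""
    return sum(VALENCE[el] * n for el, n in formula)

print(vec([("Ti", 1), ("Ni", 1), ("Sn", 1)]))  # 18: semiconducting half-Heusler
print(vec([("Fe", 2), ("V", 1), ("Al", 1)]))   # 24: semiconducting full-Heusler
```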


Journal ArticleDOI
TL;DR: Low-energy electron microscopy analysis showed that the large graphene domains had a single crystallographic orientation, with an occasional domain having two orientations.
Abstract: Graphene single crystals with dimensions of up to 0.5 mm on a side were grown by low-pressure chemical vapor deposition in copper-foil enclosures using methane as a precursor. Low-energy electron microscopy analysis showed that the large graphene domains had a single crystallographic orientation, with an occasional domain having two orientations. Raman spectroscopy revealed the graphene single crystals to be uniform monolayers with a low D-band intensity. The electron mobility of graphene films extracted from field-effect transistor measurements was found to be higher than 4000 cm² V⁻¹ s⁻¹ at room temperature.

1,255 citations
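The field-effect mobility mentioned in the abstract is conventionally extracted from the transfer characteristic as μ = (L / (W·C_ox·V_DS))·(dI_D/dV_G). All device numbers below are hypothetical, chosen only to show the arithmetic at the scale the paper reports; they are not values from the paper:

```python
# Field-effect mobility extraction from FET transfer characteristics:
#   mu = (L / (W * C_ox * V_DS)) * g_m,   g_m = dI_D/dV_G (measured slope).
# All numbers are hypothetical illustrations.
L = 10e-6        # channel length, m
W = 10e-6        # channel width, m
C_ox = 1.15e-4   # gate capacitance per area, F/m^2 (~300 nm SiO2 back gate)
V_DS = 0.1       # drain-source bias, V
g_m = 5e-6       # transconductance, S

mu = (L / (W * C_ox * V_DS)) * g_m  # m^2 V^-1 s^-1
mu_cm2 = mu * 1e4                   # convert to cm^2 V^-1 s^-1
print(round(mu_cm2))                # ~4300, i.e. above the paper's 4000 figure
```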


Journal ArticleDOI
David B. Mitzi1, Oki Gunawan1, Teodor K. Todorov1, Kejia Wang1, Supratik Guha1 
TL;DR: In this article, the development of kesterite-based Cu2ZnSn(S,Se)4 (CZTSSe) thin-film solar cells, in which the indium and gallium from CIGSSe are replaced by the readily available elements zinc and tin, is reviewed.

1,151 citations


Proceedings Article
28 Jun 2011
TL;DR: This paper proposes a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes and describes a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy.
Abstract: Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.

1,058 citations
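The anchor-graph idea in the abstract can be sketched in a few lines: represent each point by its (truncated, normalized) similarities Z to a small set of anchors, diagonalize the resulting small matrix to approximate graph Laplacian eigenvectors, and threshold the embedding to get bits. This is a compressed illustration of that pipeline, not the reference implementation (k-means anchors, out-of-sample eigenfunctions, and the paper's hierarchical multi-bit thresholds are omitted):

```python
import numpy as np

# Simplified Anchor Graph Hashing sketch: low-rank graph via anchors,
# spectral embedding, one bit per eigenvector (median threshold).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))   # toy data: 200 points, 8 dims
m, s, n_bits = 16, 3, 4         # anchors, nearest anchors kept, code length

anchors = X[rng.choice(len(X), m, replace=False)]  # random anchors (k-means in the paper)

# Truncated similarity matrix Z: Gaussian weights to the s nearest anchors,
# rows normalized to sum to 1.
d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
Z = np.zeros((len(X), m))
idx = np.argsort(d2, axis=1)[:, :s]
for i in range(len(X)):
    w = np.exp(-d2[i, idx[i]] / d2[i, idx[i]].mean())
    Z[i, idx[i]] = w / w.sum()

# Low-rank adjacency A = Z Lambda^{-1} Z^T; work with the small m x m
# matrix M = Lambda^{-1/2} Z^T Z Lambda^{-1/2} instead of the full graph.
lam = Z.sum(axis=0)
M = (Z / np.sqrt(lam)).T @ (Z / np.sqrt(lam))
eigvals, V = np.linalg.eigh(M)          # ascending eigenvalues
V = V[:, ::-1][:, 1:n_bits + 1]         # top nontrivial eigenvectors

Y = Z @ (V / np.sqrt(lam)[:, None])     # approximate Laplacian eigenvectors
codes = (Y > np.median(Y, axis=0)).astype(int)  # one bit per eigenvector
print(codes.shape)                      # (200, 4) binary codes
```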


Journal ArticleDOI
Bryan D. McCloskey1, Donald S. Bethune1, Robert M. Shelby1, G. Girishkumar1, Alan C. Luntz1 
TL;DR: Coulometry has to be coupled with quantitative gas consumption and evolution data to properly characterize the rechargeability of Li-air batteries, and chemical and electrochemical electrolyte stability in the presence of lithium peroxide and its intermediates is essential to produce a truly reversible Li-O2 electrochemistry.
Abstract: Among the many important challenges facing the development of Li–air batteries, understanding the electrolyte’s role in producing the appropriate reversible electrochemistry (i.e., 2Li+ + O2 + 2e– ↔ Li2O2) is critical. Quantitative differential electrochemical mass spectrometry (DEMS), coupled with isotopic labeling of oxygen gas, was used to study Li–O2 electrochemistry in various solvents, including carbonates (typical Li ion battery solvents) and dimethoxyethane (DME). In conjunction with the gas-phase DEMS analysis, electrodeposits formed during discharge on Li–O2 cell cathodes were characterized using ex situ analytical techniques, such as X-ray diffraction and Raman spectroscopy. Carbonate-based solvents were found to irreversibly decompose upon cell discharge. DME-based cells, however, produced mainly lithium peroxide on discharge. Upon cell charge, the lithium peroxide both decomposed to evolve oxygen and oxidized DME at high potentials. Our results lead to two conclusions: (1) coulometry has to be coupled with quantitative gas consumption and evolution data to properly characterize the rechargeability of Li–air batteries; and (2) chemical and electrochemical electrolyte stability in the presence of lithium peroxide and its intermediates is essential to produce a truly reversible Li–O2 electrochemistry.

959 citations
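The paper's central bookkeeping, comparing electrons passed (coulometry) against moles of O2 measured by DEMS, reduces to Faraday's law: ideal Li2O2 chemistry gives exactly 2 e⁻ per O2. The measurement values below are hypothetical, only to show the calculation:

```python
# Coulometry vs. gas analysis: moles of electrons from total charge Q,
# compared with moles of O2 from DEMS. Ideal 2Li+ + O2 + 2e- -> Li2O2
# gives an e-/O2 ratio of 2.0; deviations signal side reactions.
F = 96485.33   # Faraday constant, C/mol
Q = 3.86       # total charge passed, C (hypothetical measurement)
n_O2 = 2.0e-5  # moles of O2 consumed, from DEMS (hypothetical measurement)

n_electrons = Q / F
ratio = n_electrons / n_O2
print(round(ratio, 2))  # ~2.0 indicates clean peroxide electrochemistry
```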


Journal ArticleDOI
07 Apr 2011-Nature
TL;DR: A systematic study of top-gated CVD-graphene radio-frequency transistors is reported; the cut-off frequency was found to scale as 1/(gate length), and device performance shows little temperature dependence down to 4.3 K, providing a much larger operation window than is available for conventional devices.
Abstract: Owing to its high carrier mobility and saturation velocity, graphene has attracted enormous attention in recent years [1-5]. In particular, high-performance graphene transistors for radio-frequency (r.f.) applications are of great interest [6-13]. Synthesis of large-scale graphene sheets of high quality and at low cost has been demonstrated using chemical vapour deposition (CVD) methods [14]. However, very few studies have been performed on the scaling behaviour of transistors made from CVD graphene for r.f. applications, which hold great potential for commercialization. Here we report the systematic study of top-gated CVD-graphene r.f. transistors with gate lengths scaled down to 40 nm, the shortest gate length demonstrated on graphene r.f. devices. The CVD graphene was grown on copper film and transferred to a wafer of diamond-like carbon. Cut-off frequencies as high as 155 GHz have been obtained for the 40-nm transistors, and the cut-off frequency was found to scale as 1/(gate length). Furthermore, we studied graphene r.f. transistors at cryogenic temperatures. Unlike conventional semiconductor devices where low-temperature performance is hampered by carrier freeze-out effects, the r.f. performance of our graphene devices exhibits little temperature dependence down to 4.3 K, providing a much larger operation window than is available for conventional devices.

897 citations
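The 1/(gate length) scaling the abstract reports can be turned into a simple extrapolation from the paper's measured point (155 GHz at 40 nm). The longer gate lengths below are hypothetical, purely to illustrate the scaling law, not additional data:

```python
# Cut-off frequency under ideal 1/L_gate scaling, anchored at the
# paper's measured 155 GHz for a 40 nm gate.
f_ref, L_ref = 155e9, 40e-9  # Hz, m (measured point from the paper)

def f_T(L_gate):
    """Predicted cut-off frequency assuming f_T proportional to 1/L_gate."""
    return f_ref * (L_ref / L_gate)

for L in (40e-9, 80e-9, 160e-9):
    print(f"{L * 1e9:.0f} nm -> {f_T(L) / 1e9:.1f} GHz")  # 155.0, 77.5, 38.8
```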


Journal ArticleDOI
10 Jun 2011-Science
TL;DR: A wafer-scale graphene circuit was demonstrated in which all circuit components, including graphene field-effect transistors and inductors, were monolithically integrated on a single silicon carbide wafer.
Abstract: A wafer-scale graphene circuit was demonstrated in which all circuit components, including graphene field-effect transistors and inductors, were monolithically integrated on a single silicon carbide wafer. The integrated circuit operates as a broadband radio-frequency mixer at frequencies up to 10 gigahertz. These graphene circuits exhibit outstanding thermal stability with little reduction in performance (less than 1 decibel) between 300 and 400 kelvin. These results open up possibilities of achieving practical graphene technology with more complex functionality and performance.

896 citations


Journal ArticleDOI
TL;DR: The potential role that electron microscopy of liquid samples can play in areas such as energy storage and bioimaging is assessed.
Abstract: This article reviews the use of electron microscopy in liquids and its application in biology and materials science.

Journal ArticleDOI
Fengnian Xia1, Vasili Perebeinos1, Yu-Ming Lin1, Yanqing Wu1, Phaedon Avouris1 
TL;DR: It is reported that the contact resistance in a palladium-graphene junction exhibits an anomalous temperature dependence, dropping significantly as temperature decreases to a value of just 110 ± 20 Ω µm at 6 K, which is two to three times the minimum achievable resistance.
Abstract: A high-quality junction between graphene and metallic contacts is crucial in the creation of high-performance graphene transistors. In an ideal metal-graphene junction, the contact resistance is determined solely by the number of conduction modes in graphene. However, as yet, measurements of contact resistance have been inconsistent, and the factors that determine the contact resistance remain unclear. Here, we report that the contact resistance in a palladium-graphene junction exhibits an anomalous temperature dependence, dropping significantly as temperature decreases to a value of just 110 ± 20 Ω µm at 6 K, which is two to three times the minimum achievable resistance. Using a combination of experiment and theory we show that this behaviour results from carrier transport in graphene under the palladium contact. At low temperature, the carrier mean free path exceeds the palladium-graphene coupling length, leading to nearly ballistic transport with a transfer efficiency of ~75%. As the temperature increases, this carrier transport becomes less ballistic, resulting in a considerable reduction in efficiency.

Book ChapterDOI
Craig Gentry1, Shai Halevi1
15 May 2011
TL;DR: In this article, the authors describe a working implementation of a variant of Gentry's fully homomorphic encryption scheme (STOC 2009), similar to the variant used in an earlier implementation effort by Smart and Vercauteren (PKC 2010).
Abstract: We describe a working implementation of a variant of Gentry's fully homomorphic encryption scheme (STOC 2009), similar to the variant used in an earlier implementation effort by Smart and Vercauteren (PKC 2010). Smart and Vercauteren implemented the underlying "somewhat homomorphic" scheme, but were not able to implement the bootstrapping functionality that is needed to get the complete scheme to work. We show a number of optimizations that allow us to implement all aspects of the scheme, including the bootstrapping functionality. Our main optimization is a key-generation method for the underlying somewhat homomorphic encryption that does not require full polynomial inversion. This reduces the asymptotic complexity from O(n^2.5) to O(n^1.5) when working with dimension-n lattices (and practically reduces the time from many hours/days to a few seconds/minutes). Other optimizations include a batching technique for encryption, a careful analysis of the degree of the decryption polynomial, and some space/time trade-offs for the fully homomorphic scheme. We tested our implementation with lattices of several dimensions, corresponding to several security levels: from a "toy" setting in dimension 512 to "small," "medium," and "large" settings in dimensions 2048, 8192, and 32768, respectively. The public-key size ranges from 70 Megabytes for the "small" setting to 2.3 Gigabytes for the "large" setting. The time to run one bootstrapping operation (on a 1-CPU 64-bit machine with large memory) ranges from 30 seconds for the "small" setting to 30 minutes for the "large" setting.
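To see why a scheme is only "somewhat" homomorphic before bootstrapping, here is a deliberately toy, insecure integer-based scheme in the style of van Dijk, Gentry, Halevi, and Vercauteren. It is emphatically not the lattice scheme this paper implements, but it shows homomorphic XOR and AND, and the ciphertext noise whose growth is exactly what bootstrapping (the hard part implemented here) resets:

```python
import random

# Toy symmetric "somewhat homomorphic" encryption over the integers.
# Secret key: a large odd p. Ciphertext: c = p*q + 2r + m for bit m,
# large random q, small noise r. Decryption works while the noise term
# (2r + m, and its products/sums under homomorphic ops) stays below p/2.
p = random.randrange(2**30, 2**31) | 1  # secret key (toy-sized, insecure)

def encrypt(m):                          # m in {0, 1}
    q = random.randrange(2**60, 2**61)
    r = random.randrange(0, 16)          # small non-negative noise
    return p * q + 2 * r + m

def decrypt(c):
    return (c % p) % 2                   # recover parity of the noise term

a, b = encrypt(1), encrypt(0)
print(decrypt(a + b))  # ciphertext addition decrypts to 1 XOR 0 = 1
print(decrypt(a * b))  # ciphertext multiplication decrypts to 1 AND 0 = 0
```

Each multiplication roughly squares the noise, so only low-degree circuits can be evaluated before decryption fails; that is the gap bootstrapping closes.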

Journal ArticleDOI
TL;DR: A randomized (1 - 1/e)-approximation algorithm is given for maximizing an arbitrary monotone submodular function subject to a matroid constraint in the value oracle model, matching the known optimality bound for this problem.
Abstract: Let $f:2^X \rightarrow \cal R_+$ be a monotone submodular set function, and let $(X,\cal I)$ be a matroid. We consider the problem ${\rm max}_{S \in \cal I} f(S)$. It is known that the greedy algorithm yields a $1/2$-approximation [M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey, Math. Programming Stud., no. 8 (1978), pp. 73-87] for this problem. For certain special cases, e.g., ${\rm max}_{|S| \leq k} f(S)$, the greedy algorithm yields a $(1-1/e)$-approximation. It is known that this is optimal both in the value oracle model (where the only access to $f$ is through a black box returning $f(S)$ for a given set $S$) [G. L. Nemhauser and L. A. Wolsey, Math. Oper. Res., 3 (1978), pp. 177-188] and for explicitly posed instances assuming $P \neq NP$ [U. Feige, J. ACM, 45 (1998), pp. 634-652]. In this paper, we provide a randomized $(1-1/e)$-approximation for any monotone submodular function and an arbitrary matroid. The algorithm works in the value oracle model. Our main tools are a variant of the pipage rounding technique of Ageev and Sviridenko [J. Combin. Optim., 8 (2004), pp. 307-328], and a continuous greedy process that may be of independent interest. As a special case, our algorithm implies an optimal approximation for the submodular welfare problem in the value oracle model [J. Vondrak, Proceedings of the $38$th ACM Symposium on Theory of Computing, 2008, pp. 67-74]. As a second application, we show that the generalized assignment problem (GAP) is also a special case; although the reduction requires $|X|$ to be exponential in the original problem size, we are able to achieve a $(1-1/e-o(1))$-approximation for GAP, simplifying previously known algorithms. Additionally, the reduction enables us to obtain approximation algorithms for variants of GAP with more general constraints.
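The 1/2-approximation greedy baseline the abstract cites (Fisher, Nemhauser, Wolsey) is easy to state in the value-oracle model: repeatedly add the feasible element with the largest marginal gain. The sketch below runs it on a toy coverage function (monotone submodular) under a partition-matroid constraint; the paper's contribution is improving this guarantee to (1 - 1/e) via continuous greedy and pipage rounding, which is not shown here:

```python
# Greedy for max f(S) s.t. S independent in a matroid, value-oracle model.
def greedy(elements, f, independent):
    S = set()
    while True:
        best, best_gain = None, 0.0
        for e in elements - S:
            if independent(S | {e}):                # matroid oracle
                gain = f(S | {e}) - f(S)            # marginal value oracle
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:
            return S
        S.add(best)

# Toy instance: coverage function f(S) = |union of covered items| is
# monotone submodular; partition matroid allows at most one element
# from {1, 2} and at most one from {3, 4}.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
independent = lambda S: len(S & {1, 2}) <= 1 and len(S & {3, 4}) <= 1

S = greedy({1, 2, 3, 4}, f, independent)
print(f(S))  # greedy reaches value 3 here (which is also optimal)
```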

Book
27 Jun 2011
TL;DR: The challenges that remain open, in particular the need for language generation and deeper semantic understanding of language that would be necessary for future advances in the field are discussed.
Abstract: It has now been 50 years since the publication of Luhn’s seminal paper on automatic summarization. During these years the practical need for automatic summarization has become increasingly urgent and numerous papers have been published on the topic. As a result, it has become harder to find a single reference that gives an overview of past efforts or a complete view of summarization tasks and necessary system components. This article attempts to fill this void by providing a comprehensive overview of research in summarization, including the more traditional efforts in sentence extraction as well as the most novel recent approaches for determining important content, for domain and genre specific summarization and for evaluation of summarization. We also discuss the challenges that remain open, in particular the need for language generation and deeper semantic understanding of language that would be necessary for future advances in the field.

Book
17 Nov 2011
TL;DR: This tutorial is to introduce the fundamentals of biometric technology from a pattern recognition and signal processing perspective by discussing some of the prominent techniques used in the field and to convey the recent advances made in this field especially in the context of security, privacy and forensics.
Abstract: Biometric recognition, or simply biometrics, is the science of establishing the identity of a person based on physical or behavioral attributes. It is a rapidly evolving field with applications ranging from securely accessing one's computer to gaining entry into a country. While the deployment of large-scale biometric systems in both commercial and government applications has increased the public awareness of this technology, "Introduction to Biometrics" is the first textbook to introduce the fundamentals of biometrics to undergraduate/graduate students. The three commonly used modalities in the biometrics field, namely, fingerprint, face, and iris, are covered in detail in this book. A few other modalities, such as hand geometry, ear, and gait, are also discussed briefly, along with advanced topics such as multibiometric systems and security of biometric systems. Exercises for each chapter will be available on the book website to help students gain a better understanding of the topics and obtain practical experience in designing computer programs for biometric applications. These can be found at: http://www.csee.wvu.edu/~ross/BiometricsTextBook/. Designed for undergraduate and graduate students in computer science and electrical engineering, "Introduction to Biometrics" is also suitable for researchers and biometric and computer security professionals.

Proceedings ArticleDOI
21 Aug 2011
TL;DR: A novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements, called DSGD, that can be fully distributed and run on web-scale datasets using, e.g., MapReduce.
Abstract: We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm. We first develop a novel "stratified" SGD variant (SSGD) that applies to general loss-minimization problems in which the loss function can be expressed as a weighted sum of "stratum losses." We establish sufficient conditions for convergence of SSGD using results from stochastic approximation theory and regenerative process theory. We then specialize SSGD to obtain a new matrix-factorization algorithm, called DSGD, that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. DSGD can handle a wide variety of matrix factorizations. We describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms.
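The building block that DSGD stratifies and distributes is plain SGD matrix factorization: approximate V ≈ W·Hᵀ by stepping the row and column factors along the gradient of the squared error on one observed cell at a time. The toy sketch below shows only that core update; the paper's stratum scheduling and MapReduce distribution are omitted, and all data are made up:

```python
import random

# Plain (single-machine) SGD matrix factorization on a few observed
# cells (i, j, value); DSGD runs this same update within disjoint strata.
random.seed(0)
rank, lr, epochs = 2, 0.05, 200
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 2.0)]
n_rows = 1 + max(i for i, _, _ in ratings)
n_cols = 1 + max(j for _, j, _ in ratings)

W = [[random.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_rows)]
H = [[random.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_cols)]

def predict(i, j):
    return sum(W[i][k] * H[j][k] for k in range(rank))

for _ in range(epochs):
    random.shuffle(ratings)
    for i, j, v in ratings:
        err = v - predict(i, j)
        for k in range(rank):                      # simultaneous update
            W[i][k], H[j][k] = (W[i][k] + lr * err * H[j][k],
                                H[j][k] + lr * err * W[i][k])

rmse = (sum((v - predict(i, j)) ** 2 for i, j, v in ratings) / len(ratings)) ** 0.5
print(round(rmse, 3))  # small: the observed cells are fit closely
```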

Journal ArticleDOI
TL;DR: A measure on graphs, the minrank, is identified, which exactly characterizes the minimum length of linear and certain types of nonlinear INDEX codes, and for natural classes of side information graphs, including directed acyclic graphs, perfect graphs, odd holes, and odd anti-holes, minrank is the optimal length of arbitrary INDEX codes.
Abstract: Motivated by a problem of transmitting supplemental data over broadcast channels (Birk and Kol, INFOCOM 1998), we study the following coding problem: a sender communicates with n receivers R1,..., Rn. He holds an input x ∈ {0,1}^n and wishes to broadcast a single message so that each receiver Ri can recover the bit xi. Each Ri has prior side information about x, induced by a directed graph G on n nodes; Ri knows the bits of x in the positions {j | (i,j) is an edge of G}. G is known to the sender and to the receivers. We call encoding schemes that achieve this goal INDEX codes for {0,1}^n with side information graph G. In this paper we identify a measure on graphs, the minrank, which exactly characterizes the minimum length of linear and certain types of nonlinear INDEX codes. We show that for natural classes of side information graphs, including directed acyclic graphs, perfect graphs, odd holes, and odd anti-holes, minrank is the optimal length of arbitrary INDEX codes. For arbitrary INDEX codes and arbitrary graphs, we obtain a lower bound in terms of the size of the maximum acyclic induced subgraph. This bound holds even for randomized codes, but has been shown not to be tight.
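The smallest interesting index code illustrates the setup: when every receiver already knows all bits except its own (side-information graph = complete graph, minrank 1), a single broadcast bit, the XOR of all of x, suffices. A minimal sketch of that encoder/decoder pair:

```python
from functools import reduce

# INDEX code for the complete side-information graph: broadcast one bit.
def broadcast(x):
    """Sender's one-bit encoding: XOR of all input bits."""
    return reduce(lambda a, b: a ^ b, x)

def recover(known, b):
    """Receiver XORs the broadcast bit with all bits it already knows."""
    return b ^ reduce(lambda a, c: a ^ c, known, 0)

x = [1, 0, 1, 1]
b = broadcast(x)
decoded = [recover([x[j] for j in range(len(x)) if j != i], b)
           for i in range(len(x))]
print(decoded == x)  # True: n bits delivered with a single broadcast bit
```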

Journal ArticleDOI
TL;DR: It is shown that past reports of traditional cathode electrocatalysis in nonaqueous Li–O2 batteries were indeed true, but that gas evolution related to electrolyte solvent decomposition was the dominant process being catalyzed.
Abstract: Heterogeneous electrocatalysis has become a focal point in rechargeable Li–air battery research to reduce overpotentials in both the oxygen reduction (discharge) and especially oxygen evolution (charge) reactions. In this study, we show that past reports of traditional cathode electrocatalysis in nonaqueous Li–O2 batteries were indeed true, but that gas evolution related to electrolyte solvent decomposition was the dominant process being catalyzed. In dimethoxyethane, where Li2O2 formation is the dominant product of the electrochemistry, no catalytic activity (compared to pure carbon) is observed using the same (Au, Pt, MnO2) nanoparticles. Nevertheless, the onset potential of oxygen evolution is only slightly higher than the open circuit potential of the cell, indicating conventional oxygen evolution electrocatalysis may be unnecessary.

Journal ArticleDOI
Wanli Min1, Laura Wynter1
TL;DR: A method is presented that provides predictions of speed and volume over 5-min intervals for up to 1 h in advance, meeting the requirement that real-time road traffic prediction be both fast and scalable to full urban networks.
Abstract: Real-time road traffic prediction is a fundamental capability needed to make use of advanced, smart transportation technologies. Both from the point of view of network operators as well as from the point of view of travelers wishing real-time route guidance, accurate short-term traffic prediction is a necessary first step. While techniques for short-term traffic prediction have existed for some time, emerging smart transportation technologies require the traffic prediction capability to be both fast and scalable to full urban networks. We present a method that has proven to be able to meet this challenge. The method presented provides predictions of speed and volume over 5-min intervals for up to 1 h in advance.

Proceedings ArticleDOI
17 Oct 2011
TL;DR: In this article, the authors introduce the notion of proof of ownership (PoW) which allows a client to efficiently prove to a server that the client holds a file, rather than just some short information about it.
Abstract: Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership under rigorous security definitions and rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.
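The Merkle-tree machinery behind such proofs-of-ownership can be sketched compactly: the server keeps only the root of a Merkle tree over the file's blocks and challenges the client to present a leaf plus its authentication path. This shows only the challenge/verify step; the paper's encoding preprocessing and security analysis are omitted:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(b) for b in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, idx):
    """Sibling hashes from leaf idx up to the root (the client's proof)."""
    level, path = [h(b) for b in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[idx ^ 1])     # sibling of the current node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    """Server-side check: does the claimed leaf hash up to the stored root?"""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

blocks = [b"block-%d" % i for i in range(4)]
root = merkle_root(blocks)              # all the server must retain
print(verify(root, blocks[2], 2, auth_path(blocks, 2)))  # True
```

The point of the construction: answering a random leaf challenge requires actually holding the file's contents, not merely its short hash signature.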

Journal ArticleDOI
TL;DR: It is shown that there is a "sudden death" in charge transport when film thickness is ~5 to 10 nm, and the theoretical model shows that this occurs when the tunneling current through the film can no longer support the electrochemical current.
Abstract: Non-aqueous Li–air or Li–O2 cells show considerable promise as a very high energy density battery couple. Such cells, however, show sudden death at capacities far below their theoretical capacity and this, among other problems, limits their practicality. In this paper, we show that this sudden death arises from limited charge transport through the growing Li2O2 film to the Li2O2–electrolyte interface, and this limitation defines a critical film thickness, above which it is not possible to support electrochemistry at the Li2O2–electrolyte interface. We report both electrochemical experiments using a reversible internal redox couple and a first principles metal–insulator–metal charge transport model to probe the electrical conductivity through Li2O2 films produced during Li–O2 discharge. Both experiment and theory show a "sudden death" in charge transport when film thickness is ~5 to 10 nm. The theoretical model shows that this occurs when the tunneling current through the film can no longer support the electrochemical current. Thus, engineering charge transport through Li2O2 is a serious challenge if Li–O2 batteries are ever to reach their potential.

Journal ArticleDOI
TL;DR: It is demonstrated that the nanoparticles disrupt microbial walls/membranes selectively and efficiently, thus inhibiting the growth of Gram-positive bacteria, methicillin-resistant Staphylococcus aureus (MRSA) and fungi, without inducing significant haemolysis over a wide range of concentrations.
Abstract: Macromolecular antimicrobial agents such as cationic polymers and peptides have recently been under an increased level of scrutiny because they can combat multi-drug-resistant microbes. Most of these polymers are non-biodegradable and are designed to mimic the facially amphiphilic structure of peptides so that they may form a secondary structure on interaction with negatively charged microbial membranes. The resulting secondary structure can insert into and disintegrate the cell membrane after recruiting additional polymer molecules. Here, we report the first biodegradable and in vivo applicable antimicrobial polymer nanoparticles synthesized by metal-free organocatalytic ring-opening polymerization of functional cyclic carbonate. We demonstrate that the nanoparticles disrupt microbial walls/membranes selectively and efficiently, thus inhibiting the growth of Gram-positive bacteria, methicillin-resistant Staphylococcus aureus (MRSA) and fungi, without inducing significant haemolysis over a wide range of concentrations. These biodegradable nanoparticles, which can be synthesized in large quantities and at low cost, are promising as antimicrobial drugs, and can be used to treat various infectious diseases such as MRSA-associated infections, which are often linked with high mortality.

Book
31 Aug 2011
TL;DR: In this paper, an empirical analysis indicates that the order of entry of a brand into a consumer product category is inversely related to its market share, and the coefficients of the entry, advertising, and positioning variables are significant in a regression analysis on an initial sample of 82 brands across 24 categories.
Abstract: An empirical analysis indicates that the order of entry of a brand into a consumer product category is inversely related to its market share. Market share is modeled as a log linear function of order of entry, time between entries, advertising, and positioning effectiveness. The coefficients of the entry, advertising, and positioning variables are significant in a regression analysis on an initial sample of 82 brands across 24 categories. These findings are confirmed by predictions on 47 not previously analyzed brands in 12 categories. Managerial implications for pioneers and later entrants are identified.
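The abstract's model is log-linear: ln(share) = b0 + b1·ln(order) + ..., with a negative b1 capturing the order-of-entry penalty. The sketch below fits that two-parameter version by ordinary least squares on synthetic data (the paper's 82-brand dataset is not reproduced, and the true coefficients here are fabricated for illustration):

```python
import math, random

# Synthetic data from ln(share) = 2.0 - 0.5*ln(order) + noise, then
# recover the coefficients with closed-form simple OLS.
random.seed(1)
rows = []
for order in range(1, 7):
    for _ in range(10):
        log_share = 2.0 - 0.5 * math.log(order) + random.gauss(0, 0.05)
        rows.append((math.log(order), log_share))

n = len(rows)
sx = sum(x for x, _ in rows); sy = sum(y for _, y in rows)
sxx = sum(x * x for x, _ in rows); sxy = sum(x * y for x, y in rows)
b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope: entry penalty
b0 = (sy - b1 * sx) / n                          # intercept

print(round(b0, 2), round(b1, 2))  # recovers roughly (2.0, -0.5)
```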

Journal ArticleDOI
TL;DR: This paper designs the first constant-factor approximation algorithms for maximizing nonnegative (non-monotone) submodular functions and proves NP-hardness of $(\frac{5}{6}+\epsilon)$-approximation in the symmetric case and NP-hardness of $(\frac{3}{4}+\epsilon)$-approximation in the general case.
Abstract: Submodular maximization generalizes many important problems including Max Cut in directed and undirected graphs and hypergraphs, certain constraint satisfaction problems, and maximum facility location problems. Unlike the problem of minimizing submodular functions, the problem of maximizing submodular functions is NP-hard. In this paper, we design the first constant-factor approximation algorithms for maximizing nonnegative (non-monotone) submodular functions. In particular, we give a deterministic local-search $\frac{1}{3}$-approximation and a randomized $\frac{2}{5}$-approximation algorithm for maximizing nonnegative submodular functions. We also show that a uniformly random set gives a $\frac{1}{4}$-approximation. For symmetric submodular functions, we show that a random set gives a $\frac{1}{2}$-approximation, which can also be achieved by deterministic local search. These algorithms work in the value oracle model, where the submodular function is accessible through a black box returning $f(S)$ for a given set $S$. We show that in this model, a $(\frac{1}{2}+\epsilon)$-approximation for symmetric submodular functions would require an exponential number of queries for any fixed $\epsilon>0$. In the model where $f$ is given explicitly (as a sum of nonnegative submodular functions, each depending only on a constant number of elements), we prove NP-hardness of $(\frac{5}{6}+\epsilon)$-approximation in the symmetric case and NP-hardness of $(\frac{3}{4}+\epsilon)$-approximation in the general case.
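The deterministic local search in the abstract is short enough to sketch: start from the best singleton, add or remove single elements while the value improves, then return the better of S and its complement. This simplifies the paper's (1 + ε/n²) improvement threshold to strict gain, and exercises it on a cut function, one of the nonnegative submodular objectives the abstract names:

```python
# Local search for max f(S), f nonnegative submodular, value-oracle model.
def local_search(X, f):
    S = {max(X, key=lambda e: f({e}))}   # best singleton start
    improved = True
    while improved:
        improved = False
        for e in X:
            T = S - {e} if e in S else S | {e}   # single add/remove move
            if f(T) > f(S):
                S, improved = T, True
                break
    return max(S, X - S, key=f)          # better of S and its complement

# Cut functions are nonnegative submodular (here also symmetric):
# toy 4-cycle graph, f(S) = number of edges crossing the cut.
X = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
f = lambda S: sum((u in S) != (v in S) for u, v in edges)

S = local_search(X, f)
print(f(S))  # 4: local search finds the maximum cut of the 4-cycle
```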

Journal ArticleDOI
TL;DR: The transformation to smarter cities will require innovation in planning, management, and operations, and technical obstacles will center on achieving system interoperability, ensuring security and privacy, accommodating a proliferation of sensors and devices, and adopting a new closed-loop human-computer interaction paradigm.
Abstract: The transformation to smarter cities will require innovation in planning, management, and operations. Several ongoing projects around the world illustrate the opportunities and challenges of this transformation. Cities must get smarter to address an array of emerging urbanization challenges, and as the projects highlighted in this article show, several distinct paths are available. The number of cities worldwide pursuing smarter transformation is growing rapidly. However, these efforts face many political, socioeconomic, and technical hurdles. Changing the status quo is always difficult for city administrators, and smarter city initiatives often require extensive coordination, sponsorship, and support across multiple functional silos. The need to visibly demonstrate a continuous return on investment also presents a challenge. The technical obstacles will center on achieving system interoperability, ensuring security and privacy, accommodating a proliferation of sensors and devices, and adopting a new closed-loop human-computer interaction paradigm.

Journal ArticleDOI
TL;DR: In this paper, the authors present the argument of the authors, who are IBM consultants, that companies need to meld social media programs with customer relationship management (CRM) to facilitate collaborative social experiences and dialogue that customers value.
Abstract: Purpose – The purpose of this paper is to present the argument of the authors, who are IBM consultants, that companies need to meld social media programs with customer relationship management (CRM). This new paradigm – Social CRM – recognizes that instead of just managing customers, the role of the business is to facilitate collaborative social experiences and dialogue that customers value. Design/methodology/approach – Social media holds enormous potential for companies to get closer to customers and, by doing so, increase revenue, reduce costs and improve efficiency. However, using social media as a channel for customer engagement will fail if the traditional CRM approaches are not reinvented. Findings – According to IBM research, there is a large perception gap between what customers seek via social media and what companies offer. Consumers are far more interested in obtaining tangible value, suggesting businesses may be confusing their own desire for customer intimacy with consumers' motivations for enga...

Journal ArticleDOI
TL;DR: This article presents an exploration of cooperative network localization and navigation from a theoretical foundation to applications, covering technologies and spatiotemporal cooperative algorithms.
Abstract: Network localization and navigation give rise to a new paradigm for communications and contextual data collection, enabling a variety of new applications that rely on position information of mobile nodes (agents). The performance of such networks can be significantly improved via the use of cooperation. Therefore, a deep understanding of information exchange and cooperation in the network is crucial for the design of location-aware networks. This article presents an exploration of cooperative network localization and navigation from a theoretical foundation to applications, covering technologies and spatiotemporal cooperative algorithms.

Proceedings ArticleDOI
01 Jan 2011
TL;DR: A novel automatic salient object segmentation algorithm which integrates both bottom-up salient stimuli and object-level shape prior, leading to binary segmentation of the salient object.
Abstract: We propose a novel automatic salient object segmentation algorithm which integrates both bottom-up salient stimuli and an object-level shape prior, i.e., that a salient object has a well-defined closed boundary. Our approach is formalized as an iterative energy minimization framework, leading to binary segmentation of the salient object. The energy minimization is initialized with a saliency map computed through context analysis based on multi-scale superpixels. The object-level shape prior is then extracted by combining saliency with object boundary information. Both the saliency map and the shape prior are updated after each iteration. Experimental results on two public benchmark datasets show that our proposed approach outperforms state-of-the-art methods.
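The binary energy-minimization step can be illustrated with a minimal stand-in: iterated conditional modes (ICM) on a Potts-style energy $E(x)=\sum_i u_i(x_i)+\lambda\sum_{i\sim j}[x_i\neq x_j]$, with the unary term derived from a synthetic "saliency map". The paper's actual energy terms, optimizer, superpixel saliency computation, and shape prior all differ; this sketch only shows the general shape of iterative binary segmentation.

```python
import numpy as np

# Synthetic saliency map: a bright Gaussian blob at the center of a 16x16 image.
H = W = 16
yy, xx = np.mgrid[0:H, 0:W]
saliency = np.exp(-(((yy - 8) ** 2 + (xx - 8) ** 2) / 18.0))
unary_fg = 1.0 - saliency   # cost of labeling a pixel foreground
unary_bg = saliency         # cost of labeling it background
lam = 0.3                   # smoothness weight (assumed)

x = (saliency > 0.5).astype(int)   # initialize from the thresholded saliency map
for _ in range(5):                  # ICM sweeps: greedily minimize the energy per pixel
    for i in range(H):
        for j in range(W):
            nb = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                  if 0 <= a < H and 0 <= b < W]
            e_fg = unary_fg[i, j] + lam * sum(1 for n in nb if n != 1)
            e_bg = unary_bg[i, j] + lam * sum(1 for n in nb if n != 0)
            x[i, j] = 1 if e_fg <= e_bg else 0
```

After the sweeps, `x` is a binary mask whose foreground region hugs the salient blob, with the pairwise term smoothing the boundary.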

Proceedings ArticleDOI
20 Oct 2011
TL;DR: This work fabricated a key building block of a modular neuromorphic architecture, a neurosynaptic core, with 256 digital integrate-and-fire neurons and a 1024×256 bit SRAM crossbar memory for synapses using IBM's 45nm SOI process, leading to ultra-low active power consumption.
Abstract: The grand challenge of neuromorphic computation is to develop a flexible brain-like architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of the human brain—within the constraints of existing silicon and post-silicon technologies. To this end, we fabricated a key building block of a modular neuromorphic architecture, a neurosynaptic core, with 256 digital integrate-and-fire neurons and a 1024×256 bit SRAM crossbar memory for synapses using IBM's 45nm SOI process. Our fully digital implementation is able to leverage favorable CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. In contrast to a conventional von Neumann architecture, our core tightly integrates computation (neurons) alongside memory (synapses), which allows us to implement efficient fan-out (communication) in a naturally parallel and event-driven manner, leading to ultra-low active power consumption of 45pJ/spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states and is thus amenable to a wide range of applications. As an example, we trained a restricted Boltzmann machine offline to perform a visual digit recognition task, and mapped the learned weights to our chip.
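The core's event-driven operation, digital integrate-and-fire neurons driven through a binary synapse crossbar, can be modeled in a few lines of software. The dimensions below are shrunk from the chip's 1024 axons by 256 neurons, and the weight, leak, and threshold values are illustrative assumptions, not the chip's configured parameters.

```python
import numpy as np

# Toy software model of one tick-driven neurosynaptic core.
rng = np.random.default_rng(42)
n_axons, n_neurons = 16, 8
crossbar = rng.random((n_axons, n_neurons)) < 0.3  # binary synapse states (SRAM bits)
weight = 3       # synaptic weight applied per connected input spike (assumed)
leak = 1         # leak subtracted from each membrane potential per tick (assumed)
threshold = 10   # firing threshold (assumed)

v = np.zeros(n_neurons, dtype=int)   # integer membrane potentials
total_out = 0
for tick in range(20):
    spikes_in = rng.random(n_axons) < 0.2            # which axons spike this tick
    # Event-driven fan-out: a spiking axon delivers weight to every neuron
    # whose crossbar bit on that axon's row is set.
    v += weight * (spikes_in.astype(int) @ crossbar.astype(int))
    v = np.maximum(v - leak, 0)                      # leak, clamped at zero
    fired = v >= threshold
    total_out += int(fired.sum())
    v[fired] = 0                                     # reset after spiking
```

Because computation only happens on rows of axons that actually spike, activity (and hence energy) scales with spike traffic rather than with the full crossbar size, which is the property behind the reported 45 pJ/spike active power.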