
Showing papers in "Communications of The ACM in 2013"


Journal ArticleDOI
TL;DR: This work takes an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler per-pixel classification problem, and generates confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes.
Abstract: We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.
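A minimal sketch of the pipeline described above, using a generic random forest and mean shift as stand-ins for the paper's randomized decision forests over depth-difference features and its weighted mode finding; the toy data, feature dimensions, and thresholds are invented for illustration.

# Sketch: per-pixel body-part classification followed by 3D mode finding.
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
K = 4                                     # number of body parts (toy value)

# Toy training set: per-pixel feature vectors (stand-ins for depth-difference
# features) with body-part labels 0..K-1.
X_train = rng.normal(size=(2000, 8))
y_train = rng.integers(0, K, size=2000)
forest = RandomForestClassifier(n_estimators=10).fit(X_train, y_train)

# Test "image": per-pixel features plus each pixel's back-projected 3D position.
X_test = rng.normal(size=(500, 8))
points_3d = rng.normal(size=(500, 3))

labels = forest.predict(X_test)
confidence = forest.predict_proba(X_test).max(axis=1)

# For each body part, local modes of its confidently labeled 3D points serve
# as joint proposals (the paper uses a weighted mean-shift variant).
for part in range(K):
    mask = (labels == part) & (confidence > 0.3)
    if mask.sum() < 5:
        continue
    modes = MeanShift(bandwidth=1.0).fit(points_3d[mask]).cluster_centers_
    print(f"part {part}: {len(modes)} joint proposal(s)")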

3,034 citations


Journal ArticleDOI
Jeffrey Dean1, Luiz Andre Barroso1
TL;DR: Software techniques that tolerate latency variability are vital to building responsive large-scale Web services.
Abstract: Software techniques that tolerate latency variability are vital to building responsive large-scale Web services.
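One concrete latency-tolerance technique discussed in this article is the hedged request: if the first replica has not answered within a small delay, send the same request to a second replica and use whichever reply arrives first. A minimal sketch, with invented replica names and simulated latencies:

# Sketch of a hedged request over two replicas (names and delays are made up).
import concurrent.futures as cf
import random
import time

def query_replica(name: str, request: str) -> str:
    time.sleep(random.uniform(0.01, 0.2))    # simulated, variable service time
    return f"{name} answered {request!r}"

def hedged_request(request: str, replicas, hedge_after: float = 0.05) -> str:
    with cf.ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(query_replica, replicas[0], request)]
        done, _ = cf.wait(futures, timeout=hedge_after)
        if not done:                          # primary is slow: issue the hedge
            futures.append(pool.submit(query_replica, replicas[1], request))
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()      # first reply wins

print(hedged_request("GET /item/42", ["replica-a", "replica-b"]))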

1,613 citations


Journal ArticleDOI
TL;DR: The main applications and challenges of one of the hottest research areas in computer science are revealed.
Abstract: The main applications and challenges of one of the hottest research areas in computer science.

1,229 citations


Journal ArticleDOI
TL;DR: The challenges---and great promise---of modern symbolic execution techniques, and the tools to help implement them.
Abstract: The challenges---and great promise---of modern symbolic execution techniques, and the tools to help implement them.

730 citations


Journal ArticleDOI
TL;DR: Novel architecture allows programmers to quickly reconfigure network resource usage as well as provide real-time information about how the network is being used.
Abstract: Novel architecture allows programmers to quickly reconfigure network resource usage.

601 citations


Journal ArticleDOI
Vasant Dhar1
TL;DR: Big data promises automated actionable knowledge creation and predictive models for use by both humans and computers.
Abstract: Big data promises automated actionable knowledge creation and predictive models for use by both humans and computers.

565 citations


Journal ArticleDOI
TL;DR: Google ads, black names and white names, racial discrimination, and click advertising.
Abstract: A Google search for a person's name, such as “Trevon Jones”, may yield a personalized ad for public records about Trevon that may be neutral, such as “Looking for Trevon Jones? …”, or may be suggestive of an arrest record, such as “Trevon Jones, Arrested?...”. This writing investigates the delivery of these kinds of ads by Google AdSense using a sample of racially associated names and finds statistically significant discrimination in ad delivery based on searches of 2184 racially associated personal names across two websites. First names, previously identified by others as being assigned at birth to more black or white babies, are found predictive of race (88% black, 96% white), and those assigned primarily to black babies, such as DeShawn, Darnell and Jermaine, generated ads suggestive of an arrest in 81 to 86 percent of name searches on one website and 92 to 95 percent on the other, while those assigned at birth primarily to whites, such as Geoffrey, Jill and Emma, generated more neutral copy: the word "arrest" appeared in 23 to 29 percent of name searches on one site and 0 to 60 percent on the other. On the more ad trafficked website, a black-identifying name was 25% more likely to get an ad suggestive of an arrest record. A few names did not follow these patterns: Dustin, a name predominantly given to white babies, generated an ad suggestive of arrest 81 and 100 percent of the time. All ads return results for actual individuals and ads appear regardless of whether the name has an arrest record in the company’s database. Notwithstanding these findings, the company maintains Google received the same ad text for groups of last names (not first names), raising questions as to whether Google's advertising technology exposes racial bias in society and how ad and search technology can develop to assure racial fairness.
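The study's headline comparison (an arrest-suggestive ad being roughly 25% more likely for a black-identifying name) is a ratio of proportions tested for statistical significance; the sketch below shows that computation on invented counts, not the study's data.

# Hypothetical illustration: relative likelihood of an arrest-suggestive ad
# for black-identifying vs. white-identifying name searches, plus a
# chi-square test of independence. The counts are placeholders.
from scipy.stats import chi2_contingency

#              arrest-suggestive ad, neutral ad
counts = [
    [600, 400],   # black-identifying name searches (hypothetical)
    [480, 520],   # white-identifying name searches (hypothetical)
]

p_black = counts[0][0] / sum(counts[0])
p_white = counts[1][0] / sum(counts[1])
print(f"relative risk: {p_black / p_white:.2f}x")   # 1.25x ~ "25% more likely"

chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"chi-square = {chi2:.1f}, p = {p_value:.2g}")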

447 citations


Journal ArticleDOI
TL;DR: Supplementing the classroom experience with small private online courses.
Abstract: Supplementing the classroom experience with small private online courses.

320 citations


Journal ArticleDOI
TL;DR: Anonymous location data from cellular phone networks sheds light on how people move around on a large scale.
Abstract: Anonymous location data from cellular phone networks sheds light on how people move around on a large scale.

313 citations


Journal ArticleDOI
TL;DR: How can applications be built on eventually consistent infrastructure given no guarantee of safety?
Abstract: How can applications be built on eventually consistent infrastructure given no guarantee of safety?

202 citations


Journal ArticleDOI
TL;DR: New possibilities in online education create new challenges and inspire new ideas in teaching and learning.
Abstract: New possibilities in online education create new challenges.

Journal ArticleDOI
TL;DR: It is explained what it means for one graph to be a spectral approximation of another, and the development of algorithms for spectral sparsification is reviewed, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.
Abstract: Graph sparsification is the approximation of an arbitrary graph by a sparse graph. We explain what it means for one graph to be a spectral approximation of another and review the development of algorithms for spectral sparsification. In addition to being an interesting concept, spectral sparsification has been an important tool in the design of nearly linear-time algorithms for solving systems of linear equations in symmetric, diagonally dominant matrices. The fast solution of these linear systems has already led to breakthrough results in combinatorial optimization, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.
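As one concrete instance of the ideas reviewed, the sketch below samples edges with probability proportional to effective resistance and reweights the survivors; the toy graph, oversampling budget, and probability formula are illustrative simplifications rather than the article's exact algorithm.

# Sketch: spectral sparsification by effective-resistance sampling (toy scale).
import numpy as np

rng = np.random.default_rng(1)
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.3]

def laplacian(edge_weights):
    L = np.zeros((n, n))
    for (i, j), w in edge_weights.items():
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

weights = {e: 1.0 for e in edges}
L = laplacian(weights)
Lpinv = np.linalg.pinv(L)                 # pseudoinverse of the graph Laplacian

budget = 2 * n                            # rough target number of sampled edges
sparse = {}
for (i, j), w in weights.items():
    r_eff = Lpinv[i, i] + Lpinv[j, j] - 2 * Lpinv[i, j]   # effective resistance
    p = min(1.0, budget * w * r_eff / (n - 1))            # sampling probability
    if rng.random() < p:
        sparse[(i, j)] = w / p                            # reweight kept edge

# The quadratic forms x^T L x of the original and sparsified graphs should agree
# approximately, which is what "spectral approximation" asks for.
x = rng.normal(size=n)
print(len(sparse), "of", len(edges), "edges kept;",
      float(x @ L @ x), "vs", float(x @ laplacian(sparse) @ x))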

Journal ArticleDOI
TL;DR: This research presents a biologically inspired approach to the design of autonomous, adaptive machines that combines machine learning, artificial intelligence and reinforcement learning.

Journal ArticleDOI
TL;DR: How pair programming, peer instruction, and media computation have improved computer science education.
Abstract: How pair programming, peer instruction, and media computation have improved computer science education.

Journal ArticleDOI
TL;DR: How to fairly allocate divisible resources, and why computer scientists should take notice.
Abstract: How to fairly allocate divisible resources, and why computer scientists should take notice.

Journal ArticleDOI
TL;DR: Results show that core count scaling provides much less performance gain than conventional wisdom suggests; the resulting "dark silicon" may prevent both scaling to higher core counts and, ultimately, the economic viability of continued silicon scaling.
Abstract: Starting in 2004, the microprocessor industry has shifted to multicore scaling---increasing the number of cores per die each generation---as its principal strategy for continuing performance growth. Many in the research community believe that this exponential core scaling will continue into the hundreds or thousands of cores per chip, auguring a parallelism revolution in hardware or software. However, while transistor count increases continue at traditional Moore's Law rates, the per-transistor speed and energy efficiency improvements have slowed dramatically. Under these conditions, more cores are only possible if the cores are slower, simpler, or less utilized with each additional technology generation. This paper brings together transistor technology, processor core, and application models to understand whether multicore scaling can sustain the historical exponential performance growth in this energy-limited era. As the number of cores increases, power constraints may prevent powering of all cores at their full speed, requiring a fraction of the cores to be powered off at all times. According to our models, the fraction of these chips that is "dark" may be as much as 50% within three process generations. The low utility of this "dark silicon" may prevent both scaling to higher core counts and ultimately the economic viability of continued silicon scaling. Our results show that core count scaling provides much less performance gain than conventional wisdom suggests. Under (highly) optimistic scaling assumptions---for parallel workloads---multicore scaling provides a total performance gain of 7.9× (23% per year) over ten years. Under more conservative (realistic) assumptions, multicore scaling provides a total performance gain of 3.7× (14% per year) over ten years, and obviously less when sufficiently parallel workloads are unavailable. Without a breakthrough in process technology or microarchitecture, other directions are needed to continue the historical rate of performance improvement.
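The two headline figures are just compound annual rates carried over a decade; a quick check of the arithmetic:

# 23% per year compounds to about 7.9x over ten years; 14% per year to about 3.7x.
for label, annual in [("optimistic", 0.23), ("conservative", 0.14)]:
    print(f"{label}: (1 + {annual})**10 = {(1 + annual) ** 10:.1f}x")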

Journal ArticleDOI
TL;DR: A framework for evaluating security risks associated with technologies used at home and a guide to selecting suitable technologies for use in the home.
Abstract: A framework for evaluating security risks associated with technologies used at home.

Journal ArticleDOI
Douglas B. Terry1
TL;DR: A broader class of consistency guarantees can, and perhaps should, be offered to clients that read shared data.
Abstract: A broader class of consistency guarantees can, and perhaps should, be offered to clients that read shared data.

Journal ArticleDOI
TL;DR: The programmability of FPGAs must improve if they are to be part of mainstream computing.
Abstract: When looking at how hardware influences computing performance, we have GPPs (general-purpose processors) on one end of the spectrum and ASICs (application-specific integrated circuits) on the other...

Journal ArticleDOI
TL;DR: Exploring autonomous systems and the agents that control them.
Abstract: In this article we consider the question: How should autonomous systems be analyzed? In particular, we describe how the confluence of developments in two areas---autonomous systems architectures and formal verification for rational agents---can provide the basis for the formal verification of autonomous systems behaviors. We discuss an approach to this question that involves: 1. Modeling the behavior and describing the interface (input/output) to an agent in charge of making decisions within the system; 2. Model checking the agent within an unrestricted environment representing the "real world" and those parts of the system external to the agent, in order to establish some property, φ; 3. Utilizing theorems or analysis of the environment, in the form of logical statements (where necessary), to derive properties of the larger system; and 4. If the agent is refined, modify (1), but if environmental properties are clarified, modify (3). Autonomous systems are now being deployed in safety, mission, or business critical scenarios, which means a thorough analysis of the choices the core software might make becomes crucial. But should the analysis and verification of autonomous software be treated any differently than traditional software used in critical situations? Or is there something new going on here? Autonomous systems are systems that decide for themselves what to do and when to do it. Such systems might seem futuristic, but they are closer than we might think. Modern household, business, and industrial systems increasingly incorporate autonomy. There are many examples, all varying in the degree of autonomy used, from almost pure human control to fully autonomous activities with minimal human interaction. Application areas are broad, ranging from healthcare monitoring to autonomous vehicles. But what are the reasons for this increase in autonomy? Typically, autonomy is used in systems that: 1. must be deployed in remote environments where direct human control is infeasible; 2. must be deployed in hostile environments where it is dangerous for humans to be nearby, and so difficult for humans to assess the possibilities; 3. involve activity that is too lengthy...
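A toy rendering of step (2) of the approach sketched above: exhaustively explore every environment behavior up to a bounded length and check that a safety property holds for the agent's decisions. The agent policy, environment alphabet, and property below are invented for illustration; the article itself relies on model checkers for rational agents rather than this explicit loop.

# Sketch: bounded exhaustive check of an agent against an unrestricted environment.
from itertools import product

ENV_INPUTS = ["clear", "obstacle", "sensor_fault"]   # unrestricted environment

def agent_decide(state: str, observation: str) -> str:
    # Hypothetical agent policy: stop whenever the situation is unclear.
    if observation in ("obstacle", "sensor_fault"):
        return "stopped"
    return "moving"

def property_holds(state: str, observation: str) -> bool:
    # Safety property (phi): never keep moving when an obstacle is observed.
    return not (observation == "obstacle" and state == "moving")

def check(depth: int = 3) -> bool:
    for trace in product(ENV_INPUTS, repeat=depth):  # all environment behaviors
        state = "stopped"
        for obs in trace:
            state = agent_decide(state, obs)
            if not property_holds(state, obs):
                print("counterexample trace:", trace)
                return False
    return True

print("property verified:", check())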

Journal ArticleDOI
TL;DR: Extending the data trust perimeter from the enterprise to the public cloud requires more than encryption.
Abstract: Extending the data trust perimeter from the enterprise to the public cloud requires more than encryption.

Journal ArticleDOI
TL;DR: Merging the art and science of software development with the aim of inspiring the next generation of software developers.
Abstract: Software life-cycle management was, for a very long time, a controlled exercise. The duration of product design, development, and support was predictable enough that companies and their employees s...

Journal ArticleDOI
TL;DR: How to address the lack of transparency, trust, and acceptance in cloud services.
Abstract: Cloud computing is an evolving paradigm that affects a large part of the IT industry, in particular the way hardware and software are deployed: as a service. Cloud computing provides new opportunities for IT service providers, such as the adoption of new business models and the realization of economies of scale by increasing efficiency of resource utilization. Adopters are supposed to benefit from advantages like up-to-date IT resources with a high degree of flexibility and low upfront capital investments. However, despite the advantages of cloud computing, small and medium enterprises (SMEs) in particular remain cautious about implementing cloud service solutions. This holds true for both IT service providers and IT service users. The main reasons for the reluctance of companies to adopt cloud computing include:
- Due to the prevailing information asymmetry on the market, companies have difficulties comprehensively assessing the individual benefits and challenges associated with the adoption of cloud services. Furthermore, the information asymmetry impedes providers from aligning their services with the needs of potential customers.
- Companies lack appropriate, qualified, trustworthy information and benchmarks to assess cloud services with regard to individual benefits and associated risks.
- Companies lack approaches and metrics to adequately assess and compare the service quality of cloud services, especially regarding security and reliability.
- Industry-specific requirements and restrictions on IT usage and data processing limit the adoption of cloud services in sectors like health care or banking. Many of those requirements and restrictions are outdated and were issued long before broadband Internet connections and mobile devices became ubiquitous.
- Noteworthy uncertainties concerning legal compliance and conformance with international privacy requirements can be observed. Providers are constantly faced with the challenge to design niche-oriented, demand-specific services in a legally compliant manner.
Reflecting these reasons inhibiting cloud computing adoption, the environment surrounding cloud computing is characterized by uncertainty and a lack of transparency. Yet trust is necessary in situations in which the interested party is confronted with un...


Journal ArticleDOI
TL;DR: Quantum computer architecture holds the key to building commercially viable systems.
Abstract: Quantum computer architecture holds the key to building commercially viable systems.

Journal ArticleDOI
TL;DR: Improving academic success and social development by merging computational thinking with cultural practices.
Abstract: Improving academic success and social development by merging computational thinking with cultural practices.

Journal ArticleDOI
TL;DR: Privacy issues can evaporate when embarrassing content does likewise.
Abstract: Privacy issues can evaporate when embarrassing content does likewise.

Journal ArticleDOI
TL;DR: Improve online public discourse by connecting opinions across blogs, editorials, and social media.
Abstract: Improve online public discourse by connecting opinions across blogs, editorials, and social media.

Journal ArticleDOI
TL;DR: "Where's' in a name?" as discussed by the authors asks where "where's" in "a name" in a person's name, and "where" in his/her name.
Abstract: 'Where's' in a name?

Journal ArticleDOI
TL;DR: This approach enables taking into account and inferring indoor clutter without hand-labeling of the clutter in the training set, which is often inaccurate, and outperforms the state-of-the-art method of Hedau et al. that requires clutter labels.
Abstract: We address the problem of understanding an indoor scene from a single image in terms of recovering the room geometry (floor, ceiling, and walls) and furniture layout. A major challenge of this task arises from the fact that most indoor scenes are cluttered by furniture and decorations, whose appearances vary drastically across scenes, thus can hardly be modeled (or even hand-labeled) consistently. In this paper we tackle this problem by introducing latent variables to account for clutter, so that the observed image is jointly explained by the room and clutter layout. Model parameters are learned from a training set of images that are only labeled with the layout of the room geometry. Our approach enables taking into account and inferring indoor clutter without hand-labeling of the clutter in the training set, which is often inaccurate. Yet it outperforms the state-of-the-art method of Hedau et al. that requires clutter labels. As a latent variable based method, our approach has an interesting feature that latent variables are used in direct correspondence with a concrete visual concept (clutter in the room) and thus interpretable.
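A schematic sketch of the learning setup the abstract describes: the room layout is supervised, the clutter is a latent variable, and training alternates between inferring the best latent clutter under the current model and updating the parameters. The score function, feature map, and perceptron-style update below are toy stand-ins for the paper's structured model, not its actual formulation.

# Sketch: latent-variable structured learning with layout labels only.
import numpy as np

rng = np.random.default_rng(0)
LAYOUTS = list(range(5))        # candidate room layouts (toy: 5 discrete options)
CLUTTERS = list(range(8))       # candidate clutter configurations (latent)

def features(x, y, h):
    # Hypothetical joint feature map for image x, layout y, clutter h.
    f = np.zeros(len(LAYOUTS) * len(CLUTTERS))
    f[y * len(CLUTTERS) + h] = x[y, h]
    return f

def score(w, x, y, h):
    return w @ features(x, y, h)

# Toy training data: each "image" is a table of layout/clutter compatibilities,
# labeled only with its true layout (no clutter labels, as in the paper).
data = [(rng.normal(size=(len(LAYOUTS), len(CLUTTERS))), rng.integers(len(LAYOUTS)))
        for _ in range(20)]

w = np.zeros(len(LAYOUTS) * len(CLUTTERS))
for _ in range(10):                                  # alternating updates
    for x, y_true in data:
        h_best = max(CLUTTERS, key=lambda h: score(w, x, y_true, h))  # infer latent clutter
        y_pred, h_pred = max(((y, h) for y in LAYOUTS for h in CLUTTERS),
                             key=lambda yh: score(w, x, *yh))         # current prediction
        if y_pred != y_true:                         # perceptron-style parameter update
            w += features(x, y_true, h_best) - features(x, y_pred, h_pred)

accuracy = np.mean([max(((y, h) for y in LAYOUTS for h in CLUTTERS),
                        key=lambda yh: score(w, x, *yh))[0] == y_true
                    for x, y_true in data])
print(f"training accuracy on the toy data: {accuracy:.2f}")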