
Showing papers in "ACM Queue" in 2013


Journal ArticleDOI
TL;DR: The authors take a scientific dive into online ad delivery to answer questions such as: Do online ads suggestive of arrest records appear more often with searches of black-sounding names than white-sounding names? What is a black-sounding or white-sounding name, anyway? And how many more times would an ad have to appear adversely affecting one racial group for it to be considered discrimination?
Abstract: Do online ads suggestive of arrest records appear more often with searches of black-sounding names than white-sounding names? What is a black-sounding name or white-sounding name, anyway? How many more times would an ad have to appear adversely affecting one racial group for it to be considered discrimination? Is online activity so ubiquitous that computer scientists have to think about societal consequences such as structural racism in technology design? If so, how is this technology to be built? Let’s take a scientific dive into online ad delivery to find answers.

223 citations


Journal ArticleDOI
TL;DR: NUMA (non-uniform memory access) is the phenomenon that memory at various points in the address space of a processor has different performance characteristics; at current processor speeds, the signal path length from the processor to memory plays a significant role.
Abstract: NUMA (non-uniform memory access) is the phenomenon that memory at various points in the address space of a processor has different performance characteristics. At current processor speeds, the signal path length from the processor to memory plays a significant role. Increased signal path length not only increases latency to memory but also quickly becomes a throughput bottleneck if the signal path is shared by multiple processors. The performance differences to memory were first noticeable on large-scale systems where data paths spanned motherboards or chassis. These systems required modified operating-system kernels with NUMA support that explicitly understood the topological properties of the system's memory (such as the chassis in which a region of memory was located) in order to avoid excessively long signal path lengths. (Altix and UV, SGI's large-address-space systems, are examples. The designers of these products had to modify the Linux kernel to support NUMA; in these machines, processors in multiple chassis are linked via a proprietary interconnect called NUMALINK.)
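
The topology argument above is easy to see in code. Below is a minimal sketch (ours, not the article's), assuming the Linux libnuma API (numa.h, link with -lnuma): it pins a buffer to one node so that processors local to that node reach it over a short signal path, while remote processors would pay the longer-path latency the article describes.

    /* Minimal NUMA-aware allocation sketch using Linux libnuma. */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return EXIT_FAILURE;
        }
        /* Place a 64-MB buffer on node 0; threads running on that
           node's processors get local-latency access to it. */
        size_t len = 64 * 1024 * 1024;
        void *buf = numa_alloc_onnode(len, 0);
        if (buf == NULL)
            return EXIT_FAILURE;
        memset(buf, 0, len);  /* touch the pages to commit them */
        numa_free(buf, len);
        return EXIT_SUCCESS;
    }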

161 citations


Journal ArticleDOI
TL;DR: Designing and managing networks has become more innovative with the aid of SDN (software-defined networking), a technology that seems to have appeared suddenly but is actually part of a long history of trying to make computer networks more programmable.
Abstract: Designing and managing networks has become more innovative over the past few years with the aid of SDN (software-defined networking). This technology seems to have appeared suddenly, but it is actually part of a long history of trying to make computer networks more programmable.

84 citations


Journal ArticleDOI
TL;DR: When looking at how hardware influences computing performance, GPPs (general-purpose processors) sit at one end of the spectrum and ASICs (application-specific integrated circuits) at the other.
Abstract: When looking at how hardware influences computing performance, we have GPPs (general-purpose processors) on one end of the spectrum and ASICs (application-specific integrated circuits) on the other. Processors are highly programmable but often inefficient in terms of power and performance. ASICs implement a dedicated and fixed function and provide the best power and performance characteristics, but any functional change requires a complete (and extremely expensive) re-spinning of the circuits.

70 citations


Journal ArticleDOI
TL;DR: How can applications be built on eventually consistent infrastructure, given that it offers no guarantee of safety?
Abstract: In a July 2000 conference keynote, Eric Brewer, now VP of engineering at Google and a professor at the University of California, Berkeley, publicly postulated the CAP (consistency, availability, and partition tolerance) theorem, which would change the landscape of how distributed storage systems were architected. Brewer’s conjecture--based on his experiences building infrastructure for some of the first Internet search engines at Inktomi--states that distributed systems requiring always-on, highly available operation cannot guarantee the illusion of coherent, consistent single-system operation in the presence of network partitions, which cut communication between active servers. Brewer’s conjecture proved prescient: in the following decade, with the continued rise of large-scale Internet services, distributed-system architects frequently dropped "strong" guarantees in favor of weaker models--the most notable being eventual consistency.

53 citations


Journal ArticleDOI
TL;DR: What if all the software layers in a virtual appliance were compiled within the same safe, high-level language framework?
Abstract: Cloud computing has been pioneering the business of renting computing resources in large data centers to multiple (and possibly competing) tenants. The basic enabling technology for the cloud is operating-system virtualization such as Xen or VMware, which allows customers to multiplex VMs (virtual machines) on a shared cluster of physical machines. Each VM presents as a self-contained computer, booting a standard operating-system kernel and running unmodified applications just as if it were executing on a physical machine.

43 citations


Journal ArticleDOI
TL;DR: Failure is inevitable, unpredictable, and nonuniform in probability and frequency; how, then, can you build a reliable service that provides the high level of availability your users can depend on?
Abstract: Failure is inevitable. Disks fail. Software bugs lie dormant waiting for just the right conditions to bite. People make mistakes. Data centers are built on farms of unreliable commodity hardware. If you’re running in a cloud environment, then many of these factors are outside of your control. To compound the problem, failure is not predictable and doesn’t occur with uniform probability and frequency. The lack of a uniform frequency increases uncertainty and risk in the system. In the face of such inevitable and unpredictable failure, how can you build a reliable service that provides the high level of availability your users can depend on?

16 citations


Journal ArticleDOI
TL;DR: The ProvDMS project at Oak Ridge National Laboratory (ORNL), described in this article, aims to solve the problem of multiple versions of data sitting in different locations without provenance metadata, in the context of sensor data.
Abstract: In today's information-driven workplaces, data is constantly being moved around and undergoing transformation. The typical business-as-usual approach is to use e-mail attachments, shared network locations, databases, and more recently, the cloud. More often than not, there are multiple versions of the data sitting in different locations, and users of this data are confounded by the lack of metadata describing its provenance, or in other words, its lineage. The ProvDMS project at the Oak Ridge National Laboratory (ORNL) described in this article aims to solve this issue in the context of sensor data.

9 citations


Journal ArticleDOI
Paul E. McKenney
TL;DR: Developers often take a proactive approach to software design, especially those from cultures valuing industriousness over procrastination, but lazy approaches have proven their value, with examples including reference counting, garbage collection, and lazy evaluation.
Abstract: Developers often take a proactive approach to software design, especially those from cultures valuing industriousness over procrastination. Lazy approaches, however, have proven their value, with examples including reference counting, garbage collection, and lazy evaluation. This structured deferral takes the form of synchronization via procrastination, specifically reference counting, hazard pointers, and RCU (read-copy-update).
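
As a concrete illustration of the simplest of those techniques, here is a minimal reference-counting sketch in C11 (our own sketch, not code from the article): destruction of the object is procrastinated until its last user releases it.

    /* Reference counting with C11 atomics: reclamation is deferred
       until the final obj_put() drops the count to zero. */
    #include <stdatomic.h>
    #include <stdlib.h>

    struct obj {
        atomic_int refcount;
        /* ... payload ... */
    };

    struct obj *obj_new(void) {
        struct obj *o = calloc(1, sizeof *o);
        if (o != NULL)
            atomic_init(&o->refcount, 1);
        return o;
    }

    void obj_get(struct obj *o) {
        atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
    }

    void obj_put(struct obj *o) {
        /* Release ordering publishes our last writes before the free;
           the acquire fence pairs with it in the freeing thread. */
        if (atomic_fetch_sub_explicit(&o->refcount, 1,
                                      memory_order_release) == 1) {
            atomic_thread_fence(memory_order_acquire);
            free(o);
        }
    }

Hazard pointers and RCU defer reclamation further still, trading immediacy for much cheaper read-side synchronization.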

9 citations


Journal ArticleDOI
TL;DR: The authors argue that distributed systems are difficult to understand, design, build, and operate; the barrier to entry is greatly reduced only for applications without meaningful SLAs (service-level agreements) that can tolerate extended downtime and/or performance degradation.
Abstract: Distributed systems are difficult to understand, design, build, and operate. They introduce exponentially more variables into a design than a single machine does, making the root cause of an application problem much harder to discover. It should be said that if an application does not have meaningful SLAs (service-level agreements) and can tolerate extended downtime and/or performance degradation, then the barrier to entry is greatly reduced. Most modern applications, however, have an expectation of resiliency from their users, and SLAs are typically measured by "the number of nines" (e.g., 99.9 or 99.99 percent availability per month). Each additional 9 becomes harder and harder to achieve.
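
To make "the number of nines" concrete (a back-of-the-envelope calculation, not figures from the article): the downtime budget per 30-day month is (1 - A) x 30 x 24 x 60 minutes for availability target A, so

    99.9  percent (three nines): 0.001  x 43,200 minutes = 43.2 minutes/month
    99.99 percent (four nines):  0.0001 x 43,200 minutes =  4.32 minutes/month

Each additional 9 cuts the budget tenfold, which is why each one is harder to achieve than the last.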

8 citations


Journal ArticleDOI
TL;DR: If it wasn’t your priority last year or the year before, it’s sure to be your priority now: bring your Web site or service to mobile devices in 2013 or suffer the consequences.
Abstract: If it wasn’t your priority last year or the year before, it’s sure to be your priority now: bring your Web site or service to mobile devices in 2013 or suffer the consequences. Early adopters have been talking about mobile taking over since 1999 - anticipating the trend by only a decade or so. Today, mobile Web traffic is dramatically on the rise, and creating a slick mobile experience is at the top of everyone’s mind. Total mobile data traffic is expected to exceed 10 exabytes per month by 2017.

Journal ArticleDOI
TL;DR: Software life-cycle management was, for a very long time, a controlled exercise: the duration of product design, development, and support was predictable enough that companies and their employees scheduled their finances, vacations, surgeries, and mergers around product releases.
Abstract: Software life-cycle management was, for a very long time, a controlled exercise. The duration of product design, development, and support was predictable enough that companies and their employees scheduled their finances, vacations, surgeries, and mergers around product releases. When developers were busy, QA (quality assurance) had it easy. As the coding portion of a release cycle came to a close, QA took over while support ramped up. Then when the product released, the development staff exhaled, rested, and started the loop again while the support staff transitioned to busily supporting the new product.

Journal ArticleDOI
TL;DR: The recent exposure of the dragnet-style surveillance of Internet traffic has provoked a number of responses that are variations of the general formula, "More encryption is the solution."
Abstract: The recent exposure of the dragnet-style surveillance of Internet traffic has provoked a number of responses that are variations of the general formula, "More encryption is the solution." This is not the case. In fact, more encryption will probably only make the privacy crisis worse than it already is.

Journal ArticleDOI
TL;DR: The biggest change in Web development over the past few years has been the remarkable rise of mobile computing: phones that were once limited to calls and short text messages are now more powerful than the computers that took Apollo 11 to the moon, able to send data to and from nearly anywhere.
Abstract: The biggest change in Web development over the past few years has been the remarkable rise of mobile computing. Mobile phones used to be extremely limited devices that were best used for making phone calls and sending short text messages. Today’s mobile phones are more powerful than the computers that took Apollo 11 to the moon with the ability to send data to and from nearly anywhere. Combine that with 3G and 4G networks for data transfer, and now using the Internet while on the go is faster than my first Internet connection, which featured AOL and a 14.4-kbps dialup modem.

Journal ArticleDOI
TL;DR: While new and untested, Node.js continues to win converts, though it has also managed to enrage some critics, who have unleashed a barrage of negative blog posts pointing out its perceived shortcomings.
Abstract: Node.js, the server-side JavaScript-based software platform used to build scalable network applications, has been all the rage among many developers for the past couple of years, although its popularity has also managed to enrage some others, who have unleashed a barrage of negative blog posts to point out its perceived shortcomings. Still, while new and untested, Node continues to win more converts.

Journal ArticleDOI
TL;DR: Measuring and monitoring network RTT (round-trip time) is important for multiple reasons: it allows network operators and end users to understand their network performance and helps them optimize their environment, and it helps businesses understand the responsiveness of their services to sections of their user base.
Abstract: Measuring and monitoring network RTT (round-trip time) is important for multiple reasons: it allows network operators and end users to understand their network performance and helps them optimize their environment, and it helps businesses understand the responsiveness of their services to sections of their user base.
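
One simple way to estimate RTT from an end host is to time a TCP three-way handshake, since connect() returns once the SYN/SYN-ACK exchange completes. A minimal sketch (ours, not the article's; the address is a placeholder from the documentation range):

    /* Approximate network RTT by timing a TCP connect(). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(80) };
        inet_pton(AF_INET, "203.0.113.10", &addr.sin_addr); /* placeholder */

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
            close(fd);
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("approximate RTT: %.2f ms\n", ms);
        return 0;
    }

The handshake timing includes kernel scheduling overhead on both ends, so taking repeated samples (and a low percentile) gives a steadier estimate.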

Journal ArticleDOI
TL;DR: A former high-frequency trader recounts leading a group of brilliant engineers and mathematicians who traded in the electronic marketplaces and pushed systems to the edge of their capability.
Abstract: I am a former high-frequency trader. For a few wonderful years I led a group of brilliant engineers and mathematicians, and together we traded in the electronic marketplaces and pushed systems to the edge of their capability.

Journal ArticleDOI
TL;DR: The overwhelming evidence indicates that a Web site's performance (speed) correlates directly to its success, across industries and business metrics, so it is important to monitor how your Web site performs.
Abstract: The overwhelming evidence indicates that a Web site’s performance (speed) correlates directly to its success, across industries and business metrics. With such a clear correlation (and even proven causation), it is important to monitor how your Web site performs. So, how fast is your Web site?

Journal ArticleDOI
TL;DR: This article looks at some realtime sound-synthesis applications and shares the authors’ experiences implementing them on GPUs (graphics processing units).
Abstract: Today’s CPUs are capable of supporting realtime audio for many popular applications, but some compute-intensive audio applications require hardware acceleration. This article looks at some realtime sound-synthesis applications and shares the authors’ experiences implementing them on GPUs (graphics processing units).

Journal ArticleDOI
TL;DR: The public cloud has introduced new technology and architectures that could reshape enterprise computing; in particular, the public cloud is a new design center for enterprise applications, platform software, and services.
Abstract: The public cloud has introduced new technology and architectures that could reshape enterprise computing. In particular, the public cloud is a new design center for enterprise applications, platform software, and services. API-driven orchestration of large-scale, on-demand resources is an important new design attribute, which differentiates public-cloud from conventional enterprise data-center infrastructure. Enterprise applications must adapt to the new public-cloud design center, but at the same time new software and system design patterns can add enterprise attributes and service levels to public-cloud services.

Journal ArticleDOI
TL;DR: What is critical?
Abstract: What is critical? To what degree is critical defined as a matter of principle, and to what degree is it defined operationally? I am distinguishing what we say from what we do.

Journal ArticleDOI
TL;DR: In contrast to memory leaks, where the leaked memory is never released, the memory consumed by a space leak is released, but later than expected.
Abstract: A space leak occurs when a computer program uses more memory than necessary. In contrast to memory leaks, where the leaked memory is never released, the memory consumed by a space leak is released, but later than expected. This article presents example space leaks and how to spot and eliminate them.
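
The distinction is easy to demonstrate. A minimal sketch (our illustration, not one of the article's examples): the buffer below IS eventually freed, so nothing leaks permanently, but it stays live long after its last real use, which is exactly a space leak by the definition above.

    /* A space leak: memory released, but later than expected. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static long checksum(const char *buf, size_t len) {
        long sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

    static void long_running_phase(void) { /* hours of work, no use of buf */ }

    int main(void) {
        size_t len = 256 * 1024 * 1024;
        char *buf = malloc(len);
        if (buf == NULL)
            return 1;
        memset(buf, 1, len);
        long sum = checksum(buf, len);  /* last real use of buf */

        long_running_phase();           /* buf held live: the space leak */

        free(buf);                      /* released, but later than expected */
        printf("%ld\n", sum);
        return 0;
    }

Moving the free(buf) up to just after the checksum call eliminates the leak; spotting the last real use of a value is the general fix.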

Journal ArticleDOI
TL;DR: Web applications can grow in fits and starts: customer numbers can increase rapidly, and usage patterns can vary seasonally. This unpredictability necessitates a scalable application; the authors ask what the best way of achieving scalability is.
Abstract: Web applications can grow in fits and starts. Customer numbers can increase rapidly, and application usage patterns can vary seasonally. This unpredictability necessitates an application that is scalable. What is the best way of achieving scalability?

Journal ArticleDOI
TL;DR: The rise of big data presents both big opportunities and big challenges in domains ranging from enterprises to sciences, including better-informed business decisions, more efficient supply-chain management and resource allocation, and faster turnaround of scientific discoveries.
Abstract: The rise of big data presents both big opportunities and big challenges in domains ranging from enterprises to sciences. The opportunities include better-informed business decisions, more efficient supply-chain management and resource allocation, more effective targeting of products and advertisements, better ways to "organize the world’s information," faster turnaround of scientific discoveries, etc.

Journal ArticleDOI
TL;DR: Interoperability between languages has been a problem since the second programming language was invented; solutions have ranged from language-independent object models such as COM (Component Object Model) and CORBA (Common Object Request Broker Architecture) to VMs (virtual machines) designed to integrate languages, such as the JVM (Java Virtual Machine) and CLR (Common Language Runtime).
Abstract: Interoperability between languages has been a problem since the second programming language was invented. Solutions have ranged from language-independent object models such as COM (Component Object Model) and CORBA (Common Object Request Broker Architecture) to VMs (virtual machines) designed to integrate languages, such as JVM (Java Virtual Machine) and CLR (Common Language Runtime). With software becoming ever more complex and hardware less homogeneous, the likelihood of a single language being the correct tool for an entire program is lower than ever. As modern compilers become more modular, there is potential for a new generation of interesting solutions.

Journal ArticleDOI
TL;DR: The transition from physical exchanges to electronic platforms has been particularly profitable for HFT (high-frequency trading) firms, which invested heavily in the infrastructure of this new environment.
Abstract: HFT (high-frequency trading) has emerged as a powerful force in modern financial markets. Only 20 years ago, most of the trading volume occurred in exchanges such as the New York Stock Exchange, where humans dressed in brightly colored outfits would gesticulate and scream their trading intentions. Nowadays, trading occurs mostly in electronic servers in data centers, where computers communicate their trading intentions through network messages. This transition from physical exchanges to electronic platforms has been particularly profitable for HFT firms, which invested heavily in the infrastructure of this new environment.

Journal ArticleDOI
TL;DR: The use of an IR (intermediate representation) as the compiler's internal representation of the program enables the compiler to be broken up into multiple phases and components, thus benefiting from modularity.
Abstract: Program compilation is a complicated process. A compiler is a software program that translates a high-level source language program into a form ready to execute on a computer. Early in the evolution of compilers, designers introduced IRs (intermediate representations, also commonly called intermediate languages) to manage the complexity of the compilation process. The use of an IR as the compiler’s internal representation of the program enables the compiler to be broken up into multiple phases and components, thus benefiting from modularity.
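
As a hedged illustration of what such an IR buys (our example, not the article's): a front end can lower the C function below into a simple three-address form, and every later phase, optimizer or code generator alike, needs to understand only that form, not C.

    int f(int a, int b, int c) {
        return (a + b) * c;
        /* A hypothetical three-address IR for the expression:
             t1 <- add a, b
             t2 <- mul t1, c
             ret t2
        */
    }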

Journal ArticleDOI
TL;DR: Real-world systems with complicated quality-of-service guarantees may require a delicate balance between throughput and latency to meet operating requirements in a cost-efficient manner.
Abstract: Real-world systems with complicated quality-of-service guarantees may require a delicate balance between throughput and latency to meet operating requirements in a cost-efficient manner. The increasing availability and decreasing cost of commodity multicore and many-core systems make concurrency and parallelism increasingly necessary for meeting demanding performance requirements. Unfortunately, the design and implementation of correct, efficient, and scalable concurrent software is often a daunting task.

Journal ArticleDOI
TL;DR: In this article, the authors discuss the design considerations needed to optimize the back-end systems for mobile clients, and propose a framework to ensure that mobile clients are remotely served both data and application resources reliably and efficiently.
Abstract: Mobile clients have been on the rise and will only continue to grow. This means that if you are serving clients over the Internet, you cannot ignore the customer experience on a mobile device. There are many informative articles on mobile performance, and just as many on general API design, but you’ll find few discussing the design considerations needed to optimize the back-end systems for mobile clients. Whether you have an app, mobile Web site, or both, it is likely that these clients are consuming APIs from your back-end systems. Certainly, optimizing the on-mobile performance of the application is critical, but software engineers can do a lot to ensure that mobile clients are remotely served both data and application resources reliably and efficiently.

Journal ArticleDOI
TL;DR: Nonblocking synchronization can yield astonishing results in terms of scalability and realtime response, but at the expense of verification state space.
Abstract: Nonblocking synchronization can yield astonishing results in terms of scalability and realtime response, but at the expense of verification state space.
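
For readers unfamiliar with the technique, here is a minimal nonblocking sketch in C11 (ours, not the article's code): a lock-free stack push that retries a compare-and-swap instead of taking a lock.

    /* Lock-free stack push using a C11 compare-and-swap loop. */
    #include <stdatomic.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };
    static _Atomic(struct node *) top;

    int push(int value) {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            return -1;
        n->value = value;
        n->next = atomic_load(&top);
        /* If another thread changed top between our load and the CAS,
           the CAS fails, n->next is refreshed to the current top, and
           we retry; no thread ever blocks. */
        while (!atomic_compare_exchange_weak(&top, &n->next, n))
            ;
        return 0;
    }

Even this small loop hints at the verification cost: every interleaving of the competing loads and CASes is a distinct state a checker must explore (and a matching pop must also contend with the ABA problem).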