scispace - formally typeset
Book

Intermediate microeconomics: A modern approach

01 Jan 2006
TL;DR: The Varian approach gives students tools they can use on exams, in the rest of their classes, and in their careers after graduation; the text remains a modern presentation of the subject.
Abstract: This best-selling text is still the most modern presentation of the subject. The Varian approach gives students tools they can use on exams, in the rest of their classes, and in their careers after graduation.
Citations
Book ChapterDOI
21 Sep 2015
TL;DR: This paper presents an economic model of the privacy problem in data-centric business, drawing on contract theory, and analyzes how regulatory and technological instruments could balance the efficiency of markets for personal data against data subjects' right to informational self-determination.
Abstract: Personal data has emerged as a crucial asset of the digital economy. However, unregulated markets for personal data severely threaten consumers' privacy. Based on a commodity-centric notion of privacy, this paper takes a principal-agent perspective on data-centric business. Specifically, it presents an economic model of the privacy problem in data-centric business, drawing on contract theory. Building on a critical analysis of the model, the paper analyzes how regulatory and technological instruments could balance the efficiency of markets for personal data against data subjects' right to informational self-determination.
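The principal-agent framing described above can be sketched in stylized contract-theory form. The notation below is illustrative only (not taken from the paper): the data subject acts as principal, the data-handling firm as agent, with unobservable privacy-protection effort $e$ and a contract $w(\cdot)$ conditioned on an observable outcome $x$.

```latex
% Stylized moral-hazard program (illustrative notation, not the paper's model):
\max_{w(\cdot),\, e} \; \mathbb{E}\big[ B(e) - w(x) \big]
\quad \text{s.t.} \quad
\mathbb{E}\big[ u(w(x)) - c(e) \big] \ge \bar{u} \quad \text{(participation)},
\qquad
e \in \arg\max_{e'} \; \mathbb{E}\big[ u(w(x)) - c(e') \big] \quad \text{(incentive compatibility)}
```

The incentive-compatibility constraint captures why unregulated markets for personal data can fail: when the firm's effort is unobservable, the contract alone may not induce the privacy protection the data subject would pay for.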

6 citations

Posted Content
TL;DR: In their 2006 American Economic Review article, Shamena Anwar and Hanming Fang study racial prejudice in motor vehicle searches by Florida Highway Patrol officers (troopers) and present a model that exploits trooper-on-motorist search and search-success rates to test whether troopers go beyond statistical discrimination to racial prejudice.
Abstract: In their article “An Alternative Test of Racial Prejudice in Motor Vehicle Searches: Theory and Evidence,” published in the American Economic Review in 2006, Shamena Anwar and Hanming Fang study racial prejudice in motor vehicle searches by Florida Highway Patrol officers (“troopers”). Their data include the race and ethnicity of the trooper and of the motorist stopped and possibly searched. A search is deemed successful if the trooper finds contraband in the vehicle. Using data on troopers and motorists of three race-ethnicity groups (white non-Hispanic, black, and white Hispanic, with others being dropped), Anwar and Fang compute nine trooper-on-motorist search rates and nine search-success rates. They present a model that exploits this information to test whether troopers go beyond statistical discrimination to racial prejudice. Irrespective of whether troopers exhibit racial prejudice, the model has a crucial testable implication, an implication that concerns the rank-order of the search and search-success rates. Anwar and Fang report that their data neatly fit this predicted rank-order implication with high statistical significance across the board, strongly supporting the soundness of the model. In turn, the model is applied to address the question of racial prejudice. They do not find evidence of racial prejudice, and neither do I—so the present critique does not arrive at results about prejudice contrary to their results. The present critique starts by reporting on my effort to replicate Anwar and Fang’s preliminary rank-order findings. I am unable to replicate two of their nine reported search-success rates, nor can I replicate the reported statistical significance of four of the six Z-statistics and one of the three χ2 test statistics for the rankings of the search-success rates. My new results imply that the empirical support for the model’s soundness is not what Anwar and Fang claim it to be. 
This problem of irreplicability is my primary point, but I then move on to another matter: My replications draw attention to a neglected statistical caveat in Anwar and Fang’s implementation of the empirical tests of racial prejudice. It turns out that the novel resampling procedure they employ does not provide robust results. I pinpoint the empirical source of this issue and, in an appendix, show how a simple extension to their method improves robustness. In another appendix I put forth an alternative randomization test that seems more appropriate when testing such resampled data.
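The rank-order comparisons discussed above rest on pairwise tests of search and search-success rates between trooper-motorist pairings. A standard two-proportion Z-test of this kind can be sketched as follows; the counts are illustrative, not Anwar and Fang's data:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z-statistic for H0: the two underlying proportions are equal,
    using the pooled-proportion standard error."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative (made-up) search-success counts for two pairings:
# 30 successes in 100 searches vs. 20 successes in 100 searches.
z = two_proportion_z(30, 100, 20, 100)  # approx. 1.633, not significant at 5% (two-sided)
```

The critique's point about the resampling procedure is precisely that such asymptotic Z-statistics, and their resampled counterparts, can be fragile in this setting, which motivates the randomization test proposed in the appendix.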

6 citations

Proceedings Article
10 Jun 2015
TL;DR: The results show that the number of mobile devices used and the portion of work carried out offsite have only a low impact on the perceived value of smartphones and tablets, while the impact on value is high if mobile devices support business processes and if the variety of information used is high.
Abstract: Today, mobile devices like smartphones and tablets are omnipresent in many parts of the world. They are used for private and business activities. The effects of mobile business are increasingly discussed in micro-enterprises as well as in small and medium-sized enterprises (SMEs). The question is: Do these devices have an impact on the productivity, flexibility and business processes of companies? The goal of this paper is to develop an explorative model that helps to identify and explain these effects. The investigation is based on a quantitative empirical study conducted among 900 Swiss SMEs. The model is estimated and evaluated using Partial Least Squares (PLS) structural equation modelling. The results show that the number of mobile devices used and the portion of work carried out offsite have only a low impact on the perceived value of smartphones and tablets. On the other hand, the impact on value is high if mobile devices support business processes and if the variety of information used is high.
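The core of a PLS-SEM path estimate can be illustrated in simplified form. Full PLS-SEM iteratively re-weights the indicators; the sketch below uses equal-weight composite scores and an OLS-style path coefficient as a stand-in, on synthetic data (the construct names and all numbers are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical indicator data: three indicators per construct.
# "process_support" drives "perceived_value" through a latent score.
process_support = rng.normal(size=(n, 3))
latent = process_support.mean(axis=1)
perceived_value = latent[:, None] * 0.8 + rng.normal(scale=0.5, size=(n, 3))

# Equal-weight composite scores (full PLS-SEM would iterate these weights).
x = process_support.mean(axis=1)
y = perceived_value.mean(axis=1)

# Path coefficient on standardized scores (equals the Pearson correlation here).
xs = (x - x.mean()) / x.std()
ys = (y - y.mean()) / y.std()
path = float(xs @ ys / n)
```

A large standardized path coefficient corresponds to the paper's finding that process support strongly drives perceived value, whereas a weak construct would yield a coefficient near zero.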

6 citations


Cites background from "Intermediate microeconomics : A mod..."

  • ...The information needs in mobile work processes depend on the business sector and on the tasks that have to be fulfilled (Varian, 2010)....


Journal ArticleDOI
TL;DR: A dynamic edge partitioning algorithm is proposed that partitions changing graphs in real time, handling dynamics as a distributed stream and improving partition quality by reassigning closely connected edges.
Abstract: Graph partitioning is a mandatory step in large-scale distributed graph processing. When partitioning real-world power-law graphs, edge partitioning algorithms perform better than traditional vertex partitioning algorithms, because they can cut a single vertex into multiple replicas to apportion the computation. Many advanced edge partitioning methods are designed for partitioning a static graph from scratch. However, real-world graph structures change continuously, which degrades partition quality and hurts the performance of graph applications. Some studies address offline repartitioning or batch incremental partitioning, but how to handle dynamics in real time still merits in-depth study. In this article, we discuss the impact of dynamic change on partitioning and find that both insertions and deletions lead to locally suboptimal partitions, which is why partition quality degrades. As a solution, a dynamic edge partitioning algorithm is proposed to handle dynamics in real time. Specifically, we process dynamics as a distributed stream and improve partition quality by reassigning some closely connected edges. Experiments show that the algorithm is robust to initial partition quality, dynamic scale and type, and distributed scale. Compared with the state-of-the-art dynamic partitioner, it reduces vertex-cuts by 29.5 percent. Compared with repartitioning algorithms, it saves 91.0 percent of partitioning time. Applied to a graph task, it reduces the growth in communication cost and in total task time by 41.5 and 71.4 percent, respectively.
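The streaming edge-assignment idea can be illustrated with a minimal greedy vertex-cut partitioner. This is a generic heuristic sketch, not the authors' algorithm: each arriving edge goes to a partition that already holds replicas of its endpoints where possible, breaking ties by load.

```python
from collections import defaultdict

def stream_partition(edges, k):
    """Greedy vertex-cut edge partitioning: place each edge where its
    endpoints already have replicas, preferring the least-loaded partition."""
    loads = [0] * k
    replicas = defaultdict(set)  # vertex -> set of partitions holding a replica
    assignment = {}
    for u, v in edges:
        both = replicas[u] & replicas[v]
        either = replicas[u] | replicas[v]
        candidates = both or either or set(range(k))
        p = min(candidates, key=lambda i: loads[i])
        assignment[(u, v)] = p
        loads[p] += 1
        replicas[u].add(p)
        replicas[v].add(p)
    return assignment, replicas

# Replication factor = average number of partition copies per vertex;
# lower is better, 1.0 means no vertex is cut.
edges = [(0, 1), (1, 2), (2, 0), (3, 4)]
assignment, replicas = stream_partition(edges, 2)
rf = sum(len(s) for s in replicas.values()) / len(replicas)  # 1.0 here
```

A dynamic partitioner in the paper's sense would additionally process deletions and reassign closely connected edges as the graph changes, rather than only appending to a one-pass stream.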

6 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a definition of income that rejects both the BEA and Haig-Simons definitions concerning capital gains, and demonstrate that the rollover treatment implied by this definition ends double taxation, under-taxation, lock-in of capital, excessive incentives to consume capital, and other economic distortions.
Abstract: This paper presents a definition of income that rejects both the BEA and Haig-Simons definitions concerning capital gains. Specifically, capital gains represent future income unless brought to the present by consumption of the gain. We demonstrate that the rollover treatment implied by this definition ends double taxation, under-taxation, lock-in of capital, excessive incentives to consume capital, and other economic distortions. Finally, we detail an administratively simple deferred-gain-account rule for the rollover treatment, which would require taxpayers to track only one additional item of information: the total deferred gain that would be rolled over into the next tax year.

6 citations


Cites methods from "Intermediate microeconomics : A mod..."

  • ...We begin with an analysis of the value of a capital asset, building from the standard approach for calculating the present value (Varian 1987, p. 191)....


  • ...Thomas Wiedmer (2002) found, employing an endogenous growth theory model, that when an economy is characterized by a bubble, the higher the tax rate, the smaller the asset bubble....

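The present-value calculation the paper builds from (the standard textbook approach cited to Varian) is ordinary discounting of future cash flows. A minimal sketch with illustrative numbers:

```python
def present_value(cash_flows, r):
    """Discount a stream of future cash flows at rate r per period:
    PV = sum over t of CF_t / (1 + r)**t, with t starting at 1."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

# An asset paying 100 per year for 3 years, discounted at 5%:
pv = present_value([100, 100, 100], 0.05)  # approx. 272.32
```

Under the paper's rollover treatment, a realized gain that is reinvested rather than consumed stays in the deferred-gain account, so tax applies only when the gain is brought to the present by consumption.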