Author

Danny Wyatt

Other affiliations: University of Washington
Bio: Danny Wyatt is an academic researcher from Google. The author has contributed to research in topics including social networks and activity recognition. The author has an h-index of 11 and has co-authored 20 publications receiving 1,306 citations. Previous affiliations of Danny Wyatt include the University of Washington.

Papers
Journal ArticleDOI
TL;DR: In this article, wearable systems that recognize human activities from body-worn sensors are discussed; such systems can open the door to a world of healthcare applications, such as fitness monitoring, eldercare support, long-term preventive and chronic care, and cognitive assistance.
Abstract: Activity-aware systems have inspired novel user interfaces and new applications in smart environments, surveillance, emergency response, and military missions. Systems that recognize human activities from body-worn sensors can further open the door to a world of healthcare applications, such as fitness monitoring, eldercare support, long-term preventive and chronic care, and cognitive assistance. Wearable systems have the advantage of being with the user continuously. So, for example, a fitness application could use real-time activity information to encourage users to perform opportunistic activities. Furthermore, the general public is more likely to accept such activity recognition systems because they are usually easy to turn off or remove.

634 citations

Proceedings Article
09 Jul 2005
TL;DR: This work presents what is, to the authors' knowledge, the first human activity inferencing system shown to learn from sensed activity data with no human intervention per activity learned, even for labeling, and shows that the segmentation obtained is sufficient to bootstrap learning.
Abstract: A fundamental difficulty in recognizing human activities is obtaining the labeled data needed to learn models of those activities. Given emerging sensor technology, however, it is possible to view activity data as a stream of natural language terms. Activity models are then mappings from such terms to activity names, and may be extracted from text corpora such as the web. We show that models so extracted are sufficient to automatically produce labeled segmentations of activity data with an accuracy of 42% over 26 activities, well above the 3.89% baseline. The segmentation so obtained is sufficient to bootstrap learning, with accuracy of learned models increasing to 52%. To our knowledge, this is the first human activity inferencing system shown to learn from sensed activity data with no human intervention per activity learned, even for labeling.
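To make the idea concrete, here is a minimal sketch (not the paper's implementation) of labeling a segment of sensed object-use data by mapping object terms to activity names; the term-activity scores are hypothetical stand-ins for statistics that would be mined from a text corpus such as the web.

```python
# A minimal sketch of labeling sensed object-use data with activity names
# via term-activity association scores. The scores below are hypothetical
# stand-ins for statistics mined from a text corpus such as the web.

from collections import defaultdict

# Hypothetical mined association weights, illustrative only.
term_given_activity = {
    "make tea":    {"kettle": 0.5, "mug": 0.3, "teabag": 0.2},
    "brush teeth": {"toothbrush": 0.6, "faucet": 0.3, "mug": 0.1},
}

def label_segment(observed_terms):
    """Score each activity by summing its association with the observed
    object terms and return the best-scoring activity name."""
    scores = defaultdict(float)
    for activity, weights in term_given_activity.items():
        for term in observed_terms:
            scores[activity] += weights.get(term, 0.0)
    return max(scores, key=scores.get)

# Example: segments of object-use readings sensed during two activities.
print(label_segment(["mug", "kettle"]))         # -> "make tea"
print(label_segment(["toothbrush", "faucet"]))  # -> "brush teeth"
```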

246 citations

Journal ArticleDOI
TL;DR: New methods for inferring colocation and conversation networks from privacy-sensitive audio are applied in a study of face-to-face interactions among 24 students in a graduate school cohort during an academic year, and the results show that networks derived from colocation and conversation inferences are quite different.
Abstract: New technologies have made it possible to collect information about social networks as they are acted and observed in the wild, instead of as they are reported in retrospective surveys. These technologies offer opportunities to address many new research questions: How can meaningful information about social interaction be extracted from automatically recorded raw data on human behavior? What can we learn about social networks from such fine-grained behavioral data? And how can all of this be done while protecting privacy? With the goal of addressing these questions, this article presents new methods for inferring colocation and conversation networks from privacy-sensitive audio. These methods are applied in a study of face-to-face interactions among 24 students in a graduate school cohort during an academic year. The resulting analysis shows that networks derived from colocation and conversation inferences are quite different. This distinction can inform future research in computational social science, especially work that only measures colocation or employs colocation data as a proxy for conversation networks.
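As a rough illustration of the comparison described above, the sketch below builds colocation and conversation networks from made-up per-dyad event counts and measures how much their edge sets overlap; it is not the paper's inference pipeline, which works from privacy-sensitive audio features.

```python
# A minimal sketch, with made-up event data, of building colocation and
# conversation networks per dyad and comparing them. Not the paper's method.

# Hypothetical per-dyad counts: (person_a, person_b) -> number of events.
colocation_events   = {("A", "B"): 40, ("A", "C"): 35, ("B", "C"): 5}
conversation_events = {("A", "B"): 12, ("B", "C"): 4}

def edge_set(events, min_events=1):
    """Keep a network edge for every dyad with at least min_events events."""
    return {pair for pair, n in events.items() if n >= min_events}

coloc = edge_set(colocation_events)
conv = edge_set(conversation_events)

# Jaccard overlap between the two networks' edge sets: a low value suggests
# colocation is a poor proxy for actual conversation ties.
overlap = len(coloc & conv) / len(coloc | conv)
print(f"edge overlap: {overlap:.2f}")
```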

82 citations

Journal Article
TL;DR: This paper introduces some current approaches in activity recognition that use a variety of sensors to collect data about users' activities, along with the probabilistic models and relational information used to transform the raw sensor data into higher-level descriptions of people's behaviors and interactions.
Abstract: Automated reasoning about human behavior is a central goal of artificial intelligence. In order to engage and intervene in a meaningful way, an intelligent system must be able to understand what humans are doing, their goals and intentions. Furthermore, as social animals, people’s interactions with each other underlie many aspects of their lives: how they learn, how they work, how they play and how they affect the broader community. Understanding people’s interactions and their social networks will play an important role in designing technology and applications that are “socially-aware”. This paper introduces some of the current approaches in activity recognition which use a variety of different sensors to collect data about users’ activities, and probabilistic models and relational information that are used to transform the raw sensor data into higher-level descriptions of people’s behaviors and interactions. The end result of these methods is a richly structured dataset describing people’s daily patterns of activities and their evolving social networks. The potential applications of such datasets include mapping patterns of information-flow within an organization, predicting the spread of disease within a community, monitoring the health and activity-levels of elderly patients as well as healthy adults, and allowing “smart environments” to respond proactively to the needs and intentions of their users.
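The following toy example, with made-up numbers, illustrates the kind of probabilistic modeling referred to above: a tiny hidden Markov model whose Viterbi decoding turns noisy per-timestep activity likelihoods into a smoothed higher-level activity sequence.

```python
# A minimal sketch of probabilistic smoothing of noisy sensor evidence with a
# hand-specified HMM. All numbers are illustrative, not taken from the paper.

import numpy as np

states = ["sitting", "walking"]
trans = np.array([[0.9, 0.1],    # activities tend to persist over time
                  [0.1, 0.9]])
# Hypothetical per-timestep likelihoods P(sensor reading | activity).
emissions = np.array([[0.8, 0.2],
                      [0.6, 0.4],
                      [0.3, 0.7],   # noisy frames get smoothed by the
                      [0.7, 0.3],   # transition model
                      [0.2, 0.8]])

def viterbi(emissions, trans, prior):
    """Most likely state sequence given per-step emission likelihoods."""
    T, S = emissions.shape
    logp = np.log(prior) + np.log(emissions[0])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans) + np.log(emissions[t])[None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi(emissions, trans, prior=np.array([0.5, 0.5])))
```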

77 citations

Proceedings ArticleDOI
27 Aug 2007
TL;DR: Experimental results show that the conversation finding method outperforms earlier approaches and that the speaker segmentation method is a significant improvement over the only other known privacy-sensitive method for speaker segmentation.
Abstract: We present privacy-sensitive methods for (1) automatically finding multi-person conversations in spontaneous, situated speech data and (2) segmenting those conversations into speaker turns. The methods protect privacy through a feature set that is rich enough to capture conversational styles and dynamics, but not sufficient for reconstructing intelligible speech. Experimental results show that the conversation finding method outperforms earlier approaches and that the speaker segmentation method is a significant improvement to the only other known privacy-sensitive method for speaker segmentation.
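As a hedged illustration of the privacy-sensitive flavor of these methods (not the paper's actual algorithm), the sketch below scores whether two people are in the same conversation using only binary per-frame voicing streams, a feature from which intelligible speech cannot be reconstructed; coordinated turn-taking shows up as high mutual information between the streams.

```python
# A minimal sketch, not the paper's algorithm: score conversational
# coordination between two people from binary voicing streams only.

import numpy as np

def mutual_information(a, b):
    """Mutual information (in bits) between two binary sequences."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            pxy = np.mean((a == x) & (b == y))
            px, py = np.mean(a == x), np.mean(b == y)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Hypothetical per-frame voicing: 1 = speaking, 0 = silent.
alice = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0]
bob   = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1]   # takes turns with Alice
carol = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]   # unrelated speech elsewhere

print("alice-bob   MI:", round(mutual_information(alice, bob), 3))    # high
print("alice-carol MI:", round(mutual_information(alice, carol), 3))  # ~0
```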

71 citations


Cited by
01 Jan 2012

3,692 citations

Journal ArticleDOI
TL;DR: This work describes and evaluates a system that uses phone-based accelerometers to perform activity recognition, the task of identifying the physical activity a user is performing; the approach has a wide range of applications, including automatic customization of the mobile device's behavior based upon the user's activity.
Abstract: Mobile devices are becoming increasingly sophisticated, and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10-second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively, just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily/weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise.
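A minimal sketch of the pipeline this abstract describes, using synthetic data and an assumed 20 Hz sampling rate in place of the authors' dataset and exact feature set: raw triaxial accelerometer samples are aggregated into 10-second windows of summary features, which are then used to train a classifier.

```python
# A minimal sketch of windowed accelerometer activity recognition. The
# synthetic data and feature set below are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

RATE = 20            # samples per second (assumed)
WINDOW = 10 * RATE   # 10-second windows

def window_features(samples):
    """Per-axis mean, per-axis standard deviation, and mean magnitude."""
    mag = np.linalg.norm(samples, axis=1)
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0), [mag.mean()]])

def synthetic_window(label, rng):
    """Fake accelerometer window: 'walking' is more variable than 'sitting'."""
    scale = 0.2 if label == "sitting" else 2.0
    return rng.normal([0.0, 0.0, 9.8], scale, size=(WINDOW, 3))

rng = np.random.default_rng(0)
X, y = [], []
for label in ["sitting", "walking"]:
    for _ in range(50):
        X.append(window_features(synthetic_window(label, rng)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([window_features(synthetic_window("walking", rng))]))
```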

2,417 citations

Journal ArticleDOI
TL;DR: This article surveys existing mobile phone sensing algorithms, applications, and systems, discusses the emerging sensing paradigms, and formulates an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research.
Abstract: Mobile phones or smartphones are rapidly becoming the central computer and communication device in people's lives. Application delivery channels such as the Apple AppStore are transforming mobile phones into App Phones, capable of downloading a myriad of applications in an instant. Importantly, today's smartphones are programmable and come with a growing set of cheap powerful embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera, which are enabling the emergence of personal, group, and community-scale sensing applications. We believe that sensor-equipped mobile phones will revolutionize many sectors of our economy, including business, healthcare, social networks, environmental monitoring, and transportation. In this article we survey existing mobile phone sensing algorithms, applications, and systems. We discuss the emerging sensing paradigms, and formulate an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research.

2,316 citations

Journal ArticleDOI
TL;DR: The state of the art in HAR based on wearable sensors is surveyed, and a two-level taxonomy based on the learning approach and the response time is proposed.
Abstract: Providing accurate and opportune information on people's activities and behaviors is one of the most important tasks in pervasive computing. Innumerable applications can be visualized, for instance, in medical, security, entertainment, and tactical scenarios. Despite human activity recognition (HAR) being an active field for more than a decade, there are still key aspects that, if addressed, would constitute a significant turn in the way people interact with mobile devices. This paper surveys the state of the art in HAR based on wearable sensors. A general architecture is first presented along with a description of the main components of any HAR system. We also propose a two-level taxonomy according to the learning approach (either supervised or semi-supervised) and the response time (either offline or online). Then, the principal issues and challenges are discussed, as well as the main solutions to each one of them. Twenty-eight systems are qualitatively evaluated in terms of recognition performance, energy consumption, obtrusiveness, and flexibility, among others. Finally, we present some open problems and ideas that, due to their high relevance, should be addressed in future research.

2,184 citations

Proceedings ArticleDOI
05 Nov 2008
TL;DR: The CenceMe application is presented, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace.
Abstract: We present the design, implementation, evaluation, and user experiences of the CenceMe application, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace. We discuss the system challenges for the development of software on the Nokia N95 mobile phone. We present the design and tradeoffs of split-level classification, whereby personal sensing presence (e.g., walking, in conversation, at the gym) is derived from classifiers which execute in part on the phones and in part on the backend servers to achieve scalable inference. We report performance measurements that characterize the computational requirements of the software and the energy consumption of the CenceMe phone client. We validate the system through a user study where twenty-two people, including undergraduates, graduates and faculty, used CenceMe continuously over a three week period in a campus town. From this user study we learn how the system performs in a production environment and what uses people find for a personal sensing system.
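The sketch below illustrates the split-level idea in rough form: lightweight features are computed on the phone and only those are uploaded, while a backend classifier maps them to a presence label. The feature names, thresholds, and labels are illustrative assumptions, not CenceMe's actual design.

```python
# A minimal sketch of split-level classification: cheap on-device feature
# extraction plus server-side labeling. Thresholds and labels are made up.

import numpy as np

def phone_side_features(accel_window, audio_window):
    """Runs on the device: reduce raw samples to a few scalars to upload."""
    return {
        "accel_std": float(np.std(accel_window)),
        "audio_energy": float(np.mean(np.square(audio_window))),
    }

def backend_classify(features):
    """Runs on the server: map uploaded features to a presence label."""
    moving = features["accel_std"] > 1.0
    talking = features["audio_energy"] > 0.1
    if moving and talking:
        return "walking, in conversation"
    if moving:
        return "walking"
    return "in conversation" if talking else "idle"

rng = np.random.default_rng(1)
features = phone_side_features(rng.normal(0, 2.0, 200), rng.normal(0, 0.5, 200))
print(backend_classify(features))   # only summary features leave the phone
```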

1,184 citations