Proceedings ArticleDOI

Emergency Management using Social Networks

01 Oct 2019, pp. 721-726
TL;DR: An end-to-end framework is proposed that takes public posts from social networking sites, converts them into a structured format that makes the information actionable, and applies influence maximization techniques to increase reach and encourage timely public participation during a crisis.
Abstract: The popularity of social networks makes them highly effective to integrate into the Emergency Management process. Posts on social networking sites can help people by ensuring timely detection of an emergency. During a natural disaster, an information chasm often opens between the affected and the unaffected areas, which further compounds the confusion and chaos. In this paper, we examine the various challenges that arise when integrating social networks with Emergency Management and trace the state-of-the-art techniques across the domains that this Emergency Management system brings together. We propose an end-to-end framework that takes public posts from social networking sites and converts them into a structured format that makes the information actionable. After mining the social media feed, a summarization technique may be applied to the acquired information to condense it into a text message that can be released on various social platforms. To increase the reach of this post and to encourage timely public participation in the crisis, we apply influence maximization techniques and monitor the diffusion of the generated post through a diffusion modelling technique that we propose. We conduct experiments to analyze the performance of this model and of the influence maximization process, conclude with an analysis of the experiments and the observed results, and list improvements that we intend to incorporate in future versions of this work.
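The diffusion modelling technique the paper proposes is not specified in this listing; as a point of reference, a minimal sketch of the standard independent cascade model, a common baseline for monitoring how a post spreads, might look like the following. The toy graph and the uniform edge probability `p` are illustrative assumptions, not details from the paper.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """Simulate one run of the independent cascade diffusion model.

    graph: dict mapping node -> list of follower nodes (hypothetical toy input)
    seeds: initially activated nodes (e.g., accounts that share the alert post)
    p:     uniform per-edge activation probability (an assumed parameter)
    Returns the set of all activated nodes.
    """
    rng = rng or random.Random(42)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for nbr in graph.get(node, []):
                # Each newly active node gets one chance to activate each neighbor.
                if nbr not in active and rng.random() < p:
                    active.add(nbr)
                    next_frontier.append(nbr)
        frontier = next_frontier
    return active

# Toy who-follows-whom network.
toy = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
spread = independent_cascade(toy, seeds={"a"}, p=0.9)
```

Repeating such runs and averaging the spread gives a Monte Carlo estimate of a post's expected reach.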
Citations
Journal ArticleDOI
TL;DR: Survey data collected from households in Jacksonville, Florida affected by 2016's Hurricane Matthew identifies perceived consistency of information as a key predictor of uncertainty regarding hurricane impact and evacuation logistics, and carries practical implications regarding the need for information coordination to improve evacuation decision-making.
Abstract: Understanding how information use contributes to uncertainties surrounding evacuation decisions is crucial during disasters. While literature increasingly establishes that people consult m...

7 citations


Cites background from "Emergency Management using Social Networks"

  • ...…important to consider the impact of dynamic information environments such as augmented reality tools which assist with realistic visualization of spatial data and social networking sites which allows user-generated updates (Hiltz & Plotnick, 2013; Sharma & Kumar, 2019) on perceived uncertainties....


References
Book
01 Nov 2013
TL;DR: This book gives a detailed description of well-established diffusion models, including the independent cascade model and the linear threshold model, that have been successful at explaining propagation phenomena, along with numerous extensions to them introducing aspects such as competition, budget, and time-criticality, among many others.
Abstract: Research on social networks has exploded over the last decade. To a large extent, this has been fueled by the spectacular growth of social media and online social networking sites, which continue growing at a very fast pace, as well as by the increasing availability of very large social network datasets for purposes of research. A rich body of this research has been devoted to the analysis of the propagation of information, influence, innovations, infections, practices and customs through networks. Can we build models to explain the way these propagations occur? How can we validate our models against any available real datasets consisting of a social network and propagation traces that occurred in the past? These are just some questions studied by researchers in this area. Information propagation models find applications in viral marketing, outbreak detection, finding key blog posts to read in order to catch important stories, finding leaders or trendsetters, information feed ranking, etc. A number of algorithmic problems arising in these applications have been abstracted and studied extensively by researchers under the garb of influence maximization. This book starts with a detailed description of well-established diffusion models, including the independent cascade model and the linear threshold model, that have been successful at explaining propagation phenomena. We describe their properties as well as numerous extensions to them, introducing aspects such as competition, budget, and time-criticality, among many others. We delve deep into the key problem of influence maximization, which selects key individuals to activate in order to influence a large fraction of a network. 
Influence maximization in classic diffusion models including both the independent cascade and the linear threshold models is computationally intractable, more precisely #P-hard, and we describe several approximation algorithms and scalable heuristics that have been proposed in the literature. Finally, we also deal with key issues that need to be tackled in order to turn this research into practice, such as learning the strength with which individuals in a network influence each other, as well as the practical aspects of this research including the availability of datasets and software tools for facilitating research. We conclude with a discussion of various research problems that remain open, both from a technical perspective and from the viewpoint of transferring the results of research into industry strength applications. Table of Contents: Acknowledgments / Introduction / Stochastic Diffusion Models / Influence Maximization / Extensions to Diffusion Modeling and Influence Maximization / Learning Propagation Models / Data and Software for Information/Influence: Propagation Research / Conclusion and Challenges / Bibliography / Authors' Biographies / Index

358 citations


"Emergency Management using Social Networks" refers to this work for background

  • ...[3] discuss the influence maximization algorithms and how they translate to real-world scenarios and various models for information diffusion on social networks....


Journal ArticleDOI
TL;DR: This work develops an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate.
Abstract: Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or publish the information, observing individual transmissions (who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks. We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one-year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.
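The inference method above is far more sophisticated than can be shown here; as a toy illustration of the underlying idea of explaining observed infection times by attributing each infection to a likely parent, one might use a crude heuristic like the following. The time-window rule and the vote counting are assumptions for illustration only, not the paper's algorithm.

```python
from collections import Counter

def infer_edges(cascades, window=10.0):
    """Toy network inference from infection times: for each infection,
    attribute it to the most recently infected earlier node within a
    time window, and count how often each candidate edge is used."""
    votes = Counter()
    for times in cascades:  # each cascade: dict node -> infection time
        ordered = sorted(times.items(), key=lambda kv: kv[1])
        for i, (node, t) in enumerate(ordered):
            # Candidate parents: nodes infected earlier, within the window.
            parents = [(u, s) for u, s in ordered[:i] if t - s <= window]
            if parents:
                parent = max(parents, key=lambda us: us[1])  # latest earlier infection
                votes[(parent[0], node)] += 1
    return votes

# Two hypothetical cascades with node -> infection-time observations.
cascades = [
    {"a": 0.0, "b": 1.0, "c": 2.0},
    {"a": 0.0, "b": 1.5, "c": 2.5},
]
edges = infer_edges(cascades)
```

Edges that explain many cascades accumulate votes, mirroring at a very small scale how repeated cascades pin down the hidden network.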

337 citations

01 Jan 2001
TL;DR: In this paper, the authors show how a certain type of simulation based on complex-systems studies (in this case stochastic cellular automata) may be used to generalize diffusion theory, one of the fundamental theories of new product marketing.
Abstract: Aggregate-level simulation procedures have been used in many areas of marketing. In this paper we show how individual-level simulations may be used to support marketing theory development. More specifically, we show how a certain type of simulation based on complex-systems studies (in this case stochastic cellular automata) may be used to generalize diffusion theory, one of the fundamental theories of new product marketing. Cellular automata models are simulations of global consequences, based on local interactions between individual members of a population, that are widely used in complex-systems analysis across disciplines. In this study we demonstrate how the cellular automata approach can help untangle complex marketing research problems. Specifically, we address two major issues facing current theory of innovation diffusion: the first is a general lack of data at the individual level, while the second is the resultant inability of marketing researchers to empirically validate the main assumptions used in the aggregate models of innovation diffusion. Using a computer-based cellular automata diffusion simulation, we demonstrate how such problems can be overcome. More specifically, we show that relaxing the commonly used assumption of homogeneity in the consumers' communication behavior is not a barrier to aggregate modeling. Thus we show that, notwithstanding some exceptions, the well-known Bass model performs well on aggregate data when the assumption that all adopters have an equal effect on all other potential adopters is relaxed. Through cellular automata we are better able to understand how individual-level assumptions influence aggregate-level parameter values, and learn the strengths and limitations of the aggregate-level analysis. We believe that this study can serve as a demonstration towards a much wider use of cellular automata models for complex marketing research phenomena.
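A minimal sketch of such a stochastic cellular automaton, reusing the Bass model's innovation coefficient p and imitation coefficient q on a grid with von Neumann neighborhoods, might look like this. The grid size, parameter values, and exact update rule are illustrative assumptions rather than the paper's simulation.

```python
import random

def bass_ca_step(grid, p=0.02, q=0.4, rng=None):
    """One synchronous update of a Bass-style cellular automaton: each
    non-adopter adopts with probability p (innovation) plus q times the
    fraction of adopted von Neumann neighbors (imitation)."""
    rng = rng or random
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                continue  # adoption is permanent
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            nbrs = [(a, b) for a, b in nbrs if 0 <= a < n and 0 <= b < n]
            frac = sum(grid[a][b] for a, b in nbrs) / len(nbrs)
            if rng.random() < p + q * frac:
                new[i][j] = 1
    return new

rng = random.Random(7)
grid = [[0] * 10 for _ in range(10)]
grid[5][5] = 1  # a single early adopter
for _ in range(20):
    grid = bass_ca_step(grid, rng=rng)
adopters = sum(map(sum, grid))
```

Aggregating `adopters` over time yields an adoption curve that can be compared against the aggregate Bass model, which is the kind of individual-to-aggregate comparison the study performs.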

295 citations

Proceedings ArticleDOI
01 Oct 2011
TL;DR: This paper compares algorithms for extractive summarization of microblog posts, presenting two algorithms that produce summaries by selecting several posts from a given set.
Abstract: Due to the sheer volume of text generated by a microblog site like Twitter, it is often difficult to fully understand what is being said about various topics. In an attempt to understand microblogs better, this paper compares algorithms for extractive summarization of microblog posts. We present two algorithms that produce summaries by selecting several posts from a given set. We evaluate the generated summaries by comparing them to both manually produced summaries and summaries produced by several leading traditional summarization systems. In order to shed light on the special nature of Twitter posts, we include extensive analysis of our results, some of which are unexpected.
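The specific algorithms compared in the paper are not reproduced in this listing; a simple frequency-based extractive baseline in their spirit, scoring each post by the average corpus frequency of its words and keeping the top k, can be sketched as follows. The scoring rule and the example posts are assumptions for illustration.

```python
from collections import Counter
import re

def summarize_posts(posts, k=2):
    """Extractive summarization baseline: select the k posts whose words
    have the highest average frequency across the whole collection."""
    tokenized = [re.findall(r"[a-z']+", p.lower()) for p in posts]
    freq = Counter(w for toks in tokenized for w in toks)

    def score(toks):
        return sum(freq[w] for w in toks) / max(len(toks), 1)

    ranked = sorted(range(len(posts)), key=lambda i: score(tokenized[i]),
                    reverse=True)
    return [posts[i] for i in sorted(ranked[:k])]  # keep original post order

posts = [
    "flood waters rising near the river",
    "river flood alert issued for downtown",
    "my cat is adorable",
]
summary = summarize_posts(posts, k=2)
```

Posts that share vocabulary with the rest of the collection score highest, so off-topic posts tend to be filtered out of the summary.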

174 citations

Proceedings ArticleDOI
20 Aug 2010
TL;DR: The goal is to produce summaries similar to what a human would produce for the same collection of posts on a specific topic; the summaries produced by the summarizing algorithms are evaluated against human-produced summaries, with excellent results.
Abstract: This paper presents algorithms for summarizing microblog posts. In particular, our algorithms process collections of short posts on specific topics on the well-known site called Twitter and create short summaries from these collections of posts on a specific topic. The goal is to produce summaries that are similar to what a human would produce for the same collection of posts on a specific topic. We evaluate the summaries produced by the summarizing algorithms, compare them with human-produced summaries, and obtain excellent results.

I. INTRODUCTION
Twitter, the microblogging site started in 2006, has become a social phenomenon, with more than 20 million visitors each month. While the majority of posts are conversational or not very meaningful, about 3.6% of the posts concern topics of mainstream news. At the end of 2009, Twitter had 75 million account holders, of which about 20% are active. There are approximately 2.5 million Twitter posts per day. To help people who read Twitter posts or tweets, Twitter provides a short list of popular topics called

148 citations