scispace - formally typeset
Author

Hemank Lamba

Bio: Hemank Lamba is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Social media & Computer science. The author has an h-index of 12, co-authored 49 publications receiving 1134 citations. Previous affiliations of Hemank Lamba include Indraprastha Institute of Information Technology & IBM.


Papers
Proceedings ArticleDOI
13 May 2013
TL;DR: Highlights the role of Twitter during Hurricane Sandy (2012) in spreading fake images of the disaster, and shows that automated techniques can be used to distinguish real images from fake images posted on Twitter.
Abstract: In today's world, online social media plays a vital role during real-world events, especially crisis events. Social media coverage of events has both positive and negative effects: it can be used by authorities for effective disaster management, or by malicious entities to spread rumors and fake news. The aim of this paper is to highlight the role of Twitter during Hurricane Sandy (2012) in spreading fake images of the disaster. We identified 10,350 unique tweets containing fake images that circulated on Twitter during Hurricane Sandy. We performed a characterization analysis to understand the temporal, social reputation, and influence patterns behind the spread of fake images. Eighty-six percent of the tweets spreading fake images were retweets; very few were original tweets. Our results showed that the top thirty of 10,215 users (0.3%) accounted for 90% of the retweets of fake images, and that network links such as Twitter follower relationships contributed little (only 11%) to the spread of these fake-photo URLs. Next, we used classification models to distinguish fake images from real images of Hurricane Sandy. The best results came from a Decision Tree classifier, which achieved 97% accuracy in predicting fake images from real ones. Tweet-based features were very effective at distinguishing fake-image tweets from real ones, while user-based features performed poorly. Our results showed that automated techniques can be used to identify real images from fake images posted on Twitter.

586 citations
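The classification step the abstract describes can be illustrated with a short sketch. The feature names and toy data below are hypothetical placeholders, not the paper's actual features or dataset; only the choice of a Decision Tree classifier over tweet-based features comes from the abstract.

```python
# Sketch of a tweet-feature classifier for fake vs. real image tweets,
# in the spirit of the paper's Decision Tree approach. Features here
# (length, word count, has URL, retweet count, hashtag count) are
# illustrative assumptions, as is the tiny dataset.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Hypothetical tweet-based feature rows; label 1 = fake image, 0 = real.
X = [
    [120, 18, 1, 350, 2],
    [80, 12, 1, 5, 0],
    [140, 22, 1, 900, 3],
    [60, 9, 0, 2, 1],
    [130, 20, 1, 600, 2],
    [70, 11, 0, 1, 0],
]
y = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)

clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy on the toy split
```

In practice, the tweet-based features would be extracted from the 10,350 collected tweets, and accuracy would be estimated with proper cross-validation rather than a single tiny split.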

Proceedings Article
22 Apr 2015
TL;DR: It is found that, controlling for other factors, population has no effect on the number of geotag users, and instead it is predicted by a number of factors including higher median income, being in an urban area, being further east or on a coast, having more young people, and having high Asian, Black or Hispanic/Latino populations.
Abstract: Geotagged tweets are an exciting and increasingly popular data source, but like all social media data, they potentially carry biases in who is represented. Motivated by this, we investigate the question: are users of geotagged tweets randomly distributed over the US population? We link approximately 144 million geotagged tweets within the US, representing 2.6 million unique users, to high-resolution Census population data and carry out a statistical test by which we answer this question strongly in the negative. We use spatial models and integrate further Census data to investigate the factors associated with this nonrandom distribution. We find that, controlling for other factors, population has no effect on the number of geotagging users; instead it is predicted by a number of factors including higher median income, being in an urban area, being further east or on a coast, having more young people, and having high Asian, Black, or Hispanic/Latino populations.

165 citations
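The paper's central statistical question, whether geotagging users are distributed in proportion to Census population, can be sketched as a chi-square goodness-of-fit test. The regions and counts below are invented for illustration; the study itself works at far higher spatial resolution with spatial models.

```python
# Sketch of a goodness-of-fit test: do observed geotagging users per
# region match what the Census population shares would predict?
# The four regions and their counts are made-up illustrations.

population_share = [0.40, 0.30, 0.20, 0.10]  # Census population share per region
geotag_users = [520, 250, 130, 100]          # observed geotagging users per region

total = sum(geotag_users)
expected = [share * total for share in population_share]

# Pearson chi-square goodness-of-fit statistic
chi2 = sum((o - e) ** 2 / e for o, e in zip(geotag_users, expected))

# The critical value for df = 3 at alpha = 0.001 is about 16.27; a larger
# statistic rejects "users are distributed in proportion to population".
print(round(chi2, 2), chi2 > 16.27)  # → 68.83 True
```

Rejecting the null here mirrors, in miniature, the paper's "strongly in the negative" answer at the scale of 2.6 million users.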

Proceedings ArticleDOI
20 Oct 2013
TL;DR: Performs an in-depth characterization of the factors that influenced malicious content and profiles becoming viral on Twitter, finding that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content.
Abstract: Online social media has emerged as one of the prominent channels for dissemination of information during real-world events. Malicious content posted online during events can result in damage, chaos, and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts, which occurred on April 15, 2013. A lot of fake content and malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis consisted of rumors and fake content, 51% was generic opinions and comments, and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts created on Twitter during the Boston event were later suspended by Twitter. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We identified closed community structures and star formations in the interaction network of these suspended profiles amongst themselves.

144 citations

Proceedings ArticleDOI
19 Jul 2018
TL;DR: This work proposes a density-based ensemble outlier detector, called xStream, for feature-evolving streams, with the following key properties: it is a constant-space and constant-time algorithm, it measures outlierness at multiple scales or granularities, and it handles high-dimensionality through distance-preserving projections and non-stationarity via O(1)-time model updates as the stream progresses.
Abstract: This work addresses the outlier detection problem for feature-evolving streams, which has not been studied before. In this setting, (1) data points may evolve, with feature values changing, and (2) the feature space may evolve, with new features emerging over time. This is notably different from row-streams, where points with fixed features arrive one at a time. We propose a density-based ensemble outlier detector, called xStream, for this more extreme streaming setting, which has the following key properties: (1) it is a constant-space and constant-time (per incoming update) algorithm, (2) it measures outlierness at multiple scales or granularities, and (3) it can handle both high-dimensionality, through distance-preserving projections, and non-stationarity, via O(1)-time model updates as the stream progresses. In addition, xStream can address the outlier detection problem in the (less general) disk-resident static and row-streaming settings. We evaluate xStream rigorously on numerous real-life datasets in all three settings: static, row-stream, and feature-evolving stream. Experiments under static and row-streaming scenarios show that xStream is as competitive as state-of-the-art detectors and particularly effective in high dimensions with noise. We also demonstrate that our solution is fast and accurate with modest space overhead for evolving streams, on which there exists no competition.

57 citations
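One ingredient the abstract names, constant-space distance-preserving projections over an evolving feature space, can be sketched with a hashing trick: deriving each feature's projection weights from a hash of its name means no projection matrix has to be stored or resized when new features emerge. The scheme below is an illustrative approximation under that assumption, not the authors' implementation.

```python
# Sketch of a constant-space sparse random projection for points whose
# feature set may grow over time. Weights are +/-1 values derived by
# hashing (feature name, dimension), so newly-emerging features need
# no stored matrix. Illustration only, not the xStream code.
import hashlib

K = 8  # fixed number of projection dimensions

def feature_weight(feature_name, dim):
    # Deterministic +/-1 weight from a hash of (feature, dim).
    h = hashlib.md5(f"{feature_name}:{dim}".encode()).digest()
    return 1.0 if h[0] % 2 == 0 else -1.0

def project(point):
    # point: dict mapping feature names to values. Features never seen
    # before are handled transparently; output is always K-dimensional.
    return [sum(v * feature_weight(f, d) for f, v in point.items())
            for d in range(K)]

p1 = project({"bytes": 3.0, "duration": 1.5})
p2 = project({"bytes": 3.0, "duration": 1.5, "new_feature": 0.0})
print(p1 == p2)  # a zero-valued new feature leaves the projection unchanged
```

Because the projection is linear and per-feature weights are recomputed on demand, memory stays O(K) no matter how many distinct features the stream eventually produces.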

Patent
Kuntal Dey1, Hemank Lamba1, Seema Nagar1
12 Aug 2014
TL;DR: In this paper, the authors present a method for automatically triggering actions on a user device based on biometrics of nearby individuals, which includes capturing, via sensors resident on a first computing device, one or more items of biometric information from individuals located within a given proximity of the first device.
Abstract: Methods, systems, and computer program products for automatically triggering actions on a user device based on biometrics of nearby individuals are provided herein. A method includes capturing, via one or more sensors resident on a first computing device, one or more items of biometric information from one or more individuals located within a given proximity of the first computing device; determining a degree of proximity of the one or more individuals to the first computing device; transmitting, to a second computing device, (i) the one or more captured items of biometric information from the one or more individuals and (ii) the determined degree of proximity of the one or more individuals to the first computing device; and automatically executing one or more actions on the first computing device based on an instruction provided by the second computing device in response to said transmitting.

46 citations


Cited by
Journal ArticleDOI
09 Mar 2018-Science
TL;DR: A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth, and false news was more novel than true news, which suggests that people were more likely to share novel information.
Abstract: We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

4,241 citations

01 Jan 2008
TL;DR: In this article, the authors argue that rational actors make their organizations increasingly similar as they try to change them, and describe three isomorphic processes-coercive, mimetic, and normative.
Abstract: What makes organizations so similar? We contend that the engine of rationalization and bureaucratization has moved from the competitive marketplace to the state and the professions. Once a set of organizations emerges as a field, a paradox arises: rational actors make their organizations increasingly similar as they try to change them. We describe three isomorphic processes-coercive, mimetic, and normative—leading to this outcome. We then specify hypotheses about the impact of resource centralization and dependency, goal ambiguity and technical uncertainty, and professionalization and structuration on isomorphic change. Finally, we suggest implications for theories of organizations and social change.

2,134 citations

Journal ArticleDOI
TL;DR: Presents a comprehensive review of detecting fake news on social media, including fake news characterizations based on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics, and representative datasets.
Abstract: Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low-quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research area that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers into believing false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself, as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations based on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics, and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.

1,891 citations

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the characteristics of modern, sophisticated social bots and how their presence can endanger online ecosystems and our society, and review current efforts to detect social bots on Twitter.
Abstract: The Turing test aimed to distinguish the behavior of a human from that of a computer algorithm. Such a challenge is more relevant than ever in today's social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.

1,229 citations

01 Jan 2013

1,098 citations