Author

Ehab Abozinadah

Other affiliations: George Mason University
Bio: Ehab Abozinadah is an academic researcher from King Abdulaziz University. The author has contributed to research in the topics of electronic mail and corporate governance. The author has an h-index of 5, has co-authored 11 publications, and has received 139 citations. Previous affiliations of Ehab Abozinadah include George Mason University.

Papers
Journal ArticleDOI
TL;DR: An adaptive NN controller is established that ensures all signals in the closed-loop system are bounded under a class of switching signals with average dwell time, and that the tracking error converges to the predefined bounds.

68 citations

Proceedings ArticleDOI
17 May 2014
TL;DR: A large-scale social graph is constructed from a smaller set of leaked data that included only the criminals' email addresses, to identify profiles of high-ranking criminals, criminal organizations, and large-scale communities of criminals.
Abstract: Analysis of criminal social graph structures can enable us to gain valuable insights into how these communities are organized. For example, how large and how centralized are these criminal communities currently? While these types of analysis have been completed in the past, we wanted to explore how to construct a large-scale social graph from a smaller set of leaked data that included only the criminals' email addresses. We begin our analysis by constructing a 43,000-node social graph from one thousand publicly leaked criminals' email addresses. This is done by locating Facebook profiles that are linked to these same email addresses and scraping the public social graph from these profiles. We then perform a large-scale analysis of this social graph to identify profiles of high-ranking criminals, criminal organizations, and large-scale communities of criminals. Finally, we perform a manual analysis of these profiles, which results in the identification of many criminally focused public groups on Facebook. This analysis demonstrates the amount of information that can be gathered by using limited data leaks.
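The ranking step described above can be illustrated with a minimal, dependency-free sketch: build an undirected graph from scraped friendship edges and score profiles with PageRank so that the most central (potentially high-ranking) profiles surface first. The edge list is invented for illustration; the paper's actual pipeline and metrics may differ.

```python
# Minimal sketch: rank profiles in a scraped social graph by PageRank.
# The edge list is illustrative; in the paper's setting, nodes would be
# Facebook profiles located via the leaked email addresses.

def pagerank(edges, damping=0.85, iters=50):
    """Simple PageRank over an undirected edge list."""
    nodes = {n for e in edges for n in e}
    neighbors = {n: set() for n in nodes}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            share = sum(rank[m] / len(neighbors[m]) for m in neighbors[n])
            new[n] = (1 - damping) / len(nodes) + damping * share
        rank = new
    return rank

edges = [("boss", "a"), ("boss", "b"), ("boss", "c"), ("a", "b"), ("c", "d")]
ranks = pagerank(edges)
top = max(ranks, key=ranks.get)  # the most central profile
```

In practice one would combine several centrality measures and community detection, but degree-weighted ranking like this already separates hub profiles from peripheral contacts.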

55 citations

Journal ArticleDOI
TL;DR: An Arabic word correction method is proposed to address spammers' exploitation of vulnerabilities in content-filtering and internet censorship systems; it achieves a predictive accuracy of 96.5% for detecting abusive accounts with Arabic tweets.
Abstract: The increased use of social media in Arab regions has attracted spammers seeking new victims. Spammers use accounts on Twitter to distribute adult content in Arabic-language tweets, yet this content is prohibited in these countries due to Arabic cultural norms. These spammers succeed in sending targeted spam by exploiting vulnerabilities in content-filtering and internet censorship systems, primarily by using misspelled words to bypass content filters. In this paper we propose an Arabic word correction method to address this vulnerability. Using our approach, we achieve a predictive accuracy of 96.5% for detecting abusive accounts with Arabic tweets.
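The core idea of mapping deliberately misspelled words back to their intended forms can be sketched with a generic edit-distance correction against a lexicon of known filtered terms. This is an illustration only, not the authors' actual method; the lexicon and the obfuscated word here are English placeholders.

```python
# Illustrative sketch (not the paper's exact method): correct an obfuscated
# word by finding the closest entry in a lexicon of known filtered terms.

def edit_distance(a, b):
    """Levenshtein distance via one-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def correct(word, lexicon, max_dist=2):
    """Map a (possibly deliberately misspelled) word back to the lexicon."""
    best = min(lexicon, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else word

lexicon = ["spam", "banned"]  # hypothetical filtered-term list
corrected = correct("sp4m", lexicon)
```

Once obfuscated words are normalized this way, a downstream content filter sees the intended term rather than the misspelling used to evade it.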

25 citations

Proceedings ArticleDOI
19 May 2017
TL;DR: This paper uses a statistical learning approach to analyze Twitter content to detect abusive accounts that use obscenity, profanity, slang, and swear words in Arabic text; it achieves a predictive accuracy of 96% and overcomes limitations of the bag-of-words (BOW) approach.
Abstract: The increased use of social media has motivated spammers to post their malicious activities on social network sites. Some of these spammers use adult content to further the distribution of their malicious activities. Moreover, the extensive number of users posting adult content in social media degrades the experience for other users for whom the adult content is not desired or appropriate. In this paper, we aim to detect abusive accounts that post adult content in the Arabic language to target Arab speakers. There are limited natural language processing (NLP) resources for the Arabic language, and to the best of our knowledge no research has been done to detect Arabic-language adult accounts in social media. We used a statistical learning approach to analyze Twitter content to detect abusive accounts that use obscenity, profanity, slang, and swear words in Arabic text. Our approach achieved a predictive accuracy of 96% and overcomes limitations of the bag-of-words (BOW) approach.
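A statistical learning baseline of the kind the paper compares against can be sketched as a bag-of-words Naive Bayes classifier. The tokens below are English placeholders standing in for Arabic tweets, and the paper's actual features and classifier may differ; this only shows the general shape of the approach.

```python
import math
from collections import Counter, defaultdict

# Toy bag-of-words Naive Bayes text classifier; English placeholder tokens
# stand in for the Arabic tweets used in the paper.

def train(samples):
    """samples: list of (token_list, label). Returns a model dict."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for tokens, label in samples:
        label_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return {"wc": word_counts, "lc": label_counts, "vocab": vocab}

def predict(model, tokens):
    """Pick the label with the highest Laplace-smoothed log-probability."""
    total = sum(model["lc"].values())
    best, best_score = None, float("-inf")
    for label, n in model["lc"].items():
        score = math.log(n / total)
        denom = sum(model["wc"][label].values()) + len(model["vocab"])
        for t in tokens:
            score += math.log((model["wc"][label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

data = [(["buy", "followers", "now"], "abusive"),
        (["cheap", "followers", "click"], "abusive"),
        (["good", "morning", "friends"], "clean"),
        (["nice", "weather", "today"], "clean")]
model = train(data)
label = predict(model, ["buy", "cheap", "followers"])
```

A plain BOW model like this ignores word order and misspellings, which is exactly the limitation the paper's word-correction preprocessing is designed to overcome.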

24 citations

Journal ArticleDOI
TL;DR: The concept of deep journalism is introduced, a data-driven deep learning-based approach, to discover and analyse cross-sectional multi-perspective information to enable better decision making and develop better instruments for academic, corporate, national, and international governance.
Abstract: We live in a complex world characterised by complex people, complex times, and complex social, technological, economic, and ecological environments. The broad aim of our work is to investigate the use of ICT technologies for solving pressing problems in smart cities and societies. Specifically, in this paper, we introduce the concept of deep journalism, a data-driven deep learning-based approach, to discover and analyse cross-sectional multi-perspective information to enable better decision making and develop better instruments for academic, corporate, national, and international governance. We build three datasets (a newspaper, a technology magazine, and a Web of Science dataset) and discover the academic, industrial, public, governance, and political parameters for the transportation sector as a case study to introduce deep journalism and our tool, DeepJournal (Version 1.0), that implements our proposed approach. We elaborate on 89 transportation parameters and hundreds of dimensions, reviewing 400 technical, academic, and news articles. The findings related to the multi-perspective view of transportation reported in this paper show that there are many important problems that industry and academia seem to ignore. In contrast, academia produces much broader and deeper knowledge on subjects such as pollution that are not sufficiently explored in industry. Our deep journalism approach could find the gaps in information and highlight them to the public and other stakeholders.

14 citations


Cited by
Proceedings ArticleDOI
04 Aug 2017
TL;DR: A list of obscene words and hashtags is extracted using common patterns used in offensive and rude communications and Twitter users are classified according to whether they use any of these words or not in their tweets.
Abstract: In this paper, we present our work on detecting abusive language on Arabic social media. We extract a list of obscene words and hashtags using common patterns found in offensive and rude communications. We also classify Twitter users according to whether or not they use any of these words in their tweets. We expand the list of obscene words using this classification, and we report results on a newly created dataset of classified Arabic tweets (obscene, offensive, and clean). We make this dataset freely available for research, in addition to the list of obscene words and hashtags. We are also publicly releasing a large corpus of classified user comments that were deleted from a popular Arabic news site due to violations of the site's rules and guidelines.
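The two-step procedure described above — harvest candidate terms via seed patterns, then classify users by whether their tweets contain any harvested term — can be sketched as follows. The regex pattern and the sample tweets are English placeholders, not the paper's actual word list.

```python
import re

# Illustrative sketch: harvest candidate offensive terms from tweets by
# matching seed patterns, then flag users whose tweets contain any of them.
# Pattern and tweets are English placeholders, not the paper's list.

SEED_PATTERN = re.compile(r"#?\b(stupid\w*|idiot\w*)\b", re.IGNORECASE)

def extract_terms(tweets):
    """Collect lowercase terms matching the seed patterns."""
    terms = set()
    for text in tweets:
        terms.update(m.lower() for m in SEED_PATTERN.findall(text))
    return terms

def classify_user(tweets, obscene_terms):
    """Label a user by whether any tweet token is in the harvested list."""
    for tweet in tweets:
        if any(tok in obscene_terms for tok in re.findall(r"\w+", tweet.lower())):
            return "obscene"
    return "clean"

seed_tweets = ["You are so stupid!", "what an idiot move"]
terms = extract_terms(seed_tweets)
verdict = classify_user(["total idiot behaviour"], terms)
```

Iterating the two steps — classify users, then mine their tweets for new terms — is what lets the word list grow beyond the initial seed patterns.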

247 citations

Proceedings ArticleDOI
25 Sep 2014
TL;DR: This work bridges computer science and criminal justice to develop a data mining procedure that can help solve crimes faster, focusing mainly on the crime factors of each day.
Abstract: Crime analysis and prevention is a systematic approach for identifying and analyzing patterns and trends in crime. Our system can predict regions that have a high probability of crime occurrence and can visualize crime-prone areas. With the increasing advent of computerized systems, crime data analysts can help law enforcement officers speed up the process of solving crimes. Using the concept of data mining, we can extract previously unknown, useful information from unstructured data. Here we bridge computer science and criminal justice to develop a data mining procedure that can help solve crimes faster. Instead of focusing on causes of crime occurrence, such as the offender's criminal background or political enmity, we focus mainly on the crime factors of each day.
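The day-by-day frequency idea can be sketched minimally: group incidents into (region, weekday) cells and rank the cells by count to surface likely hotspots. The records below are invented placeholders, and the paper's actual mining procedure is likely more elaborate.

```python
from collections import Counter

# Minimal sketch of frequency-based hotspot ranking: count incidents per
# (region, weekday) cell and surface the highest-count cells.
# The records are invented placeholders.

records = [("downtown", "Fri"), ("downtown", "Fri"), ("harbor", "Sat"),
           ("downtown", "Sat"), ("suburb", "Mon"), ("downtown", "Fri")]

def hotspots(incidents, top_n=2):
    """Rank (region, weekday) cells by incident count, highest first."""
    counts = Counter(incidents)
    return counts.most_common(top_n)

top = hotspots(records)
```

Normalizing these counts by cell gives an empirical probability per region and day, which is the quantity a visualization layer would map onto crime-prone areas.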

114 citations

Posted Content
TL;DR: This paper introduces a method for building a dataset that is not biased by topic, dialect, or target, and produces the largest Arabic dataset to date with special tags for vulgarity and hate speech.
Abstract: Detecting offensive language on Twitter has many applications ranging from detecting/predicting bullying to measuring polarization. In this paper, we focus on building a large Arabic offensive tweet dataset. We introduce a method for building a dataset that is not biased by topic, dialect, or target. We produce the largest Arabic dataset to date with special tags for vulgarity and hate speech. We thoroughly analyze the dataset to determine which topics, dialects, and gender are most associated with offensive tweets and how Arabic speakers use offensive language. Lastly, we conduct many experiments to produce strong results (F1 = 83.2) on the dataset using SOTA techniques.

105 citations

Journal ArticleDOI
TL;DR: In this paper, a tracking control problem for a class of stochastic nonlinear systems with time-varying full-state constraints and asymmetric input saturation is studied.
Abstract: This article studies a tracking control problem for a class of stochastic nonlinear systems with time-varying full-state constraints and asymmetric input saturation. Firstly, the Gauss Error Function...

80 citations

Journal ArticleDOI
TL;DR: Based on Lyapunov stability theory, the proposed adaptive AC optimal control method is verified to guarantee the boundedness of all signals in the considered closed-loop switched nonlinear systems.

80 citations