Author

Aly A. Fahmy

Other affiliations: Zagazig University
Bio: Aly A. Fahmy is an academic researcher from Cairo University. The author has contributed to research in topics: Support vector machine & Optimization problem. The author has an h-index of 19 and has co-authored 80 publications receiving 1,740 citations. Previous affiliations of Aly A. Fahmy include Zagazig University.


Papers
Journal ArticleDOI
TL;DR: This paper analyses people's emotions from tweets extracted during the Arab Spring and the recent Egyptian Revolution, proposing a time emotional analysis framework that consists of four components: annotating tweets, classifying at the tweet/expression levels, clustering on some aspects, and analysing the distributions of people's emotions, expressions, and aspects over specific time periods.
Abstract: Sentiment and emotional analyses have recently become effective tools to discover people's attitudes towards real-life events. While many corners of emotional analysis research have been explored, time emotional analysis at the expression and aspect levels is yet to be intensively studied. This paper aims to analyse people's emotions from tweets extracted during the Arab Spring and the recent Egyptian Revolution. Analysis is done at the tweet, expression, and aspect levels. In this research, we only consider the surprise, happiness, sadness, and anger emotions, in addition to sarcasm expression. We propose a time emotional analysis framework that consists of four components, namely: annotating tweets, classifying at the tweet/expression levels, clustering on some aspects, and analysing the distributions of people's emotions, expressions, and aspects over specific time periods. Our contribution is two-fold. First, our framework effectively analyses people's emotional trends over time at different fine-granularity levels (tweets, expressions, and aspects) while being easily adaptable to other languages. Second, we developed a lightweight clustering algorithm that exploits the short length of tweets. On this problem, the developed clustering algorithm achieved better results than state-of-the-art clustering algorithms. Our approach achieved a 70.1% F-measure in classification, compared to the 85.4% state-of-the-art result on English, and 61.45% purity in clustering.
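The abstract does not describe the lightweight clustering algorithm itself, so the following is only a hedged illustration of one common way to exploit the short length of tweets: single-pass clustering over token sets with Jaccard similarity. The threshold and all names here are assumptions, not the paper's method.

```python
def single_pass_cluster(texts, threshold=0.3):
    """Single-pass clustering for short texts (illustrative sketch only).
    Each text is reduced to a token set; a text joins the most similar
    existing cluster if Jaccard similarity >= threshold, else starts a new one."""
    clusters = []  # each cluster: {"tokens": merged token set, "members": [indices]}
    for i, text in enumerate(texts):
        tokens = set(text.lower().split())
        best, best_sim = None, 0.0
        for c in clusters:
            union = len(tokens | c["tokens"])
            sim = len(tokens & c["tokens"]) / union if union else 0.0
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["members"].append(i)
            best["tokens"] |= tokens        # grow the cluster's vocabulary
        else:
            clusters.append({"tokens": set(tokens), "members": [i]})
    return [c["members"] for c in clusters]
```

Because tweets are short, the token sets stay small and each comparison is cheap, which is the property a lightweight tweet-clustering algorithm would plausibly rely on.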

7 citations

01 Jan 2009
TL;DR: The proposed model achieves high accuracy in translating queries from English to Arabic, resolving translation and transliteration ambiguities, and, with orthographic query expansion, handles OCR errors with a high degree of accuracy.
Abstract: In this paper, a novel model for query translation and expansion, enabling English/Arabic CLIR for both normal and OCR-degraded Arabic text, has been proposed, implemented, and tested. First, an English/Arabic word collocations dictionary has been established, in addition to reproducing three English/Arabic single-word dictionaries. Second, a modern Arabic corpus has been built. Third, a model for simulating Arabic OCR errors has been proposed. Fourth, a comprehensive model for query translation and expansion is proposed. The model translates the query from English to Arabic, detecting and translating collocations, translating single words, and transliterating names. It resolves the replacement ambiguity and then expands the Arabic query to handle the expected Arabic OCR errors. The proposed model achieves high accuracy in translating queries from English to Arabic, resolving the translation and transliteration ambiguities; with orthographic query expansion, it also handles OCR errors with a high degree of accuracy.
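The translation step described above (collocation detection first, then single-word lookup, then transliteration of names) can be sketched as a greedy longest-match pass over the query. The dictionaries and the transliteration callback below are toy placeholders, not the paper's actual resources.

```python
def translate_query(tokens, colloc_dict, word_dict, transliterate):
    """Greedy longest-match query translation (illustrative sketch).
    Tries multi-word collocations first, then single-word dictionary
    lookup, then a transliteration fallback for out-of-vocabulary names."""
    out, i = [], 0
    while i < len(tokens):
        matched = False
        # try 3-word, then 2-word collocations starting at position i
        for n in range(min(3, len(tokens) - i), 1, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in colloc_dict:
                out.append(colloc_dict[phrase])
                i += n
                matched = True
                break
        if not matched:
            # single word: dictionary lookup, else transliterate (likely a name)
            out.append(word_dict.get(tokens[i], transliterate(tokens[i])))
            i += 1
    return out
```

For example, with placeholder entries `{"united nations": "AR_UN"}` and `{"report": "AR_REPORT"}`, the query `["united", "nations", "report", "cairo"]` yields the collocation translation, the word translation, and a transliterated form of "cairo".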

6 citations

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work adopts this idea and introduces a statistical model of the interactions between a social network's actors, using a Bayesian network (probabilistic graphical model) to show the relations between model variables.
Abstract: Community detection in complex networks has attracted a lot of attention in recent years. Communities play special roles in the structure-function relationship; therefore, detecting communities can be a way to identify substructures that could correspond to important functions. Social networks can be formalized by a statistical model in which interactions between actors are generated based on some assumptions. We adopt this idea and introduce a statistical model of the interactions between a social network's actors, and we use a Bayesian network (probabilistic graphical model) to show the relations between model variables. Through the use of the Expectation-Maximization (EM) algorithm, we derive estimates for the model parameters and propose a community detection algorithm based on the EM estimates. The proposed algorithm works well with directed and undirected networks, and with weighted and unweighted networks. The algorithm yields very promising results when applied to the community detection problem.
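The EM-based approach above can be illustrated with a standard mixture-model formulation in the spirit of Newman and Leicht's link-pattern mixture model; this is a generic sketch under that assumption, not the paper's exact model.

```python
import numpy as np

def em_communities(A, k, iters=100, seed=0):
    """Mixture-model community detection fitted with EM (generic sketch).
    A: (n, n) adjacency matrix; k: number of communities.
    Returns q: (n, k) soft community memberships, rows summing to 1."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.dirichlet(np.ones(k), size=n)        # random initial responsibilities
    for _ in range(iters):
        # M-step: community priors pi and per-community link propensities theta
        pi = q.mean(axis=0)                                        # (k,)
        theta = q.T @ A                                            # (k, n)
        theta /= theta.sum(axis=1, keepdims=True) + 1e-12
        # E-step: posterior membership given each node's observed link pattern
        log_p = np.log(pi + 1e-12) + A @ np.log(theta.T + 1e-12)   # (n, k)
        log_p -= log_p.max(axis=1, keepdims=True)                  # numerical stability
        q = np.exp(log_p)
        q /= q.sum(axis=1, keepdims=True)
    return q

# toy example: two 4-node cliques joined by a single bridge edge
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0
labels = em_communities(A, k=2).argmax(axis=1)
```

The same update rules apply unchanged to directed, undirected, weighted, and unweighted adjacency matrices, which matches the generality claimed in the abstract.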

6 citations

Proceedings ArticleDOI
19 Apr 2014
TL;DR: Experimental evaluations using a 4,878-thesis data set from the medical sector at Cairo University indicate that the proposed approach, which uses the standard ontology (MeSH), yields results that correlate more closely with human assessments than other approaches.
Abstract: Knowledge extraction and text representation are considered the main concerns of organizations nowadays. The estimation of the semantic similarity between words provides a valuable method for enabling the understanding of texts. In biomedical domains, using ontologies has been very effective due to their scalability and efficiency. In this paper, we aim to cluster and classify medical thesis data to better discover the commonalities between theses and hence improve the accuracy of the similarity estimation, which in turn improves the scientific research sector. Experimental evaluations using a 4,878-thesis data set from the medical sector at Cairo University indicate that the proposed approach, which uses the standard ontology (MeSH), yields results that correlate more closely with human assessments than other approaches. Two different algorithms were used: the first computes lexical similarity and then applies K-means clustering, and the second is a fuzzy Euclidean distance clustering algorithm applied after using the MeSH ontology on the medical theses data for better categorization of the keywords within the data.
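The fuzzy Euclidean distance clustering mentioned above can be illustrated with the standard fuzzy c-means algorithm; the abstract does not specify the exact variant used, so this is a generic sketch only.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=200, seed=0):
    """Standard fuzzy c-means with Euclidean distance (generic sketch).
    X: (n, d) data; c: number of clusters; m > 1: fuzzifier.
    Returns (centers, U), where U[i, j] is point i's membership in cluster j."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # random fuzzy memberships
    for _ in range(iters):
        W = U ** m                               # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # distances from every point to every center, guarded against zero
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # membership update: U[i, j] = 1 / sum_k (d[i, j] / d[i, k]) ** p
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return centers, U
```

On well-separated data the argmax of U recovers a hard clustering (comparable to the K-means pipeline), while U itself gives the graded memberships that motivate a fuzzy approach for overlapping keyword categories.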

6 citations


Cited by

Journal ArticleDOI
TL;DR: Practical suggestions on the selection of many hyperparameters are provided in the hope that they will promote or guide the deployment of deep learning to EEG datasets in future research.
Abstract: Objective Electroencephalography (EEG) analysis has been an important tool in neuroscience, with applications in neural engineering (e.g. brain-computer interfaces, BCIs) and even commercial applications. Many of the analytical tools used in EEG studies have used machine learning to uncover relevant information for neural classification and neuroimaging. Recently, the availability of large EEG data sets and advances in machine learning have both led to the deployment of deep learning architectures, especially in the analysis of EEG signals and in understanding the information they may contain for brain functionality. The robust automatic classification of these signals is an important step towards making the use of EEG more practical in many applications and less reliant on trained professionals. Towards this goal, a systematic review of the literature on deep learning applications to EEG classification was performed to address the following critical questions: (1) Which EEG classification tasks have been explored with deep learning? (2) What input formulations have been used for training the deep networks? (3) Are there specific deep learning network structures suitable for specific types of tasks? Approach A systematic literature review of EEG classification using deep learning was performed on the Web of Science and PubMed databases, resulting in 90 identified studies. Those studies were analyzed based on type of task, EEG preprocessing methods, input type, and deep learning architecture. Main results For EEG classification tasks, convolutional neural networks, recurrent neural networks, and deep belief networks outperform stacked auto-encoders and multi-layer perceptron neural networks in classification accuracy. The tasks that used deep learning fell into six general groups: emotion recognition, motor imagery, mental workload, seizure detection, event-related potential detection, and sleep scoring.
For each type of task, we describe the specific input formulation, major characteristics, and end classifier recommendations found through this review. Significance This review summarizes the current practices and performance outcomes in the use of deep learning for EEG classification. Practical suggestions on the selection of many hyperparameters are provided in the hope that they will promote or guide the deployment of deep learning to EEG datasets in future research.

777 citations

Journal ArticleDOI
TL;DR: The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.

756 citations

Proceedings ArticleDOI
01 Jun 2016
TL;DR: SemEval-2016 Task 4, the fourth year of the Sentiment Analysis in Twitter task, comprises five subtasks, three of which represent a significant departure from previous editions; the new subtasks focus on two variants of the basic sentiment classification in Twitter task.
Abstract: This paper discusses the fourth year of the “Sentiment Analysis in Twitter Task”. SemEval-2016 Task 4 comprises five subtasks, three of which represent a significant departure from previous editions. The first two subtasks are reruns from prior years and ask to predict the overall sentiment, and the sentiment towards a topic in a tweet. The three new subtasks focus on two variants of the basic “sentiment classification in Twitter” task. The first variant adopts a five-point scale, which confers an ordinal character to the classification task. The second variant focuses on the correct estimation of the prevalence of each class of interest, a task which has been called quantification in the supervised learning literature. The task continues to be very popular, attracting a total of 43 teams.
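The quantification variant mentioned above (estimating each class's prevalence rather than labelling individual tweets) can be illustrated with the classic Adjusted Classify-and-Count estimator, a textbook quantification baseline rather than the task's official method.

```python
def adjusted_classify_and_count(pred_labels, tpr, fpr):
    """Adjusted Classify-and-Count (ACC) for binary prevalence estimation.
    The raw positive rate p_cc is biased by classifier errors; inverting
    E[p_cc] = tpr * p + fpr * (1 - p) gives p = (p_cc - fpr) / (tpr - fpr),
    clipped to [0, 1]. tpr and fpr are estimated on held-out labelled data."""
    p_cc = sum(pred_labels) / len(pred_labels)   # naive "classify and count"
    p = (p_cc - fpr) / (tpr - fpr)               # correct the expected bias
    return min(1.0, max(0.0, p))
```

For example, a classifier predicting 60% positives with tpr = 0.9 and fpr = 0.2 implies a corrected prevalence of (0.6 - 0.2) / 0.7, about 0.571, illustrating why simply counting classifier outputs is not a good quantifier.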

702 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of 154 studies that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Abstract: Context Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several million, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years.
About [Formula: see text] of the studies used convolutional neural networks (CNNs), while [Formula: see text] used recurrent neural networks (RNNs), most often with a total of 3-10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was [Formula: see text] across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. Significance To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.

699 citations