Author

Ahmed Abdeen Hamed

Other affiliations: Merck & Co., University of Vermont, Norwich University
Bio: Ahmed Abdeen Hamed is an academic researcher from Suez Canal University. The author has contributed to research in topics: Social media & Bigram. The author has an h-index of 8 and has co-authored 20 publications receiving 181 citations. Previous affiliations of Ahmed Abdeen Hamed include Merck & Co. and the University of Vermont.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors developed a cascading integrated assessment model to test the combined impacts of global climate change and anthropogenic land use and land cover changes (LULCCs) on watershed-level hydrological dynamics and nutrient fluxes to freshwater lakes.
Abstract: Global climate change (GCC) is projected to bring higher-intensity precipitation and higher-variability temperature regimes to the Northeastern United States. The interactive effects of GCC with anthropogenic land use and land cover changes (LULCCs) are unknown for watershed-level hydrological dynamics and nutrient fluxes to freshwater lakes. Increased nutrient fluxes can promote harmful algal blooms, also exacerbated by warmer water temperatures due to GCC. To address the complex interactions of climate, land and humans, we developed a cascading integrated assessment model to test the impacts of GCC and LULCC on the hydrological regime, water temperature, water quality, bloom duration and severity through 2040 in transnational Lake Champlain's Missisquoi Bay. Temperature and precipitation inputs were statistically downscaled from four global circulation models (GCMs) for three Representative Concentration Pathways. An agent-based model was used to generate four LULCC scenarios. Combined climate and LULCC scenarios drove a distributed hydrological model to estimate river discharge and nutrient input to the lake. Lake nutrient dynamics were simulated with a 3D hydrodynamic-biogeochemical model. We find accelerated GCC could drastically limit land management options to maintain water quality, but the nature and severity of this impact varies dramatically by GCM and GCC scenario.
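
The cascading structure described in this abstract (downscaled GCM forcing, agent-based LULCC scenarios, a distributed hydrological model, and a 3D lake model) can be illustrated with a minimal sketch. The function and scenario names below are hypothetical placeholders, not the actual models or scenario labels from the study; the sketch only shows how scenario combinations might be chained through the model cascade.

# Minimal sketch of a cascading integrated assessment loop.
# All model functions are hypothetical stand-ins for the downscaled GCM
# forcing, agent-based LULCC, hydrological, and 3D lake models named above.

from itertools import product

GCMS = ["GCM-A", "GCM-B", "GCM-C", "GCM-D"]          # four global circulation models
RCPS = ["RCP2.6", "RCP4.5", "RCP8.5"]                # three concentration pathways
LULCC_SCENARIOS = ["business-as-usual", "intensive-ag",
                   "conservation", "urbanization"]   # four land-use scenarios (illustrative)

def downscale_climate(gcm, rcp):
    """Placeholder: return daily temperature/precipitation series."""
    return {"gcm": gcm, "rcp": rcp, "temp": [], "precip": []}

def run_hydrology(climate, lulcc):
    """Placeholder: distributed hydrological model -> discharge and nutrient loads."""
    return {"discharge": [], "phosphorus_load": []}

def run_lake_model(loads, climate):
    """Placeholder: 3D hydrodynamic-biogeochemical model -> bloom metrics."""
    return {"bloom_duration_days": 0, "bloom_severity": 0.0}

results = {}
for gcm, rcp, lulcc in product(GCMS, RCPS, LULCC_SCENARIOS):
    climate = downscale_climate(gcm, rcp)
    loads = run_hydrology(climate, lulcc)
    results[(gcm, rcp, lulcc)] = run_lake_model(loads, climate)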

42 citations

Journal ArticleDOI
TL;DR: The novel citation network classifier for the biomedical text mining domain, while not a top-performing classifier in the challenge, performed above the central tendency of all submissions, and therefore indicates a promising new avenue to investigate further in bibliome informatics.
Abstract: We participated (as Team 9) in the Article Classification Task of the BioCreative II.5 Challenge: binary classification of full-text documents relevant for protein-protein interaction. We used two distinct classifiers for the online and offline challenges: 1) the lightweight Variable Trigonometric Threshold (VTT) linear classifier we successfully introduced in BioCreative 2 for binary classification of abstracts, and 2) a novel Naive Bayes classifier using features from the citation network of the relevant literature. We supplemented the supplied training data with full-text documents from the MIPS database. The lightweight VTT classifier was very competitive in this new full-text scenario: it was a top-performing submission in this task, taking into account the rank product of the Area Under the interpolated precision and recall Curve, Accuracy, Balanced F-Score, and Matthews Correlation Coefficient performance measures. The novel citation network classifier for the biomedical text mining domain, while not a top-performing classifier in the challenge, performed above the central tendency of all submissions, and therefore indicates a promising new avenue to investigate further in bibliome informatics.
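
The citation-network classifier is described only at a high level, so the following is a minimal sketch of a Naive Bayes classifier over binary citation features, assuming each document is represented by indicators of which references from the relevant literature's citation network it cites. The feature construction and toy data are illustrative, not the paper's exact method.

# Minimal sketch: Naive Bayes over hypothetical citation-network features.
# Rows = documents, columns = "does this document cite reference j from the
# citation network of the relevant literature?" (binary indicators).

import numpy as np
from sklearn.naive_bayes import BernoulliNB

X_train = np.array([[1, 0, 1, 1],
                    [0, 0, 1, 0],
                    [1, 1, 1, 0],
                    [0, 0, 0, 1]])
y_train = np.array([1, 0, 1, 0])   # 1 = relevant for protein-protein interaction

clf = BernoulliNB()
clf.fit(X_train, y_train)

X_new = np.array([[1, 0, 1, 0]])
print(clf.predict(X_new), clf.predict_proba(X_new))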

39 citations

Journal ArticleDOI
TL;DR: Though the K-H network model and the HashPairMiner algorithm are fairly new, their results are evidence of the considerable promise they offer to the biomedical science community in general and the drug research community in particular.

25 citations

Journal ArticleDOI
TL;DR: The ranking shows both consistency and promise in identifying drugs that can be repurposed; the algorithm validates the ranking by rewarding drugs that are currently being investigated in open clinical trials.
Abstract: Background: Driven by the COVID-19 pandemic and the dire need to discover an antiviral drug, we explored the landscape of the SARS-CoV-2 biomedical publications to identify potential treatments. Objective: The aims of this study are to identify off-label drugs that may have benefits for the coronavirus disease pandemic, present a novel ranking algorithm called CovidX to recommend existing drugs for potential repurposing, and validate the literature-based outcome with drug knowledge available in clinical trials. Methods: To achieve these objectives, we applied natural language processing techniques to identify drugs and linked entities (eg, disease, gene, protein, chemical compounds). When such entities are linked, they form a map that can be further explored using network science tools. The CovidX algorithm is based on a notion that we called "diversity." A diversity score for a given drug was calculated by measuring how "diverse" the drug is in terms of the variety of biological entity types it is linked to (regardless of the cardinality of actual instances in each category). The algorithm validates the ranking by rewarding those drugs that are currently being investigated in open clinical trials. The rationale behind using open clinical trials is to provide a mechanism for validating the PubMed results, ensuring up-to-date evidence given the fast development of this disease. Results: From the analyzed biomedical literature, the algorithm identified 30 possible drug candidates for repurposing, ranked them accordingly, and validated the ranking outcomes against evidence from clinical trials. The top 10 candidates according to our algorithm are hydroxychloroquine, azithromycin, chloroquine, ritonavir, losartan, remdesivir, favipiravir, methylprednisolone, rapamycin, and tilorone dihydrochloride. Conclusions: The ranking shows both consistency and promise in identifying drugs that can be repurposed. We believe, however, that the full treatment will be a multifaceted, adjuvant approach where multiple drugs may need to be taken at the same time.
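
The "diversity" notion lends itself to a short sketch: under the assumption that a drug's score counts the distinct biological entity categories it is linked to (regardless of how many instances fall in each category), with extra weight for drugs found in open clinical trials, a minimal ranking might look like the following. The data and scoring details are illustrative, not the exact CovidX formulation.

# Minimal sketch of a "diversity"-style ranking. The entity links, the
# trial bonus, and the drugs listed are illustrative assumptions only.

drug_links = {
    "hydroxychloroquine": {"disease": {"COVID-19"}, "gene": {"ACE2"},
                           "protein": {"spike"}, "chemical": {"quinoline"}},
    "azithromycin":       {"disease": {"COVID-19"}, "protein": {"spike"}},
}
in_open_trials = {"hydroxychloroquine", "azithromycin"}   # hypothetical trial registry lookup

def diversity_score(drug):
    # Count distinct entity categories with at least one linked instance,
    # ignoring how many instances each category contains.
    categories_hit = sum(1 for entities in drug_links[drug].values() if entities)
    trial_bonus = 1 if drug in in_open_trials else 0
    return categories_hit + trial_bonus

ranking = sorted(drug_links, key=diversity_score, reverse=True)
for drug in ranking:
    print(drug, diversity_score(drug))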

20 citations

Journal ArticleDOI
TL;DR: This is the first effort to introduce a practical solution for digital recruitment campaigns that is large-scale, inexpensive, and efficient, and that reaches out to individuals in near real time as their needs are expressed.
Abstract: Digital recruitment is increasingly becoming a popular avenue for identifying human subjects for various studies. The process starts with an online ad that describes the task and explains expectations. As social media has exploded in popularity, efforts are being made to use social media advertisement for various recruitment purposes. There are, however, many unanswered questions about how best to do that. In this paper, we present an innovative Twitter recruitment system for a smoking cessation nicotine patch study. The goals of the paper are to: (1) present the approach we have taken to solve the problem of digital recruitment; (2) provide the system specification and design of a rule-based system; (3) present the algorithms and data mining approaches (classification and association analysis) using Twitter data; and (4) present the promising outcome of the initial version of the system and summarize the results. This is the first effort to introduce a practical solution for digital recruitment campaigns that is large-scale, inexpensive, and efficient, and that reaches out to individuals in near real time as their needs are expressed. A continuous, real-time update on how our system is performing can be viewed at https://twitter.com/TobaccoQuit.
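
A minimal sketch of the rule-based screening step is shown below, assuming simple keyword and exclusion rules applied to incoming tweet text; the rules, keywords, and example tweets are illustrative and not the deployed system's actual rule base.

# Minimal sketch of a rule-based filter for recruitment-relevant tweets.
# Patterns and example tweets are hypothetical.

import re

QUIT_PATTERNS = [
    r"\b(want|trying|need) to quit smoking\b",
    r"\bquit(ting)? (cigarettes|smoking)\b",
    r"\bnicotine patch\b",
]
EXCLUDE_PATTERNS = [r"\bquit my job\b", r"http\S+promo"]   # filter obvious noise/ads

def is_candidate(tweet_text):
    """Return True if the tweet matches a recruitment rule and no exclusion rule."""
    text = tweet_text.lower()
    if any(re.search(p, text) for p in EXCLUDE_PATTERNS):
        return False
    return any(re.search(p, text) for p in QUIT_PATTERNS)

tweets = [
    "I really want to quit smoking this year",
    "Thinking about quitting my job lol",
    "Anyone tried a nicotine patch?",
]
print([t for t in tweets if is_candidate(t)])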

19 citations


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
11 May 2015-PLOS ONE
TL;DR: It is demonstrated that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence, especially in classification of evidence sentences.
Abstract: Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.
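
The kind of pipeline evaluated here (word unigram/bigram features feeding a linear classifier) can be sketched briefly; the toy sentences, labels, and the choice of TF-IDF weighting with logistic regression are illustrative assumptions rather than the study's exact configuration.

# Minimal sketch: bigram features + a linear classifier for evidence sentences.
# Sentences and labels are toy examples, not the curated corpora used in the study.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Coadministration increased the AUC of midazolam by 3-fold.",
    "The enzyme was crystallized at 4 degrees Celsius.",
    "Plasma clearance of the probe drug decreased after ketoconazole dosing.",
    "The patient cohort was recruited from two hospitals.",
]
labels = [1, 0, 1, 0]   # 1 = contains pharmacokinetic DDI evidence

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
pipeline.fit(sentences, labels)
print(pipeline.predict(["Ketoconazole increased midazolam AUC significantly."]))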

238 citations

Journal ArticleDOI
TL;DR: The results of the ACT task of BioCreative III indicate that classification of large, unbalanced article collections reflecting the real class imbalance is still challenging, and that text-mining tools reporting ranked lists of relevant articles for manual selection can potentially reduce the time needed to identify half of the relevant articles to less than a quarter of that required with unranked results.
Abstract: Background: Determining the usefulness of biomedical text mining systems requires realistic task definitions and data selection criteria without artificial constraints, measuring performance aspects that go beyond traditional metrics. The BioCreative III Protein-Protein Interaction (PPI) tasks were motivated by such considerations, trying to address aspects including how the end user would oversee the generated output, for instance by providing ranked results, textual evidence for human interpretation, or measured time savings from using automated systems. Detecting articles describing complex biological events like PPIs was addressed in the Article Classification Task (ACT), where participants were asked to implement tools for detecting PPI-describing abstracts. Therefore, the BCIII-ACT corpus was provided, which includes training, development and test sets of over 12,000 PPI-relevant and non-relevant PubMed abstracts labeled manually by domain experts, along with the recorded human classification times. The Interaction Method Task (IMT) went beyond abstracts and required mining for associations between more than 3,500 full-text articles and interaction detection method ontology concepts that had been applied to detect the PPIs reported in them. Results: A total of 11 teams participated in at least one of the two PPI tasks (10 in the ACT and 8 in the IMT), and a total of 62 persons were involved either as participants or in preparing data sets/evaluating these tasks. Per task, each team was allowed to submit five runs offline and another five online via the BioCreative Meta-Server. From the 52 runs submitted for the ACT, the highest Matthews Correlation Coefficient (MCC) score measured was 0.55 at an accuracy of 89%, and the best AUC iP/R was 68%. Most ACT teams explored machine learning methods; some of them also used lexical resources like MeSH terms, PSI-MI concepts or particular lists of verbs and nouns, and some integrated NER approaches. For the IMT, a total of 42 runs were evaluated by comparing systems against manually generated annotations done by curators from the BioGRID and MINT databases. The highest AUC iP/R achieved by any run was 53%, and the best MCC score was 0.55. In the case of competitive systems with an acceptable recall (above 35%), the macro-averaged precision ranged between 50% and 80%, with a maximum F-Score of 55%.
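
For reference, the headline ACT metrics (accuracy and the Matthews Correlation Coefficient) can be computed for any binary run as sketched below; the predictions are toy values, not actual challenge submissions.

# Minimal sketch of computing accuracy and MCC for a binary classification run.

from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = PPI-relevant abstract
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy:", accuracy_score(y_true, y_pred))
print("MCC:     ", matthews_corrcoef(y_true, y_pred))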

206 citations