
Showing papers presented at "International Conference on Innovations in Information Technology in 2020"


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, four machine learning models based on four classifiers are built: Naive Bayes, K-Nearest Neighbor, Support Vector Machine, and Decision Trees; using 82,000 records from the UNSW-NB15 dataset, the decision tree model yielded the best overall results, with 99.89% testing accuracy, 100% precision, 100% recall, and 100% F1-score in detecting botnet attacks.
Abstract: With the advancement of computers and technology, security threats are also evolving at a fast pace. Botnets are one such security threat, which requires a high level of research and focus in order to be eliminated. In this paper, we use machine learning to detect botnet attacks. Using the Bot-IoT and University of New South Wales (UNSW) datasets, four machine learning models based on four classifiers are built: Naive Bayes, K-Nearest Neighbor, Support Vector Machine, and Decision Trees. Using 82,000 records from the UNSW-NB15 dataset, the decision tree model yielded the best overall results, with 99.89% testing accuracy, 100% precision, 100% recall, and 100% F1-score in detecting botnet attacks.
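
As a rough illustration of the evaluation reported above (not the paper's code), the four metrics can be computed directly from the binary confusion-matrix counts:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = botnet)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

For a perfect detector, as with the reported 100% precision and recall, the off-diagonal counts fp and fn are zero and every metric is 1.0.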

20 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, a hybrid approach using deep convolutional neural network (CNN) and natural language processing (NLP) was used to classify Acne density, skin sensitivity and to identify the specific acne subtypes.
Abstract: Busy lifestyles these days have led people to forget to drink water regularly, which results in inadequate hydration and oily skin; oily skin has become one of the main factors in Acne vulgaris. Acne vulgaris, particularly on the face, greatly affects a person’s social and mental wellbeing and personal satisfaction, especially for teens. Besides the fact that acne is well known as an inflammatory disorder, it has been reported to cause serious long-term consequences such as depression, scarring, and mental illness, including pain and suicide. In this research work, a smartphone-based expert system named “Cureto” is implemented using a hybrid approach, i.e., using a deep convolutional neural network (CNN) and natural language processing (NLP). The proposed work is designed, implemented, and tested to classify acne density and skin sensitivity and to identify the specific acne subtypes, namely whiteheads, blackheads, papules, pustules, nodules, and cysts. The proposed work not only classifies Acne vulgaris but also recommends appropriate treatments based on the classification, severity, and other demographic factors such as age and gender. The results obtained show that for acne type classification the accuracy ranges from 90%-95%, and for skin sensitivity and acne density the accuracy ranges from 93%-96%.

12 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, the authors compared two different classifiers (Naive Bayes and Decision Tree) for the intrusion detection system on the publicly available dataset and evaluated the classifier performance in terms of accuracy, specificity, recall, precision, f1-score, error rates and response time.
Abstract: Due to the advancement in information exchange over the Internet and mobile technologies, malicious network attacks have significantly increased. Machine learning algorithms can play a vital role in network security and attack classification. This paper compares two different types of classifiers (Naive Bayes and Decision Tree) for an intrusion detection system on a publicly available dataset. Simulations are carried out using the WEKA machine learning tool, and experimentation is performed on the full data and on features selected using a subset evaluator algorithm. Classifier performance is evaluated in terms of accuracy, specificity, recall, precision, F1-score, error rate, and response time. The Naive Bayes classifier performed better in terms of computational time; however, the accuracy, error rate, F1-score, and recall values of the Decision Tree were better than those of Naive Bayes.

10 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, the authors proposed a user-friendly jobs and resources allocation manager for the ML server, which allows users to request and allocate resources on a server and monitor the progress of their tasks.
Abstract: In this paper we propose a user-friendly job and resource allocation manager for ML servers. We introduce some unique features of the designed system, such as protection of users’ sensitive data, automatic cleaning of unused information, securing of the host OS via environment virtualization (containers), and direct access to the container via SSH. The proposed web-based tool allows users to request and allocate resources on a server and monitor the progress of their tasks. It is created to simplify access to servers, particularly ML servers, and to allocate computational resources while satisfying data security concerns. The proposed tool also relieves system administrators from manually allocating resources to users and monitoring progress. The tool is user-friendly and transparent, so that the system administrator and the user can simply view all jobs in progress to find the best allocation for their tasks. The implementation code, deployment instructions, and supplementary files are available at https://github.com/UAEUIRI/DGX1-scheduler.

10 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, the authors proposed CS_IcIoTA, which identifies the application needs and classifies them into diverse categories, and assigns the tasks from these categories either to the local fog nodes or to remotely available rented-cloud-nodes for execution based on the current resource requirements of tasks.
Abstract: Materialization of the Internet of Things (IoT) has exponentially raised the usage of smart devices by individuals and business organizations. Fog computing was introduced to serve the rising needs of IoT applications locally with minimal delay and cost. Based on Quality of Service (QoS) requirements such as data requirements, rate of data updating, and accessing authority of IoT applications, their requests may be processed on locally available fog nodes at low cost or forwarded to globally available rented cloud nodes for processing at higher cost. Hence, there is a key need to optimize the information-centric IoT architecture by classifying the tasks of IoT applications and scheduling them onto the most suitable fog or cloud nodes for processing. The proposed CS_IcIoTA identifies the application needs and classifies them into diverse categories. The scheduler assigns the tasks from these categories either to the local fog nodes or to the remotely available rented cloud nodes for execution, based on the current resource requirements of the tasks. If the computing or storage resources demanded by a task are large and not attainable at the fog nodes, cloud nodes are preferred; otherwise, local fog nodes are used. Three cloud nodes, four fog nodes, and three IoT application domains with a total of 1,500 tasks are considered for the experimental analysis and performance evaluation. Simulation results show that the proposed CS_IcIoTA minimizes the average makespan time and service cost by up to 11.45% and 10.60%, respectively. The proposed CS_IcIoTA also maximizes the average fog node utilization up to 77.83%.
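
The fog-first placement rule described above can be sketched as a simple greedy assignment (an illustration, not the authors' CS_IcIoTA implementation; node names and capacities are hypothetical):

```python
def schedule(tasks, fog_nodes, cloud_node="cloud"):
    """Greedy sketch of the fog-first rule: each task goes to the first fog
    node with enough free capacity, otherwise to the (rented) cloud.
    tasks: list of (task_id, demand); fog_nodes: {node_name: capacity}."""
    free = dict(fog_nodes)
    placement = {}
    for task_id, demand in tasks:
        target = cloud_node
        for node, cap in free.items():
            if demand <= cap:
                free[node] = cap - demand  # reserve fog capacity
                target = node
                break
        placement[task_id] = target
    return placement
```

A task is sent to the rented cloud only when no fog node has enough free capacity, mirroring the cost argument in the abstract.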

9 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, a zero-effort two-factor authentication (2FA) scheme based on something that is in the user's environment (ambient access points) is proposed, where data from broadcast messages are utilized to implement a 2FA scheme by determining whether two devices are proximate or not to ensure that they belong to the same user.
Abstract: Two-factor authentication (2FA) systems are implemented by verifying at least two factors. A factor is something a user knows (a password or phrase), something a user possesses (a smart card or smartphone), something a user is (a fingerprint or iris), something a user does (keystrokes), or somewhere a user is (location). In conventional 2FA systems, a user is required to interact (e.g., by typing a passcode) to complete the second layer of authentication, which is not very user-friendly. Nowadays, smart devices (phones, laptops, tablets, etc.) can receive signals from different radio frequency technologies within range. As these devices move among networks (Wi-Fi access points, cell phone towers, etc.), they receive broadcast messages, some of which can be used to collect information. This information can be utilized in various ways, such as establishing a connection, sharing information, locating devices, and, most appropriately, identifying users in range. The principal benefit of broadcast messages is that devices can read and process the embedded information without being connected to the broadcaster. Moreover, broadcast messages can be received only within range of the wireless access point sending them, thus inherently limiting access to those devices in close physical proximity and facilitating many applications dependent on that proximity. In this paper, 0EI2FA is proposed, a zero-effort two-factor authentication (2FA) scheme based on something that is in the user’s environment (ambient access points). In our research, data from the broadcast messages are utilized to implement a 2FA scheme by determining whether two devices are proximate or not, to ensure that they belong to the same user.
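
The core proximity decision can be sketched as a set-overlap test on the ambient access points each device observes (a simplified stand-in for the paper's scheme; the Jaccard threshold is an assumed parameter):

```python
def proximate(aps_device_a, aps_device_b, threshold=0.5):
    """Toy sketch of the proximity test: two devices are deemed co-located
    when the Jaccard overlap of the ambient access points they observe
    meets a threshold. AP sets could be, e.g., sets of BSSID strings."""
    a, b = set(aps_device_a), set(aps_device_b)
    if not a and not b:
        return False  # no ambient evidence at all
    return len(a & b) / len(a | b) >= threshold
```

Two devices belonging to the same user should observe largely the same broadcast environment, while a remote attacker's device should not.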

7 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the interference regime caused by multiple downlink aerial wireless transmission beams has been highlighted by estimating the UAVs coverage area that is analytically derived in a tractable closed-form expression.
Abstract: Unmanned Aerial Vehicles (UAVs) are considered, nowadays, a futuristic and robust paradigm for 5G wireless networks in terms of providing Internet connectivity services on top of infrastructure cellular networks. In this paper, the interference regime caused by multiple downlink aerial wireless transmission beams is highlighted. This is done by estimating the UAV coverage area, which is analytically derived in a tractable closed-form expression. The rationale of the analysed coverage approach relies on observing and adapting the joint aerial distance between the aerial base stations. This can minimize the intra-overlapped coverage and ultimately maximize the overall coverage performance, better meeting quality-of-service demands. The novelty of our approach brings useful design insights for UAV system-level performance that technically help in aerial coverage computations without the need to perform an aerial deployment setup. To this end, the performance effectiveness of our methodology has been tested under urban propagation environment conditions, in which the original probabilistic channel model approximation has been taken into account. Moreover, this paper identifies the interference issue of such an aerial network as a shrinkage or distortion phenomenon.

7 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, an adoptive batch size is implemented for the diffGrad to overcome the problem of slow convergence, which is an appropriate and enhanced technique that uses fraction constant based on previous gradient information for gradient calculation.
Abstract: Stochastic Gradient Descent (SGD) is a major contributor to the success of deep neural networks. The gradient provides basic knowledge about the function direction and its rate of change. However, SGD changes the step size equally for all parameters irrespective of their gradient behavior. Recently, several efforts have been made to improve the SGD method, such as AdaGrad, RMSprop, Adam, and diffGrad. The diffGrad is an appropriate and enhanced technique that uses a friction constant based on previous gradient information in the gradient calculation. This friction constant decreases the momentum, resulting in slow convergence towards an optimal solution. This paper addresses the slow convergence problem of the diffGrad algorithm and proposes a new adaDiffGrad algorithm. In adaDiffGrad, an adaptive batch size is implemented for diffGrad to overcome the problem of slow convergence. The proposed model is evaluated on image categorization and classification over the CIFAR10, CIFAR100, and FakeImage datasets. The results are compared with state-of-the-art models such as Adam, AdaGrad, diffGrad, RMSprop, and SGD. The results show that adaDiffGrad outperforms the other optimizers and improves the accuracy of diffGrad.
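
For context, a minimal sketch of one diffGrad update for a single scalar parameter, following the published diffGrad rule (an Adam-style update damped by a friction coefficient computed from the change in gradient); hyperparameter values are the usual defaults, not taken from this paper:

```python
import math

def diffgrad_step(theta, g, g_prev, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One diffGrad update for a scalar parameter theta at step t (1-based).
    The friction coefficient xi in [0.5, 1) shrinks the step when the
    current and previous gradients are similar."""
    m = b1 * m + (1 - b1) * g            # first moment, as in Adam
    v = b2 * v + (1 - b2) * g * g        # second moment
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    xi = 1.0 / (1.0 + math.exp(-abs(g_prev - g)))  # diffGrad friction
    theta -= lr * xi * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

adaDiffGrad, as described above, additionally adapts the batch size during training, which is not shown here.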

6 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors present the current panorama of what are the main focus of academic studies in the last 5 years through a literature systematic review, detailing the technologies in use, motivations, what is being monitored and the animal under study.
Abstract: The capabilities of the IoT concept have turned it into a promising game changer in human-animal interactions. Pet owners can already use smart sensors to monitor their animals’ health, location, behavior and/or environment. On the other hand, even after years of this concept being in use, there are still common issues that need to be addressed. In this survey, the author presents, through a systematic literature review, the current panorama of the main focuses of academic studies in the last 5 years, detailing the technologies in use, motivations, what is being monitored, and the animals under study. This work suggests areas of interest, provides relevant data for future researchers, and aims to inspire more work in this field.

6 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, the authors investigated the effectiveness of using a hybrid feature selection method in improving the results of aspect-based sentiment analysis by reducing the number of features, which consists of a one-way analysis of variance to examine the relationship between each feature and classes and the ridge regression that calculates the importance of features together during the learning phase.
Abstract: Sentiment analysis can be applied in many domains given the abundance of views in social networks, including the education sector, reflecting how cultures and nations grow and develop. In this context, aspect-based sentiment analysis, with its two main tasks of aspect detection and aspect-opinion classification, might provide an accurate picture of many educational institutions’ strengths and weaknesses. In this research, a real-world Twitter dataset was collected, containing approximately 7,934 Arabic tweets related to Qassim University in Saudi Arabia. In the text classification task, the high dimensionality problem is usually faced, and feature selection methods contribute to tackling this problem. Accordingly, the purpose of the experimental study is to investigate the effectiveness of using a hybrid feature selection method in improving the results of aspect-based sentiment analysis by reducing the number of features. The proposed hybrid feature selection method consists of a one-way analysis of variance, which examines the relationship between each feature and the classes, and ridge regression, which calculates the importance of features together during the learning phase. Several experiments were conducted to study the effects of the proposed feature selection method on improving the Support Vector Machine classifier’s performance. The experimental results demonstrate that the hybrid method successfully enhances the classifier’s performance in both subtasks.
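
The filter half of the hybrid method, the one-way analysis of variance, can be sketched per feature as follows (the ridge half, which ranks features by learned coefficient magnitude, is omitted; this is an illustration, not the authors' code):

```python
def anova_f(groups):
    """One-way ANOVA F-statistic for a single feature, with the feature's
    values split by class. A large F means the feature's mean differs
    strongly across classes relative to within-class variance, so the
    feature is a good candidate to keep.
    groups: list of lists of feature values, one inner list per class."""
    n = sum(len(g) for g in groups)          # total samples
    k = len(groups)                          # number of classes
    grand = sum(sum(g) for g in groups) / n  # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Features would be ranked by F and the lowest-scoring ones dropped before the ridge-based stage.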

6 citations


Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, the authors employed Long Short Term Memory (LSTM) deep learning approach to predict future prices for low, medium, and high risk stocks, and the results show that their LSTM approach outperforms other traditional approaches for all stock categories over different time periods.
Abstract: The stock market is considered complex, fickle, and dynamic. Undoubtedly, prediction of its price is one of the most challenging tasks in time series forecasting. Traditionally, there are several techniques to effectively predict the next t lags of time series data, such as Logistic Regression and Random Forest. With the recent progression of sophisticated machine learning approaches such as deep learning, new algorithms have been developed to analyze and forecast time series data. This paper employs the Long Short-Term Memory (LSTM) deep learning approach to predict future prices for low, medium, and high risk stocks. To the best of our knowledge, we are proposing an innovative technique to evaluate deep learning and other prediction techniques w.r.t. the stocks’ risk factor. The proposed approach is compared with other traditional algorithms over different periods of training data. The results show that our LSTM approach outperforms other traditional approaches for all stock categories over different time periods. Experimental results illustrate that, for low and medium risk stocks, it is better to use LSTM with a long time period of training data. However, for high risk stocks, a short time period of training data provides more accurate predictions.
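
As a sketch of the standard preprocessing implied above (not the paper's code), a price series is framed into supervised samples before LSTM training; the training-period length directly controls how many samples exist, which is what the short-versus-long training-period comparison varies:

```python
def make_windows(prices, lag):
    """Frame a price series as supervised (X, y) pairs for a sequence model:
    each sample is `lag` consecutive prices and the target is the next one."""
    X, y = [], []
    for i in range(len(prices) - lag):
        X.append(prices[i:i + lag])
        y.append(prices[i + lag])
    return X, y
```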

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors classify gamification into three categories: achievement, social, and immersion (ASI), and identify 27 gaming elements that have been used in a MOOC context and are mapped to the aforementioned categories.
Abstract: The growing participation of educational institutions in the movement of massive open online courses (MOOCs) generates a great multitude of opportunities and challenges. Firstly, the high dropout rate and the lack of learners’ motivation are significant issues in MOOCs, which cast doubt on the quality of teaching and learning. Secondly, MOOC users’ diversity creates problems and barriers for MOOC providers in designing successful and effective courses that will suit all types of users. It is still unclear how to reach the various types of MOOC participants. In this regard, gamification is proposed as a complement to existing learning approaches to provide learners with a powerful and motivational learning experience. To decrease the high dropout rates in MOOCs and to engage learners and improve their motivation by adopting gamification in education, this study classifies gamification into three categories: achievement, social, and immersion (ASI). Furthermore, 27 gaming elements that have been used in a MOOC context are identified and mapped to the aforementioned categories, in order to help MOOC developers and designers better understand how the various gamification categories can influence users.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, a review of image-based steganalysis schemes in the transform domain using colored images is presented, and the evaluation metrics and datasets used are discussed.
Abstract: Image-based applications are utilized on many platforms nowadays, such as medical applications, smart cities, social media, etc. The security of these applications is of great importance. Advanced steganography techniques can hide messages, possibly malicious, in innocent cover images. Steganography is a class of data hiding techniques that deals with concealing the existence of secret communication between two entities. Image steganalysis, the reverse of steganography, is concerned with detecting the existence of stego images and therefore exposing the secret communication between these entities. It can be used in police systems and other fields. Conventional steganalysis is divided into two distinct phases: manual feature extraction and classification employing either Ensemble Classifiers or Support Vector Machines. With the evolution of Deep Learning techniques, combining these two phases has become feasible and yields even better results than conventional steganalysis. In this paper, a review of image-based steganalysis schemes in the transform domain using colored images is presented, and the evaluation metrics and datasets used are discussed.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, a manufacturer-independent digital process platform for construction sites of the future is presented, along with a proof of concept for the platform along the platform use-case "static geo-fencing for construction machines" and conduct an initial evaluation of that exemplary use case.
Abstract: Today’s construction sites are still at the beginning of a continuous digital transformation process. Among others, this is due to complex site environments and an existing lack of inter-operability between various site components such as the involved machinery. Consequently, this lack leads to direct implications on the efficiency, quality as well as the production time of construction projects. In order to contribute to filling this gap, we develop a manufacturer-independent digital process platform for construction sites of the future. This paper presents first results after the successful software implementation of the digital process platform. Along the real-world use-case ‘road construction’, we highlight the need for the utilization of Big Data technologies and present the implemented communication infrastructure as well as the user interface of our digital process platform. Finally, we show a proof of concept for the platform along the platform use-case ‘static geo-fencing for construction machines’ and conduct an initial evaluation of that exemplary use-case.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors discuss the impact of amount of noise introduction in the original data, the relation between the added noise in the data, data utility, and the effect of data leakage to breach of privacy.
Abstract: With the advancement of IoT and its application in various domains, a huge amount of data is available and stored in databases both locally and on the cloud. Health care is one such domain. Patient medical data is stored, and usually the curator of the data is responsible for ensuring the privacy of the patient. Privacy becomes the major concern in a scenario where the data is shared with a third party for further analysis or research purposes. To avoid any privacy breach in such or similar scenarios, this article discusses the differential privacy approach. The major focus remains on exploiting the unique properties of differential privacy and its application to healthcare data. We discuss the impact of the amount of noise introduced into the original data, the relation between the added noise and data utility, and the effect of data leakage on breach of privacy.
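
The noise-versus-utility trade-off discussed above is typically realized with the Laplace mechanism: noise with scale sensitivity/ε is added to a query answer, so a smaller ε (stronger privacy) means larger expected noise and lower utility. A minimal sketch (illustrative, not the article's exact mechanism):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via the inverse CDF of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Differentially private count query: the Laplace mechanism adds noise
    with scale sensitivity/epsilon, hiding any single patient's presence."""
    return true_count + laplace_noise(sensitivity / epsilon)
```

For a counting query the sensitivity is 1, since adding or removing one patient changes the count by at most 1.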

Proceedings ArticleDOI
17 Nov 2020
TL;DR: A literature survey of the deep learning approaches used in image captioning is presented in this article, where the authors discuss the datasets used, translation approaches, evaluation metrics, and the results for each method.
Abstract: This paper describes a literature survey of the deep learning approaches used in image captioning. Approaches are discussed based on four main categories: model architecture, attention mechanism, image model, and language model. Most current research focuses on generating captions in English, leaving a gap in research for other languages, especially Arabic. Therefore, we highlight the available research and approaches used to generate captions in Arabic. We discuss the datasets used, translation approaches, evaluation metrics, and the results for each method. We conclude the survey by proposing some possible future directions for Arabic image captioning.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, the authors proposed a blockchain technology enabled smart contract approach for ensuring the security of and trust in the trade ecosystem, which is a promising solution for decentralized and distributed cross border trade business.
Abstract: The majority of current businesses are interested in building communities by collaborating with each other to solve common business problems, forming decentralized peer-to-peer networks. International trade is one such industry striving to work in collaboration. The multiple entities involved, such as buyers, sellers, service providers, and regulators, want to work together but have major trust and security concerns. Such applications are distributed in nature and therefore require distributed control and security mechanisms. Current practical security solutions take a centralized approach, so they might be inefficient for these applications. Blockchain technology is distributed in nature. Prominent features of blockchain, such as distributed ledger technology and smart contracts, make it a promising solution for decentralized and distributed cross-border trade business. The aim of this paper is to identify the pain points of the global trade system with respect to security and trust, and to provide a solution using blockchain technology, considering the Letter of Credit as a method of trade finance. We propose a blockchain-enabled smart contract approach for ensuring security of and trust in the trade ecosystem.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, the authors proposed base-centric and user-centric clustering algorithms that are based on thresholding and sorting processes to maximize the total sum-rate in a wireless communication system.
Abstract: The main goal of this paper is to model an optimization problem to maximize the total sum-rate in a wireless communication system. We solve it by proposing base-centric and user-centric clustering algorithms that are based on thresholding and sorting processes. In the system model, each user can select just one base-station to communicate with, while each base-station can serve one or more users in its coverage area simultaneously. First, by proposing a thresholding phase, we limit the search region by removing those users who are far from all base-stations. This phase easily changes a four-constraint problem into a three-constraint optimization problem. Then, to show the superiority of the proposed solutions, the numerical analyses are compared to the popular K-means and Voronoi clustering algorithms, in terms of the number of supported users and the achieved average sum-rate per user for different numbers of resources at each base-station.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: Differential Directional Pixogram (DDP) as discussed by the authors was proposed to exploit the temporal redundancy in a video segment by extracting a vector of pixels in the direction of the motion in each scene.
Abstract: A video signal is a collection of images (frames) presented successively at a constant rate. To have a meaningful video segment, the images that compose the video segment should be related to each other. Therefore, the time domain of a video segment is a great source of data redundancy. Utilizing data redundancy is the driving force for data hiding schemes. This paper proposes the Differential Directional Pixogram (DDP) scheme to optimally exploit the temporal redundancy in a video segment. The idea is to extract a vector of pixels in the direction of the motion in each scene. Thus, it is expected that this vector will be composed of homogeneous pixels. The secret pixels are then embedded in the Discrete Cosine Transform (DCT) coefficients, since the DCT has a powerful ability to express a highly correlated signal within a few significant DCT coefficients, leaving a large number of insignificant DCT coefficients. As a result, the proposed DDP scheme is able to hide a substantial amount of secret data while achieving outstanding stego quality compared to competitive video steganography schemes.
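
A toy version of the embedding step can be sketched as quantization-based hiding in a DCT coefficient of a pixel vector (illustrative only: the coefficient index and step size are assumed parameters, and the actual DDP scheme operates on motion-direction vectors across frames):

```python
import math

def dct(x):
    """Orthonormal DCT-II of a pixel vector."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(c * s)
    return out

def idct(X):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    N = len(X)
    return [X[0] / math.sqrt(N)
            + sum(math.sqrt(2 / N) * X[k]
                  * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                  for k in range(1, N))
            for n in range(N)]

def embed_bit(x, bit, coeff=1, step=4.0):
    """Hide one bit by quantising a chosen DCT coefficient to an even
    (bit 0) or odd (bit 1) multiple of `step`."""
    X = dct(x)
    q = round(X[coeff] / step)
    if q % 2 != bit:
        q += 1
    X[coeff] = q * step
    return idct(X)

def extract_bit(x, coeff=1, step=4.0):
    """Recover the hidden bit from the coefficient's parity."""
    return round(dct(x)[coeff] / step) % 2
```

Because the pixel vector is expected to be homogeneous, most energy sits in the first coefficients and small changes to one coefficient cause little visible distortion.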

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, a systematic literature review is conducted to study the implementation of deep learning algorithms in modulation classification, from four perspectives including: deep learning techniques/models, performance metrics used, deep learning models adopted, and types of modulation classification classified.
Abstract: The implementation of automatic modulation classification originated in the military field several years ago. Threat analysis, surveillance, and warfare were of great concern, which urged the study of the application of automatic modulation classification for signal recognition. In this research survey, a systematic literature review is conducted to study the implementation of deep learning algorithms in modulation classification. Our survey conducts the study from four perspectives: deep learning techniques/models, performance metrics used, deep learning models adopted, and types of modulation classified.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors proposed three layers of authentication based on user ID and password, silent signals, and biometrical data, using supervised ML to determine the user's risk level, whenever the risk level is higher than some threshold, an additional verification is required.
Abstract: Adding machine learning (ML) and artificial intelligence (AI) logic models to authentication is an inevitable process. In this work, we show that the combination of qualitative and quantitative verification over a model created on training data may significantly reduce false access probability, even if the user’s credentials (ID and password) are compromised. We propose three layers of authentication based on user ID and password, silent signals, and biometric data. The system uses supervised ML to determine the user’s risk level. The basic model and associated implementation performance show that we can, with high probability, identify an intruder based on silent signals, historical data, and behavioural biometrics. The system is compositional, so further improvement by introducing more silent signals and behavioural analytics can, theoretically, eliminate false acceptance. Whenever the risk level is higher than some threshold, additional verification is required. The threshold may increase over time, in which case the probability of additional verification for a legitimate user decreases.
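
The risk-threshold logic can be sketched as a weighted score over silent signals (the signal names, weights, and threshold here are hypothetical, not taken from the paper):

```python
def risk_level(signal_scores, weights):
    """Weighted risk score over silent signals, each scored in [0, 1].
    Signal names and weights are illustrative placeholders."""
    return sum(weights[s] * v for s, v in signal_scores.items())

def needs_extra_verification(signal_scores, weights, threshold=0.5):
    """Step-up authentication fires only above the risk threshold, so a
    legitimate user usually logs in with zero extra effort."""
    return risk_level(signal_scores, weights) > threshold
```

A trained supervised model would replace the fixed weights; the thresholding step stays the same.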

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, an intelligent approach for the diagnosis of maize diseases is proposed which is capable of working over smartphone devices and is capable enough to communicate with the farmers in Egypt in the Arabic language and assist them in diagnosing maize diseases.
Abstract: In this paper, an intelligent approach for the diagnosis of maize diseases is proposed which is capable of working on smartphone devices. The system is capable of communicating with farmers in Egypt in the Arabic language and assisting them in diagnosing maize diseases. We have developed it to support farmers in protecting their crops. On this basis, the system provides suggestions on which disorders may affect the crops. The system uses a rule-based classifier technique to provide suggestions to farmers, and its inference method utilizes the uniqueness of the rule conditions. The classifier is constructed based on experts’ inputs. The output of the diagnosis process is a suspected-diseases list and a confirmed-diseases list. The content of the confirmed-diseases list changes according to the additional information added to the system by the end-user.
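
The suspected/confirmed split described above can be sketched with set matching over rule conditions (the rules below are hypothetical, not the experts' actual rule base):

```python
def diagnose(symptoms, rules):
    """Sketch of a rule-based classifier: a disease is 'suspected' when some
    of its rule conditions match the reported symptoms, and 'confirmed'
    when all of them do.
    symptoms: set of observed symptom strings; rules: {disease: set of conditions}."""
    suspected, confirmed = [], []
    for disease, conditions in rules.items():
        matched = symptoms & conditions
        if matched == conditions:
            confirmed.append(disease)  # every condition satisfied
        elif matched:
            suspected.append(disease)  # partial match: ask for more information
    return suspected, confirmed
```

As the end-user supplies additional information, more conditions match and diseases migrate from the suspected list to the confirmed list.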

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, a variational autoencoder (VAE) was proposed to generate denovo drug compounds by feeding in a known set of drug compounds against a particular biological target.
Abstract: A variational autoencoder (VAE) is a generative deep learning model that encodes a particular observation into a latent space and then decodes it while incorporating some random noise, with the intuition of being able to generate slightly different forms of the input observation. Here, I propose a VAE that is able to generate de novo drug compounds by feeding in a known set of drug compounds against a particular biological target. To demonstrate the ability of the proposed VAE as a generative model, known drug molecules belonging to the class of Neuraminidase (NA) inhibitors were taken from the ZINC database and fed into the VAE model as one-hot encoded SMILES strings. Similarly, active NA inhibitors and decoy molecules together were also fed in to compare efficiency. The generated molecules were then screened to remove impractical structures. Next, a drug-likeness (QED) score was computed for each candidate molecule, and a cutoff of 0.5 was used to extract viable candidates. To ensure that the generated drug compounds were active NA inhibitors, a series of artificial neural network (ANN) classifiers based on three different characterization techniques, namely chemical fingerprints, molecular descriptors, and graph convolutions, were developed to distinguish active NA inhibitors from decoy molecules. The featurized candidate data were then fed into the three designed ANNs to obtain the final set of novel, viable, and active NA inhibitors. Seventy-one new NA inhibitors were obtained after three runs of the VAE model under different parameterizations. The proposed VAE can hence be used to generate de novo drug compounds for a wide variety of biological targets.
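
The one-hot SMILES encoding that feeds the VAE can be sketched as below. The character alphabet and padding length here are illustrative choices, not the paper's actual vocabulary.

```python
# Illustrative character alphabet covering a few common SMILES symbols;
# a space is included as the padding character.
ALPHABET = sorted(set("CcNnOo()=#123[]+-S "))
CHAR_TO_IDX = {ch: i for i, ch in enumerate(ALPHABET)}

def one_hot_smiles(smiles, max_len=40):
    """Pad with spaces to max_len, then one-hot encode over the alphabet."""
    padded = smiles.ljust(max_len)[:max_len]
    return [[1 if CHAR_TO_IDX[ch] == j else 0 for j in range(len(ALPHABET))]
            for ch in padded]
```

Each SMILES string becomes a `max_len × |ALPHABET|` binary matrix with exactly one 1 per row, the fixed-size input a VAE encoder expects.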

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, a rhythm-based investigation of Saudi speech dialects using a corpus called Saudi Accented Arabic Voice Bank (SAAVB) using a number of rhythm metrics, namely, $\Delta {\mathbf{V}}, \Delta$C, and %V, was investigated.
Abstract: In this study, we completed our previous work on the rhythm-based investigation of Saudi speech dialects using a corpus called the Saudi Accented Arabic Voice Bank (SAAVB). In this continuation paper, the speech rhythm of the remaining four regions, namely the Eastern Region (ER), Madinah, Hail, and Asier, was investigated. Analyses of the past and current results for all nine dialectal regions were conducted. The relations between different interval measures (IMs) and their capabilities of classifying the native origins of speakers, their gender (male/female), and the type of sentence (question or statement) are studied. The SAAVB corpus was used for all experiments. It contains a set of Arabic speech that represents native Arabic speakers from all the cities around Saudi Arabia. A number of rhythm metrics, namely $\Delta$V, $\Delta$C, and %V, are calculated and studied in detail. The results show that the $\Delta$V metric can differentiate between question and statement sentences. Moreover, %V detects a significant difference between male and female speakers. The results of the study demonstrate the uniqueness of the Riyadh dialect; in other words, the Riyadh, ER, and Asier dialects are different from all other investigated dialects. In addition, a high similarity is observed between the dialects of Madinah and Hail. These two dialects are geographically neighbouring and both are located in the north of Saudi Arabia. Finally, a conclusion and general discussion of speech rhythm metrics is provided.
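
The three interval measures have compact standard definitions, sketched below: %V is the vocalic share of total duration, while $\Delta$V and $\Delta$C are the standard deviations of vocalic and consonantal interval durations. The durations used to exercise the code are made-up, and whether a population or sample standard deviation is used varies across the rhythm-metrics literature; this sketch assumes the population form.

```python
from statistics import pstdev  # population standard deviation

def rhythm_metrics(vocalic, consonantal):
    """vocalic/consonantal: lists of interval durations (seconds).
    Returns (%V, deltaV, deltaC)."""
    total = sum(vocalic) + sum(consonantal)
    percent_v = 100.0 * sum(vocalic) / total
    return percent_v, pstdev(vocalic), pstdev(consonantal)
```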

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, a fusion query expansion framework was proposed to mitigate the query mismatch problem in biomedical literature retrieval by incorporating and re-weighting additional similar terms in the original query.
Abstract: The explosive growth of biomedical literature has made it difficult for biomedical scientists to locate precise articles and keep up to date with the latest knowledge. In biomedical literature retrieval (BLR), the heterogeneity of medical terminologies and jargon leads to query mismatch (QM). Query expansion approaches significantly alleviate query mismatch by incorporating and re-weighting additional similar terms in the original query. Reliance on medical ontologies to alleviate QM has garnered significant attention in biomedical literature retrieval. However, sole reliance on these ontologies is not sufficient to retrieve relevant results. In light of the foregoing, in this article, we design and implement a fusion query expansion framework that integrates clinical diagnosis information (CDI) with medical ontologies (MOs) to mitigate the query mismatch problem. In the proposed system, we explore the top three MOs (MeSH, UMLS, SNOMED CT) to select candidate expansion terms. The outcomes of the ontologies are then integrated with clinical diagnosis information predicted from unstructured knowledge bases to obtain the best query combination, leading to more focused BLR. The experimental results obtained on the Text REtrieval Conference (TREC) Clinical Decision Support (CDS) dataset show that this fusion QE framework performed significantly better when CDI and the MeSH ontology were used jointly to retrieve articles. Furthermore, our results demonstrate the notable ability of the proposed framework to help search engines mitigate QM in biomedical literature retrieval. We expect our proposed approach will assist investigators in using this query combination to retrieve relevant articles.
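
One plausible way to fuse candidate expansion terms from several sources is an additive re-weighting, sketched below: a term that more than one source proposes accumulates weight and ranks higher. The source names, terms, and weights are invented examples; the paper's actual fusion and re-weighting scheme may differ.

```python
def fuse_expansion(original_query, candidates, top_k=3):
    """candidates: {source_name: {term: weight}}. Terms proposed by multiple
    sources accumulate weight; the top_k terms are appended to the query."""
    scores = {}
    for source_terms in candidates.values():
        for term, w in source_terms.items():
            scores[term] = scores.get(term, 0.0) + w
    expansion = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return original_query + [t for t in expansion if t not in original_query]
```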

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors used fixed block segmentation to create several unique homogeneous regions of secret data and mapped them onto the corresponding hiding areas of the high-frequency DCT coefficients of the cover image.
Abstract: A common practice in several steganography schemes is to replace the insignificant Discrete Cosine Transform (DCT) coefficients with the secret data. Scaling is an important step in image-based steganography schemes, where the secret image data is scaled down to blend closely with the small high-frequency DCT coefficients of the cover image. Scaling affects the stego image quality. Previous work either improved stego quality at the cost of low capacity or improved capacity at the cost of low stego quality. The proposed technique uses fixed block segmentation to create several unique homogeneous regions. The homogeneous regions of the secret image are mapped onto the corresponding hiding areas of high-frequency DCT coefficients of the cover image. Each homogeneous region of secret data has a corresponding single scale. The scaling factor value for each region is determined by finding a single value for the corresponding secret homogeneous region and the DCT coefficients in the cover using the mean scale. The proposed methodology gives better stego image quality while keeping a high hiding capacity. Results compared with fixed scaling show an improvement in stego quality for a high fixed hiding capacity.
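
The per-region "mean scale" step might look like the sketch below: each homogeneous secret region gets a single scale factor derived from its mean and the mean magnitude of the target high-frequency DCT coefficients. This is only one interpretation of the mean-scale computation, since the abstract does not give the exact formula; the values used to exercise it are made-up.

```python
def region_scale(secret_region, cover_coeffs):
    """One scale factor mapping the region's mean intensity to the mean
    magnitude of the cover's high-frequency DCT coefficients (assumed form)."""
    mean_secret = sum(secret_region) / len(secret_region)
    mean_cover = sum(abs(c) for c in cover_coeffs) / len(cover_coeffs)
    return mean_cover / mean_secret if mean_secret else 0.0

def scale_region(secret_region, cover_coeffs):
    """Scale every pixel of a homogeneous region by its single region scale."""
    s = region_scale(secret_region, cover_coeffs)
    return [p * s for p in secret_region]
```

Using one scale per homogeneous region, rather than one fixed scale for the whole secret image, is what lets the scheme keep the scaled data close to the small cover coefficients.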

Proceedings ArticleDOI
17 Nov 2020
TL;DR: This paper introduces a segmentation method that exploits the internal statistics of input image without using any prior labeled data and is the first unsupervised deep learning-based method proposed for single-image flower segmentation.
Abstract: Segmentation plays an important role in image-based plant phenotyping applications. Deep learning has led to a dramatic improvement in segmentation performance. Most deep learning-based methods are supervised and require abundant application-specific training data. Considering the wide range of plant phenotyping applications, such data may not always be available. To mitigate this problem, we introduce a segmentation method that exploits the power of deep learning without using any prior training. In this paper, we specifically focus on flower segmentation. The recurrence of information inside a flower image is used to train an image-specific deep network that is subsequently used for segmentation. The proposed method is self-supervised, as it exploits the internal statistics of the input image without using any prior labeled data. To the best of our knowledge, this is the first unsupervised deep learning-based method proposed for single-image flower segmentation.

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this article, a novel approach to locate stranded victims in flooded scenarios using an autonomous unmanned surface vehicle (USV) by detecting the distress sounds using the Time Difference of Arrival principle with the help of a microphone array to detect victims.
Abstract: This paper presents a novel approach to locating stranded victims in flooded scenarios using an autonomous unmanned surface vehicle (USV). Distress sounds are detected with a microphone array and localized using the Time Difference of Arrival principle. The paper also presents a noise reduction algorithm that reduces the number of false-positive detections, thus increasing detection accuracy. A path planning algorithm that produces a time- and energy-efficient path for the USV to follow, enabling shorter mission times, increased operational range, and operational excellence, is also discussed.
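
For one microphone pair, the Time Difference of Arrival can be estimated from the peak of the cross-correlation, as in the brute-force sketch below. This illustrates only the TDOA principle; the paper's actual array processing and noise reduction are not reproduced, and the signals in the test are synthetic impulses.

```python
def tdoa(sig_a, sig_b, sample_rate):
    """Delay (seconds) of sig_b relative to sig_a; positive means sig_b lags.
    Found by maximizing the cross-correlation over all integer lags."""
    n = len(sig_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # Correlate sig_a[i] against sig_b[i + lag] over the valid overlap.
        val = sum(sig_a[i] * sig_b[i + lag]
                  for i in range(max(0, -lag), min(n, n - lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag / sample_rate
```

With the delays from several microphone pairs and the known array geometry, the sound source direction can then be triangulated.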

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors measured citizen engagement with different topics within the official Twitter account of Mohammed Bin Rashid and classified the tweets into five categories, including political engagement, achievements, development, and shared values.
Abstract: Citizen engagement is one of the main concepts of smart governments. In this paper, we measure citizen engagement with different topics within the official Twitter account of Mohammed Bin Rashid. We streamed the account's tweets and classified them into five categories, which were identified after a descriptive analysis of the total tweets. We used the retweetability rate to measure citizen engagement and the Pearson chi-square test to measure the relationship between citizen retweets and Mohammed Bin Rashid's tweets in four categories, namely political engagement, achievements, development, and shared values. Our results showed a correlation between tweet content and the number of retweets in the development category only. Additionally, we found that the other categories did not motivate citizen engagement.
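
The Pearson chi-square test of independence used here compares observed counts in a category-by-retweet contingency table against the counts expected under independence. The statistic can be sketched in a few lines; the counts passed in below are made-up, not the study's data.

```python
def chi_square_stat(table):
    """table: rows of observed counts. Returns the Pearson chi-square
    statistic sum((observed - expected)^2 / expected)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat
```

The statistic is then compared against the chi-square distribution with (rows−1)(cols−1) degrees of freedom to decide whether category and retweet behaviour are related.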

Proceedings ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors compare single and double energy detection algorithms in terms of the characteristics, performance, and simulation results, and analyze the noise effect on the energy detection approach and present opportunistic potential solutions such as Machine Learning (ML) and Graph Signal Processing (GSP) techniques.
Abstract: Spectrum scarcity has been one of the most conspicuous constraints in networking amidst the growing demand for 5G and Internet of Things (IoT) services. Consequently, cognitive radio networks (CRNs) present a reliable solution to counter the current spectrum limitations. Many spectrum sensing techniques have been integrated with CRNs to address the spectrum shortage by detecting the frequency band and investigating the status of the primary user (PU). Energy detection approaches are perceived as among the most effective solutions for spectrum sensing, as they can operate in a non-coherent manner. Energy detection works with a single or a double threshold. Each threshold plays a fundamental role, taking into account the stochastic properties of the detected signal and, consequently, the calculation of the optimal energy level. Despite various research efforts towards improving the energy detection scheme, the proposed algorithms still struggle to cope with noise uncertainty and the signal-to-noise ratio (SNR) wall. The main contribution of this paper is the implementation and comparison of single (conventional) and double energy detection algorithms in terms of their characteristics, performance, and simulation results. Finally, we analyze the effect of noise on the energy detection approach and present potential solutions such as machine learning (ML) and graph signal processing (GSP) techniques.
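
The single- and double-threshold schemes compared in the paper can be sketched as below: the test statistic is the average energy of the sensed samples, and the double-threshold variant adds an "uncertain" region between the two thresholds where noise uncertainty dominates. The threshold values and sample values used to exercise the code are arbitrary.

```python
def energy(samples):
    """Test statistic: average energy of the sensed samples."""
    return sum(s * s for s in samples) / len(samples)

def detect_single(samples, thr):
    """Conventional single-threshold detector: True = primary user present."""
    return energy(samples) > thr

def detect_double(samples, thr_low, thr_high):
    """Double-threshold detector: 'present', 'absent', or 'uncertain'
    (the region between the thresholds, where a further decision is needed)."""
    e = energy(samples)
    if e > thr_high:
        return "present"
    if e < thr_low:
        return "absent"
    return "uncertain"
```

Samples falling in the uncertain region are typically resolved by cooperative sensing or further observation, which is one way double-threshold schemes cope with the SNR wall.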