Bio: Majdi Rawashdeh is an academic researcher from Princess Sumaya University for Technology. The author has contributed to research in topics: Social media & Context (language use). The author has an h-index of 13, co-authored 46 publications receiving 478 citations. Previous affiliations of Majdi Rawashdeh include New York University Abu Dhabi & University of Ottawa.
TL;DR: Highlights the potential of including a user's biological signal and leveraging it within an adapted collaborative filtering algorithm, and proposes a recommendation algorithm that uses the biosignal to improve user experience and satisfaction.
Abstract: With the rapid increase of social media resources and services, Internet users are overwhelmed by the vast quantity of social media available. Most recommender systems personalize multimedia content to users by analyzing two main dimensions of input: content (item) and user (consumer). In this study, we address the issue of how to improve the recommendation and the quality of the user experience by analyzing the contextual aspect of users at the time when they wish to consume multimedia content. Mainly, we highlight the potential of including a user's biological signal and leveraging it within an adapted collaborative filtering algorithm. First, the proposed model utilizes existing online social networks by incorporating social tags and rating information in ways that personalize the search for content in a particular detected context. Second, we propose a recommendation algorithm that uses a biosignal in the recommendation process to improve user experience and satisfaction. Our experimental results show the feasibility of personalizing the recommendation according to the user's context, and demonstrate some improvement in cold-start situations, where relatively little information is known about a user or an item.
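A minimal sketch of the context-filtered collaborative-filtering idea described in this abstract. The data, context labels, and heart-rate threshold are illustrative assumptions, not values or names from the paper:

```python
from collections import defaultdict

# (user, item, context) -> rating; the context label is derived from a
# biosignal, e.g. "relaxed" vs. "active" inferred from heart rate.
# All names and thresholds below are illustrative.
ratings = {
    ("u1", "songA", "relaxed"): 5,
    ("u1", "songB", "active"): 4,
    ("u2", "songA", "relaxed"): 4,
    ("u2", "songC", "relaxed"): 5,
}

def detect_context(heart_rate_bpm):
    """Map a raw biosignal reading to a coarse context label."""
    return "active" if heart_rate_bpm > 100 else "relaxed"

def recommend(user, heart_rate_bpm, top_n=2):
    """Rank unseen items by the mean rating other users gave them
    in the same detected context."""
    ctx = detect_context(heart_rate_bpm)
    scores = defaultdict(list)
    for (u, item, c), r in ratings.items():
        if c == ctx and u != user:
            scores[item].append(r)
    seen = {i for (u, i, c) in ratings if u == user}
    ranked = sorted(
        ((sum(rs) / len(rs), item) for item, rs in scores.items() if item not in seen),
        reverse=True,
    )
    return [item for _, item in ranked[:top_n]]

print(recommend("u1", heart_rate_bpm=72))  # context detected as "relaxed"
```

The biosignal only gates which slice of the rating data the collaborative filter sees; any standard neighborhood or matrix-factorization method could replace the mean-rating scorer.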
TL;DR: Focuses on facilities of a smart class environment for engineering education that can improve the co-learning procedure, particularly the use of 3G and 4G technology in a smart class.
Abstract: An annotation facility on slides, for both students and educators, is explored. Live broadcast of a lecture to multiple classes is realized by real-time mixing. Students are highly in favor of the annotation facility. High internet speed is required for live broadcast using real-time mixing. The availability of 3G and 4G technologies, coupled with tablet PCs and smartphones, makes communication and learning easier. In engineering education, these sophisticated technologies can bring immense advantages to the methods of delivering and acquiring knowledge. In this paper, we focus on facilities of a smart class environment for engineering education that can improve the co-learning procedure. We particularly emphasize the use of 3G and 4G technology in a smart class.
TL;DR: A new recommendation model is proposed that personalizes recommendations and improves the user experience by analyzing the context when a user wishes to access multimedia content, using latent preferences to rank items under a given context.
Abstract: Context-aware recommendations offer the potential of exploiting social content and utilizing related tags and rating information to personalize the search for content under a given context. Recommendation systems tackle the problem of identifying relevant resources among the vast number of choices available online. In this study, we propose a new recommendation model that personalizes recommendations and improves the user experience by analyzing the context when a user wishes to access multimedia content. We conducted an empirical analysis on a dataset from last.fm to demonstrate the use of latent preferences for ranking items under a given context. Additionally, we use an optimization function to maximize the mean average precision of the resulting recommendations. Experimental results show a potential improvement in recommendation quality, in terms of accuracy, when compared with state-of-the-art algorithms.
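The mean average precision (MAP) measure that the abstract's optimization targets can be sketched as follows; the ranking data here is invented purely for illustration:

```python
def average_precision(ranked_items, relevant):
    """AP for one ranked list: precision at each hit, averaged over
    the number of relevant items."""
    hits, score = 0, 0.0
    for k, item in enumerate(ranked_items, start=1):
        if item in relevant:
            hits += 1
            score += hits / k
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(rankings):
    """rankings: list of (ranked_items, relevant_set) pairs, one per user."""
    return sum(average_precision(r, rel) for r, rel in rankings) / len(rankings)

# "a" is relevant at rank 1, "c" at rank 3: AP = (1/1 + 2/3) / 2
ap = average_precision(["a", "b", "c", "d"], {"a", "c"})
print(ap)
```

Because AP rewards placing relevant items near the top, maximizing MAP pushes the learned latent preferences toward rankings that surface context-appropriate items first.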
TL;DR: In this article, the authors propose a framework that provides up-to-date information on COVID-19 patients in the vicinity, and thus identifiable data for remote monitoring of locality cohorts, for early detection of COVID-19 based on an ontology method.
Abstract: The Internet of Things (IoT) is the most promising technology in health technology systems. IoT-based systems ensure continuous monitoring in indoor and outdoor settings. Remote monitoring has revolutionized healthcare by connecting remote and hard-to-reach regions. Specifically, during the COVID-19 pandemic, it is imperative to have a remote monitoring system to assess patients remotely and curb the spread of the disease at an early stage. This paper proposes a framework that provides up-to-date information on COVID-19 patients in the vicinity and thus provides identifiable data for remote monitoring of locality cohorts. The proposed model is an IoT-based, remote-access, alarm-enabled bio-wearable sensor system for early detection of COVID-19, based on an ontology method using sensory 1D biomedical signals such as ECG, PPG, temperature, and accelerometer data. The proposed ontology-based remote monitoring system also analyzes the accompanying security and privacy challenges. The proposed model is simulated using the Cooja simulator. During the simulation, it is observed that the proposed model achieves an accuracy of 96.33%, which establishes the efficacy of the proposed model. The effectiveness of the proposed model is further strengthened by efficient power consumption.
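A minimal sketch of the alarm-style check a bio-wearable system like the one described might run over incoming vitals. The field names and threshold ranges are illustrative assumptions, not clinical values or rules from the paper:

```python
# Illustrative normal ranges only; a real system would use clinically
# validated thresholds and the paper's ontology-based reasoning.
NORMAL_RANGES = {
    "temperature_c": (36.1, 37.5),
    "heart_rate_bpm": (60, 100),
    "spo2_pct": (95, 100),
}

def check_reading(reading):
    """Return the list of vitals that fall outside their normal range,
    i.e. the signals that would trigger a remote alarm."""
    alerts = []
    for vital, (lo, hi) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (lo <= value <= hi):
            alerts.append(vital)
    return alerts

print(check_reading({"temperature_c": 38.2, "heart_rate_bpm": 88, "spo2_pct": 93}))
```

In the paper's architecture this kind of rule would sit behind ontology-driven inference rather than a flat threshold table; the sketch shows only the alarm-gating step.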
TL;DR: A model to improve Activity Recognition in smart homes is proposed based on defining a profile for each activity from training datasets, which will be used to induce extra features and will help in distinguishing residents’ activities (fingerprinting).
Abstract: The Internet of Things (IoT) is a technology for seamlessly connecting a large number of small end devices and enabling the development of many smart applications to control different aspects of our lives, shifting us ever closer to living in a smart city. IoT makes it possible to convert our homes into smart environments in which sensors are responsible for capturing inhabitants' behaviours and monitoring their daily activities. Activity Recognition (AR) is a new service within smart homes. It has been introduced as a solution to improve the quality of life of people such as the elderly and children. AR is concerned with the assignment of an activity label to a sequence of sensor events generated from the smart infrastructure. To help recognize home activities effectively, classification algorithms are applied to segmented sequences that are extracted automatically. Segments are subject to error due to the existence of irrelevant data and difficulties in how segmentation is applied. This negatively affects the accuracy of the classification task. In addition, the data generated from the network is streaming in nature, and big data techniques need to be utilized. In this paper, we propose a model to improve Activity Recognition in smart homes. The proposed technique is based on defining a profile for each activity from training datasets. The profile is used to induce extra features that help distinguish residents' activities (fingerprinting). To validate our model, real datasets have been used for the experiments, and the results show a significant enhancement in accuracy compared with traditional techniques.
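The profile-based "fingerprinting" idea can be sketched as follows. The sensors, activities, and the overlap-based similarity used as the extra feature are illustrative assumptions, not the paper's actual method:

```python
from collections import Counter

# Labeled training segments: (activity, [sensor events]); invented data.
training = [
    ("cooking", ["stove", "fridge", "cupboard", "stove"]),
    ("cooking", ["stove", "sink", "cupboard"]),
    ("sleeping", ["bed_pressure", "bedroom_light"]),
]

def build_profiles(segments):
    """Profile = normalized sensor-frequency distribution per activity,
    built once from the training data."""
    counts = {}
    for activity, events in segments:
        counts.setdefault(activity, Counter()).update(events)
    return {
        a: {s: n / sum(c.values()) for s, n in c.items()}
        for a, c in counts.items()
    }

def profile_features(segment, profiles):
    """Induced extra features: average profile weight of the segment's
    events under each activity profile."""
    return {
        a: sum(p.get(s, 0.0) for s in segment) / len(segment)
        for a, p in profiles.items()
    }

profiles = build_profiles(training)
feats = profile_features(["stove", "sink", "stove"], profiles)
print(max(feats, key=feats.get))  # activity whose profile matches best
```

The resulting per-activity scores would be appended to the segment's feature vector before classification, rather than replacing the classifier.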
TL;DR: It is demonstrated that the novel MCNN and CCNN fusion methods outperform state-of-the-art machine learning and deep learning techniques for EEG classification.
Abstract: Electroencephalography (EEG) motor imagery (MI) signals have recently gained a lot of attention, as these signals encode a person's intent to perform an action. Researchers have used MI signals to help disabled persons, to control devices such as wheelchairs, and even for autonomous driving. Hence, decoding these signals accurately is important for a Brain–Computer Interface (BCI) system. But EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio. Convolutional neural networks (CNNs) have been shown to extract spatial and temporal features from EEG, but in order to learn the dynamic correlations present in MI signals, we need improved CNN models. CNNs can extract good features with both shallow and deep models, pointing to the fact that relevant features can be extracted at different levels. Fusion of multiple CNN models has not previously been explored for EEG data. In this work, we propose a multi-layer CNN method for fusing CNNs with different characteristics and architectures to improve EEG MI classification accuracy. Our method utilizes different convolutional features to capture spatial and temporal features from raw EEG data. We demonstrate that our novel MCNN and CCNN fusion methods outperform state-of-the-art machine learning and deep learning techniques for EEG classification. We have performed various experiments to evaluate the performance of the proposed CNN fusion method on public datasets. The proposed MCNN method achieves 75.7% and 95.4% accuracy on the BCI Competition IV-2a dataset and the High Gamma Dataset, respectively. The proposed CCNN method, based on autoencoder cross-encoding, achieves more than 10% improvement for cross-subject EEG classification.
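A minimal sketch of the feature-level fusion step behind MCNN-style models. The "shallow" and "deep" extractors here are stand-in fixed random projections rather than trained CNNs, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a shallow and a deep CNN: each maps a raw EEG window
# (channels x samples) to a feature vector of a different size. Real
# models would be trained convolutional networks; these fixed random
# projections only illustrate the fusion wiring.
def shallow_features(x, w=rng.standard_normal((22 * 250, 16))):
    return np.tanh(x.reshape(-1) @ w)

def deep_features(x, w=rng.standard_normal((22 * 250, 32))):
    return np.tanh(x.reshape(-1) @ w)

def fused_features(x):
    """Fusion step: concatenate features from both extractors so a
    single classifier head sees the joint representation."""
    return np.concatenate([shallow_features(x), deep_features(x)])

eeg_window = rng.standard_normal((22, 250))  # 22 channels, 250 samples
print(fused_features(eeg_window).shape)  # (16 + 32,) = (48,)
```

Training the extractors jointly (or cross-encoding them with an autoencoder, as in CCNN) is what the paper contributes; the concatenation above is only the structural skeleton of the fusion.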
TL;DR: It is argued that the second enabling pillar towards this vision is the increasing power of computers and algorithms to learn, reason, and build the ‘digital twin’ of a patient.
Abstract: Providing therapies tailored to each patient is the vision of precision medicine, enabled by the increasing ability to capture extensive data about individual patients. In this position paper, we argue that the second enabling pillar towards this vision is the increasing power of computers and algorithms to learn, reason, and build the 'digital twin' of a patient. Computational models are boosting the capacity to draw diagnoses and prognoses, and future treatments will be tailored not only to current health status and data, but also to an accurate projection of the pathways to restore health by model predictions. The early steps of the digital twin in the area of cardiovascular medicine are reviewed in this article, together with a discussion of the challenges and opportunities ahead. We emphasize the synergies between mechanistic and statistical models in accelerating cardiovascular research and enabling the vision of precision medicine.
TL;DR: A Blockchain-based infrastructure to support security- and privacy-oriented spatio-temporal smart contract services for the sustainable Internet of Things (IoT)-enabled sharing economy in mega smart cities.
Abstract: In this paper, we propose a Blockchain-based infrastructure to support security- and privacy-oriented spatio-temporal smart contract services for the sustainable Internet of Things (IoT)-enabled sharing economy in mega smart cities. The infrastructure leverages cognitive fog nodes at the edge to host and process offloaded geo-tagged multimedia payload and transactions from a mobile edge and IoT nodes, uses AI for processing and extracting significant event information, produces semantic digital analytics, and saves results in Blockchain and decentralized cloud repositories to facilitate sharing economy services. The framework offers a sustainable incentive mechanism, which can potentially support secure smart city services, such as sharing economy, smart contracts, and cyber-physical interaction with Blockchain and IoT. Our unique contribution is justified by detailed system design and implementation of the framework.
21 Apr 2006
TL;DR: This year's extended abstracts include submissions from six different sub-communities of the human-computer interaction field: design; education; engineering; management; research and usability, as well as materials from other traditional CHI venues.
Abstract: Welcome to the CHI 2006 Extended Abstracts. We hope that you will enjoy this year's extended abstracts and the changes that we have made. For the first time this year, we encouraged submissions from six different sub-communities of the human-computer interaction (HCI) field: design; education; engineering; management; research and usability. The intent was to make sure that the technical program included elements that would be of interest to the broad HCI world. We especially sought Experience Reports, Panels, SIGs, and HCI Overviews. Submissions to each of these were carefully reviewed by the Conference Committee and their respective Community Chairs. These Extended Abstracts include these new or modified venues as well as materials from other traditional CHI venues.