Author

Ali Idri

Bio: Ali Idri is an academic researcher at Mohammed V University. His research focuses on computer science and software engineering. He has an h-index of 29 and has co-authored 240 publications receiving 3,682 citations. His previous affiliations include Florida Atlantic University and École Normale Supérieure.


Papers
Journal ArticleDOI
TL;DR: Most of the e-health applications and serious games investigated have been shown to yield only short-term engagement through extrinsic rewards; it is therefore necessary to build e-health solutions on well-founded theories that exploit the core experience and psychological effects of game mechanics.

532 citations

Journal ArticleDOI
TL;DR: The results show that the empirical usability evaluation methods employed could be improved by adopting automated mechanisms, and that the evaluation processes should be revised to combine more than one method.
Abstract: The release of smartphones and tablets, which offer more advanced communication and computing capabilities, has led to the strong emergence of mHealth on the market. mHealth systems are being used to improve patients' lives and their health, in addition to facilitating communication between doctors and patients. Researchers are now proposing mHealth applications for many health conditions such as dementia, autism, dysarthria, Parkinson's disease, and so on. Usability becomes a key factor in the adoption of these applications, which are often used by people who have problems when using mobile devices and who have limited experience of technology. The aim of this paper is to investigate the empirical usability evaluation processes described in a total of 22 selected studies related to mHealth applications by means of a Systematic Literature Review. Our results show that the empirical usability evaluation methods employed could be improved by adopting automated mechanisms. The evaluation processes should also be revised to combine more than one method. This paper will help researchers and developers to create more usable applications. Our study demonstrates the importance of adapting health applications to users' needs.

415 citations

Book ChapterDOI
TL;DR: A comparison of recent Deep Convolutional Neural Network (DCNN) architectures for the automatic binary classification of pneumonia images is presented, based on fine-tuned versions of VGG16, VGG19, DenseNet201, Inception_ResNet_V2, Inception_V3, ResNet50, MobileNet_V2, and Xception.
Abstract: Recently, researchers, specialists, and companies around the world have been rolling out deep learning and image processing-based systems that can rapidly process hundreds of X-ray and Computed Tomography (CT) images to accelerate the diagnosis of pneumonia caused by diseases such as SARS and COVID-19, and aid in its containment. Medical image analysis is one of the most promising research areas; it provides facilities for the diagnosis of and decision making about several diseases such as MERS and COVID-19. In this paper, we present a comparison of recent deep convolutional neural network (DCNN) architectures for the automatic binary classification of pneumonia images, based on fine-tuned versions of VGG16, VGG19, DenseNet201, Inception_ResNet_V2, Inception_V3, ResNet50, MobileNet_V2, and Xception, together with a retraining of a baseline CNN. The proposed work has been tested using a chest X-ray and CT dataset containing 6,087 images (4,504 pneumonia and 1,583 normal). We conclude that the fine-tuned version of ResNet50 shows highly satisfactory performance, with more than 96% training and testing accuracy.

161 citations

Proceedings ArticleDOI
07 Aug 2002
TL;DR: This paper studies the interpretation of cost estimation models based on a backpropagation three-layer perceptron network, using the COCOMO'81 dataset, and proposes a method that maps this neural network to a fuzzy rule-based system.
Abstract: Software development effort estimation with the aid of neural networks has generally been viewed with skepticism by a majority of the software cost estimation community. Although neural networks have shown their strength in solving complex problems, their shortcoming of being 'black box' models has prevented them from being accepted as common practice for cost estimation. In this paper, we study the interpretation of cost estimation models based on a backpropagation three-layer perceptron network. Our proposed idea mainly comprises the use of a method that maps this neural network to a fuzzy rule-based system. Consequently, if the obtained fuzzy rules are easily interpreted, the neural network will also be easy to interpret. Our case study is based on the COCOMO'81 dataset.

156 citations
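The interpretation idea above can be illustrated crudely: each hidden neuron of a small perceptron is read as a fuzzy rule, with the sign of an input weight taken as "input IS HIGH"/"input IS LOW" and the sign of the output weight giving the consequent. The weights, feature names, and sign-based reading below are invented for illustration and are far simpler than the mapping the paper actually proposes.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical trained weights: 2 inputs -> 2 hidden neurons -> 1 output.
W_hidden = [[1.8, 0.9], [-1.2, 2.1]]   # one row per hidden neuron
b_hidden = [-0.5, 0.3]
W_out = [2.4, 1.7]
b_out = -0.8

def forward(x):
    # Standard three-layer perceptron forward pass with sigmoid units.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W_hidden, b_hidden)]
    return sigmoid(sum(w * hi for w, hi in zip(W_out, h)) + b_out), h

def neuron_to_rule(row, out_w, names):
    # Crude sign-based reading of one hidden neuron as a fuzzy rule.
    antecedents = " AND ".join(
        f"{n} IS {'HIGH' if w >= 0 else 'LOW'}" for n, w in zip(names, row))
    return f"IF {antecedents} THEN effort IS {'HIGH' if out_w >= 0 else 'LOW'}"

names = ["size_kloc", "complexity"]
rules = [neuron_to_rule(row, w, names) for row, w in zip(W_hidden, W_out)]
for r in rules:
    print(r)
```

If such rules read sensibly to an estimator, the network's behavior becomes inspectable rather than a black box, which is the paper's motivation.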

Journal ArticleDOI
TL;DR: A systematic mapping of studies whose primary goal is to develop or improve ASEE techniques, published in the period 1990–2012, revealed that most researchers focus on addressing problems related to the first step of an ASEE process, that is, feature and case subset selection.
Abstract: Context: Analogy-based Software development Effort Estimation (ASEE) techniques have gained considerable attention from the software engineering community. However, existing systematic mapping and review studies on software development effort prediction have not investigated several issues of ASEE techniques in depth, with the exception of comparisons with other types of estimation techniques. Objective: The objective of this research is twofold: (1) to classify ASEE studies whose primary goal is to propose new or modified ASEE techniques according to five criteria: research approach, contribution type, techniques used in combination with ASEE methods, and ASEE steps, as well as identifying publication channels and trends; and (2) to analyze these studies from five perspectives: estimation accuracy, accuracy comparison, estimation context, impact of the techniques used in combination with ASEE methods, and ASEE tools. Method: We performed a systematic mapping of studies whose primary goal is to develop or to improve ASEE techniques, published in the period 1990–2012, and reviewed them based on an automated search of four electronic databases. Results: In total, we identified 65 studies published between 1990 and 2012 and classified them based on our predefined classification criteria. The mapping study revealed that most researchers focus on addressing problems related to the first step of an ASEE process, that is, feature and case subset selection. The results of our detailed analysis show that ASEE methods outperform the eight techniques with which they were compared, and tend to yield acceptable results especially when ASEE techniques are combined with Fuzzy Logic (FL) or Genetic Algorithms (GA). Conclusion: Based on the findings of this study, the use of other techniques such as FL and GA in combination with an ASEE method is promising for generating more accurate estimates. However, the use of ASEE techniques by practitioners is still limited: developing more ASEE tools may facilitate the application of these techniques and thus lead to wider adoption of ASEE techniques in industry.

156 citations
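The core ASEE step the surveyed studies build on can be sketched as a nearest-neighbour lookup over past projects: rank completed projects by feature-space distance to the new one and average the efforts of the k closest analogues. The feature vectors and effort values below are invented; real ASEE methods add feature weighting and the case subset selection the mapping study highlights.

```python
import math

# Hypothetical project history: ([size_kloc, team_experience], effort in person-months).
history = [
    ([10.0, 3.0], 24.0),
    ([12.0, 2.0], 30.0),
    ([40.0, 4.0], 90.0),
    ([38.0, 5.0], 80.0),
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_by_analogy(new_project, cases, k=2):
    # Rank past cases by distance and average the efforts
    # of the k closest analogues.
    ranked = sorted(cases, key=lambda c: euclidean(new_project, c[0]))
    return sum(effort for _, effort in ranked[:k]) / k

# A small new project resembles the two small historical projects.
print(estimate_by_analogy([11.0, 3.0], history))  # 27.0
```

Combining this with FL or GA, as the review found promising, typically means fuzzifying the feature comparison or evolving the feature weights and k.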


Cited by
Journal ArticleDOI
TL;DR: Basics of Qualitative Research: Grounded Theory Procedures and Techniques.

13,415 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
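The mail-filtering scenario in the abstract above can be sketched with a minimal count-based filter: learn which words appear more often in messages the user rejected, then score new messages by those counts. The messages and the scoring rule are invented stand-ins for a properly learned filter such as naive Bayes.

```python
from collections import Counter

# Hypothetical training data: messages the user rejected vs. kept.
rejected = ["win money now", "cheap money offer", "win a prize now"]
kept = ["meeting at noon", "project status report", "lunch at noon"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

bad, good = word_counts(rejected), word_counts(kept)

def score(msg):
    # Positive score -> resembles mail the user rejects;
    # Counter returns 0 for unseen words.
    return sum(bad[w] - good[w] for w in msg.split())

print(score("win free money"))          # 4: spam-like
print(score("status meeting at noon"))  # -6: looks legitimate
```

The point of the example in the text is exactly this: each user's rejects retrain the counts automatically, so no engineer has to hand-maintain per-user rules.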

01 Jan 2002

9,314 citations