Author

Waisq Khan

Bio: Waisq Khan is an academic researcher from Manchester Metropolitan University. The author has contributed to research on the topics of artificial neural networks and decision tree learning. The author has an h-index of 1 and has co-authored 1 publication, which has received 6 citations.

Papers
Proceedings ArticleDOI
08 Jul 2018
TL;DR: This paper investigates a hybrid model comprising multiple artificial neural networks with a final C4.5 decision tree classifier, exploring the potential for explaining the classification decision through production rules; however, the significant tree size calls into question whether the rules remain transparent to a human.
Abstract: The Artificial Neural Network is generally considered to be an effective classifier, but also a “Black Box” component whose internal behavior cannot be understood by human users. This lack of transparency forms a barrier to acceptance in high-stakes applications by the general public. This paper investigates the use of a hybrid model comprising multiple artificial neural networks with a final C4.5 decision tree classifier to investigate the potential of explaining the classification decision through production rules. Two large datasets collected from comprehension studies are used to investigate the value of the C4.5 decision tree as the overall comprehension classifier in terms of accuracy and decision transparency. Empirical trials show that higher accuracies are achieved through using a decision tree classifier, but the significant tree size questions the rule transparency to a human.

7 citations
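As a rough illustration of the kind of hybrid pipeline the abstract above describes, the sketch below stacks several small neural networks under a final decision tree whose rules can be printed. It is a minimal sketch, not the authors' actual system: scikit-learn's MLPClassifier stands in for the component ANNs, DecisionTreeClassifier (CART) approximates C4.5, and the synthetic data and three-way channel split are invented for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for the comprehension data: 3 "channels" of 10 features each.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=12,
                           random_state=0)
channels = np.split(X, 3, axis=1)
train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=0.3,
                                       random_state=0)

# One small ANN per channel; each contributes a class-probability score.
train_scores, test_scores = [], []
for ch in channels:
    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    ann.fit(ch[train_idx], y[train_idx])
    train_scores.append(ann.predict_proba(ch[train_idx])[:, 1])
    test_scores.append(ann.predict_proba(ch[test_idx])[:, 1])

# Final decision tree classifies from the ANN outputs; its rules can be printed.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(np.column_stack(train_scores), y[train_idx])
print("test accuracy:", tree.score(np.column_stack(test_scores), y[test_idx]))
print(export_text(tree, feature_names=[f"ann_{i}" for i in range(3)]))
```

Capping the tree depth is what keeps the printed rule set short; the paper's observation is that the depth needed for good accuracy can make the rule set too large to remain transparent to a human reader.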


Cited by
Journal ArticleDOI
TL;DR: This paper critically examines a recently developed proposal for a border control system called iBorderCtrl, designed to detect deception based on facial recognition technology and the measurement of micro-expressions, termed 'biomarkers of deceit'.
Abstract: This paper critically examines a recently developed proposal for a border control system called iBorderCtrl, designed to detect deception based on facial recognition technology and the measurement of micro-expressions, termed 'biomarkers of deceit'.

25 citations

01 Jan 2015
TL;DR: The results of this study indicate that the negativity associated with a particular 5-HTTLPR genotype may be due to decreased processing of positive emotion rather than increased processing of negative emotion.
Abstract: Facial mimicry has been considered an automatic, spontaneous process. However, recent research suggests that facial mimicry is dependent on the context of the social interaction, with increased mimicry occurring when the understanding of another's emotional states is important. In this study, we examined the social context of facial mimicry of positive and negative facial expressions of emotion, and how mimicry relates to common variants in the serotonin transporter genotype 5-HTTLPR, which has been found to relate to proneness to negativity and to social sensitivity. Overall, the results of this study indicate that the negativity associated with a particular 5-HTTLPR genotype may be due to decreased processing of positive emotion rather than increased processing of negative emotion.

10 citations

Journal ArticleDOI
TL;DR: In this article, the authors compared the performance of three methods for determining the credit risk of local government units in Croatia: an artificial neural network (ANN), a hybrid artificial neural network and genetic algorithm approach (ANN-GA), and Tobit regression.
Abstract: Over the past few decades, data mining techniques, especially artificial neural networks, have been used for modelling many real-world problems. This paper aims to test the performance of three methods: (1) an artificial neural network (ANN), (2) a hybrid artificial neural network and genetic algorithm approach (ANN-GA), and (3) the Tobit regression approach in determining the credit risk of local government units in Croatia. The evaluation of credit risk and prediction of debtor bankruptcy have long been regarded as an important topic in the accounting and finance literature. In this research, credit risk is modelled under a regression approach, unlike typical credit risk analysis, which is generally viewed as a classification problem. Namely, a standard evaluation of credit risk is not possible due to a lack of bankruptcy data. Thus, the credit risk of a local unit is approximated using the ratio of outstanding liabilities maturing in a given year to the total expenditure of the local unit in the same period. The results indicate that the ANN-GA hybrid approach performs significantly better than the Tobit model by providing a significantly smaller average mean squared error. This work is beneficial to researchers and the government in evaluating a local government unit's credit score.

4 citations
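To make the regression setup above concrete, the sketch below constructs the proxy credit-risk target as the ratio of liabilities maturing in the year to total expenditure and compares two regressors by mean squared error. All data and column names are invented, plain linear regression stands in for the Tobit baseline (scikit-learn has no Tobit model), and the genetic-algorithm tuning stage is omitted.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Invented local-government finance data; none of these figures come from the paper.
df = pd.DataFrame({
    "maturing_liabilities": rng.gamma(2.0, 50_000, n),
    "total_expenditure": rng.gamma(5.0, 100_000, n),
    "own_revenue_share": rng.uniform(0.2, 0.9, n),
    "population": rng.integers(1_000, 200_000, n),
})
# Proxy target from the abstract: liabilities maturing in the year / total expenditure.
df["credit_risk"] = df["maturing_liabilities"] / df["total_expenditure"]

X = df[["own_revenue_share", "population", "total_expenditure"]]
y = df["credit_risk"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "linear baseline (stand-in for Tobit)": LinearRegression(),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MSE:", mean_squared_error(y_te, model.predict(X_te)))
```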

Book ChapterDOI
18 Sep 2018
TL;DR: The chapter concludes by examining the future of explainable decision making, proposing a new Hierarchy of Explainability and Empowerment that allows information and decision-making complexity to be explained at different levels depending on a person's abilities.
Abstract: Adaptive psychological profiling systems use artificial intelligence algorithms to analyse a person's non-verbal behaviour in order to determine a specific mental state, such as deception. One such system, known as Silent Talker, combines image processing and artificial neural networks to classify multiple non-verbal signals, mainly from the face, during a verbal exchange such as an interview, producing an accurate and comprehensive time-based profile of a subject's psychological state. Artificial neural networks are typically black-box algorithms; hence, it is difficult to understand how the classification of a person's behaviour is obtained. The new European Data Protection Legislation (GDPR) states that individuals who are automatically profiled have the right to an explanation of how the "machine" reached its decision and to receive meaningful information on the logic involved in reaching that decision. This is practically difficult from a technical perspective, whereas from a legal point of view it remains unclear whether this is sufficient to safeguard the data subject's rights. This chapter is an extended version of a previously published paper in IJCCI 2019 [35], which examines the new European Data Protection Legislation and how it impacts an application of psychological profiling within an Automated Deception Detection System (ADDS), one component of a smart border control system known as iBorderCtrl. ADDS detects deception through an avatar border guard interview during a participant's pre-registration, and it is used to demonstrate the challenges faced in trying to obtain explainable decisions from models derived through computational intelligence techniques. The chapter concludes by examining the future of explainable decision making through proposing a new Hierarchy of Explainability and Empowerment that allows information and decision-making complexity to be explained at different levels depending on a person's abilities.

3 citations

Posted Content
TL;DR: In this paper, the authors critically examine a recently developed proposal for a border control system called iBorderCtrl, designed to detect deception based on facial recognition technology and the measurement of micro-expressions, termed 'biomarkers of deceit'.
Abstract: This paper critically examines a recently developed proposal for a border control system called iBorderCtrl, designed to detect deception based on facial recognition technology and the measurement of micro-expressions, termed 'biomarkers of deceit'. Funded under the European Commission's Horizon 2020 programme, we situate our analysis in the wider political economy of 'emotional AI' and the history of deception detection technologies. We then move on to interrogate the design of iBorderCtrl using publicly available documents and assess the assumptions and scientific validation underpinning the project design. Finally, drawing on a Bayesian analysis we outline statistical fallacies in the foundational premise of mass screening and argue that it is very unlikely that the model that iBorderCtrl provides for deception detection would work in practice. By interrogating actual systems in this way, we argue that we can begin to question the very premise of the development of data-driven systems, and emotional AI and deception detection in particular, pushing back on the assumption that these systems are fulfilling the tasks they claim to be attending to and instead ask what function such projects carry out in the creation of subjects and management of populations. This function is not merely technical but, rather, we argue, distinctly political and forms part of a mode of governance increasingly shaping life opportunities and fundamental rights.

3 citations
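The "statistical fallacies in the foundational premise of mass screening" argument rests on Bayes' theorem: when genuine deceivers are rare, even a fairly accurate detector flags mostly truthful travellers. The worked example below illustrates this; the prevalence, sensitivity, and specificity figures are assumptions chosen for illustration, not numbers taken from the paper.

```python
# Worked Bayes'-theorem example of the base-rate problem in mass deception screening.
# The prevalence, sensitivity and specificity below are illustrative assumptions,
# not figures reported for iBorderCtrl.
def positive_predictive_value(prevalence: float, sensitivity: float,
                              specificity: float) -> float:
    """P(deceptive | flagged) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Suppose 1 traveller in 1,000 is deceptive and the detector is 75% accurate both ways.
ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.75, specificity=0.75)
print(f"P(deceptive | flagged) = {ppv:.3%}")  # roughly 0.3%
```

Under these assumed figures only about 3 in 1,000 flagged travellers would actually be deceptive, which is the base-rate effect the authors' Bayesian analysis points to.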