Author

Alan Tickle

Bio: Alan Tickle is an academic researcher from Queensland University of Technology. His research spans artificial neural networks and denial-of-service attacks. He has an h-index of 12 and has co-authored 29 publications that have received 1,907 citations.

Papers
Journal Article
TL;DR: This survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs, extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement).
Abstract: It is becoming increasingly apparent that, without some form of explanation capability, the full potential of trained artificial neural networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.
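
One of the rule-extraction strategies the taxonomy distinguishes is the "pedagogical" approach, which treats the trained network as a black box and induces rules from its input-output behaviour alone. A minimal sketch of that idea follows; the use of scikit-learn and the iris dataset are illustrative assumptions, not tools discussed in the survey.

# Hedged sketch: pedagogical rule extraction via a surrogate decision tree.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# 1. Train the opaque model.
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X, y)

# 2. Fit a shallow tree to the ANN's *predictions*, so the tree
#    approximates the network's behaviour, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, ann.predict(X))

# 3. Read the tree off as if-then rules describing the ANN.
print(export_text(surrogate, feature_names=load_iris().feature_names))

How faithfully such rules mimic the network (their fidelity) is one of the evaluation criteria the survey delineates.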

1,223 citations

Journal Article
TL;DR: This paper shows not only that the ADT taxonomy is applicable to a cross-section of current techniques for extracting rules from trained feedforward ANNs, but also how the taxonomy can be adapted and extended to embrace a broader range of ANN types and explanation structures.
Abstract: To date, the preponderance of techniques for eliciting the knowledge embedded in trained artificial neural networks (ANNs) has focused primarily on extracting rule-based explanations from feedforward ANNs. The ADT taxonomy for categorizing such techniques was proposed in 1995 to provide a basis for the systematic comparison of the different approaches. This paper shows not only that this taxonomy is applicable to a cross-section of current techniques for extracting rules from trained feedforward ANNs, but also how the taxonomy can be adapted and extended to embrace a broader range of ANN types (e.g., recurrent neural networks) and explanation structures. In addition, we identify some of the key research questions in extracting the knowledge embedded within ANNs, including the need to formulate a consistent theoretical basis for what has been, until recently, a disparate collection of empirical results.
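
As a rough illustration of how a taxonomy of this kind can be put to work, the sketch below encodes commonly cited ADT-style dimensions as a record type. The dimension names and the sample entry are assumptions for illustration, not the paper's own catalogue.

# Hedged sketch: cataloguing a rule-extraction technique along ADT-style axes.
from dataclasses import dataclass

@dataclass
class RuleExtractionEntry:
    technique: str
    expressive_power: str   # form of the extracted rules
    translucency: str       # decompositional / pedagogical / eclectic
    portability: str        # tied to one architecture vs. architecture-agnostic
    quality: str            # accuracy, fidelity, comprehensibility notes
    complexity: str         # rough algorithmic cost

entry = RuleExtractionEntry(
    technique="surrogate decision tree (hypothetical entry)",
    expressive_power="propositional if-then rules",
    translucency="pedagogical",
    portability="architecture-agnostic",
    quality="fidelity measured against the network's outputs",
    complexity="single tree-induction pass",
)
print(entry)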

421 citations

Proceedings Article
22 Aug 2011
TL;DR: This paper proposes parameters that can be used to explicitly distinguish FEs from DDoS attacks, and analyses two real-world, publicly available datasets to validate the proposal.
Abstract: Distributed Denial-of-Service (DDoS) attacks continue to be one of the most pernicious threats to the delivery of services over the Internet. Not only are DDoS attacks present in many guises, they are also continuously evolving as new vulnerabilities are exploited. Hence, accurate detection of these attacks remains a challenging problem and a necessity for ensuring high-end network security. An intrinsic challenge in addressing this problem is to effectively distinguish these Denial-of-Service attacks from similar-looking Flash Events (FEs) created by legitimate clients. A considerable overlap between the general characteristics of FEs and DDoS attacks makes it difficult to precisely separate the two classes of Internet activity. In this paper we propose parameters that can be used to explicitly distinguish FEs from DDoS attacks, and we analyse two real-world, publicly available datasets to validate our proposal. Our analysis shows that even though FEs appear very similar to DDoS attacks, there are several subtle dissimilarities that can be exploited to separate the two classes of events.
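
To make the idea of distinguishing parameters concrete, here is a hedged sketch that computes two illustrative per-interval traffic features from a packet trace: the request rate and the fraction of previously unseen source IPs. These features are in the spirit of the proposal but are not claimed to be the paper's exact parameters.

# Hedged sketch: per-interval traffic features for separating FEs from DDoS.
from collections import defaultdict

def interval_features(packets, interval=1.0):
    """packets: iterable of (timestamp, src_ip) pairs, sorted by time."""
    buckets = defaultdict(list)
    for ts, src in packets:
        buckets[int(ts // interval)].append(src)
    seen = set()
    features = []
    for t in sorted(buckets):
        srcs = buckets[t]
        new_frac = sum(1 for s in srcs if s not in seen) / len(srcs)
        seen.update(srcs)
        # Request rate and new-source fraction per interval; how these
        # evolve over an event is the kind of subtle dissimilarity the
        # abstract refers to.
        features.append((t, len(srcs) / interval, new_frac))
    return features

trace = [(0.1, "10.0.0.1"), (0.2, "10.0.0.2"), (1.1, "10.0.0.1"), (1.3, "10.0.0.3")]
print(interval_features(trace))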

44 citations

Journal Article
TL;DR: This paper presents a traffic generation and testbed framework for synthetically generating different types of realistic DDoS attacks, FEs, and other benign traffic traces, and for monitoring their effects on the target, using only modest hardware resources.

43 citations

Proceedings Article
01 Jan 2005
TL;DR: Practical sessions in a second-year undergraduate networking unit are designed to use a network simulation tool, Packet Tracer™, to facilitate active learning by providing an analytical, problem-solving and evaluation framework.
Abstract: Computer networking concepts can be difficult to understand and teach, as they frequently relate to complex and dynamic processes that are not readily visible or intuitive and are therefore problematic to conceptualise. Consequently, teachers often incorporate simulation or visualisation tools to support the learning process, but often in a superficial way and without evaluating their effectiveness. To tackle this issue, we designed the practical sessions in a second-year undergraduate networking unit around a network simulation tool, Packet Tracer™, to facilitate active learning by providing an analytical, problem-solving and evaluation framework. To evaluate the effectiveness of using Packet Tracer™ in this way, students were assessed before and after participating in one specific practical session. Measured results showed a marked improvement in student understanding of the topic presented (VLANs). We show that using the simulation tool not merely to demonstrate concepts but also to provide feedback and guidance enhanced deep learning.

41 citations


Cited by
Journal Article
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; reviews deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation; and covers indirect search for short programs encoding deep and large networks.

14,635 citations

Journal Article
TL;DR: In this paper, a taxonomy of recent contributions related to the explainability of different machine learning models, including those aimed at explaining deep learning methods, is presented, and a second, dedicated taxonomy is built and examined in detail.

2,827 citations

Journal Article
TL;DR: In this paper, the authors provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black-box decision support system. Given a problem definition, a black-box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work.
Abstract: In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black-box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem; as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black-box system. Given a problem definition, a black-box type, and a desired explanation, this survey should help the researcher find the proposals most useful for their own work. The proposed classification of approaches to opening black-box models should also be useful for putting the many open research questions in perspective.

2,805 citations

Journal Article
Amina Adadi, Mohammed Berrada
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches, discusses trends surrounding the field, and presents major research trajectories.
Abstract: At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems and is recognized as a sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of this young and rapidly growing body of research. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding the field, and present major research trajectories.

2,258 citations

Journal Article
TL;DR: The steps that should be followed in the development of artificial neural network models are outlined, including the choice of performance criteria, the division and pre-processing of the available data, the determination of appropriate model inputs and network architecture, optimisation of the connection weights (training) and model validation.
Abstract: Artificial Neural Networks (ANNs) are being used increasingly to predict and forecast water resources variables. In this paper, the steps that should be followed in the development of such models are outlined. These include the choice of performance criteria, the division and pre-processing of the available data, the determination of appropriate model inputs and network architecture, optimisation of the connection weights (training) and model validation. The options available to modellers at each of these steps are discussed and the issues that should be considered are highlighted. A review of 43 papers dealing with the use of neural network models for the prediction and forecasting of water resources variables is undertaken in terms of the modelling process adopted. In all but two of the papers reviewed, feedforward networks are used. The vast majority of these networks are trained using the backpropagation algorithm. Issues in relation to the optimal division of the available data, data pre-processing and the choice of appropriate model inputs are seldom considered. In addition, the process of choosing appropriate stopping criteria and optimising network geometry and internal network parameters is generally described poorly or carried out inadequately. All of the above factors can result in non-optimal model performance and an inability to draw meaningful comparisons between different models. Future research efforts should be directed towards the development of guidelines which assist with the development of ANN models and the choice of when ANNs should be used in preference to alternative approaches, the assessment of methods for extracting the knowledge that is contained in the connection weights of trained ANNs and the incorporation of uncertainty into ANN models.
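
The development steps outlined above map naturally onto a short modelling pipeline. The following sketch, using scikit-learn and a synthetic stand-in for a water-resources series (both assumptions, not the paper's setup), walks through data division, pre-processing, training a feedforward network with an early-stopping criterion, and held-out validation.

# Hedged sketch of the ANN development steps the abstract outlines.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))   # stand-ins for, e.g., rainfall and level lags
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)

# Division of the available data into training and validation sets.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Pre-processing: standardise inputs, fitted on the training data only.
scaler = StandardScaler().fit(X_tr)

# Feedforward network; early_stopping gives a principled stopping criterion
# instead of an arbitrary iteration count.
model = MLPRegressor(hidden_layer_sizes=(8,), early_stopping=True,
                     max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)

# Validation on data the model has never seen.
print("validation MSE:",
      mean_squared_error(y_val, model.predict(scaler.transform(X_val))))

Each of these steps corresponds to one of the choices the review finds is often described poorly or carried out inadequately in practice.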

2,181 citations