Author

John McHugh

Bio: John McHugh is an academic researcher from Dalhousie University. The author has contributed to research in topics: Intrusion detection system & Network security. The author has an h-index of 25 and has co-authored 64 publications receiving 3,964 citations. Previous affiliations of John McHugh include the Software Engineering Institute and Research Triangle Park.


Papers
Journal ArticleDOI
TL;DR: The purpose of this article is to attempt to identify the shortcomings of the Lincoln Lab effort in the hope that future efforts of this kind will be placed on a sounder footing.
Abstract: In 1998 and again in 1999, the Lincoln Laboratory of MIT conducted a comparative evaluation of intrusion detection systems (IDSs) developed under DARPA funding. While this evaluation represents a significant and monumental undertaking, there are a number of issues associated with its design and execution that remain unsettled. Some methodologies used in the evaluation are questionable and may have biased its results. One problem is that the evaluators have published relatively little concerning some of the more critical aspects of their work, such as validation of their test data. The appropriateness of the evaluation techniques used needs further investigation. The purpose of this article is to attempt to identify the shortcomings of the Lincoln Lab effort in the hope that future efforts of this kind will be placed on a sounder footing. Some of the problems that the article points out might well be resolved if the evaluators were to publish a detailed description of their procedures and the rationale that led to their adoption, but other problems would clearly remain.

1,346 citations

ReportDOI
01 Jan 2000
TL;DR: A goal of this report is to provide an unbiased assessment of publicly available ID technology, in the hope that this will help those who purchase and use ID technology to gain a realistic understanding of its capabilities and limitations.
Abstract: Attacks on the nation's computer infrastructures are a serious problem. Over the past 12 years, the growing number of computer security incidents on the Internet has reflected the growth of the Internet itself. Because most deployed computer systems are vulnerable to attack, intrusion detection (ID) is a rapidly developing field. Intrusion detection is an important technology business sector as well as an active area of research. Vendors make many claims for their products in the commercial marketplace, so separating hype from reality can be a major challenge. A goal of this report is to provide an unbiased assessment of publicly available ID technology. We hope this will help those who purchase and use ID technology to gain a realistic understanding of its capabilities and limitations. The report raises issues that we believe are important for ID system (IDS) developers to address as they formulate product strategies. The report also points out relevant issues for the research community as they formulate research directions and allocate funds.

431 citations

Journal ArticleDOI
TL;DR: A life cycle model for system vulnerabilities is proposed, then applied to three case studies to reveal how systems often remain vulnerable long after security fixes are available.
Abstract: The authors propose a life cycle model for system vulnerabilities, then apply it to three case studies to reveal how systems often remain vulnerable long after security fixes are available. For each case, we provide background information about the vulnerability, such as how attackers exploited it and which systems were affected. We then tie the case to the life-cycle model by identifying the dates for each state within the model. Finally, we use a histogram of reported intrusions to show the life of the vulnerability, and we conclude with an analysis specific to the particular vulnerability.
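To make the model concrete, here is a minimal sketch of how such a life cycle could be represented in code; the state names, dates, and report counts are hypothetical, since the article's exact state labels and data are not reproduced in this abstract.

```python
# Hypothetical sketch of a vulnerability life-cycle record: each field holds
# the date a state was reached, and a histogram of reported intrusions shows
# how long exploitation continues after a fix becomes available.
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class VulnerabilityLifeCycle:
    discovered: date
    exploit_observed: date
    fix_released: date
    last_reported_intrusion: date

    def exposure_after_fix_days(self) -> int:
        """Days between fix availability and the last reported intrusion."""
        return (self.last_reported_intrusion - self.fix_released).days

# Invented reported-intrusion dates, binned by month to mimic the histogram
# the article uses to show the life of a vulnerability.
reports = [date(1999, m, 15) for m in (1, 1, 2, 3, 3, 3, 6, 9)]
histogram = Counter(d.strftime("%Y-%m") for d in reports)

vuln = VulnerabilityLifeCycle(date(1998, 11, 2), date(1999, 1, 10),
                              date(1999, 2, 1), max(reports))
print(histogram, vuln.exposure_after_fix_days())
```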

286 citations

Proceedings ArticleDOI
30 Nov 1992
TL;DR: The results of an experiment show that it is very simple to contaminate digital images with information that can later be extracted, and that image downgrading based on visual display of the image should not be performed if there is any threat of image contamination by Trojan horse programs.
Abstract: The results of an experiment showing that it is very simple to contaminate digital images with information that can later be extracted are presented. This contamination cannot be detected when the image is displayed on a good-quality graphics workstation. Based on these results, it is recommended that image downgrading based on visual display of the image to be downgraded not be performed if there is any threat of image contamination by Trojan horse programs. Potential Trojan horse programs may include untrusted image processing software.
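As an illustration of the kind of contamination described above, here is a minimal sketch of least-significant-bit embedding; the original experiment's exact method is not specified in this abstract, so the technique, array sizes, and function names below are assumptions for illustration only.

```python
# Sketch: hidden bits are written into the lowest bit of each pixel value,
# a change that is visually undetectable on a typical display but trivially
# recoverable by anyone who knows where to look.
import numpy as np

def embed_bits(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of the first len(bits) pixels."""
    flat = image.flatten()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_bits(image: np.ndarray, n: int) -> np.ndarray:
    """Read the hidden bits back out of the contaminated image."""
    return image.flatten()[:n] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # cover image
secret = rng.integers(0, 2, size=256, dtype=np.uint8)         # hidden payload
stego = embed_bits(cover, secret)
assert np.array_equal(extract_bits(stego, secret.size), secret)
```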

277 citations

Journal ArticleDOI
TL;DR: The role of IDSs in an organization's overall defensive posture is considered and guidelines for IDS deployment, operation, and maintenance are provided.
Abstract: Intrusion detection systems are an important component of defensive measures protecting computer systems and networks from abuse. This article considers the role of IDSs in an organization's overall defensive posture and provides guidelines for IDS deployment, operation, and maintenance.

264 citations


Cited by
Journal ArticleDOI
TL;DR: Reading a book such as Basics of Qualitative Research: Grounded Theory Procedures and Techniques, along with other references, can enrich your life quality.

13,415 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
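The mail-filtering example in the last category can be made concrete with a small sketch; the abstract does not prescribe an algorithm, so the naive Bayes classifier, messages, and labels below are illustrative assumptions.

```python
# Sketch of learning a per-user mail filter from examples of rejected mail,
# rather than asking the user to program the rules by hand.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",          # rejected by the user
    "project meeting moved to 3pm",
    "cheap loans approved instantly",
    "lunch tomorrow?",
]
rejected = [1, 0, 1, 0]  # 1 = the user rejected this message

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(features, rejected)

# The learned rules are then applied automatically to new mail.
new_mail = vectorizer.transform(["free prize waiting for you"])
print(model.predict(new_mail))  # expected: [1]
```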

13,246 citations

Patent
30 Sep 2010
TL;DR: In this article, the authors propose a secure content distribution method for a configurable, general-purpose electronic commercial transaction/distribution control system. The method includes a process for encapsulating digital information in one or more digital containers, a process for encrypting at least a portion of the digital information, a process for associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital container, and a process for delivering one or more digital containers to a digital information user.
Abstract: PROBLEM TO BE SOLVED: Electronic content information providers lack a commercially secure and effective method for a configurable, general-purpose electronic commercial transaction/distribution control system. SOLUTION: In this system, which has at least one protected processing environment for safely controlling at least a portion of the decoding of digital information, the secure content distribution method comprises: encapsulating digital information in one or more digital containers; encrypting at least a portion of the digital information; associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital container; delivering one or more digital containers to a digital information user; and using a protected processing environment to safely control at least a portion of the decoding of the digital information. COPYRIGHT: (C)2006, JPO&NCIPI
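As a conceptual sketch only, and not the patented mechanism, the following shows one way the container idea could look in code: the payload is encrypted and bundled with (hypothetical) control information, and is decoded only where the key is held. The `cryptography` package's Fernet recipe stands in for whatever cipher and protected processing environment a real system would use.

```python
# Conceptual sketch: a "digital container" pairing encrypted content with
# control information that governs how the content may be used.
from cryptography.fernet import Fernet
import json

def build_container(payload: bytes, rules: dict, key: bytes) -> dict:
    """Encrypt the payload and attach control information (e.g. permitted uses)."""
    return {
        "control": rules,
        "payload": Fernet(key).encrypt(payload).decode(),
    }

def open_container(container: dict, key: bytes) -> bytes:
    """Decrypt the payload; a real system would do this only inside a protected processing environment."""
    return Fernet(key).decrypt(container["payload"].encode())

key = Fernet.generate_key()
container = build_container(b"licensed content", {"copies": 1}, key)
print(json.dumps(container["control"]), open_container(container, key))
```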

7,643 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations