Conference on Steps to Reducing Unwanted Traffic on the Internet

About: The Conference on Steps to Reducing Unwanted Traffic on the Internet is an academic conference. It publishes mainly in the areas of the Internet and denial-of-service attacks. Over its lifetime, the conference has published 28 papers, which have received 2,038 citations.

Papers
Proceedings Article
07 Jul 2005
TL;DR: This paper outlines the origins and structure of bots and botnets and uses data from the operator community, the Internet Motion Sensor project, and a honeypot experiment to illustrate the botnet problem today and describes a system to detect botnets that utilize advanced command and control systems by correlating secondary detection data from multiple sources.
Abstract: Global Internet threats are undergoing a profound transformation from attacks designed solely to disable infrastructure to those that also target people and organizations. Behind these new attacks is a large pool of compromised hosts sitting in homes, schools, businesses, and governments around the world. These systems are infected with a bot that communicates with a bot controller and other bots to form what is commonly referred to as a zombie army or botnet. Botnets are a very real and quickly evolving problem that is still not well understood or studied. In this paper we outline the origins and structure of bots and botnets and use data from the operator community, the Internet Motion Sensor project, and a honeypot experiment to illustrate the botnet problem today. We then study the effectiveness of detecting botnets by directly monitoring IRC communication or other command and control activity and show a more comprehensive approach is required. We conclude by describing a system to detect botnets that utilize advanced command and control systems by correlating secondary detection data from multiple sources.
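The closing idea, detecting botnets by correlating secondary detection data from multiple sources, can be sketched minimally. The sensor names, alert format, and two-source threshold below are illustrative assumptions, not details from the paper:

```python
from collections import defaultdict

def correlate(alerts, min_sources=2):
    """Flag hosts reported by at least `min_sources` independent
    detection sources (a hypothetical correlation rule)."""
    seen = defaultdict(set)
    for source, host in alerts:
        seen[host].add(source)
    return sorted(host for host, srcs in seen.items()
                  if len(srcs) >= min_sources)

alerts = [
    ("darknet", "10.0.0.5"),   # secondary evidence: scanning unused space
    ("honeypot", "10.0.0.5"),  # same host also hit a honeypot
    ("darknet", "10.0.0.9"),   # single-source host: not flagged
]
print(correlate(alerts))  # ['10.0.0.5']
```

The point of correlation is that no single sensor (IRC monitoring, darknet, honeypot) sees enough on its own; agreement across independent sources raises confidence.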

588 citations

Proceedings Article
07 Jul 2006
TL;DR: An anomaly-based algorithm for detecting IRC-based botnet meshes that has been deployed in PSU's DMZ for over a year and has proven effective in reducing the number of botnet clients.
Abstract: We present an anomaly-based algorithm for detecting IRC-based botnet meshes. The algorithm combines an IRC mesh detection component with a TCP scan detection heuristic called the TCP work weight. The IRC component produces two tuples, one for determining the IRC mesh based on IP channel names, and a sub-tuple which collects statistics (including the TCP work weight) on individual IRC hosts in channels. We sort the channels by the number of scanners producing a sorted list of potential botnets. This algorithm has been deployed in PSU's DMZ for over a year and has proven effective in reducing the number of botnet clients.
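The per-host work weight and channel sorting described above can be sketched as follows. The exact work-weight formula and the scanner threshold are assumptions for illustration:

```python
def work_weight(syns_sent, fins_sent, rsts_recv, total_pkts):
    """TCP 'work weight': fraction of control packets among a host's TCP
    traffic, assumed here as (SYNs sent + FINs sent + RSTs received)
    divided by total TCP packets."""
    return (syns_sent + fins_sent + rsts_recv) / total_pkts if total_pkts else 0.0

def rank_channels(channel_hosts, threshold=0.5):
    """Sort IRC channels by how many member hosts look like scanners,
    i.e. have a work weight above a (hypothetical) threshold."""
    ranked = sorted(
        ((sum(1 for w in hosts.values() if w > threshold), chan)
         for chan, hosts in channel_hosts.items()),
        reverse=True)
    return [chan for scanners, chan in ranked if scanners > 0]

# Hosts in "#x" mostly emit control packets (scan-like); "#chat" does not.
channels = {
    "#x": {"10.0.0.1": work_weight(90, 5, 5, 100),    # 1.0
           "10.0.0.2": work_weight(60, 5, 5, 100)},   # 0.7
    "#chat": {"10.0.0.3": work_weight(5, 5, 0, 100)}, # 0.1
}
print(rank_channels(channels))  # ['#x']
```

A busy chat client mostly sends data packets, so its work weight stays low; a scanning bot's traffic is dominated by SYNs and resets, which pushes the weight toward 1.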

348 citations

Proceedings Article
07 Jul 2006
TL;DR: It is found that bots are performing reconnaissance on behalf of other bots, and counterintelligence techniques that may be useful for early bot detection are suggested.
Abstract: Botnets--networks of (typically compromised) machines--are often used for nefarious activities (e.g., spam, click fraud, denial-of-service attacks, etc.). Identifying members of botnets could help stem these attacks, but passively detecting botnet membership (i.e., without disrupting the operation of the botnet) proves to be difficult. This paper studies the effectiveness of monitoring lookups to a DNS-based blackhole list (DNSBL) to expose botnet membership. We perform counterintelligence based on the insight that botmasters themselves perform DNSBL lookups to determine whether their spamming bots are blacklisted. Using heuristics to identify which DNSBL lookups are perpetrated by a botmaster performing such reconnaissance, we are able to compile a list of likely bots. This paper studies the prevalence of DNSBL reconnaissance observed at a mirror of a well-known blacklist for a 45-day period, identifies the means by which botmasters are performing reconnaissance, and suggests the possibility of using counterintelligence to discover likely bots. We find that bots are performing reconnaissance on behalf of other bots. Based on this finding, we suggest counterintelligence techniques that may be useful for early bot detection.
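One plausible reading of the reconnaissance heuristic can be sketched as follows: a legitimate mail server looks up addresses that have just sent it mail (and is itself looked up by others), while a reconnoitering host queries many addresses yet is rarely the subject of a lookup. The ratio threshold is an assumption, not a value from the paper:

```python
from collections import Counter, defaultdict

def recon_candidates(lookups, min_ratio=10.0):
    """Flag DNSBL queriers that look up many distinct addresses but are
    rarely looked up themselves (hypothetical out/in-lookup ratio test)."""
    outgoing = defaultdict(set)  # querier -> distinct addresses it queried
    incoming = Counter()         # address -> times it was the lookup subject
    for querier, target in lookups:
        outgoing[querier].add(target)
        incoming[target] += 1
    return sorted(q for q, targets in outgoing.items()
                  if len(targets) / (1 + incoming[q]) >= min_ratio)

# A "master" probing 20 bots stands out; a mail server doing a couple of
# reactive lookups (and being looked up itself) does not.
lookups = [("master", f"10.0.0.{i}") for i in range(20)]
lookups += [("mx.example", "203.0.113.9"), ("203.0.113.9", "mx.example")]
print(recon_candidates(lookups))  # ['master']
```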

265 citations

Proceedings Article
07 Jul 2005
TL;DR: The results are the first to quantify the extent and nature of filtering and the ability to spoof on the Internet and suggest that a large portion of the Internet is vulnerable to spoofing and concerted attacks employing spoofing remain a serious concern.
Abstract: Forging, or "spoofing," the source addresses of IP packets provides malicious parties with anonymity and novel attack vectors. Spoofing-based attacks complicate network operators' defense techniques; tracing spoofing remains a difficult and largely manual process. More sophisticated next-generation distributed denial of service (DDoS) attacks may test filtering policies and adaptively attempt to forge source addresses. To understand the current state of network filtering, this paper presents an Internet-wide active measurement spoofing project. Clients in our study attempt to send carefully crafted UDP packets designed to infer filtering policies. When filtering of valid packets is in place, we determine the filtering granularity by performing adjacent netblock scanning. Our results are the first to quantify the extent and nature of filtering and the ability to spoof on the Internet. We find that approximately one-quarter of the observed addresses, netblocks, and autonomous systems (AS) permit full or partial spoofing. Projecting this number to the entire Internet, an approximation we show is reasonable, yields over 360 million addresses and 4,600 ASes from which spoofing is possible. Our findings suggest that a large portion of the Internet is vulnerable to spoofing, and concerted attacks employing spoofing remain a serious concern.
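The adjacent netblock scanning idea can be sketched as generating, for each prefix length, a spoofed source address just outside the block containing the measurement client; if that probe is filtered, filtering is at least that granular. The probe construction below is a simplified assumption (the paper's clients send crafted UDP packets):

```python
import ipaddress

def probe_sources(client_ip, prefixes=(32, 24, 16, 8)):
    """For each prefix length p, pick a spoofed source address just
    outside the /p block containing the client (illustrative sketch)."""
    probes = []
    for p in prefixes:
        block = ipaddress.ip_network(f"{client_ip}/{p}", strict=False)
        spoofed = block.broadcast_address + 1  # first address beyond the block
        probes.append((p, str(spoofed)))
    return probes

print(probe_sources("192.0.2.1"))
# [(32, '192.0.2.2'), (24, '192.0.3.0'), (16, '192.1.0.0'), (8, '193.0.0.0')]
```

If the /32 probe is dropped but the /24 probe gets through, the filter is per-address; if everything outside the /24 is dropped, filtering is netblock-granular, and so on.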

141 citations

Proceedings Article
07 Jul 2005
TL;DR: A multi-stage spam filter based on trust and reputation for detecting spam; results show that the multistage feedback loop fares better than any single stage, and that the larger the network, the harder it is to detect a spam call.
Abstract: Voice over IP (VoIP) is a key enabling technology for the migration of circuit-switched PSTN architectures to packet-based networks. Unlike e-mail, the problem of spam in VoIP networks must be solved in real time. Many of the techniques devised for e-mail spam detection rely upon content analysis, but in the case of VoIP it is too late to analyze the media once the receiver has been picked up, so spam calls must be stopped before the telephone rings. From our observation, when deciding whether to accept or reject a voice call, people rely on the social notions of trust and reputation of the calling party. In this paper, we describe a multi-stage spam filter based on trust and reputation. In particular, we use closed-loop feedback between the different stages to decide whether an incoming call is spam. To verify the concepts, we used a laboratory setup of several thousand soft-phones and a commercial-grade proxy server, simulated spam calls, and measured the accuracy of the filter. Results show that the multistage feedback loop fares better than any single stage. Also, the larger the network, the harder it is to detect a spam call. Further work includes understanding the behavior of the different controlling parameters in the trust and reputation calculations and deriving meaningful relationships between them.
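The closed-loop feedback between stages can be sketched minimally: each stage scores the caller, the averaged score yields a verdict, and the verdict is fed back so later calls see updated reputation. The scoring formula, update rule, and threshold below are illustrative assumptions, not the paper's parameters:

```python
class ReputationStage:
    """One filter stage keeping a per-caller reputation in [0, 1]
    (0.5 = unknown); the update weights are illustrative assumptions."""
    def __init__(self):
        self.rep = {}

    def score(self, caller):
        # Spam likelihood is taken as 1 - reputation.
        return 1.0 - self.rep.get(caller, 0.5)

    def feedback(self, caller, spam):
        # Closed-loop update: nudge reputation toward the final verdict.
        old = self.rep.get(caller, 0.5)
        self.rep[caller] = 0.8 * old + 0.2 * (0.0 if spam else 1.0)

def multistage_filter(caller, stages, threshold=0.6):
    """Average the stages' spam scores, decide, then feed the decision
    back to every stage so the loop adapts over successive calls."""
    spam = sum(s.score(caller) for s in stages) / len(stages) >= threshold
    for s in stages:
        s.feedback(caller, spam)
    return spam

stage = ReputationStage()
print(multistage_filter("sip:alice@example.com", [stage]))  # False
```

With several heterogeneous stages (e.g. one tracking trust, one tracking reputation), the shared verdict couples them, which is the feedback-loop effect the paper measures against single-stage filtering.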

118 citations

Performance Metrics

No. of papers from the conference in previous years:
Year  Papers
2007       5
2006      10
2005      13