About: Blacklist is a research topic. Over its lifetime, 1,362 publications have been published within this topic, receiving 15,158 citations.
TL;DR: The ENCODE blacklist is defined: a comprehensive set of regions in the human, mouse, worm, and fly genomes that have anomalous, unstructured, or high signal in next-generation sequencing experiments independent of cell line or experiment.
Abstract: Functional genomics assays based on high-throughput sequencing greatly expand our ability to understand the genome. Here, we define the ENCODE blacklist: a comprehensive set of regions in the human, mouse, worm, and fly genomes that have anomalous, unstructured, or high signal in next-generation sequencing experiments independent of cell line or experiment. The removal of the ENCODE blacklist is an essential quality measure when analyzing functional genomics data.
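Removing blacklisted regions amounts to discarding any peak or read that overlaps a blacklisted interval. A minimal sketch of that filtering step is shown below; the coordinates and intervals are invented for illustration and are not taken from the actual ENCODE blacklist files.

```python
# Hypothetical sketch of blacklist filtering: drop any peak that overlaps
# a blacklisted genomic interval. All intervals below are invented examples.

def overlaps(region, blacklist):
    """Return True if (chrom, start, end) overlaps any blacklisted interval."""
    chrom, start, end = region
    return any(chrom == b_chrom and start < b_end and end > b_start
               for b_chrom, b_start, b_end in blacklist)

def filter_peaks(peaks, blacklist):
    """Keep only peaks that do not overlap the blacklist."""
    return [p for p in peaks if not overlaps(p, blacklist)]

blacklist = [("chr1", 1000, 2000)]  # invented blacklisted interval
peaks = [("chr1", 500, 900), ("chr1", 1500, 1600), ("chr2", 100, 200)]
print(filter_peaks(peaks, blacklist))  # the chr1 1500-1600 peak is dropped
```

In practice this is done with interval-tree or sorted-sweep tools (e.g. bedtools-style subtraction) rather than the quadratic scan above, but the overlap test itself is the same.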
04 Oct 2010
TL;DR: A characterization of spam on Twitter finds that 8% of 25 million URLs posted to the site point to phishing, malware, and scams listed on popular blacklists, and examines whether the use of URL blacklists would help to significantly stem the spread of Twitter spam.
Abstract: In this work we present a characterization of spam on Twitter. We find that 8% of 25 million URLs posted to the site point to phishing, malware, and scams listed on popular blacklists. We analyze the accounts that send spam and find evidence that it originates from previously legitimate accounts that have been compromised and are now being puppeteered by spammers. Using clickthrough data, we analyze spammers' use of features unique to Twitter and the degree to which they affect the success of spam. We find that Twitter is a highly successful platform for coercing users to visit spam pages, with a clickthrough rate of 0.13%, compared to much lower rates previously reported for email spam. We group spam URLs into campaigns and identify trends that uniquely distinguish phishing, malware, and spam, to gain insight into the underlying techniques used to attract users. Given the absence of spam filtering on Twitter, we examine whether the use of URL blacklists would help to significantly stem the spread of Twitter spam. Our results indicate that blacklists are too slow at identifying new threats, allowing more than 90% of visitors to view a page before it becomes blacklisted. We also find that even if blacklist delays were reduced, the use by spammers of URL shortening services for obfuscation negates the potential gains unless tools that use blacklists develop more sophisticated spam filtering.
11 Aug 2006
TL;DR: It is found that most spam is being sent from a few regions of IP address space, and that spammers appear to be using transient "bots" that send only a few pieces of email over very short periods of time.
Abstract: This paper studies the network-level behavior of spammers, including: IP address ranges that send the most spam, common spamming modes (e.g., BGP route hijacking, bots), how persistent across time each spamming host is, and characteristics of spamming botnets. We try to answer these questions by analyzing a 17-month trace of over 10 million spam messages collected at an Internet "spam sinkhole", and by correlating this data with the results of IP-based blacklist lookups, passive TCP fingerprinting information, routing information, and botnet "command and control" traces. We find that most spam is being sent from a few regions of IP address space, and that spammers appear to be using transient "bots" that send only a few pieces of email over very short periods of time. Finally, a small, yet non-negligible, amount of spam is received from IP addresses that correspond to short-lived BGP routes, typically for hijacked prefixes. These trends suggest that developing algorithms to identify botnet membership, filtering email messages based on network-level properties (which are less variable than email content), and improving the security of the Internet routing infrastructure may prove to be extremely effective for combating spam.
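The IP-based blacklist lookups mentioned above boil down to testing whether a sender's address falls inside any blacklisted prefix. A minimal sketch under that assumption (the listed prefixes are documentation/example ranges, not a real blacklist):

```python
# Hedged sketch of an IP-range blacklist lookup, in the spirit of filtering
# mail by network-level properties. The prefixes below are reserved
# documentation ranges used purely as stand-ins for a real blacklist.
import ipaddress

BLACKLISTED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blacklisted(ip_str):
    """Return True if the address falls inside any blacklisted prefix."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in BLACKLISTED_PREFIXES)

print(is_blacklisted("203.0.113.45"))  # True
print(is_blacklisted("192.0.2.1"))     # False
```

Production DNSBLs perform this check via DNS queries against a reversed-octet zone rather than a local list, but the membership test is conceptually the same.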
11 Aug 2010
TL;DR: Notos, a dynamic reputation system for DNS, is proposed, based on the premise that malicious, agile use of DNS has unique characteristics and can be distinguished from legitimate, professionally provisioned DNS services.
Abstract: The Domain Name System (DNS) is an essential protocol used by both legitimate Internet applications and cyber attacks. For example, botnets rely on DNS to support agile command and control infrastructures. An effective way to disrupt these attacks is to place malicious domains on a "blocklist" (or "blacklist") or to add a filtering rule in a firewall or network intrusion detection system. To evade such security countermeasures, attackers have used DNS agility, e.g., by using new domains daily to evade static blacklists and firewalls. In this paper we propose Notos, a dynamic reputation system for DNS. The premise of this system is that malicious, agile use of DNS has unique characteristics and can be distinguished from legitimate, professionally provisioned DNS services. Notos uses passive DNS query data and analyzes the network and zone features of domains. It builds models of known legitimate domains and malicious domains, and uses these models to compute a reputation score for a new domain indicative of whether the domain is malicious or legitimate. We have evaluated Notos in a large ISP's network with DNS traffic from 1.4 million users. Our results show that Notos can identify malicious domains with high accuracy (true positive rate of 96.8%) and low false positive rate (0.38%), and can identify these domains weeks or even months before they appear in public blacklists.
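A reputation system of this kind ultimately maps a feature vector describing a domain to a score. The toy sketch below is not the actual Notos model; the feature names, weights, and the simple logistic scoring are all invented to illustrate the general idea of separating agile from stably provisioned domains.

```python
# Toy illustration only (NOT the Notos model): score a domain from simple
# network/zone-style features with a hand-set linear model. Feature names
# and weights are invented for illustration.
import math

WEIGHTS = {"n_ips": 0.4, "n_countries": 0.6, "domain_age_days": -0.01}
BIAS = -1.0

def reputation_score(features):
    """Higher score ~ more likely malicious (sigmoid of a linear score)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

agile = {"n_ips": 40, "n_countries": 8, "domain_age_days": 2}       # churns IPs
stable = {"n_ips": 2, "n_countries": 1, "domain_age_days": 3000}    # long-lived
print(reputation_score(agile) > reputation_score(stable))  # True
```

The real system trains statistical models on labeled legitimate and malicious domains from passive DNS data rather than using hand-set weights.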
14 Mar 2010
TL;DR: The system exploits the observation that attackers often employ simple modifications to URLs to evade blacklisting, and proposes five heuristics to enumerate simple combinations of known phishing sites to discover new phishing URLs.
Abstract: Phishing has been an easy and effective way for trickery and deception on the Internet. While solutions such as URL blacklisting have been effective to some degree, their reliance on exact matches with the blacklisted entries makes it easy for attackers to evade. We start with the observation that attackers often employ simple modifications (e.g., changing the top-level domain) to URLs. Our system, PhishNet, exploits this observation using two components. In the first component, we propose five heuristics to enumerate simple combinations of known phishing sites to discover new phishing URLs. The second component consists of an approximate matching algorithm that dissects a URL into multiple components that are matched individually against entries in the blacklist. In our evaluation with real-time blacklist feeds, we discovered around 18,000 new phishing URLs from a set of 6,000 new blacklist entries. We also show that our approximate matching algorithm leads to very few false positives (3%) and negatives (5%).
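The component-wise matching idea can be sketched as follows: split a URL into hostname, path, and query, and count how many components agree with a blacklisted URL. This is a simplified illustration in the spirit of PhishNet's second component, not its actual algorithm; the blacklist entry and the threshold are invented.

```python
# Hedged sketch of component-wise URL matching: a URL whose path and query
# match a blacklisted entry is flagged even if its hostname was tweaked
# (e.g., a changed top-level domain). Entries below are invented examples.
from urllib.parse import urlparse

def components(url):
    """Dissect a URL into the components to be matched individually."""
    p = urlparse(url)
    return {"host": p.hostname or "", "path": p.path, "query": p.query}

def approx_match(url, blacklist, threshold=2):
    """Flag a URL if at least `threshold` non-empty components match."""
    target = components(url)
    for entry in blacklist:
        known = components(entry)
        hits = sum(target[k] == known[k] and target[k] != "" for k in target)
        if hits >= threshold:
            return True
    return False

blacklist = ["http://evil.example.com/login.php?acct=1"]
# Same path and query, different TLD -> flagged:
print(approx_match("http://evil.example.net/login.php?acct=1", blacklist))
```

A real deployment would score components with weights and similarity measures rather than exact per-component equality, but the dissect-then-match structure is the key idea.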