Author

James Moscola

Bio: James Moscola is an academic researcher from York College of Pennsylvania. The author has contributed to research in topics including network packets and parsing. The author has an h-index of 14 and has co-authored 35 publications receiving 1,051 citations. Previous affiliations of James Moscola include Washington University in St. Louis and the University of Washington.

Papers
Proceedings ArticleDOI
09 Apr 2003
TL;DR: A module implemented in Field Programmable Gate Array (FPGA) hardware scans the content of Internet packets at gigabit/second rates; a content matching server automatically generates the Finite State Machines (FSMs) used to search for regular expressions.
Abstract: A module has been implemented in Field Programmable Gate Array (FPGA) hardware that scans the content of Internet packets at Gigabits/second rates. All of the packet processing operations are performed using reconfigurable hardware within a single Xilinx Virtex XCV2000E FPGA. A set of layered protocol wrappers is used to parse the headers and payloads of packets for Internet protocol data. A content matching server automatically generates the Finite State Machines (FSMs) to search for regular expressions. The complete system is operated on the Field-programmable Port Extender (FPX) platform.
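As a rough software analogue of the scanning step described above, the sketch below compiles a set of search patterns and streams each packet payload through them; Python's re engine stands in for the automatically generated hardware FSMs, and the signature list is purely illustrative (the FPX platform and protocol wrappers are not modeled).

```python
# Software analogue of the FPGA content scanner: each pattern is compiled
# once, then every packet payload is checked against the compiled matchers.
# Python's re engine stands in for the generated hardware FSMs.
import re

# Illustrative signature set; in the described system a content matching
# server generates hardware FSMs for expressions like these.
SIGNATURES = [rb"GET /root\.exe", rb"\x90{16,}"]
MATCHERS = [re.compile(p) for p in SIGNATURES]

def scan_payload(payload: bytes) -> list[int]:
    """Return the indices of the signatures that match this payload."""
    return [i for i, m in enumerate(MATCHERS) if m.search(payload)]

print(scan_payload(b"GET /root.exe HTTP/1.0\r\n"))  # -> [0]
```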

286 citations

Journal ArticleDOI
TL;DR: A hardware-based TCP/IP content-processing system supports content scanning and flow blocking for millions of flows at gigabit line rates; it integrates and extends the capabilities of the TCP splitter and the earlier content-scanning engine.
Abstract: The transmission control protocol is the workhorse protocol of the Internet. Most of the data passing through the Internet transits the network using TCP layered atop the Internet protocol (IP). Monitoring, capturing, filtering, and blocking traffic on high-speed Internet links requires the ability to directly process TCP packets in hardware. High-speed network intrusion detection and prevention systems guard against several types of threats. As the gap between network bandwidth and computing power widens, improved microelectronic architectures are needed to monitor and filter network traffic without limiting throughput. To address these issues, we've designed a hardware-based TCP/IP content-processing system that supports content scanning and flow blocking for millions of flows at gigabit line rates. The TCP splitter technology was previously developed to monitor TCP data streams, sending a consistent byte stream of data to a client application for every TCP data flow passing through the circuit. The content-scanning engine can scan the payload of packets for a set of regular expressions. The new TCP-based content-scanning engine integrates and extends the capabilities of the TCP splitter and the old content-scanning engine. IP packets travel to the TCP processing engine from the lower-layer-protocol wrappers. Hash tables are used to index memory that stores each flow's state.
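The flow-state lookup mentioned in the last sentence can be pictured with the short sketch below, in which an ordinary dictionary keyed by the TCP 4-tuple stands in for the hardware hash tables that index per-flow memory; the field names are assumptions for illustration, not the circuit's actual record layout.

```python
# Sketch of per-flow state lookup: the hardware uses hash tables to index
# memory holding each flow's state; here a dict keyed by the TCP 4-tuple
# plays that role. Field names are illustrative.
from dataclasses import dataclass

FlowKey = tuple[str, int, str, int]  # (src_ip, src_port, dst_ip, dst_port)

@dataclass
class FlowState:
    next_seq: int = 0        # next expected TCP sequence number
    scanner_state: int = 0   # current state of the content-scanning engine
    blocked: bool = False    # set once the flow has been blocked

flow_table: dict[FlowKey, FlowState] = {}

def lookup(key: FlowKey) -> FlowState:
    """Fetch, or create on first sight, the state record for a flow."""
    return flow_table.setdefault(key, FlowState())
```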

128 citations

Patent
21 May 2002
TL;DR: In this paper, a reprogrammable packet processing system for processing a stream of data is described, where a reconfiguration device receives input from a user specifying the data pattern and action, processes the input to generate the configuration information necessary to reprogram the PLD, and transmits the configuration to the packet processor for reprogramming thereof.
Abstract: A reprogrammable packet processing system for processing a stream of data is disclosed herein. A reprogrammable data processor is implemented with a programmable logic device (PLD), such as a field programmable gate array (FPGA), that is programmed to determine whether a stream of data applied thereto includes a string that matches a redefinable data pattern. If a matching string is found, the data processor performs a specified action in response thereto. The data processor is reprogrammable to search packets for the presence of different data patterns and/or perform different actions when a matching string is detected. A reconfiguration device receives input from a user specifying the data pattern and action, processes the input to generate the configuration information necessary to reprogram the PLD, and transmits the configuration information to the packet processor for reprogramming thereof.
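A minimal software sketch of the pattern/action behaviour claimed above follows; rules pair a redefinable pattern with an action, and "reprogramming" is modeled simply as swapping in a new rule list. The bitstream generation performed by the reconfiguration device and the PLD itself are not modeled, and the example patterns and actions are hypothetical.

```python
# Sketch of the pattern/action idea: a rule pairs a redefinable data pattern
# with an action; reprogramming is modeled as replacing the rule set at
# runtime. FPGA configuration generation is not modeled.
import re
from typing import Callable

Rule = tuple[re.Pattern[bytes], Callable[[bytes], None]]

def alert(pkt: bytes) -> None:
    print("alert: suspicious packet")

def drop(pkt: bytes) -> None:
    print("drop: packet discarded")

rules: list[Rule] = [(re.compile(rb"\.exe"), alert)]

def reprogram(new_rules: list[Rule]) -> None:
    """Analogue of transmitting new configuration to the packet processor."""
    rules[:] = new_rules

def process(pkt: bytes) -> None:
    for pattern, action in rules:
        if pattern.search(pkt):
            action(pkt)

process(b"GET /tool.exe")                       # -> alert
reprogram([(re.compile(rb"\.dll"), drop)])      # new pattern, new action
process(b"GET /tool.exe")                       # no longer matches
```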

124 citations

Proceedings ArticleDOI
24 Feb 2015
TL;DR: A preliminary exploratory analysis of data collected by the CloudCoder programming exercise system from five introductory courses, taught in two programming languages across three colleges and universities, explores a number of interesting correlations in the data that confirm existing hypotheses.
Abstract: Web-based programming exercises are a useful way for students to practice and master essential concepts and techniques presented in introductory programming courses. Although these systems are used fairly widely, we have a limited understanding of how students use these systems, and what can be learned from the data collected by these systems. In this paper, we perform a preliminary exploratory analysis of data collected by the CloudCoder programming exercise system from five introductory courses taught in two programming languages across three colleges and universities. We explore a number of interesting correlations in the data that confirm existing hypotheses. Finally, and perhaps most importantly, we demonstrate the effectiveness and future potential of systems like CloudCoder to help us study novice programmers.
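As one concrete, purely hypothetical illustration of the kind of exploratory correlation analysis described, the sketch below summarizes a submission log per student and correlates attempts with exercises passed; the column names and CSV export are assumptions, not the actual CloudCoder schema or findings.

```python
# Sketch of an exploratory correlation over exercise-submission data using
# pandas. Column names and the CSV export are hypothetical; the real
# CloudCoder schema and the paper's findings are not reproduced here.
import pandas as pd

events = pd.read_csv("submissions.csv")  # hypothetical export of submission events

per_student = events.groupby("student_id").agg(
    attempts=("submission_id", "count"),
    passed=("all_tests_passed", "sum"),
)

# One candidate correlation: total attempts vs. exercises passed.
print(per_student["attempts"].corr(per_student["passed"]))
```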

68 citations

01 Jan 2003
TL;DR: A platform has been implemented that actively scans and filters Internet traffic for Internet worms and viruses at multi-gigabit/second rates using the Field-programmable Port Extender (FPX); the FPX contains logic that allows modules to be dynamically reconfigured to scan for new signatures.
Abstract: The security of the Internet can be improved using Programmable Logic Devices (PLDs). A platform has been implemented that actively scans and filters Internet traffic for Internet worms and viruses at multi-Gigabit/second rates using the Field-programmable Port Extender (FPX). Modular components implemented with Field Programmable Gate Array (FPGA) logic on the FPX process packet headers and scan for signatures of malicious software (malware) carried in packet payloads. FPGA logic is used to implement circuits that track the state of Internet flows and search for regular expressions and fixed-strings that appear in the content of packets. The FPX contains logic that allows modules to be dynamically reconfigured to scan for new signatures. Network-wide protection is achieved by the deployment of multiple systems throughout the Internet.
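A compact software stand-in for the scan-and-reconfigure behaviour described above is sketched below: payloads are checked against fixed-string and regular-expression signatures, and swapping the signature set at runtime plays the role of dynamically reconfiguring FPX modules. The signatures shown are illustrative only.

```python
# Sketch of scanning for fixed strings and regular expressions, with a
# runtime signature swap standing in for dynamic module reconfiguration.
# Signatures are illustrative, not real malware indicators.
import re

fixed_signatures: set[bytes] = {b"root.exe"}
regex_signatures: list[re.Pattern[bytes]] = [re.compile(rb"\x90{32,}")]

def scan(payload: bytes) -> bool:
    """Return True if the payload matches any currently loaded signature."""
    if any(sig in payload for sig in fixed_signatures):
        return True
    return any(p.search(payload) for p in regex_signatures)

def reconfigure(new_fixed: set[bytes], new_regex: list[re.Pattern[bytes]]) -> None:
    """Analogue of dynamically reconfiguring the scanning modules."""
    fixed_signatures.clear()
    fixed_signatures.update(new_fixed)
    regex_signatures[:] = new_regex
```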

61 citations


Cited by
Proceedings Article
06 Dec 2004
TL;DR: The initial experience suggests that, for a wide range of network pathogens, it may be practical to construct fully automated defenses - even against so-called "zero-day" epidemics.
Abstract: Network worms are a clear and growing threat to the security of today's Internet-connected hosts and networks. The combination of the Internet's unrestricted connectivity and widespread software homogeneity allows network pathogens to exploit tremendous parallelism in their propagation. In fact, modern worms can spread so quickly, and so widely, that no human-mediated reaction can hope to contain an outbreak. In this paper, we propose an automated approach for quickly detecting previously unknown worms and viruses based on two key behavioral characteristics - a common exploit sequence together with a range of unique sources generating infections and destinations being targeted. More importantly, our approach - called "content sifting" - automatically generates precise signatures that can then be used to filter or moderate the spread of the worm elsewhere in the network. Using a combination of existing and novel algorithms we have developed a scalable content sifting implementation with low memory and CPU requirements. Over months of active use at UCSD, our Earlybird prototype system has automatically detected and generated signatures for all pathogens known to be active on our network as well as for several new worms and viruses which were unknown at the time our system identified them. Our initial experience suggests that, for a wide range of network pathogens, it may be practical to construct fully automated defenses - even against so-called "zero-day" epidemics.
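The core of the content-sifting heuristic can be sketched in a few lines, as below: a payload substring becomes a candidate signature once it is both highly prevalent and seen between many distinct sources and destinations. The window length and thresholds are illustrative, and the Rabin fingerprints and multistage filters that make the real Earlybird implementation scalable are omitted.

```python
# Simplified sketch of content sifting: count substring prevalence and
# address dispersion; substrings exceeding both thresholds become candidate
# signatures. Thresholds and window length are illustrative.
from collections import defaultdict

WINDOW = 8            # substring length to track
PREVALENCE_T = 100    # repetitions required
DISPERSION_T = 30     # distinct sources and destinations required

prevalence: dict[bytes, int] = defaultdict(int)
sources: dict[bytes, set[str]] = defaultdict(set)
dests: dict[bytes, set[str]] = defaultdict(set)

def sift(payload: bytes, src: str, dst: str) -> list[bytes]:
    """Update counters for one packet; return newly qualifying substrings."""
    candidates = []
    for i in range(len(payload) - WINDOW + 1):
        s = payload[i:i + WINDOW]
        prevalence[s] += 1
        sources[s].add(src)
        dests[s].add(dst)
        if (prevalence[s] >= PREVALENCE_T
                and len(sources[s]) >= DISPERSION_T
                and len(dests[s]) >= DISPERSION_T):
            candidates.append(s)
    return candidates
```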

786 citations

Journal ArticleDOI
TL;DR: This work describes a hardware-based technique using Bloom filters that can detect strings in streaming data without degrading network throughput; the technique queries a database of strings to check for the membership of a particular string.
Abstract: There is a class of packet processing applications that inspect packets deeper than the protocol headers to analyze content. For instance, network security applications must drop packets containing certain malicious Internet worms or computer viruses carried in a packet payload. Content forwarding applications look at the hypertext transport protocol headers and distribute the requests among the servers for load balancing. Packet inspection applications, when deployed at router ports, must operate at wire speeds. With networking speeds doubling every year, it is becoming increasingly difficult for software-based packet monitors to keep up with the line rates. We describe a hardware-based technique using Bloom filters, which can detect strings in streaming data without degrading network throughput. A Bloom filter is a data structure that stores a set of signatures compactly by computing multiple hash functions on each member of the set. This technique queries a database of strings to check for the membership of a particular string. The answer to this query can be a false positive but never a false negative. An important property of this data structure is that the computation time involved in performing the query is independent of the number of strings in the database, provided the memory used by the data structure scales linearly with the number of strings stored in it. Furthermore, the amount of storage required by the Bloom filter for each string is independent of its length.
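The data structure described above is small enough to sketch directly; the following minimal Bloom filter derives its k bit positions from a single SHA-256 digest, which is one simple software choice (hardware implementations compute independent hash functions), and the sizing parameters are arbitrary.

```python
# Minimal Bloom filter sketch: k hash-derived positions are set per inserted
# string; a query can return a false positive but never a false negative.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 16, k: int = 4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k positions from one SHA-256 digest (a simple software
        # choice; hardware versions use dedicated hash circuits).
        digest = hashlib.sha256(item).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add(b"example-signature")
print(b"example-signature" in bf)  # True
print(b"something-else" in bf)     # almost certainly False
```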

707 citations

Patent
26 Dec 2006
TL;DR: In this article, a runtime adaptable search processor is proposed that provides high speed content search capability to meet the performance need of network line rates growing to 1 Gbps, 10 Gbps and higher.
Abstract: A runtime adaptable search processor is disclosed. The search processor provides high speed content search capability to meet the performance need of network line rates growing to 1 Gbps, 10 Gbps and higher. The search processor provides a unique combination of NFA and DFA based search engines that can process incoming data in parallel to perform the search against the specific rules programmed in the search engines. The processor architecture also provides capabilities to transport and process Internet Protocol (IP) packets from Layer 2 through the transport protocol layer and may also provide packet inspection through Layer 7. A security system is also disclosed that enables a new way of implementing security capabilities inside enterprise networks in a distributed manner using a protocol processing hardware with appropriate security features.
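One way to picture the combined NFA/DFA search idea is the rough sketch below, in which the rule set is split between two software stand-in engines that scan the same input and whose hits are merged; the rule split, the patterns, and the engines themselves are assumptions for illustration, not the patented hardware pipelines.

```python
# Rough sketch of splitting rules across two search engines and merging the
# results. Both "engines" are software stand-ins; the parallel hardware
# pipelines of the described processor are not modeled.
import re

dfa_rules = [b"/etc/passwd", b"cmd.exe"]               # simple fixed strings
nfa_rules = [re.compile(rb"(%[0-9a-fA-F]{2}){8,}")]    # more complex patterns

def search(data: bytes) -> list[str]:
    hits = [f"fixed:{r!r}" for r in dfa_rules if r in data]
    hits += [f"regex:{p.pattern!r}" for p in nfa_rules if p.search(data)]
    return hits

print(search(b"GET /etc/passwd%41%42%43%44%45%46%47%48 HTTP/1.1"))
```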

577 citations

Journal ArticleDOI
11 Aug 2006
TL;DR: This paper introduces a new representation for regular expressions, called the Delayed Input DFA (D2FA), which substantially reduces space requirements as compared to a DFA, and describes an efficient architecture that can perform deep packet inspection at multi-gigabit rates.
Abstract: There is a growing demand for network devices capable of examining the content of data packets in order to improve network security and provide application-specific services. Most high performance systems that perform deep packet inspection implement simple string matching algorithms to match packets against a large, but finite set of strings. However, there is growing interest in the use of regular expression-based pattern matching, since regular expressions offer superior expressive power and flexibility. Deterministic finite automata (DFA) representations are typically used to implement regular expressions. However, DFA representations of regular expression sets arising in network applications require large amounts of memory, limiting their practical application. In this paper, we introduce a new representation for regular expressions, called the Delayed Input DFA (D2FA), which substantially reduces space requirements as compared to a DFA. A D2FA is constructed by transforming a DFA via incrementally replacing several transitions of the automaton with a single default transition. Our approach dramatically reduces the number of distinct transitions between states. For a collection of regular expressions drawn from current commercial and academic systems, a D2FA representation reduces transitions by more than 95%. Given the substantially reduced space requirements, we describe an efficient architecture that can perform deep packet inspection at multi-gigabit rates. Our architecture uses multiple on-chip memories in such a way that each remains uniformly occupied and accessed over a short duration, thus effectively distributing the load and enabling high throughput. Our architecture can provide cost-effective packet content scanning at OC-192 rates with memory requirements that are consistent with current ASIC technology.
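A toy version of the default-transition idea can be written down directly, as in the sketch below: states keep only the transitions that differ from a chosen default state and fall back to it otherwise. For simplicity every state here defaults to state 0, whereas the paper builds a whole tree of default pointers under a path-length bound; the example DFA is illustrative.

```python
# Toy sketch of default transitions: each non-root state stores only the
# transitions that differ from the root's, and a lookup falls back to the
# root otherwise. The D2FA paper builds a full tree of default pointers
# under a length bound; this depth-1 variant just illustrates the idea.

def compress(dfa: dict[int, dict[str, int]], root: int = 0):
    """Return per-state residual transition tables; the root keeps everything."""
    residual = {}
    for state, table in dfa.items():
        if state == root:
            residual[state] = dict(table)
        else:
            residual[state] = {c: n for c, n in table.items()
                               if dfa[root].get(c) != n}
    return residual

def step(residual: dict[int, dict[str, int]], root: int, state: int, ch: str) -> int:
    """Take the explicit transition if present, else follow the default pointer."""
    if ch in residual[state]:
        return residual[state][ch]
    return residual[root][ch]

# Tiny DFA over {'a', 'b'} recognising strings that contain "ab".
dfa = {0: {"a": 1, "b": 0}, 1: {"a": 1, "b": 2}, 2: {"a": 2, "b": 2}}
residual = compress(dfa)
print(residual)  # states 1 and 2 now store only the transitions that differ
```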

553 citations

Book
02 Nov 2007
TL;DR: This book is intended as an introduction to the entire range of issues important to reconfigurable computing, using FPGAs as the context, or "computing vehicles" to implement this powerful technology.
Abstract: The main characteristic of Reconfigurable Computing is the presence of hardware that can be reconfigured to implement specific functionality more suitable for specially tailored hardware than on a simple uniprocessor. Reconfigurable computing systems join microprocessors and programmable hardware in order to take advantage of the combined strengths of hardware and software and have been used in applications ranging from embedded systems to high performance computing. Many of the fundamental theories have been identified and used by the Hardware/Software Co-Design research field. Although the same background ideas are shared in both areas, they have different goals and use different approaches. This book is intended as an introduction to the entire range of issues important to reconfigurable computing, using FPGAs as the context, or "computing vehicles" to implement this powerful technology. It will take a reader with a background in the basics of digital design and software programming and provide them with the knowledge needed to be an effective designer or researcher in this rapidly evolving field.
· Treatment of FPGAs as computing vehicles rather than glue-logic or ASIC substitutes
· Views of FPGA programming beyond Verilog/VHDL
· Broad set of case studies demonstrating how to use FPGAs in novel and efficient ways

531 citations