
Showing papers presented at "International Conference on Emerging Security Information, Systems and Technologies in 2008"


Proceedings ArticleDOI
25 Aug 2008
TL;DR: A new method for trust network analysis is described which is considered optimal because it does not require trust graph simplification, but instead uses edge splitting to obtain a canonical graph.
Abstract: Trust network analysis with subjective logic (TNA-SL) simplifies complex trust graphs into series-parallel graphs by removing the most uncertain paths to obtain a canonical graph. This simplification could in theory cause loss of information and thereby lead to sub-optimal results. This paper describes a new method for trust network analysis which is considered optimal because it does not require trust graph simplification, but instead uses edge splitting to obtain a canonical graph. The new method is compared with TNA-SL, and our simulation shows that both methods produce equal results. This indicates that TNA-SL in fact also represents an optimal method for trust network analysis and that the trust graph simplification does not affect the result.

142 citations
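
The canonicalisation in both TNA-SL and the edge-splitting method ultimately reduces the trust graph to serial and parallel compositions of opinions. As a hedged illustration (a sketch of Jøsang's standard subjective-logic operators, not code from the paper; the function names are mine), an opinion is a (belief, disbelief, uncertainty) triple summing to 1:

```python
def discount(ab, bx):
    """Serial edge A->B->x: A's trust in B discounts B's opinion about x."""
    b1, d1, u1 = ab
    b2, d2, u2 = bx
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

def fuse(o1, o2):
    """Parallel paths: consensus of two independent opinions about x."""
    b1, d1, u1 = o1
    b2, d2, u2 = o2
    k = u1 + u2 - u1 * u2          # assumes not both opinions are dogmatic (u = 0)
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)
```

Repeatedly applying these two operators to a series-parallel (canonical) graph yields the derived trust opinion the abstract refers to.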


Proceedings ArticleDOI
25 Aug 2008
TL;DR: This work presents the design of a trusted platform module (TPM) that supports hardware-based virtualization techniques and introduces an additional privilege level that is only used by a virtual machine monitor to issue management commands to the TPM.
Abstract: We present the design of a trusted platform module (TPM) that supports hardware-based virtualization techniques. Our approach enables multiple virtual machines to use the complete power of a hardware TPM by providing for every virtual machine (VM) the illusion that it has its own hardware TPM. For this purpose, we introduce an additional privilege level that is only used by a virtual machine monitor to issue management commands, such as scheduling commands, to the TPM. Based on a TPM Control Structure, we can ensure that state information of a virtual machine's TPM cannot corrupt the TPM state of another VM. Our approach uses recent developments in the virtualization technology of processor architectures.

61 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: A method for measuring the performance of the implementation and operation of an ISMS is presented, since registration alone says nothing about the quality and performance of an implementation.
Abstract: The ISO27001:2005, as an information security management system (ISMS), is establishing itself more and more as the security standard in enterprises. By 2008, more than 4457 certified enterprises were registered worldwide. Nevertheless, registering an ISMS still says nothing about the quality and performance of its implementation. Therefore, in this article, a method for measuring the performance of the implementation and operation of an ISMS is presented.

59 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: This paper proposes an anti-counterfeiting system applying a physical uncloneable function (PUF) as the physical identifier, using a random pattern formed by scattering particles as the PUF and encoding the digital identifier in the random pattern.
Abstract: Anti-counterfeiting is a global problem. Brand owners turn to advanced anti-counterfeiting techniques in search of a good technical solution. An anti-counterfeiting system normally binds a product with a digital identifier, which per se is encoded by a physical identifier. The physical identifier is then attached to the product. If the physical identifier is cloneable or reusable, counterfeit products can easily cheat the anti-counterfeiting system. In this paper, we propose an anti-counterfeiting system applying a physical uncloneable function (PUF) as the physical identifier. We use a random pattern formed by scattering particles as the PUF and we encode the digital identifier in the random pattern. The randomness makes the pattern extremely difficult to clone physically. The contributions of this paper are two-fold: (1) a method to encode and decode the digital identifier in the random pattern is proposed, and (2) a user-friendly anti-counterfeiting system using a PUF is built.

41 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: The effectiveness of current authentication mechanisms for MANETs in coping with the Sybil attack, the infrastructure requirements posed by these mechanisms, and the applicability of these mechanisms to different kinds of ad hoc networks are analyzed.
Abstract: In a Sybil attack, an attacker acquires multiple identities and uses them simultaneously or one by one to attack various operations of the network. Such attacks pose a serious threat to the security of self-organized networks like mobile ad-hoc networks (MANETs), which require a unique and unchangeable identity per node for detecting routing misbehavior and reliably computing a node's reputation. The purpose of this paper is to analyze the effectiveness of current authentication mechanisms for MANETs in coping with the Sybil attack, the infrastructure requirements posed by these mechanisms, and the applicability of these mechanisms to different kinds of ad hoc networks. We identify open research issues that need to be addressed by the next generation of authentication mechanisms for MANETs.

33 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: This paper uses location-based services to bind devices to an organisation's predefined geographical locations, helping to ensure that each organisation owns and manages its own devices (where its sensitive content can 'only' be accessed).
Abstract: This paper proposes a novel mechanism for protecting sensitive information inside big organisations against unauthorised disclosure by insider or outsider adversaries. Protecting sensitive content from being disclosed to unauthorised parties is a major concern for big enterprises, such as government agencies, banks, clinics and corporations. This paper mainly focuses on preventing insider information leakage whilst allowing authorised users privileged access to sensitive information from inside or outside the organisation's premises. In this paper we use location-based services to bind devices to an organisation's predefined geographical locations, helping to ensure that each organisation owns and manages its own devices (where its sensitive content can 'only' be accessed).

32 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: This article presents a proof-of-concept implementation of a single-database PIR scheme proposed by Aguilar and Gaborit and highlights that linear algebra PIR schemes allow database contents to be processed several orders of magnitude faster than previous protocols.
Abstract: A Private Information Retrieval (PIR) scheme is a protocol in which a user retrieves a record out of n from a replicated database, while hiding from the database which record has been retrieved, as long as the different replicas do not collude. An especially interesting sub-field of research, called single-database PIR, deals with schemes that allow a user to privately retrieve an element of a non-replicated database. In these schemes, user privacy is related to the intractability of a mathematical problem, instead of being based on the assumption that different replicas exist and do not collude against their users. Single-database PIR schemes have generated an enormous amount of research in the privacy protection field during the last two decades. However, many scientists believe that these are theoretical tools unusable in almost any situation. It is true that these schemes usually require the database to use a lot of computational power, but considering the large number of applications these protocols have, it is important to develop practical approaches that provide acceptable performance for as many applications as possible. We present in this article a proof-of-concept implementation of a single-database PIR scheme proposed by Aguilar and Gaborit [2, 3]. This implementation can run on a CPU or on a GPU using CUDA, nVidia's library for General Purpose computing on Graphics Processing Units (GPGPU). The performance results highlight that linear algebra PIR schemes allow database contents to be processed several orders of magnitude faster than with previous protocols.

31 citations
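
For readers unfamiliar with PIR, here is a toy sketch of the classic two-server XOR protocol for the replicated-database setting the abstract first defines. This is emphatically not the Aguilar-Gaborit lattice-based single-database scheme the paper implements; all names are illustrative:

```python
import secrets

def query(n, i):
    """Build the two queries for record i of an n-record database."""
    s1 = {j for j in range(n) if secrets.randbelow(2)}  # uniformly random subset
    s2 = s1 ^ {i}                                       # differs from s1 only at i
    return s1, s2            # each subset alone reveals nothing about i

def answer(db, subset):
    """Server side: XOR together the requested records."""
    out = 0
    for j in subset:
        out ^= db[j]
    return out

def reconstruct(a1, a2):
    """Client side: everything except db[i] cancels out."""
    return a1 ^ a2
```

Each server sees a random-looking subset, so neither learns which index was queried, assuming the two replicas do not collude; single-database PIR replaces that non-collusion assumption with a hardness assumption.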


Proceedings ArticleDOI
25 Aug 2008
TL;DR: This paper details the analysis of the traffic of a large set of real ADSL customers in the core network, builds a profile of network usage for each customer, and detects malicious ones.
Abstract: Epidemiology, the science that studies the cause and propagation of diseases, provides us with the concepts and methods to analyze the potential risk factors to which ADSL customers' PCs are exposed, with respect to their usage of network applications. This paper details the analysis of the traffic of a large set of real ADSL customers in the core network. We build a profile of network usage for each customer and we detect malicious ones. Based on these data we study the impact of certain characteristics of ADSL customer profiles on their likelihood of generating malicious traffic. We find two application types that are risk factors and we also bring evidence that the type of operating system greatly affects the odds of being infected. Based on these results we build a profile of customers more likely to be infected.

30 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: A novel attack (the synchronization attack) on listen-sleep MAC protocols, which can cause 100% message loss and approximately 30% higher energy drain throughout either a cluster or the entire network, with only a single constrained malicious node modifying its schedule.
Abstract: As wireless motes are battery powered, many listen-sleep Medium Access Control (MAC) protocols have been proposed to reduce energy consumption. Security issues related to the design of these protocols have, however, largely been ignored. In this paper, we present a novel attack (the synchronization attack) on listen-sleep MAC protocols. This attack can cause 100% message loss and approximately 30% higher energy drain throughout either a cluster or the entire network, with only a single constrained malicious node modifying its schedule. We show this attack can be applied to many slotted listen-sleep protocols such as Sensor MAC (S-MAC), its enhanced version Global Schedule Adoption (GSA), Timeout MAC (T-MAC), Dynamic Sensor-MAC (DSMAC), and Mobile S-MAC (MS-MAC). We propose a heuristically near-optimal threshold-based scheme to defend against large-scale synchronization attacks. Depending on the traffic rate, our defense can limit the message delay to at most 20% and the message drop to at most 12%. We performed extensive simulations to demonstrate the attack and its defense. Our theoretical analysis proves these results in a general listen-sleep framework. An important impact of this work is that without a reliable MAC layer, higher-layer secure protocols cannot be developed; e.g. secure routing depends on a reliable exchange of messages, and our attack disrupts this exchange.

27 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: This paper proposes the generic and complete three-level identity management model, which is relevant to service composition and provisioning in telecommunications and in computer systems, especially to their security, enrichment and customization.
Abstract: Identity management has become an issue of central importance. It is relevant to service composition and provisioning in telecommunications and in computer systems, especially to their security, enrichment and customization. However, the plethora of proprietary and open-source identity management solutions causes interoperability problems as well as legal issues. This in turn slows down the pace of service development. The key step towards reconciliation and the cohesive creation of identity management systems is to create a single and complete identity management model. This paper is a step towards this approach. It proposes a generic and complete three-level identity management model.

26 citations


Proceedings ArticleDOI
25 Aug 2008
TL;DR: An alternative to PKM-RSA is proposed in this paper which uses visual secret sharing in a pre-authentication scenario to avoid DoS attacks caused by large numbers of rogue requests in WiMAX.
Abstract: Denial of Service (DoS) attacks pose a serious threat to wireless networks. In WiMAX a large number of authentication requests sent to the Base Station (BS) by rogue Subscriber Stations (SSs) might lead to a DoS attack. Since authentication involves exchanging and checking certificates of both parties and is a computationally heavy task, the BS would allocate most of its resources to the process of evaluating certificates. An alternative to PKM-RSA is proposed in this paper which uses visual secret sharing in a pre-authentication scenario to avoid DoS attacks caused by large numbers of rogue requests. A simple XOR operation is used to check the validity of both the requesting SS and the BS, thus providing a mutual authentication scheme. It not only provides an additional layer of security but also successfully counters the above-mentioned problem in WiMAX authentication.
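
A hedged sketch of the pre-authentication idea as described: each party holds one XOR share of a shared secret pattern, so validity can be checked with a cheap XOR before any expensive RSA certificate evaluation begins. The byte-level encoding and the function names are assumptions, not the paper's:

```python
def make_shares(secret, noise):
    """Split a secret pattern into two XOR shares (visual secret sharing)."""
    share1 = bytes(noise)
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def preauth_ok(share1, share2, secret):
    """XOR the shares and compare: O(n) bit operations instead of an RSA
    certificate verification, filtering rogue requests early."""
    return bytes(a ^ b for a, b in zip(share1, share2)) == secret
```

Only a request whose share recombines to the expected pattern proceeds to the full PKM-RSA exchange; an attacker without a valid share is rejected at negligible cost to the BS.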

Proceedings ArticleDOI
25 Aug 2008
TL;DR: New anti-phishing methods using tokens and authentication e-mail are proposed, based on the assumption that the number of OPs is small; they are hence safer from attack and easier to realize from a technical viewpoint than existing methods.
Abstract: Following recent developments in digital environments, online services have become more diverse. Because of this, many users find it difficult to manage their IDs and passwords. As such, there is a rising need for effective ID management systems. Among them, OpenID is a simple, light and user-centric ID management system. However, because OpenID is vulnerable to phishing, many measures are being taken to solve this problem, although no perfect anti-phishing method has so far been devised. In this paper, new anti-phishing methods using tokens and authentication e-mail are proposed. The suggested methods are based on the assumption that the number of OPs is small, and are hence safer from attack and easier to realize from a technical viewpoint than existing methods.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: This is the first attempt to handle variable ranges of the nodes in an ad hoc network where only the source and the destination are assumed to trust each other; the effect of error in the location information is shown to be negligible and can be ignored most of the time.
Abstract: In this paper, we present a very simple and efficient end-to-end algorithm to handle wormhole attacks on ad hoc networks with variable ranges of communication. Most of the existing approaches focus on the prevention of wormholes between neighbors that trust each other. The known end-to-end mechanisms assume that all the nodes of the network have the same communication range. To the best of our knowledge this is the first attempt to handle variable ranges of the nodes in an ad hoc network where only the source and the destination are assumed to trust each other. We provide a lower bound on the minimum number of hops on a good route. Any path showing a lower hop count is shown to be under attack. Our algorithm requires every node to know its location. With very accurate GPS available, this assumption is not unreasonable. Since our protocol does not require speed or time measurements, we do not need clock synchronization. In the absence of any error in the location, there are no false alarms, i.e. no good paths are discarded. We have shown that the effect of error in the location information is negligible and can be ignored most of the time. The storage and computation overhead is low. For a path of length l, it takes only O(l) space and time.
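
The hop-count lower bound can be sketched as follows, simplifying the paper's variable-range setting to a single maximum range R (function names are illustrative): a genuine l-hop path covers at most l x R metres, so a route reporting fewer hops than dist(s, d) / R must contain a wormhole shortcut.

```python
import math

def min_hops(src, dst, max_range):
    """Lower bound on hops for any genuine route between two positions."""
    return math.ceil(math.dist(src, dst) / max_range)

def wormhole_suspected(src, dst, hop_count, max_range):
    """A reported route with fewer hops than the bound is physically
    impossible without a wormhole tunnel."""
    return hop_count < min_hops(src, dst, max_range)
```

With accurate locations the check never rejects a good path, matching the abstract's no-false-alarm claim; location error only loosens the bound slightly.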

Proceedings ArticleDOI
25 Aug 2008
TL;DR: It is argued that without a more restrictive definition, the apparently common term degenerates into a mere buzzword, which can be dangerous in terms of suggested comparability; an alternative concept for operational IS security evaluation is sketched.
Abstract: In this survey paper, we assess existing approaches to security metric definition. We classify proposed definitions and discuss their advantages and problems. We argue that without a more restrictive definition, the apparently common term degenerates into a mere buzzword, which can be dangerous in terms of suggested comparability. We conclude with some guidelines on IS metric definition and sketch an alternative concept for operational IS security evaluation.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: This paper proposes a secure task delegation model within a workflow, separating the various aspects of delegation with regard to users, tasks, events and data, portraying them in terms of a multi-layered state machine, and details a delegation protocol with a specific focus on the initial negotiation steps between the involved principals.
Abstract: Workflow management systems provide some of the required technical means to preserve integrity, confidentiality and availability at the control-, data- and task assignment layers of a workflow. We currently observe a move away from predefined strict workflow enforcement approaches towards supporting exceptions which are difficult to foresee when modelling a workflow. One specific approach for exception handling is that of task delegation. The delegation of a task from one principal to another, however, has to be managed and executed in a secure way, in this context implying the presence of a fixed set of delegation events. In this paper, we propose first and foremost a secure task delegation model within a workflow. The novel part of this model is separating the various aspects of delegation with regard to users, tasks, events and data, portraying them in terms of a multi-layered state machine. We then define delegation scenarios and analyse additional requirements to support secure task delegation over these layers. Moreover, we detail a delegation protocol with a specific focus on the initial negotiation steps between the involved principals.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: From an assisted living environment installed in 2007, real-life data are available to draw conclusions on how to assist the persons living there, and it is shown that each of the approaches has its own advantages and drawbacks in terms of short- and long-term analyses.
Abstract: From an assisted living environment installed in 2007, real-life data are available to draw conclusions on how to assist the persons living there. Reasoning is based on a large set of raw data obtained by home automation sensors. The first important step is to condense the large but rather unspecific raw data into more significant information. Two different approaches are introduced and compared to each other. It is shown how they can be used and that each of them has its own advantages and drawbacks in terms of short- and long-term analyses. The evaluation of activity and inactivity was the basis of an automated health monitoring system that allows seniors to live safely in their accustomed homes and that gives peace of mind to their relatives.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: A generic solution that ensures end-to-end access control for data generated by wireless sensors and consumed by business applications, based on a new approach for encryption-based access control is described.
Abstract: A security pattern describes a particular recurring security problem that arises in specific contexts, and presents a well-proven generic solution for it [1]. This paper describes a generic solution that ensures end-to-end access control for data generated by wireless sensors and consumed by business applications, based on a new approach for encryption-based access control. The existing security mechanism is captured as serenity ("system engineering for security and dependability") security patterns that describe a security problem and its solution in an abstract way. The structured description makes the security solution better understandable for non-security experts and helps to disseminate the security knowledge among application developers.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: A framework for attack detection which allows for an integration of various detection methods as lightweight modules which enables an easy evaluation with meaningful and comparable results based on realistic large-scale scenarios, e.g. by using a network simulator.
Abstract: Distributed denial-of-service attacks pose unpredictable threats to the Internet infrastructure and Internet-based business. Thus, many attack detection systems and anomaly detection methods have been developed in the past. A realistic evaluation of these mechanisms, with comparable results, has so far been impossible, however. Furthermore, adapting to new situations or extending existing systems is in most cases complex and time-consuming. Therefore, we developed a framework for attack detection which allows for the integration of various detection methods as lightweight modules. These modules can be combined easily and arbitrarily and thus adapted to varying situations. Additionally, our framework can be applied transparently in different runtime environments. This enables easy evaluation with meaningful and comparable results based on realistic large-scale scenarios, e.g. by using a network simulator.
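
A minimal sketch of the lightweight-module idea (class and method names are assumptions, not the framework's actual API): each detection method implements one common interface, so methods can be combined arbitrarily and the same pipeline can run unchanged in a live network or a simulator.

```python
class DetectionModule:
    """Common interface every detection method implements."""
    def inspect(self, packet):
        raise NotImplementedError

class SynFloodModule(DetectionModule):
    """Toy example module: flag sources exceeding a SYN-count threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.syn_count = {}

    def inspect(self, packet):
        if packet.get("flags") != "SYN":
            return False
        src = packet["src"]
        self.syn_count[src] = self.syn_count.get(src, 0) + 1
        return self.syn_count[src] > self.threshold

class Pipeline:
    """Modules are combined arbitrarily; every module sees every packet
    (no short-circuiting, so stateful modules stay consistent)."""
    def __init__(self, modules):
        self.modules = modules

    def inspect(self, packet):
        verdicts = [m.inspect(packet) for m in self.modules]
        return any(verdicts)
```

Swapping the packet source between a simulator trace and a live capture would then require no change to the modules themselves, which is the portability the abstract claims.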

Proceedings ArticleDOI
25 Aug 2008
TL;DR: This article focuses on encapsulating this delicate and valuable know-how into UML models through profiles, according to the meta-object facility (MOF) standards of the Object Management Group (OMG), so that it can be understood, manipulated and exploited without necessarily being an expert on SIS.
Abstract: Managing the security of information systems (SIS) is at once a tedious task and the cornerstone of business in a highly competitive environment. Successful control of SIS in a business requires many years of experience, expertise and continuous improvement. This implies real know-how on the part of the employees responsible for SIS control. This article focuses on the encapsulation of this know-how into UML models through profiles, according to the meta-object facility (MOF) standards from the Object Management Group (OMG). The main idea is to understand, manipulate and exploit this delicate and valuable know-how without necessarily being an expert on SIS. This challenge is threefold: firstly, to find a common language, as well as approaches and tools, for the engineering of both IS and SIS; secondly, to benefit from the enormous progress in the engineering of IS in order to establish and improve the engineering of SIS; and thirdly, to achieve homogeneous management and follow-up of IT projects and their security. This paper presents the context of the engineering of security of information systems, its importance and the feasibility of this challenge.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: A framework proposal to solve the problem of securing applications against input manipulation attacks is presented, using XML files and an XML Schema for security parameter specification.
Abstract: Input manipulation attacks are becoming one of the most common attacks against Web application and Web service security. As the use of firewalls and other security mechanisms is not effective against application-level attacks, new means of defense are needed. This paper presents a framework proposal to solve this problem, securing applications against input manipulation attacks. The proposed mechanism offers a reusable approach through the use of XML files and an XML Schema for security parameter specification. Furthermore, a case study and experimental results are presented. The experiment demonstrates how common input manipulation flaws can be observed.
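
A hedged illustration of the approach: the paper keeps the security parameter specification in XML files validated by an XML Schema; a plain dict stands in for that specification here, and all names and patterns are illustrative.

```python
import re

# Declarative specification of allowed inputs (the paper stores this in
# XML constrained by an XML Schema; a dict stands in here).
SPEC = {
    "username": {"pattern": r"^[A-Za-z0-9_]{1,16}$"},
    "age":      {"pattern": r"^[0-9]{1,3}$"},
}

def validate(params, spec=SPEC):
    """Reject unknown parameters and any value violating its pattern,
    before the input ever reaches application logic."""
    for name, value in params.items():
        rule = spec.get(name)
        if rule is None or not re.fullmatch(rule["pattern"], value):
            return False
    return True
```

Because the constraints live in a declarative specification rather than scattered through application code, the same validation layer is reusable across applications, which is the reuse argument the abstract makes.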

Proceedings ArticleDOI
25 Aug 2008
TL;DR: It is shown how different configurations of SIP can be specified in a protocol-centric formal language, and how both static analysis and simulations can be performed on the resulting specifications by the recently developed tool PROSA.
Abstract: The Session Initiation Protocol (SIP) is increasingly used as a signaling protocol for administrating Voice over IP (VoIP) phone calls. SIP can be configured in several ways so that different functional and security requirements are met. Careless configuration of the SIP protocol is known to lead to a large set of attacks. In this paper we show how different configurations of SIP can be specified in a protocol-centric formal language. Both static analysis and simulations can be performed on the resulting specifications by the recently developed tool PROSA. In particular, we analyze the VoIP architecture of a medium-sized Norwegian company, and show that several of the well-known threats can be found.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: This paper presents a pattern-based approach to run-time requirements monitoring and threat detection being developed as part of an approach to build frameworks supporting the construction of secure and dependable systems for ambient intelligence.
Abstract: This paper presents our pattern-based approach to run-time requirements monitoring and threat detection, being developed as part of an approach to build frameworks supporting the construction of secure and dependable systems for ambient intelligence. Our pattern infrastructure is based on templates. From templates we generate event-calculus formulas expressing security requirements to monitor at run-time. From these theories we generate attack signatures, describing threats or possible attacks on the system. At run-time, we evaluate the likelihood of threats from run-time observations using a probabilistic model based on Bayesian networks.
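
The run-time likelihood evaluation can be illustrated with a toy single-node Bayes update (the paper uses a full Bayesian network over attack signatures; the numbers and names here are invented for illustration):

```python
def update(prior, p_obs_given_threat, p_obs_given_benign):
    """One Bayes step: posterior P(threat | observation)."""
    num = p_obs_given_threat * prior
    den = num + p_obs_given_benign * (1.0 - prior)
    return num / den

# Three run-time observations matching an attack signature, each assumed
# 9x more likely under the threat hypothesis than under benign behaviour:
belief = 0.01                      # prior probability of an attack
for _ in range(3):
    belief = update(belief, 0.9, 0.1)
```

Each matching observation raises the threat likelihood; a monitor would raise an alarm once `belief` crosses a configured threshold.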

Proceedings ArticleDOI
25 Aug 2008
TL;DR: An adaptation of the safety case methodology, a 'dependability case' approach, is proposed as a practical form of organizing heterogeneous information concerning the dependability of a large communication network, to build structured argumentation for the support of dependability claims.
Abstract: IP networks, composing the Internet, form a central part of the information infrastructure of modern society. Integrated approaches to the assessment of their dependability are, however, only emerging. This paper presents three contributions towards meeting this challenge. First, we propose an adaptation of the safety case methodology, a 'dependability case' approach, as a practical form of organizing heterogeneous information concerning the dependability of a large communication network. The idea is to build structured argumentation in support of dependability claims, making use of various kinds of evidence. Second, we suggest a conceptual framework for considering the dependability of IP networks in an integrated way. Third, we propose to structure dependability cases of IP networks according to the main aspects of dependability, rather than structural units or layers of the network. The proposed methodology is tested on the Finnish University Network and found promising.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: The MATINE method is used to find vulnerabilities in antivirus (AV) software; the dependencies and vulnerabilities found indicate that different aspects of AV software should be observed in the context of critical infrastructure planning and management.
Abstract: In this paper we present an application of the MATINE method to investigating dependencies in antivirus (AV) software and some vulnerabilities arising from these dependencies. Previously, this method has been used effectively to find vulnerabilities in network protocols. Because AV software is as vulnerable as any other software and has a great security impact, we decided to use this method to find vulnerabilities in AV software. These findings may have implications for critical infrastructure, as the use of AV is often considered obligatory. The results were obtained by gathering semantic data on AV vulnerabilities, analysing the data, and performing content analysis of media follow-up. The results indicate that different aspects of AV software should be observed in the context of critical infrastructure planning and management.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: The proposed method is part of a PhD thesis aimed at defining a model for secure operation of an Internet banking environment, even in the presence of malware on the client side, designed to be resistant to today's all-too-frequent phishing and pharming attacks.
Abstract: This paper presents the authentication environment defined for securing E-Banking applications. The proposed method is part of a PhD thesis aimed at defining a model for secure operation of an Internet banking environment, even in the presence of malware on the client side. The authentication model has been designed to be easily applicable, with minimum impact on current Internet banking systems. Its goal is to be resistant to today's all-too-frequent phishing and pharming attacks, as well as to more classical ones like social engineering or man-in-the-middle attacks. The key point of this model is the need for multifactor mutual authentication, instead of simply basing security on the digital certificate of the financial entity, since in many cases users are not able to discern the validity of a certificate, and may not even pay attention to it. By following the rules defined in this proposal, the security level of the web banking environment will increase and customers' trust will be enhanced, thus allowing a more beneficial use of this service.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: Software is presented which implements the described methodology, together with results of an analysis of an exemplar information system; the integrated model is automatically transformed into an input file for a modified SSFNet simulator.
Abstract: The paper presents a method of analyzing functionality aspects of complex information systems. The system analysis is based on integrating different computer system models (the networked systems and applications model, the client behavior model, and the collaboration between services in the analyzed system) into one coherent functionality-based view suitable for system simulation. The modeling process incorporates a hierarchical approach, i.e. the system is decomposed into groups of hosts and hence groups of services, called molecules. The model is described in the authors' XML Domain Modeling Language (XDML). The integrated model is automatically transformed into an input file for a modified SSFNet simulator. Additional information on input models can be read from other models, or the user can adjust all parameters of the XDML model. The paper presents the developed software which implements the described methodology, and results of an analysis of an exemplar information system.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: The detection rules formulated to detect fuzzing attacks, which attempt to crash a VoIP device by sending it invalid SIP messages, are outlined and a system architecture that utilizes multi-core processors is proposed in order to scale up the performance of detection.
Abstract: VoIP technology has become increasingly popular and the number of its users has surged in recent years, owing to its economic advantage over traditional PSTN services. As a side effect, various VoIP servers and clients are becoming attractive targets of malicious attacks. This paper outlines the detection rules we have formulated to detect fuzzing attacks, which attempt to crash a VoIP device by sending it invalid SIP messages. It also proposes a system architecture that utilizes multi-core processors in order to scale up the performance of detection using these rules.
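
One plausible shape for such a rule, shown purely as an assumption (the authors' actual rule set is not given in the abstract): check each message's request line against the SIP grammar before it reaches the SIP stack, flagging anything malformed as a fuzzing attempt.

```python
import re

# A well-formed SIP request line is: Method SP Request-URI SP "SIP/2.0".
# The method list and URI pattern here are simplified for illustration.
REQUEST_LINE = re.compile(
    r"^(INVITE|ACK|BYE|CANCEL|REGISTER|OPTIONS) sip:[^\s]+ SIP/2\.0$"
)

def looks_fuzzed(message):
    """Flag messages whose request line violates the expected grammar."""
    first_line = message.split("\r\n", 1)[0]
    return REQUEST_LINE.match(first_line) is None
```

Such per-message grammar checks are independent of each other, which is what makes the rule evaluation easy to parallelise across the multiple cores the proposed architecture uses.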

Proceedings ArticleDOI
25 Aug 2008
TL;DR: This paper proposes a new method called PCSM which tries to overcome the weaknesses of previous systems; it is more flexible and transparent and, with the proposed architecture, can prevent many attacks, therefore providing a higher level of security.
Abstract: Data security has been an important concern for many years and has gained special importance in information technology. Since present computer systems use layered and modular architectures and execute instructions in a number of different phases, it has become imperative to establish a trusted chain between the various layers. This usually means integrity checking by hashing executable code. With a guarantee of software integrity, web servers and other network entities can trust client systems or workstations. Several methods have been proposed for this purpose, each with its own advantages and weaknesses. This paper evaluates these methods and proposes a new method called PCSM which tries to overcome the weaknesses of previous systems. The method is more flexible and transparent and, with the proposed architecture, can prevent many attacks, therefore providing a higher level of security. The paper concludes with a comparison between the proposed method and other methods.

Proceedings ArticleDOI
25 Aug 2008
TL;DR: A rule order independent local inconsistency detection algorithm is proposed to prevent automatic rule updates that can cause inconsistencies in firewall rule sets, with special focus on automatic frequent rule set updates, which is the case of the dynamic nature of next generation networks.
Abstract: Filtering is a very important issue in next generation networks. These networks consist of a relatively high number of resource-constrained devices with very special features, such as managing frequent topology changes. At each topology change, the access control policy of all nodes of the network must be automatically modified. In order to manage these access control requirements, firewalls have been proposed by several researchers. However, many of the problems of traditional firewalls are aggravated by these networks' particularities. In this paper we analyze in depth the local consistency problem in firewall rule sets, with special focus on automatic, frequent rule set updates, as required by the dynamic nature of next generation networks. We propose a rule-order-independent local inconsistency detection algorithm to prevent automatic rule updates that would cause inconsistencies. The proposed algorithms have very low computational complexity, as the experimental results show, and can be used in real-time environments.
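
A hedged, much-simplified sketch of one classic inconsistency (shadowing) over single-field port-range rules. Note that this naive version inspects rules in priority order and is therefore order-dependent, unlike the paper's order-independent algorithm; all names are illustrative:

```python
def covers(a, b):
    """True if range a = (lo, hi) contains range b entirely."""
    return a[0] <= b[0] and b[1] <= a[1]

def shadowed_rules(rules):
    """rules: list of ((lo, hi), action) in priority order. A rule is
    shadowed when an earlier rule matches every packet it matches but
    applies the opposite action, so it can never take effect."""
    bad = []
    for i, (rb, act_b) in enumerate(rules):
        for ra, act_a in rules[:i]:
            if covers(ra, rb) and act_a != act_b:
                bad.append(i)
                break
    return bad
```

An automatic topology-driven update would be rejected (or flagged) whenever the resulting rule set contains shadowed entries, which is the role the detection algorithm plays in the paper.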

Proceedings ArticleDOI
25 Aug 2008
TL;DR: A novel technique for the automatic identification, analysis and measurement of botnets used to deliver malicious email is presented and a reference implementation of a system developed to demonstrate these techniques is described.
Abstract: The majority of virus, spam and malicious emails are sent through a network of compromised computers, or botnet. The early discovery and identification of a botnet is an important aspect of understanding, and developing responses to, new threats aimed at email systems and their users. In this paper we present a novel technique for the automatic identification, analysis and measurement of botnets used to deliver malicious email. The paper also describes a reference implementation of a system developed to demonstrate these techniques. This system has been deployed in a live environment and has been shown to be highly effective in use. Practical applications of the techniques developed, including improved anti-spam and anti-virus systems, are presented.