Showing papers in "Journal of Information Security in 2013"
••
TL;DR: The standards ISO/IEC 27000, 27001 and 27002 are international standards that are receiving growing recognition and adoption and are referred to as “common language of organizations around the world” for information security.
Abstract: With the increasing significance of information technology, there is
an urgent need for adequate measures of information security.
Systematic information security management is one of the most important initiatives
for IT management. At least since reports about privacy and security breaches,
fraudulent accounting practices, and attacks on IT systems appeared
in public, organizations have recognized their responsibilities to safeguard
physical and information assets. Security standards can be used as a guideline or
framework to develop and maintain an adequate information security management
system (ISMS). The standards ISO/IEC 27000, 27001 and 27002 are international
standards that are receiving growing recognition and adoption. They are
referred to as “common language of organizations around the world” for
information security [1]. With ISO/IEC 27001, companies can have their ISMS
certified by a third-party organization and thus provide their customers with
evidence of their security measures.
177 citations
••
TL;DR: A novel solution is proposed to handle DDoS attacks in mobile ad hoc networks (MANETs), where properties such as dynamic topologies, low battery life, multicast routing, frequent updates and network overhead, scalability, mobile-agent-based routing, and power-aware routing make the attacks harder to handle.
Abstract: Distributed Denial of Service (DDoS) attacks in networks need to be
prevented, or handled if they occur, as early as possible and before the traffic
reaches the victim. Dealing with DDoS attacks is difficult due to their
properties, such as dynamic attack rates, various kinds of targets, the large
scale of botnets, etc. A DDoS attack is hard to deal with because it is
difficult to distinguish legitimate traffic from malicious traffic, especially
when traffic arrives at different rates from distributed sources. A DDoS attack
becomes even more difficult to handle if it occurs in a wireless network,
because of the properties of ad hoc networks such as dynamic topologies, low
battery life, multicast routing, frequent updates and network overhead,
scalability, mobile-agent-based routing, and power-aware routing. Therefore, it
is better to prevent a distributed denial of service attack than to allow it to
occur and then take the necessary steps to handle it. This paper discusses
various attack mechanisms and problems caused by DDoS attacks, and also how
MANETs can be affected by these attacks. In addition, a novel solution is
proposed to handle DDoS attacks in mobile ad hoc networks (MANETs).
61 citations
••
TL;DR: A novel scheme is proposed to detect DDoS attacks efficiently using the MapReduce programming model, and a timeline of defense mechanisms and their improvements to combat DDoS attacks is provided.
Abstract: Distributed denial of service (DDoS) attacks continue to grow as a
threat to organizations worldwide. From the first known attack in 1999 to the
highly publicized Operation Ababil, DDoS attacks have a history of flooding
the victim network with an enormous number of packets, exhausting its
resources and preventing legitimate users from accessing them. Even with
standard DDoS defense mechanisms in place, attackers are still able to launch
attacks. These inadequate defense mechanisms need to be improved and integrated
with other solutions. The purpose of this paper is to study the characteristics
of DDoS attacks and the various models involved in them, and to provide a
timeline of defense mechanisms and their improvements to combat DDoS attacks.
In addition, a novel scheme is proposed to detect DDoS attacks efficiently
using the MapReduce programming model.
57 citations
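The paper's MapReduce scheme itself is not reproduced in the abstract, but the core idea of counting per-source packet rates in a map phase and aggregating them in a reduce phase can be sketched in plain Python (the record format, threshold, and IP addresses below are illustrative assumptions, not the authors' design):

```python
from collections import Counter
from itertools import chain

# Hypothetical packet records: (source_ip, dest_ip) pairs, split into shards.
def map_phase(packets):
    """Map: emit (source_ip, 1) for every observed packet."""
    return [(src, 1) for src, _dst in packets]

def reduce_phase(pairs):
    """Reduce: sum the emitted counts per source IP."""
    counts = Counter()
    for ip, n in pairs:
        counts[ip] += n
    return counts

def flag_attackers(packet_shards, threshold):
    """Flag sources whose total packet count exceeds a threshold."""
    mapped = chain.from_iterable(map_phase(s) for s in packet_shards)
    counts = reduce_phase(mapped)
    return {ip for ip, n in counts.items() if n > threshold}

shards = [
    [("10.0.0.1", "victim")] * 500 + [("10.0.0.2", "victim")] * 3,
    [("10.0.0.1", "victim")] * 700 + [("10.0.0.3", "victim")] * 5,
]
suspects = flag_attackers(shards, threshold=100)
```

In a real deployment the shards would be traffic logs distributed across a cluster, which is where the MapReduce model pays off; the single-process version only shows the counting logic.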
••
TL;DR: A survey of the most common attacks against anonymization-based PPDM & PPDP is presented, with an explanation of their effects on data privacy.
Abstract: Data mining is the extraction of interesting patterns or knowledge
from huge amounts of data. The initial idea of privacy-preserving data mining
(PPDM) was to extend traditional data mining techniques to work with data
modified to mask sensitive information. The key issues were how to modify the
data and how to recover the data mining result from the modified data.
Privacy-preserving data mining considers the problem of running data mining
algorithms on confidential data that is not supposed to be revealed even to the
party running the algorithm. In contrast, privacy-preserving data publishing
(PPDP) may not necessarily be tied to a specific data mining task, and the data
mining task may be unknown at the time of data publishing. PPDP studies how to
transform raw data into a version that is immunized against privacy attacks but
still supports effective data mining tasks. Privacy preservation for both data
mining (PPDM) and data publishing (PPDP) has become increasingly popular
because it allows the sharing of privacy-sensitive data for analysis purposes.
One well-studied approach is the k-anonymity model [1], which in turn led to
other models such as confidence bounding, l-diversity, t-closeness,
(α,k)-anonymity, etc. In particular, all known mechanisms try to minimize
information loss, and such an attempt provides a loophole for attacks. The aim
of this paper is to present a survey of the most common attack techniques
against anonymization-based PPDM & PPDP and to explain their effects on
data privacy.
50 citations
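As a concrete illustration of the baseline model the survey starts from: a table is k-anonymous when every combination of quasi-identifier values occurs in at least k records. The toy records below (hypothetical generalized values) also hint at the homogeneity problem that motivated l-diversity, one of the attack avenues such surveys cover:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """A table is k-anonymous if every combination of quasi-identifier
    values appears in at least k records."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(count >= k for count in groups.values())

# Toy records: zip code generalized to a prefix, age to a decade.
# Note the second group is 2-anonymous yet both members share "flu",
# so the sensitive value leaks anyway (a homogeneity attack).
records = [
    {"zip": "130**", "age": "30-39", "disease": "flu"},
    {"zip": "130**", "age": "30-39", "disease": "cancer"},
    {"zip": "148**", "age": "20-29", "disease": "flu"},
    {"zip": "148**", "age": "20-29", "disease": "flu"},
]
```

Here both quasi-identifier groups have size 2, so the table is 2-anonymous but not 3-anonymous.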
••
TL;DR: The results suggest that multiple facets of smartphone users’ personalities significantly affect the cognitive determinants, which indicate the behavioral intention to use security measures.
Abstract: In recent years, increasing smartphone capabilities have caused a
paradigm shift in the way users view and use mobile devices. Although
researchers have started to focus on behavioral models to explain and predict
human behavior, there is limited empirical research on the influence of
smartphone users’ individual differences on the usage of security measures. The
aim of this study is to examine the influence of individual differences on
cognitive determinants of the behavioral intention to use security measures.
Individual differences are measured by the Five-Factor Model; cognitive
determinants of behavioral intention are adapted from the validated behavioral
models theory of planned behavior and technology acceptance model. An
explorative, quantitative survey of 435 smartphone users serves as the data
basis. The results suggest that multiple facets of smartphone users’
personalities significantly affect the cognitive determinants, which indicate
the behavioral intention to use security measures. From these findings,
practical and theoretical implications for companies, organizations, and
researchers are derived and discussed.
36 citations
••
TL;DR: The author selected characteristics recognizable by ordinary users from 58 collected malware families and 1485 malware samples, and proposes recommendations for users to follow before installing an application, with the ultimate aim of mitigating damage in the Android phone community.
Abstract: Sales of phones running the Android Operating System (OS) are
increasing rapidly: prices are lower while the configured hardware is better,
so users buy them readily, and this popularity increases the risk of the spread
of mobile malware. The understanding most users have of this mobile malware is
still limited, while it grows quickly in number and level of sophistication;
its many variants create confusion for users, so concern for their safety is
warranted. In this paper, the author discusses the identification and analysis
of malware families on Android mobiles. The author selected characteristics
recognizable by ordinary users from 58 collected malware families and 1485
malware samples, and proposes recommendations for users to follow before
installing an application, with the ultimate desire to mitigate the damage in
the Android phone community, especially among ordinary users with a limited
understanding of the potential hazards. This should help ordinary users
identify mobile malware and thereby mitigate information security risk.
26 citations
••
TL;DR: This article will help identify the factors that make one method a clear choice over another, and provide the evaluation techniques necessary for readers to apply to other popular methodologies in order to make the most appropriate personal determinations.
Abstract: The defense in depth methodology was popularized in the early 2000s amid growing concerns for information
security; this paper will address the shortcomings of early implementations. In
the last two years, many supporters of the defense in depth security
methodology have changed their allegiance to an offshoot method dubbed the
defense in breadth methodology. A substantial portion of this paper’s body will
be devoted to comparing real-world usage scenarios and discussing the flaws in
each method. A major goal of this publication will be to assist readers in
selecting a method that will best benefit their personal environment. Scenarios
certainly exist where one method may be clearly favored; this article will help
identify the factors that make one method a clear choice over another. This
paper will strive not only to highlight key strengths and weaknesses for the
two strategies listed, but also provide the evaluation techniques necessary
for readers to apply to other popular methodologies in order to make the most
appropriate personal determinations.
17 citations
••
TL;DR: A traceability index is derived as a useful indicator for measuring the accuracy and completeness of evidence discovery in the digital forensic investigation process; tracing rate, mapping rate and offender identification rate are used to present the level of tracing, mapping and offender identification ability.
Abstract: Digital crime inflicts immense damage on users and systems, and it has now reached a level of sophistication that makes it difficult to track its sources or origins, especially with the advancement of modern computers and networks and the availability of diverse digital devices. Forensics has an important role in facilitating investigations of illegal activities and inappropriate behaviors using scientific methodologies, techniques and investigation frameworks. Digital forensics has developed to investigate any digital device in the detection of crime. This paper focuses on traceability aspects of the digital forensic investigation process, including the discovery of complex and huge volumes of evidence and the connection of meaningful relationships between items of evidence. The aim of this paper is to derive a traceability index as a useful indicator for measuring the accuracy and completeness of evidence discovery. This index is demonstrated through a model (TraceMap) that facilitates the investigator in tracing and mapping the evidence in order to identify the origin of the crime or incident. In this paper, the tracing rate, mapping rate and offender identification rate are used to present the level of tracing ability, mapping ability and offender identification ability, respectively. This research has high potential to be expanded into other research areas, such as digital evidence presentation.
16 citations
••
TL;DR: The research presented outlines the development of a detection framework by introducing a process that is to be implemented in conjunction with information requests, and that can be used to determine the probability of an intrusion by an authorized entity, ultimately addressing the insider threat phenomenon at its most basic level.
Abstract: When considering intrusion detection and the insider threat, most researchers tend to focus on the network architecture rather than the database, which is the primary target of data theft. It is understood that the network level is adequate for many intrusions where entry into the system is being sought; however, it is grossly inadequate when considering the database and the authorized insider. Recent writings suggest that there have been many attempts to address the insider threat phenomenon with regard to database technologies through detection methodologies, policy management systems and behavior analysis methods; however, there appears to be a lack of adequate solutions that achieve the required level of detection. While it is true that authorization is the cornerstone of the security of a database implementation, authorization alone is not enough to prevent an authorized entity from initiating malicious activities against the data stored within the database. The behavior of the authorized entity must also be considered, along with current data access control policies. Each of the previously mentioned approaches to intrusion detection at the database level has been considered individually; however, there has been limited research on producing a multileveled approach that achieves a robust solution. The research presented outlines the development of a detection framework by introducing a process that is to be implemented in conjunction with information requests. With this approach, an effective and robust methodology has been achieved that can be used to determine the probability of an intrusion by an authorized entity, ultimately addressing the insider threat phenomenon at its most basic level.
12 citations
••
TL;DR: A watermarking technique is proposed that also uses the 2D representation of self-inverting permutations and marks specific areas through partial modifications of the image’s Discrete Fourier Transform (DFT).
Abstract: In this work we propose efficient codec algorithms for watermarking
images that are intended for uploading on the web under intellectual property
protection. To this end, we recently suggested a way in which an integer number
w, transformed into a self-inverting permutation, can be represented in a
two-dimensional (2D) object; since images are 2D structures, we proposed a
watermarking algorithm that embeds marks on them using the 2D representation of
w in the spatial domain. Building on the idea behind this technique, we now
expand the concept by marking the image in the frequency domain. In particular,
we propose a watermarking technique that also uses the 2D representation of
self-inverting permutations and marks specific areas through partial
modifications of the image’s Discrete Fourier Transform (DFT). Those
modifications are made to the magnitude of specific frequency bands and
constitute the least possible additive information ensuring robustness and
imperceptibility. We have experimentally evaluated our algorithms using various
images of different characteristics under JPEG compression. The experimental
results show an improvement over the previously obtained results and also
demonstrate the validity of our proposed codec algorithms.
11 citations
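The watermark representation above hinges on self-inverting permutations, i.e. permutations π with π(π(i)) = i for all i. A minimal check of that property (the example permutation is illustrative, not one produced by the authors' encoding of a specific w):

```python
def is_self_inverting(perm):
    """perm is 1-based: perm[i-1] = pi(i). A permutation is self-inverting
    (an involution) when pi(pi(i)) = i for every i, i.e. it is its own
    inverse and consists only of fixed points and 2-cycles."""
    return all(perm[perm[i - 1] - 1] == i for i in range(1, len(perm) + 1))

# pi = (4 5 3 1 2): pi(1)=4 and pi(4)=1, pi(2)=5 and pi(5)=2, pi(3)=3.
involution = [4, 5, 3, 1, 2]
not_involution = [2, 3, 1]  # pi(pi(1)) = pi(2) = 3 != 1
```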
••
TL;DR: In this paper, the authors demonstrate the performance advantage of j-lanes hashing on SIMD architectures by coding a 4-lanes SHA-256 implementation and measuring its performance on the latest 3rd Generation Intel® Core™ processor.
Abstract: j-lanes hashing is a tree mode that splits an input message into j slices, computes j independent digests, one for each slice, and outputs the hash value of their concatenation. We demonstrate the performance advantage of j-lanes hashing on SIMD architectures by coding a 4-lanes SHA-256 implementation and measuring its performance on the latest 3rd Generation Intel® Core™ processor. For messages whose lengths range from 2 KB to 132 KB, we show that the 4-lanes SHA-256 is between 1.5 and 1.97 times faster than the fastest publicly available implementation that we are aware of, and between ~2 and ~2.5 times faster than the OpenSSL 1.0.1c implementation. For long messages, there is no significant performance difference between different choices of j. We show that the 4-lanes SHA-256 is faster than the two SHA-3 finalists (BLAKE and Keccak) that have a published tree mode implementation. Finally, we explain why j-lanes hashing will be faster on the coming AVX2 architecture, which facilitates the use of 256-bit registers. These results suggest that standardizing a tree mode for hash functions (SHA-256 in particular) could be useful for performance-hungry applications.
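The tree mode itself can be sketched with stdlib SHA-256. Contiguous slicing below is a simplification: the paper's implementation interleaves the message so that four lanes advance with each SIMD instruction, and the speedup comes from that vectorization, not from this sketch.

```python
import hashlib

def j_lanes_sha256(message: bytes, j: int = 4) -> bytes:
    """Tree mode: split the message into j slices, hash each slice
    independently, then hash the concatenation of the j digests."""
    step = (len(message) + j - 1) // j  # ceil-divide into j slices
    slices = [message[i * step:(i + 1) * step] for i in range(j)]
    digests = b"".join(hashlib.sha256(s).digest() for s in slices)
    return hashlib.sha256(digests).digest()
```

Note that the result is deliberately different from a plain SHA-256 of the same message, which is why the paper argues a tree mode would need standardization before interoperable use.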
••
TL;DR: In performing risk management in a cyber security and safety context, a detailed picture of the impact that a security/safety incident can have on an organisation is developed, which stimulates a more holistic view of the effectiveness, and appropriateness, of a countermeasure.
Abstract: Technology is increasingly being used by organisations to mediate social/business relationships and social/business transactions. While traditional models of impact assessment have focused on the loss of confidentiality, integrity and availability, we propose a new model based upon socio-technical systems thinking that places the people and the technology within an organisation’s business/functional context. Thus, in performing risk management in a cyber security and safety context, a detailed picture of the impact that a security/safety incident can have on an organisation is developed. This in turn stimulates a more holistic view of the effectiveness, and appropriateness, of a countermeasure.
••
TL;DR: The potential use of this technology includes RFID chipped passports, human implants, item-level tagging, inventory tracking and access control systems in Bangladesh.
Abstract: Radio frequency identification (RFID) is an emerging technology,
based on radio wave communication between a microchip and an electronic reader,
consisting of data gathering, distribution, and management systems that have
the ability to identify or scan information for the remote recognition of
objects with increased speed and accuracy. An attempt has been made to
understand how the use of RFID technology helps to improve services and
business process efficiency in the public and private sectors of Bangladesh.
With this aim, we have conducted an extensive literature survey. At the end of
this effort, we have come to the conclusion that the potential uses of this
technology include RFID-chipped passports, human implants, item-level tagging,
inventory tracking and access control systems. RFID technology is at an early
stage of adoption in Bangladesh, with a few business applications and pilot
studies. However, when tags begin to be associated with individuals, privacy is
threatened. RFID is a new type of threat to personal information and must be
treated as such; indeed, it must be recognized that existing privacy
legislation is not adequate. This paper also explores some current and emerging
applications of RFID in Bangladesh and examines the challenges, including
privacy and ethical issues, arising from its use, as well as addressing
potential means of handling those issues.
••
TL;DR: A new method based on an encryption algorithm applied over the spreading codes, named hidden frequency hopping, is proposed to improve the security of FHSS; the proposed algorithm is highly reliable and can be applied to all existing data communication systems based on spread spectrum techniques.
Abstract: A Frequency Hopping Spread Spectrum (FHSS) system is often deployed to protect wireless communication from jamming or to preclude undesired reception of the signal. Such aims can only be achieved if the jammer or undesired receiver does not have knowledge of the spreading code. For this reason, unencrypted M-sequences are a deficient choice for the spreading code when a high level of security is required. The primary objective of this paper is to analyze the vulnerability of linear feedback shift register (LFSR) codes. Then, a new method based on an encryption algorithm applied over the spreading codes, named hidden frequency hopping, is proposed to improve the security of FHSS. The proposed encryption security algorithm is highly reliable and can be applied to all existing data communication systems based on spread spectrum techniques. Since multi-user detection is an inherent characteristic of FHSS, multi-user interference must be studied carefully. Hence, a new method called optimum pair “key-input” selection is proposed, which reduces interference below the desired constant threshold.
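The vulnerability of unencrypted M-sequences stems from their linearity: a Fibonacci LFSR of length L is fully determined by a short run of its output (e.g. via the Berlekamp-Massey algorithm), so an eavesdropper who captures part of the spreading code can reconstruct the rest. A toy generator (the register size and taps below are an illustrative example, not a code from the paper):

```python
def lfsr_sequence(seed, taps, n):
    """Fibonacci LFSR over GF(2): each step outputs the last register bit
    and shifts in the XOR of the tapped bits. The output is an M-sequence
    of period 2^L - 1 when the feedback polynomial is primitive."""
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return out

# 4-bit register with primitive polynomial x^4 + x^3 + 1 -> period 15.
seq = lfsr_sequence([0, 0, 0, 1], taps=[2, 3], n=30)
```

The sequence looks noise-like (an M-sequence of length 2^L - 1 contains 2^(L-1) ones), yet repeats with period 15 and is linearly predictable, which is exactly why the paper layers encryption over the spreading code.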
••
TL;DR: An approach is presented for extending the constraint model defined for conformity testing of a given method of a class to its overriding method in a subclass using the inheritance principle; the approach shows that the test cases developed for testing an original method can be used for testing its overriding method in a subclass, so the number of test cases can be reduced considerably.
Abstract: This paper presents an approach for extending the constraint model defined for conformity testing of a given method of a class to its overriding method in a subclass using the inheritance principle. The first objective of the proposed work is to find the relationship between the test model of an overriding method and that of its overridden method using constraint propagation. In this context, the approach shows that the test cases developed for testing an original method can be used for testing its overriding method in a subclass, and hence the number of test cases can be reduced considerably. The second objective is the use of invalid data that do not satisfy the precondition constraint but induce valid output values, introducing a new concept of testing called secure testing. The implementation of this approach is based on random generation of test data and analysis by formal proof.
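The reuse idea can be illustrated in miniature: test data generated to satisfy a method's precondition constraint remain valid inputs for an overriding method whose precondition is no stronger. The class names and the fee rule below are invented for illustration and are not the paper's case study:

```python
class Account:
    def withdraw(self, balance, amount):
        # Precondition constraint: 0 < amount <= balance.
        assert 0 < amount <= balance
        return balance - amount

class FeeAccount(Account):
    FEE = 1

    def withdraw(self, balance, amount):
        # The overriding method keeps the same precondition, so every
        # test case generated for Account.withdraw is reusable unchanged.
        assert 0 < amount <= balance
        return balance - amount - self.FEE

# Test data generated once from the superclass precondition constraint,
# then replayed against both the original and the overriding method.
cases = [(100, 1), (100, 100), (7, 3)]
base_results = [Account().withdraw(b, a) for b, a in cases]
sub_results = [FeeAccount().withdraw(b, a) for b, a in cases]
```

Replaying the same cases against the subclass is the saving the paper quantifies: no new test-generation pass is needed for the overriding method.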
••
TL;DR: The aim of this work is the development of a steganographic technique for the MP3 audio format, which is based on the Peak Shaped Model algorithm used for JPEG images, showing interesting results.
Abstract: The aim of this work is the development of a steganographic technique for the MP3 audio format, based on the Peak Shaped Model algorithm used for JPEG images. The proposed method relies on the statistical properties of MP3 samples, which are compressed by a Modified Discrete Cosine Transform (MDCT). After the conversion of the MP3, it is possible to hide secret information by replacing the least significant bit of the MDCT coefficients. Those coefficients are chosen according to the statistical relevance of each coefficient within the distribution. The performance analysis has been carried out by calculating three steganographic parameters: the embedding capacity, the embedding efficiency and the PSNR. An attack with the Chi-Square test has also been simulated, and the results have been used to plot the ROC curve in order to calculate the error probability. Performance has been compared with that of other existing techniques, showing interesting results.
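The embedding step itself is plain LSB replacement on the selected coefficients. On integer-quantized values it can be sketched as below; the Peak Shaped Model's actual coefficient selection is not reproduced, and the coefficient values are made up for illustration:

```python
def embed_lsb(coefficients, payload_bits):
    """Replace the least significant bit of each selected (integer)
    coefficient with one payload bit, leaving the rest untouched."""
    stego = list(coefficients)
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(coefficients, n):
    """Read the payload back from the first n coefficients."""
    return [c & 1 for c in coefficients[:n]]

coeffs = [14, -7, 22, 3, -18, 9]   # illustrative quantized coefficients
bits = [1, 0, 1, 1]                # secret payload
stego = embed_lsb(coeffs, bits)
```

Because each replacement changes a coefficient by at most 1, the distortion stays small, which is what the PSNR measurement in the paper quantifies; the Chi-Square attack exploits the pairing of values that this replacement induces.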
••
TL;DR: An all-optical EXOR for cryptographic applications based on spatial soliton beams is presented; the device is based on the propagation and interaction properties of spatial solitons in a Kerr nonlinear material.
Abstract: The purpose of this paper is to present an all-optical EXOR for
cryptographic applications based on spatial soliton beams. The device is based
on the propagation and interaction properties of spatial solitons in a Kerr
nonlinear material. The interaction force between parallel soliton beams is
analyzed from the analytical point of view, and an exact solution is presented.
••
TL;DR: A method to present DNA sequences as conjugate maps to show DNA’s characteristics intuitively and can provide a reference for in-depth visualization study of DNA sequences on their measurement maps.
Abstract: Random sequences play an important role in a wide range of security
applications, such as mobile communication and network security. Because DNA
sequences possess natural randomness, this paper proposes a method to present
DNA sequences as conjugate maps in order to show DNA’s characteristics
intuitively. The method includes two core models: measuring models to transfer
DNA data into measurements, and visual models to render the tested random
sequences as distribution maps that show DNA’s characteristics. The spatial
relations between sample DNA and CA random sequences are illustrated and
compared in the end. The results show that the distributions of DNA sequences
and CA random sequences have significant differences and similarities. This
can provide a reference for in-depth visualization studies of DNA sequences on
their measurement maps.
••
TL;DR: It was found that a previously proposed anonymous authentication protocol for mobile pay-TV does not provide anonymous authentication: users can be easily tracked while using their anonymous identity, and the scheme is also subject to a denial of service attack.
Abstract: One of the most promising multimedia services is the mobile pay-TV
service. Due to its wireless nature, mobile pay-TV is vulnerable to attacks,
especially during hand-off. In 2011, an efficient anonymous authentication
protocol for mobile pay-TV was proposed. Its authors claim that the scheme
provides anonymous authentication to users by preventing intruders from
obtaining users’ IDs during the mutual authentication between mobile
subscribers and head-end systems. However, our analysis found that the scheme
does not provide anonymous authentication, and users can be easily tracked
while using their anonymous identity. The scheme is also subject to a denial of
service attack. In this paper, the deficiencies of the original scheme are
demonstrated, and an improved scheme that eliminates these deficiencies is
presented.
••
TL;DR: Risks that affect web applications are discussed, along with how network-centric and host-centric techniques, as crucial as they are in an enterprise, lack the depth necessary to comprehensively analyze overall application security.
Abstract: The World Wide Web has been an environment with many security
threats and many reported cases of security breaches. Various tools and
techniques have been applied in trying to curb this problem; however, new
attacks continue to plague the Internet. We discuss risks that affect web
applications and explain how network-centric and host-centric techniques, as
crucial as they are in an enterprise, lack the depth necessary to
comprehensively analyze overall application security. The fact that web
applications span a number of servers introduces a new dimension of security
requirements that calls for a holistic approach to protecting the information
asset, regardless of the physical or logical separation of its modules and
tiers. We therefore classify security mechanisms as either
infrastructure-centric or application-centric based on what asset is being
secured. We then describe requirements for such application-centric security
mechanisms.
••
TL;DR: Analysis shows that IPAS would strengthen the transport layer security protocol and thus enhance network security; the effectiveness of the identity authentication parameters against various attacks and security requirements is verified.
Abstract: In this paper, we prove the diminution in error approximation when
identity authentication is performed with the Ideal Password Authentication
Scheme (IPAS) for network security. The effectiveness of the identity
authentication parameters against various attacks and security requirements is
verified in the paper. The results of the analysis show that IPAS would enhance
transport layer security. Proof of the efficiency of the result is provided by
the drastic diminution in error approximation. IPAS would have advanced
security parameters, with the implemented RNA-FINNT resulting in a
fortification of the transport layer security protocol and an enhancement of
network security.
••
TL;DR: Experiments revealed a risk that URDA might be vulnerable to a bit-pattern attack due to the different ratios of characters appearing in real-world files; the problem is solved by modifying URDA to use variable fragment lengths, so that all bits in revealed sequences are distributed uniformly and independently at random.
Abstract: This paper investigates the security features of the distributed archive scheme named Uniformly Random Distributed Archive (URDA). It is a simple, fast and practically secure algorithm that meets the confidentiality and availability requirements of data. URDA cuts a file to be archived into fragments, and distributes each fragment to n - k + 1 randomly selected storages out of n storages. As a result, users need to access only k storages to recover the original file, whereas data stolen from k - 1 storages cannot reconstruct the original file. Archived files are thus nothing but sequences of a large number of fixed-length fragments. URDA is proved to eliminate both the characters and the biased bits of the original data in archived files: the probabilities of a given fragment and of a given bit appearing at a particular position are each uniformly constant. Yet, through experiments, we found a risk that URDA might be vulnerable to a bit-pattern attack due to the different ratios of characters appearing in real-world files. However, we solved the problem by modifying URDA to use variable fragment lengths, with the result that all the bits in revealed sequences are distributed uniformly and independently at random.
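The n - k + 1 placement rule is what guarantees availability: a fragment is absent from only k - 1 storages, so by the pigeonhole principle any k storages jointly hold every fragment. A small simulation of the fixed-length variant (the storage layout and parameters are illustrative, not the paper's experimental setup):

```python
import random
from itertools import combinations

def archive(num_fragments, n, k, rng):
    """Place each fragment index on n - k + 1 storages chosen uniformly
    at random. The k - 1 storages lacking a fragment can never form a
    full k-subset, so any k storages together cover everything."""
    storages = [set() for _ in range(n)]
    for frag in range(num_fragments):
        for s in rng.sample(range(n), n - k + 1):
            storages[s].add(frag)
    return storages

def recover(storages, chosen):
    """Union of the fragment indices held by the chosen storages."""
    got = set()
    for s in chosen:
        got |= storages[s]
    return got

rng = random.Random(0)
storages = archive(20, n=5, k=3, rng=rng)
# Every choice of k = 3 storages recovers all 20 fragments.
all_recover = all(
    recover(storages, c) == set(range(20))
    for c in combinations(range(5), 3)
)
```

The confidentiality side (that fragments leak no character statistics) is the part the paper's bit-pattern analysis probes; this sketch only demonstrates the availability guarantee.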
••
TL;DR: This paper shows how to control resource usage with a concept known as the package concept, which can be implemented both with and without an internet connection to ensure continual control of resources.
Abstract: Access and usage control is a major challenge in information and
computer security in a distributed, network-connected environment. Many models
have been proposed, such as traditional access control and UCONABC. Though
these models have achieved their objectives in some areas, there are some
issues both have not dealt with: namely, what happens to a resource once it has
been accessed rightfully. In view of this, this paper proposes controlling
resource usage with a concept known as the package concept. This concept can be
implemented both with and without an internet connection to ensure continual
control of resources. It packages the various types of resources with the
required policies and obligations that pertain to the use of these different
resources. The package concept of ensuring usage control focuses on resources
by classifying them into three categories: intellectual, sensitive and
non-sensitive resources. The concept likewise classifies access rights into
three categories: access to purchase, access to use temporarily online, and
access to modify. The concept also uses biometric mechanisms such as
fingerprints for authentication, to check redistribution of resources, and a
logic bomb to help ensure the fulfillment of obligations.
••
TL;DR: This paper analyzes the diffusion property of the message expansion of STITCH-256 by observing the effect of a single-bit difference on the output bits and comparing the result with that of SHA-256, showing that the probability of constructing a differential characteristic in the message expansion of STITCH-256 is reduced.
Abstract: Cryptographic hash functions are built up from individual
components, namely pre-processing, step transformation, and final processing.
Some hash functions, such as SHA-256 and STITCH-256, employ non-linear message
expansion in their pre-processing stage. STITCH-256 was claimed to produce high
diffusion in its message expansion. In a cryptographic algorithm, high
diffusion is desirable, as it helps prevent an attacker from finding
collision-producing differences, which would allow one to find collisions of
the whole function without resorting to a brute force search. In this paper, we
analyzed the diffusion property of the message expansion of STITCH-256 by
observing the effect of a single-bit difference on the output bits, and
compared the result with that of SHA-256. We repeated the same procedure in
three experiments of different rounds. The results from the experiments showed
that the minimal weight in the message expansion of STITCH-256 is very much
lower than that in the message expansion of SHA-256, i.e. the message expansion
of STITCH-256 produces high diffusion. Significantly, we showed that the
probability of constructing a differential characteristic in the message
expansion of STITCH-256 is reduced.
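STITCH-256's expansion is not reproduced here, but the mechanics of the experiment can be shown against SHA-256's message expansion as specified in FIPS 180-4: flip one input bit and count how many bits of the 64 expanded words change.

```python
MASK = 0xFFFFFFFF

def rotr(x, n):
    """Rotate a 32-bit word right by n positions."""
    return ((x >> n) | (x << (32 - n))) & MASK

def sha256_expand(words):
    """SHA-256 message expansion (FIPS 180-4): extend 16 input words to
    64 via W_t = sigma1(W_{t-2}) + W_{t-7} + sigma0(W_{t-15}) + W_{t-16}."""
    w = list(words)
    for t in range(16, 64):
        s0 = rotr(w[t - 15], 7) ^ rotr(w[t - 15], 18) ^ (w[t - 15] >> 3)
        s1 = rotr(w[t - 2], 17) ^ rotr(w[t - 2], 19) ^ (w[t - 2] >> 10)
        w.append((w[t - 16] + s0 + w[t - 7] + s1) & MASK)
    return w

def diffusion_weight(words, word_idx, bit_idx):
    """Hamming weight of the expanded-message difference caused by
    flipping a single input bit: the quantity the experiment measures."""
    flipped = list(words)
    flipped[word_idx] ^= 1 << bit_idx
    a = sha256_expand(words)
    b = sha256_expand(flipped)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))
```

Running `diffusion_weight` over every (word, bit) position and taking the minimum gives the "minimal weight" figure the paper compares between the two expansions.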