
Showing papers presented at "International Conference on Emerging Security Information, Systems and Technologies in 2015"


Proceedings Article
23 Aug 2015
TL;DR: This paper provides an initial analysis of nonces and classifies the types of nonces used in different cryptographic protocols, in response to ENISA's verdict that the design and analysis of cryptographic protocols has not reached maturity.
Abstract: Last year the European Union Agency for Network and Information Security (ENISA) published a report on cryptographic protocols. A main verdict was that we still have not reached maturity for the design and analysis of cryptographic protocols. This is bad news for a society that has become dependent on well-functioning information and communication technology (ICT) infrastructures. In this paper, we address this by investigating the nonce. The nonce, a number-used-once, is but one small element of a cryptographic protocol, yet a critically important one. There is relatively little to be found in the literature regarding the properties of the nonce. This paper is thus an attempt to improve on this, by providing an initial analysis of nonces and classifying the types of nonces used in different cryptographic protocols. Keywords–Number-used-once; Nonce; Randomness; Freshness; Timeliness; Uniqueness; Non-repeatability; Cryptographic protocol.
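The classification itself is the paper's contribution; purely as an illustrative sketch (not taken from the paper), the snippet below shows three common ways a nonce is obtained in practice and which property each emphasizes (randomness, uniqueness, or timeliness).

```python
# Illustrative sketch only: three typical nonce sources and their trade-offs.
import itertools
import os
import time

def random_nonce(n_bytes: int = 16) -> bytes:
    """Unpredictable; unique only with overwhelming probability."""
    return os.urandom(n_bytes)

_counter = itertools.count(1)

def counter_nonce() -> int:
    """Strictly unique per instance, but predictable and stateful."""
    return next(_counter)

def timestamp_nonce() -> int:
    """Conveys freshness/timeliness, but can repeat within one clock tick."""
    return time.time_ns()
```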

9 citations


Proceedings Article
23 Aug 2015
TL;DR: The main contribution is the use of short signatures for SMS origin authentication, which makes it possible to pack in a single, 140-byte SMS all information necessary to authenticate the origin of encrypted messages, while the user is still left with a useful length of text.
Abstract: This paper details the construction of an application framework for SMS security that provides secrecy, integrity, authentication, and non-repudiation for SMS messages. The proposed framework integrates authenticated encryption and short digital signatures with management services for cryptographic keys and digital certificates. The framework hides from final users all details concerning certificate and key management. A flexible trade-off between security objectives and message length makes it possible to offer three levels of security: (i) secrecy only, (ii) secrecy and message authentication, and (iii) secrecy, origin authentication and non-repudiation. The main contribution is the use of short signatures for SMS origin authentication, which makes it possible to pack in a single, 140-byte SMS all information necessary to authenticate the origin of encrypted messages, while the user is still left with a useful length of text.
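As a back-of-the-envelope illustration of the trade-off described above, the sketch below works out how much of a 140-byte SMS remains for text once security overhead is included. The field sizes are assumptions for illustration, not the paper's actual encoding.

```python
# Assumed sizes, not the paper's exact message layout.
SMS_PAYLOAD = 140   # bytes in a single binary SMS
HEADER = 4          # assumed: version, security level, key/certificate hint
AUTH_TAG = 16       # assumed: authenticated-encryption tag
SHORT_SIG = 40      # assumed: size of a short signature

def remaining_text_bytes(with_signature: bool) -> int:
    overhead = HEADER + AUTH_TAG + (SHORT_SIG if with_signature else 0)
    return SMS_PAYLOAD - overhead

print(remaining_text_bytes(with_signature=False))  # 120 bytes left for text
print(remaining_text_bytes(with_signature=True))   # 80 bytes left for text
```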

9 citations


Proceedings Article
23 Aug 2015
TL;DR: The risk factors identified in the previous study are analyzed and quantitatively evaluated, and it was found that the proposed countermeasures could reduce their corresponding risk factors by about 18% to 36%.
Abstract: With recent progress in Internet services and high-speed network environments, cloud computing has rapidly developed. Furthermore, the hybrid cloud configuration is now attracting attention, because it offers the advantages of both public and private clouds. However, public clouds have the problem of uncertain security, while private clouds have the problem of high cost. Thus, risk assessment in a hybrid cloud configuration is an important issue. Our previous study qualitatively analyzed risk assessment of the hybrid cloud configuration: through analysis of risk in a hybrid cloud configuration, 21 risk factors were extracted and evaluated, and countermeasures were proposed. However, we recognized that it was only a qualitative study and that a quantitative evaluation would be needed to make its countermeasures more practical. Hence, in this paper, the risk factors identified in the previous study are analyzed and quantitatively evaluated. Specifically, the values of the risk factors were approximately calculated by using a risk formula from the field of information security management systems (ISMS). On the basis of these values, the effect of the countermeasures proposed in the previous study was evaluated quantitatively. It was found that the countermeasures in the previous study could reduce their corresponding risk factors by about 18% to 36%. The results herein can be used to promote hybrid cloud computing services in the future. Keywords–Risk Assessment; Hybrid Cloud Configuration; Risk Matrix; Risk Value Formula; Information Security Management System (ISMS)
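A minimal sketch of the kind of ISMS-style risk value calculation the abstract refers to is shown below; the three-level scales and the product formula are a common ISMS convention, not necessarily the authors' exact parameters.

```python
# Common ISMS form: risk value = asset value x threat level x vulnerability level.
def risk_value(asset_value: int, threat: int, vulnerability: int) -> int:
    return asset_value * threat * vulnerability

def reduction_percent(before: int, after: int) -> float:
    return 100.0 * (before - after) / before

before = risk_value(asset_value=3, threat=3, vulnerability=3)  # 27
after = risk_value(asset_value=3, threat=3, vulnerability=2)   # countermeasure lowers vulnerability
print(f"{reduction_percent(before, after):.0f}%")              # 33%, within the reported 18% to 36% range
```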

5 citations


Proceedings Article
23 Aug 2015
TL;DR: It is shown how APTs can be tackled using a generic ICT risk analysis framework using graph databases and the major benefits of this graph database approach, i.e., the simple representation of the interconnected risk model as a graph and the availability of efficient traversals over complex sections of the graph.
Abstract: Advanced Persistent Threats (APTs) impose an increasing threat on today’s information and communication technology (ICT) infrastructure. These highly-sophisticated attacks overcome the typical perimeter protection mechanisms of an organization and generate a large amount of damage. Based on a practical use case of a real-life APT lifecycle, this paper shows how APTs can be tackled using a generic ICT risk analysis framework. Further, it provides details for the implementation of this risk analysis framework using graph databases. The major benefits of this graph database approach, i.e., the simple representation of the interconnected risk model as a graph and the availability of efficient traversals over complex sections of the graph, are illustrated giving several examples. Keywords–risk management; APT; ICT security; graph databases; interconnected risk model.
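A hedged sketch of the "interconnected risk model as a graph" idea follows, using networkx in memory; the nodes and edges are invented for illustration, and the paper itself stores the model in a graph database and traverses it there.

```python
import networkx as nx

# Invented example of an APT lifecycle modeled as a directed graph.
g = nx.DiGraph()
g.add_edge("spear-phishing email", "workstation compromised")
g.add_edge("workstation compromised", "credentials stolen")
g.add_edge("credentials stolen", "domain controller compromised")
g.add_edge("domain controller compromised", "customer data exfiltrated")

# A traversal over a complex section of the graph: every path from the
# initial foothold to the asset of interest.
for path in nx.all_simple_paths(g, "spear-phishing email", "customer data exfiltrated"):
    print(" -> ".join(path))
```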

5 citations


Proceedings Article
23 Aug 2015
TL;DR: A security analysis applying an adaptive threat model suited to analysing human-device and human-human channels is undertaken, and more realistic threats on these channels are identified, compared to those from an analysis using a Dolev-Yao attacker.
Abstract: The Helios verifiable voting system offers voters an opportunity to verify the integrity of their individual vote: that it is cast and included in the final count as intended. While not all voters have to verify, these steps can be cumbersome for those who aim to carry them out. Therefore, new verification processes have been proposed in order to improve usability. Voters can use a web-based verifier provided by any one of several independent verification institutes, or a smartphone app developed and provided by these institutes. In this work, we describe these verification processes as ceremonies, and thus model the human peer’s interaction. We undertake a security analysis applying an adaptive threat model suited to analysing human-device and human-human channels. More realistic threats on these channels are identified, compared to those from an analysis using a Dolev-Yao attacker. Keywords–Voting; Threat models; Security Ceremonies.

4 citations


Proceedings Article
23 Aug 2015
TL;DR: This paper proposes a tool that uses information available within the browser (and is, thus, implementable as a browser extension), allowing users to detect and terminate the suspicious types of behavior typical of hijacked browsers.
Abstract: The steady evolution of browser tools and scripting languages has created a new, emergent threat to safe network operations: browser hijacking. In this type of attack, the user is not infected with regular malware; instead, while connected to a malicious or compromised website, front-end languages such as JavaScript allow the user’s browser to perform malicious activities; in fact, attackers usually operate within the scope of actions that a browser is expected to execute. Paradigmatic examples are the recent attacks on GitHub, where malicious JavaScript was injected into the browsers of users accessing the search giant Baidu, launching a devastating denial-of-service attack against a US-based company. Detecting this type of threat is particularly challenging, since the behavior of a browser is context specific. Detection can still be achieved, but to effectively hamper this type of attack, users have to be empowered with appropriate detection tools, giving them the ability to autonomously detect and terminate suspicious types of browser behavior. This paper proposes such a tool. It uses information available within the browser (and is, thus, implementable as a browser extension), and it allows users to detect and terminate the suspicious types of behavior typical of hijacked browsers. Keywords–IDS; Browser hijacking; Malicious attack detection; User empowerment.

3 citations


Proceedings Article
23 Aug 2015
TL;DR: An ADVISE meta model is used, with the Möbius framework, to generate ADVISE models and other Möbius components from a higher level model constructed from components, adversaries, and metrics provided by associated Web Ontology Language libraries.
Abstract: Building secure, complex systems is a daunting task. The ADversary VIew Security Evaluation (ADVISE) formalism was designed to offer a model of an adversary attacking a system. As currently implemented in Möbius, ADVISE provides a rich and flexible system security model that, with the other features of Möbius, offers quantitative security metrics. For large systems, constructing realistic ADVISE models can be tedious and impractical. To remedy this issue, we propose the ADVISE meta modeling formalism. An ADVISE meta model is used, with the Möbius framework, to generate ADVISE models and other Möbius components from a higher level model constructed from components, adversaries, and metrics provided by associated Web Ontology Language libraries. This paper briefly reviews Möbius and ADVISE, then introduces the ADVISE meta modeling formalism.

3 citations


Proceedings Article
Rainer Falk1, Steffen Fries1
23 Aug 2015
TL;DR: This paper presents several new applications of PUFs: they can be used to check the integrity or authenticity of presented data, identifying information in a communication protocol can be determined using a PUF, and a licensing mechanism can be realized.
Abstract: Physical Unclonable Functions (PUF) realize the functionality of a “fingerprint” of a digital circuit. They can be used to authenticate devices without requiring a cryptographic authentication algorithm, or to determine a unique cryptographic key based on hardware-intrinsic, device-specific properties. It is also known how to design PUF-based cryptographic protocols. This paper presents several new applications of PUFs. They can be used to check the integrity or authenticity of presented data. A PUF can be used to build a digital tamper sensor. Identifying information in a communication protocol can be determined using a PUF, or a licensing mechanism can be realized. Keywords–physical unclonable function; key extraction; embedded security; licensing; configuration integrity
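The sketch below illustrates the "PUF as key source" idea with a simulated PUF only: a real PUF derives its challenge-response behaviour from device-specific physical variation and requires error correction (fuzzy extraction), both of which are omitted here, and the helper names are invented for illustration.

```python
import hashlib
import hmac

_DEVICE_VARIATION = b"stand-in for hardware-intrinsic properties"  # assumption

def puf_response(challenge: bytes) -> bytes:
    # simulated challenge-response pair
    return hmac.new(_DEVICE_VARIATION, challenge, hashlib.sha256).digest()

def device_key(challenge: bytes) -> bytes:
    # device-specific key extracted from the (noise-free, simulated) response
    return hashlib.sha256(puf_response(challenge)).digest()

def integrity_tag(challenge: bytes, data: bytes) -> bytes:
    # checking integrity/authenticity of presented data with the derived key
    return hmac.new(device_key(challenge), data, hashlib.sha256).digest()
```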

3 citations


Proceedings Article
23 Aug 2015
TL;DR: In the analysis, it is found that information security financial instruments can be a solution to address (at least to some extent) various economic problems in the information security domain.
Abstract: Recent cyber-attacks on various organizations indicate that even the most sophisticated technical controls are vulnerable. Furthermore, due to the problem of misaligned incentives, it is impossible to achieve absolute protection with technical controls against the risks and their impact. Thus, there is space for alternative risk management methods. However, there is a lack of an (effective) financial mechanism to incentivize coordinated efforts by stakeholders in addressing the problems of information asymmetry, negative externality, and free-riding in the information security ecosystem. Therefore, we propose a novel financial instrument, called the information security financial instrument, to incentivize investments in collaborative and multi-stakeholder initiatives to develop and implement stronger defense systems. The mechanism can contribute to an improvement in the information security environment in a time-bound manner. We have used a case study to demonstrate the application of the information security financial instrument. Furthermore, we have analyzed the information security financial instrument against a set of requirements and its usefulness over cyber-insurance in incentivizing investments in information security mechanisms to manage risks. In our analysis, we found that information security financial instruments can be a solution to address (at least to some extent) various economic problems in the information security domain. Keywords–Information Security; Security Economics; Risk Management; Financial Instrument.

3 citations


Proceedings Article
01 Jun 2015
TL;DR: In this paper, a massive-multi-sensor zero-configuration intrusion detection system that uses a huge number of sensors for attack detection is proposed, aimed especially at detecting timing attacks.
Abstract: Timing attacks are a challenge for current intrusion detection solutions. Timing attacks are dangerous for web applications because they may leak information about side channel vulnerabilities. This paper presents a massive-multi-sensor zero-configuration Intrusion Detection System that is especially good at detecting timing attacks. Unlike current solutions, the proposed Intrusion Detection System uses a huge number of sensors for attack detection. These sensors include sensors automatically inserted into web applications or into the frameworks used to build web applications. With this approach, the Intrusion Detection System is able to detect sophisticated attacks like timing attacks or other brute-force attacks with increased accuracy. The proposed massive-multi-sensor zero-configuration intrusion detection system does not need specific knowledge about the system to be protected; hence, it offers zero-configuration capability.
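A rough sketch of what a sensor "automatically inserted into the framework" could look like follows: a decorator that reports per-handler timings to the IDS. The reporting interface and names are invented for illustration; the paper's system inserts many such sensors automatically.

```python
import time
from functools import wraps

def timing_sensor(report):
    """Wrap a handler so every call's duration is reported to the IDS."""
    def decorate(handler):
        @wraps(handler)
        def wrapped(*args, **kwargs):
            start = time.perf_counter()
            try:
                return handler(*args, **kwargs)
            finally:
                report(handler.__name__, time.perf_counter() - start)
        return wrapped
    return decorate

observations = []

@timing_sensor(lambda name, seconds: observations.append((name, seconds)))
def login(user: str, password: str) -> bool:
    return False  # handler under observation; timing anomalies feed the IDS
```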

2 citations


Proceedings Article
23 Aug 2015
TL;DR: This paper describes and comments on three different points of view on security incidents according to international standards or law (the Cyber Security Law in the Czech Republic) and proposes a methodology focused on audits of security incident management.
Abstract: This paper presents comprehensive theoretical background for future work, which will be aimed at multi-criterial evaluation and assessment of security incidents and a proposed methodology focused on audits of security incident management. This paper describes and comments on three different points of view on security incidents according to international standards or law (the Cyber Security Law in the Czech Republic). The paper is mainly intended for Czech companies, since it is based on a project about the Cyber Security Level in Czech Companies. Some criteria for assessing and evaluating the severity of security incidents are proposed at the end of this contribution.

Proceedings Article
11 Jul 2015
TL;DR: Apate, as presented in this paper, is a Linux Kernel Module (LKM) that is able to log, block, and manipulate system calls based on preconfigurable conditions like Process ID (PID), User ID (UID), and many more.
Abstract: Honeypots are used in IT Security to detect and gather information about ongoing intrusions, e.g., by documenting the approach of an attacker. Honeypots do so by presenting an interactive system that seems just like a valid application to an attacker. One of the main design goals of honeypots is to stay unnoticed by attackers as long as possible. The longer the intruder interacts with the honeypot, the more valuable information about the attack can be collected. Of course, another main goal of honeypots is to not open new vulnerabilities that attackers can exploit. Thus, it is necessary to harden the honeypot and the surrounding environment. This paper presents Apate, a Linux Kernel Module (LKM) that is able to log, block and manipulate system calls based on preconfigurable conditions like Process ID (PID), User ID (UID), and many more. Apate can be used to build and harden High Interaction Honeypots. Apate can be configured using an integrated high level language. Thus, Apate is an important and easy to use building block for upcoming High Interaction Honeypots.

Proceedings Article
23 Aug 2015
TL;DR: From an analysis of a dataset of 100 Italian public and private sector websites, it was found that only 1% of the websites still show the Heartbleed vulnerability; however, newer vulnerabilities such as Padding Oracle On Downgraded Legacy Encryption (POODLE) and Factoring Attack on RSA-Export Keys (FREAK) affect many of the websites.
Abstract: Heartbleed, a major Open Secure Socket Layer (OpenSSL) vulnerability, appeared on the web on 7th April 2014. This high-risk vulnerability enabled attackers to remotely read protected memory contents from Hyper Text Transfer Protocol Secure (HTTPS) sites. In this paper, the authors review and analyze the effects of the Heartbleed vulnerability on secured websites a year later (April 2015). To accomplish this, we conducted an analysis on a dataset of 100 Italian public and private sector HTTPS websites, such as banks, stock exchanges, Cloud organizations and services, and found that only 1% of the websites still show the vulnerability. However, newer vulnerabilities such as Padding Oracle On Downgraded Legacy Encryption (POODLE) and Factoring Attack on RSA-Export Keys (FREAK) affect many websites, particularly the websites used as access points of the Italian telematics process. We conclude the paper with an analysis of the Cloud risks that the Heartbleed attack poses to Cloud customers as well as Cloud vendors. Keywords–Heartbleed; OpenSSL; Poodle; Freak; Vulnerability.
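A hedged sketch of the kind of per-site probe behind such a survey follows: testing whether an HTTPS endpoint still completes a handshake when the client offers only a legacy protocol version (a POODLE-style indicator). TLS 1.0 is used as the stand-in because modern Python/OpenSSL builds no longer offer SSLv3; this is not the authors' tooling.

```python
import socket
import ssl

def accepts_legacy_tls(host: str, port: int = 443) -> bool:
    """Return True if the server completes a handshake restricted to TLS 1.0."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = ssl.TLSVersion.TLSv1
    ctx.maximum_version = ssl.TLSVersion.TLSv1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True   # legacy-only handshake succeeded
    except (ssl.SSLError, OSError):
        return False
```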

Proceedings Article
23 Aug 2015
TL;DR: It is shown that the algorithm can be fairly easily constructed from mathematical pseudo-random parameters and known secure cryptographic functions, and it is described how time intervals can be used to establish the algorithm for encryption purposes.
Abstract: We present the main theoretical ideas behind a proposed symmetric key algorithm. We show that it can be fairly easily constructed from mathematical pseudo-random parameters and known secure cryptographic functions. We describe how time intervals can be used to establish our algorithm for encryption purposes. We also briefly discuss the decryption of messages passed through the algorithm. Keywords–algorithm; encryption; gate; symmetric
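Purely as an illustrative sketch of the time-interval idea, and not the authors' algorithm, the snippet below derives a fresh key per interval from a pre-shared secret using a known cryptographic function (HMAC-SHA-256); the secret and interval length are assumptions.

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"pre-shared secret"  # assumption for the sketch
INTERVAL_SECONDS = 30

def interval_index() -> int:
    """Number of the current time interval since the epoch."""
    return int(time.time() // INTERVAL_SECONDS)

def interval_key(idx: int) -> bytes:
    """Key material bound to one time interval via HMAC-SHA-256."""
    return hmac.new(SHARED_SECRET, idx.to_bytes(8, "big"), hashlib.sha256).digest()

# Sender and receiver sharing the secret and (roughly) the clock derive the
# same key; a receiver can also try neighbouring intervals to absorb drift.
key = interval_key(interval_index())
```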

Proceedings Article
23 Aug 2015
TL;DR: This paper develops a checklist that will serve as a reference for the Cloud tenant to control the security of Card data and information in Cloud Computing, and recommends further requirements and controls that the PCI-DSS norm could adopt to be more effective in the Cloud.
Abstract: The Payment Card Industry Data Security Standard (PCI-DSS) is a standard that aims to harmonize and strengthen the protection of Card data over its whole lifecycle. Since its introduction, it has been an efficient tool for controlling Card data on platforms deployed internally. In addition, it has proved to be among the best standards for gauging data security, because it dictates a series of scrupulous controls and how they can be implemented. However, with the advent of the Cloud, the strategies have changed and the issues in protecting Card data have become more complex. In this paper, we continue our previous work by developing a checklist that will be a reference for the Cloud tenant to control the security of Card data and information in Cloud Computing. In the next steps, we will focus on evaluating this checklist in a real Cloud environment. Afterwards, we will work on recommending further requirements and controls that the PCI-DSS norm could adopt to be more effective in the Cloud, and later we will develop a new Self-Assessment Questionnaire as a reference for Qualified Security Assessors (QSA) to check the environment. Keywords–Cloud Computing; PCI-DSS; Card Industry; PCI-SSC; Cloud Security Alliance (CSA); Cloud Controls Matrix (CCM)

Proceedings Article
01 Jun 2015
TL;DR: The anti-spam approach presented in this paper is based on peer-to-peer mechanisms and a so-called paranoid trust model to avoid manipulation by spammers, and it makes advertisement by unsolicited emails unattractive.
Abstract: Unsolicited email (spam) is still a problem for users of the email service. Even though current anti-spam solutions filter most spam emails, some spam emails are still delivered to the inbox of users. A special class of spam emails advertises websites, e.g., online dating sites or online pharmacies. The success rate of this kind of advertisement is rather low; however, as sending an email involves only minimal costs, even a very low success rate results in enough revenue that this kind of advertisement pays off. The anti-spam approach presented in this paper aims at increasing the costs for websites that are advertised by spam emails and at lowering the revenues from spam. Costs can be increased for a website by increasing traffic. Revenues can be decreased by making the website slow to respond, so that some business is lost. To increase costs and decrease revenues, a decentralized peer-to-peer coordination mechanism is used to have mail clients agree on a start date and time for an anti-spam campaign. During a campaign, all clients that received spam emails advertising a website send an opt-out request to this website. A huge number of opt-out requests results in increased traffic to this website and will likely result in slower responsiveness of the website. The coordination mechanism presented in this paper is based on peer-to-peer mechanisms and a so-called paranoid trust model to avoid manipulation by spammers. An implementation for the Thunderbird email client exists. The anti-spam approach presented in this paper breaks the economy of spam, hence making advertisement by unsolicited emails unattractive.
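As a very rough sketch only, the snippet below shows one way clients could converge on a common campaign start date and time without a central coordinator, by deriving the slot deterministically from the advertised URL. The paper's actual peer-to-peer agreement and paranoid trust model are different and more involved; this merely illustrates that all recipients of the same spam can compute the same start time.

```python
import hashlib
import time

SLOT_SECONDS = 3600      # assumed: campaigns start on the hour
DAY_SECONDS = 86400

def campaign_start(advertised_url: str, now: float) -> float:
    """Deterministic start time (epoch seconds) shared by all recipients."""
    digest = hashlib.sha256(advertised_url.encode()).digest()
    hour_of_day = int.from_bytes(digest[:4], "big") % 24
    next_midnight = (int(now) // DAY_SECONDS + 1) * DAY_SECONDS
    return next_midnight + hour_of_day * SLOT_SECONDS

print(campaign_start("http://spamvertised.example", time.time()))
```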

Proceedings Article
23 Aug 2015
TL;DR: A pattern-based approach to security incident detection in NetFlow data is explored; while such an approach is unlikely to provide a highly reliable incident detection method on its own, it can well complement other methods and can detect attacks that remain unnoticed by statistical analysis of network traffic.
Abstract: In this work, we explore the option of using graph topology patterns for security incident detection in NetFlow data. NetFlow data sets in which data flows related to attacks are specially marked are analyzed using graph visualization techniques in combination with manual methods to identify prospective network topology patterns related to attacks. These patterns are subsequently validated and their merit for incident detection assessed. The current research shows that, while such a pattern-based approach is unlikely to provide a highly reliable incident detection method on its own, it can well complement other methods and can detect attacks that remain unnoticed by statistical analysis of network traffic. Keywords–Network security; Data visualization; Graph topology patterns.
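The sketch below is a hedged illustration of the graph-topology-pattern idea: build a graph from NetFlow records and flag a simple fan-out pattern (one source contacting many distinct destinations), reminiscent of scanning. The patterns the paper validates are derived from marked data sets, not hard-coded like this.

```python
import networkx as nx

# Invented flow records: (source address, destination address).
flows = [
    ("10.0.0.5", "10.0.1.1"), ("10.0.0.5", "10.0.1.2"), ("10.0.0.5", "10.0.1.3"),
    ("10.0.0.5", "10.0.1.4"), ("10.0.0.7", "10.0.1.1"),
]

g = nx.DiGraph()
g.add_edges_from(flows)

FANOUT_THRESHOLD = 3  # assumption for the sketch
suspects = [node for node in g.nodes if g.out_degree(node) >= FANOUT_THRESHOLD]
print(suspects)  # ['10.0.0.5']
```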

Proceedings Article
23 Aug 2015
TL;DR: Key findings are that, while webs-of-trust provide an interesting alternative mechanism for identity proofing that may have merit in use cases where no more efficient registration processes are available, their implementation is complex and mainly challenged by usability.
Abstract: Digital identity assurance emerges from two aspects: the strength of the authentication solution, or how you identify yourself to an online service, and the quality of the identity proofing and registration process, or how the authentication solution was issued to you. A reliable registration process, however, is often expensive. For example, it may require the establishment of a registration desk, which is not very user friendly as it demands much effort on the part of the user. This paper investigates the feasibility of using webs-of-trust for reliable identity proofing in digital authentication. Webs-of-trust entail communities of people that trust each other, i.e., utilizing social contacts to confirm people’s identities. A functional decomposition of an attestation service and a protocol for web-of-trust enhanced authentication are provided. A prototype of an attestation service was developed as a proof-of-concept, leveraging LinkedIn as a web-of-trust, and evaluated by users. Finally, characteristics of using webs-of-trust for authentication assurance are discussed and a Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was conducted. Key findings are that, while webs-of-trust provide an interesting alternative mechanism for identity proofing that may have merit in use cases where no more efficient registration processes are available, their implementation is complex and mainly challenged by usability. Keywords–authentication; web-of-trust; level of assurance; attestation service; identity proofing.

Proceedings Article
23 Aug 2015
TL;DR: This paper proposes a model for conducting internal security assessment to ensure all organisational assets are protected and secured, and discusses the activities and processes involved in conducting the security assessment.
Abstract: Security assessment is widely used to audit the security protection of web applications. However, it is often performed by outside security experts or third parties appointed by a company. The problem appears when the assessment involves highly confidential areas which might impact the company’s data privacy, where important information may be accessed and revealed by the third party. Even though the company and the third party might have signed a non-disclosure agreement, it is still considered a high risk, since confidential information on infrastructure and architecture is already exposed. It is important to keep the confidential information within the project team to protect the data used by the system. Therefore, this paper proposes a model for conducting internal security assessment to ensure all organisational assets are protected and secured. The main objective of this paper is to discuss the activities and processes involved in conducting the security assessment. Keywords–Web application; vulnerability; security testing; security assessment; penetration testing.

Proceedings Article
23 Aug 2015
TL;DR: This paper proposes a method to choose a polynomial that does not generate malicious access structures, and an analysis of tailored access structures that have different subsets able to rebuild the secret, each subset with its own threshold.
Abstract: In threshold secret sharing schemes, the threshold and the total number of shareholders are public information. We believe that such information should be secret and that the threshold value must be determined before the secret reconstruction. Thus, this paper proposes a method to do so. We also propose an analysis of a tailored access structure that has different subsets able to rebuild the secret, each subset with its own threshold. Furthermore, in our investigation we have seen that it is possible for there to be malicious access structures in which a privileged subgroup of shareholders, smaller in number than the threshold, can reconstruct the secret. Finally, we propose a method to choose a polynomial that does not generate malicious access structures. Keywords–Secret sharing; threshold; information
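The toy sketch below makes the point about polynomial choice concrete in the simplest possible case: a Shamir-style sharing over a prime field where the dealer sets a higher-degree coefficient to zero, so fewer shareholders than the nominal threshold already recover the secret. It is a minimal sketch, not the paper's construction or its full notion of malicious access structures.

```python
P = 2**127 - 1  # a known Mersenne prime used as the field modulus

def make_shares(secret: int, coeffs: list, n: int) -> list:
    # polynomial f(x) = secret + coeffs[0]*x + coeffs[1]*x^2 + ...
    def f(x):
        acc, xp = secret, 1
        for c in coeffs:
            xp = (xp * x) % P
            acc = (acc + c * xp) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points: list) -> int:
    # Lagrange interpolation evaluated at x = 0
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

honest = make_shares(42, coeffs=[123, 456], n=5)     # nominal threshold 3
degenerate = make_shares(42, coeffs=[123, 0], n=5)   # effective degree only 1
print(reconstruct(honest[:3]), reconstruct(degenerate[:2]))  # 42 42
```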

Proceedings Article
23 Aug 2015
TL;DR: The approach proposed in this work is based on principles of Organic Computing and conceives a self-adapting system for malware reaction in automotive environments, focusing on the reaction to malware-caused incidents.
Abstract: In an interconnected world, malware is no longer only a topic in classical computer environments (information technology). Recent examples have shown the damage malware can cause in modern interconnected cyber-physical systems. Based on the increasing threat of malware, we propose an approach to counter malware. In contrast to classical approaches like signature scanners or IDS (Intrusion Detection Systems), which focus on detection, our approach focuses on the reaction to malware-caused incidents. While these classic approaches usually require manual intervention, we focus our approach on an automatic, adaptive reaction. The approach proposed in this work is based on principles of Organic Computing and conceives a self-adapting system for malware reaction in automotive environments.

Proceedings Article
23 Aug 2015
TL;DR: Methods of dynamic traffic management and access control that are used during the international space experiment “Kontur-2” performed aboard the ISS are considered.
Abstract: The space experiment “Kontur-2” aboard the International Space Station (ISS) is focused on the transfer of information between the station and an on-ground robot. The station’s resources, including communication resources, are limited. That is why, for the space experiment “Kontur-2”, it was decided to use methods of priority traffic management. New access control mechanisms based on these methods are researched. The use of priority traffic processing methods allows more efficient use of the bandwidth of the receiving and transmitting equipment onboard the ISS through the application of a randomized push-out mechanism. The paper considers the methods of dynamic traffic management and access control that are used during the international space experiment “Kontur-2” performed aboard the ISS. Keywords–space experiment; access control; traffic management; virtual connection
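A rough sketch of a randomized push-out discipline for a bounded buffer follows: when the buffer is full and a high-priority packet arrives, a low-priority packet is pushed out with some probability. The parameter values, class names and the queueing model of the actual Kontur-2 experiment are not reproduced here; this only illustrates the mechanism.

```python
import random

class PushOutBuffer:
    HIGH, LOW = 0, 1

    def __init__(self, capacity: int, alpha: float = 0.8):
        self.capacity = capacity
        self.alpha = alpha          # push-out probability (assumed value)
        self.slots = []             # list of (priority, packet)

    def offer(self, priority: int, packet) -> bool:
        if len(self.slots) < self.capacity:
            self.slots.append((priority, packet))
            return True
        if priority == self.HIGH and random.random() < self.alpha:
            for i, (p, _) in enumerate(self.slots):
                if p == self.LOW:
                    del self.slots[i]                 # push out one low-priority packet
                    self.slots.append((priority, packet))
                    return True
        return False                                  # arriving packet is lost
```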