Author

Markus Kont

Bio: Markus Kont is an academic researcher from NATO. He has contributed to research on the topics of autonomous agents and complex event processing. He has an h-index of 5 and has co-authored 7 publications receiving 58 citations. His previous affiliations include the NATO Cooperative Cyber Defence Centre of Excellence.

Papers
Proceedings ArticleDOI
23 Apr 2018
TL;DR: A novel data mining based framework for detecting anomalous log messages from syslog-based system log files is presented, and the implementation and performance of the framework in a large organizational network are described.
Abstract: System logs provide valuable information about the health status of IT systems and computer networks. Therefore, log file monitoring has been identified as an important system and network management technique. While many solutions have been developed for monitoring known log messages, the detection of previously unknown error conditions has remained a difficult problem. In this paper, we present a novel data mining based framework for detecting anomalous log messages from syslog-based system log files. We also describe the implementation and performance of the framework in a large organizational network.

26 citations
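
The paper's exact algorithm is not reproduced above; purely as a hedged illustration of the general idea — mine the message patterns that occur frequently in historical logs, then flag incoming messages that match none of them — a minimal sketch might look as follows. All function names, the masking rule, and the support threshold are illustrative, not the paper's actual design.

```python
import re
from collections import Counter

def tokenize(line):
    # Split a syslog message into words; digits are masked so messages
    # differing only in IDs or timestamps map to the same pattern.
    return tuple(re.sub(r"\d+", "<num>", w) for w in line.split())

def mine_patterns(training_lines, support=50):
    # Learn the set of message patterns seen frequently in historical logs.
    counts = Counter(tokenize(l) for l in training_lines)
    return {p for p, c in counts.items() if c >= support}

def detect_anomalies(new_lines, patterns):
    # Flag messages whose masked word pattern was never frequent in training.
    return [l for l in new_lines if tokenize(l) not in patterns]
```

In this framing, "anomalous" simply means "never seen often enough before", which is why such frameworks can surface previously unknown error conditions without signatures.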

Book ChapterDOI
02 Nov 2016
TL;DR: The Internet Protocol Version 6 (IPv6) transition opens a wide scope of potential attack vectors, and effective tools are required for executing security operations and assessing possible attack vectors related to IPv6 security.
Abstract: The Internet Protocol Version 6 (IPv6) transition opens a wide scope of potential attack vectors. IPv6 transition mechanisms could allow the set-up of covert egress communication channels over an IPv4-only or dual-stack network, resulting in full compromise of a target network. Therefore, effective tools are required for executing security operations and for assessing possible attack vectors related to IPv6 security.

14 citations
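
The chapter concerns tooling for assessing IPv6 transition-mechanism attack vectors; as a hedged illustration of one detection angle (not the chapter's actual tool), the sketch below flags two well-known tunnel signatures in flow records: 6in4/ISATAP traffic, which is IPv6 carried directly in IPv4 as IP protocol 41, and Teredo, which tunnels IPv6 inside UDP on port 3544. The flow-record format is hypothetical.

```python
# Hypothetical flow record: (src_ip, dst_ip, ip_protocol, src_port, dst_port)
PROTO_IPV6_ENCAP = 41   # 6in4/ISATAP: IPv6 encapsulated directly in IPv4
PROTO_UDP = 17
TEREDO_PORT = 3544      # Teredo tunnels IPv6 inside UDP

def looks_like_v6_tunnel(flow):
    src, dst, proto, sport, dport = flow
    if proto == PROTO_IPV6_ENCAP:
        return "protocol-41 encapsulation (6in4/ISATAP)"
    if proto == PROTO_UDP and TEREDO_PORT in (sport, dport):
        return "possible Teredo tunnel"
    return None

flows = [("192.0.2.10", "198.51.100.7", 41, 0, 0),
         ("192.0.2.11", "203.0.113.5", 17, 51000, 3544)]
for f in flows:
    verdict = looks_like_v6_tunnel(f)
    if verdict:
        print(f, "->", verdict)
```

On an IPv4-only network where no tunneling is expected, either signature is a candidate covert egress channel worth investigating.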

Proceedings ArticleDOI
01 Nov 2016
TL;DR: The application of the LogCluster tool for mining event patterns and anomalous events from security and system logs is described, with the goal of automating the otherwise manual review of collected data.
Abstract: Today, event logging is a widely accepted concept with a number of event formatting standards and event collection protocols. Event logs contain valuable information not only about system faults and performance issues, but also about security incidents. Unfortunately, since modern data centers and computer networks are known to produce large volumes of log data, the manual review of collected data is beyond human capabilities. For automating this task, a number of data mining algorithms and tools have been suggested in recent research papers. In this paper, we will describe the application of the LogCluster tool for mining event patterns and anomalous events from security and system logs.

12 citations
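
LogCluster itself is an existing log-mining tool; the sketch below re-implements only its core two-pass idea in Python for illustration: words occurring in at least `support` lines are considered frequent, each line's positions holding infrequent words become wildcards, and the resulting patterns that themselves meet the support threshold are reported as clusters. Refinement heuristics and the collapsing of adjacent wildcards are deliberately omitted, so this is a simplification, not the tool's actual implementation.

```python
from collections import Counter, defaultdict

def log_cluster(lines, support=2):
    # Pass 1: count in how many lines each word occurs.
    word_lines = Counter()
    for line in lines:
        word_lines.update(set(line.split()))
    frequent = {w for w, c in word_lines.items() if c >= support}

    # Pass 2: a line's pattern keeps its frequent words and replaces
    # infrequent ones with '*' wildcards; identical patterns form a cluster.
    clusters = defaultdict(int)
    for line in lines:
        key = tuple(w if w in frequent else "*" for w in line.split())
        clusters[key] += 1

    # Report clusters that meet the support threshold; rare patterns that
    # fall below it are the candidate anomalous events.
    return {" ".join(k): c for k, c in clusters.items() if c >= support}
```

Lines left outside every reported cluster are exactly the infrequent, previously unseen events an analyst would want to review first.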

Posted Content
01 Mar 2018
TL;DR: This report describes an initial reference architecture for intelligent software agents performing active, largely autonomous cyber defense actions on military networks of computing and communicating devices and describes the rationale of the AICA concept.
Abstract: This report describes an initial reference architecture for intelligent software agents performing active, largely autonomous cyber defense actions on military networks of computing and communicating devices. The report is produced by the North Atlantic Treaty Organization (NATO) Research Task Group (RTG) IST-152 "Intelligent Autonomous Agents for Cyber Defense and Resilience". In a conflict with a technically sophisticated adversary, NATO military tactical networks will operate in a heavily contested battlefield. Enemy software cyber agents - malware - will infiltrate friendly networks and attack friendly command, control, communications, computers, intelligence, surveillance, and reconnaissance and computerized weapon systems. To fight them, NATO needs artificial cyber hunters - intelligent, autonomous, mobile agents specialized in active cyber defense. With this in mind, in 2016, NATO initiated RTG IST-152. Its objective is to help accelerate development and transition to practice of such software agents by producing a reference architecture and technical roadmap. This report presents the concept and architecture of an Autonomous Intelligent Cyber Defense Agent (AICA). We describe the rationale of the AICA concept, explain the methodology and purpose that drive the definition of the AICA Reference Architecture, and review some of the main features and challenges of the AICA.

9 citations
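
The AICA reference architecture is described at length in the report itself; purely as an illustrative skeleton (and not the report's actual component breakdown), the top-level loop of an autonomous cyber defense agent can be pictured as repeated sensing, planning, and acting. All class and action names below are hypothetical.

```python
import random
import time

class DefenseAgent:
    """Illustrative sense-plan-act skeleton; component names are hypothetical."""

    def sense(self):
        # Collect observations (logs, traffic features, host telemetry).
        return {"suspicious_process": random.random() < 0.3}

    def plan(self, world):
        # Choose a course of action from the current world model.
        if world.get("suspicious_process"):
            return ["isolate_host", "collect_forensics", "notify_c2"]
        return []

    def act(self, actions):
        for a in actions:
            print("executing:", a)

    def run(self, cycles=3, period=0.1):
        for _ in range(cycles):
            world = self.sense()
            self.act(self.plan(world))
            time.sleep(period)

DefenseAgent().run()
```

The point of the reference architecture is everything this sketch glosses over: learning, collaboration with humans and other agents, stealth, and safeguards against destructive actions.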

Posted Content
TL;DR: This report summarizes the discussions and findings of the Workshop on Intelligent Autonomous Agents for Cyber Defence and Resilience, organized by the NATO research group IST-152-RTG and held in Prague, Czech Republic, on 18-20 October 2017.
Abstract: This report summarizes the discussions and findings of the Workshop on Intelligent Autonomous Agents for Cyber Defence and Resilience organized by the NATO research group IST-152-RTG. The workshop was held in Prague, Czech Republic, on 18-20 October 2017. There is a growing recognition that future cyber defense should involve extensive use of partially autonomous agents that actively patrol the friendly network, and detect and react to hostile activities rapidly (far faster than human reaction time), before the hostile malware is able to inflict major damage, evade friendly agents, or destroy friendly agents. This requires cyber-defense agents with a significant degree of intelligence, autonomy, self-learning, and adaptability. The report focuses on the following questions: In what computing and tactical environments would such an agent operate? What data would be available for the agent to observe or ingest? What actions would the agent be able to take? How would such an agent plan a complex course of actions? Would the agent learn from its experiences, and how? How would the agent collaborate with humans? How can we ensure that the agent will not take undesirable destructive actions? Is it possible to help envision such an agent with a simple example?

9 citations


Cited by
Journal ArticleDOI
TL;DR: This survey mainly focuses on approaches and technologies for managing big NTMA data, with a briefer discussion of big data analytics (e.g., machine learning) in support of NTMA.
Abstract: Network Traffic Monitoring and Analysis (NTMA) represents a key component for network management, especially to guarantee the correct operation of large-scale networks such as the Internet. As the complexity of Internet services and the volume of traffic continue to increase, it becomes difficult to design scalable NTMA applications. Applications such as traffic classification and policing require real-time and scalable approaches. Anomaly detection and security mechanisms must quickly identify and react to unpredictable events while processing millions of heterogeneous events. Finally, the system has to collect, store, and process massive sets of historical data for post-mortem analysis. Those are precisely the challenges faced by general big data approaches: Volume, Velocity, Variety, and Veracity. This survey brings together NTMA and big data. We catalog previous work on NTMA that adopts big data approaches to understand to what extent the potential of big data is being explored in NTMA. The survey mainly focuses on approaches and technologies for managing big NTMA data, with a briefer discussion of big data analytics (e.g., machine learning) in support of NTMA. Finally, we provide guidelines for future work, discussing lessons learned and research directions.

82 citations
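
To make the velocity and volume challenges concrete, one classic building block in this space is approximate heavy-hitter tracking over a traffic stream with bounded memory (the Space-Saving idea). The sketch below is an illustration of the class of techniques such surveys cover, not any specific system from this survey.

```python
def space_saving(stream, k=3):
    # Approximate top-k frequency tracking with O(k) memory (Space-Saving).
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # Evict the current minimum and inherit its count (+1),
            # which bounds the overestimation error of the new entry.
            victim = min(counters, key=counters.get)
            counters[item] = counters.pop(victim) + 1
    return counters

flows = ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3",
         "10.0.0.1", "10.0.0.4", "10.0.0.1"]
print(space_saving(flows))  # 10.0.0.1 dominates the retained counters
```

Sketches like this answer "who is sending the most traffic right now?" without storing per-flow state for millions of flows, which is exactly the trade-off NTMA-at-scale forces.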

Journal ArticleDOI
19 Oct 2018-Sensors
TL;DR: This paper proposes deep normative modeling as a probabilistic novelty detection method, in which the distribution of normal human movements recorded by wearable sensors is modeled and abnormal movements in patients with PD and ASD are detected within a novelty detection framework.
Abstract: Detecting and monitoring of abnormal movement behaviors in patients with Parkinson’s Disease (PD) and individuals with Autism Spectrum Disorders (ASD) are beneficial for adjusting care and medical treatment in order to improve the patient’s quality of life. Supervised methods commonly used in the literature need annotation of data, which is a time-consuming and costly process. In this paper, we propose deep normative modeling as a probabilistic novelty detection method, in which we model the distribution of normal human movements recorded by wearable sensors and try to detect abnormal movements in patients with PD and ASD in a novelty detection framework. In the proposed deep normative model, a movement disorder behavior is treated as an extreme of the normal range or, equivalently, as a deviation from the normal movements. Our experiments on three benchmark datasets indicate the effectiveness of the proposed method, which outperforms one-class SVM and the reconstruction-based novelty detection approaches. Our contribution opens the door toward modeling normal human movements during daily activities using wearable sensors and eventually real-time abnormal movement detection in neuro-developmental and neuro-degenerative disorders.

53 citations
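
The paper's deep normative model is not reproduced here; a minimal, shallow analogue of the same novelty-detection recipe — fit a density to normal movement features only, then flag test samples that deviate too far from it — can be sketched with a multivariate Gaussian and a Mahalanobis-distance threshold. Feature dimensions, the threshold percentile, and the synthetic data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit on "normal" movement features only (e.g., accelerometer statistics).
normal = rng.normal(0.0, 1.0, size=(500, 3))
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold chosen from the normal training data (e.g., 99th percentile).
threshold = np.percentile([mahalanobis(x) for x in normal], 99)

test = np.vstack([rng.normal(0, 1, (3, 3)),      # normal-like movements
                  rng.normal(4, 1, (2, 3))])     # deviant movements
for x in test:
    score = mahalanobis(x)
    print("abnormal" if score > threshold else "normal  ", round(score, 2))
```

The key property shared with the paper's approach is that no abnormal examples are needed for training: a movement disorder is treated as a deviation from the learned normal range.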

Proceedings ArticleDOI
01 Oct 2020
TL;DR: A novel framework, LogTransfer, applies transfer learning to transfer the anomalous knowledge of one type of software system (source system) to another (target system), and outperforms state-of-the-art supervised and unsupervised log-based anomaly detection methods, with results consistent across the public HDFS and Hadoop application datasets.
Abstract: System logs, which describe a variety of events in software systems, are becoming increasingly popular for anomaly detection. However, for a large software system, current unsupervised learning-based methods suffer from low accuracy due to the high diversity of logs, while supervised learning methods are nearly infeasible in practice because obtaining sufficient labels for different types of software systems is time-consuming and labor-intensive. In this paper, we propose a novel framework, LogTransfer, which applies transfer learning to transfer the anomalous knowledge of one type of software system (source system) to another (target system). We represent every template using GloVe, which considers both global word co-occurrence and local context information, to address the challenge that different types of software systems differ in log syntax while the semantics of logs should be preserved. We apply an LSTM network to extract the sequential patterns of logs, and propose a novel transfer learning method that shares fully connected networks between source and target systems to minimize the impact of noise in anomalous log sequences. Extensive experiments have been performed on switch logs of different vendors collected from a top global cloud service provider. LogTransfer achieves an average F1-score of 0.84 and outperforms state-of-the-art supervised and unsupervised log-based anomaly detection methods, with results consistent with experiments conducted on the public HDFS and Hadoop application datasets.

36 citations
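
LogTransfer's exact architecture is only summarized above (GloVe template vectors, an LSTM encoder over log sequences, and fully connected layers shared between source and target systems); the sketch below shows one plausible PyTorch arrangement of those pieces. All dimensions, the two-class head, and the training procedure are hypothetical, not the paper's published configuration.

```python
import torch
import torch.nn as nn

class LogEncoder(nn.Module):
    # System-specific encoder over a sequence of log-template embeddings.
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, template_vecs):          # (batch, seq_len, emb_dim)
        _, (h, _) = self.lstm(template_vecs)
        return h[-1]                           # (batch, hidden)

class SharedHead(nn.Module):
    # Fully connected layers shared between source and target systems,
    # carrying the transferred anomaly knowledge across systems.
    def __init__(self, hidden=128):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                nn.Linear(64, 2))  # normal vs. anomalous

    def forward(self, h):
        return self.fc(h)

# Separate encoders, one shared head: train on the labeled source system,
# then adapt the target encoder while the head stays shared.
source_enc, target_enc, head = LogEncoder(), LogEncoder(), SharedHead()
batch = torch.randn(4, 20, 300)  # 4 sequences of 20 template vectors
logits = head(source_enc(batch))
print(logits.shape)  # torch.Size([4, 2])
```

Sharing the head while keeping per-system encoders is what lets the anomaly knowledge transfer even though the two systems emit logs with different syntax.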