Showing papers by "Jelena Mirkovic" published in 2021


Journal Article (DOI)
01 Mar 2021
TL;DR: Malicious actors gained access to the source code of the Orion monitoring and management software made by SolarWinds and inserted malware into it; members of the IEEE Security & Privacy editorial board offer brief perspectives on the incident.
Abstract: A significant cybersecurity event has recently been discovered in which malicious actors gained access to the source code for the Orion monitoring and management software made by the company SolarWinds and inserted malware into that source code. This article contains brief perspectives from a few members of the IEEE Security & Privacy editorial board regarding that incident.

29 citations


Journal Article (DOI)
29 Oct 2021
TL;DR: The authors describe integrating cybersecurity and artificial intelligence (AI) research into cybersecurity education and implementing a module in an existing non-cybersecurity undergraduate engineering course, aiming to drive the broader community to focus on the convergence of cybersecurity and AI education.
Abstract: We summarize integrating cybersecurity and artificial intelligence (AI) research in cybersecurity education and implementing a module in an existing noncybersecurity undergraduate engineering course. This initiative will drive the broader community to focus on the convergence of cybersecurity and AI education.

4 citations


Proceedings Article (DOI)
09 Aug 2021
TL;DR: The authors report on two surveys they administered to investigate and document obstacles in user interaction with network testbeds. They conclude that most users overcome their initial orientational obstacles, but that implementational and domain-specific obstacles remain and should be addressed by testbeds through significant new development.
Abstract: Network testbeds are used by researchers to evaluate their research products in a controlled setting. Teachers and students also use network testbeds in classes to facilitate active learning in authentic settings. However, testbeds have scarce human resources to develop documentation or support users one-on-one. Therefore, using testbeds can be difficult, especially for novice users. A user’s lack of experience, coupled with user support deficiencies, can turn into research or learning obstacles. In this paper we report on two surveys we administered to investigate and document possible obstacles in user interaction with network testbeds. In the first survey we conducted interviews with 13 students that used a network testbed in class. Informed by their answers, we created the second, more comprehensive online survey and circulated it to both research and education users of network testbeds. We received 69 responses. User responses indicate three broad sources of usability challenges: orientational – learning a new environment, implementational – setting up and running experiments and domain-specific – monitoring experiments and diagnosing failures. Responses further show that most users overcome their initial orientational obstacles, but that implementational and domain-specific obstacles remain and should be addressed by testbeds through significant new developments. Overall, users regard network testbeds as a positive and useful influence on their learning and research.

1 citation


Book Chapter (DOI)
21 Jun 2021
TL;DR: The authors propose robust and reliable models of human interaction with a server, which can identify and block a wide variety of bots. They implement the models in a system called FRADE and evaluate it on three Web servers with different applications and content, showing that FRADE detects both naive and sophisticated bots within seconds and successfully filters out attack traffic.
Abstract: A flash crowd attack (FCA) floods a service, such as a Web server, with well-formed requests, generated by numerous bots. FCA traffic is difficult to filter, since individual attack and legitimate service requests look identical. We propose robust and reliable models of human interaction with a server, which can identify and block a wide variety of bots. We implement the models in a system called FRADE, and evaluate them on three Web servers with different server applications and content. Our results show that FRADE detects both naive and sophisticated bots within seconds, and successfully filters out attack traffic. FRADE significantly raises the bar for a successful attack, by forcing attackers to deploy at least three orders of magnitude larger botnets than today.
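
FRADE's actual models of human interaction are not detailed in this abstract. As a rough, illustrative sketch of the general idea (modeling human request behavior so that sources behaving too aggressively can be flagged and blocked), the following hypothetical Python snippet tracks per-source request timestamps in a sliding window and blocks sources whose rate exceeds a human-plausible threshold. The window length, threshold, and names such as on_request are assumptions chosen only for illustration, not FRADE's design.

from collections import defaultdict, deque
import time

WINDOW_SECONDS = 10.0          # assumed sliding-window length (illustrative, not from FRADE)
MAX_REQUESTS_PER_WINDOW = 20   # assumed human-plausible request budget (illustrative)

recent = defaultdict(deque)    # source IP -> timestamps of its recent requests
blocked = set()                # sources flagged as bots

def on_request(src_ip, now=None):
    """Record one request; return True to serve it, False to drop it."""
    if src_ip in blocked:
        return False
    now = time.time() if now is None else now
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:   # discard timestamps outside the window
        q.popleft()
    if len(q) > MAX_REQUESTS_PER_WINDOW:       # too fast for a human: flag and block
        blocked.add(src_ip)
        return False
    return True

# Example: a source issuing 100 requests within one second is blocked quickly,
# while a source issuing one request every few seconds would always be served.
for i in range(100):
    on_request("203.0.113.7", now=i * 0.01)
print("203.0.113.7" in blocked)   # True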