Author

Brian Lee

Other affiliations: Dublin City University, Ericsson
Bio: Brian Lee is an academic researcher from Athlone Institute of Technology. The author has contributed to research in topics: Cloud computing & The Internet. The author has an h-index of 19, co-authored 100 publications receiving 1360 citations. Previous affiliations of Brian Lee include Dublin City University & Ericsson.


Papers
Proceedings ArticleDOI
24 Aug 2015
TL;DR: In this paper, the suitability of container and cluster technology for edge cloud environments is discussed, where applications are distributed across multi-cloud platforms built from networked nodes ranging from data centres to small devices.
Abstract: Cloud technology is moving towards more distribution across multi-clouds and the inclusion of various devices, as evident through IoT and network integration in the context of edge cloud and fog computing. Generally, lightweight virtualisation solutions are beneficial for this architectural setting, with smaller, but still virtualised, devices to host application and platform services, and the logistics required to manage this. Containerisation is currently discussed as a lightweight virtualisation solution. In addition to having benefits over traditional virtual machines in the cloud in terms of size and flexibility, containers are specifically relevant for platform concerns typically dealt with by Platform-as-a-Service (PaaS) clouds, such as application packaging and orchestration. For the edge cloud environment, application and service orchestration can help to manage and orchestrate applications through containers as an application packaging mechanism. We review edge cloud requirements and discuss the suitability of container and cluster technology for the demands that arise from having to facilitate applications through distributed multi-cloud platforms built from networked nodes ranging from data centres to small devices, which we refer to here as edge cloud.

219 citations

Posted Content
TL;DR: In this article, a survey of recent developments in deep learning based object detectors is presented, along with some of the prominent backbone architectures used in recognition tasks, and the performances of these architectures are compared on multiple metrics.
Abstract: Object Detection is the task of classification and localization of objects in an image or video. It has gained prominence in recent years due to its widespread applications. This article surveys recent developments in deep learning based object detectors. A concise overview of benchmark datasets and evaluation metrics used in detection is also provided, along with some of the prominent backbone architectures used in recognition tasks. It also covers contemporary lightweight classification models used on edge devices. Lastly, we compare the performances of these architectures on multiple metrics.
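As a concrete example of the evaluation machinery such surveys compare on, the sketch below computes Intersection-over-Union (IoU), the overlap measure underlying detection metrics like mean Average Precision (mAP). The corner-coordinate box convention and the 0.5 threshold are common assumptions, not something specified by this particular survey.

```python
# Minimal sketch of Intersection-over-Union (IoU), the core overlap
# measure behind detection metrics such as mAP. Boxes are assumed to
# be (x1, y1, x2, y2) corner coordinates, a common but not universal
# convention.

def iou(box_a, box_b):
    """Return the Intersection-over-Union of two axis-aligned boxes."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Empty intersection when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])

    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is typically counted as a true positive when its IoU
# with a ground-truth box exceeds a threshold such as 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```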

174 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: It is shown that edge cloud requirements such as cost-efficiency, low power consumption, and robustness can be met by implementing container and cluster technology on small single-board devices like Raspberry Pis.
Abstract: Cloud technology is moving towards multi-cloud environments with the inclusion of various devices. Cloud and IoT integration, resulting in so-called edge cloud and fog computing, has started. This requires combining data centre technologies with much more constrained devices, while still using virtualised solutions to deal with scalability, flexibility and multi-tenancy concerns. Lightweight virtualisation solutions do exist for this architectural setting, with smaller, but still virtualised, devices to provide application and platform technology as services. Containerisation is a component of such lightweight virtualisation solutions. Containers are furthermore relevant for cloud platform concerns dealt with by Platform-as-a-Service (PaaS) clouds, such as application packaging and orchestration. We demonstrate an architecture for an edge cloud PaaS. For edge clouds, application and service orchestration can help to manage and orchestrate applications through containers. In this way, computation can be brought to the edge of the cloud, rather than data from the Internet-of-Things (IoT) to the cloud. We show that edge cloud requirements such as cost-efficiency, low power consumption, and robustness can be met by implementing container and cluster technology on small single-board devices like Raspberry Pis. This architecture can facilitate applications through distributed multi-cloud platforms built from a range of nodes from data centres to small devices, which we refer to as edge cloud. We illustrate key concepts of an edge cloud PaaS and refer to experimental and conceptual work to make that case.
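The abstract stays at the architectural level, but the per-device deployment step it describes can be pictured with the Docker SDK for Python (docker-py). This is a minimal sketch under stated assumptions, not the authors' code: the image tag, resource limits and port mapping are illustrative choices for a constrained single-board device.

```python
# Minimal sketch of running one lightweight service container on a
# constrained device such as a Raspberry Pi, using docker-py. The
# image name and limits below are illustrative assumptions.

import docker

client = docker.from_env()  # talks to the local Docker daemon

container = client.containers.run(
    "arm32v7/nginx",          # example multi-arch image tag, not from the paper
    detach=True,
    mem_limit="128m",         # keep the footprint small on the device
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
    ports={"80/tcp": 8080},   # expose the service on the host
)

print(container.id, container.status)
```

In the setting the paper describes, many such devices would be joined into a cluster so that an orchestrator, rather than a manual script like this, schedules containers across the edge nodes.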

131 citations

Journal ArticleDOI
01 Jun 2021
TL;DR: This survey presents an overview of the techniques and strategies proposed recently to achieve green IoT using UAV infrastructure for a reliable and sustainable smart world.
Abstract: Recently, Unmanned Aerial Vehicles (UAVs) present a promising advanced technology that can dramatically enhance people's quality of life and the smartness of cities and increase overall economic efficiency. UAVs have attracted significant interest in supporting many applications such as surveillance, agriculture, communication, transportation, pollution monitoring, disaster management, public safety, healthcare, and environmental preservation. Industry 4.0 applications are conceived of intelligent things that can automatically and collaboratively improve beyond 5G (B5G). Therefore, the Internet of Things (IoT) is required to ensure efficient collaboration between the vast multitude of things anywhere in real-world applications that are monitored in real time. However, many IoT devices consume a significant amount of energy when transmitting the data collected from surrounding environments. Due to a drone's capability to fly closer to IoT devices, UAV technology plays a vital role in greening IoT by transmitting collected data to achieve a sustainable, reliable, eco-friendly Industry 4.0. This survey presents an overview of the techniques and strategies proposed recently to achieve green IoT using UAV infrastructure for a reliable and sustainable smart world. This survey differs from other attempts in terms of concept, focus, and discussion. Finally, various use cases, challenges, and opportunities regarding green IoT using UAVs are presented.

104 citations

Book ChapterDOI
10 Sep 2018
TL;DR: This paper describes a computationally efficient anomaly-based intrusion detection system built on Recurrent Neural Networks, using Gated Recurrent Units rather than standard LSTM networks to obtain comparable results with reduced training times.
Abstract: Cyber security has become one of the most challenging aspects of modern world digital technology and it has become imperative to minimize and possibly avoid the impact of cybercrimes. Host based intrusion detection systems help to protect systems from various kinds of malicious cyber attacks. One approach is to determine the normal behaviour of a system based on sequences of system calls made by processes in the system [1]. This paper describes a computationally efficient anomaly-based intrusion detection system based on Recurrent Neural Networks. By using Gated Recurrent Units rather than standard LSTM networks, it is possible to obtain comparable results with reduced training times. The incorporation of stacked CNNs with GRUs leads to an improved anomaly-based IDS. Intrusion detection is based on determining the probability of a particular call sequence occurring from a language model trained on normal call sequences from the ADFA data set of system call traces [2]. Sequences with a low probability of occurring are classified as an anomaly.
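To make the language-model formulation concrete, here is a minimal PyTorch sketch of the idea described above: a GRU predicts the next system-call ID in a trace, and a sequence whose mean negative log-likelihood exceeds a threshold is flagged as anomalous. The vocabulary size, layer widths and threshold are illustrative assumptions, not the authors' settings, and the stacked-CNN front end mentioned in the abstract is omitted.

```python
# Minimal sketch (PyTorch) of a GRU language model over system-call
# IDs; sequences the model finds improbable (high loss) are flagged
# as anomalous. Sizes and threshold are illustrative assumptions.

import torch
import torch.nn as nn

NUM_SYSCALLS = 340  # assumed vocabulary size for distinct call IDs

class CallLM(nn.Module):
    def __init__(self, vocab=NUM_SYSCALLS, emb=32, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, seq):                 # seq: (batch, time)
        h, _ = self.gru(self.emb(seq))
        return self.out(h)                  # logits for the next call

def sequence_score(model, seq):
    """Mean negative log-likelihood of a call sequence under the model."""
    logits = model(seq[:, :-1])             # predict each next call
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), seq[:, 1:].reshape(-1))
    return loss.item()

model = CallLM()                            # untrained, for illustration
trace = torch.randint(0, NUM_SYSCALLS, (1, 50))  # stand-in for a real trace
THRESHOLD = 5.0  # would be tuned on normal traces in practice
print("anomaly" if sequence_score(model, trace) > THRESHOLD else "normal")
```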

88 citations


Cited by
Journal ArticleDOI
TL;DR: This paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties, and elaborates further on open research challenges.
Abstract: Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing the mobile backhaul and core networks more efficiently. This paper introduces a survey on MEC and focuses on the fundamental key enabling technologies. It elaborates MEC orchestration considering both individual services and a network of MEC platforms supporting mobility, bringing light into the different orchestration deployment options. In addition, this paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties. Finally, this paper overviews the current standardization activities and elaborates further on open research challenges.

1,351 citations

Journal ArticleDOI
TL;DR: This paper provides a tutorial on fog computing and its related computing paradigms, including their similarities and differences, and a taxonomy of research topics in fog computing.

783 citations

Journal Article
TL;DR: DES is based on a block cipher ("Lucifer") developed by Horst Feistel at IBM with a 128-bit key; its effective key length of only 56 bits has become a security risk, and in 1998 a special-purpose machine developed by the "Electronic Frontier Foundation" (EFF), with 1,800 custom crypto-processors working in parallel, recovered a DES key in a record 2.5 days.
Abstract: In 1977 the "Data Encryption Algorithm" (DEA) was declared the American encryption standard for federal agencies by the "National Bureau of Standards" (NBS, later the "National Institute of Standards and Technology", NIST) [NBS_77]. In 1981 the DEA specification was adopted as the ANSI standard "DES" [ANSI_81]. The recommendation of DES as the standard encryption method was limited to five years and was extended for further five-year periods in 1983, 1988 and 1993. A revision of the NIST standard is currently available [NIST_99], in which DES is to remain provisionally approved for another five years, but the use of Triple-DES is recommended: a threefold application of DES with three different keys (effective key length: 168 bits) [NIST_99]. DES is based on a block cipher ("Lucifer") developed by Horst Feistel at IBM with a key length of 128 bits. Because the American "National Security Agency" (NSA) saw to it that DES received a key length of only 64 bits, of which only 56 bits are relevant, and special substitution boxes (the "cryptographic core" of the method) whose design criteria were not published by the NSA, the method was controversial from the start. Critics assumed there was a secret "trapdoor" in the method that would allow the NSA to decrypt online traffic even without knowledge of the key. Although this suspicion could not be substantiated, both the increase in computing power and the parallelisation of search algorithms today make a key length of 56 bits a security risk. Most recently, in 1998, a special-purpose machine developed by the "Electronic Frontier Foundation" (EFF), with 1,800 custom-developed crypto-processors working in parallel, was able to find a DES key in a record time of 2.5 days. To find a successor to DES, NIST announced the search for an "Advanced Encryption Standard" (AES) on 2 January 1997. The goal of this initiative is, in close cooperation with research and industry, to find a symmetric encryption method suitable for effectively encrypting American government data well into the 21st century. To this end, an official "Call for Algorithms" was issued on 12 September 1997. The proposed symmetric encryption algorithms had to meet the following requirements: unclassified and published, available royalty-free worldwide, efficiently implementable in hardware and software, block ciphers with a block length of 128 bits, and support for key lengths of 128, 192 and 256 bits. At the first "AES Candidate Conference" (AES1) on 20 August 1998, NIST published a list of 15 proposed algorithms and called on the expert community to analyse them. The results were presented at the second "AES Candidate Conference" (22-23 March 1999 in Rome, AES2) and discussed among international cryptologists. The comment phase ended on 15 April 1999. On the basis of the comments and analyses received, NIST selected five candidates, which it announced publicly on 9 August 1999: MARS (IBM), RC6 (RSA Lab.), Rijndael (Daemen, Rijmen), Serpent (Anderson, Biham, Knudsen) and Twofish (Schneier, Kelsey, Whiting, Wagner, Hall, Ferguson).

624 citations

Proceedings ArticleDOI
14 May 2017
TL;DR: This paper designs and implements ProvChain, an architecture to collect and verify cloud data provenance by embedding the provenance data into blockchain transactions, and demonstrates that ProvChain provides security features including tamper-proof provenance, user privacy and reliability with low overhead for cloud storage applications.
Abstract: Cloud data provenance is metadata that records the history of the creation and operations performed on a cloud data object. Secure data provenance is crucial for data accountability, forensics and privacy. In this paper, we propose a decentralized and trusted cloud data provenance architecture using blockchain technology. Blockchain-based data provenance can provide tamper-proof records, enable the transparency of data accountability in the cloud, and help to enhance the privacy and availability of the provenance data. We make use of the cloud storage scenario and choose the cloud file as a data unit to detect user operations for collecting provenance data. We design and implement ProvChain, an architecture to collect and verify cloud data provenance, by embedding the provenance data into blockchain transactions. ProvChain operates mainly in three phases: (1) provenance data collection, (2) provenance data storage, and (3) provenance data validation. Results from performance evaluation demonstrate that ProvChain provides security features including tamper-proof provenance, user privacy and reliability with low overhead for the cloud storage applications.
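The three phases in the abstract can be sketched in a few lines of Python. This is an illustration of the general pattern (hash a provenance record, anchor the digest in a blockchain transaction, recompute it to validate) rather than ProvChain's actual implementation; the record fields and function names are assumptions, and the step that publishes the digest to a blockchain is left out.

```python
# Minimal sketch of the ProvChain pattern: wrap a cloud file operation
# in a provenance record, hash it, and validate it later against the
# digest anchored on a tamper-proof ledger. Fields are illustrative
# assumptions, not the paper's schema.

import hashlib
import json
import time

def provenance_record(user_id, file_id, operation):
    """Phase 1: collect provenance for one operation on a cloud file."""
    return {
        "user": user_id,          # a pseudonymous ID preserves user privacy
        "file": file_id,
        "operation": operation,   # e.g. "create", "read", "update"
        "timestamp": time.time(),
    }

def record_digest(record):
    """Phase 2: canonical hash that would be embedded in a transaction."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify(record, anchored_digest):
    """Phase 3: validation -- recompute the hash and compare it with the
    digest recorded on the blockchain."""
    return record_digest(record) == anchored_digest

rec = provenance_record("user-42", "report.docx", "update")
digest = record_digest(rec)   # this value would go into a blockchain transaction
assert verify(rec, digest)    # any tampering with rec breaks this check
```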

581 citations