
Showing papers in "The Computer Journal in 2019"



Journal ArticleDOI
TL;DR: This review paper uses the simple taxonomy of government services to provide an overview of data science automation being deployed by governments world-wide and encourages the Computer Science community to engage with government to develop these new systems to transform public services and support the work of civil servants.
Abstract: The data science technologies of artificial intelligence (AI), Internet of Things (IoT), big data and behavioral/predictive analytics, and blockchain are poised to revolutionize government and create a new generation of GovTech start-ups. The impact from the ‘smartification’ of public services and the national infrastructure will be much more significant in comparison to any other sector given government's function and importance to every institution and individual. Potential GovTech systems include Chatbots and intelligent assistants for public engagement, Robo-advisors to support civil servants, real-time management of the national infrastructure using IoT and blockchain, automated compliance/regulation, public records securely stored in blockchain distributed ledgers, online judicial and dispute resolution systems, and laws/statutes encoded as blockchain smart contracts. Government is potentially the major ‘client’ and also ‘public champion’ for these new data technologies. This review paper uses our simple taxonomy of government services to provide an overview of data science automation being deployed by governments world-wide. The goal of this review paper is to encourage the Computer Science community to engage with government to develop these new systems to transform public services and support the work of civil servants.

126 citations


Journal ArticleDOI
TL;DR: A new dynamic multi-level (DM) auto-scaling method with dynamically changing thresholds is presented, which uses not only infrastructure- but also application-level monitoring data and shows better overall performance under varied amounts of workload than the other auto-scaling methods.
Abstract: Container-based cloud applications require sophisticated auto-scaling methods in order to operate under different workload conditions. The choice of an auto-scaling method may significantly affect important service quality parameters, such as response time and resource utilization. Current container orchestration systems such as Kubernetes and cloud providers such as Amazon EC2 employ auto-scaling rules with static thresholds and rely mainly on infrastructure-related monitoring data, such as CPU and memory utilization. This paper presents a new dynamic multi-level (DM) auto-scaling method with dynamically changing thresholds, which uses not only infrastructure but also application-level monitoring data. The new method is compared with seven existing auto-scaling methods in different synthetic and real-world workload scenarios. Based on experimental results, all eight auto-scaling methods are compared according to the response time and the number of instantiated containers. The results show that the proposed DM method has better overall performance under varied amounts of workload than the other auto-scaling methods. Owing to these satisfactory results, the proposed DM method has been implemented in the SWITCH software engineering system for time-critical cloud applications.

58 citations
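
A minimal sketch of the idea behind such a method, combining infrastructure- and application-level metrics and adapting the scale-out threshold at runtime, is shown below. The metric names, weights and threshold-adaptation rule are illustrative assumptions, not the DM algorithm from the paper.

```python
# Illustrative sketch of multi-level auto-scaling with dynamic thresholds.
# Metric names, weights and the threshold-adaptation rule are assumptions,
# not the DM algorithm described in the paper.

from dataclasses import dataclass

@dataclass
class Metrics:
    cpu_util: float           # infrastructure level, 0..1
    mem_util: float           # infrastructure level, 0..1
    response_time_ms: float   # application level

class DynamicMultiLevelScaler:
    def __init__(self, target_rt_ms=200.0, upper=0.8, lower=0.3):
        self.target_rt_ms = target_rt_ms
        self.upper = upper          # scale-out threshold (adapted at runtime)
        self.lower = lower          # scale-in threshold

    def _adapt_thresholds(self, m: Metrics):
        # If the application is slower than its target, become more eager to
        # scale out by lowering the upper threshold; otherwise relax it.
        pressure = m.response_time_ms / self.target_rt_ms
        self.upper = max(0.5, min(0.9, 0.8 / pressure))

    def decide(self, m: Metrics, replicas: int) -> int:
        """Return the desired number of container replicas."""
        self._adapt_thresholds(m)
        load = max(m.cpu_util, m.mem_util)   # combine infrastructure metrics
        if load > self.upper or m.response_time_ms > self.target_rt_ms:
            return replicas + 1               # scale out
        if load < self.lower and m.response_time_ms < 0.5 * self.target_rt_ms:
            return max(1, replicas - 1)       # scale in
        return replicas

# Example: a high response time triggers scale-out even at moderate CPU load.
scaler = DynamicMultiLevelScaler()
print(scaler.decide(Metrics(cpu_util=0.55, mem_util=0.4, response_time_ms=350), replicas=3))
```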



Journal ArticleDOI
TL;DR: In this paper, the authors propose a new context-aware access control (CAAC) approach with both dynamic associations of user-role and role-permission capabilities, which supports context sensitive access control to information resources and dynamically re-evaluates the access control decisions when there are dynamic changes to the context.
Abstract: In today’s dynamic ICT environments, the ability to control users’ access to information resources and services has become ever important. On the one hand, it should provide flexibility to adapt to the users’ changing needs, while on the other hand, it should not be compromised. The user is often faced with different contexts and environments that may change the user’s information needs. To allow for this, it is essential to incorporate the dynamically changing context information into the access control policies to reflect different contexts and environments through the use of a new context-aware access control (CAAC) approach with both dynamic associations of user-role and role-permission capabilities. Our proposed CAAC framework differs from the existing access control frameworks in that it supports context-sensitive access control to information resources and dynamically re-evaluates the access control decisions when there are dynamic changes to the context. It uses the dynamic context information to specify the user-role and role-permission assignment policies. We first present a formal policy model for our framework, specifying CAAC policies. Using this model, we then introduce a policy ontology for modeling CAAC policies and a policy enforcement architecture which supports access to resources according to the dynamically changing context information. In addition, we demonstrate the feasibility of our framework by considering (i) the completeness, correctness and consistency of the ontology concepts through application to healthcare scenarios and (ii) the performance and usability testing of the framework when using desktop and mobile-based prototypes.

30 citations
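
The core mechanism, re-evaluating both user-role and role-permission assignments against the current context, can be sketched as follows. The roles, resources and context conditions are illustrative assumptions loosely modeled on the healthcare scenario, not the paper's policy ontology.

```python
# Minimal sketch of context-aware access control (CAAC): both the user-role
# and the role-permission assignments depend on runtime context.
# The roles, resources and conditions below are illustrative assumptions.

def active_roles(user, context):
    """Dynamic user-role assignment, re-evaluated on every request."""
    roles = set()
    if user["profession"] == "nurse" and context["location"] == "ward":
        roles.add("treating_nurse")
    if user["profession"] == "doctor" and context["on_duty"]:
        roles.add("attending_doctor")
    return roles

def permitted(role, resource, context):
    """Dynamic role-permission assignment, also context dependent."""
    if role == "attending_doctor" and resource == "patient_record":
        return True
    if role == "treating_nurse" and resource == "patient_record":
        # Nurses may read records only during an emergency in their ward.
        return context.get("emergency", False)
    return False

def check_access(user, resource, context):
    # Access decisions are re-evaluated whenever the context changes.
    return any(permitted(r, resource, context) for r in active_roles(user, context))

ctx = {"location": "ward", "on_duty": True, "emergency": True}
print(check_access({"profession": "nurse"}, "patient_record", ctx))   # True
ctx["emergency"] = False
print(check_access({"profession": "nurse"}, "patient_record", ctx))   # False
```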


Journal ArticleDOI
TL;DR: This paper surveys the ‘man-in-the-middle attack’, accumulating related data/information in a single article so that it can serve as a reference for further research on the topic at the college/undergraduate level.
Abstract: These days a cyber-attack is a serious criminal offense and a hotly debated issue. A man-in-the-middle attack is a kind of cyber-attack in which an unapproved outsider enters an online correspondence between two users and remains hidden from both parties. The intruder in the middle often monitors and changes individual/classified information that was intended only for the two users. A man-in-the-middle attack subjects a protocol to an outsider inside the system, who can access, read and change secret information without leaving any trace of manipulation. This issue is acute, and most cryptographic systems without decent authentication security are at risk of being compromised by the ‘man-in-the-middle attack’ (MITM/MIM). This paper is essentially aimed at understanding the term ‘man-in-the-middle attack’; the current work mainly accumulates related data/information in a single article so that it can be a reference for conducting further research on this topic at the college/undergraduate level. This paper also reviews the most-cited research and survey articles on the ‘man-in-the-middle attack’ listed on Google Scholar. The purpose of this paper is to help readers understand and become familiar with the topic of the ‘man-in-the-middle attack’.

27 citations


Journal ArticleDOI
TL;DR: In this paper, an Openflow/SDN network remedy to specifically combat TCP SYN flood is proposed; it is shown to reduce the time a flow entry occupies the switch resource by 94% in comparison with the Avant-Guard solution.
Abstract: Recently, TCP SYN flood has been the most common and serious type of Distributed Denial of Service attack, causing outages of server resources at Internet Service Providers. In another respect, Software Defined Networking (SDN) has emerged as a new networking paradigm to increase network agility and programmability. SDN is also a promising architecture for dealing with network security issues, since we can flexibly change security rules and control incoming flows. In this article, we design an Openflow/SDN network remedy to specifically combat TCP SYN flood. We show security threats for the SDN architecture and exploit SDN capabilities and features to design an SDN-based SYN Proxy (SSP) paradigm to mitigate such TCP SYN threats. Our SSP is shown to be a network-based solution that protects application servers by decreasing the number of half-open connections at an application server and increasing the probability of successful establishment of a TCP flow connection under TCP SYN flood attack. Using SSP to support application servers is shown to outperform the case where the servers adopt only the protection scheme of the Microsoft Windows server reference model without utilizing SSP. SSP can also reduce the time a flow entry occupies the switch resource by 94% in comparison with the Avant-Guard solution. In addition, SSP improves the successful connection rate and average connection retrieval time in comparison with the standard Openflow solution.

25 citations
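
A conceptual sketch of the SYN-proxy idea follows: the proxy completes the TCP handshake on behalf of the server and forwards only connections whose final ACK validates, so half-open connections never reach the application server. The cookie derivation and forwarding hook are assumptions, not the paper's SSP implementation.

```python
# Conceptual sketch of a SYN proxy: complete the TCP handshake on behalf of
# the server and only forward connections whose final ACK carries a valid
# cookie. Cookie derivation and the forwarding hook are assumptions.

import hmac, hashlib

SECRET = b"rotating-proxy-secret"

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    # 32-bit cookie used as the proxy's initial sequence number
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

def on_syn(pkt):
    # Reply with SYN-ACK; no state and no server resources are consumed yet.
    return {"flags": "SYN-ACK", "seq": syn_cookie(*pkt["four_tuple"]), "ack": pkt["seq"] + 1}

def on_ack(pkt):
    # The ACK must acknowledge cookie+1; only then is the flow handed to the
    # server (e.g. by installing a forwarding flow entry on the SDN switch).
    expected = (syn_cookie(*pkt["four_tuple"]) + 1) & 0xFFFFFFFF
    return pkt["ack"] == expected

syn = {"four_tuple": ("10.0.0.5", 40000, "10.0.0.1", 80), "seq": 1000}
synack = on_syn(syn)
ack = {"four_tuple": syn["four_tuple"], "ack": (synack["seq"] + 1) & 0xFFFFFFFF}
print(on_ack(ack))   # True: handshake validated, connection forwarded to the server
```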


Journal ArticleDOI
TL;DR: This paper proposes an idea to construct a general multivariate public key cryptographic scheme based on a user’s identity and uses this scheme to construct practical identity-based signature schemes named ID-UOV and ID-Rainbow based on two well-known and promising MPKC signature schemes.
Abstract: In this paper, we propose an idea to construct a general multivariate public key cryptographic (MPKC) scheme based on a user’s identity. In our construction, each user is assigned a unique identity by the key distribution center (KDC), and we use this identity to generate the user’s private keys. Thereafter, we use these private keys to produce the corresponding public key. This method makes the key generation process easier, so that the public key shrinks from dozens of kilobytes to several bits. We then use our general scheme to construct practical identity-based signature schemes, named ID-UOV and ID-Rainbow, based on two well-known and promising MPKC signature schemes, respectively. Finally, we present the security analysis and give experiments for all of our proposed schemes and the baseline schemes. Comparison shows that our schemes are both efficient and practical.

23 citations
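
The identity-driven key generation idea, a key distribution center deterministically deriving a user's key material from its identity so that published parameters stay small, can be illustrated only abstractly. The hash-based derivation below is a stand-in: real ID-UOV/ID-Rainbow key generation builds multivariate quadratic maps.

```python
# Abstract illustration only: a KDC deterministically derives per-user key
# material from the user's identity, so public parameters can be kept small.
# This hash-based derivation is NOT the ID-UOV / ID-Rainbow construction,
# which generates multivariate quadratic maps; it only sketches the idea of
# identity-driven key generation.

import hashlib, secrets

MASTER_SECRET = secrets.token_bytes(32)   # held only by the key distribution center

def derive_user_seed(identity: str) -> bytes:
    # The KDC binds the user's unique identity to secret key material.
    return hashlib.sha256(MASTER_SECRET + identity.encode()).digest()

def keygen_for(identity: str):
    seed = derive_user_seed(identity)
    # In a real MPKC scheme this seed would drive generation of the secret
    # affine maps and the central multivariate map; here it is just returned.
    return {"identity": identity, "private_seed": seed.hex()}

print(keygen_for("alice@example.org")["identity"])
```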


Journal ArticleDOI
TL;DR: A robust image hashing with singular values of quaternion singular value decomposition (QSVD) is proposed; the key contribution is the innovative use of QSVD, which extracts stable and discriminative image features from the CIE L*a*b* color space.
Abstract: Image hashing is an efficient technique used in many multimedia systems, such as image retrieval, image authentication and image copy detection. The balance between robustness and discrimination is one of the most important performance aspects of image hashing. In this paper, we propose a robust image hashing with singular values of quaternion singular value decomposition (QSVD). The key contribution is the innovative use of QSVD, which can extract stable and discriminative image features from the CIE L*a*b* color space. In addition, the image features of a block are viewed as a point in Cartesian coordinates and compressed by calculating the Euclidean distance between this point and a reference point. As the Euclidean distance requires less storage than the original block features, this technique helps to make a discriminative and compact hash. Experiments with three open image databases are conducted to validate the efficiency of our image hashing. The results demonstrate that our image hashing can resist many digital operations and achieves good discrimination. Receiver operating characteristic curve comparisons illustrate that our image hashing outperforms some state-of-the-art algorithms in classification performance.

21 citations
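
The feature-compression step, treating per-block singular values as a point and storing only its Euclidean distance from a reference point, can be sketched as follows. For simplicity, a real-valued SVD per channel stands in for quaternion SVD, and the block size, reference point and binarisation are assumptions.

```python
# Sketch of the compression step: per-block singular values are treated as a
# point in Cartesian coordinates and reduced to a single Euclidean distance
# from a reference point. A plain real-valued SVD per channel stands in for
# quaternion SVD (QSVD); block size, reference point and binarisation are
# illustrative assumptions.

import numpy as np

def block_features(block_lab):
    """block_lab: (B, B, 3) array of L*a*b* values for one image block."""
    feats = []
    for c in range(3):                      # QSVD would treat the 3 channels jointly
        s = np.linalg.svd(block_lab[:, :, c], compute_uv=False)
        feats.append(s[0])                  # keep the largest (most stable) singular value
    return np.array(feats)

def block_hash_value(block_lab, reference=np.zeros(3)):
    # Compressing the feature point to a distance shrinks storage from
    # several values per block to a single number.
    return float(np.linalg.norm(block_features(block_lab) - reference))

def image_hash(img_lab, block=64):
    h, w, _ = img_lab.shape
    vals = [block_hash_value(img_lab[i:i + block, j:j + block])
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    vals = np.array(vals)
    return (vals > vals.mean()).astype(np.uint8)   # illustrative binarisation

rng = np.random.default_rng(0)
print(image_hash(rng.random((256, 256, 3))))
```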


Journal ArticleDOI
TL;DR: This paper shows that Luo and Wan made a significant error in the construction of their proposed scheme, and proposes a CL-SC scheme with KSSTIS that is provably secure in the standard model.
Abstract: Certificateless public key cryptography (CL-PKC) promises a practical route to establishing schemes, since it concurrently addresses two fundamental issues: the need for certificate management in traditional public key infrastructure (PKI) and the key escrow problem in the identity-based (ID-based) setting. Signcryption is an important primitive that provides the goals of both encryption and signature schemes, as it is more efficient than encrypting and signing messages consecutively. Since the concept of the certificateless signcryption (CL-SC) scheme was put forth by Barbosa and Farshim in 2008, many schemes have been proposed, most of them provable in the random oracle model (ROM) and only a few provable in the standard model. Very recently, Luo and Wan (Wireless Personal Communication, 2018) proposed a very efficient CL-SC scheme in the standard model. Furthermore, they claimed that their scheme is not only more efficient than the previously proposed schemes in the standard model, but also the only scheme that benefits from known session-specific temporary information security (KSSTIS). Therefore, this scheme would indeed be very practical. The contributions of this paper are two-fold. First, in contrast to the claim made by Luo and Wan, we show that unfortunately Luo and Wan made a significant error in the construction of their proposed scheme. While their main intention is indeed interesting and useful, the failure of their construction has left a gap in the research literature. Hence, the second contribution of this paper is to fill this gap by proposing a CL-SC scheme with KSSTIS which is provably secure in the standard model.

21 citations


Journal ArticleDOI
TL;DR: This work develops methods that operate based on the hotness measure and determine how to pre-transcode videos to minimize the cost of stream providers, and shows the efficacy of the proposed methods when a video stream repository includes a high percentage of the Frequently Accessed Video Streams.
Abstract: Video streaming providers generally have to store several formats of the same video and stream the appropriate format based on the characteristics of the viewer’s device. This approach, called pre-transcoding, incurs a significant cost to stream providers that rely on cloud services. Furthermore, pre-transcoding has proven to be inefficient due to the long-tail access pattern to video streams. To reduce the incurred cost, we propose to pre-transcode only frequently accessed videos (called hot videos) and partially pre-transcode others, depending on their hotness degree. Therefore, we need to measure video stream hotness. Accordingly, we first provide a model to measure the hotness of video streams. Then, we develop methods that operate based on the hotness measure and determine how to pre-transcode videos to minimize the cost of stream providers. The partial pre-transcoding methods operate at different granularity levels to capture different patterns in accessing videos. In particular, one of the methods operates faster but cannot partially pre-transcode videos with a non-long-tail access pattern. Experimental results show the efficacy of our proposed methods, specifically when a video stream repository includes a high percentage of frequently accessed video streams and a high percentage of videos with a non-long-tail access pattern.
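
How a hotness measure could drive the pre-transcoding decision is sketched below; the hotness formula and thresholds are illustrative assumptions rather than the paper's cost model.

```python
# Illustrative sketch: decide how much of each video to pre-transcode based on
# a hotness score. The hotness formula and thresholds are assumptions, not the
# paper's cost model.

def hotness(views_last_week, total_views, age_days):
    # Recent views weigh more than lifetime views; older videos cool down.
    return (2.0 * views_last_week + 0.1 * total_views) / (1.0 + age_days / 30.0)

def pretranscode_fraction(video):
    h = hotness(video["views_last_week"], video["total_views"], video["age_days"])
    if h > 100:      # hot: keep every format fully pre-transcoded
        return 1.0
    if h > 10:       # warm: pre-transcode only the first segments (partial)
        return 0.25
    return 0.0       # cold: transcode on demand (lazy)

catalog = [
    {"id": "a", "views_last_week": 9000, "total_views": 50000, "age_days": 10},
    {"id": "b", "views_last_week": 40, "total_views": 3000, "age_days": 400},
]
for v in catalog:
    print(v["id"], pretranscode_fraction(v))
```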

Journal ArticleDOI
TL;DR: By exploring algebraic and combinatorial properties of $n$-dimensional balanced hypercubes, the $\ell$-component (edge) connectivity of $BH_n$ is obtained, together with an upper bound on $\lambda_\ell(BH_n)$; the two kinds of connectivity help to measure the robustness of the graph corresponding to a network.
Abstract: For an integer $\ell \geqslant 2$, the $\ell$-component connectivity (resp. $\ell$-component edge connectivity) of a graph $G$, denoted by $\kappa_{\ell}(G)$ (resp. $\lambda_{\ell}(G)$), is the minimum number of vertices (resp. edges) whose removal from $G$ results in a disconnected graph with at least $\ell$ components. The two parameters naturally generalize the classical connectivity and edge connectivity of graphs, defined in terms of the minimum vertex-cut and the minimum edge-cut, respectively. The two kinds of connectivity can help us to measure the robustness of the graph corresponding to a network. In this paper, by exploring algebraic and combinatorial properties of $n$-dimensional balanced hypercubes $BH_n$, we obtain the $\ell$-component (edge) connectivity $\kappa_{\ell}(BH_n)$ ($\lambda_{\ell}(BH_n)$). For $\ell$-component connectivity, we prove that $\kappa_2(BH_n)=\kappa_3(BH_n)=2n$ for $n\geq 2$, $\kappa_4(BH_n)=\kappa_5(BH_n)=4n-2$ for $n\geq 4$, and $\kappa_6(BH_n)=\kappa_7(BH_n)=6n-6$ for $n\geq 5$. For $\ell$-component edge connectivity, we prove that $\lambda_3(BH_n)=4n-1$ and $\lambda_4(BH_n)=6n-2$ for $n\geq 2$, and $\lambda_5(BH_n)=8n-4$ for $n\geq 3$. Moreover, we also prove $\lambda_\ell(BH_n)\leq 2n(\ell-1)-2\ell+6$ for $4\leq \ell \leq 2n+3$, and the upper bound of $\lambda_\ell(BH_n)$ we obtain is tight for $\ell=4,5$.
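
As a quick consistency check, the stated general upper bound already coincides with the exact edge-connectivity values at $\ell=4,5$, which is precisely the claimed tightness:

\[
\lambda_4(BH_n) \leq 2n(4-1)-2\cdot 4+6 = 6n-2, \qquad
\lambda_5(BH_n) \leq 2n(5-1)-2\cdot 5+6 = 8n-4,
\]

matching the exact values $\lambda_4(BH_n)=6n-2$ and $\lambda_5(BH_n)=8n-4$ proved in the paper.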


Journal ArticleDOI
TL;DR: This paper introduces Advanced Encryption Standard-enabled Trust-based Secure Routing protocol based on the proposed Dolphin Cat Optimizer (AES-TDCO), which is an energy and trust-aware routing protocol.
Abstract: Routing in mobile ad hoc networks (MANETs) is a difficult challenge due to the dynamic nature of the network. Provisional communication links are assured by the infrastructure-independent capability of a MANET, but there is no proper centralized monitoring process, which makes routing in MANETs a major issue with respect to security and trust. Thus, this paper introduces an Advanced Encryption Standard-enabled Trust-based Secure Routing protocol based on the proposed Dolphin Cat Optimizer (AES-TDCO), an energy- and trust-aware routing protocol. The Dolphin Cat Optimizer is engaged in optimal route selection using an objective function modeled on the trust factors (recent trust, historical trust, direct and indirect trust) in addition to delay, distance and link lifetime. The Dolphin Cat Optimizer integrates the Dolphin Echolocation and Cat Swarm Optimization algorithms and inherits their faster global convergence. The simulation using 75 nodes revealed that the proposed routing protocol acquired maximal throughput, minimal delay, minimal packet drop and a detection rate of 0.6531, 0.0107, 0.3267 and 0.9898 in the absence of network attacks, and 0.7693, 0.0112, 0.3605 and 0.9961 in the presence of network attacks.
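
The kind of multi-factor route fitness such an optimizer would search over can be sketched as below; the weights and normalization are illustrative assumptions, not the paper's objective function.

```python
# Sketch of a trust- and energy-aware route fitness of the kind a metaheuristic
# would maximize. Weights and normalization are illustrative assumptions.

def route_fitness(route):
    """route: dict with trust factors and QoS terms, each normalized to [0, 1]."""
    trust = (0.3 * route["recent_trust"] +
             0.2 * route["historical_trust"] +
             0.3 * route["direct_trust"] +
             0.2 * route["indirect_trust"])
    # Higher trust and link lifetime are rewarded; delay and distance are penalties.
    return 0.5 * trust + 0.2 * route["link_lifetime"] - 0.2 * route["delay"] - 0.1 * route["distance"]

candidates = [
    {"recent_trust": 0.9, "historical_trust": 0.8, "direct_trust": 0.85,
     "indirect_trust": 0.7, "link_lifetime": 0.6, "delay": 0.2, "distance": 0.3},
    {"recent_trust": 0.4, "historical_trust": 0.5, "direct_trust": 0.3,
     "indirect_trust": 0.6, "link_lifetime": 0.9, "delay": 0.1, "distance": 0.2},
]
best = max(candidates, key=route_fitness)   # the optimizer would search this space
print(route_fitness(best))
```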

Journal ArticleDOI
TL;DR: A novel transform domain method is proposed to provide a better data hiding method that uses a multi-resolution transform function, integer wavelet transform (IWT) that decomposes an image into four subbands: low-low, low-high, high-low and high-high subband.
Abstract: Steganography is a data hiding technique used for securing data. Both the spatial and transform domains can be used to implement a steganography method. In this paper, a novel transform domain method is proposed to provide better data hiding. The method uses a multi-resolution transform function, the integer wavelet transform (IWT), which decomposes an image into four subbands: low-low, low-high, high-low and high-high. The proposed method utilizes only the latter three subbands, keeping the low-low subband untouched, which helps to improve the quality of the stego image. The method applies a coefficient value differencing approach to determine the number of secret bits to be embedded in the coefficients. The method shows good performance in terms of embedding capacity, imperceptibility and robustness. A number of metrics are computed to show the quality of the stego image. It can also successfully withstand RS steganalysis, the Chi-squared test and Subtractive Pixel Adjacency Matrix steganalysis. The deformation of the histogram and the Pixel Difference Histogram for different embedding percentages is also demonstrated, showing a significant similarity with the original cover image. The proposed method achieves an embedding capacity of 2.3 bpp with good stego-image quality.
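
A minimal sketch of one level of integer (lifting-based) Haar wavelet decomposition with embedding restricted to the detail subbands (the low-low band is left untouched, as in the paper) follows. Plain LSB substitution stands in for the paper's coefficient value differencing scheme.

```python
# Sketch: one level of integer (lifting-based) Haar wavelet decomposition and
# LSB-style embedding into the detail subbands only, leaving LL untouched.
# The paper embeds via coefficient value differencing; plain LSB substitution
# is used here purely as a stand-in.

import numpy as np

def iwt_haar_1level(img):
    a = img[:, 0::2].astype(np.int32); b = img[:, 1::2].astype(np.int32)
    low, high = (a + b) // 2, a - b                      # rows
    def cols(x):
        u, v = x[0::2, :], x[1::2, :]
        return (u + v) // 2, u - v
    LL, LH = cols(low)
    HL, HH = cols(high)
    return LL, LH, HL, HH

def embed_bits(subband, bits):
    flat = subband.flatten()                              # copy of the coefficients
    for i, bit in enumerate(bits[:flat.size]):
        flat[i] = (flat[i] & ~1) | bit                    # set LSB of the coefficient
    return flat.reshape(subband.shape)

img = np.random.default_rng(1).integers(0, 256, (8, 8))
LL, LH, HL, HH = iwt_haar_1level(img)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
LH_stego = embed_bits(LH, secret)
recovered = [int(c) & 1 for c in LH_stego.flatten()[:len(secret)]]
print(recovered == secret)   # True
```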




Journal ArticleDOI
TL;DR: A high-throughput and low-cost pipelined architecture using a new recursive endpoint-cutting (REC) decision tree that outperforms most FPGA-based search engines for large and complex rule tables.
Abstract: Packet classification is one of the important functions in today's high-speed Internet routers. Many existing FPGA-based approaches can achieve a high throughput but cannot accommodate the memory required for large rule tables because on-chip memory in FPGA devices is limited. In this paper, we propose a high-throughput and low-cost pipelined architecture using a new recursive endpoint-cutting (REC) decision tree. In the software environment, REC needs only 5–66% of the memory needed in Efficuts for various rule tables. Since the rule buckets associated with leaf nodes in decision trees consume a large portion of total memory, a bucket compression scheme is also proposed to reduce rule duplication. Based on experimental results on Xilinx Virtex-5/6 FPGA, the block RAM required by REC is much less than the existing FPGA-based approaches. The proposed parallel and pipelined architecture can accommodate various tables of 20 K or more rules, in the FPGA devices containing 1.6 Mb block RAM. By using dual-ported memory, throughput of beyond 100 Gbps for 40-byte packets can be achieved. The proposed architecture outperforms most FPGA-based search engines for large and complex rule tables.
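
The endpoint-cutting idea, splitting a field's range at rule endpoints rather than at fixed equal-width positions, can be sketched on a single dimension as below. The rule format, cut-point choice and stop condition are illustrative assumptions, not the REC algorithm or its bucket compression.

```python
# Sketch of building a small decision tree that cuts a field's range at rule
# endpoints rather than at equal-width positions. Rule format, cut-point choice
# and stop condition are assumptions; this is not the REC algorithm itself.

from dataclasses import dataclass

@dataclass
class Rule:
    rid: int
    src_lo: int; src_hi: int     # one classification field (e.g. a port range)

def build(rules, lo, hi, leaf_size=2):
    if len(rules) <= leaf_size:
        return {"bucket": [r.rid for r in rules]}         # leaf: rule bucket
    # Cut at an endpoint of the rules overlapping this range (median endpoint).
    points = sorted({p for r in rules for p in (r.src_lo, r.src_hi) if lo < p < hi})
    if not points:
        return {"bucket": [r.rid for r in rules]}
    cut = points[len(points) // 2]
    left  = [r for r in rules if r.src_lo < cut]          # rules may be duplicated
    right = [r for r in rules if r.src_hi >= cut]
    return {"cut": cut,
            "left": build(left, lo, cut, leaf_size),
            "right": build(right, cut, hi, leaf_size)}

def lookup(node, value):
    while "cut" in node:
        node = node["left"] if value < node["cut"] else node["right"]
    return node["bucket"]

rules = [Rule(1, 0, 1023), Rule(2, 80, 80), Rule(3, 1024, 65535), Rule(4, 443, 443)]
tree = build(rules, 0, 65536)
print(lookup(tree, 443))   # candidate rules to check linearly: [1, 4]
```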

Journal ArticleDOI
TL;DR: In this paper, a 2-party private function evaluation (PFE) scheme based on symmetric cryptographic primitives is proposed that improves the efficiency of the OEP and 2PC protocols and yields a more than 40% reduction in overall communication cost.
Abstract: Private function evaluation (PFE) is a special case of secure multi-party computation (MPC), where the function to be computed is known by only one party. PFE is useful in several real-life applications where an algorithm or a function itself needs to remain secret for reasons such as protecting intellectual property or security classification level. In this paper, we focus on improving 2-party PFE based on symmetric cryptographic primitives. In this respect, we look back at the seminal PFE framework presented by Mohassel and Sadeghian at Eurocrypt’13. We show how to adapt and utilize the well-known half gates garbling technique (Zahur et al., Eurocrypt’15) to their constant-round 2-party PFE scheme. Compared to their scheme, our resulting optimization significantly improves the efficiency of both the underlying Oblivious Evaluation of Extended Permutation (OEP) and secure 2-party computation (2PC) protocols, and yields a more than 40% reduction in overall communication cost (the computation time is also slightly decreased and the number of rounds remains unchanged).


Journal ArticleDOI
TL;DR: This study presents a framework for Arabic Twitter content analysis to gain transportation insight and proves the efficiency of the developed analyser in identifying tweets on transportation and the potential of the RGS in defining the location of tweets with no registered location information.
Abstract: The amount of data available online has grown enormously over the last decade as a result of the rapid growth in smartphone users and the availability of communication applications. Owing to the anonymity and instantaneous nature of social media broadcasting compared with conventional attitudinal survey methods, social media mining is becoming a popular complement to traditional traffic detection methods: it is free, can reach a large population and offers the opportunity to reflect the true and immediate behaviour of participants. This study presents a framework for Arabic Twitter content analysis to gain transportation insight. The study uses a dataset of more than 1 million tweets collected over 3 months. The proposed model comprises three main components: data acquisition, data analysis and the reverse geotagging scheme (RGS). The RGS tackles the problem of missing location information in tweets. Results show that 13% of the dataset reports traffic-related incidents, with an overall precision of 55% and 87% for incident identification without and with reverse geotagging, respectively. This demonstrates the efficiency of the developed analyser in identifying tweets on transportation and the potential of the RGS in determining the location of tweets with no registered location information.
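
One way a reverse geotagging scheme can work, inferring a location by matching place names mentioned in the tweet text against a gazetteer, is sketched below; the gazetteer entries and matching rule are illustrative assumptions, not the paper's RGS.

```python
# Sketch of reverse geotagging: when a tweet carries no registered location,
# infer one by matching place names in the text against a small gazetteer.
# The gazetteer and the matching rule are illustrative assumptions.

GAZETTEER = {
    "king fahd road": (24.7136, 46.6753),
    "riyadh": (24.7136, 46.6753),
    "jeddah": (21.4858, 39.1925),
}

def reverse_geotag(tweet_text, registered_location=None):
    if registered_location is not None:
        return registered_location                  # trust explicit geotags first
    text = tweet_text.lower()
    # Prefer the most specific (longest) matching place name.
    for name in sorted(GAZETTEER, key=len, reverse=True):
        if name in text:
            return GAZETTEER[name]
    return None                                     # location stays unknown

print(reverse_geotag("Heavy traffic near King Fahd Road this morning"))
print(reverse_geotag("Accident reported", registered_location=(21.4858, 39.1925)))
```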

Journal ArticleDOI
TL;DR: This paper modeled the operational behavior of IoT network events using timed ACs and proposed a novel hybrid NIDS, where a web server is integrated with IoT devices for remote access, and Constrained Application Protocol is employed in inter- and intra-smart device communication.
Abstract: The proliferation of Internet of Things (IoT) devices has led to many applications, including smart homes, smart cities and smart industrial control systems. Attacks like Distributed Denial of Service, event control hijacking, spoofing, event replay and zero day attacks are prevalent in smart environments. Conventional Network Intrusion Detection Systems (NIDSs) are tedious to deploy in the smart environment because of numerous communication architectures, manufacturer policies, technologies, standards and application-specific services. To overcome these challenges, we modeled the operational behavior of IoT network events using timed ACs and proposed a novel hybrid NIDS in this paper. A web server is integrated with IoT devices for remote access, and Constrained Application Protocol is employed in inter- and intra-smart device communication. Experiments are conducted in real time to validate our proposal and achieve 99.17% detection accuracy and 0.01% false positives.
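
The timed-behavior idea, flagging event sequences whose inter-arrival times fall outside the modeled normal windows (for instance, replayed events), can be sketched as follows; the event names and timing bounds are illustrative assumptions, not the paper's timed model.

```python
# Sketch: validate IoT events against a timing model of normal behavior,
# flagging replayed or out-of-window events. Event names and the allowed
# inter-arrival windows are illustrative assumptions.

NORMAL_WINDOWS = {
    # (previous_event, next_event): (min_seconds, max_seconds)
    ("door_unlock", "door_open"): (0.5, 30.0),
    ("motion_detected", "light_on"): (0.0, 5.0),
}

def check_event(prev_event, prev_time, event, time):
    window = NORMAL_WINDOWS.get((prev_event, event))
    if window is None:
        return "unknown transition"          # not part of modeled behavior
    lo, hi = window
    gap = time - prev_time
    return "ok" if lo <= gap <= hi else "alert: timing anomaly (possible replay)"

print(check_event("door_unlock", 100.0, "door_open", 101.2))   # ok
print(check_event("door_unlock", 100.0, "door_open", 100.1))   # alert (too fast)
```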



Journal ArticleDOI
TL;DR: This work presents the first architecture of a scaler dedicated to this family of augmented three-moduli sets, and experimental results for 65 nm CMOS technology show that the proposed reverse converter reduces the delay by about 17.6–33.4% in comparison with the state of the art.
Abstract: Inter-modulo arithmetic operations, such as reverse conversion and scaling, are important but difficult operations in the RNS domain. This paper proposes efficient arithmetic units to perform this hard class of RNS operations for a new family of augmented three-moduli sets $\{2^{n+k}, 2^{n}-1, 2^{n+1}-1\}$ $(0 \leq k \leq n)$. Experimental results for 65 nm CMOS technology show that the proposed reverse converter reduces the delay by about $(17.6-33.4)\%$ in comparison with the state of the art. Moreover, this work presents the first architecture of a scaler dedicated to this family of augmented three-moduli sets.
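
Reverse conversion recovers the weighted (binary) number from its residues; since the three moduli are pairwise coprime, the textbook Chinese Remainder Theorem already does this, as the sketch below checks for the illustrative parameters n = 3, k = 1. The paper's contribution is an optimized hardware converter, not this generic computation.

```python
# Reverse (residue-to-binary) conversion for the moduli set
# {2^(n+k), 2^n - 1, 2^(n+1) - 1} via the textbook CRT. The paper's converter
# is an optimized hardware architecture; this only checks the arithmetic.
# n = 3, k = 1 are illustrative parameters.

from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # The moduli are pairwise coprime, so the modular inverse exists.
        x += r * Mi * pow(Mi, -1, m)
    return x % M

n, k = 3, 1
moduli = [2 ** (n + k), 2 ** n - 1, 2 ** (n + 1) - 1]   # [16, 7, 15]
X = 1234 % prod(moduli)
residues = [X % m for m in moduli]
print(residues, crt(residues, moduli) == X)   # round trip succeeds
```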

Journal ArticleDOI
TL;DR: This paper investigates the use of the Web Audio API as a Web Browser Fingerprinting method capable of identifying devices; initial results show that the proposed method can identify a device’s class based on features such as the device’s type, web browser version and rendering engine.
Abstract: Web Browser Fingerprinting is a process in which users are, with high likelihood, uniquely identified by features extracted from their devices, generating an identifier key (fingerprint). Although it can be used for malicious purposes, especially regarding privacy invasion, Web Browser Fingerprinting can also be used to enhance security (e.g. as a factor in two-factor authentication). This paper investigates the use of the Web Audio API as a Web Browser Fingerprinting method capable of identifying devices. The idea is to determine whether audio can provide features capable of identifying users and devices. Our initial results show that the proposed method is capable of identifying a device’s class, based on features such as the device’s type, web browser version and rendering engine.



Journal ArticleDOI
TL;DR: This paper explores the set of consistency models not supported in an available and partition-tolerant service (CAP-constrained models) and builds a hierarchy of consistency models depending on their strength and convergence.
Abstract: The CAP theorem states that only two of the following properties can be simultaneously guaranteed in a distributed service: (i) consistency, (ii) availability and (iii) network partition tolerance. This theorem was stated and proved assuming that ‘consistency’ refers to atomic consistency. However, multiple consistency models exist and atomic consistency is located at the strongest edge of that spectrum. Many distributed services deployed in cloud platforms should be highly available and scalable. Network partitions may arise in those deployments and should be tolerated. One way of dealing with CAP constraints consists in relaxing consistency. Therefore, it is interesting to explore the set of consistency models not supported in an available and partition-tolerant service (CAP-constrained models). Other weaker consistency models could be maintained when scalable services are deployed in partitionable systems (CAP-free models). Three contributions arise: (i) multiple other CAP-constrained models are identified, (ii) a borderline between CAP-constrained and CAP-free models is set and (iii) a hierarchy of consistency models depending on their strength and convergence is built.