
Showing papers in "International Journal of Computer Network and Information Security in 2023"


Journal ArticleDOI
TL;DR: In this article, a threat detection model for Twitter based on a semantic network, named DetThr, was proposed and compared against a set of well-known machine learning algorithms.
Abstract: Social media provides a free space for users to post their information, opinions, feelings, etc. It also allows users to communicate with each other easily and simultaneously. As a result, threat detection in social media is critical for ensuring user safety and preventing suspicious activities such as criminal behavior, hate speech, ethnic conflicts and terrorist plots. These suspicious activities have a negative impact on community life and cause tension and social unrest among individuals, both inside and outside of cyberspace. Furthermore, with the recent popularity of social networking sites, the number of discussions containing threats is increasing, causing fear in various parties, whether at the individual or state level. Moreover, social networking service providers do not have complete control over the content that users post. In this paper, we propose a threat detection model for Twitter based on a semantic network. To achieve this aim, we designed a threat semantic network, named ThrNet, which is integrated into our proposed threat detection model, called DetThr. We compared the performance of our model (DetThr) with a set of well-known machine learning algorithms. Results show that the DetThr model achieves an accuracy of 76%, outperforming the machine learning algorithms. The error rate for misclassifying threatening tweets as non-threatening (false negatives) is about 29%, while the error rate for misclassifying non-threatening tweets as threatening (false positives) is about 19%.

Journal ArticleDOI
TL;DR: In this article, a generator network (G) is used to generate fake URL strings and a discriminator network (D) is employed to distinguish the real from the fake URL samples.
Abstract: Phishing attacks via malicious URLs/web links are common nowadays. User data, such as login credentials and credit card numbers, can be stolen when users carelessly click on these links. Moreover, this can lead to the installation of malware on the target systems to freeze their activities, perform a ransomware attack or reveal sensitive information. Recently, GAN-based models have been attractive for anti-phishing URL work. The general idea is to use a generator network (G) to generate fake URL strings and a discriminator network (D) to distinguish the real from the fake URL samples. G and D are trained adversarially so that the URL samples synthesized by G become more and more similar to the real ones. From the perspective of cybersecurity defense, the GAN-based approach can be exploited to obtain D as a phishing URL detector or classifier: after training the GAN on both malign and benign URL strings, a strong classifier/detector D can be achieved. From the perspective of cyberattack, attackers would like to create fake URLs that are as close to real ones as possible in order to perform phishing attacks, making it easier to fool users and detectors. In the related proposals, GAN-based models are mainly exploited for anti-phishing URLs; there have been no evaluations specific to GAN-generated fake URLs, which an attacker could use for phishing attacks. In this work, we propose using TLD (Top-Level Domain) and SSIM (Structural Similarity Index) scores to evaluate the GAN-synthesized URL strings in terms of their structural similarity with the real ones. The more similar the structure of the GAN-generated URLs is to the real ones, the more likely they are to fool the classifiers. Different GAN models, from the basic GAN to extensions such as DCGAN, WGAN and SeqGAN, are explored in this work. Intensive experiments show that the D classifiers of the basic GAN and DCGAN surpass those of the other GAN models, WGAN and SeqGAN. The fake URL patterns generated by SeqGAN are the most effective compared to the other GAN models, in both structural similarity and the ability to deceive the phishing URL classifiers, LSTM (Long Short-Term Memory) and RF (Random Forest).
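
As a rough illustration of the discriminator-as-detector idea described above, the sketch below trains a small D network on character-level one-hot encodings of URL strings; the generator, the adversarial loop, the TLD/SSIM evaluation and any real dataset are omitted, and the vocabulary, URL length and network sizes are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch (not the paper's code): a GAN-style discriminator D reused as a
# phishing-URL classifier. Assumes URLs are padded/truncated to MAX_LEN characters
# and one-hot encoded over a small ASCII vocabulary; the generator G and the
# adversarial training loop are omitted for brevity.
import string
import torch
import torch.nn as nn

VOCAB = string.ascii_lowercase + string.digits + "./-:_?=&%"
MAX_LEN = 64

def encode(url: str) -> torch.Tensor:
    """One-hot encode a URL into a (MAX_LEN * |VOCAB|) vector."""
    x = torch.zeros(MAX_LEN, len(VOCAB))
    for i, ch in enumerate(url.lower()[:MAX_LEN]):
        j = VOCAB.find(ch)
        if j >= 0:
            x[i, j] = 1.0
    return x.flatten()

# Discriminator: benign (0) vs. phishing/fake (1) URL strings.
D = nn.Sequential(
    nn.Linear(MAX_LEN * len(VOCAB), 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

def train_step(urls, labels, opt, loss_fn=nn.BCELoss()):
    """One supervised update of D on a batch of (url, label) pairs."""
    x = torch.stack([encode(u) for u in urls])
    y = torch.tensor(labels, dtype=torch.float32).unsqueeze(1)
    opt.zero_grad()
    loss = loss_fn(D(x), y)
    loss.backward()
    opt.step()
    return loss.item()

opt = torch.optim.Adam(D.parameters(), lr=1e-3)
print(train_step(["example.com/login", "paypa1-secure.xyz/verify"], [0.0, 1.0], opt))
```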

Journal ArticleDOI
TL;DR: In this paper, an outlier detection technique based on deep learning was proposed for WSNs using Generative Adversarial Networks (GANs) with an autoencoder neural network.
Abstract: In wireless sensor networks (WSN), sensor nodes are expected to operate autonomously in human-inaccessible and hostile environments, so the sensor nodes and communication links are prone to faults and potential malicious attacks. Sensor readings that differ significantly from the usual pattern of sensed data due to faults in sensor nodes, unreliable communication links, or physical and logical malicious attacks are considered outliers. This paper presents an outlier detection technique based on deep learning, namely generative adversarial networks (GANs) with an autoencoder neural network. The two-level architecture proposed for WSNs makes the technique robust. Simulation results indicate improved detection accuracy compared to existing state-of-the-art techniques for WSNs and an increase in network lifetime. The robustness of the outlier detection algorithm with respect to channel faults and to different distributions of faulty communication channels is analyzed.
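
A minimal sketch of the autoencoder component of such a scheme is shown below: the autoencoder is trained on normal sensor readings only, and readings whose reconstruction error exceeds a threshold are flagged as outliers. The GAN training, the two-level WSN architecture and the real sensor data from the paper are not reproduced; the 3-sigma threshold and the synthetic data are assumptions for illustration.

```python
# Sketch of the autoencoder part only: readings whose reconstruction error
# exceeds a threshold learned from normal data are flagged as outliers.
# The adversarial (GAN) training and the two-level WSN architecture described
# in the paper are omitted; the data here is synthetic.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.tensor(np.random.normal(25.0, 1.0, size=(500, 4)), dtype=torch.float32)

ae = nn.Sequential(nn.Linear(4, 2), nn.ReLU(), nn.Linear(2, 4))  # encoder + decoder
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):                      # train on normal readings only
    opt.zero_grad()
    loss = loss_fn(ae(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((ae(normal) - normal) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()   # simple 3-sigma rule (assumption)

def is_outlier(reading):
    x = torch.tensor(reading, dtype=torch.float32)
    with torch.no_grad():
        return float(((ae(x) - x) ** 2).mean()) > float(threshold)

print(is_outlier([25.1, 24.8, 25.3, 24.9]), is_outlier([60.0, 24.8, 25.3, 24.9]))
```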

Journal ArticleDOI
TL;DR: In this article, the authors proposed a load balancing strategy with a cluster head selection protocol to extend network lifetime and reduce energy consumption in wireless sensor networks; it improves relay node lifetime and shares the load equitably across relay nodes throughout the network.
Abstract: Researchers have been paying close attention to wireless sensor networks (WSN) because of their variety of applications, including industrial management, human detection, and health care management. In a WSN, efficient energy consumption is a challenging problem. Many clustering techniques have been used for balancing the load of the network. In clustering, the cluster head (CH) is selected as a relay node with greater power compared to the non-CH nodes. The existing system uses the LBC-COFL algorithm to reduce the energy consumption problem. To overcome this problem, the proposed system uses a Quasi-oppositional based Jaya load balancing strategy with a cluster head selection protocol (QOJ-LCH) to boost network lifespan and reduce energy consumption. The QOJ-LCH method improves the relay nodes' lifetime and shares the load on relay nodes equitably across the network. It also reduces the load-balancing problems in wireless sensor networks. It uses two routing methods, single-hop and multiple-hop. The proposed QOJ-LCH with cluster head selection method enhances the network's lifespan, total power utilization, and the number of active sensor devices: in single-hop routing it worked with 1600 rounds and 300 sensor nodes, and in multiple-hop routing it worked with 1800 rounds and 350 sensor nodes. It achieves better performance, scalability and reliability.
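
The abstract does not give the QOJ-LCH formulas, but two of its building blocks can be sketched: quasi-oppositional candidate generation (sampling between the interval centre and the opposite point, as in quasi-oppositional learning) and a cluster-head fitness that favours residual energy and penalizes distance. The weights and the fitness form below are illustrative assumptions, not the paper's definitions.

```python
# Sketch of two building blocks suggested by the abstract: quasi-oppositional
# candidate generation (as used in quasi-oppositional Jaya variants) and a
# cluster-head fitness that rewards residual energy and penalizes distance to
# the base station. The weights w_energy / w_dist are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def quasi_opposite(x, lo, hi):
    """Quasi-opposite point: sampled between the interval centre and the
    opposite point (lo + hi - x), dimension-wise."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - x
    low = np.minimum(centre, opposite)
    high = np.maximum(centre, opposite)
    return rng.uniform(low, high)

def ch_fitness(node, base_station, w_energy=0.7, w_dist=0.3):
    """Higher is better: prefer high residual energy and short distance to BS."""
    dist = np.linalg.norm(node["pos"] - base_station)
    return w_energy * node["energy"] - w_dist * dist

lo, hi = np.zeros(2), np.full(2, 100.0)          # 100 m x 100 m field
node = {"pos": rng.uniform(lo, hi), "energy": 0.8}
print(ch_fitness(node, base_station=np.array([50.0, 50.0])))
print(quasi_opposite(node["pos"], lo, hi))
```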

Journal ArticleDOI
TL;DR: In this paper, the authors proposed and implemented solutions to augment data confidentiality, integrity and availability in DkSAP-IoT in accordance with the tenets of information security, using symmetric encryption and decentralized data storage.
Abstract: Blockchain technology has, over the past decade, unarguably gained widespread attention owing to its often-tagged disruptive nature and remarkable features of decentralization, immutability and transparency, among others. However, the technology comes bundled with challenges. At the center of these challenges is privacy preservation, which has been researched extensively, with diverse solutions proposed for privacy protection of transaction initiators, recipients and transaction data. The dual-key stealth address protocol for IoT (DkSAP-IoT) is one such solution aimed at privacy protection for transaction recipients. Induced by the need to reuse locally stored data, the current implementation of DkSAP-IoT is deficient in the realms of data confidentiality, integrity and availability, consequently defeating the core essence of the protocol in the event of unauthorized access, disclosure or data tampering emanating from a hack, or from theft or loss of the device. Data unavailability and other security-related data breaches in effect render the existing protocol inoperable. In this paper, we propose and implement solutions to augment data confidentiality, integrity and availability in DkSAP-IoT in accordance with the tenets of information security, using symmetric encryption and data storage that leverages a decentralized storage architecture, consequently providing data integrity. Experimental results show that our solution provides content confidentiality, consequently strengthening privacy owing to the encryption utilized. We make the full code of our solution publicly available on GitHub.
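
To make the confidentiality step concrete, the following sketch encrypts a locally stored record with an authenticated symmetric cipher before it would be handed to a decentralized storage backend. AES-256-GCM is an assumption (the paper only states that symmetric encryption is used), and the upload call is a hypothetical placeholder, not a real storage API.

```python
# Sketch of the confidentiality step only: encrypt locally stored data with an
# authenticated symmetric cipher before handing it to decentralized storage.
# AES-256-GCM is an assumed cipher choice; upload_to_decentralized_storage is a
# placeholder name, not a real storage API.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, aad: bytes = b"dksap-iot") -> bytes:
    nonce = os.urandom(12)                       # 96-bit nonce, unique per record
    ct = AESGCM(key).encrypt(nonce, plaintext, aad)
    return nonce + ct                            # store nonce alongside ciphertext

def decrypt_record(key: bytes, blob: bytes, aad: bytes = b"dksap-iot") -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, aad)   # raises if the blob was tampered with

key = AESGCM.generate_key(bit_length=256)
blob = encrypt_record(key, b'{"recipient_meta": "..."}')
# upload_to_decentralized_storage(blob)          # hypothetical storage call
assert decrypt_record(key, blob) == b'{"recipient_meta": "..."}'
```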

Journal ArticleDOI
TL;DR: In this article, a weighted random forest (W-RF) algorithm is used to detect attacks in encrypted remote access network traffic, where attacks comprise only a small proportion of the traffic; class weighting is used to address this imbalance.
Abstract: Remote access technologies encrypt data to enforce policies and ensure protection. Attackers leverage such techniques to launch carefully crafted evasion attacks that introduce malware and other unwanted traffic to the internal network. Traditional security controls such as anti-virus software, firewalls, and intrusion detection systems (IDS) decrypt network traffic and employ signature- and heuristic-based approaches for malware inspection. In the past, machine learning (ML) approaches have been proposed for specific malware detection and traffic-type characterization. However, decryption introduces computational overheads and dilutes the privacy goal of encryption. The existing ML approaches employ limited features and are not objectively developed for remote access security. This paper presents a novel ML-based approach to encrypted remote access attack detection using a weighted random forest (W-RF) algorithm. Key features are determined using feature importance scores. Class weighting is used to address the imbalanced data distribution common in remote access network traffic, where attacks comprise only a small proportion of the traffic. Results are presented from evaluating the approach on benign virtual private network (VPN) and attack network traffic datasets that comprise verified normal hosts and common attacks from real-world network traffic. With recall and precision of 100%, the approach demonstrates effective performance. The results for k-fold cross-validation and the receiver operating characteristic (ROC) mean area under the curve (AUC) demonstrate that the approach effectively detects attacks in encrypted remote access network traffic, successfully averting attackers and network intrusions.
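
A minimal sketch of the classifier core described here, using scikit-learn: a random forest with balanced class weights for the skewed benign/attack distribution, feature selection by importance score, and k-fold cross-validation with ROC AUC. The synthetic data, the importance cut-off and the hyperparameters are stand-ins, not the paper's configuration.

```python
# Sketch of a weighted random forest with class weighting for the imbalanced
# benign/attack split, plus feature selection by importance score. The data
# below stands in for flow features extracted from encrypted remote-access
# traffic; thresholds and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)

rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            random_state=0)
rf.fit(X, y)

# Keep only the most informative features (top-10 cut-off is an assumption).
important = np.argsort(rf.feature_importances_)[::-1][:10]
rf_small = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                  random_state=0)
print(cross_val_score(rf_small, X[:, important], y, cv=5,
                      scoring="roc_auc").mean())
```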

Journal ArticleDOI
TL;DR: In this paper, the authors investigated testing-effort-dependent software reliability growth models, considering the exponentiated-Gompertz function as the testing-effort function to determine the release time of the software.
Abstract: Quality is a consequential factor for a software product. During software development, the utmost care is taken at each step to obtain a quality product. The development process is generally embedded with several qualitative and quantitative techniques. The characteristics of the final software product should meet all the standards. Reliability is a paramount attribute that quantifies the probability that a software product will perform its intended functionality before it actually fails. Software testing is a paramount phase in which enormous resources are consumed. Around fifty percent of the cost is consumed during this testing phase, which is why testing is performed in a disciplined environment. Software product release time is considered a crucial point at which testing is stopped and the product can be released to market, such that the software product has quality and reliability. In this paper we investigate the concept of testing-effort-dependent software reliability growth models by considering the exponentiated-Gompertz function as the testing-effort function to determine the release time of the software. The constructed testing-effort-dependent models were evaluated on three authentic time datasets. Parameter estimation is done through least squares estimation, and metrics such as Mean Square Error (MSE) and Absolute Error (AE) are used for model comparison. The proposed testing-effort-dependent model performed better than the rest of the models.
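
For illustration, the sketch below fits a testing-effort function of the form W(t) = α·[exp(−β·e^(−γt))]^θ (a Gompertz curve raised to a power θ) to synthetic effort data by least squares and reports MSE and absolute error. This parameterisation of the exponentiated-Gompertz function, the synthetic data and the summed form of AE are assumptions for illustration, not the paper's model.

```python
# Illustrative least-squares fit of an assumed exponentiated-Gompertz
# testing-effort function W(t) = alpha * (exp(-beta * exp(-gamma * t)))**theta.
# The functional form, the synthetic effort data and the AE definition (summed
# absolute error) are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def exp_gompertz(t, alpha, beta, gamma, theta):
    return alpha * np.exp(-beta * np.exp(-gamma * t)) ** theta

t = np.arange(1, 21, dtype=float)            # testing periods (e.g. weeks)
effort = exp_gompertz(t, 100, 5, 0.3, 1.2) + np.random.default_rng(1).normal(0, 1.5, t.size)

params, _ = curve_fit(exp_gompertz, t, effort, p0=[80, 3, 0.2, 1.0], maxfev=10000)
pred = exp_gompertz(t, *params)

mse = np.mean((effort - pred) ** 2)          # Mean Square Error
ae = np.sum(np.abs(effort - pred))           # Absolute Error (summed here)
print(params, mse, ae)
```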

Journal ArticleDOI
TL;DR: In this article, a patch-based sclera and periocular segmentation method using deep learning neural networks was proposed for multimodal biometric authentication in today's digitized world.
Abstract: Biometric authentication has become an essential security aspect in today's digitized world. As the limitations of unimodal biometric systems became apparent, multimodal biometrics has become more popular. Much research has been done on multimodal biometric systems over the past decade, and sclera and periocular biometrics have gained more attention. The segmentation of the sclera is a complex task, as there is a chance of losing some features of the sclera vessel patterns. In this paper we propose patch-based sclera and periocular segmentation. Experiments were conducted on sclera patches, periocular patches and sclera-periocular patches. These sclera and periocular patches are trained using deep learning neural networks. A convolutional neural network (CNN) is applied individually to the sclera and periocular patches on a combination of three datasets. The datasets contain images with occlusions and spectacles. The accuracy of the proposed sclera-periocular patches is 97.3%. The performance of the proposed patch-based system is better than that of traditional segmentation methods.
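
A minimal sketch of a patch-level CNN of the kind described is given below: small grayscale patches are classified into sclera, periocular or background classes. The 32×32 patch size, the three-class split and the layer sizes are illustrative assumptions; training data and the actual segmentation pipeline are omitted.

```python
# Minimal sketch of a patch-level CNN: each small grayscale patch is classified
# as sclera / periocular / background. Patch size, channel counts and the
# three-class split are assumptions for illustration; training is omitted.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
patches = torch.randn(4, 1, 32, 32)        # a batch of 4 patches
print(model(patches).shape)                # -> torch.Size([4, 3])
```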

Journal ArticleDOI
TL;DR: In this paper, the Multi Objective Congestion Metric based Artificial Ecosystem Optimization (MOCMAEO) is proposed to enhance road safety in VANETs.
Abstract: Vehicular Ad-hoc Network (VANET) is a growing technology that utilizes moving vehicles as mobile nodes for exchanging essential information between users. Unlike conventional radio-frequency based VANETs, Visible Light Communication (VLC) is used in the VANET to improve throughput. However, road safety is a significant issue for users of VANETs, and congestion creates collisions between vehicle transmissions that cause packet loss. Therefore, congestion-aware routing needs to be developed to enhance road safety. In this paper, the Multi Objective Congestion Metric based Artificial Ecosystem Optimization (MOCMAEO) is proposed to enhance road safety. MOCMAEO is used along with the Ad hoc On-Demand Distance Vector (AODV) routing protocol to generate the optimal routing path from the source node to the Road Side Unit (RSU). Specifically, the performance of MOCMAEO is improved using multi-objective fitness functions comprising the congestion metric, residual energy, distance, and hop count. The performance of MOCMAEO is analyzed in terms of Packet Delivery Ratio (PDR), throughput, delay, and Normalized Routing Load (NRL). PSO-based geocast routing protocols such as LARgeoOPT, DREAMgeoOPT, and ZRPgeoOPT are used as baselines to evaluate the MOCMAEO method. The PDR of the MOCMAEO method is 99.92% for 80 nodes, which is higher than that of the existing methods.
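
The abstract lists the objectives (congestion metric, residual energy, distance, hop count) but not how they are combined, so the sketch below simply aggregates normalized objectives into one weighted route score and picks the best candidate; the weights and the linear aggregation are assumptions, and the artificial ecosystem optimization search itself is not shown.

```python
# Sketch of a multi-objective route score of the kind the abstract lists
# (congestion, residual energy, distance, hop count). The weighted-sum
# aggregation and the weight values are assumptions; the AEO search is omitted.
def route_fitness(route, w=(0.4, 0.3, 0.2, 0.1)):
    """route: dict with congestion, residual_energy, distance and hops all
    normalized to [0, 1]. Lower score = better route."""
    w_c, w_e, w_d, w_h = w
    return (w_c * route["congestion"]
            + w_e * (1.0 - route["residual_energy"])
            + w_d * route["distance"]
            + w_h * route["hops"])

candidates = [
    {"congestion": 0.2, "residual_energy": 0.9, "distance": 0.5, "hops": 0.3},
    {"congestion": 0.6, "residual_energy": 0.7, "distance": 0.3, "hops": 0.2},
]
print(min(candidates, key=route_fitness))
```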

Journal ArticleDOI
TL;DR: In this paper, the authors consider a non-positional number system in residual classes and use the following properties: independence, equality, and small capacity of the residues that define a non-positional code structure.
Abstract: An important task in designing complex computer systems is to ensure high reliability. Many authors investigate this problem and solve it in various ways. Most known methods are based on the use of natural or artificially introduced redundancy. This redundancy can be used passively and/or actively, with or without restructuring of the computer system. This article explores new technologies for improving fault tolerance through the use of natural and artificially introduced redundancy of the applied number system. We consider a non-positional number system in residual classes and use the following properties: independence, equality, and small capacity of the residues that define a non-positional code structure. This allows one to: parallelize arithmetic calculations at the level of decomposition of the remainders of numbers; implement spatial separation of data elements with the possibility of their subsequent asynchronous independent processing; and perform tabular execution of arithmetic operations of the base set and polynomial functions with single-cycle sampling of the result of a modular operation. Using specific examples, we present the calculation and a comparative analysis of the reliability of computer systems. The conducted studies have shown that the use of non-positional code structures in the system of residual classes provides high reliability. In addition, as the bit grid of computing devices grows, the efficiency of using the system of residual classes increases. Our studies show that, to increase reliability, it is advisable to add redundancy at the level of small nodes and blocks of a complex system, since the failure rate of individual elements is always less than the failure rate of the entire computer system.
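
The residue-number-system idea underlying this work can be illustrated in a few lines: a number is held as small independent residues modulo pairwise-coprime bases, each residue channel is processed independently (which is what the parallelism and fault-tolerance arguments rely on), and the value is recovered with the Chinese Remainder Theorem. The moduli below are an arbitrary example set, not the paper's.

```python
# Sketch of the residue number system (system of residual classes): a value is
# stored as independent residues modulo pairwise-coprime bases, residues are
# processed independently, and the value is recovered via the Chinese
# Remainder Theorem. The moduli are an arbitrary example set.
from math import prod

MODULI = (7, 11, 13, 15)                     # pairwise coprime bases
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each residue channel is updated independently; a fault in one channel
    # does not corrupt the others, which underpins the fault-tolerance argument.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(residues):
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)         # CRT reconstruction
    return x % M

a, b = 1234, 5678
assert from_rns(rns_add(to_rns(a), to_rns(b))) == (a + b) % M
print(to_rns(a), from_rns(to_rns(a)))
```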

Journal ArticleDOI
TL;DR: In this paper, a security system that relies on encryption as an effective solution to secure image data is presented, where the confusion and diffusion strategies are carried out utilizing the Hilbert curve and a chaotic map, namely the two-dimensional Henon map (2D-HM), to achieve image confusion with pixel-level permutation.
Abstract: The huge availability and prosperity of internet technology have resulted in increased online media sharing over cloud platforms, which have become important resources and tools for development in our societies. In this epoch of enormous data, a great amount of sensitive information and different media are transmitted over the net for communication. Recently, fog computing has captured the world's attention due to its inherent features relative to the cloud domain, but this has raised many issues related to data security and privacy in fog computing, which is still understudied in its initial juncture. Therefore, in this paper, we present a security system that relies on encryption as an effective solution to secure image data. We use an approach combining a chaotic map with space-curve techniques; the confusion and diffusion strategies are carried out utilizing the Hilbert curve and a chaotic map, the two-dimensional Henon map (2D-HM), to achieve image confusion with pixel-level permutation. We also shuffle the image in blocks and use a randomly chosen key for each block to obtain a high degree of security. The efficiency of the proposed technique has been tested using different investigations, such as entropy analysis [7.9993], NPCR [99.6908%] and UACI [33.6247%]. The analysis of results reveals that the proposed image encryption technique has favorable effects and achieves good results; moreover, it resists different attacks, and comparison with other techniques indicates that our proposal attains a high security level with high quality.
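
As a sketch of the confusion stage only, the snippet below uses a 2D Henon map to generate a chaotic sequence that is sorted into a pixel permutation; the Hilbert-curve scan, the per-block keys and the diffusion stage described above are omitted, and the classical parameters a = 1.4, b = 0.3 together with the initial state stand in for the key.

```python
# Sketch of the confusion step only: a 2D Henon map generates a chaotic
# sequence whose sort order defines a pixel permutation. The Hilbert-curve
# scan, block-wise keys and the diffusion stage of the paper are omitted.
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        xs[i] = x
    return xs

def permute_pixels(img, key=(0.1, 0.3)):
    flat = img.flatten()
    order = np.argsort(henon_sequence(flat.size, *key))   # chaotic permutation
    return flat[order].reshape(img.shape), order

def unpermute_pixels(scrambled, order):
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[order] = scrambled.flatten()
    return flat.reshape(scrambled.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
scrambled, order = permute_pixels(img)
assert np.array_equal(unpermute_pixels(scrambled, order), img)
```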

Journal ArticleDOI
TL;DR: In this article, the authors proposed a novel DDoS mitigation scheme using the LCDT-M (Log-Cluster DDoS Tree Mitigation) framework for the hybrid cloud environment, which detects and mitigates DDoS attacks in a Software-Defined Network (SDN) based cloud environment.
Abstract: In the cloud computing platform, DDoS (Distributed Denial-of-Service) attacks are among the most commonly occurring attacks. Research studies on DDoS mitigation have rarely considered the data shift problem in real-time implementation. Concurrently, existing studies have attempted to perform DDoS attack detection, but they have been deficient regarding the detection rate. Hence, this study proposes a novel DDoS mitigation scheme using the LCDT-M (Log-Cluster DDoS Tree Mitigation) framework for the hybrid cloud environment. LCDT-M detects and mitigates DDoS attacks in the Software-Defined Network (SDN) based cloud environment. The LCDT-M comprises three algorithms: GFS (Greedy Feature Selection), TLMC (Two Log Mean Clustering), and DM (Detection-Mitigation) based on a DT (Decision Tree), to optimize the detection of DDoS attacks along with mitigation in SDN. The study simulated the defined cloud environment and considered the data shift problem during real-time implementation. As a result, the proposed architecture achieved an accuracy of about 99.83%, confirming its superior performance.
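
The GFS, TLMC and DM algorithms are specific to LCDT-M, but the general shape of such a pipeline can be sketched with off-the-shelf components: select features, split flows into two clusters as a coarse normal/attack grouping, then train a decision tree on the cluster-derived labels. The univariate feature selection, KMeans clustering and synthetic data below are stand-ins for illustration only.

```python
# Sketch of the pipeline's general shape (not the LCDT-M algorithms themselves):
# feature selection -> two-cluster grouping -> decision tree on cluster labels.
# SelectKBest and KMeans are stand-ins for the paper's GFS and TLMC.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

X, y_true = make_classification(n_samples=2000, n_features=15, weights=[0.9, 0.1],
                                random_state=0)

X_sel = SelectKBest(f_classif, k=8).fit_transform(X, y_true)   # feature selection stand-in
pseudo_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_sel)

dt = DecisionTreeClassifier(max_depth=6, random_state=0)       # detection tree
dt.fit(X_sel, pseudo_labels)
print(dt.score(X_sel, pseudo_labels))
```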

Journal ArticleDOI
TL;DR: In this paper, the authors presented a system that fuses cryptography and steganography methods to scramble the input image and embed it into a carrier medium, enhancing the security level.
Abstract: Information security is an important part of the current interactive world. It is essential for the end-user to preserve the confidentiality and integrity of their sensitive data. As such, information encoding is significant to defend against access by non-authorized users. This paper aims to build a system that fuses cryptography and steganography methods to scramble the input image and embed it into a carrier medium, enhancing the security level. Elliptic Curve Cryptography (ECC) is helpful in achieving high security with a smaller key size. In this paper, ECC with a modification is used to encrypt and decrypt the input image. The carrier medium is transformed into frequency bands using the Discrete Wavelet Transform (DWT). The encrypted hash of the input is hidden in the high-frequency bands of the carrier medium using Least-Significant-Bit (LSB) embedding. This approach achieves data confidentiality along with data integrity. Data integrity is verified using SHA-256. Simulation outcomes of this method have been analyzed by measuring performance metrics. The method enhances the security of images, obtaining a PSNR of 82.7528 dB, an MSE of 0.0012, and an SSIM of 1, compared to other existing scrambling methods.
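
A sketch of the integrity chain only: hash the (already encrypted) secret with SHA-256 and hide the hash bits in the least significant bits of a high-frequency DWT sub-band of the carrier. The ECC encryption, the exact embedding and recovery procedure of the paper are omitted, and rounding the wavelet coefficients to integers before LSB embedding is a simplification made for illustration.

```python
# Sketch of the SHA-256 + DWT + LSB chain: hash the secret, embed the hash bits
# in the LSBs of a high-frequency DWT sub-band of the carrier, then verify.
# The ECC encryption step is omitted; rounding coefficients to integers for LSB
# embedding is a simplification, not the paper's exact procedure.
import hashlib
import numpy as np
import pywt

carrier = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(float)
secret = b"encrypted-image-bytes"                      # stand-in for the ECC-encrypted image

digest = hashlib.sha256(secret).digest()               # 256-bit integrity hash
bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

cA, (cH, cV, cD) = pywt.dwt2(carrier, "haar")          # frequency bands of the carrier
flatH = np.rint(cH).astype(np.int64).flatten()
flatH[: bits.size] = (flatH[: bits.size] & ~1) | bits  # write hash bits into LSBs
cH_embedded = flatH.reshape(cH.shape).astype(float)

stego = pywt.idwt2((cA, (cH_embedded, cV, cD)), "haar")

# Verification, given the same embedded sub-band: read LSBs back and compare.
recovered = (flatH[: bits.size] & 1).astype(np.uint8)
assert hashlib.sha256(secret).digest() == np.packbits(recovered).tobytes()
```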

Journal ArticleDOI
TL;DR: In this article, the authors used the Nearest Neighbor Distance Variance (NNDV) classifier for the prediction of intrusion and compared the predictive accuracy of NNDV with that of the KNN (K Nearest Neighbors) classifier.
Abstract: Activities in network traffic can be broadly classified into two categories: normal and malicious. Malicious activities are harmful, and their detection is necessary for security reasons. The intrusion detection process monitors network traffic to identify malicious activities in the system. Any algorithm that divides objects into two categories, such as good or bad, is a binary class predictor or binary classifier. In this paper, we utilized the Nearest Neighbor Distance Variance (NNDV) classifier for the prediction of intrusion. NNDV is a binary class predictor and uses the concept of variance of the distance between objects. We used the KDD CUP 99 dataset to evaluate NNDV and compared its predictive accuracy with that of the KNN (K Nearest Neighbors) classifier. KNN is an efficient general purpose classifier, but we only considered its binary aspect. The results are quite satisfactory and show that NNDV is comparable to KNN; in many cases, the performance of NNDV is better than that of KNN. We experimented with normalized and unnormalized data for NNDV and found that the accuracy results are generally better for normalized data. We also compared the accuracy results of different cross-validation techniques, such as 2-fold, 5-fold, 10-fold, and leave-one-out, on NNDV for the KDD CUP 99 dataset. Cross-validation results can be helpful in determining the parameters of the algorithm.
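
The NNDV decision rule itself is only named in the abstract, so the snippet below reproduces just the evaluation setup the authors describe: a binary KNN baseline on min-max-normalized features scored with 2-fold, 5-fold, 10-fold and leave-one-out cross-validation. The synthetic two-class data stands in for the KDD CUP 99 normal/attack labels.

```python
# Sketch of the evaluation setup described in the abstract: a binary KNN
# baseline on normalized features under several cross-validation schemes.
# The NNDV rule itself is not reimplemented here; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, KFold, LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

knn = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=5))

for name, cv in [("2-fold", KFold(2, shuffle=True, random_state=0)),
                 ("5-fold", KFold(5, shuffle=True, random_state=0)),
                 ("10-fold", KFold(10, shuffle=True, random_state=0)),
                 ("leave-one-out", LeaveOneOut())]:
    print(name, cross_val_score(knn, X, y, cv=cv).mean())
```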