
Showing papers in "Cybernetics and Information Technologies in 2022"


Journal ArticleDOI
TL;DR: The results presented include a blockchain-enabled supply-chain model facilitating the seed certification process; monitoring and supervision of grain processing; provenance tracking; and, optionally, interactions with regulatory bodies, logistics and financial services.
Abstract: The purpose of this paper is to propose an approach to a blockchain-enabled supply-chain model for a smart crop production framework. The defined tasks are: (1) analysis of the blockchain ecosystem as a network of stakeholders and as an infrastructure of technical and logical elements; (2) definition of a supply-chain model; (3) design of a blockchain reference infrastructure; (4) description of blockchain information channels with basic smart contract functionalities. The results presented include: a supply-chain model facilitating the seed certification process, monitoring and supervision of grain processing, provenance tracking and, optionally, interactions with regulatory bodies, logistics and financial services; a three-level blockchain reference infrastructure; and a blockchain-enabled supply chain supporting five information channels with nine participants and smart contracts. An account management user application tool, general descriptions of basic smart contract functionalities and selected parts of one smart contract's code are provided as examples.
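
As a generic illustration of the provenance idea behind such information channels (not the paper's actual smart contracts or infrastructure; all actor and event names are hypothetical), the following Python sketch models one channel as an append-only, hash-chained log whose integrity can be verified:

```python
import hashlib
import json

class Channel:
    """Append-only, hash-chained log standing in for one information channel."""
    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> str:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.blocks.append({"prev": prev, "record": record, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampered record breaks verification."""
        prev = "0" * 64
        for b in self.blocks:
            payload = json.dumps(b["record"], sort_keys=True)
            if (b["prev"] != prev or
                    hashlib.sha256((prev + payload).encode()).hexdigest() != b["hash"]):
                return False
            prev = b["hash"]
        return True

# Hypothetical seed-certification channel with two provenance events.
seeds = Channel()
seeds.append({"actor": "certifier", "event": "seed lot certified", "lot": "L-1"})
seeds.append({"actor": "farm", "event": "lot planted", "lot": "L-1"})
```

Because each block's hash covers the previous block's hash, altering any stored record invalidates every later block, which is the property real blockchain channels rely on.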

11 citations


Journal ArticleDOI
TL;DR: An Expert Shoplifting Activity Recognition (ESAR) system is introduced to reduce shoplifting incidents in stores/shops; experiments show that the proposed approach achieves up to 90.26% detection accuracy, outperforming other prevalent approaches.
Abstract: Shoplifting is a troubling and pervasive form of consumer theft that causes great losses to retailers. It is the theft of goods from stores/shops, usually by hiding a store item in a pocket or a carrier bag and leaving without payment. Revenue loss is the most direct financial effect of shoplifting. Therefore, this article introduces an Expert Shoplifting Activity Recognition (ESAR) system to reduce shoplifting incidents in stores/shops. The proposed system seamlessly examines each frame in video footage and alerts security personnel when shoplifting occurs. It uses a dual-stream convolutional neural network to extract appearance and salient motion features from the video sequences. Here, optical flow and gradient components are used to extract salient motion features related to shoplifting movement in the video sequence. A Long Short Term Memory (LSTM) based deep learner is modeled to learn the extracted features in the time domain for distinguishing person actions (i.e., normal and shoplifting). Analyzing the model's behavior in diverse modeling environments is an added contribution of this paper. A synthesized shoplifting dataset is used for the experiments. The experimental outcomes show that the proposed approach achieves up to 90.26% detection accuracy, outperforming other prevalent approaches.

7 citations


Journal ArticleDOI
TL;DR: An approach to designing a predictive model for an academic course or module taught in a blended learning format is proposed, introducing requirements that predictive models must meet to be applicable to the educational process, such as interpretability, actionability, and adaptability to a course design.
Abstract: The article is focused on the problem of early prediction of students' learning failures with the purpose of their possible prevention by introducing supportive measures in a timely manner. We propose an approach to designing a predictive model for an academic course or module taught in a blended learning format. We introduce certain requirements that predictive models must meet to be applicable to the educational process, such as interpretability, actionability, and adaptability to a course design. We test three types of classifiers meeting these requirements and choose the one that provides the best performance from the early stages of the semester onward, and therefore provides various opportunities to support at-risk students in time. Our empirical studies confirm that the proposed approach is promising for the development of an early warning system in a higher education institution. Such systems can positively influence student retention rates and enhance the learning and teaching experience in the long term.

3 citations


Journal ArticleDOI
TL;DR: This research shows that every honeyword generation method has numerous weak points.
Abstract: The honeyword system is a successful password-cracking detection system. Simply put, honeywords are false passwords that accompany the sugarword (the real password). The honeyword system aims to improve the security of hashed passwords by facilitating the detection of password cracking. The password database holds many honeywords for every user in the system. If an adversary uses a honeyword to log in, a silent alert will indicate that the password database might be compromised. Previous studies present only brief remarks on honeyword generation methods, covering at most two preceding methods each, so a survey that lists all preceding research with its weaknesses is needed. This work presents all generation methods and lists the strengths and weaknesses of 26 of them. In addition, it offers 32 remarks that highlight their strong and weak points. This research shows that every honeyword generation method has numerous weak points.
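
The detection mechanism described above can be sketched in a few lines of Python (a minimal illustration; usernames, passwords and the flat dictionaries below are hypothetical stand-ins for the password file and the separately stored honeychecker):

```python
import hashlib

def h(pw: str) -> str:
    """Hash a sweetword; real systems would use a slow, salted hash."""
    return hashlib.sha256(pw.encode()).hexdigest()

# Password file as the adversary would see it: hashed sweetwords per user.
password_file = {"alice": [h("winter22"), h("summer19"), h("autumn07")]}
# Honeychecker (kept on a separate hardened server): index of the sugarword.
honeychecker = {"alice": 1}  # "summer19" is the real password

def login(user: str, attempt: str) -> str:
    hashes = password_file[user]
    if h(attempt) not in hashes:
        return "reject"                        # ordinary failed login
    if hashes.index(h(attempt)) == honeychecker[user]:
        return "accept"                        # correct sugarword
    return "alarm"                             # honeyword used: DB likely cracked
```

Only a match against a honeyword (a sweetword that is not at the honeychecker's index) raises the silent alarm, which is exactly the signal that the hashed password file has probably been cracked offline.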

2 citations


Journal ArticleDOI
TL;DR: Proposed is an extension of the augmented Backus-Naur form syntax that enables a formal language to be expressed with a parser grammar and, optionally, an additional lexer grammar.
Abstract: The article describes a string recognition approach, engraved in the parsers generated by Tunnel Grammar Studio that use the tunnel parsing algorithm, of how a lexer and a parser can operate on the input during its recognition. Proposed is an extension of the augmented Backus-Naur form syntax that enables a formal language to be expressed with a parser grammar and, optionally, an additional lexer grammar. The tokens output by the lexer are matched to the phrases in the parser grammar by their name and, optionally, by their lexeme, case-sensitively or insensitively.
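
The token-to-phrase matching rule in the last sentence can be illustrated with a small Python sketch (this is an illustrative model, not Tunnel Grammar Studio's actual API; the token names and lexemes are made up):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    name: str      # token name produced by the lexer, e.g. "keyword"
    lexeme: str    # matched input text, e.g. "BEGIN"

def matches(token: Token, want_name: str, want_lexeme: Optional[str] = None,
            case_sensitive: bool = True) -> bool:
    """Match a lexer token to a parser-grammar phrase by name and,
    optionally, by lexeme with configurable case sensitivity."""
    if token.name != want_name:
        return False
    if want_lexeme is None:            # match by token name only
        return True
    if case_sensitive:
        return token.lexeme == want_lexeme
    return token.lexeme.lower() == want_lexeme.lower()
```

A phrase that names only a token kind accepts any lexeme of that kind, while a phrase that also fixes a lexeme constrains the match further, case-sensitively or not.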

2 citations


Journal ArticleDOI
TL;DR: This paper presents a novel approach for detecting phishing Uniform Resource Locators (URLs) applying the Gated Recurrent Unit (GRU), yielding a fast and highly accurate phishing classifier system.
Abstract: Public health responses to the COVID-19 pandemic since March 2020 have led to lockdowns and social distancing in most countries around the world, with a shift from the traditional work environment to a virtual one. Employees have been encouraged to work from home where possible to slow down the viral infection. The massive increase in the volume of professional activities executed online has posed a new context for cybercrime, with an increase in the number of emails and phishing websites. Phishing attacks broadened and intensified over the course of the COVID-19 pandemic. This paper presents a novel approach for detecting phishing Uniform Resource Locators (URLs) applying the Gated Recurrent Unit (GRU), a fast and highly accurate phishing classifier system. Comparative analysis indicates that the GRU classification system achieves better accuracy (98.30%) than other classifier systems.
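
For readers unfamiliar with the GRU at the core of such a classifier, the following NumPy sketch implements one standard GRU cell step (one common gating convention; the random weights and the tiny "5 encoded URL characters" demo are illustrative, not the paper's trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step; W, U, b hold the (update, reset, candidate) parameters."""
    Wz, Wr, Wn = W; Uz, Ur, Un = U; bz, br, bn = b
    z = sigmoid(Wz @ x + Uz @ h + bz)          # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)          # reset gate
    n = np.tanh(Wn @ x + Un @ (r * h) + bn)    # candidate state
    return (1.0 - z) * n + z * h               # new hidden state

# Tiny demo: 4-dim input features, 3-dim hidden state, random weights.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
W = [rng.standard_normal((d_h, d_in)) for _ in range(3)]
U = [rng.standard_normal((d_h, d_h)) for _ in range(3)]
b = [np.zeros(d_h) for _ in range(3)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):       # e.g., 5 encoded URL characters
    h = gru_step(x, h, W, U, b)
```

In a URL classifier, the characters of the URL are encoded as input vectors and the final hidden state is fed to a small output layer that scores the URL as phishing or benign.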

2 citations


Journal ArticleDOI
TL;DR: An extended classification of Internet users penetrating computer networks, a definition of motivation as a psychological and emotional state, and main prerequisites for modelling a network intruder's activity are suggested.
Abstract: In the present study, an extended classification of Internet users penetrating computer networks, a definition of motivation as a psychological and emotional state, and main prerequisites for modelling a network intruder's activity are suggested. A mathematical model of a malicious individual's behavior and impact on the computer network is developed as a quadratic function of three quantified factors: motivation, satisfaction and system protection. Numerical simulation experiments on unauthorized access and its effect on the computer network are carried out. The obtained results are graphically illustrated and discussed.

2 citations


Journal ArticleDOI
TL;DR: In this article, a model is introduced to improve forgery detection on the basis of a superpixel clustering algorithm and an enhanced Grey Wolf Optimizer (GWO) based AlexNet.
Abstract: In this work a model is introduced to improve forgery detection on the basis of a superpixel clustering algorithm and an enhanced Grey Wolf Optimizer (GWO) based AlexNet. After collecting the images from the MICC-F600, MICC-F2000 and GRIP datasets, patch segmentation is accomplished using a superpixel clustering algorithm. Then, feature extraction is performed on the segmented images to extract deep learning features using an enhanced GWO based AlexNet model for better forgery detection. In the enhanced GWO technique, multi-objective functions are used for selecting the optimal hyper-parameters of AlexNet. Based on the obtained features, an adaptive matching algorithm is used for locating the forged regions in the tampered images. Simulation outcomes showed that the proposed model is effective under salt & pepper noise, Gaussian noise, rotation, blurring and enhancement. The enhanced GWO based AlexNet model attained maximum detection accuracies of 99.66%, 99.75%, and 98.48% on the MICC-F600, MICC-F2000 and GRIP datasets, respectively.
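
For orientation, the canonical single-objective GWO update rule (not the paper's enhanced multi-objective variant) looks as follows; a toy sphere function stands in for the AlexNet hyper-parameter loss, and all parameter values are illustrative:

```python
import random

def gwo(f, dim, n_wolves=12, iters=80, lo=-5.0, hi=5.0, seed=1):
    """Minimize f over [lo, hi]^dim with the standard Grey Wolf Optimizer."""
    rnd = random.Random(seed)
    wolves = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        leaders = [w[:] for w in wolves[:3]]     # alpha, beta, delta (copies)
        a = 2.0 * (1 - t / iters)                # a decreases linearly 2 -> 0
        for w in wolves:
            for j in range(dim):
                x = 0.0
                for lead in leaders:
                    A = 2 * a * rnd.random() - a
                    C = 2 * rnd.random()
                    D = abs(C * lead[j] - w[j])
                    x += lead[j] - A * D         # pull toward each leader
                w[j] = x / 3.0                   # average of the three pulls
    wolves.sort(key=f)
    return wolves[0]

best = gwo(lambda v: sum(x * x for x in v), dim=3)
```

Early on, |A| can exceed 1, which drives exploration; as a shrinks, the pack converges on the three best wolves, here toward the sphere function's minimum at the origin.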

2 citations


Journal ArticleDOI
TL;DR: A Dyna-Q-Learning task scheduling technique is designed over uncertainty-free task and resource parameters and performs well on metrics such as learning rate, accuracy, execution time and resource utilization rate.
Abstract: Task scheduling is an important activity in parallel and distributed computing environments like the grid because performance depends on it. Task scheduling is affected by behavioral and primary uncertainties. Behavioral uncertainty arises due to variability in workload characteristics, size of data and dynamic partitioning of applications. Primary uncertainty arises due to variability in data handling capabilities, processor context switching and interplay between computation-intensive applications. In this paper behavioral and primary uncertainty with respect to task and resource parameters are managed using Type-2-Soft-Set (T2SS) theory. A Dyna-Q-Learning task scheduling technique is then designed over the uncertainty-free task and resource parameters. The results obtained are further validated through simulation using the GridSim simulator. Performance is good on metrics such as learning rate, accuracy, execution time and resource utilization rate.
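
As background, tabular Dyna-Q combines real experience with planning updates replayed from a learned model; the sketch below shows the general algorithm on a hypothetical 5-state chain task (the toy environment, hyper-parameters and step cap are illustrative, not the paper's grid-scheduling formulation):

```python
import random

def dyna_q(step, n_states, n_actions, episodes=200, planning=10,
           alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rnd = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}                                   # (s, a) -> (r, s', done)
    for _ in range(episodes):
        s = 0
        for _ in range(100):                     # step cap keeps episodes finite
            if rnd.random() < eps:
                a = rnd.randrange(n_actions)
            else:                                # greedy with random tie-break
                top = max(Q[s])
                a = rnd.choice([i for i in range(n_actions) if Q[s][i] == top])
            r, s2, done = step(s, a)
            target = r + gamma * max(Q[s2]) * (not done)
            Q[s][a] += alpha * (target - Q[s][a])
            model[(s, a)] = (r, s2, done)
            for _ in range(planning):            # planning: replay the model
                ps, pa = rnd.choice(list(model))
                pr, ps2, pdone = model[(ps, pa)]
                pt = pr + gamma * max(Q[ps2]) * (not pdone)
                Q[ps][pa] += alpha * (pt - Q[ps][pa])
            if done:
                break
            s = s2
    return Q

# Toy chain: action 1 moves right (reward on reaching the terminal state 4),
# action 0 moves left.
def chain(s, a):
    if a == 1:
        done = (s + 1 == 4)
        return (1.0 if done else 0.0, s + 1, done)
    return (0.0, max(s - 1, 0), False)

Q = dyna_q(chain, n_states=5, n_actions=2)
```

The planning loop is what makes Dyna-Q sample-efficient: each real transition is replayed many times from the model, propagating value estimates without further interaction.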

1 citation


Journal ArticleDOI
TL;DR: A citation determinants model for a set of academic engineering texts from Colombia establishes the determinants of the probability that a text receives at least one citation through the relationship among previous citations, journal characteristics, the author and the text.
Abstract: This article provides the results of a citation determinants model for a set of academic engineering texts from Colombia. The model establishes the determinants of the probability that a text receives at least one citation through the relationship among previous citations, journal characteristics, the author and the text. Through a similarity matrix constructed by Latent Semantic Analysis (LSA), a similarity variable has been constructed to capture the fact that the texts have titles, abstracts and keywords similar to those of the most cited texts. The results show: i) joint significance of the variables selected to characterize the text; ii) a direct relationship between citation and keyword similarity, publication in an IEEE journal, being a research article, having more than one author, and having at least one foreign author; and iii) an inverse relationship between the probability of citation and abstract similarity, publication in 2016 or 2017, and publication in a Colombian journal.
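
The similarity variable rests on comparing term vectors; a minimal Python sketch of that comparison is shown below using raw term counts and cosine similarity (full LSA additionally applies an SVD dimensionality reduction, omitted here; the two example "documents" are made up):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

d1 = Counter("neural network citation model".split())
d2 = Counter("citation model for engineering texts".split())
sim = cosine(d1, d2)
```

In the paper's setting, such scores against the most cited texts' titles, abstracts and keywords become regressors in the citation-probability model.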

1 citation


Journal ArticleDOI
TL;DR: An exploratory data analysis of CTI reports is performed to dig out and visualize interesting patterns of cyber threats, helping security analysts proactively mitigate vulnerabilities and predict cyber threats in their networks in a timely manner.
Abstract: In an advanced and dynamic cyber threat environment, organizations need to adopt more proactive methods to handle their cyber defenses. Cyber threat data from previous incidents, known as Cyber Threat Intelligence (CTI), plays an important role by helping security analysts understand recent cyber threats and their mitigations. The mass of CTI is increasing exponentially, and most of its content is textual, which makes it difficult to analyze. Current CTI visualization tools do not provide effective visualizations. To address this issue, an exploratory data analysis of CTI reports is performed to dig out and visualize interesting patterns of cyber threats, helping security analysts proactively mitigate vulnerabilities and predict cyber threats in their networks in a timely manner.

Journal ArticleDOI
TL;DR: This research proposes cross diagonal embedding Pixel Value Differencing and Modulus Function techniques using edge area patterns to improve embedding capacity and imperceptibility simultaneously, while still maintaining a good level of security.
Abstract: The existence of a trade-off between embedding capacity and imperceptibility is a challenge to improving the quality of steganographic images. This research proposes cross diagonal embedding Pixel Value Differencing (PVD) and Modulus Function (MF) techniques using edge area patterns to improve embedding capacity and imperceptibility simultaneously, while still maintaining a good level of security. By implementing them on 14 public datasets, the proposed techniques are proven to increase both capacity and imperceptibility. The cross diagonal embedding PVD is responsible for increasing the embedding capacity, reaching an average value of 3.18 bits per pixel (bpp), and at the same time, the implementation of edge area block patterns-based embedding improves imperceptibility toward an average PSNR above 40 dB and an average SSIM above 0.98. Aside from its success in increasing the embedding capacity and the imperceptibility, the proposed techniques remain resistant to RS attacks.
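
The PVD building block the paper extends can be sketched on a single pixel pair in the classic Wu and Tsai style (the cross diagonal embedding and modulus-function refinements are not reproduced; the range table is the commonly used one, boundary-overflow handling is omitted, and extraction assumes the payload fills the pair's capacity):

```python
# Difference ranges; wider ranges (busier areas) hold more bits.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bits):
    """Embed the leading bits into the pair's difference; return new pair + rest."""
    d = abs(p2 - p1)
    lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
    t = (hi - lo + 1).bit_length() - 1          # bits this pair can hold
    payload, rest = bits[:t], bits[t:]
    d_new = lo + int(payload or "0", 2)         # new difference in same range
    m = d_new - d
    if p2 >= p1:                                # split the change between pixels
        q1, q2 = p1 - m // 2, p2 + (m - m // 2)
    else:
        q1, q2 = p1 + (m - m // 2), p2 - m // 2
    return q1, q2, rest

def extract_pair(q1, q2):
    """Recover the embedded bits from a stego pixel pair."""
    d = abs(q2 - q1)
    lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
    t = (hi - lo + 1).bit_length() - 1
    return format(d - lo, "0{}b".format(t))
```

Because the new difference stays inside the original range, the extractor can recompute the capacity t from the stego pair alone, with no side information.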

Journal ArticleDOI
TL;DR: In this paper, the authors combine ResNet-50 with Spatial Pyramid Pooling (SPP) to identify musical instruments that are similar to one another, which increases detection performance on such instruments.
Abstract: Identifying similar objects is one of the most challenging tasks in computer vision image recognition. The following musical instruments are recognized in this study: French horn, harp, recorder, bassoon, cello, clarinet, erhu, guitar, saxophone, trumpet, and violin. Numerous musical instruments are identical in size, form, and sound. Our work combines ResNet-50 with Spatial Pyramid Pooling (SPP) to identify musical instruments that are similar to one another. The ResNet-50 and ResNet-50 SPP models are evaluated on Floating-Point Operations (FLOPS), detection time, mAP, and IoU. Our work increases detection performance on musical instruments similar to one another. The method we propose, ResNet-50 SPP, shows the highest average accuracy of 84.64% compared to the results of previous studies.
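
The SPP idea itself is simple to sketch: max-pool a feature map over fixed grids at several pyramid levels and concatenate the results, so the output length is independent of the input's spatial size. The NumPy sketch below is illustrative (a single-channel map and 1x1/2x2/4x4 levels; not the paper's network):

```python
import numpy as np

def spp(fmap, levels=(1, 2, 4)):
    """Spatial pyramid max-pooling of a 2D feature map to a fixed-length vector."""
    h, w = fmap.shape
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # grid cell bounds; max() guard keeps cells non-empty
                ys = slice(i * h // n, max((i + 1) * h // n, i * h // n + 1))
                xs = slice(j * w // n, max((j + 1) * w // n, j * w // n + 1))
                out.append(fmap[ys, xs].max())
    return np.array(out)
```

With levels (1, 2, 4) the vector always has 1 + 4 + 16 = 21 entries per channel, which is what lets a fixed-size classifier head accept variable-size inputs.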

Journal ArticleDOI
TL;DR: A modified version of LVQ, called PDLVQ, is proposed to accelerate the traditional version; it shows a clear reduction in runtime as the number of dimensions, the number of clusters, or the size of the data increases, compared with traditional LVQ.
Abstract: Learning Vector Quantization (LVQ) is one of the most widely used classification approaches. LVQ faces a problem: as the size of the data grows large, it becomes slower. In this paper, a modified version of LVQ, called PDLVQ, is proposed to accelerate the traditional version. The proposed scheme aims to avoid unnecessary computations by applying an efficient Partial Distance (PD) computation strategy. Three different benchmark datasets are used in the experiments. LVQ and PDLVQ are compared in terms of runtime, and the results show that PDLVQ is more efficient than LVQ, achieving up to 37% better runtime as the number of dimensions increases. The enhanced algorithm (PDLVQ) also shows a clear reduction in runtime as the number of dimensions, the number of clusters, or the size of the data increases, compared with the traditional LVQ.

Journal ArticleDOI
TL;DR: A new algorithm, Enhancing Weak Nodes in Decision Tree (EWNDT), reinforces weak nodes by augmenting their data from other similar tree nodes and temporarily recalculating the best splitting attribute and the best threshold in the weak node.
Abstract: Decision trees are among the most popular classifiers in machine learning, artificial intelligence, and pattern recognition because they are accurate and easy to interpret. During tree construction, a node containing too few observations (a weak node) could still get split, and the resulting split is then unreliable and statistically worthless. Many existing machine-learning methods can resolve this issue, such as pruning, which removes the tree's non-meaningful parts. This paper deals with weak nodes differently; we introduce a new algorithm, Enhancing Weak Nodes in Decision Tree (EWNDT), which reinforces them by augmenting their data from other similar tree nodes. We call this data augmentation a virtual merging because we temporarily recalculate the best splitting attribute and the best threshold in the weak node. We have used two approaches to define the similarity between two nodes. The experimental results are verified using benchmark datasets from the UCI machine-learning repository. The results indicate that the EWNDT algorithm gives good performance.

Journal ArticleDOI
TL;DR: The proposed NDF IoT approach uses the Owl optimizer to select the best subset of features for identifying suspicious behavior in such environments, and outperforms related works that used the same dataset while reducing the number of features to only three.
Abstract: The Internet of Things (IoT) is widespread in our lives these days (e.g., smart homes, smart cities, etc.). Despite its significant role in providing automatic real-time services to users, these devices are highly vulnerable due to their design simplicity and limitations regarding power, CPU, and memory. Tracing network traffic and investigating its behavior helps in building a digital forensics framework to secure IoT networks. This paper proposes a new Network Digital Forensics approach called NDF IoT. The proposed approach uses the Owl optimizer to select the best subset of features for identifying suspicious behavior in such environments. The NDF IoT approach is evaluated using the Bot IoT UNSW dataset in terms of detection rate, false alarms, accuracy, and f-score. The proposed approach achieves a 100% detection rate and a 99.3% f-score, and outperforms related works that used the same dataset while reducing the number of features to only three.

Journal ArticleDOI
TL;DR: The approach developed in this work can be applied to control, prevent and protect computer networks from malware intrusions.
Abstract: Malware attacks cause great harm in contemporary information systems, which requires analysis of how computer networks react in case of malware impact. The focus of the present study is the analysis of the computer network's states and reactions in case of malware attacks, defined by the susceptibility, exposure, infection and recoverability of computer nodes. Two scenarios are considered: equilibrium without security software and non-equilibrium with security software in the computer network. The behavior of the computer network under a malware attack is described by a system of nonhomogeneous differential equations. The system is solved, and analytical expressions are derived to analyze network characteristics with respect to the susceptibility, exposure, infection and recoverability of computer nodes during a malware attack. The analytical expressions derived are illustrated with results of numerical experiments. The approach developed in this work can be applied to control, prevent and protect computer networks from malware intrusions.
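
Compartment models of this kind can be integrated numerically; the forward-Euler sketch below uses a generic susceptible-exposed-infected-recovered (SEIR) formulation with illustrative rate constants, not the paper's actual equations or parameter values:

```python
def simulate(beta=0.4, sigma=0.2, gamma=0.1, n=1000, i0=1, steps=2000, dt=0.1):
    """Forward-Euler integration of an SEIR-style malware spread model.
    beta: contact/infection rate, sigma: exposure->infection rate,
    gamma: recovery rate, n: number of nodes, i0: initially infected nodes."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    for _ in range(steps):
        new_exposed = beta * s * i / n        # susceptible nodes being exposed
        ds = -new_exposed
        de = new_exposed - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
    return s, e, i, r

s, e, i, r = simulate()
```

Since the four derivatives sum to zero, the total node count is conserved at every step, which is a quick sanity check on any such integration.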

Journal ArticleDOI
TL;DR: Shoc allows running any CPU-intensive and data-intensive workloads in the cloud without needing to manage HPC infrastructure, complex software, and hardware environment deployments.
Abstract: HPC clouds may provide fast access to fully configurable and dynamically scalable virtualized HPC clusters to address complex and challenging computation- and storage-intensive requirements. The complex environmental, software, and hardware requirements and dependencies of such systems make it challenging to carry out large-scale simulations, prediction systems, and other data- and compute-intensive workloads over the cloud. The article presents an architecture that enables HPC workloads to be serverless over the cloud (Shoc), one of the most critical cloud capabilities for HPC workloads. On one hand, Shoc utilizes the abstraction power of container technologies like Singularity and Docker, combined with the scheduling and resource management capabilities of Kubernetes. On the other hand, Shoc allows running any CPU-intensive and data-intensive workloads in the cloud without needing to manage HPC infrastructure, complex software, and hardware environment deployments.

Journal ArticleDOI
TL;DR: A deep learning framework is proposed to predict parameters such as fine particulate matter and carbon monoxide; it achieves good optimization and performs better than the simple LSTM and a Recurrent Neural Network (RNN) based model.
Abstract: Air pollution has increased worries regarding health and ecosystems. Precise prediction of air quality parameters can assist in effective air pollution control and prevention. In this work, a deep learning framework is proposed to predict parameters such as fine particulate matter and carbon monoxide. A Long Short Term Memory (LSTM) neural network-based model that processes sequences in the forward and backward directions, to consider the influence of timesteps in both directions, is employed. For further learning, stacking of unidirectional layers is implemented. The performance of the model is optimized by fine-tuning hyperparameters, regularization techniques to resolve overfitting, and various merging options for the bidirectional input layer. The proposed model achieves good optimization and performs better than the simple LSTM and a Recurrent Neural Network (RNN) based model. Moreover, an attention-based mechanism is adopted to focus on more significant timesteps for prediction. The self-attention approach improves performance further and works especially well for longer sequences and extended time horizons. Experiments are conducted using collected real-world data, and results are evaluated using the mean square error loss function.
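
The attention step amounts to a softmax weighting over the timestep representations; the NumPy sketch below scores each LSTM output against the last step and returns a weighted context vector (a minimal illustration of the mechanism, not the paper's exact architecture or scoring function):

```python
import numpy as np

def attend(hidden_states):
    """hidden_states: (T, d) array of LSTM outputs; returns a (d,) context."""
    scores = hidden_states @ hidden_states[-1]      # similarity to last step
    weights = np.exp(scores - scores.max())         # numerically stable softmax
    weights /= weights.sum()
    return weights @ hidden_states                  # weighted sum of timesteps

context = attend(np.eye(3))
```

Timesteps whose representations align with the query receive larger weights, which is how the model "focuses on more significant timesteps" over long sequences.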

Journal ArticleDOI
TL;DR: In this paper, the authors propose a load balancing scheme using Fuzzy Neutrosophic Soft Set theory (FNSS) based transfer Q-learning with pre-trained knowledge.
Abstract: Effective load balancing is tougher in grid computing than in other conventional distributed computing platforms due to its heterogeneity, autonomy, scalability, and adaptability characteristics, resource selection and distribution mechanisms, and data separation. Hence, it is necessary to identify and handle the uncertainty of the tasks and grid resources before making load balancing decisions. Using two potential forms of Hidden Markov Models (HMM), i.e., the Profile Hidden Markov Model (PF_HMM) and the Pair Hidden Markov Model (PR_HMM), the uncertainties in the task and system parameters are identified. Load balancing is then carried out using our novel Fuzzy Neutrosophic Soft Set theory (FNSS) based transfer Q-learning with pre-trained knowledge. The transfer Q-learning enabled with FNSS solves large-scale load balancing problems efficiently, as the models are already trained and do not need pre-training. Our expected value analysis and simulation results confirm that the proposed scheme is 90 percent better than three recent load balancing schemes.

Journal ArticleDOI
TL;DR: In this paper, the authors develop several methods of noise generation with different distributions that keep the global image characteristics, for evaluating the internal noise in the visual system and its ability to filter the added noise.
Abstract: In many visual perception studies, external visual noise is used as a methodology to broaden the understanding of information processing of visual stimuli. The underlying assumption is that two sources of noise limit sensory processing: the external noise inherent in the environmental signals and the internal noise or internal variability at different levels of the neural system. Usually, when external noise is added to an image, it is evenly distributed. However, the color intensity and image contrast are modified in this way, and it is unclear whether the visual system responds to their change or to the noise presence. We aimed to develop several methods of noise generation with different distributions that keep the global image characteristics. These methods are appropriate in various applications for evaluating the internal noise in the visual system and its ability to filter the added noise. As these methods destroy the correlation in image intensity of neighboring pixels, they could be used to evaluate the role of local spatial structure in image processing.
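
One noise-generation idea consistent with this goal (a hedged sketch of the general principle, not necessarily any of the paper's specific methods) is to randomly permute a fraction of pixel values: local spatial correlation is destroyed while the global intensity histogram, and hence mean luminance and contrast statistics, is preserved exactly:

```python
import random

def permutation_noise(image, fraction=0.3, seed=0):
    """Shuffle the values of a random fraction of pixels among themselves."""
    rnd = random.Random(seed)
    flat = [v for row in image for v in row]
    idx = rnd.sample(range(len(flat)), int(fraction * len(flat)))
    vals = [flat[i] for i in idx]
    rnd.shuffle(vals)                   # values move only between chosen sites
    for i, v in zip(idx, vals):
        flat[i] = v
    w = len(image[0])
    return [flat[i * w:(i + 1) * w] for i in range(len(image))]
```

Because pixel values are only relocated, never changed, every histogram-based global characteristic of the image is untouched by construction.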

Journal ArticleDOI
TL;DR: A novel method to generate honeywords using the meerkat clan intelligence algorithm, a metaheuristic swarm intelligence algorithm, is proposed; it improves the honeyword generation process, enhances the honeyword properties, and resolves the issues of previous methods.
Abstract: An effective password-cracking detection system is the honeyword system. The honeyword method attempts to increase the security of hashed passwords by making password cracking easier to detect. Each user in the system has many honeywords in the password database. If an attacker logs in using a honeyword, a silent alert indicates that the password database has been hacked. Many honeyword generation methods have been proposed, but they have weaknesses in the generation process, do not support all honeyword properties, and suffer from various honeyword issues. This article proposes a novel method to generate honeywords using the meerkat clan intelligence algorithm, a metaheuristic swarm intelligence algorithm. The proposed generation method improves the honeyword generation process, enhances the honeyword properties, and resolves the issues of previous methods. This work reviews some previous generation methods, explains the proposed method, discusses the experimental results, and compares the new method with the prior ones.

Journal ArticleDOI
TL;DR: In this paper, a unique reinforcement-learning-driven anti-jamming scheme that uses an adversarial learning mechanism to counter hostile jammers is introduced; a mathematical model is employed in the formulation of jamming and anti-jamming strategies based on deep deterministic policy gradients to improve their policies against each other.
Abstract: Modern networking systems can benefit from Cognitive Radio (CR) because it mitigates spectrum scarcity. CR is prone to jamming attacks due to the shared communication medium, which results in a drop in spectrum usage. Existing solutions to jamming attacks are frequently based on Q-learning and deep Q-learning networks. Such solutions have a reputation for slow convergence and learning, particularly when states and action spaces are continuous. This paper introduces a unique reinforcement-learning-driven anti-jamming scheme that uses an adversarial learning mechanism to counter hostile jammers. A mathematical model is employed in the formulation of jamming and anti-jamming strategies based on deep deterministic policy gradients to improve their policies against each other. An OpenAI Gym-oriented customized environment is used to evaluate the proposed solution with respect to power factor and signal-to-noise ratio. The simulation outcome shows that the proposed anti-jamming solution allows the transmitter to learn more about the jammer and devise better countermeasures than conventional algorithms.

Journal ArticleDOI
TL;DR: In this article, a necessary and sufficient condition for an arbitrary family to be dense is provided, and the dense families are used to characterize minimal keys of the closure operation from the viewpoint of transversal hypergraphs.
Abstract: As a basic notion in algebra, closure operations have been successfully applied to many fields of computer science. In this paper we study dense families in closure operations. In particular, we prove some families to be dense in any closure operation, and we point out the greatest and smallest dense families, including the collection of all closed sets and the minimal generator of the closed sets. More importantly, a necessary and sufficient condition for an arbitrary family to be dense is provided in our paper. We then use these dense families to characterize minimal keys of a closure operation from the viewpoint of transversal hypergraphs and construct an algorithm for determining the minimal keys of a closure operation.
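
To make the objects concrete, the Python sketch below realizes one standard closure operation, the closure of an attribute set under functional dependencies, and enumerates its closed sets by brute force (a small illustration of the notions involved; the dependency set is made up, and the paper's dense-family machinery is not reproduced):

```python
from itertools import chain, combinations

def closure(attrs, fds):
    """Smallest superset of attrs closed under the dependencies in fds."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

def closed_sets(universe, fds):
    """All closed sets: subsets that equal their own closure."""
    subsets = chain.from_iterable(
        combinations(universe, k) for k in range(len(universe) + 1))
    return {closure(s, fds) for s in subsets}

# Hypothetical dependencies: a -> b and bc -> d over attributes {a, b, c, d}.
fds = [(frozenset("a"), frozenset("b")), (frozenset("bc"), frozenset("d"))]
```

The collection returned by closed_sets is exactly the greatest dense family mentioned in the abstract, the family of all closed sets of the closure operation.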

Journal ArticleDOI
TL;DR: An augmented Union ConvAttention-LSTM (UCAL) model is proposed that combines an Attention technique with a Long Short-Term Memory to capture patterns from current trajectories; the experimental results prove the effectiveness of the proposed methodology, which outperforms the existing models.
Abstract: Predicting human mobility between locations plays an important role in a wide range of applications and services in transportation, economics, sociology and other fields. Mobility prediction can be implemented through various machine learning algorithms that predict the future trajectory of a user relying on the current trajectory and time, learning from historical sequences of locations previously visited by the user. However, it is not easy to capture complex patterns from long historical sequences of locations. Inspired by the methods of the Convolutional Neural Network (CNN), we propose an augmented Union ConvAttention-LSTM (UCAL) model. The UCAL consists of a 1D CNN that captures locations from historical trajectories and an augmented model that combines an Attention technique with a Long Short-Term Memory (LSTM) to capture patterns from current trajectories. The experimental results prove the effectiveness of our proposed methodology, which outperforms the existing models.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new fish classification workflow using a combination of Contrast-Adaptive Color Correction (NCACC) image enhancement and optimization-based feature construction with the Grey Wolf Optimizer (GWO).
Abstract: Abstract The low quality of fish image data collected directly from its habitat degrades its feature quality. Previous studies tended to be more concerned with finding the best method than with feature quality. This article proposes a new fish classification workflow using a combination of Contrast-Adaptive Color Correction (NCACC) image enhancement and optimization-based feature construction with the Grey Wolf Optimizer (GWO). This approach improves the image feature extraction results, yielding new and more meaningful features. This article compares GWO-based fish classification against other optimization-based methods on the newly generated features. The comparison results show that GWO-based classification had 0.22% lower accuracy than GA-based but 1.13% higher than PSO-based. Based on ANOVA tests, the accuracies of GA and GWO were not statistically different, while those of GWO and PSO were. On the other hand, GWO-based classification performed 0.61 times faster than GA-based classification and 1.36 minutes faster than the other.
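The Grey Wolf Optimizer itself is compact enough to sketch. The code below is a minimal, self-contained GWO (illustrative population size, iteration count and bounds, not the paper's configuration) minimizing a synthetic test function: candidate solutions move toward the three best wolves (alpha, beta, delta) while the control parameter a decays from 2 to 0.

```python
import random

def gwo_minimize(f, dim, n_wolves=10, iters=50, lo=-5.0, hi=5.0, seed=42):
    """Minimal Grey Wolf Optimizer for minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / iters  # exploration -> exploitation schedule
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(hi, max(lo, x / 3.0)))
            wolves[i] = new
    return min(wolves, key=f)
```

In the paper's workflow the objective would score a constructed feature subset by classification accuracy rather than a synthetic function.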

Journal ArticleDOI
TL;DR: In this article, the authors focus on attribute extraction from an existing enterprise Relational DataBase Management System (RDBMS): by reverse engineering, metadata elements and ranking values are calculated for each part, and entities and attributes are assigned a final rank that helps decide which attribute subset is a candidate to be an optimal input for ABAC implementation.
Abstract: Abstract One of the challenges in Attribute-Based Access Control (ABAC) implementation is acquiring sufficient metadata on entities and attributes. Intelligently mining and extracting ABAC policies and attributes makes ABAC implementation more feasible and cost-effective. This research paper focuses on attribute extraction from an existing enterprise Relational DataBase Management System (RDBMS). The proposed approach first classifies entities according to some aspects of RDBMS systems. By reverse engineering, metadata elements and ranking values are calculated for each part. Entities and attributes are then assigned a final rank that helps decide which attribute subset is a candidate to be an optimal input for ABAC implementation. The proposed approach has been implemented and tested against an existing enterprise RDBMS, and the results have been evaluated. The approach enables a trade-off between accuracy and overhead: the results score up to 80% accuracy with no overhead, or 88% accuracy with 65% overhead.
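The ranking idea can be illustrated with a toy scoring function over schema metadata. The columns, metadata fields and weights below are hypothetical, not the ones derived in the paper; they merely show how per-attribute metadata can be folded into a rank that selects candidate ABAC inputs.

```python
# Hypothetical metadata scores reverse-engineered from an RDBMS schema:
# whether a column is indexed, its distinct-value ratio, and its null ratio.
ATTRS = {
    "department": {"indexed": 1, "distinct_ratio": 0.05, "null_ratio": 0.0},
    "clearance":  {"indexed": 1, "distinct_ratio": 0.02, "null_ratio": 0.1},
    "last_login": {"indexed": 0, "distinct_ratio": 0.90, "null_ratio": 0.3},
}

def rank(meta):
    # Favor indexed, low-cardinality, mostly non-null columns as ABAC inputs
    # (illustrative weights, summing to 1).
    return (0.4 * meta["indexed"]
            + 0.4 * (1.0 - meta["distinct_ratio"])
            + 0.2 * (1.0 - meta["null_ratio"]))

def candidate_attributes(threshold=0.6):
    """Attributes whose rank clears the threshold, best first."""
    scored = sorted(ATTRS, key=lambda a: rank(ATTRS[a]), reverse=True)
    return [a for a in scored if rank(ATTRS[a]) >= threshold]
```

Raising the threshold is one way to express the accuracy/overhead trade-off the abstract mentions: fewer attributes mean less policy-evaluation overhead at the cost of coarser decisions.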

Journal ArticleDOI
TL;DR: This article proposes a practical approach combining Local Binary Patterns (LBP) and convolutional neural network-based transfer learning models to extract low-level and high-level features to detect face spoofing attacks.
Abstract: Abstract Given the face spoofing attack, adequate protection of human identity through the face has become a significant challenge globally. Face spoofing is the act of presenting a recaptured frame to the verification device to gain illegal access on behalf of a legitimate person, with or without their consent. Several methods have been proposed to detect face spoofing attacks over the last decade. However, these methods only consider luminance information, which discriminates poorly between spoofed and genuine faces. This article proposes a practical approach combining Local Binary Patterns (LBP) and convolutional neural network-based transfer learning models to extract low-level and high-level features. This paper analyzes three color spaces (i.e., RGB, HSV, and YCrCb) to understand the impact of the color distribution on real and spoofed faces for the NUAA benchmark dataset. In-depth analysis of experimental results and comparison with other existing approaches show the superiority and effectiveness of our proposed models.
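The low-level LBP feature the approach starts from is easy to sketch. The function below computes the standard 8-bit LBP code of the centre pixel of a 3x3 grayscale patch (an illustrative building block, not the paper's full pipeline): each neighbour contributes a 1-bit when its intensity is at least that of the centre.

```python
def lbp_code(patch):
    """8-bit Local Binary Pattern of the centre pixel of a 3x3 patch.

    Neighbours are read clockwise starting from the top-left; bit i is set
    when neighbour i is >= the centre intensity.
    """
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```

A histogram of these codes over an image (per color channel, in each of the analyzed color spaces) is the kind of texture descriptor that is then fused with the CNN transfer-learning features.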

Journal ArticleDOI
TL;DR: In this article, the authors propose an approach for encrypting JSON objects through the use of chaotic synchronization, including mechanisms for diffusing and confusing JSON objects (plaintext) that yield a proper ciphertext.
Abstract: Abstract Nowadays the interoperability of web applications is achieved through data exchange formats such as XML and JavaScript Object Notation (JSON). Due to its simplicity, JSON objects are the most common way of sending information over the HTTP protocol. With the aim of adding a security mechanism to JSON objects, in this work we propose an approach for encrypting JSON objects through the use of chaotic synchronization. The ability to synchronize two chaotic systems offers the possibility of securing information between two points. Our approach includes mechanisms for diffusing and confusing JSON objects (plaintext), which yields a proper ciphertext. Our approach can be applied as an alternative to existing JSON-securing approaches such as JSON Web Encryption (JWE).
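As an illustration of the confusion idea, a chaotic map can serve as a keystream generator: two parties iterating the same map from the same initial state stay synchronized and derive identical streams. The sketch below uses a logistic map with hypothetical parameters to XOR-mask a serialized JSON object; it is not the paper's synchronization scheme, and XOR masking alone is not secure, so treat it purely as a toy model of the mechanism.

```python
import json

def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x -> r*x*(1-x); both parties
    iterating with the same (x0, r) produce identical streams."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def chaotic_xor(obj, x0=0.3141, r=3.99):
    """Serialize a JSON object and mask it with the chaotic keystream."""
    data = json.dumps(obj, sort_keys=True).encode()
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

def chaotic_unxor(cipher, x0=0.3141, r=3.99):
    """Regenerate the same keystream and invert the XOR mask."""
    ks = logistic_keystream(x0, r, len(cipher))
    return json.loads(bytes(b ^ k for b, k in zip(cipher, ks)).decode())
```

In the paper's setting, synchronization between two physically separate chaotic systems plays the role of the shared (x0, r) state here, and an additional diffusion stage scrambles positions as well as values.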

Journal ArticleDOI
TL;DR: The article discusses the research and development of corrective codes for rectifying several types of quantum errors that occur during computational processes in quantum algorithms and models of quantum computing devices.
Abstract: Abstract Intensive research is currently being carried out to develop and create quantum computers and their software. This work is devoted to the study of the influence of the environment on a quantum system of qubits. Quantum error correction is a set of methods for protecting quantum information and quantum states from unwanted interactions with the environment (decoherence) and from other forms and types of noise. The article discusses the research and development of corrective codes for rectifying several types of quantum errors that occur during computational processes in quantum algorithms and models of quantum computing devices. The aim of the work is to study existing methods for correcting various types of quantum errors and to create a corrective code for quantum error rectification. The scientific novelty lies in eliminating one of the shortcomings of the quantum computing process.
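The simplest corrective code of the kind discussed is the three-qubit bit-flip repetition code, whose effect on classical basis states can be simulated directly. The sketch below is a classical analogue only (real quantum error correction measures syndromes without reading the data qubits, so superpositions survive); it shows majority-vote recovery from any single bit-flip:

```python
def encode(bit):
    """Three-qubit bit-flip repetition code: |0> -> |000>, |1> -> |111>,
    simulated here on classical basis states."""
    return [bit, bit, bit]

def apply_bit_flip(codeword, position):
    """Model a single bit-flip (X) error on one qubit of the codeword."""
    flipped = list(codeword)
    flipped[position] ^= 1
    return flipped

def decode(codeword):
    """Majority vote corrects any single bit-flip error."""
    return 1 if sum(codeword) >= 2 else 0
```

Two simultaneous flips defeat this code, and it does nothing against phase errors; codes such as Shor's nine-qubit code combine bit-flip and phase-flip protection, which is the direction the corrective codes discussed in the article take.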