
Showing papers in "Journal of Information Security in 2022"


Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed a verifiable credential scheme with selective disclosure based on the BLS (Boneh-Lynn-Shacham) aggregate signature, in which individual claims are signed instead of the whole credential, so the user can select a subset of claims to present.
Abstract: Decentralized identity authentication is generally based on blockchain, with the protection of user privacy as its core appeal. But a traditional decentralized credential system requires users to show all the information in a credential to the verifier, resulting in unnecessary overexposure of personal information. From the perspective of user privacy, this paper proposes a verifiable credential scheme with selective disclosure based on the BLS (Boneh-Lynn-Shacham) aggregate signature. Instead of signing the credential as a whole, we sign the individual claims in the credential. When the user needs to present the credential to a verifier, the user can select a subset of the claims to present. To reduce the number of signatures after selective disclosure, BLS aggregation combines the signatures of the disclosed claims into one signature. In addition, our scheme also supports aggregating credentials from different users, so the verifier only needs to verify one signature to batch-verify the credentials. We analyze the security of our aggregate signature scheme, which can effectively resist aggregate signature forgery attacks and credential theft attacks. The simulation results show that our selective disclosure scheme based on BLS aggregate signatures is acceptable in terms of verification efficiency, and reduces storage cost and communication overhead. As a result, our scheme is suitable for blockchain, where bandwidth and storage overhead are tightly constrained.
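As a rough illustration of the signing pattern the abstract describes, the sketch below signs claims individually and aggregates the signatures of a disclosed subset into one, assuming py_ecc's IETF-style BLS interface; the claim encoding and toy key are invented for the example, not taken from the paper.

```python
# Sketch of selective disclosure via BLS signature aggregation (pip install py_ecc).
from py_ecc.bls import G2ProofOfPossession as bls

issuer_sk = 42                      # toy secret key; real keys come from a secure RNG
issuer_pk = bls.SkToPk(issuer_sk)

# The issuer signs each claim individually instead of the whole credential.
claims = [b"name=Alice", b"dob=1990-01-01", b"degree=MSc"]
signatures = {c: bls.Sign(issuer_sk, c) for c in claims}

# The holder discloses only a subset of claims and aggregates their signatures.
disclosed = [b"name=Alice", b"degree=MSc"]
agg_sig = bls.Aggregate([signatures[c] for c in disclosed])

# The verifier checks a single aggregate signature for all disclosed claims.
assert bls.AggregateVerify([issuer_pk] * len(disclosed), disclosed, agg_sig)
```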

5 citations



Journal ArticleDOI
TL;DR: Huang et al. as discussed by the authors presented the most common HDFS security problems and a review of unauthorized access issues, and proposed a new type of intrusion detector based on an artificial neural network.
Abstract: Hadoop technology is accompanied by some security issues. In its early days, developers mostly paid attention to building basic functionality, and the design of security components was not of prime interest. Because of that, the technology remained vulnerable to malicious activities of unauthorized users whose purpose is to endanger system functionality or compromise private user data. Researchers and developers are continuously trying to solve these issues by upgrading Hadoop’s security mechanisms and preventing undesirable malicious activities. In this paper, the most common HDFS security problems and a review of unauthorized access issues are presented. First, the Hadoop mechanism and its main components are described as an introduction to the leading research problem. Then, the HDFS architecture is given, and all of its components and functionalities are introduced. Further, all possible types of users are listed, with an accent on unauthorized users, who are of great importance for the paper. One part of the research is dedicated to Hadoop security levels, environment and user assessments. The review also includes an explanation of Log Monitoring and Audit features, and a detailed consideration of authorization and authentication issues. Possible consequences of unauthorized access to a system are covered, and a few recommendations for solving the problem of unauthorized access are offered. Honeypot nodes, security mechanisms for collecting valuable information about malicious parties, are presented in the last part of the paper. Finally, the idea of developing a new type of intrusion detector based on an artificial neural network is presented. The detector will be an integral part of a new kind of virtual honeypot mechanism and represents the initial basis for the authors’ future scientific work.
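The abstract only sketches the proposed ANN-based detector, so the following is a hypothetical minimal version using scikit-learn's MLPClassifier on invented access-event features, purely to make the idea concrete.

```python
# Toy neural-network intrusion detector; features and labels are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features per access event: request rate, bytes read,
# failed-auth count, off-hours flag. Labels: 0 = legitimate, 1 = intrusion.
X = rng.random((1000, 4))
y = (X[:, 0] + X[:, 2] > 1.2).astype(int)   # synthetic labeling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```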

3 citations



Journal ArticleDOI
TL;DR: A systematic methodology and meta-review analysis were applied to the selection, evaluation, and qualitative examination of the most influential Honeypot surveys and/or reviews available in scientific bibliographic databases, as discussed by the authors.
Abstract: The growing interest in Honeypots has resulted in increased research, and consequently, a large number of research surveys and/or reviews. Most Honeypot surveys and/or reviews focus on specific and narrow Honeypot research areas. This study aims at exploring and presenting advances and trends in Honeypot research and development areas. To this end, a systematic methodology and meta-review analysis were applied to the selection, evaluation, and qualitative examination of the most influential Honeypot surveys and/or reviews available in scientific bibliographic databases. A total of 188 papers were evaluated, and 22 research papers were found by this study to have a higher impact. The findings suggest that the Honeypot survey and/or review papers of considerable relevance to the research community were mostly published in 2018, by IEEE, in conferences organized in India, and included in the IEEE Xplore database. Also, there have been few quality Honeypot surveys and/or reviews published after 2018. Furthermore, the study identified 10 classes of vital and emerging themes and/or key topics in Honeypot research. This work contributes to research efforts employing established systematic review and reporting methods in Honeypot research. We have included our meta-review methodology in order to allow further work in this area, aiming at a better understanding of the progression of Honeypot research and advances.

2 citations


Journal ArticleDOI
TL;DR: In this article, a solution to create a knowledge management strategy for handling cyber incidents in E-Commerce CSIRTs in Indonesia was proposed, resulting in 4 KM Processes and 2 KM Enablers which were then translated into concrete actions.
Abstract: Electronic Commerce (E-Commerce) was created to help expand market reach through the internet without the boundaries of space and time. However, behind all the benefits obtained, E-Commerce also raises consumer concerns about the responsibility for personal data recorded and collected by E-Commerce companies. This personal data includes consumer names, passwords, debit and credit card numbers, email conversations, and information related to consumer requests. In Indonesia, cyber attacks have occurred several times against three major E-Commerce companies. In 2019, users’ personal data in the form of email addresses, telephone numbers, and residential addresses from Bukalapak and Tokopedia were sold on the deep web. Even though the E-Commerce companies affected by these attacks already have a Computer Security Incident Response Team (CSIRT) staffed with security engineers for both defense and attack, the system still has a weakness: the CSIRT focuses on incident handling and defensive experimentation, not on storing knowledge and preparing for forensics. The procedure is thus iterative; when an attack recurs, it is met again with only technical handling. Previous research has shown that organizations with Knowledge Management (KM) have reduced the cost of their cyber security operations by up to a factor of four. The authors provide a solution by creating a knowledge management strategy for handling cyber incidents in E-Commerce CSIRTs in Indonesia. This research resulted in 4 KM Processes and 2 KM Enablers, which were then translated into concrete actions. The KM Processes are Knowledge Creation, Knowledge Storing, Knowledge Sharing, and Knowledge Utilizing, while the KM Enablers are Technology Infrastructure and People Competency.

1 citation


Journal ArticleDOI
TL;DR: In this paper, the authors conclude that the construction of low-density parity-check matrices is becoming more flexible, with enhanced parameter variability, and they argue that, as electronic technology advances and development costs fall, research on more practical Low-Density Parity-Check (LDPC) codes is needed.
Abstract: In this paper, we summarize five methods for constructing the regular low-density parity-check matrix H and three methods for constructing the irregular low-density parity-check matrix H. Through analysis of the code rates and parameters of these eight constructions, we find that the construction of low-density parity-check matrices is becoming more flexible and that parameter variability is enhanced. We argue that, as electronic technology advances and development costs fall, research on more practical Low-Density Parity-Check (LDPC) codes is needed. Combined with the application of quantum key distribution, we urgently need to explore relevant theories and technologies of LDPC codes in other fields of quantum information in the future.
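For concreteness, here is a sketch of one classical regular construction (Gallager-style banding with column permutations), which may or may not be among the five the paper surveys; the parameters are illustrative.

```python
# Gallager-style construction of a regular LDPC parity-check matrix H.
import numpy as np

def gallager_H(n, wr, wc, seed=0):
    """Regular H with row weight wr and column weight wc (n divisible by wr)."""
    rng = np.random.default_rng(seed)
    rows = n // wr
    base = np.zeros((rows, n), dtype=int)
    for i in range(rows):                  # first band: wr consecutive ones per row
        base[i, i * wr:(i + 1) * wr] = 1
    # Remaining bands are random column permutations of the first band.
    bands = [base] + [base[:, rng.permutation(n)] for _ in range(wc - 1)]
    return np.vstack(bands)

H = gallager_H(n=20, wr=4, wc=3)
print(H.shape, H.sum(axis=0))  # (15, 20); every column has weight 3
```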

1 citation


Journal ArticleDOI
TL;DR: Li et al. as discussed by the authors introduced a trusted and privacy-preserving carpooling matching scheme in Vehicular Networks (TPCM), which adopts private set intersection based on the Bloom filter to match passengers with vehicles, protecting privacy while meeting the individual needs of passengers.
Abstract: With the rapid development of intelligent transportation, carpooling with the help of Vehicular Networks plays an important role in improving transportation efficiency and solving environmental problems. However, attackers usually launch attacks that cause privacy leakage for carpooling users. In addition, the trust issue between unfamiliar vehicles and passengers reduces the efficiency of carpooling. To address these issues, this paper introduces a trusted and privacy-preserving carpooling matching scheme in Vehicular Networks (TPCM). TPCM introduces travel preferences into carpooling matching: according to passengers’ individual travel preferences, it adopts private set intersection based on the Bloom filter to match passengers with vehicles, protecting privacy while meeting the individual needs of passengers. TPCM also adopts a multi-faceted trust management model, which calculates trust values for a vehicle’s different travel preferences from passengers’ carpooling feedback, so that a vehicle’s trustworthiness is evaluated from multiple facets during matching. Moreover, a series of experiments were conducted to verify the effectiveness and robustness of the proposed scheme. The results show that the proposed scheme achieves high accuracy with lower computational and communication costs than existing carpooling schemes.
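The Bloom-filter step of the matching can be sketched as below; this toy shows only the membership test and omits the encryption and oblivious computation that a real private-set-intersection protocol layers on top, and the preference strings are invented.

```python
# Toy Bloom-filter matching of passenger preferences against a driver's profile.
import hashlib

M = 256  # Bloom filter size in bits
K = 4    # number of hash functions

def positions(item: str):
    """K bit positions for an item, derived from salted SHA-256 hashes."""
    return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def make_filter(items):
    bits = [0] * M
    for it in items:
        for p in positions(it):
            bits[p] = 1
    return bits

driver = make_filter(["route=A->B", "music=off", "smoking=no"])
passenger_prefs = ["route=A->B", "music=on"]
matches = [p for p in passenger_prefs
           if all(driver[pos] for pos in positions(p))]
print(matches)  # ['route=A->B'] (up to Bloom-filter false positives)
```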

1 citation


DOI
TL;DR: This paper proposes a completely transparent blockchain-based strategy to promote public participation in the redistricting process, to increase public confidence in the outcome of the process.
Abstract: Redistricting is the process of grouping all census blocks within a region to form larger subdivisions, or districts. The process is typically subject to some hard rules and some (soft) preferences to improve fairness of the solution. Achieving public consensus on the fairness of proposed redistricting plans is highly desirable. Unfortunately, fair redistricting is an NP-hard optimization problem. The complexity of the process makes it even more challenging to convince the public of the fairness of the proposed solution. This paper proposes a completely transparent blockchain-based strategy to promote public participation in the redistricting process, to increase public confidence in the outcome of the process. The proposed approach is based on the fact that one does not have to worry about how the NP-hard problem was solved, as long as it is possible for anyone to compute a “goodness” metric for the proposed plan. In the proposed approach, anyone can submit a plan along with the expected metric. Only the plan with the best claimed metric needs to be evaluated in a blockchain network.
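The core trick, verifying a claimed metric instead of re-solving the NP-hard problem, can be sketched as follows; the population-balance metric and the tiny plans are stand-ins, not the paper's actual fairness metric.

```python
# Verifiers never solve the NP-hard problem; they only recompute the
# "goodness" metric for the single plan with the best claimed metric.

def goodness(plan, populations):
    """Lower is better: spread between heaviest and lightest district."""
    totals = {}
    for block, district in plan.items():
        totals[district] = totals.get(district, 0) + populations[block]
    return max(totals.values()) - min(totals.values())

populations = {"b1": 100, "b2": 120, "b3": 110, "b4": 115}
submissions = [  # (claimed metric, plan) pairs collected off-chain
    (5,  {"b1": "D1", "b2": "D1", "b3": "D2", "b4": "D2"}),
    (25, {"b1": "D1", "b2": "D2", "b3": "D1", "b4": "D2"}),
]

claimed, best_plan = min(submissions)               # only the best claim...
assert goodness(best_plan, populations) <= claimed  # ...is re-evaluated on-chain
```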

1 citation


Journal ArticleDOI
TL;DR: In this paper, a protocol for processing geographic data is proposed to guarantee authoritative and unbiased responses to geographic queries, without the need to rely on trusted third parties, by employing novel hash-tree-based authenticated data structures (ADS) in conjunction with a blockchain ledger.
Abstract: A protocol for processing geographic data is proposed to guarantee authoritative and unbiased responses to geographic queries, without the need to rely on trusted third parties. The integrity of the proposed authoritative and unbiased geographic services (AUGS) protocol is guaranteed by employing novel hash-tree-based authenticated data structures (ADS) in conjunction with a blockchain ledger. Hash-tree-based ADSes are used to incrementally compute succinct dynamic commitments to AUGS data. A blockchain ledger is used to record 1) transactions that trigger updates to AUGS data, and 2) the updated cryptographic commitments to AUGS data. Untrusted service providers are required to provide verification objects (VOs) as proofs of correctness of their responses to AUGS queries. Anyone with access to the commitments in ledger entries can verify the proof.
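A minimal Merkle-tree version of the hash-tree commitments and verification objects described above; the real ADS supports incremental updates and richer queries, and the "tile" records are invented.

```python
# Merkle-tree commitment, verification object (VO), and proof check.
import hashlib

h = lambda b: hashlib.sha256(b).digest()

def build(leaves):
    """Return all tree levels, leaves first; len(leaves) must be a power of two."""
    levels = [[h(l) for l in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, idx):
    """VO: sibling hashes on the path from a leaf to the root."""
    vo = []
    for lvl in levels[:-1]:
        vo.append(lvl[idx ^ 1])
        idx //= 2
    return vo

def verify(leaf, idx, vo, root):
    acc = h(leaf)
    for sib in vo:
        acc = h(acc + sib) if idx % 2 == 0 else h(sib + acc)
        idx //= 2
    return acc == root

records = [b"tile-1", b"tile-2", b"tile-3", b"tile-4"]  # toy geographic records
levels = build(records)
root = levels[-1][0]          # the commitment recorded in the ledger
vo = prove(levels, 2)         # service provider's proof for record index 2
assert verify(b"tile-3", 2, vo, root)
```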

1 citation



Journal ArticleDOI
TL;DR: In this article, the authors present a construction that reduces trust in the private key generator without unrealistic non-collusion assumptions, by incorporating a combination of digital credential technology and bilinear maps, and making use of multiple randomly chosen entities to complete certain tasks.
Abstract: Identity-Based Encryption (IBE) has seen limited adoption, largely due to the absolute trust that must be placed in the private key generator (PKG), an authority that computes the private keys for all the users in the environment. Several constructions have been proposed to reduce the trust required in the PKG (and thus preserve the privacy of users), but these have generally relied on unrealistic assumptions regarding non-collusion between various entities in the system. Unfortunately, these constructions have not significantly improved IBE adoption rates in real-world environments. In this paper, we present a construction that reduces trust in the PKG without unrealistic non-collusion assumptions. We achieve this by incorporating a novel combination of digital credential technology and bilinear maps, and making use of multiple randomly chosen entities to complete certain tasks. The main result and primary contribution of this paper is a thorough security analysis of this proposed construction, examining the various entity types, attacker models, and collusion opportunities in this environment. We show that this construction can prevent, or at least mitigate, all considered attacks. We conclude that our construction appears to be effective in preserving user privacy, and we hope that this construction and its security analysis will encourage greater use of IBE in real-world environments.
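To make the multiple-entity idea concrete, here is a toy additive sharing of an IBE master secret across three authorities over py_ecc's bn128 curve; this illustrates only the general principle of removing a single all-powerful PKG, not the paper's actual construction, and the hash-to-point mapping is an insecure placeholder.

```python
# Toy split-PKG key extraction: no single authority ever holds the master secret.
import hashlib
from py_ecc.bn128 import G1, add, multiply, curve_order

def toy_hash_to_point(identity: str):
    # Insecure placeholder mapping; real IBE needs a proper hash-to-curve.
    e = int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big")
    return multiply(G1, e % curve_order)

shares = [11, 22, 33]                  # each authority holds one share s_i
master = sum(shares) % curve_order     # s = s_1 + s_2 + s_3 (never assembled)

q_id = toy_hash_to_point("alice@example.com")
partials = [multiply(q_id, s) for s in shares]   # each authority's partial key
d_id = partials[0]
for p in partials[1:]:
    d_id = add(d_id, p)                # the user combines partials locally

assert d_id == multiply(q_id, master)  # equals the full key s * Q_ID
```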

Journal ArticleDOI
TL;DR: In this paper, the authors define two important aspects of a computer operating system’s vulnerability behavior: the vulnerability intensity function (VIF) and the vulnerability index indicator (VII) of a computer operating network.
Abstract: The objective of the present study is to define two important aspects of a computer operating system’s behavior with respect to its number of vulnerabilities. We identify the Vulnerability Intensity Function (VIF) and the Vulnerability Index Indicator (VII) of a computer operating network. Both functions, VIF and VII, are entities of the stochastic process we have identified, which characterizes the probabilistic behavior of the number of vulnerabilities of a computer operating network. The VIF identifies the rate at which the number of vulnerabilities changes with respect to time. The VII is an index indicator that conveys whether the number of vulnerabilities of a desktop operating system is increasing, decreasing, or remaining the same at a particular time of interest. This type of decision index is crucial in strategic planning and decision-making. We illustrate the importance of the proposed VIF and VII using real data for Microsoft Windows 10, 8, and 7 and Apple macOS. The results on actual data attest to the importance of VIF and VII in the cybersecurity problem we currently face.
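Numerically, the two quantities can be approximated from a vulnerability time series as below; the counts are made up, not the paper's Windows/macOS data, and the finite-difference VIF is this sketch's simplification of the paper's stochastic formulation.

```python
# VIF as the time-derivative of the vulnerability count; VII as its sign.
import numpy as np

t = np.arange(2015, 2023)                           # years
counts = np.array([120, 150, 190, 230, 255, 260, 250, 235], dtype=float)

vif = np.gradient(counts, t)                        # rate of change per year

def vii(rate, tol=1e-9):
    """Index indicator: increasing, decreasing, or stable at a given time."""
    return "increasing" if rate > tol else "decreasing" if rate < -tol else "stable"

for year, rate in zip(t, vif):
    print(year, f"VIF={rate:+.1f}", vii(rate))
```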

Journal ArticleDOI
TL;DR: In this paper, it was shown that the correspondence between cyclic lattices in ℝ^n and finitely generated R-modules is one-to-one.
Abstract: In this article, we first introduce discrete subgroups in ℝ^n as preliminaries. Then we provide some theory of cyclic lattices and ideal lattices. By regarding cyclic lattices and ideal lattices as correspondences of finitely generated R-modules, we prove our main theorem: the correspondence between cyclic lattices in ℝ^n and finitely generated R-modules is one-to-one. Finally, we give an explicit and computable upper bound for the smoothing parameter of cyclic lattices.
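For readers unfamiliar with the objects involved, the following LaTeX snippet restates the standard formulation of the correspondence over ℤ^n with R = ℤ[x]/(x^n − 1); the paper's treatment in ℝ^n may differ in detail.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $\sigma(a_0,\dots,a_{n-1}) = (a_{n-1},a_0,\dots,a_{n-2})$ denote the
rotation map. A lattice $L \subseteq \mathbb{Z}^n$ is \emph{cyclic} if
$\sigma(L) = L$. Identifying $(a_0,\dots,a_{n-1})$ with the polynomial
$a_0 + a_1 x + \dots + a_{n-1}x^{n-1}$ in $R = \mathbb{Z}[x]/(x^n - 1)$
turns $\sigma$ into multiplication by $x$, so cyclic lattices correspond
exactly to ideals of $R$ (equivalently, finitely generated $R$-submodules),
which is the shape of the bijection the paper establishes.
\end{document}
```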

Journal ArticleDOI
TL;DR: In this paper , the authors present a review of visualization practices and methods commonly used in the discovery and communication of attack patterns based on Honeypot network traffic data, and suggest the need, going forward, for non-classical graphical methods for visualizing attack patterns and communicating analysis results.
Abstract: Mitigating increasing cyberattack incidents may require strategies such as reinforcing organizations’ networks with Honeypots and effectively analyzing attack traffic to detect zero-day attacks and vulnerabilities. To effectively detect and mitigate cyberattacks, both computerized and visual analyses are typically required. However, most security analysts are not adequately trained in visualization principles and/or methods, which are required for effective visual perception of useful attack information hidden in attack data. Additionally, Honeypots have proven useful in cyberattack research, but no studies have comprehensively investigated visualization practices in the field. In this paper, we reviewed visualization practices and methods commonly used in the discovery and communication of attack patterns based on Honeypot network traffic data. Using the PRISMA methodology, we identified and screened 218 papers and evaluated only the 37 with high impact. Most Honeypot papers presented summary statistics of Honeypot data based on static metrics such as IP address, port, and packet size. They visually analyzed attack data using simple graphical methods (such as line, bar, and pie charts) that tend to hide useful attack information. Furthermore, only a few papers conducted extended attack analysis, commonly visualizing attack data using scatter and line plots. Papers rarely included simple yet sophisticated graphical methods, such as box plots and histograms, which allow for critical evaluation of analysis results. While a significant number of automated visualization tools incorporate visualization standards by default, the construction of effective and expressive graphical methods for easy pattern discovery and explainable insights still requires applied knowledge and skill with visualization principles and tools, and occasionally an interdisciplinary collaboration with peers. We therefore suggest the need, going forward, for non-classical graphical methods for visualizing attack patterns and communicating analysis results. We also recommend training investigators in visualization principles and standards for effective visual perception and presentation.
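As an example of the "simple yet sophisticated" methods the review recommends, the sketch below draws a histogram and a box plot of synthetic honeypot packet sizes with matplotlib; the data is invented, not from any surveyed honeypot.

```python
# Histogram and box plot of synthetic honeypot packet sizes.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
packet_sizes = rng.lognormal(mean=6.0, sigma=0.8, size=2000)  # fake byte counts

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(packet_sizes, bins=40)
ax1.set(xlabel="packet size (bytes)", ylabel="count", title="Histogram")
ax2.boxplot(packet_sizes)
ax2.set(ylabel="packet size (bytes)", title="Box plot")
fig.tight_layout()
plt.show()
```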

Journal ArticleDOI
TL;DR: In this paper, a questionnaire was developed and administered to 508 employees working at different organizations to ascertain the level of awareness of social engineering, provide appropriate solutions to reduce its risks, and avoid obstacles that could prevent increasing awareness of the dangers of social engineering.
Abstract: An attacker has several options for breaking through an organization’s information security protections. Human factors are determined to be the source of some of the worst cyber-attacks every day in every business. The human method, often known as “social engineering”, is the hardest to cope with. This paper examines many types of social engineering. The aim of this study was to ascertain the level of awareness of social engineering, provide appropriate solutions to reduce its risks, and avoid obstacles that could prevent increasing awareness of the dangers of social engineering at Shaqra University (Kingdom of Saudi Arabia). A questionnaire was developed and administered to 508 employees working at different organizations. The overall Cronbach’s alpha was 0.756, a very good value, and the correlation coefficients between items are statistically significant at the 0.01 level. The study showed that 63.4% of the surveyed sample had no idea about social engineering and 67.3% had no idea about social engineering threats. 42.1% had weak knowledge of social engineering, and only 7.5% of the sample had good knowledge of it. 64.7% of male respondents did not know what social engineering is, nor did 68.0% of administrators. Employees who had not taken security courses showed statistically significant differences.
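For reference, a reliability figure like the reported 0.756 is computed as Cronbach's alpha over a respondents-by-items score matrix; the sketch below uses synthetic Likert-style data, not the survey's.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of Likert-style answers."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(508, 1))                    # shared attitude
items = np.clip(base + rng.integers(-1, 2, size=(508, 10)), 1, 5)
print(round(cronbach_alpha(items), 3))
```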

Journal ArticleDOI
TL;DR: Evidence of the weakness of the information system in Burkina Faso’s public administration is shown, and some recommendations to improve the information system are provided.
Abstract: The purpose of this research is to show the instability and the security risks of the information system in Burkina Faso’s public administration. In this paper, unsatisfactory services such as the government messaging system (mailer.gov.bf) and G-cloud, the government cloud services, were studied. The behavior of user agents on the administration’s IT infrastructures, which could expose the information system to security risks, was also studied. The results show evidence of the weakness of the public administration’s information system, and some recommendations are provided.

Journal ArticleDOI
TL;DR: A new robust hybrid algorithm combining successively chaotic encryption and blind watermarking of images based on the quaternionic wavelet transform (QWT) to ensure the secure transfer of digital data is presented.
Abstract: In this paper, we present a new robust hybrid algorithm combining successively chaotic encryption and blind watermarking of images based on the quaternionic wavelet transform (QWT) to ensure the secure transfer of digital data. The calculations of the different evaluation parameters have been performed in order to determine the robustness of our algorithm to certain attacks. The application of this hybrid algorithm on CFA (Color Filter Array) images allowed us to guarantee the integrity of the digital data and to propose an autonomous transmission system. The results obtained after simulation of this successive hybrid algorithm of chaotic encryption and then blind watermarking are appreciated through the values of the evaluation parameters, which are the peak signal-to-noise ratio (PSNR) and the correlation coefficient (CC), and by visual observation of the extracted watermarks before and after attacks. The values of these parameters show that this successive hybrid algorithm is robust against some conventional attacks.
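The two reported evaluation parameters can be computed as below; the image and watermark arrays are synthetic, and a peak value of 255 is assumed for 8-bit images.

```python
# PSNR between original and attacked images; CC between embedded and extracted watermarks.
import numpy as np

def psnr(original, distorted, peak=255.0):
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def correlation_coefficient(w, w_extracted):
    return np.corrcoef(w.ravel(), w_extracted.ravel())[0, 1]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)   # simulated attack
wm = rng.integers(0, 2, size=(16, 16))
wm_out = wm.copy()
wm_out[0, :3] ^= 1                                           # a few flipped bits

print(f"PSNR = {psnr(img, noisy):.2f} dB, CC = {correlation_coefficient(wm, wm_out):.3f}")
```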

Journal ArticleDOI
TL;DR: In this paper, the implementation of algorithms and tools for securing academic data in the Democratic Republic of the Congo is discussed; the tools are based on the Caesar, Hill, and RSA algorithms.
Abstract: This paper deals with the implementation of algorithms and tools for securing academic data in the Democratic Republic of the Congo. It consists principally in implementing two algorithms and two distinct tools to secure data, in this particular case the academic data of higher and university education in the Democratic Republic of the Congo. The design of the algorithms follows the approach that any researcher in data encryption can use when developing a computer system; briefly, the algorithms are steps to follow to encrypt information in any programming language. They are based on symmetric and asymmetric encryption: the symmetric one uses the Hill cipher, which represents texts as matrices before they are encrypted, and the asymmetric one uses RSA with prime numbers encoded on more than 512 bits. As for the tools, we developed them in PHP, which is simply one programming language taken as an example, since it is impossible to use them all. The tools implemented are based on the Caesar, Hill, and RSA algorithms and show, through graphical interfaces, how the encryption operations are carried out. They serve pedagogical purposes, helping students and other researchers learn how to use the developed algorithms, but they are also meant to be used in any information system to prevent and limit unauthorized access to computer systems. They will not be used only for the management of academic fees but for any other information system, which explains the complexity of the tools developed. A remaining limitation is versioning: if a new version of the prototype is released later, some functions may become obsolete. This work primarily targets the Ministry of Higher Education and Universities, which can adopt and implement these results to address intrusions and unauthorized access, as well as developers and researchers who can use ready-made tools instead of building their own. The following sections demonstrate the steps and the methodology that allowed us to reach our results.
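As a pedagogical companion to the symmetric step described above, here is a minimal Hill-cipher encryption sketch, written in Python rather than the paper's PHP; the 2x2 key and padding rule are illustrative only.

```python
# Hill cipher: plaintext blocks become vectors, ciphertext = key @ block mod 26.
import numpy as np

A = ord("A")
key = np.array([[3, 3], [2, 5]])           # invertible mod 26 (det = 9, gcd(9, 26) = 1)

def hill_encrypt(plaintext, key):
    nums = [ord(c) - A for c in plaintext.upper() if c.isalpha()]
    if len(nums) % 2:
        nums.append(ord("X") - A)           # pad to the 2-letter block size
    blocks = np.array(nums).reshape(-1, 2).T
    cipher = (key @ blocks) % 26
    return "".join(chr(A + int(v)) for v in cipher.T.ravel())

print(hill_encrypt("ACT", key))             # encrypts block by block mod 26
```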

Journal ArticleDOI
TL;DR: An analytical view showing that when accessibility is degraded during an ongoing attack, reliability and timeliness can also be affected, degrading the overall availability of the system; this eventually leads to a denial-of-service condition and therefore affects the security of the system.
Abstract: Information security is determined by three well-known security parameters: Confidentiality, Integrity, and Availability. Availability is an important pillar of the security of an information system. It depends on the reliability, timeliness, and accessibility of the information system. This paper presents an analytical view of the fact that when accessibility is degraded during an ongoing attack, the other factors, reliability and timeliness, can also be affected, creating a degrading impact on the overall availability of the system. This eventually leads to a denial-of-service condition and therefore affects the security of the system.
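A toy quantification of the argument, assuming a simple multiplicative availability model; the model and the sample numbers are assumptions of this sketch, not claims of the paper.

```python
# Degrading accessibility under attack drags the whole availability figure down.
def availability(reliability, timeliness, accessibility):
    return reliability * timeliness * accessibility  # assumed multiplicative model

print(availability(0.99, 0.98, 1.00))   # normal operation:  ~0.970
print(availability(0.99, 0.80, 0.40))   # ongoing attack:    ~0.317
```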