
Showing papers in "Ksii Transactions on Internet and Information Systems in 2021"


Journal ArticleDOI
TL;DR: It is concluded that when fine-tuned, the recent EfficientNetB0 will generate highly accurate deep learning solutions for the identification of malaria parasites in blood smears without the need for stringent pre-processing, optimization, or data augmentation of images.
Abstract: In this work, we empirically evaluated the efficiency of the recent EfficientNetB0 model to identify and diagnose malaria parasite infections in blood smears. The dataset used was collected and classified by relevant experts from the Lister Hill National Centre for Biomedical Communications (LHNCBC). We prepared our samples with minimal image transformations, as opposed to others, since we focused more on the feature extraction capability of the EfficientNetB0 baseline model. We applied transfer learning to increase the initial feature sets and reduce the time needed to train our model. We then fine-tuned it to work with our proposed layers and re-trained the entire model to learn from our prepared dataset. The highest overall accuracy attained in our evaluation was 94.70% after fifty epochs, followed by 94.68% within just ten. Additional visualization and analysis using the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm showed that our fine-tuned EfficientNetB0 detected infections more effectively than other recent state-of-the-art DCNN models. This study, therefore, concludes that when fine-tuned, the recent EfficientNetB0 will generate highly accurate deep learning solutions for the identification of malaria parasites in blood smears without the need for stringent pre-processing, optimization, or data augmentation of images.
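The Grad-CAM step used for visualization reduces to two operations: average the class gradients over each feature map to get per-channel weights, then take the ReLU of the weighted sum of the maps. A minimal pure-Python sketch (the toy 2×2 feature maps are illustrative, not from the paper):

```python
# Grad-CAM core: channel weight = spatial mean of the class gradient over
# that channel's feature map; CAM = ReLU(sum_k w_k * A_k).

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: lists of K HxW maps (nested lists)."""
    K = len(feature_maps)
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel importance: global-average-pool the gradients.
    weights = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(K)]
    # Weighted combination of feature maps, then ReLU.
    cam = [[max(0.0, sum(weights[k] * feature_maps[k][i][j] for k in range(K)))
            for j in range(W)] for i in range(H)]
    return cam
```

In practice the feature maps and gradients would come from the last convolutional layer of the fine-tuned network.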

22 citations


Journal ArticleDOI
TL;DR: There are significant and positive relationships between technology factors (effort expectancy, performance expectancy, IT infrastructure and security), organizational factors (top management support, financial support, training, and policy), environmental factors (competitiveness pressure, facilitating conditions and trust) and behavioral intention to adopt ERMS, which in return has a significant relationship with the process of decision-making in HLI.
Abstract: An Electronic Records Management System (ERMS) is a computer program or set of applications that is utilized for keeping records up to date along with their storage. ERMS has been extensively utilized for enhancing the performance of academic institutions. The system assists in the planning and decision-making processes, which in turn enhances competencies. However, although ERMS is significant in supporting the process of decision-making, the majority of organizations have failed to take the initiative to implement it, while some implement it without an appropriate framework, resulting in practices that do not meet accepted standards. Therefore, this study identifies the factors influencing the adoption of ERMS among employees of HLI in Yemen and the role of such adoption in the decision-making process, using the Unified Theory of Acceptance and Use of Technology (UTAUT) along with Technology, Organization and Environment (TOE) as the underpinning theories. The study conducts a cross-sectional survey with a questionnaire as the technique for data collection, distributed to 364 participants in various Yemeni public Higher Learning Institutions (HLI). Using AMOS for statistical analysis, the findings revealed significant and positive relationships between technology factors (effort expectancy, performance expectancy, IT infrastructure and security), organizational factors (top management support, financial support, training, and policy), environmental factors (competitiveness pressure, facilitating conditions and trust) and behavioral intention to adopt ERMS, which in turn has a significant relationship with the process of decision-making in HLI. The study also presents a variety of theoretical and empirical contributions that enrich the body of knowledge in the field of technology adoption and the electronic records domain.

15 citations


Journal ArticleDOI
TL;DR: This research study employs an integrated methodology of fuzzy logic, ANP and TOPSIS to estimate the usable-security of Hospital Management System software.
Abstract: One of the biggest challenges that the software industry faces today is to create highly efficient applications without affecting the quality of healthcare system software. The demand for software with high-quality protection has seen a rapid increase in the software business market. Moreover, it is worthless to offer extremely user-friendly software applications without adequate security. Therefore, the need to find optimal solutions that bridge the gap between usability and protection by offering accessible yet secure software services has become an imminent prerequisite. Several research endeavours on usable-security assessment have been performed to fill the gap between functionality and security. In this context, several Multi-Criteria Decision Making (MCDM) approaches have been applied to different usability and security attributes so as to assess the usable-security of software systems. However, only a few studies are based on using the integrated approach of fuzzy Analytic Network Process (FANP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for assessing the usable-security of hospital management software. Therefore, in this research study, the authors have employed an integrated methodology of fuzzy logic, ANP and TOPSIS to estimate the usable-security of Hospital Management System software. For the intended objective, the study has taken into account 5 usable-security factors at the first tier and 16 sub-factors at the second tier, with 6 hospital management system software packages as alternative solutions. To measure the weights of the parameters and their relations with each other, fuzzy ANP is implemented. Thereafter, the fuzzy TOPSIS methodology is employed and the rating of alternatives is calculated on the basis of their proximity to the positive ideal solution.
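The final TOPSIS ranking step can be illustrated with the crisp (non-fuzzy) variant: normalize the decision matrix, weight it, and score each alternative by its closeness to the positive ideal solution. A simplified sketch assuming benefit-type criteria only; the paper's fuzzy ANP weights and fuzzy numbers are omitted:

```python
# Crisp TOPSIS closeness coefficient for a small decision matrix.
# Rows = alternatives, columns = benefit criteria; weights sum to 1.
import math

def topsis(matrix, weights):
    ncols = len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # Positive/negative ideal solutions (all criteria treated as benefits).
    pis = [max(col) for col in zip(*v)]
    nis = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        d_neg = math.sqrt(sum((x - n) ** 2 for x, n in zip(row, nis)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal
    return scores
```

An alternative that dominates on every criterion gets a closeness of exactly 1.0.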

15 citations


Journal ArticleDOI
Jieren Cheng1, Jingxin Liu1, Xinbin Xu1, Dongwan Xia1, Le Liu1, Victor S. Sheng 
TL;DR: This review introduces the sequence labeling system and evaluation metrics of NER, and divides Chinese NER methods into rule-based methods, statistics-based machine learning methods and deep learning-based methods.
Abstract: Named Entity Recognition (NER) is used to identify entity nouns in a corpus, such as Location, Person and Organization. NER is also an important basis for research in various natural language processing fields. Chinese NER presents some unique difficulties; for example, there is no obvious segmentation boundary between characters in a Chinese sentence, and the Chinese NER task is often combined with Chinese word segmentation. In response to these problems, we summarize the recognition methods of Chinese NER. In this review, we first introduce the sequence labeling system and evaluation metrics of NER. Then, we divide Chinese NER methods into rule-based methods, statistics-based machine learning methods and deep learning-based methods. Subsequently, we analyze in detail the model frameworks based on deep learning and the typical Chinese NER methods. Finally, we put forward the current challenges and future research directions of Chinese NER technology.
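The sequence labeling system mentioned above is typically the BIO scheme, where B- marks the start of an entity, I- its continuation, and O a non-entity character. A small decoding sketch (the tokens and tags are illustrative):

```python
# Decode entity spans from a BIO-tagged character sequence, the labeling
# scheme commonly used in (Chinese) NER.

def bio_decode(tokens, tags):
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):       # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:                # close any open entity
                entities.append((etype, "".join(tokens[start:i])))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return entities
```

Character-level joins (no spaces) suit Chinese text, where word boundaries are not explicit.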

12 citations


Journal Article
TL;DR: In this article, a Deep Learning based Locational Detection technique is proposed to continuously recognize the specific areas of FDIA; at its core is a False Data Detector (FDD) that incorporates a Convolutional Neural Network (CNN).
Abstract: The smart grid replaces the traditional power structure with information inventiveness that contributes to a new physical structure. In such a field, malicious information injection can potentially lead to extreme results. Well-crafted FDI attacks will never be identified by typical residual-based techniques for false data detection. Most of the work on the detection of FDI attacks is based on the linearized DC power system model and does not detect attacks on the AC model. Also, the overwhelming majority of current FDIA recognition approaches focus only on detecting the presence of FDIA, whilst significant injection location data cannot be obtained. Building on the continuous developments in deep learning, we propose a Deep Learning based Locational Detection technique to continuously recognize the specific areas of FDIA. At its core is a False Data Detector (FDD) that incorporates a Convolutional Neural Network (CNN). The FDD is robust enough to catch the falsified information. As a multi-label classifier, the CNN is utilized to evaluate the irregularity and co-occurrence dependency of power flow calculations due to the possible attacks. There are no prior statistical assumptions in the proposed architecture, as it is "model-free." It is also "cost-accommodating" since it does not alter the current FDD framework, and identification takes only several microseconds on a household computer. We have shown that ANN-MLP, SVM-RBF, and CNN can conduct locational detection under different noise and attack circumstances through broad experiments on IEEE 14, 30, 57, and 118 bus systems. Moreover, the multi-label classification method used successfully improves the precision of the identification.
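The reason residual-based detectors miss well-crafted FDIA can be shown in a few lines: an attack vector of the form a = Hc shifts the state estimate by c but leaves the measurement residual r = z − Hx̂ unchanged. A toy DC-model sketch with a two-state least-squares estimator (the matrix values are illustrative):

```python
# A stealthy FDIA a = H*c moves the state estimate but keeps the
# residual norm ||z - H*x_hat|| identical, so a residual test sees nothing.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def lstsq_2col(H, z):
    # Normal equations for a 2-column H: x = (H^T H)^-1 H^T z.
    a = sum(r[0] * r[0] for r in H); b = sum(r[0] * r[1] for r in H)
    d = sum(r[1] * r[1] for r in H)
    g0 = sum(r[0] * zi for r, zi in zip(H, z))
    g1 = sum(r[1] * zi for r, zi in zip(H, z))
    det = a * d - b * b
    return [(d * g0 - b * g1) / det, (a * g1 - b * g0) / det]

def residual_norm(H, z):
    x = lstsq_2col(H, z)
    return sum((zi - hi) ** 2 for zi, hi in zip(z, matvec(H, x))) ** 0.5
```

This is exactly why the paper turns to a learned detector instead of residual thresholds.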

10 citations


Journal ArticleDOI
TL;DR: In this article, a Fault Tolerant Data management (FTDM) scheme for healthcare IoT in fog computing is presented, where the data generated by healthcare IoT devices is efficiently organized and managed through well-defined components and steps.
Abstract: Fog computing aims to solve the bandwidth, network latency and energy consumption problems of cloud computing. Likewise, management of data generated by healthcare IoT devices is one of the significant applications of fog computing. A huge amount of data is being generated by healthcare IoT devices, and such data is required to be managed efficiently, with low latency, without failure, and with minimum energy consumption and low cost. Failures of tasks or nodes can cause more latency, maximum energy consumption and high cost. Thus, a failure-free, cost-efficient, and energy-aware management and scheduling scheme for data generated by healthcare IoT devices not only improves the performance of the system but also saves the precious lives of patients, due to minimum latency and the provision of fault tolerance. Therefore, to address all such challenges with regard to data management and fault tolerance, we have presented a Fault Tolerant Data Management (FTDM) scheme for healthcare IoT in fog computing. In FTDM, the data generated by healthcare IoT devices is efficiently organized and managed through well-defined components and steps. A two-way fault-tolerant mechanism, i.e., task-based fault-tolerance and node-based fault-tolerance, is provided in FTDM, through which failures of tasks and nodes are managed. The paper considers energy consumption, execution cost, network usage, latency, and execution time as performance evaluation parameters. The simulations, performed using iFogSim, show significant improvements: the proposed FTDM strategy reduces energy consumption by 39.7%, execution cost by 50.9%, network usage by 25.88%, latency by 44.15% and execution time by 48.89% as compared with the existing Greedy Knapsack Scheduling (GKS) strategy. Moreover, it is worthwhile to mention that sometimes patients are required to be treated remotely due to non-availability of facilities or due to some infectious diseases such as COVID-19. In such circumstances, the proposed strategy is significantly efficient.
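Node-based fault tolerance of the kind described can be sketched as a greedy reassignment: when a fog node fails, its queued tasks move to the healthy node with the most free capacity. This is an illustrative simplification, not the paper's exact FTDM components:

```python
# Node-based fault tolerance sketch: reassign a failed fog node's tasks
# to healthy nodes with enough spare capacity (largest tasks first).

def reassign_tasks(nodes, failed):
    """nodes: {name: {"capacity": int, "tasks": {task: load}}}."""
    orphans = nodes[failed]["tasks"]
    nodes[failed]["tasks"] = {}
    healthy = [n for n in nodes if n != failed]
    unplaced = []
    for task, load in sorted(orphans.items(), key=lambda kv: -kv[1]):
        def free(n):  # remaining capacity of node n
            return nodes[n]["capacity"] - sum(nodes[n]["tasks"].values())
        best = max(healthy, key=free)
        if free(best) >= load:
            nodes[best]["tasks"][task] = load
        else:
            unplaced.append(task)   # would trigger escalation / cloud offload
    return unplaced
```

Tasks that fit nowhere are returned so a higher tier (e.g. the cloud) can absorb them.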

7 citations



Journal Article
TL;DR: In this article, the authors proposed an intelligent resource allocation (IRA) to integrate with the extant resource adjustment (ERA) approach mainly based on the convergence of support vector machine (SVM) algorithm, software-defined networking (SDN), and mobile edge computing (MEC) paradigms.
Abstract: With the widespread deployment of fifth-generation (5G) communication networks, various real-time applications are rapidly increasing and generating massive traffic on backhaul network environments. In this scenario, network congestion will occur when the communication and computation resources exceed the maximum available capacity, which severely degrades the network performance. To alleviate this problem, this paper proposes an intelligent resource allocation (IRA) scheme to integrate with the extant resource adjustment (ERA) approach, based mainly on the convergence of the support vector machine (SVM) algorithm, software-defined networking (SDN), and mobile edge computing (MEC) paradigms. The proposed scheme acquires predictable schedules to adapt downlink (DL) transmission towards off-peak hour intervals as a predominant priority. Accordingly, the peak-hour bandwidth resources for serving real-time uplink (UL) transmission enlarge their capacity for a variety of mission-critical applications. Furthermore, to advance and boost gateway computation resources, MEC servers are implemented and integrated with the proposed scheme in this study. In the conclusive simulation results, the performance evaluation analyzes and compares the proposed scheme with the conventional approach over a variety of QoS metrics including network delay, jitter, packet drop ratio, packet delivery ratio, and throughput.

3 citations


Journal Article
TL;DR: In this article, the authors present a cloud-based IoT health platform and health big data processing technology that reduces the medical data management costs and enhances safety, and propose a study using explainable artificial intelligence that enhances the reliability and transparency of the decision-making system.
Abstract: Recently, the healthcare field has undergone rapid changes owing to the accumulation of health big data and the development of machine learning. Data mining research in the field of healthcare has different characteristics from those of other data analyses, such as the structural complexity of the medical data, requirement for medical expertise, and security of personal medical information. Various methods have been implemented to address these issues, including the machine learning model and cloud platform. However, the machine learning model presents the problem of opaque result interpretation, and the cloud platform requires more in-depth research on security and efficiency. To address these issues, this paper presents a recent technology for Internet-of-Things-based (IoT-based) health big data processing. We present a cloud-based IoT health platform and health big data processing technology that reduces the medical data management costs and enhances safety. We also present a data mining technology for health-risk prediction, which is the core of healthcare. Finally, we propose a study using explainable artificial intelligence that enhances the reliability and transparency of the decision-making system, which is called the black box model owing to its lack of transparency.

3 citations



Journal Article
TL;DR: In this paper, a new hybrid method called compressed encrypted data embedding (CEDE) is proposed, in which the secret information is first compressed with Lempel Ziv Welch (LZW) compression algorithm and then, the compressed secret information was encrypted using AES symmetric block cipher.
Abstract: The secure communication of information is a major concern over the internet. Information must be protected before transmission over a communication channel to avoid security violations. In this paper, a new hybrid method called compressed encrypted data embedding (CEDE) is proposed. In CEDE, the secret information is first compressed with the Lempel-Ziv-Welch (LZW) compression algorithm. Then, the compressed secret information is encrypted using the Advanced Encryption Standard (AES) symmetric block cipher. In the last step, the encrypted information is embedded into an image of size 512 × 512 pixels using image steganography. In the steganographic technique, the compressed and encrypted secret data bits are divided into pairs of two bits, and the pixels of the cover image are also arranged in four pairs. The four pairs of secret data are compared with the respective four pairs of each cover pixel, which leads to sixteen possibilities of matching between secret data pairs and pairs of cover pixels. The least significant bits (LSBs) of the current and subsequent pixels are modified according to the matching case number. The proposed technique provides two-fold security, and the results show that the stego image carries a high capacity of secret data with an adequate peak signal-to-noise ratio (PSNR) and lower mean square error (MSE) when compared with existing methods in the literature.
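The embedding step can be illustrated with plain 2-bits-per-pixel LSB substitution; the paper's pair-matching rule (the sixteen cases) is a refinement of this idea, and the LZW and AES stages are omitted here:

```python
# LSB steganography core: embed a byte string 2 bits per pixel into the
# least significant bits of pixel values, then recover it.

def embed(pixels, payload):
    # Split each byte into four 2-bit pairs, most significant pair first.
    bits = [(byte >> shift) & 0b11 for byte in payload for shift in (6, 4, 2, 0)]
    assert len(bits) <= len(pixels), "cover image too small"
    stego = list(pixels)
    for i, pair in enumerate(bits):
        stego[i] = (stego[i] & ~0b11) | pair   # overwrite the two LSBs
    return stego

def extract(pixels, nbytes):
    out = []
    for b in range(nbytes):
        byte = 0
        for pair in pixels[4 * b: 4 * b + 4]:
            byte = (byte << 2) | (pair & 0b11)
        out.append(byte)
    return bytes(out)
```

Changing only the two LSBs bounds the per-pixel distortion at 3, which is what keeps the PSNR high.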

Journal Article
TL;DR: In this article, the authors proposed a bio-inspired cross-layer routing protocol (BiHCLR) protocol to achieve effective and energy preserving routing in WSN assisted IoT, where the deployed sensor nodes are arranged in the form of a grid as per the grid-based routing strategy.
Abstract: Nowadays, the Internet of Things (IoT) is adopted to enable effective and smooth communication among different networks. In some specific applications, Wireless Sensor Networks (WSNs) are used in IoT to gather particular data without human interaction. WSNs are self-organizing in nature, so they mostly prefer multi-hop data forwarding. Thus, to achieve better communication, a cross-layer routing strategy is preferred. In the cross-layer routing strategy, routing is processed through three layers: transport, data link, and physical. Even though effective communication is achieved via a cross-layer routing strategy, energy is another constraint in WSN-assisted IoT. Cluster-based communication is one of the most used strategies for effectively preserving energy in WSN routing. This paper proposes a Bio-inspired cross-layer routing (BiHCLR) protocol to achieve effective and energy-preserving routing in WSN-assisted IoT. Initially, the deployed sensor nodes are arranged in the form of a grid as per the grid-based routing strategy. Then, to enable energy preservation in BiHCLR, a fuzzy logic approach is executed to select the Cluster Head (CH) for every cell of the grid. A hybrid bio-inspired algorithm is then used to select the routing path; the hybrid algorithm combines moth search and Salp Swarm optimization techniques. The performance of the proposed BiHCLR is evaluated based on Quality of Service (QoS) analysis in terms of packet loss, bit error rate, transmission delay, network lifetime, buffer occupancy and throughput. These results are then validated through comparison with conventional routing strategies like Fuzzy-rule-based Energy Efficient Clustering and Immune-Inspired Routing (FEEC-IIR), Neuro-Fuzzy Emperor Penguin Optimization (NF-EPO), Fuzzy Reinforcement Learning-based Data Gathering (FRLDG) and Hierarchical Energy Efficient Data gathering (HEED).
Ultimately, the proposed BiHCLR outperforms all other conventional techniques.
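Cluster-head selection per grid cell can be sketched with a crisp weighted score over residual energy and closeness to the cell centre; this stands in for the paper's fuzzy inference, and the weights and node fields are illustrative:

```python
# Per-cell cluster-head selection: score = w_energy * residual energy
# + w_dist * closeness to the cell centre. A crisp stand-in for the
# paper's fuzzy-logic CH selection; weights are illustrative.

def select_cluster_head(nodes, centre, w_energy=0.7, w_dist=0.3):
    """nodes: list of dicts with 'id', 'energy' (0..1), 'pos' (x, y)."""
    def score(n):
        dx, dy = n["pos"][0] - centre[0], n["pos"][1] - centre[1]
        closeness = 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)
        return w_energy * n["energy"] + w_dist * closeness
    return max(nodes, key=score)["id"]
```

Weighting energy more heavily keeps depleted nodes out of the CH role, which is the energy-preservation goal described above.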

Journal ArticleDOI
TL;DR: A lightweight active scan algorithm is proposed that effectively identifies devices using the UPnP protocols most commonly used by manufacturers; it is shown that devices can be distinguished with more than twice the true positive rate and recall, in an average time 15.24 times faster than Nmap, which has a firm position in the field.
Abstract: Today, IoT devices are flooding in, and traffic is increasing rapidly. The Internet of Things creates a variety of added value through connections between devices, while many devices are easily targeted by attackers due to security vulnerabilities. In the IoT environment, security diagnosis has problems such as having to provide different solutions for different types of devices in network situations where various types of devices are interlocked, personal information leakage by the security solutions themselves, and high cost. To avoid such problems, a TCP-based active scan was presented. However, the TCP-based active scan has the limitation that it is difficult to apply to real-time systems due to long detection times. To complement this, this study uses UDP-based approaches. Specifically, a lightweight active scan algorithm is proposed that effectively identifies devices using the UPnP protocols (SSDP, MDNS, and MBNS) that are most commonly used by manufacturers. The experimental results of this study have shown that devices can be distinguished with more than twice the true positive rate and recall, in an average time 15.24 times faster than Nmap, which has a firm position in the field.
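SSDP, one of the UPnP discovery protocols named above, works by multicasting an M-SEARCH request over UDP to 239.255.255.250:1900 and collecting unicast responses. A sketch of the request builder and scan loop (the message format follows the UPnP/SSDP specification; the loop is a generic scan, not the paper's algorithm):

```python
# Build an SSDP M-SEARCH discovery request (UPnP device discovery over
# UDP). 239.255.255.250:1900 and the header set come from the SSDP spec.
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="ssdp:all", mx=2):
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",                 # max response delay in seconds
        f"ST: {search_target}",      # search target, e.g. ssdp:all
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=2.0):
    # Send the probe and collect responder addresses until timeout.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    responders = set()
    try:
        while True:
            data, addr = sock.recvfrom(65535)
            responders.add(addr[0])
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responders
```

Because responses arrive within the MX window, a UDP sweep like this finishes in seconds, which is the latency advantage over TCP port scanning.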


Journal Article
TL;DR: The authors proposed a differentiable neural computer (DNC) architecture using a limited retention vector, which determines whether the network increases or decreases its usage of information in external memory according to a threshold.
Abstract: Recurrent neural network (RNN) architectures have been used for language modeling (LM) tasks that require learning long-range word or character sequences. However, the RNN architecture still suffers from unstable gradients on long-range sequences. To address the issue of long-range sequences, an attention mechanism has been used, showing state-of-the-art (SOTA) performance in all LM tasks. A differentiable neural computer (DNC) is a deep learning architecture using an attention mechanism. The DNC architecture is a neural network augmented with a content-addressable external memory. However, in the write operation, some information unrelated to the input word remains in memory. Moreover, DNCs have been found to perform poorly with low numbers of weight parameters. Therefore, we propose a robust memory deallocation method using a limited retention vector. The limited retention vector determines whether the network increases or decreases its usage of information in external memory according to a threshold. We experimentally evaluate the robustness of a DNC implementing the proposed approach according to the size of the controller and external memory on the enwik8 LM task. When we decreased the number of weight parameters by 32.47%, the proposed DNC showed a low bits-per-character (BPC) degradation of 4.30%, demonstrating the effectiveness of our approach in language modeling tasks.
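The deallocation idea can be sketched with the DNC usage update: a retention vector says how much each memory slot should be kept, and the "limited" variant hard-frees slots whose retention falls below a threshold. The thresholding rule here is a simplified reading of the paper, with toy values:

```python
# DNC-style memory usage update with a thresholded ("limited") retention
# vector: slots whose retention drops below the threshold are freed outright.

def update_usage(usage, write_w, read_ws, free_gates, threshold=0.1):
    n = len(usage)
    # Retention: how much each slot should be kept, given freed read slots.
    retention = [1.0] * n
    for gate, read_w in zip(free_gates, read_ws):
        retention = [r * (1.0 - gate * w) for r, w in zip(retention, read_w)]
    # Limited retention: hard-free slots below the threshold.
    retention = [0.0 if r < threshold else r for r in retention]
    # Standard DNC usage update, scaled by retention.
    return [(u + w - u * w) * r for u, w, r in zip(usage, write_w, retention)]
```

Zeroed usage makes a slot immediately available to the allocation weighting, which is what clears the stale, input-unrelated content the abstract mentions.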

Journal ArticleDOI
TL;DR: In this article, different offloading models are examined to identify the offloading parameters that need to be optimized, and compared several optimization techniques used to optimize offloading decisions, specifically Swarm Intelligence (SI) models, since they are best suited to the distributed aspect of edge computing.
Abstract: In recent years, mobile devices have become an essential part of daily life. More and more applications are being supported by mobile devices thanks to edge computing, which represents an emergent architecture that provides computing, storage, and networking capabilities for mobile devices. In edge computing, heavy tasks are offloaded to edge nodes to alleviate the computations on the mobile side. However, offloading computational tasks may incur extra energy consumption and delays due to network congestion and server queues. Therefore, it is necessary to optimize offloading decisions to minimize time, energy, and payment costs. In this article, different offloading models are examined to identify the offloading parameters that need to be optimized. The paper investigates and compares several optimization techniques used to optimize offloading decisions, specifically Swarm Intelligence (SI) models, since they are best suited to the distributed aspect of edge computing. Furthermore, based on the literature review, this study concludes that a Cuckoo Search Algorithm (CSA) in an edge-based architecture is a good solution for balancing energy consumption, time, and cost.
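A cuckoo-search loop for an offloading decision vector can be sketched in a few lines; Gaussian steps stand in for Lévy flights here, and the cost model (local energy vs. transmission overhead) is illustrative, not from the survey:

```python
# Cuckoo-search sketch minimizing a toy offloading cost: x[i] in [0, 1] is
# the fraction of task i sent to the edge. Gaussian steps replace Levy
# flights for brevity; the cost model is illustrative.
import random

def cost(x):
    # Local energy grows with what stays, network overhead with what leaves.
    return sum((1 - xi) ** 2 + 0.5 * xi for xi in x)

def cuckoo_search(dim=4, n_nests=8, iters=300, pa=0.25, seed=1):
    rng = random.Random(seed)
    nests = [[rng.random() for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=cost)
    for _ in range(iters):
        # New candidate by a random step from the best nest.
        cand = [min(1.0, max(0.0, xi + rng.gauss(0, 0.1))) for xi in best]
        victim = rng.randrange(n_nests)
        if cost(cand) < cost(nests[victim]):
            nests[victim] = cand
        # Abandon a fraction pa of the worst nests (replace randomly).
        nests.sort(key=cost)
        for i in range(int(pa * n_nests)):
            nests[-(i + 1)] = [rng.random() for _ in range(dim)]
        best = min([best, nests[0]], key=cost)
    return best, cost(best)
```

For this separable cost the per-dimension optimum is x = 0.75 (cost 0.4375), so the global minimum over four dimensions is 1.75.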

Journal Article
TL;DR: In this paper, the authors analyzed the possibility of a model inversion attack on a deep learning model of a recognition system, namely a face recognizer; the experimental results in targeting five registered users of a CNN-based face recognition system confirm that users' face images can be regenerated even from a deep model by MIA under a gray-box scenario.
Abstract: In a wide range of ML applications, the training data contains privacy-sensitive information that should be kept secure. Training ML systems on privacy-sensitive data makes the ML model inherent to the data. As the structure of the model has been fine-tuned by the training data, the model can be abused for accessing the data by estimation in a reverse process called model inversion attack (MIA). Although MIA has been applied to shallow neural network models of recognizers in the literature and its threat of privacy violation has been demonstrated, in the case of a deep learning (DL) model its efficiency was in question. This was due to the complexity of a DL model's structure, the large number of DL model parameters, the huge size of the training data, and the large number of registered users to a DL model and hence of class labels. This research work first analyses the possibility of MIA on a deep learning model of a recognition system, namely a face recognizer. Second, in contrast to the conventional MIA under the white-box scenario of having partial access to the users' non-sensitive information in addition to the model structure, the MIA is implemented on a deep face recognition system by having just the model structure and parameters but no user information. In this respect, it is under a semi-white-box scenario, or in other words a gray-box scenario. The experimental results in targeting five registered users of a CNN-based face recognition system confirm the possibility of regenerating users' face images even from a deep model by MIA under a gray-box scenario. Although for some images the evaluation recognition score is low and the generated images are not easily recognizable, for other images the score is high and facial features of the targeted identities are observable.
The objective and subjective evaluations demonstrate that a privacy cyber-attack by MIA on a deep recognition system is not only feasible but also a serious threat, one that will only grow as more advanced ML techniques are integrated into MIA.
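The core of MIA is gradient ascent on the input to maximize the target class's confidence. A toy sketch with a 3-feature logistic "recognizer" standing in for the face CNN (the weights are illustrative, not from any trained model):

```python
# Model-inversion core: start from a blank input and ascend the gradient
# of the target class's confidence until a prototypical input emerges.
import math

W = [2.0, -1.0, 0.5]   # hypothetical trained weights of the target class
B = -0.5

def confidence(x):
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def invert(steps=300, lr=0.1):
    x = [0.0, 0.0, 0.0]                 # start from a blank "image"
    for _ in range(steps):
        p = confidence(x)
        # d(confidence)/dx_i = p * (1 - p) * W_i  (logistic gradient)
        x = [min(1.0, max(-1.0, xi + lr * p * (1 - p) * wi))
             for xi, wi in zip(x, W)]
    return x
```

The recovered input is a class prototype rather than any single training image, but for a face recognizer that prototype already leaks facial features, which is the attack's point.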




Journal Article
TL;DR: In this paper, the authors proposed an intelligent detection technique for piracy sites that automatically classifies and detects whether a site is involved in copyright infringement, based on features of piracy sites.
Abstract: Recently, with the diversification of media services and the development of smart devices, users have more opportunities to use digital content such as movies, dramas, and music; consequently, the copyright market expands simultaneously. However, there are piracy sites that generate revenue through the illegal use of copyrighted works. This has led to losses for copyright holders, and the scale of copyright infringement has increased with the ever-growing number of piracy sites. To prevent this, government agencies respond to copyright infringement by monitoring piracy sites using online monitoring and countermeasure strategies. However, the detection and blocking process consumes a significant amount of time compared to the rate at which new piracy sites are generated, so online monitoring is less effective. Additionally, given that piracy sites are as sophisticated and refined as legitimate sites, it is necessary to accurately distinguish and block sites involved in copyright infringement. Therefore, in this study, we analyze features of piracy sites and, based on this analysis, propose an intelligent detection technique that automatically classifies and detects whether a site is involved in infringement.

Journal Article
TL;DR: In this article, a Semantic Conceptual Relational Similarity (SCRS) based clustering algorithm is proposed, which considers the relationship of a document in two ways to measure similarity.
Abstract: In the modern, rapidly growing web era, the scope of web publication is about accessing web resources. Due to the increased size of the web, search engines face many challenges in indexing web pages as well as producing results for user queries. Methodologies discussed in the literature for clustering web documents struggle to produce high clustering accuracy. This problem is mitigated using the proposed scheme, a Semantic Conceptual Relational Similarity (SCRS) based clustering algorithm, which considers the relationship of a document in two ways to measure similarity. One is the number of semantic relations of a document class covered by the input document, and the second is the number of conceptual relations the input document covers towards a document class. Given a data set Ds, the method estimates the SCRS measure for each document Di towards each available class of documents. As a result, the class with maximum SCRS is identified and the document is indexed in the selected class. The SCRS measure is computed according to the semantic relevancy of the input document towards each document of a class. Similarly, the input query is measured for its Query Relational Semantic Score (QRSS) towards each class of documents. Based on the value of the QRSS measure, the document class is identified, retrieved and ranked to produce the final result set. In both cases, the semantic measures are estimated based on the concepts available in a semantic ontology. The proposed method yields efficient indexing, and search efficiency is also improved.
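The two-way similarity idea can be miniaturized: score a document against each class by how much of the class's term set the document covers and how much of the document the class covers, then index under the argmax. The term sets here are illustrative stand-ins for the ontology concepts:

```python
# Two-way relational similarity in miniature: coverage of the class by
# the document plus coverage of the document by the class; the document
# is indexed under the argmax class.

def scrs_assign(doc_terms, class_terms):
    doc = set(doc_terms)
    best_cls, best_score = None, -1.0
    for cls, terms in class_terms.items():
        cls_set = set(terms)
        covered_class = len(doc & cls_set) / len(cls_set)  # class relations covered
        covered_doc = len(doc & cls_set) / len(doc)        # document relations covered
        score = covered_class + covered_doc
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls
```

Combining both directions penalizes classes whose term sets merely overlap incidentally with a long document.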

Journal Article
TL;DR: Wang et al. as mentioned in this paper proposed an algorithm for city-level boundary node identification based on bidirectional approaching, which uses topological analysis to construct a set of candidate boundary nodes and then identifies the boundary nodes.
Abstract: Existing city-level boundary node identification methods need to locate all IP addresses on the path to differentiate which IP is the boundary node. However, these methods are susceptible to time delay, the accuracy of location information and other factors, and the resource consumption of locating all IPs is tremendous. To improve the recognition rate and reduce the locating cost, this paper proposes an algorithm for city-level boundary node identification based on bidirectional approaching. Different from the existing methods based on time-delay information and location results, the proposed algorithm uses topological analysis to construct a set of candidate boundary nodes and then identifies the boundary nodes. The proposed algorithm can identify the boundary of the target city network without high-precision location information and dramatically reduces resource consumption compared with the traditional algorithm. Meanwhile, it can label some errors in the existing IP address database. Based on 45,182,326 measurement results from Zhengzhou, Chengdu and Hangzhou in China and New York, Los Angeles and Dallas in the United States, the experimental results show that the algorithm can accurately identify the city boundary nodes using only 20.33% of the location resources, and more than 80.29% of the boundary nodes can be mined with a precision of more than 70.73%.




Journal Article
TL;DR: In this paper, a context-aware vehicular task offloading (CAVTO) optimization scheme was proposed to reduce the system delay significantly in collaborative vehicular edge computing networks with multiple smart cars and multiple MEC servers.
Abstract: With the development of mobile edge computing (MEC), new application technologies such as self-driving, augmented reality (AR) and traffic perception are emerging. Nevertheless, the high latency and low reliability of traditional cloud computing solutions make it difficult to meet the requirements of the growing number of smart cars (SCs) running computation-intensive applications. Hence, this paper studies an efficient offloading decision and resource allocation scheme in collaborative vehicular edge computing networks with multiple SCs and multiple MEC servers to reduce latency. To solve this problem effectively, we propose a context-aware offloading strategy based on the differential evolution (DE) algorithm that considers vehicle mobility, roadside unit (RSU) coverage and vehicle priority. On this basis, an autoregressive integrated moving average (ARIMA) model is employed to predict idle computing resources from base station traffic in different periods. Simulation results demonstrate that the context-aware vehicular task offloading (CAVTO) optimization scheme reduces the system delay significantly.
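For readers unfamiliar with DE, a minimal DE/rand/1/bin loop looks like the following. This is a generic sketch, not CAVTO: the population size, mutation factor `F`, crossover rate `CR` and the toy latency-like cost function are all my assumptions, stand-ins for the paper's offloading cost.

```python
import random

def differential_evolution(cost, dim, bounds, pop_size=20, F=0.8, CR=0.9, gens=100):
    """Minimal DE/rand/1/bin minimizing `cost` over [lo, hi]^dim."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutate three distinct individuals other than the target.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [
                min(hi, max(lo, a[k] + F * (b[k] - c[k])))  # clamp to bounds
                if random.random() < CR else pop[i][k]       # binomial crossover
                for k in range(dim)
            ]
            if cost(trial) < cost(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

# Toy stand-in cost: offloading fractions for 3 tasks, optimum at 0.5 each.
best = differential_evolution(lambda x: sum((v - 0.5) ** 2 for v in x), 3, (0, 1))
```

In an offloading setting, each vector entry could encode how much of a task a vehicle offloads, and the cost would fold in mobility, RSU coverage and priority as the abstract describes.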


Journal ArticleDOI
TL;DR: This work proposes a low-cost CN processing method to reduce the complexity of CN operations, which take most of the decoding time, and employs quick selection algorithm, thereby reducing the hardware complexity and CN operation time.
Abstract: Although non-binary low-density parity-check (NB-LDPC) codes have better error-correction capability than binary LDPC codes, their decoding complexity is significantly higher. It is therefore crucial to reduce the decoding complexity of NB-LDPC codes while maintaining their error-correction capability in order to adopt them for various applications. The extended min-sum (EMS) algorithm is widely used for decoding NB-LDPC codes, and it reduces the complexity of check node (CN) operations via message truncation. Herein, we propose a low-cost CN processing method to reduce the complexity of CN operations, which take most of the decoding time. Unlike existing studies on low-complexity CN operations, the proposed method employs the quickselect algorithm, thereby reducing both the hardware complexity and the CN operation time. The experimental results show that the proposed selection-based CN operation is more than three times faster and achieves better error-correction performance than the conventional EMS algorithm.
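The appeal of quickselect here is that EMS-style message truncation only needs the k best (smallest-metric) messages, not a full sort. A generic sketch of that selection step, under my own assumptions (software recursion rather than the paper's hardware design; `quickselect_smallest` is a hypothetical name):

```python
import random

def quickselect_smallest(vals, k):
    """Return the k smallest values (order not guaranteed) in
    expected O(n), instead of the O(n log n) of a full sort."""
    if k <= 0:
        return []
    if k >= len(vals):
        return list(vals)
    pivot = random.choice(vals)
    lows = [v for v in vals if v < pivot]
    pivots = [v for v in vals if v == pivot]
    highs = [v for v in vals if v > pivot]
    if k < len(lows):                      # answer lies entirely in lows
        return quickselect_smallest(lows, k)
    if k < len(lows) + len(pivots):        # lows plus some pivot copies
        return lows + pivots[:k - len(lows)]
    return lows + pivots + quickselect_smallest(highs, k - len(lows) - len(pivots))

# Truncate hypothetical CN message metrics (lower = more reliable) to the 3 best.
kept = quickselect_smallest([5.2, 1.1, 4.0, 2.3, 8.7, 3.5], 3)
```

In the EMS context, `vals` would be the candidate message metrics at a check node and `k` the truncation length n_m.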

Journal Article
TL;DR: This paper proposes a privacy protection and dynamic share system (PPADS) based on CP-ABE for PHRs, which supports full policy hiding and flexible access control.
Abstract: A personal health record (PHR) system is an electronic medical system that enables patients to acquire, manage and share their health data. Nevertheless, data confidentiality and user privacy in PHRs have not been fully addressed. As a fine-grained access control mechanism over health data, ciphertext-policy attribute-based encryption (CP-ABE) can guarantee data confidentiality. However, existing CP-ABE solutions for PHRs face new access-control challenges, such as policy privacy disclosure and dynamic policy updates. To address these problems, we propose a privacy protection and dynamic share system (PPADS) based on CP-ABE for PHRs that supports full policy hiding and flexible access control. In the system, the attribute information of the access policy is fully hidden by an attribute Bloom filter. Moreover, the data user produces a transformation key that allows the PHR cloud to change the access policy dynamically. Security analysis shows that PPADS is selectively secure under the standard model. Finally, the performance comparisons and simulation results demonstrate that PPADS is suitable for PHRs.
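The attribute Bloom filter idea can be illustrated with a toy: the ciphertext carries only a bit array derived from hashed attribute positions, so an observer can test membership of attributes they already hold but cannot read the policy's attributes off the ciphertext. This is a plain Bloom filter sketch under my own assumptions (sizes, hash construction and the `AttributeBloomFilter` name are hypothetical), not the paper's construction, and it inherits the usual Bloom filter caveat of false positives.

```python
import hashlib

class AttributeBloomFilter:
    """Toy Bloom filter over policy attributes: m-bit array, k
    SHA-256-derived positions per attribute."""
    def __init__(self, m=256, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, attr):
        # Derive k pseudo-random bit positions from the attribute string.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{attr}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, attr):
        for p in self._positions(attr):
            self.bits |= 1 << p

    def may_contain(self, attr):
        # False only if some position is unset; true positives always pass,
        # but unrelated attributes may rarely collide (false positive).
        return all(self.bits >> p & 1 for p in self._positions(attr))

abf = AttributeBloomFilter()
abf.add("cardiologist")   # hypothetical policy attributes
abf.add("hospital-A")
```

A decryptor holding the attribute "cardiologist" can check `abf.may_contain("cardiologist")` without the policy listing any attribute in the clear.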