
Showing papers in "International Journal of Information Technology and Web Engineering in 2020"


Journal ArticleDOI
TL;DR: The authors propose a cloud-assisted proxy re-encryption scheme for efficient data sharing across IoT systems that solves the root extraction problem using near-ring and improves the security measures of the system.
Abstract: In recent years, IoT applications have grown rapidly and spread across several domains. This tremendous growth leads to various security and privacy concerns. Existing security algorithms fail to provide improved security features across IoT devices due to their resource-constrained nature (inability to handle huge amounts of data). In this context, the authors propose a cloud-assisted proxy re-encryption scheme for efficient data sharing across IoT systems. The proposed approach solves the root extraction problem using a near-ring, which improves the security measures of the system. The security analysis of the proposed approach shows that it provides improved security with lower computational overheads.

19 citations


Journal ArticleDOI
TL;DR: An ontology matching framework is proposed that uses novel combinations of semantic matching techniques to find accurate mappings between formal ontology schemas, with an upper-level ontology used as a semantic bridge.
Abstract: Over the last few decades, data has assumed a central role, becoming one of the most valuable items in society. The exponential increase along several dimensions of data, e.g. volume, velocity, variety, veracity, and value, has led to the definition of novel methodologies and techniques to represent, manage, and analyse data. In this context, many efforts have been devoted to data reuse and integration processes based on the semantic web approach. According to this vision, people are encouraged to share their data using standard common formats to allow more accurate interconnection and integration processes. In this article, the authors propose an ontology matching framework using novel combinations of semantic matching techniques to find accurate mappings between formal ontology schemas. Moreover, an upper-level ontology is used as a semantic bridge. An implementation of the proposed framework is able to retrieve, match, and align ontologies. The framework has been evaluated on state-of-the-art ontologies in the domain of cultural heritage, and its performance has been measured by means of standard measures.

13 citations
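The abstract does not give implementation details; purely as an illustration of one common element-level matching step (label similarity between ontology classes), here is a minimal Python sketch. The class labels, the token-based Jaccard measure, and the 0.5 threshold are illustrative assumptions, not the authors' framework, and the upper-level semantic bridge is not reproduced.

```python
# Minimal sketch of element-level label matching between two ontology schemas.
# Labels and threshold are illustrative only; the paper's framework also uses
# an upper-level ontology as a semantic bridge, which is not shown here.

def tokens(label: str) -> set:
    """Lowercase a class label and split it into word tokens."""
    return set(label.lower().replace("_", " ").replace("-", " ").split())

def jaccard(a: set, b: set) -> float:
    """Token-overlap similarity between two label token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match(classes_a, classes_b, threshold=0.5):
    """Return candidate mappings (class_a, class_b, score) above the threshold."""
    mappings = []
    for ca in classes_a:
        for cb in classes_b:
            score = jaccard(tokens(ca), tokens(cb))
            if score >= threshold:
                mappings.append((ca, cb, round(score, 2)))
    return mappings

# Hypothetical cultural-heritage class labels, for illustration only.
onto_a = ["Museum Object", "Creator", "Physical Object"]
onto_b = ["Man-Made Object", "Object Creator", "Place"]
print(match(onto_a, onto_b))
```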


Journal ArticleDOI
TL;DR: This article compares the results of several standard methods with and without the use of method noise, demonstrates its efficiency and validity, and shows how it can best be used in different denoising settings.
Abstract: This article introduces the concept, use, and implementation of method noise in the field of synthetic aperture radar (SAR) image despeckling. Method noise can enhance the efficiency and performance of any despeckling algorithm; it is an easy and efficient way of improving the results. The difference between the speckled image and the despeckled image contains some residual image information, which is due to the inefficiency of the denoising algorithm. This article compares the results of several standard methods with and without the use of method noise, demonstrating its efficiency and validity, and shows how it can best be used in different ways of denoising. The results are compared on the basis of performance metrics such as PSNR and SSIM. The concept of method noise is not restricted to SAR images; it has wide usage and application and can be used in any denoising procedure, such as for medical or optical images, but this paper reports experimental results only on SAR images.

12 citations
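Method noise, as described above, is simply the difference between the input (speckled) image and the despeckled output; a denoiser that removes only noise should leave little structure in that residual. The sketch below illustrates the idea with scikit-image; the median filter, the synthetic speckle model, and the test image are placeholders, not the despeckling methods evaluated in the paper.

```python
# Sketch: compute "method noise" (speckled image minus despeckled image) and
# score a denoiser with PSNR/SSIM. The median filter and simulated speckle
# stand in for the SAR despeckling methods compared in the paper.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import median
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(data.camera())                 # stand-in for a clean scene
rng = np.random.default_rng(0)
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # toy multiplicative speckle
speckled = np.clip(speckled, 0, 1)

despeckled = median(speckled)                        # placeholder despeckling filter
method_noise = speckled - despeckled                 # residual removed by the filter

print("PSNR:", peak_signal_noise_ratio(clean, despeckled, data_range=1.0))
print("SSIM:", structural_similarity(clean, despeckled, data_range=1.0))
print("Residual mean/std:", method_noise.mean(), method_noise.std())
```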


Journal ArticleDOI
TL;DR: A new effective recommender system for TED talks that first groups users according to their preferences, and then provides a powerful mechanism to improve the quality of recommendations for users is proposed.
Abstract: With the enormous amount of information circulating on the Web, it is becoming increasingly difficult to find the necessary and useful information quickly and efficiently. However, with the emergence of recommender systems in the 1990s, reducing information overload became easier. In recent years, many recommender systems have employed collaborative filtering, which has proven to be one of the most successful techniques in recommender systems. Nowadays, the latest generation of collaborative filtering methods still requires further improvements to make recommendations more efficient and accurate. Therefore, the objective of this article is to propose a new, effective recommender system for TED talks that first groups users according to their preferences and then provides a powerful mechanism to improve the quality of recommendations for users. In this context, the authors used the Pearson Correlation Coefficient (PCC) method and TED talks to create the TED user-user matrix. Then, they used the k-means clustering method to group similar users into clusters and create a predictive model. Finally, they used this model to make relevant recommendations to other users. The experimental results on a real dataset show that their approach significantly outperforms state-of-the-art methods in terms of RMSE, precision, recall, and F1 scores.

11 citations
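As a rough sketch of the pipeline described above (Pearson user-user similarity, k-means grouping of users, then prediction from cluster peers), the following Python snippet uses a tiny hypothetical rating matrix; the actual TED talks data, cluster count, and tuning are assumptions and not taken from the paper.

```python
# Sketch of the described pipeline: a Pearson user-user matrix, k-means
# clustering of users, then rating prediction from cluster peers.
# The rating matrix and cluster count are hypothetical, not the TED data.
import numpy as np
from sklearn.cluster import KMeans

ratings = np.array([            # rows = users, columns = talks (0 = unrated)
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

user_sim = np.corrcoef(ratings)                 # Pearson user-user matrix

# Group users with similar preference profiles, then predict from cluster peers.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(user_sim)

def predict(user, item):
    """Predict a rating as the mean observed rating within the user's cluster."""
    peers = [u for u in range(len(ratings))
             if labels[u] == labels[user] and ratings[u, item] > 0]
    return ratings[peers, item].mean() if peers else ratings[ratings > 0].mean()

print("clusters:", labels)
print("predicted rating of user 0 for item 2:", predict(0, 2))
```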


Journal ArticleDOI
TL;DR: A new method for generating extractive summaries directly via unigram and bigram extraction techniques using the selective part of speech tagging to extract significant unigrams and bigrams from a set of sentences is described.
Abstract: This article describes a new method for generating extractive summaries directly via unigram and bigram extraction techniques. The methodology uses selective part-of-speech tagging to extract significant unigrams and bigrams from a set of sentences. Extracted unigrams and bigrams, along with other features, are used to build a final summary. A new selective rule-based part-of-speech tagging system is developed that concentrates on the parts of speech most important for summarization: noun, verb, and adjective. Other parts of speech, such as prepositions, articles, and adverbs, play a lesser role in determining the meaning of sentences; therefore, they are not considered when choosing significant unigrams and bigrams. The proposed method is tested on two problem domains: the citations and Opinosis data sets. Results show that the proposed method performs better than the TextRank, LexRank, and Edmundson summarization methods. The proposed method is general enough to summarize texts from any domain.
Keywords: Abstractive Summarization, Extractive Summarization, LexRank, Natural Language Processing, Part of Speech Tagging, TextRank, Unigram-Bigram Extraction

7 citations
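A minimal illustration of the selective POS-based extraction step described above, using NLTK's tagger: only nouns, verbs, and adjectives are kept as candidate unigrams, and bigrams are formed over adjacent kept words. The sample sentence and the way bigrams are formed are assumptions for the sketch; the paper's scoring and summary assembly are not reproduced.

```python
# Sketch of selective POS-based unigram/bigram extraction: keep only nouns,
# verbs, and adjectives as candidates, as the paper describes.
# Requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk

KEEP = ("NN", "VB", "JJ")   # noun, verb, adjective tag prefixes

def significant_ngrams(sentence):
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    kept = [w.lower() for w, tag in tagged if tag.startswith(KEEP)]
    unigrams = kept
    bigrams = list(zip(kept, kept[1:]))   # bigrams over the kept words only
    return unigrams, bigrams

uni, bi = significant_ngrams("The proposed method extracts significant unigrams "
                             "and bigrams from tagged sentences.")
print(uni)
print(bi)
```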


Journal ArticleDOI
TL;DR: Empirical analysis is done to show the performance of the proposed system using real-time datasets, focusing on machine learning techniques for improving practice and research in such e-X domains.
Abstract: In this Internet era, with ever-increasing interactions among participants, data is growing so rapidly that the amount of information available in the near future is hard to predict. Modeling and visualizing such data is one of the challenging tasks in the data analytics field. Business intelligence is the way a company can use data to improve business and operational efficiency, whereas data analytics involves improving ways of extracting intelligence from that data before acting on it. Thus, the proposed work focuses on prevailing challenges in data analytics and its application to social media such as Facebook, Twitter, blogs, e-commerce, e-services, and so on. Among all of the possible interactions, e-commerce, e-education, and e-services have been identified as important domains for analytics techniques, so the work focuses on machine learning techniques for improving practice and research in such e-X domains. Empirical analysis is done to show the performance of the proposed system using real-time datasets.

6 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a secure threshold based encryption scheme combined with homomorphic properties (TBHM) for accessing cloud based health information, which completely eliminates the possibility of any kind of attack as data cannot be accessed using any type of key.
Abstract: Healthcare today is one of the most promising, prevailing, and sensitive sectors, where patient information such as prescriptions and health records is kept on the cloud to provide high-quality on-demand services, enhancing e-health services by reducing the burden of data storage and maintenance and by providing information independent of location and time. The major issue for healthcare organizations is to provide protected sharing of healthcare data from the cloud to decision makers, medical practitioners, data analysts, and insurance firms while maintaining confidentiality and integrity. This article proposes a novel and secure threshold-based encryption scheme combined with homomorphic properties (TBHM) for accessing cloud-based health information. Homomorphic encryption completely eliminates the possibility of any kind of attack, as the data cannot be accessed using any type of key. The experimental results report the superiority of the TBHM scheme over the state of the art in terms of throughput, file encryption/decryption time, key generation time, error rate, latency time, and security overheads.

6 citations
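The TBHM scheme itself is not specified in the abstract; purely to illustrate the homomorphic property it builds on (computing on ciphertexts without decrypting), here is a toy Paillier-style additively homomorphic example in Python. The tiny primes are for demonstration only, and this is not the authors' threshold scheme.

```python
# Toy Paillier cryptosystem illustrating additive homomorphism:
# E(m1) * E(m2) mod n^2 decrypts to m1 + m2. Tiny primes, demo only;
# this is NOT the TBHM scheme from the paper.
import math, random

p, q = 293, 433                      # toy primes (never use in practice)
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse of L(g^lambda mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))       # 42: addition performed on ciphertexts
```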



Journal ArticleDOI
TL;DR: A crisp-based approach is proposed for representing and reasoning about concepts evolving in time and their properties, in terms of qualitative relations (e.g., “before”) in addition to quantitative ones (time intervals and points).
Abstract: This article proposes a crisp-based approach for representing and reasoning about concepts evolving in time and their properties, in terms of qualitative relations (e.g., “before”) in addition to quantitative ones (time intervals and points). It is suitable for handling not only precise time intervals and points but also imprecise ones. It extends the 4D-fluents approach with crisp components to represent the handled data, and it extends Allen's interval algebra; this extension allows reasoning about imprecise time intervals. Compared to related work, it is based on crisp set theory. The extended relations preserve many properties of the original algebra, and their definitions are adapted to allow relating a time interval and a time point, and two time points. All relations can be used for temporal reasoning by means of transitivity tables. Finally, the article proposes a crisp ontology that instantiates the 4D-fluents-based representation based on the extended Allen's algebra.

5 citations
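For reference, the qualitative relations being extended are Allen's thirteen interval relations over precise intervals. A minimal sketch of classifying the relation between two crisp intervals is shown below; the article's extensions to imprecise intervals and to interval/point relations are not reproduced.

```python
# Minimal classification of Allen's relations between two crisp (precise)
# time intervals. The article's extension to imprecise intervals and to
# interval/point relations is not reproduced here.
def allen_relation(a, b):
    """Return Allen's relation of interval a = (a1, a2) to b = (b1, b2)."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:  return "before"
    if a2 == b1: return "meets"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1 and a2 < b2:  return "starts"
    if a1 > b1 and a2 == b2:  return "finishes"
    if a1 > b1 and a2 < b2:   return "during"
    if a1 < b1 and b1 < a2 < b2: return "overlaps"
    # Otherwise a holds the inverse of one of the base relations w.r.t. b.
    inverse = {"before": "after", "meets": "met-by", "starts": "started-by",
               "finishes": "finished-by", "during": "contains",
               "overlaps": "overlapped-by"}
    return inverse[allen_relation(b, a)]

print(allen_relation((1, 3), (3, 6)))   # meets
print(allen_relation((2, 9), (4, 6)))   # contains
```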


Journal ArticleDOI
TL;DR: The proposed design, based on an improved Apriori algorithm and confidence formula, greatly reduces the number of original alarms and completes the merging of related rules.
Abstract: With the increasing number of internet users, the volume of network alarm information grows, increasing the pressure on the SMS gateway and causing frequent alarm delays. To effectively address this problem, the article builds a web log mining system based on an improved Apriori algorithm and confidence formula. Mining is realized in three steps, through a data grabbing module, a model training module, and a test evaluation module, and the mining system design comprises five modules: data acquisition, data preprocessing, mining model building, mining model checking, and mining model analysis and evaluation. Finally, a Python program was used to verify the design against about two million original alarm records from one month of a company's network management database. The verification results show that the design greatly reduces the number of original alarms and completes the merging of related rules.

5 citations
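As a minimal illustration of the Apriori-plus-confidence style of rule mining described above (not the paper's improved algorithm), the sketch below mines frequent alarm itemsets and derives high-confidence rules between co-occurring alarms; the alarm labels and thresholds are invented for the example.

```python
# Minimal Apriori-style mining of co-occurring alarms plus rule confidence.
# Alarm labels, support and confidence thresholds are invented for illustration;
# the paper's improved Apriori algorithm is not reproduced.
from itertools import combinations

transactions = [                      # each set = alarms raised in one time window
    {"link_down", "high_cpu"}, {"link_down", "high_cpu", "packet_loss"},
    {"link_down", "packet_loss"}, {"high_cpu"}, {"link_down", "high_cpu"},
]
MIN_SUPPORT, MIN_CONF = 0.4, 0.7

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# Frequent 1- and 2-itemsets (Apriori: only frequent itemsets are extended).
items = {i for t in transactions for i in t}
freq1 = [frozenset([i]) for i in items if support(frozenset([i])) >= MIN_SUPPORT]
freq2 = [frozenset(c) for c in combinations({i for f in freq1 for i in f}, 2)
         if support(frozenset(c)) >= MIN_SUPPORT]

# Rules A -> B with confidence = support(A u B) / support(A); high-confidence
# rules indicate alarms that can be merged under a single root alarm.
for pair in freq2:
    for a in pair:
        b = pair - {a}
        conf = support(pair) / support(frozenset([a]))
        if conf >= MIN_CONF:
            print(f"{a} -> {set(b)}  conf={conf:.2f}")
```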


Journal ArticleDOI
TL;DR: A data stream classification model based on distributed processing is constructed, the corresponding data sequence is selected and formatted abstractly, and the local node mining method and global mining mode under this model are designed.
Abstract: In order to solve the problem of real-time detection of power grid equipment anomalies, this paper proposes a data stream classification model based on distributed processing. To realize distributed processing of the power grid data stream, the corresponding data sequence is selected and formatted abstractly, and a local node mining method and a global mining mode based on uneven data stream classification are designed under this model. In the local node miner, a block-by-block mining strategy is implemented by acquiring the current data blocks. At the same time, the expression and real-time maintenance of local mining patterns are completed in combination with a clustering algorithm, thus improving the transmission rate of information between nodes and ensuring the timeliness of the overall classification algorithm.

Journal ArticleDOI
TL;DR: An architecture adapted to smart homes and BPM is presented, together with a GIPSIT operator that manages IoT semantic data and the flow generated by connected objects.
Abstract: The evolution of the Internet of Things is accompanied by automated processes based on BPM. In this context, one of the main problems is to adapt smart objects to cooperate with each other and with applications in order to generate intelligent and automated decisions. BPMN is the most suitable means for business process modeling, but BPMN operators have limitations for managing IoT semantic data and the flow generated by connected objects. This article presents an architecture adapted to smart homes and BPM; moreover, it proposes a GIPSIT operator that manages the semantic data of IoT and solves the problems raised by existing operators. The authors thus contribute a GIPSIT-based BPM platform in order to simulate intelligent cooperation between objects and, remotely, between users and objects via the Internet, to provide smart assistance to patients living in connected environments.

Journal ArticleDOI
TL;DR: A new graph-based method for adapting multimedia documents in complex situations by modeling relations between adaptation-actions to select the compatible triggerable ones using ontological reasoning.
Abstract: Currently, advanced hardware offers mobile devices that fit in the hand and allow documents to be consulted anytime and anywhere. Multiple user-context constraints, as well as mobile device capabilities, may require the adaptation of multimedia content. In this article, the authors propose a new graph-based method for adapting multimedia documents in complex situations. Each contextual situation could correspond to a physical handicap and therefore trigger an adaptation action using ontological reasoning. Consequently, when several contextual situations are identified, this corresponds to multiple disabilities and may give rise to inconsistency between the triggered actions. Their method models the relations between adaptation actions in order to select the compatible triggerable ones. To evaluate the feasibility and performance of their proposal, an experimental study was carried out on real scenarios. When tested and compared with some existing approaches, their proposal showed improvements according to various criteria.

Journal ArticleDOI
TL;DR: A novel undersampling algorithm based on the combination of spectral clustering and a cost-sensitive deep neural network (SCCSDNN) is proposed, which outperforms state-of-the-art undersampling, oversampling, and ensemble resampling techniques.
Abstract: Peer-to-peer lending, also known as P2P lending, is the new generation of loan disbursement, where lenders and borrowers communicate through online services. Loans through P2P lending platforms are generally unsecured, due to the presence of borrowers with low credit scores. The LendingClub dataset, consisting of quantitative and qualitative information on borrowers from 2007 to 2011, is used for this research. Machine learning models trained on such an imbalanced dataset are biased towards the majority-class samples: the model performs well on the majority class (safe borrowers) in terms of high precision, but performs poorly on the minority class (defaulted borrowers), yielding low recall on minority-class samples. To deal with this issue, a novel undersampling algorithm based on the combination of spectral clustering and a cost-sensitive deep neural network (SCCSDNN) is proposed. Experimental results showcase the strong performance of the proposed technique, which outperforms state-of-the-art undersampling, oversampling, and ensemble resampling techniques.
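The SCCSDNN details are not given in the abstract; as a hedged sketch of the undersampling idea (cluster the majority class with spectral clustering, then keep a few representatives per cluster to balance the data), the snippet below uses scikit-learn on synthetic data. The cluster count, sampling sizes, and data are assumptions, and the cost-sensitive deep network stage is omitted.

```python
# Sketch of clustering-based undersampling of the majority class: spectral
# clustering groups the "safe borrower" samples, then representatives are
# drawn from each cluster so the classes become roughly balanced.
# Synthetic data and parameters are placeholders; the paper's cost-sensitive
# deep neural network stage is not shown.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import SpectralClustering

X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)
X_maj, X_min = X[y == 0], X[y == 1]

n_clusters = 8
labels = SpectralClustering(n_clusters=n_clusters, random_state=0).fit_predict(X_maj)

per_cluster = max(1, len(X_min) // n_clusters)   # keep roughly |minority| majority samples
rng = np.random.default_rng(0)
keep = np.concatenate([
    rng.choice(np.where(labels == c)[0],
               size=min(per_cluster, int((labels == c).sum())), replace=False)
    for c in range(n_clusters) if (labels == c).any()
])

X_bal = np.vstack([X_maj[keep], X_min])
y_bal = np.concatenate([np.zeros(len(keep)), np.ones(len(X_min))])
print("balanced class counts:", np.bincount(y_bal.astype(int)))
```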


Journal ArticleDOI
TL;DR: Image- and text-based semantic similarity analysis groups similar drugs together by composition or manufacturer, enabling the classification of drugs based on free data.
Abstract: The evolution of humankind has come through the exchange of information and the extraction of knowledge from available information. The process of exchanging information differs according to the medium through which the information is exchanged. The Internet of Things (IoT) contains millions of devices with sensors simultaneously transferring real-time information to devices as rapid streams of data that need to be processed on the go. This leads to the need for effective and efficient approaches for segregating data based on class, relatedness, and differences in the information. The extraction of text from images is performed through Tesseract, irrespective of the language. SciBERT models are used to extract scientific information and are evaluated on a suite of tasks, especially classifying drugs based on free data (tweets, images, etc.). The image- and text-based semantic similarity analysis groups similar drugs together by composition or manufacturer.

Journal ArticleDOI
TL;DR: This article proposes a dynamic personalizing approach in Big Data context using OLAP cubes, based on the Content-Based Filtering, and the Query Expansion techniques, and retrieves results that are “as relevant as possible” compared to the user's initial request.
Abstract: The recent debates on personalizing analyses in a Big Data context are among the most pressing challenges for business intelligence (BI) administrators. The high volume, variety, and velocity of Big Data make it difficult to store, process, and analyze data in traditional systems. These 3Vs (volume, velocity, and variety) create many new challenges and make it difficult to extract the specific needs of users. In addition, the user may face the problem of disorientation: he does not know what information really corresponds to his needs. Information personalization systems aim to overcome these problems of disorientation by using a user profile. The effectiveness of a personalization system in a Big Data context is demonstrated by the relevance and accuracy of the results obtained, according to the needs of the user and the context of the search. Nevertheless, most recent research has focused on personalizing the relational data warehouse and has ignored the integration of the user context into the analysis of OLAP cubes, which are the structures primarily concerned when executing the user's multidimensional queries. To deal with this, the authors propose in this article a dynamic personalization approach in a Big Data context using OLAP cubes, based on content-based filtering and query expansion techniques. The first step of the proposal processes the user queries with an enrichment technique in order to integrate the user profile and search context and reduce the search space in the OLAP cube, and uses the expansion technique to extend the scope of the analysis in the OLAP cube. The retrieved results are “as relevant as possible” with respect to the user's initial request. Afterward, they use information filtering techniques such as content-based filtering to personalize the analysis in the reduced data cube according to term frequency and cosine similarity. Finally, they present a case study and experimental results to evaluate and validate their approach.
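As a rough illustration of the content-based filtering step mentioned above (ranking candidate cube analyses against the enriched user query by term frequency and cosine similarity), the snippet below uses scikit-learn's CountVectorizer on hypothetical descriptions of cube slices; the query-expansion and cube-reduction steps are not reproduced.

```python
# Sketch of the content-based filtering step: represent the enriched user query
# and candidate OLAP cube slices as term-frequency vectors and rank them by
# cosine similarity. Descriptions and the query are hypothetical examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cube_slices = {                      # textual descriptions of candidate analyses
    "sales_by_region_2019": "sales revenue by region europe 2019",
    "sales_by_product_2019": "sales revenue by product category 2019",
    "returns_by_region_2019": "product returns by region europe 2019",
}
enriched_query = "revenue by region europe"   # user query after profile-based enrichment

vectorizer = CountVectorizer()
doc_vectors = vectorizer.fit_transform(list(cube_slices.values()) + [enriched_query])
scores = cosine_similarity(doc_vectors[-1], doc_vectors[:-1]).ravel()

for (name, _), score in sorted(zip(cube_slices.items(), scores),
                               key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```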

Journal ArticleDOI
TL;DR: This paper focuses on ensuring the integrity of the health record with a context-based Merkle tree (CBMT) through a temporal shadow, using a general public ledger (GPL) and a personalized micro ledger (PML).
Abstract: The patient's health record is sensitive and confidential information. The sharing of health information is a first step toward making health services more productive and improving the quality of healthcare services. Decentralized online ledgers built on blockchain-based platforms have already been proposed, and are in use, to address interoperability and privacy issues. However, other challenges remain, in particular scalability, usability, and accessibility as core technical challenges. This paper focuses on ensuring the integrity of the health record with a context-based Merkle tree (CBMT) through a temporal shadow. In this system, two ledgers are used to ensure the integrity of eHealth records: a general public ledger (GPL) and a personalized micro ledger (PML). The context-based Merkle tree (CBMT) is used to aggregate all the transactions at a particular time, where context means time, location, and identity. This is ensured without the help of a third party.
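The CBMT construction itself is not detailed in the abstract; for reference, a plain Merkle tree over a batch of transactions (here, hypothetical health-record events tagged with time, location, and identity context) can be built as below, with the root hash serving as the integrity anchor recorded on a ledger.

```python
# Plain Merkle tree over a batch of (context-tagged) transactions: the root
# hash is the integrity anchor that would be recorded on a ledger.
# Transaction contents are hypothetical; the paper's CBMT aggregation rules
# are not reproduced.
import hashlib, json

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

transactions = [                            # context = time, location, identity
    {"time": "2020-03-01T10:00", "location": "clinic-7", "identity": "patient-42",
     "event": "prescription-added"},
    {"time": "2020-03-01T10:05", "location": "clinic-7", "identity": "doctor-3",
     "event": "record-updated"},
    {"time": "2020-03-01T10:09", "location": "lab-2", "identity": "patient-42",
     "event": "test-result-added"},
]
leaves = [sha256(json.dumps(t, sort_keys=True).encode()) for t in transactions]
print("Merkle root:", merkle_root(leaves).hex())
```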

Journal ArticleDOI
TL;DR: Overall aggregate waiting time was observed to be longer in Maitama District Hospital (MDH), which uses the electronic hospital information system, and the difference was statistically significant; however, wait times at registration and the pay-point were significantly lower for MDH.
Abstract: Evidence to support widespread adoption of digital health tools in hospitals is still lacking, and proof of their acceptance within the health system is largely missing. This study compared patients' pre-consultation waiting time (time spent at the pay-point, registration, and nursing station prior to consultation) in two hospitals, one using a paper-based registration system and the other using an electronic (eHealth) registration system. Structured questionnaires were administered to both patients and health workers to determine and compare factors affecting care-delivery wait times and how the use or non-use of eHealth in the patient registration process influences them. In addition, patient wait times were measured in both hospitals at these care points. Overall aggregate waiting time was observed to be longer in Maitama District Hospital (MDH), which uses the electronic hospital information system. This difference was found to be statistically significant (t = 58.405, p = 0.024). In spite of this, wait times at registration and the pay-point were significantly lower for MDH.
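The comparison reported above is a standard two-sample test of mean waiting times; a sketch of how such a comparison could be computed with SciPy is shown below, using made-up wait-time samples rather than the study's measurements.

```python
# Sketch of a two-sample comparison of pre-consultation waiting times between
# two hospitals. The wait-time samples are made up for illustration; the
# study's actual measurements are not reproduced.
from scipy import stats

mdh_wait = [62, 75, 58, 81, 69, 90, 77]        # minutes, eHealth registration (hypothetical)
paper_wait = [48, 55, 60, 52, 46, 58, 50]      # minutes, paper registration (hypothetical)

t_stat, p_value = stats.ttest_ind(mdh_wait, paper_wait, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```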

Journal ArticleDOI
TL;DR: A novel technique, FIXT, is proposed that dynamically decides the threshold value for predicting the possibility of new link formation, achieving accuracy of up to 93% and outperforming six baseline techniques.
Abstract: The objective of an online social network is to amplify the stream of information among its users. This goal can be accomplished by maximizing interconnectivity among users using link prediction techniques. Existing link prediction techniques use varied heuristics, such as similarity scores, to predict possible connections. Link prediction can be considered a binary classification problem whose possible class outcomes are the presence or absence of a connection. One of the challenges in classification is deciding the threshold value. Since a social network is exceptionally dynamic in nature and each user possesses different features, it is difficult to choose a static, common threshold that decides whether two non-connected users will form a connection. This article proposes a novel technique, FIXT, that dynamically decides the threshold value for predicting the possibility of new link formation. The article evaluates the performance of FIXT against six baseline techniques. The comparative results show that FIXT achieves accuracy of up to 93% and outperforms the baseline techniques.
Keywords: Classification, Ego Network, Link Prediction, Personalized Recommendation, Social Network Analysis, Statistical Approach, Threshold, Web Semantics
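FIXT's threshold computation is not specified in the abstract; the sketch below only illustrates the underlying formulation, treating link prediction as thresholding a similarity score (here, common-neighbour counts on a toy graph with NetworkX). The per-user adaptive threshold shown (a fraction of each user's best score) is an invented placeholder, not the FIXT rule.

```python
# Link prediction as thresholded similarity: score non-connected pairs by
# common neighbours, then keep pairs above a per-user threshold. The adaptive
# threshold used here (60% of the user's best score) is a placeholder and
# not the FIXT rule from the paper.
import networkx as nx

G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e"), ("b", "e")])

def predicted_links(graph, ratio=0.6):
    links = []
    for u in graph:
        candidates = [v for v in graph if v != u and not graph.has_edge(u, v)]
        scores = {v: len(list(nx.common_neighbors(graph, u, v))) for v in candidates}
        if not scores or max(scores.values()) == 0:
            continue
        threshold = ratio * max(scores.values())      # user-specific threshold
        links += [(u, v) for v, s in scores.items() if s >= threshold]
    return links

print(predicted_links(G))
```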

Journal ArticleDOI
TL;DR: This work identifies the similarity between regions in a geographical area using Rough Set methodology so that similar crime-fighting strategies can be prepared for neighbouring regions to alleviate crime.
Abstract: Crime analysis has been carried out to find patterns and associations in crime incidents. A few of the directions in which research has been carried out are the prediction of crime rates, the sociological impacts of crime, the contribution of socio-economic factors to crime, and finding the places where the frequency of crime is unusually high. GIS and spatial information have become an inherent part of crime data as the information is made public by policing agencies. ‘Crime mapping' refers to mapping a crime to a particular place. Geography, or the spatial information of crime, plays an important role in the analysis of crime. Previous research has documented the importance of spatial information in identifying hotspots and showing crime distribution in a particular geography. This work identifies the similarity between regions in a geographical area using Rough Set methodology. By doing so, similar crime-fighting strategies can be prepared for neighbouring regions to alleviate crime.
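As a minimal illustration of the rough-set idea mentioned above, the snippet below groups regions into indiscernibility classes over a few discretised crime attributes; regions that are indiscernible under the chosen attributes would be candidates for shared crime-fighting strategies. The attribute table is invented, not the study's data.

```python
# Sketch of a rough-set indiscernibility relation over regions: regions with
# identical values on the chosen (discretised) attributes fall into the same
# equivalence class and could share crime-fighting strategies.
# The attribute table is invented for illustration.
from collections import defaultdict

regions = {
    "R1": {"burglary": "high", "assault": "low",  "unemployment": "high"},
    "R2": {"burglary": "high", "assault": "low",  "unemployment": "high"},
    "R3": {"burglary": "low",  "assault": "high", "unemployment": "low"},
    "R4": {"burglary": "high", "assault": "low",  "unemployment": "low"},
}

def indiscernibility_classes(table, attributes):
    """Group objects whose values agree on every attribute in `attributes`."""
    classes = defaultdict(list)
    for region, values in table.items():
        key = tuple(values[a] for a in attributes)
        classes[key].append(region)
    return list(classes.values())

print(indiscernibility_classes(regions, ["burglary", "assault"]))
# e.g. [['R1', 'R2', 'R4'], ['R3']] -- R1, R2, R4 are indiscernible on these attributes
```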