
Showing papers in "Journal of Information Processing Systems in 2015"


Journal ArticleDOI
TL;DR: A new variant of Salsa20 that uses chaos theory and can achieve diffusion faster than the original Salsa20 is presented.
Abstract: The stream cipher Salsa20 and its reduced versions are among the fastest stream ciphers available today. However, Salsa20/7 is broken and Salsa20/12 is not as safe as before. Therefore, Salsa20 must completely perform all of the four rounds of encryption to achieve good diffusion in order to resist the known attacks. In this paper, a new variant of Salsa20 that uses chaos theory and can achieve diffusion faster than the original Salsa20 is presented. The method has been tested and benchmarked against the original Salsa20 through a series of tests. Most of the tests show that the proposed two-round chaotic Salsa is faster than the original four-round Salsa20/4, but offers the same diffusion level.
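
The diffusion claim above is usually checked with an avalanche-style test: flip one input bit and count how many output bits change. The sketch below illustrates that measurement on a stand-in round function (a simple 64-bit ARX mixer, not the actual Salsa20 or the chaotic variant described in the paper); the function and its constants are illustrative assumptions.

    import os

    def toy_arx_round(state: int, rounds: int) -> int:
        # Stand-in 64-bit ARX mixer (NOT Salsa20); only used to illustrate
        # how diffusion can be measured as an avalanche fraction.
        mask = (1 << 64) - 1
        x = state
        for _ in range(rounds):
            x = (x + 0x9E3779B97F4A7C15) & mask          # add
            x ^= ((x << 13) | (x >> 51)) & mask          # rotate-xor
            x = (x * 0xBF58476D1CE4E5B9) & mask          # multiply
        return x

    def avalanche(rounds: int, trials: int = 1000) -> float:
        # Average fraction of output bits that flip when one input bit flips.
        total = 0.0
        for _ in range(trials):
            s = int.from_bytes(os.urandom(8), "big")
            bit = 1 << (s % 64)
            diff = toy_arx_round(s, rounds) ^ toy_arx_round(s ^ bit, rounds)
            total += bin(diff).count("1") / 64.0
        return total / trials

    if __name__ == "__main__":
        for r in (1, 2, 4):
            print(f"{r} round(s): avalanche = {avalanche(r):.3f}")  # ~0.5 is ideal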

21 citations


Journal ArticleDOI
TL;DR: A virtual laboratory platform (VLP) named Mercury that allows students to carry out practical work (PW) on different aspects of mobile wireless sensor networks (WSNs) is presented, and the performance of the proposed algorithms, which help familiarize learners with WSNs, is demonstrated.
Abstract: In this paper, we present a virtual laboratory platform (VLP) named Mercury that allows students to carry out practical work (PW) on different aspects of mobile wireless sensor networks (WSNs). Our choice of WSNs is motivated mainly by the real experiments needed in most courses about WSNs. These experiments require an expensive investment and a lot of nodes in the classroom. To illustrate our study, we propose a course related to an energy-efficient and safe weighted clustering algorithm. This algorithm, which is coupled with suitable routing protocols, aims to maintain a stable clustering structure, to prevent most routing attacks on sensor networks, and to guarantee energy savings in order to extend the lifespan of the network. It also offers a better performance in terms of the number of re-affiliations. The platform presented here aims at showing the feasibility, the flexibility, and the reduced cost of such a realization. We demonstrate the performance of the proposed algorithms, which contribute to familiarizing learners with the field of WSNs.
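
As a rough illustration of the kind of weighted clustering covered by such a course, the sketch below elects cluster heads by combining residual energy, node degree, and mobility into a single weight. The node fields and weight coefficients are illustrative assumptions, not the algorithm evaluated in the paper.

    from dataclasses import dataclass

    @dataclass
    class Node:
        node_id: int
        energy: float      # residual energy (higher is better)
        degree: int        # number of one-hop neighbors
        mobility: float    # speed (lower is better)

    def weight(n: Node, w1=0.5, w2=0.3, w3=0.2) -> float:
        # Hypothetical combined weight: favor high energy and connectivity, low mobility.
        return w1 * n.energy + w2 * n.degree - w3 * n.mobility

    def elect_cluster_heads(nodes, ratio=0.2):
        # Pick the top `ratio` fraction of nodes by weight as cluster heads.
        ranked = sorted(nodes, key=weight, reverse=True)
        k = max(1, int(len(ranked) * ratio))
        return [n.node_id for n in ranked[:k]]

    if __name__ == "__main__":
        nodes = [Node(i, energy=100 - i, degree=(i * 3) % 7, mobility=i % 4) for i in range(10)]
        print("cluster heads:", elect_cluster_heads(nodes))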

18 citations


Journal ArticleDOI
TL;DR: A distributed approach-based scheme for determining the migration path of the agents, in which local information is used at each hop to decide the agents' migration, and a local repair mechanism for dealing with faulty nodes are proposed.
Abstract: The use of mobile agents for collaborative processing in wireless sensor networks has gained considerable attention, particularly when mobile agents are used for data aggregation to exploit redundant and correlated data. The efficiency of agent-based data aggregation depends on the agent migration scheme. However, in general, most of the proposed schemes are centralized approach-based schemes in which the sink node determines the migration paths for the agents before dispatching them in the sensor network. The main limitations of such schemes are that they need global network topology information for deriving the migration paths of the agents, which incurs additional communication overhead, since each node has a very limited communication range. In addition, a centralized approach does not provide fault-tolerant and adaptive migration paths. In order to solve such problems, we have proposed a distributed approach-based scheme for determining the migration path of the agents in which, at each hop, local information is used to decide the agents' migration. In addition, we also propose a local repair mechanism for dealing with faulty nodes. The simulation results show that the proposed scheme performs better than existing schemes in the presence of faulty nodes within the network and manages to report the aggregated data to the sink faster.
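
To make the per-hop, local migration decision concrete, the sketch below picks an agent's next hop greedily from a node's one-hop neighbor table using only locally available information (residual energy and whether the neighbor still holds unaggregated data), skipping neighbors marked faulty. The neighbor-table layout and scoring rule are illustrative assumptions, not the paper's scheme.

    def next_hop(neighbors, visited, faulty):
        """Pick the next hop for a mobile agent using only local information.

        neighbors: dict node_id -> {"energy": float, "has_data": bool}
        visited:   set of node_ids the agent has already aggregated
        faulty:    set of node_ids detected as unresponsive
        """
        candidates = [
            (nid, info) for nid, info in neighbors.items()
            if nid not in visited and nid not in faulty
        ]
        if not candidates:
            return None  # no valid neighbor: trigger local repair / return to sink
        # Prefer neighbors that still hold data, then the one with the most energy.
        candidates.sort(key=lambda c: (c[1]["has_data"], c[1]["energy"]), reverse=True)
        return candidates[0][0]

    if __name__ == "__main__":
        table = {2: {"energy": 0.8, "has_data": True},
                 3: {"energy": 0.9, "has_data": False},
                 4: {"energy": 0.4, "has_data": True}}
        print(next_hop(table, visited={1}, faulty={4}))  # -> 2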

18 citations


Journal ArticleDOI
TL;DR: The results revealed that the overall accuracy of SVM classification of textured images is 88%, while the fusion methodology obtained an accuracy of up to 96%, depending on the size of the database.
Abstract: This paper aims to present a supervised classification algorithm based on data fusion for the segmentation of textured images. The feature extraction method we used is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to the support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation result, two strategies based on information fusion were used. We first integrated decision-level fusion strategies by combining decisions made by the SVM classifier within a sliding window. In the second strategy, fuzzy set theory and rules based on probability theory were used to combine the scores obtained by the SVM over a sliding window. Finally, the performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images and showed that the proposed data fusion method improved the classification accuracy compared to applying an SVM classifier alone. The results revealed that the overall accuracy of SVM classification of textured images is 88%, while our fusion methodology obtained an accuracy of up to 96%, depending on the size of the database.
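
The first fusion strategy described above combines per-pixel SVM decisions within a sliding window. The sketch below shows a minimal majority-vote version of that idea on a 2-D label map; the window size and tie-breaking are illustrative assumptions, and the paper's probability- and fuzzy-rule-based combination of scores is not modeled.

    import numpy as np

    def majority_vote_fusion(labels: np.ndarray, win: int = 3) -> np.ndarray:
        """Relabel each pixel with the most frequent label in a win x win window.

        labels: 2-D integer array of initial per-pixel SVM decisions.
        """
        assert win % 2 == 1, "window size must be odd"
        pad = win // 2
        padded = np.pad(labels, pad, mode="edge")
        fused = np.empty_like(labels)
        for i in range(labels.shape[0]):
            for j in range(labels.shape[1]):
                window = padded[i:i + win, j:j + win].ravel()
                fused[i, j] = np.bincount(window).argmax()  # majority label
        return fused

    if __name__ == "__main__":
        noisy = np.array([[0, 0, 1],
                          [0, 1, 1],
                          [0, 0, 1]])
        print(majority_vote_fusion(noisy))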

16 citations


Journal ArticleDOI
TL;DR: This paper presents a collection of web shell files, WebSHArk 1.0, as a standard dataset for current and future studies in malicious web shell detection, and presents some benchmark results obtained by scanning the WebSHArk dataset directory with three web shell scanning tools that are publicly available on the Internet.
Abstract: Web shells are programs that are written for a specific purpose in Web scripting languages, such as PHP, ASP, ASP.NET, JSP, PERL-CGI, etc. Web shells provide a means to communicate with the server's operating system via the interpreter of the web scripting languages. Hence, web shells can execute OS specific commands over HTTP. Usually, web attacks by malicious users are made by uploading one of these web shells to compromise the target web servers. Though there have been several approaches to detect such malicious web shells, no standard dataset has been built to compare various web shell detection techniques. In this paper, we present a collection of web shell files, WebSHArk 1.0, as a standard dataset for current and future studies in malicious web shell detection. To provide baseline results for future studies and for the improvement of current tools, we also present some benchmark results by scanning the WebSHArk dataset directory with three web shell scanning tools that are publicly available on the Internet. The WebSHArk 1.0 dataset is only available upon request via email to one of the authors, due to security and legal issues.

16 citations


Journal ArticleDOI
TL;DR: The most influential attributes, including demographic attributes, affecting whether a student becomes inactive were successfully obtained, and the experimental results show that Rotation Forest, with a decision tree as the base classifier, delivers the best performance compared to the other classifiers.
Abstract: The inactive student rate is becoming a major problem in most open universities worldwide. In Indonesia, roughly 36% of students were found to be inactive in 2005. Data mining has been successfully employed to solve problems in many domains, such as education. We propose a method for preventing students from becoming inactive by mining knowledge from student record systems with several state-of-the-art ensemble methods, such as Bagging, AdaBoost, Random Subspace, Random Forest, and Rotation Forest. The most influential attributes affecting whether a student becomes inactive, including the demographic attributes of marital status and employment, were successfully obtained. The complexity and accuracy of the classification techniques were also compared, and the experimental results show that Rotation Forest, with a decision tree as the base classifier, delivers the best performance compared to the other classifiers.
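
A comparison like the one described above can be set up with off-the-shelf ensemble implementations. The sketch below cross-validates Bagging, AdaBoost, and Random Forest on a synthetic stand-in for the student records (Rotation Forest is not available in scikit-learn, so it is omitted); the generated data and feature count are placeholders, not the paper's dataset.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for student records: binary target = inactive / active.
    X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)

    models = {
        "Bagging": BaggingClassifier(random_state=0),        # decision trees by default
        "AdaBoost": AdaBoostClassifier(random_state=0),
        "RandomForest": RandomForestClassifier(random_state=0),
    }

    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name:12s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")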

15 citations


Journal ArticleDOI
TL;DR: A simple pyramid RAM-based Neural Network architecture is proposed to improve the localization process of mobile sensor nodes in indoor environments by using the capabilities of learning and generalization to reduce the effect of incorrect information and to increase the accuracy of the agent's position.
Abstract: The localization of multi-agents, such as people, animals, or robots, is a requirement to accomplish several tasks. Especially in the case of multi-robotic applications, localization is the process of determining the positions of robots and targets in an unknown environment. Many sensors, like GPS, lasers, and cameras, are utilized in the localization process. However, these sensors require a large amount of computational resources to process complex algorithms, because the process requires environmental mapping. Currently, combinations of multi-robot or swarm robot systems and sensor networks, acting as mobile sensor nodes, have become widely available in indoor and outdoor environments. They allow for a type of efficient global localization that demands a relatively low amount of computational resources and is independent of specific environmental features. However, the inherent instability of the wireless signal does not allow it to be used directly for very accurate position estimation, which makes the localization process of a swarm robotics system difficult. Furthermore, these swarm systems are usually highly decentralized, which makes it hard to synthesize and access global maps and can decrease their flexibility. In this paper, a simple pyramid RAM-based Neural Network architecture is proposed to improve the localization process of mobile sensor nodes in indoor environments. Our approach uses the capabilities of learning and generalization to reduce the effect of incorrect information and to increase the accuracy of the agent's position. The results show that the simple pyramid RAM-based Neural Network approach requires few computational resources, responds quickly to changes in the environment, and enables mobile sensor nodes to complete several tasks, especially localization, in real time.
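
RAM-based (weightless) neural networks learn by writing binary input tuples into RAM nodes and generalize by counting how many RAMs recognize a pattern. The sketch below is a generic WiSARD-style discriminator, not the pyramid architecture proposed in the paper; the binarized-RSSI input encoding and location labels are illustrative assumptions.

    import random

    class Discriminator:
        """One RAM-based discriminator per class (generic WiSARD-style weightless NN)."""
        def __init__(self, input_bits, tuple_size, seed=0):
            rng = random.Random(seed)
            order = list(range(input_bits))
            rng.shuffle(order)                        # fixed random input mapping
            self.tuples = [order[i:i + tuple_size] for i in range(0, input_bits, tuple_size)]
            self.rams = [set() for _ in self.tuples]  # each RAM stores the addresses it has seen

        def _addresses(self, bits):
            return [tuple(bits[i] for i in t) for t in self.tuples]

        def train(self, bits):
            for ram, addr in zip(self.rams, self._addresses(bits)):
                ram.add(addr)

        def score(self, bits):
            return sum(addr in ram for ram, addr in zip(self.rams, self._addresses(bits)))

    def classify(discriminators, bits):
        # Pick the location whose discriminator recognizes the most tuples.
        return max(discriminators, key=lambda loc: discriminators[loc].score(bits))

    if __name__ == "__main__":
        # Hypothetical binarized RSSI fingerprints (8 bits) for two locations.
        train_data = {"room_A": [[1, 1, 0, 0, 1, 0, 0, 1]], "room_B": [[0, 0, 1, 1, 0, 1, 1, 0]]}
        d = {loc: Discriminator(input_bits=8, tuple_size=2, seed=1) for loc in train_data}
        for loc, samples in train_data.items():
            for s in samples:
                d[loc].train(s)
        print(classify(d, [1, 1, 0, 0, 1, 0, 1, 1]))  # likely "room_A"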

15 citations


Journal ArticleDOI
TL;DR: According to the results obtained, the HM achieves more than double the compression ratio compared to that of the JSVM and delivers the same video quality at half the bitrate, yet the HM encodes up to two times slower than the JSVM.
Abstract: High Efficiency Video Coding (HEVC) is the most recent video codec standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of this newly introduced standard is to cater to high-resolution video in low-bandwidth environments with a higher compression ratio. This paper provides a performance comparison between the HEVC and H.264/AVC video compression standards in terms of objective quality, delay, and complexity in the broadcasting environment. The experimental investigation was carried out using six test sequences in the random access configuration of the HEVC test model (HM), the HEVC reference software. This was also carried out in similar configuration settings of the Joint Scalable Video Module (JSVM), the official scalable H.264/AVC reference implementation, running in single-layer mode. According to the results obtained, the HM achieves more than double the compression ratio compared to that of the JSVM and delivers the same video quality at half the bitrate. Yet, the HM encodes up to two times slower than the JSVM. Hence, it can be concluded that the application scenarios of the HM and the JSVM should be judiciously selected considering the availability of system resources. For instance, the HM is not suitable for low-delay applications, but it can be used effectively in low-bandwidth environments.

14 citations


Journal ArticleDOI
TL;DR: The results demonstrate that the watching network is a useful information source and a feasible foundation for information personalization, and they show the expandability of social network-based recommendations to this new type of online social network.
Abstract: This paper aims to assess the feasibility of a new and less-focused type of online sociability (the watching network) as a useful information source for personalized recommendations. In this paper, we recommend scientific articles of interest by using the shared interests between target users and their watching connections. Our recommendations are based on one typical social bookmarking system, CiteULike. The watching network-based recommendations, which use a much smaller amount of user data, produce suggestions that are as good as those of the conventional Collaborative Filtering technique. The results demonstrate that the watching network is a useful information source and a feasible foundation for information personalization. Furthermore, the watching network is substitutable for the anonymous peers of Collaborative Filtering recommendations. This study shows the expandability of social network-based recommendations to this new type of online social network.
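
The core idea above, recommending articles that a user's watched connections have bookmarked but the user has not, can be expressed as a simple set-scoring routine. The sketch below is a minimal illustration with made-up user and article identifiers, not the CiteULike pipeline used in the study.

    from collections import Counter

    def recommend(user, watching, bookmarks, top_n=3):
        """Recommend articles bookmarked by the users that `user` watches.

        watching:  dict user -> set of watched users
        bookmarks: dict user -> set of bookmarked article ids
        """
        scores = Counter()
        for peer in watching.get(user, set()):
            for article in bookmarks.get(peer, set()):
                if article not in bookmarks.get(user, set()):
                    scores[article] += 1          # one vote per watched connection
        return [a for a, _ in scores.most_common(top_n)]

    if __name__ == "__main__":
        watching = {"alice": {"bob", "carol"}}
        bookmarks = {"alice": {"p1"}, "bob": {"p1", "p2", "p3"}, "carol": {"p2"}}
        print(recommend("alice", watching, bookmarks))  # ['p2', 'p3']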

14 citations


Journal ArticleDOI
TL;DR: The use of the support vector machine classifier and the classification accuracy for three different feature vectors are explored in the research.
Abstract: This paper describes the Tezpur University dataset of online handwritten Assamese characters. The online data acquisition process involves the capturing of data as the text is written on a digitizer with an electronic pen. A sensor picks up the pen-tip movements, as well as pen-up/pen-down switching. The dataset contains 8,235 isolated online handwritten Assamese characters. Preliminary results on the classification of online handwritten Assamese characters using the above dataset are presented in this paper. The use of the support vector machine classifier and the classification accuracy for three different feature vectors are explored in our research.

14 citations


Journal ArticleDOI
TL;DR: The experimental results showed that the Universal Kriging plus digital elevation model correction method outperformed the two other methods when applied to temperature, while the interpolation effectiveness of Ordinary Kriging and Universal Kriging was almost the same when applied to both temperature and relative humidity.
Abstract: This paper presents the applications of Kriging spatial interpolation methods for meteorological variables, including temperature and relative humidity, in regions of Vietnam. Three types of interpolation methods are used, which are as follows: Ordinary Kriging, Universal Kriging, and Universal Kriging plus Digital Elevation Model correction. The input meteorological data was collected from 98 ground weather stations throughout Vietnam and the outputs were interpolated temperature and relative humidity gridded fields, along with their error maps. The experimental results showed that the Universal Kriging plus digital elevation model correction method outperformed the two other methods when applied to temperature. The interpolation effectiveness of Ordinary Kriging and Universal Kriging was almost the same when applied to both temperature and relative humidity.
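
For readers unfamiliar with Kriging, the sketch below implements a bare-bones Ordinary Kriging estimator with an exponential variogram in NumPy. The variogram model and its parameters are illustrative assumptions, and this is far simpler than the operational setup (98 stations, DEM correction) described in the paper.

    import numpy as np

    def exp_variogram(h, sill=1.0, rng=200.0, nugget=0.0):
        # Hypothetical exponential semivariogram; parameters would normally be fitted.
        return nugget + sill * (1.0 - np.exp(-h / rng))

    def ordinary_kriging(xy, z, target):
        """Estimate the value at `target` from observations (xy[i], z[i])."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # station-station distances
        A = np.empty((n + 1, n + 1))
        A[:n, :n] = exp_variogram(d)
        A[n, :n] = A[:n, n] = 1.0          # unbiasedness constraint (Lagrange row/column)
        A[n, n] = 0.0
        b = np.empty(n + 1)
        b[:n] = exp_variogram(np.linalg.norm(xy - target, axis=-1))
        b[n] = 1.0
        w = np.linalg.solve(A, b)[:n]      # kriging weights
        return float(w @ z)

    if __name__ == "__main__":
        stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
        temps = np.array([25.0, 27.0, 24.0])
        print(ordinary_kriging(stations, temps, np.array([50.0, 50.0])))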

Journal ArticleDOI
TL;DR: This work presents a secure and robust image watermarking scheme that uses combined reversible DWT-DCT-SVD transformations to increase integrity, authentication, and confidentiality, and the scheme is shown to be robust.
Abstract: We present a secure and robust image watermarking scheme that uses combined reversible DWT-DCT-SVD transformations to increase integrity, authentication, and confidentiality. The proposed scheme uses two different kinds of watermark images: a reversible watermark, which is used for verification (ensuring the integrity and authentication aspects), and a second watermark, defined by a logo image, which provides confidentiality. Our proposed scheme is shown to be robust, while its performance is evaluated with respect to the peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), normalized cross-correlation (NCC), and running time. The robustness of the scheme is also evaluated against different attacks, including a compression attack and a Salt & Pepper attack.

Journal ArticleDOI
TL;DR: This paper examines two approaches that reduce the Universal Background Model in an automatic dialect identification system across the five following Arabic Maghreb dialects and shows that these approaches significantly improve identification performance over purely acoustic features.
Abstract: While Modern Standard Arabic is the formal spoken and written language of the Arab world, dialects are the major communication mode for everyday life. Therefore, identifying a speaker's dialect is critical in the Arabic-speaking world for speech processing tasks, such as automatic speech recognition or identification. In this paper, we examine two approaches that reduce the Universal Background Model (UBM) in the automatic dialect identification system across the five following Arabic Maghreb dialects: Moroccan, Tunisian, and the three dialects of the western (Oranian), central (Algiersian), and eastern (Constantinian) regions of Algeria. We applied our approaches to the Maghreb dialect detection domain, which contains a collection of 10-second utterances, and we compared the precision obtained on the dialect samples by a baseline GMM-UBM system with that of our own improved GMM-UBM system, which uses a Reduced UBM algorithm. Our experiments show that our approaches significantly improve identification performance over purely acoustic features, with an identification rate of 80.49%.
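
As a highly simplified stand-in for GMM-UBM scoring, the sketch below fits one Gaussian mixture per dialect on acoustic feature vectors (e.g., MFCC frames) and labels an utterance by the model with the highest average log-likelihood. It omits the UBM training and MAP adaptation that a real GMM-UBM system, including the paper's reduced-UBM variant, relies on; the data shapes and dialect names are placeholders.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_dialect_models(features_by_dialect, n_components=8, seed=0):
        # features_by_dialect: dict dialect -> (n_frames, n_features) array of training frames
        return {d: GaussianMixture(n_components=n_components, covariance_type="diag",
                                   random_state=seed).fit(X)
                for d, X in features_by_dialect.items()}

    def identify(models, utterance_frames):
        # Average frame log-likelihood under each dialect model; pick the best.
        return max(models, key=lambda d: models[d].score(utterance_frames))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        train = {"moroccan": rng.normal(0.0, 1.0, (500, 13)),
                 "tunisian": rng.normal(2.0, 1.0, (500, 13))}
        models = train_dialect_models(train)
        test = rng.normal(2.0, 1.0, (120, 13))          # stand-in for a 10-second utterance
        print(identify(models, test))                   # expected: "tunisian"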

Journal ArticleDOI
TL;DR: In this article, a method that computes the corresponding crisp order for the fuzzy relation in a given fuzzy formal context is proposed, and the formal context obtained using the proposed method provides fewer concepts than the original fuzzy context.
Abstract: Fuzzy Formal Concept Analysis (FCA) is a mathematical tool for the effective representation of imprecise and vague knowledge. However, with a large number of formal concepts from a fuzzy context, the task of knowledge representation becomes complex. Hence, knowledge reduction is an important issue in FCA with a fuzzy setting. The purpose of the current study is to address this issue by proposing a method that computes the corresponding crisp order for the fuzzy relation in a given fuzzy formal context. The formal context obtained using the proposed method provides fewer concepts than the original fuzzy context. The resultant lattice structure is a reduced form of its corresponding fuzzy concept lattice and preserves the specialized and generalized concepts, as well as stability. This study also provides a step-by-step demonstration of the proposed method and its application.
Keywords: Crisp Context, Concept Lattice, Formal Concept Analysis, Fuzzy Formal Concept, Fuzzy Relation, Knowledge Reduction
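
To make the move from a fuzzy to a crisp context concrete, the sketch below applies a simple alpha-cut to a fuzzy object-attribute relation and then enumerates the formal concepts of the resulting crisp context by brute force. The alpha-cut is a generic construction used here for illustration only; it is not necessarily the crisp-order computation proposed in the paper.

    from itertools import combinations

    def alpha_cut(fuzzy, alpha=0.5):
        # Crisp incidence relation: keep (object, attribute) pairs with membership >= alpha.
        return {(g, m) for (g, m), v in fuzzy.items() if v >= alpha}

    def concepts(objects, attributes, incidence):
        # Brute-force formal concepts of a (small) crisp context:
        # every concept intent is the closure of some attribute subset.
        found = set()
        for r in range(len(attributes) + 1):
            for intent in combinations(sorted(attributes), r):
                extent = {g for g in objects if all((g, m) in incidence for m in intent)}
                closed = tuple(sorted(m for m in attributes
                                      if all((g, m) in incidence for g in extent)))
                found.add((tuple(sorted(extent)), closed))
        return found

    if __name__ == "__main__":
        fuzzy = {("o1", "a"): 0.9, ("o1", "b"): 0.3,
                 ("o2", "a"): 0.6, ("o2", "b"): 0.8}
        crisp = alpha_cut(fuzzy, alpha=0.5)
        for extent, intent in sorted(concepts({"o1", "o2"}, {"a", "b"}, crisp)):
            print(extent, intent)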

Journal ArticleDOI
TL;DR: A fault tolerance model of WVSNs is proposed for efficient post-disaster management in order to assist rescue and preparedness operations, and its benefits in terms of reliability and high coverage are demonstrated through simulation.
Abstract: Wireless Video Sensor Networks (WVSNs) have become a leading solution in many important applications, such as disaster recovery. When using WVSNs in disaster scenarios, the main goal is to achieve a successful immediate response, including search, location, and rescue operations. Achieving such an objective in the presence of obstacles and the risk of sensor damage caused by disasters is a challenging task. In this paper, we propose a fault tolerance model of WVSNs for efficient post-disaster management in order to assist rescue and preparedness operations. To get an overview of the monitored area, we used video sensors with a rotation capability that enables them to switch to the best direction for getting better multimedia coverage of the disaster area, while minimizing the effect of occlusions. By constructing different cover sets based on the field-of-view redundancy, we can provide robust fault tolerance to the network. We demonstrate the benefits of our proposal in terms of reliability and high coverage through simulation.

Journal ArticleDOI
TL;DR: This paper proposes using the combination of Affine Scale Invariant Feature Transform (SIFT) and Probabilistic Similarity for face recognition under a large viewpoint change and achieves impressively better recognition accuracy than the other algorithms compared on the FERET database.
Abstract: Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world application of face recognition is still challenging. In this paper, we propose using the combination of Affine Scale Invariant Feature Transform (SIFT) and Probabilistic Similarity for face recognition under a large viewpoint change. Affine SIFT is an extension of the SIFT algorithm to detect affine invariant local descriptors. Affine SIFT generates a series of different viewpoints using affine transformation. In this way, it allows for a viewpoint difference between the gallery face and the probe face. However, the human face is not planar, as it contains significant 3D depth. Affine SIFT does not work well for a significant change in pose. To complement this, we combined it with probabilistic similarity, which obtains the log-likelihood between the probe and gallery faces based on the sum of squared differences (SSD) distribution in an offline learning process. Our experimental results show that our framework achieves impressively better recognition accuracy than the other algorithms compared on the FERET database.

Journal ArticleDOI
TL;DR: In this work, network penetration testing and auditing of the Redhat operating system (OS), one of the most popular OSs for Internet applications, are highlighted, and the results are intended as a reference for practitioners to protect their systems from cyber-attacks.
Abstract: Along with the evolution of the Internet and its new emerging services, the quantity and impact of attacks have been continuously increasing. Currently, the technical capability required to attack has tended to decrease. On the contrary, hacking tools keep evolving and growing and are becoming simpler, more comprehensive, and more accessible to the public. In this work, network penetration testing and auditing of the Redhat operating system (OS), one of the most popular OSs for Internet applications, are highlighted. Several types of attacks from different sides and new attack methods were attempted, such as scanning for reconnaissance, password guessing, gaining privileged access, and flooding the victim machine to decrease its availability. Some analyses of network auditing and forensics on the victim server are also presented in this paper. Our proposed procedure aims to confirm whether the system is hackable or not, and we expect it to be used as a reference for practitioners to protect their systems from cyber-attacks.
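
The reconnaissance step mentioned above usually starts with port scanning. The sketch below is a minimal TCP connect scan using only the standard library, intended for hosts you are authorized to test; the target address and port list are placeholders, and this is a generic illustration rather than the paper's test procedure.

    import socket

    def scan(host: str, ports, timeout: float = 0.5):
        """Return the subset of `ports` that accept a TCP connection on `host`."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        # Placeholder target: only scan machines you own or are authorized to audit.
        print(scan("127.0.0.1", [22, 80, 443, 8080]))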

Journal ArticleDOI
TL;DR: This paper considers the demand-side management issue that exists for a group of consumers (houses) that are equipped with renewable energy and storage units (battery) and tries to find the optimal scheduling for their home appliances, in order to reduce their electricity bills.
Abstract: Smart grids offer new solutions for electricity consumers as a means to help them use energy in an efficient way. In this paper, we consider the demand-side management issue that exists for a group of consumers (houses) that are equipped with renewable energy (wind turbines) and storage units (battery), and we try to find the optimal scheduling for their home appliances in order to reduce their electricity bills. Our simulation results prove the effectiveness of our approach, as they show a significant reduction in electricity costs when using renewable energy and battery storage.
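
A toy version of the appliance-scheduling problem described above can be solved greedily: shift each flexible appliance to the hours where the effective grid cost (load minus locally available renewable energy, priced at the tariff) is lowest. The hourly tariff, wind forecast, and appliance list below are made-up inputs, and the greedy rule ignores the battery, so it is a simplification of the optimization studied in the paper.

    def schedule(appliances, price, wind_kwh):
        """Assign each flexible appliance the cheapest hours after using local wind energy.

        appliances: list of (name, load_kwh_per_hour, hours_needed)
        price:      list of 24 grid prices per kWh
        wind_kwh:   list of 24 forecast wind generation values (kWh)
        """
        plan, cost = {}, 0.0
        for name, load, hours_needed in appliances:
            # Effective cost of running this appliance in hour h: pay only for the
            # part of the load the wind forecast cannot cover (greedy, no battery).
            effective = [(max(0.0, load - wind_kwh[h]) * price[h], h) for h in range(24)]
            chosen = sorted(effective)[:hours_needed]
            plan[name] = sorted(h for _, h in chosen)
            cost += sum(c for c, _ in chosen)
        return plan, cost

    if __name__ == "__main__":
        price = [0.10] * 7 + [0.25] * 12 + [0.15] * 5   # made-up tariff
        wind = [1.0] * 6 + [0.2] * 18                   # made-up wind forecast
        appliances = [("washer", 1.5, 2), ("ev_charger", 3.0, 4)]
        print(schedule(appliances, price, wind))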

Journal ArticleDOI
TL;DR: This paper deals with IDEA's shortcoming of generating weak keys and provides a definite solution for converting the weaker keys into stronger ones by applying a genetic approach to the weak keys.
Abstract: Cryptography aims at transmitting secure data over an unsecure network in coded form so that only the intended recipient can analyze it. Communication through messages, emails, or various other modes requires high security so as to maintain the confidentiality of the content. This paper deals with IDEA's shortcoming of generating weak keys. If these keys are used for encryption and decryption, the ciphertext corresponding to the plaintext may be easily predicted. By applying the genetic approach, which is a well-known optimization technique, to the weak keys, we obtained a definite solution for converting the weaker keys into stronger ones. The chances of generating a weak key in IDEA are very rare, but if one is produced, it could lead to a huge risk of attacks being made on the key, as well as on the information. Hence, measures have been taken to safeguard the key and to ensure the privacy of information.

Journal ArticleDOI
TL;DR: This paper carefully studies Persian stemmers, which are classified into three main classes: structural stemmers, lookup table stemmers, and statistical stemmers; it describes the algorithms of each class and presents the weaknesses and strengths of each stemmer.
Abstract: In linguistics, stemming is the operation of reducing words to their more general form, which is called the ‘stem’. Stemming is an important step in information retrieval systems, natural language processing, and text mining. Information retrieval systems are evaluated by metrics like precision and recall, and the fundamental superiority of one information retrieval system over another is measured by them. Stemmers decrease the indexed file, increase the speed of information retrieval systems, and improve the performance of these systems by boosting precision and recall. There are few Persian stemmers and most of them work based on morphological rules. In this paper we carefully study Persian stemmers, which are classified into three main classes: structural stemmers, lookup table stemmers, and statistical stemmers. We describe the algorithms of each class carefully and present the weaknesses and strengths of each Persian stemmer. We also propose some metrics with which to compare and evaluate the stemmers.

Journal ArticleDOI
TL;DR: The infectious watermarking model (IWM) is presented, which uses pathogen, mutant, and contagion as the infectious watermark and defines the techniques of infectious watermark generation and authentication, kernel-based infectious watermarking, and content-based infectious watermarking.
Abstract: This paper presents the infectious watermarking model (IWM) for the protection of video contents, which is based on biological virus modeling of the infectious route and procedure. Our infectious watermarking is designed as a new protection paradigm for video contents, regarding the hidden watermark used for video protection as an infectious virus, the video content as the host, and the codec as the contagion medium. We used pathogen, mutant, and contagion as the infectious watermark and defined the techniques of infectious watermark generation and authentication, kernel-based infectious watermarking, and content-based infectious watermarking. We experimented with our watermarking model by using existing watermarking methods as the kernel-based infectious watermarking and content-based infectious watermarking media, and verified the practical applications of our model based on these experiments.

Journal ArticleDOI
TL;DR: An Improved Subjective Logic Model with Evidence Driven (ISLM-ED) is proposed that expands and enriches subjective logic theory and includes a multi-agent unified fusion operator and a dynamic function for the base rate and the non-informative prior weight driven by changes in evidence.
Abstract: In Josang’s subjective logic, the fusion operator is not able to fuse three or more opinions at a time and it cannot consider the effect of time factors on fusion. Also, the base rate (a) and the non-informative prior weight (C) cannot change dynamically. In this paper, we propose an Improved Subjective Logic Model with Evidence Driven (ISLM-ED) that expands and enriches the subjective logic theory. It includes a multi-agent unified fusion operator and a dynamic function for the base rate (a) and the non-informative prior weight (C) driven by changes in evidence. The multi-agent unified fusion operator not only satisfies the commutative and associative laws but is also consistent with researchers’ cognitive rules. A strict mathematical proof is given in this paper. Finally, through simulation experiments, the results show that the ISLM-ED is more reasonable and effective and that it can be better adapted to a changing environment.
Keywords: Dynamic Weight, Evidence Driven, Subjective Logic
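
For context, the sketch below implements the classic two-opinion cumulative (consensus) fusion operator from Josang's subjective logic, which the paper generalizes to multiple agents and evidence-driven parameters. The belief, disbelief, and uncertainty formulas are the standard ones for two non-dogmatic opinions; the base-rate handling here is a stated simplification, since the full operator defines the fused base rate separately.

    from dataclasses import dataclass

    @dataclass
    class Opinion:
        belief: float
        disbelief: float
        uncertainty: float
        base_rate: float = 0.5

    def consensus(a: Opinion, b: Opinion) -> Opinion:
        """Classic two-opinion cumulative (consensus) fusion for non-dogmatic opinions."""
        k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
        if k == 0:
            raise ValueError("dogmatic opinions (u = 0) require the limit form of the operator")
        return Opinion(
            belief=(a.belief * b.uncertainty + b.belief * a.uncertainty) / k,
            disbelief=(a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / k,
            uncertainty=(a.uncertainty * b.uncertainty) / k,
            base_rate=(a.base_rate + b.base_rate) / 2,   # simplification; the full operator
        )                                                # defines the fused base rate separately

    if __name__ == "__main__":
        w1 = Opinion(0.6, 0.1, 0.3)
        w2 = Opinion(0.4, 0.2, 0.4)
        print(consensus(w1, w2))   # fused opinion with lower uncertainty than either input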

Journal ArticleDOI
TL;DR: This paper reviews several fixed-complexity vector perturbation techniques, investigates their performance under both perfect and imperfect channel knowledge at the transmitter, and examines the combination of block diagonalization with vector perturbation and its merits.
Abstract: Recently, there has been an increasing demand for high-data-rate services, and several multiuser multiple-input multiple-output (MU-MIMO) techniques have been introduced to meet these demands. Among these techniques, vector perturbation combined with linear precoding techniques, such as zero-forcing and minimum mean-square error, has been proven to be efficient in reducing the transmit power and hence performs close to the optimum algorithm. In this paper, we review several fixed-complexity vector perturbation techniques and investigate their performance under both perfect and imperfect channel knowledge at the transmitter. Also, we investigate the combination of block diagonalization with vector perturbation and outline its merits.

Journal ArticleDOI
TL;DR: A memory-efficient tree-based anti-collision protocol is presented to identify memoryless RFID (Radio Frequency Identification) tags that may be attached to products, utilizing two bit arrays instead of a stack or queue and requiring only space.
Abstract: This paper presents a memory-efficient tree-based anti-collision protocol to identify memoryless RFID (Radio Frequency Identification) tags that may be attached to products. The proposed deterministic scheme utilizes two bit arrays instead of a stack or queue and requires only space, which is better than the earlier schemes that use at least space, where n is the length of a tag ID in bits. Also, the size n of each bit array is independent of the number of tags to identify. Our simulation results show that our bit array scheme consumes much less memory space than the earlier schemes utilizing a queue or stack.

Journal ArticleDOI
TL;DR: A novel robust medical image watermarking scheme is proposed that remedies this problem by embedding the watermark without modifying the original host image, based on the visual cryptography concept and the dominant blocks of wavelet coefficients.
Abstract: In this paper, a novel robust medical image watermarking scheme is proposed. In traditional methods, the added watermark may alter the host medical image in an irreversible manner and may mask subtle details. Consequently, we propose a method for medical image copyright protection that may remedy this problem by embedding the watermark without modifying the original host image. The proposed method is based on the visual cryptography concept and the dominant blocks of wavelet coefficients. The logic behind using the dominant blocks map is that local features, such as contours or edges, are unique to each image. The experimental results show that the proposed method can withstand several image processing attacks, such as cropping, filtering, compression, etc.
Keywords: Copyright Protection, Mammograms, Medical Image, Robust Watermarking, Visual Cryptography
1. Introduction
The rapid advancement of the Internet and multimedia systems in recent years has led to the creation of many useful applications, such as telemedicine, which requires exposing medical data over open networks. Due to this development, digital media, such as images, video, audio, or text, can be easily distributed, duplicated, and modified. However, in a number of medical applications, special safety and confidentiality are required for medical images, because critical assessments are made based on those images. Therefore, there is a need to provide strict security to ensure that only legitimate changes occur [1]. Digital image watermarking techniques have been developed to protect the intellectual property of a digital image. This is achieved by embedding the copyright information, which is also called “the watermark pattern,” into the original image. Copyright protection is achieved by robust watermarking, while image authentication is usually achieved by fragile schemes. A fragile watermarking scheme detects any manipulation made to a digital image to guarantee content integrity, while a robust scheme prevents the watermark from being removed unless the quality of the image is greatly reduced.

Journal ArticleDOI
TL;DR: An adaptive SD allocation (ASDA) algorithm is proposed that utilizes a single indicator, a distributed neighboring slot incrementer (DNSI), and the experimental results demonstrate that the ASDA has a superior performance over other methods from the viewpoint of resource efficiency.
Abstract: Beacon scheduling is considered to be one of the most significant challenges for energy-efficient Low-Rate Wireless Personal Area Network (LR-WPAN) multi-hop networks. The emerging new standard, IEEE802.15.4e, contains a distributed beacon scheduling functionality that utilizes a specific bitmap and multi-superframe structure. However, this new standard does not provide a critical recipe for superframe duration (SD) allocation in beacon scheduling. Therefore, in this paper, we first introduce three different SD allocation approaches, LSB first, MSB first, and random. Via experiments we show that IEEE802.15.4e DSME beacon scheduling performs differently for different SD allocation schemes. Based on our experimental results we propose an adaptive SD allocation (ASDA) algorithm. It utilizes a single indicator, a distributed neighboring slot incrementer (DNSI). The experimental results demonstrate that the ASDA has a superior performance over other methods from the viewpoint of resource efficiency.
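
The three allocation approaches compared above differ only in how a free superframe-duration (SD) slot is picked from the allocation bitmap. The sketch below isolates that selection logic; the bitmap representation is an illustrative assumption, and none of the IEEE 802.15.4e DSME signalling or the proposed ASDA/DNSI mechanism is modeled.

    import random

    def allocate_sd(bitmap, policy="lsb", rng=None):
        """Pick a free slot index in the SD allocation bitmap (True = occupied)."""
        rng = rng or random.Random(0)
        free = [i for i, used in enumerate(bitmap) if not used]
        if not free:
            return None                      # no superframe duration slot available
        if policy == "lsb":
            slot = free[0]                   # lowest free index first
        elif policy == "msb":
            slot = free[-1]                  # highest free index first
        elif policy == "random":
            slot = rng.choice(free)
        else:
            raise ValueError(policy)
        bitmap[slot] = True                  # mark the slot as occupied
        return slot

    if __name__ == "__main__":
        for policy in ("lsb", "msb", "random"):
            bitmap = [True, False, True, False, False, False]
            print(policy, "->", allocate_sd(bitmap, policy))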

Journal ArticleDOI
TL;DR: This paper presents the applications of spatial interpolation and assimilation methods for satellite and ground meteorological data, including temperature, relative humidity, and precipitation, in regions of Vietnam, and shows that the accuracy of temperature and humidity is not significantly improved by assimilation because of low MODIS retrieval quality due to cloud contamination.
Abstract: This paper presents the applications of spatial interpolation and assimilation methods for satellite and ground meteorological data, including temperature, relative humidity, and precipitation, in regions of Vietnam. In this work, Universal Kriging is used for spatially interpolating the ground data and its interpolated results are assimilated with the corresponding satellite data to obtain better gridded data. The input meteorological data was collected from 98 ground weather stations located all over Vietnam, whereas the satellite data consists of the MODIS Atmospheric Profiles product (MOD07), the ASTER Global Digital Elevation Map (ASTER DEM), and the Tropical Rainfall Measuring Mission (TRMM) data over six years. The outputs are gridded fields of temperature, relative humidity, and precipitation. The empirical results were evaluated using the root mean square error (RMSE) and the mean percent error (MPE), which illustrate that Universal Kriging interpolation obtains higher accuracy than other forms of Kriging, whereas the assimilation for precipitation reduces the RMSE gradually and the MPE significantly. The results also reveal that the accuracy of temperature and humidity is not significantly improved by assimilation because of low MODIS retrieval quality due to cloud contamination.

Journal ArticleDOI
TL;DR: This work explores a new edit distance method that uses consonant normalization and a normalization factor, which is appropriate for agglutinative languages like Korean.
Abstract: Edit distance metrics are widely used for many applications, such as string comparison and spelling error correction. Hamming distance is a metric for two equal-length strings, and Damerau-Levenshtein distance is a well-known metric for making spelling corrections through string-to-string comparison. Previous distance metrics seem to be appropriate for alphabetic languages like English and other European languages. However, the conventional edit distance criterion is not the best method for agglutinative languages like Korean. The reason is that two or more letter units make up a Korean character, which is called a syllable. This mechanism of syllable-based word construction in the Korean language causes an edit distance calculation to be inefficient. As such, we have explored a new edit distance method using consonant normalization and a normalization factor.
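
The syllable issue described above becomes visible once a Hangul syllable is decomposed into its letter units (jamo). The sketch below decomposes precomposed syllables using the standard Unicode arithmetic and then compares strings with a plain Levenshtein distance at the jamo level; this is a generic jamo-level distance, not the consonant-normalized metric proposed in the paper.

    CHOSEONG = [chr(0x1100 + i) for i in range(19)]          # leading consonants
    JUNGSEONG = [chr(0x1161 + i) for i in range(21)]         # vowels
    JONGSEONG = [""] + [chr(0x11A8 + i) for i in range(27)]  # optional trailing consonants

    def to_jamo(text: str) -> str:
        # Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into jamo.
        out = []
        for ch in text:
            code = ord(ch) - 0xAC00
            if 0 <= code <= 11171:
                out += [CHOSEONG[code // 588], JUNGSEONG[(code % 588) // 28], JONGSEONG[code % 28]]
            else:
                out.append(ch)
        return "".join(out)

    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    if __name__ == "__main__":
        w1, w2 = "한국", "항국"
        print(levenshtein(w1, w2))                      # syllable-level distance
        print(levenshtein(to_jamo(w1), to_jamo(w2)))    # jamo-level distance (finer-grained units)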

Journal ArticleDOI
TL;DR: The main finding of the paper is that using MIMO on both hops, applying TAS on both links with a weak direct link, and using full-rate OFDM with a sub-carrier for an individual link provide better results compared to other models.
Abstract: In evaluating the performance of a dual-hop wireless link, the effects of large- and small-scale fading have to be considered. To overcome these fading effects, several schemes, such as multiple-input multiple-output (MIMO) with orthogonal space time block codes (OSTBC), different combining schemes at the relay and receiving end, and orthogonal frequency division multiplexing (OFDM), are used in both the transmitting and the relay links. In this paper, we first compare the performance of a two-hop wireless link under different combinations of space diversity in the first and second hop for the amplify-and-forward (AF) case. Our second task in this paper is to incorporate the weak signal of a direct link; by applying the channel model of two random variables (one for the direct link and another for the relayed link), we obtain a very impressive result at a low signal-to-noise ratio (SNR) that is comparable with other models at a higher SNR. Our third task is to consider three other schemes in a two-hop wireless link: the use of transmit antenna selection (TAS) on both links with a weak direct link, a distributed Alamouti scheme in the two-hop link, and a single relay antenna with OFDM sub-carriers. Finally, all of the schemes mentioned above are compared to select the best possible model. The main finding of the paper is as follows: using MIMO on both hops, applying TAS on both links with a weak direct link, and using full-rate OFDM with a sub-carrier for an individual link provide better results compared to the other models.

Journal ArticleDOI
TL;DR: This paper presents how the emotional dimensions obtained from real viewers can be used as an important input for computing which parts are the most interesting in the total running time of a film.
Abstract: For different reasons, many viewers like to watch a summary of a film without having to waste their time. Traditionally, video films were analyzed manually to produce a summary, but this costs a significant amount of work time. Therefore, it has become urgent to propose a tool for automatic video summarization. Automatic video summarization aims at extracting all of the important moments in which viewers might be interested. The summarization criteria can differ from one video to another. This paper presents how the emotional dimensions obtained from real viewers can be used as an important input for computing which parts are the most interesting in the total running time of a film. Our results, which are based on the lab experiments that were carried out, are significant and promising.
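
A minimal way to turn viewer emotion signals into a summary is to score fixed-length segments by an emotional dimension (e.g., mean arousal) and keep the top-scoring ones in chronological order. The sketch below does exactly that with made-up arousal samples; the segment length and the use of mean arousal as the interest score are illustrative assumptions, not the model used in the paper.

    def summarize(arousal, fps=25, seg_seconds=10, keep=3):
        """Pick the `keep` most arousing segments and return them in playback order.

        arousal: per-frame arousal values collected from viewers (list of floats)
        """
        seg_len = fps * seg_seconds
        segments = []
        for start in range(0, len(arousal), seg_len):
            chunk = arousal[start:start + seg_len]
            score = sum(chunk) / len(chunk)            # mean arousal as interest score
            segments.append((score, start / fps, (start + len(chunk)) / fps))
        best = sorted(segments, reverse=True)[:keep]
        return sorted((s, e) for _, s, e in best)      # chronological (start, end) times in seconds

    if __name__ == "__main__":
        import random
        random.seed(0)
        signal = [random.random() for _ in range(25 * 120)]   # 2 minutes of fake samples
        print(summarize(signal))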