Showing papers in "International Journal of Digital Content Technology and Its Applications in 2010"


Journal ArticleDOI
TL;DR: The methodology presents a case study approach involving the use of one such state-of-the-art technology in the acquisition of measurement data at a metropolitan university in the UK, and advises on the application of the 3D body scanner in research and sampling activities.
Abstract: Complications with garment sizing and poor fit inconvenience many consumers, who become dissatisfied with such provision on the high street. It is evident that human measurement and classification of the human body based on size and shape are prerequisites for accurate clothing fit and therefore fundamental to production and consumption. With advancement in technology, automated 3D body scanners designed to capture the shape and size of a human body in seconds and produce a true-to-scale 3D body model have been developed. The data generated have extensive uses, including anthropometric dimensions and morphology for the creation of avatars and mannequins. This technology is currently viewed as the frontier in solving fit problems, for instance by generating accurate data for the development of size charts, enabling a pragmatic approach to mass customisation and facilitating virtual model fit trials that enhance online clothing shopping experiences. Consumers have become savvier than ever, and as the demand for well-fitted garments increases, 3D body scanning technology is being viewed as a significant bridge between craftsmanship and computer-aided design technologies. Currently being explored by academic research and not yet widely implemented across retail sectors, it is expected to improve consumer satisfaction and reduce commercial waste due to ‘ill-fit’ returns. There is therefore a vital need to authenticate procedures and establish systems for practice. This paper seeks to assess the application of one such technology to human measurement for clothing provision and tests procedures for its implementation. The methodology presents a case study approach involving the use of one such state-of-the-art technology in the acquisition of measurement data at a metropolitan university in the UK, and advises on the application of the 3D body scanner in research and sampling activities.

110 citations


Journal ArticleDOI
TL;DR: This paper introduces new services in Cloud Computing environment and proposes some more fundamental services which are still undefined to the researchers in most of the cases.
Abstract: In this paper, we introduce new services in the Cloud Computing environment. Three Cloud Computing services have already been classified by researchers: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Each of the defined services serves a distinct purpose. Here, we propose some further fundamental services which in most cases have not yet been defined by researchers. Each type of utility service is illustrated with an example related to an engineering college scenario. An engineering college hierarchical framework is used as a case study, wherein each cloud is defined as a private, public or hybrid cloud. This hierarchical design is based on the typical resource environment found in many academic institutions.

49 citations


Journal ArticleDOI
TL;DR: Experimental results show that the algorithm identifies neighbors more accurately and effectively enhances the quality of the recommendation system.
Abstract: The collaborative filtering algorithm is one of the most successful technologies used in personalized recommendation systems. However, traditional algorithms focus only on user ratings and do not consider changes in user interest or the credibility of rating data, which seriously affects the quality of the system's recommendations. To solve this problem, this paper presents an improved algorithm. Firstly, each user rating is weighted by a gradual time decay and a credit assessment during user similarity measurement, and then several users highly similar to the active user are selected as neighbors. Finally, the active user's preference for an item is represented by the average scores of these neighbors. Experimental results show that the algorithm identifies neighbors more accurately and effectively enhances the quality of the recommendation system. Keywords: Collaborative Filtering, Similarity Measure, Time Weight, Trust Evaluation

49 citations
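
To make the time-weighting idea above concrete, the following minimal Python sketch computes a time-decayed similarity and a neighbour-averaged prediction. It is an illustration only, not the paper's algorithm: the credit assessment of ratings is omitted, and the decay constant, the convention that a rating of 0 means "unrated", and all function names are assumptions.

import numpy as np

def time_weight(t_rating, t_now, decay=0.01):
    # Older ratings get smaller weights; the decay constant is illustrative.
    return np.exp(-decay * (t_now - t_rating))

def weighted_similarity(r_a, r_b, w):
    # Cosine similarity over co-rated items, scaled by per-item weights (0 = unrated).
    mask = (r_a > 0) & (r_b > 0)
    if not mask.any():
        return 0.0
    a, b, ww = r_a[mask], r_b[mask], w[mask]
    den = np.sqrt(np.sum(ww * a * a)) * np.sqrt(np.sum(ww * b * b))
    return float(np.sum(ww * a * b) / den) if den else 0.0

def predict(ratings, times, active, item, k=3, t_now=100.0):
    # Average the item's ratings over the k most similar neighbours who rated it.
    w = time_weight(times[active], t_now)
    sims = np.array([weighted_similarity(ratings[active], ratings[u], w)
                     if u != active else -np.inf for u in range(len(ratings))])
    neighbours = [u for u in np.argsort(sims)[::-1] if ratings[u, item] > 0][:k]
    return float(ratings[neighbours, item].mean()) if neighbours else 0.0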


Journal ArticleDOI
TL;DR: A qualitative study and empirical results evaluating how digital signage screens can improve the image of shopping malls and create a favourable shopping atmosphere showed that the effects are influenced by the audio and video contents and also by the locations of the screens.
Abstract: Digital signage, sometimes known as a Digital Communications Network (DCN) or private plasma screen network, has been little researched to date. This paper puts forward the view that signage is a very important facet of modern life, given its ubiquitous and visible role in advertising the names of brands, services and the corporate names of supplier organisations and high street stores, in directing people to special events, and so on. The paper focuses on how consumer shopping behaviour can be enhanced by an atmospheric stimulus and the ways in which digital signage can affect consumer perceptions of the brand names or image of shopping malls. A qualitative study is carried out with empirical results evaluating how digital signage screens can improve the image of shopping malls and create a favourable shopping atmosphere. The findings showed that the effects are influenced by the audio and video content and also by the locations of the screens. In addition to the obvious application to shopping malls in improving business-to-consumer appeal to shoppers, the findings are of use to suppliers of digital signage in business-to-business marketing of their systems to shopping mall tenants.

44 citations


Journal ArticleDOI
TL;DR: The experiment results show that the proposed retrieval method is more efficient than the traditional CBIR method based on the single visual feature and other methods combining color and texture.
Abstract: Content based image retrieval (CBIR) has been one of the most important research areas in computer science for the last decade. A retrieval method which combines color and texture features is proposed in this paper. According to the characteristics of image texture, texture information is represented by the Dual-Tree Complex Wavelet Transform (DT-CWT) and a rotated wavelet filter (RWF). We choose the color histograms in RGB and HSV color space as the color feature. The experimental results show that this method is more efficient than the traditional CBIR method based on a single visual feature and other methods combining color and texture.

37 citations
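
The colour part of such a combined descriptor can be sketched with standard OpenCV calls: histograms in RGB and HSV space concatenated into one feature vector. The texture branch (DT-CWT and rotated wavelet filters) is not reproduced here, and the bin counts and distance measure are illustrative assumptions rather than the paper's settings.

import cv2
import numpy as np

def colour_feature(path, bins=8):
    # Concatenate 3D colour histograms computed in BGR and HSV space.
    bgr = cv2.imread(path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h_rgb = cv2.calcHist([bgr], [0, 1, 2], None, [bins] * 3,
                         [0, 256, 0, 256, 0, 256])
    h_hsv = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                         [0, 180, 0, 256, 0, 256])
    feat = np.concatenate([h_rgb.flatten(), h_hsv.flatten()])
    return feat / (feat.sum() + 1e-9)      # normalise to a probability-like vector

def distance(f1, f2):
    # L1 distance between two normalised histograms; smaller means more similar.
    return float(np.abs(f1 - f2).sum())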


Journal ArticleDOI
TL;DR: This survey introduces the advantages and disadvantages of fusion techniques which can be used in specific applications, and categorizes well-known models, algorithms, systems, and applications depending on the proposed approaches.
Abstract: Information fused by multiple sensors is an important factor for obtaining reliable contextual information in smart spaces which use pervasive and ubiquitous computing techniques. Adaptive fusion improves the robustness of system operation and enables reliable decisions by reducing uncertain information. However, these fusion techniques suffer from problems regarding the accuracy of estimation or inference, and no commonly accepted approaches currently exist. In this survey, we first introduce the advantages and disadvantages of fusion techniques which can be used in specific applications. Second, we categorize well-known models, algorithms, systems, and applications depending on the proposed approaches. Finally, we discuss the related issues for fusion techniques within smart spaces and suggest research directions for improving decision-making in uncertain situations.

37 citations


Journal ArticleDOI
TL;DR: A QoS-aware ad hoc on-demand multipath routing algorithm (Q-AOMDV), which provides quality of service (QoS) support in terms of bandwidth, hop count and end-to-end delay in mobile ad hoc networks, is introduced.
Abstract: Mobile ad hoc networks are typically characterized by high mobility and frequent link failures that result in low throughput and high end-to-end delay. The increasing use of MANETs for transferring multimedia applications such as voice, video and data leads to the need to provide QoS support. We introduce a routing algorithm, QoS-aware ad hoc on-demand multipath routing (Q-AOMDV), which provides quality of service (QoS) support in terms of bandwidth, hop count and end-to-end delay in mobile ad hoc networks. The results validate that Q-AOMDV provides QoS support in mobile ad hoc wireless networks with high reliability and low overhead.

35 citations
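
The abstract does not give protocol internals, so the following is only a toy illustration of QoS-constrained route selection over already-discovered multipaths: infeasible routes are filtered by bandwidth and delay, and the shortest remaining path is preferred. The dictionary fields and the tie-breaking rule are assumptions, not Q-AOMDV itself.

def select_route(routes, min_bandwidth, max_delay):
    # routes: list of dicts like {"path": [...], "bandwidth": ..., "delay": ...}
    feasible = [r for r in routes
                if r["bandwidth"] >= min_bandwidth and r["delay"] <= max_delay]
    if not feasible:
        return None                         # no discovered route meets the QoS request
    # prefer the route with the fewest hops, breaking ties by lower delay
    return min(feasible, key=lambda r: (len(r["path"]), r["delay"]))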


Journal ArticleDOI
TL;DR: The principle of granularity in clustering and the effective clustering algorithms based on granularity, as well as their merits and drawbacks, are analyzed and evaluated from the viewpoints of rough set, fuzzy set and quotient space theories.
Abstract: Granular Computing (GrC), a knowledge-oriented form of computing which covers the theory of fuzzy information granularity, rough set theory, the theory of quotient space, interval computing and so on, is a way of dealing with incomplete, unreliable, uncertain and fuzzy knowledge. In recent years, it has become one of the main streams of study in Artificial Intelligence (AI). By selecting the granularity structure flexibly, eliminating the incompatibility between clustering results and prior knowledge, and completing the clustering task effectively, cluster analysis based on GrC has attracted great interest from domestic and foreign scholars. In this paper, starting from the development of GrC, the main recent achievements in clustering and GrC are first reviewed and summarized. Secondly, the principle of granularity in clustering and the effective clustering algorithms based on granularity, as well as their merits and drawbacks, are analyzed and evaluated from the viewpoints of rough set, fuzzy set and quotient space theories. Finally, the feasibility and effectiveness of handling high-dimensional, complex, massive data by combining these theories is discussed as an outlook.

32 citations


Journal ArticleDOI
TL;DR: A wide survey of different privacy preserving data mining algorithms that analyses the representative techniques for privacy preserving data mining and points out their merits and demerits.
Abstract: Data mining is the extraction of interesting patterns or knowledge from huge amounts of data. In recent years, with the explosive development of the Internet, data storage and data processing technologies, privacy preservation has become one of the major concerns in data mining. A number of methods and techniques have been developed for privacy preserving data mining. This paper provides a wide survey of different privacy preserving data mining algorithms, analyses the representative techniques for privacy preserving data mining, and points out their merits and demerits. Finally, the open problems and directions for future research are discussed.

31 citations


Journal ArticleDOI
TL;DR: This work divides the large text vector dataset into data blocks, each of which is then processed on a different distributed data node of the MapReduce framework with an agglomerative hierarchical clustering algorithm.
Abstract: Text clustering is one of the most challenging and active research fields in text mining. Combining the MapReduce framework and the neuron initialization method of the VPSOM (vector pressing Self-Organizing Model) algorithm, a new text clustering algorithm is presented. It divides the large text vector dataset into data blocks, each of which is then processed on a different distributed data node of the MapReduce framework with an agglomerative hierarchical clustering algorithm. The experimental results indicate that the improved algorithm has higher efficiency and better accuracy.

31 citations
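
A plain-Python sketch of the block-wise idea follows: the document-vector set is split into blocks, each block is clustered independently with agglomerative hierarchical clustering (the "map" step), and the per-block centroids are merged in a final pass (the "reduce" step). A real MapReduce runtime and the VPSOM-based neuron initialization are not shown; block and cluster counts are illustrative.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def map_step(block, n_clusters=3):
    # Cluster one block of document vectors and return its cluster centroids.
    # (Each block must contain at least n_clusters documents.)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(block)
    return [block[labels == c].mean(axis=0) for c in range(n_clusters)]

def reduce_step(all_centroids, n_clusters=3):
    # Merge the per-block centroids into a final set of clusters.
    centroids = np.vstack(all_centroids)
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(centroids)

# usage: split a (n_docs, n_terms) matrix X into blocks and combine the results
# blocks = np.array_split(X, 4)
# merged_labels = reduce_step([map_step(b) for b in blocks])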


Journal ArticleDOI
TL;DR: Results showed that information quality, system quality, and service quality had a significant effect on individual performance through the usage and user satisfaction with an e-GP system.
Abstract: Based on a survey data from 361 public employees and suppliers in Taiwan, this study employed the updated DeLone and McLean Information Systems (IS) Success Model to measure e-government procurement (e-GP) system success and assess the moderating effect of computer self-efficacy on users’ judgment of IS success. Results showed that information quality, system quality, and service quality had a significant effect on individual performance through the usage and user satisfaction with an e-GP system. In addition, the key antecedents to user satisfaction and system usage did differ between high and low computer self-efficacy users. By measuring the success of an e-GP system from the end-user’s perspective, the findings of this research provide insight into the design and improvement in electronic government procurement.

Journal ArticleDOI
TL;DR: Numerical experiments in the evaluation of TRECVID and a variety of film videos demonstrate that the proposed novel shot boundary detection method is capable of accurately detecting shot transitions, and could greatly reduce the computational cost.
Abstract: Shot boundary detection is an important fundamental process for automatic video indexing, retrieval, editing, etc. After a critical review of most approaches seeking to solve this problem, we propose a novel shot boundary detection method. To improve the performance of the algorithm and reduce the computational cost, frames that are clearly not shot boundaries are first removed from the original video. After that, a novel SIFT keypoint matching algorithm based on SVM is proposed, which is used to capture the changing statistics of different kinds of shot transitions so as to identify not only abrupt transitions but also gradual transitions (fade, dissolve, wipe). Finally, our system uses different algorithms for different kinds of shot transitions to obtain a better solution to the shot boundary detection problem. Numerical experiments on the TRECVID evaluation and a variety of film videos demonstrate that our method is capable of accurately detecting shot transitions and can greatly reduce the computational cost.
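
A minimal sketch of the keypoint-matching intuition is given below: consecutive frames that share very few good SIFT matches are likely candidates for an abrupt transition. The SVM-based match classification and the gradual-transition handling described in the abstract are not reproduced; the ratio test and threshold values are assumptions.

import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def match_ratio(frame_a, frame_b, ratio=0.75):
    # Fraction of keypoints in frame_a with a good match in frame_b (Lowe ratio test).
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None or len(kp_a) == 0:
        return 0.0
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / len(kp_a)

def looks_like_cut(frame_a, frame_b, threshold=0.1):
    # Very few surviving matches between consecutive frames suggests an abrupt transition.
    return match_ratio(frame_a, frame_b) < threshold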

Journal ArticleDOI
TL;DR: A dataspace system is assumed to be a hybrid of a search engine, a database management system, an information integration system and a data sharing system, which opens up many complex research challenges.
Abstract: Nowadays there is rarely a scenario where all the data can fit nicely into one relational database management system or any other single data model. In acknowledgement of this fact, the new concept of Dataspaces was introduced, according to which a dataspace system is assumed to be a hybrid of a search engine, a database management system, an information integration system and a data sharing system. However, the concept was presented in a visionary way, and its implementation in real-world scenarios has opened up many complex research challenges. Moreover, the efforts put forth by the research community so far are quite disparate, and it is highly desirable to have a unified effort which would hopefully enable rapid progress. Furthermore, due to very high end-user expectations of such systems, there are many challenges and problems that need to be resolved, and much scope for future work remains.

Journal ArticleDOI
TL;DR: A novel image median filtering algorithm based on an incomplete quick sort algorithm is proposed to improve filtering speed while largely preserving edge, outline, texture and other information.
Abstract: The quick sort algorithm is studied thoroughly, a new incomplete quick sort algorithm is designed, and a novel image median filtering algorithm based on it is proposed to improve filtering speed. The new algorithm exploits a key characteristic of image median filtering (a full sort of all pixel values is unnecessary, since the median value can be obtained by other means) and computes the median by sorting only part of the pixel values in the neighborhood; this avoids many data-move operations and thus greatly improves the speed of image median filtering. Algorithm analysis and extensive experimental results show that the new algorithm greatly improves the speed of image median filtering while preserving edge, outline, texture and much other information to a great extent.
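
The core observation, that a median filter never needs a full sort of the window, can be illustrated with an ordinary quickselect standing in for the incomplete quick sort; the paper's actual partial-sort procedure may differ in detail.

import random

def quickselect(values, k):
    # Return the k-th smallest element (0-based) without fully sorting the data.
    vals = list(values)
    while True:
        pivot = random.choice(vals)
        lows = [v for v in vals if v < pivot]
        pivots = [v for v in vals if v == pivot]
        highs = [v for v in vals if v > pivot]
        if k < len(lows):
            vals = lows
        elif k < len(lows) + len(pivots):
            return pivot
        else:
            k -= len(lows) + len(pivots)
            vals = highs

def window_median(window):
    # Median of a filter window, e.g. the 9 pixel values of a 3x3 neighbourhood.
    return quickselect(window, len(window) // 2)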

Journal ArticleDOI
TL;DR: While the design provides an effective method of intelligent inspection, additional benefits comprise provision of detailed electronic reports that include RFID tag IDs and description, information that would be useful to other government departments or agencies interested in detailed statistics.
Abstract: Pre-shipment inspection is usually undertaken by customs administrations and standards bureaus to address the security, smuggling, tax evasion and counterfeit goods challenges of imports. The process has predominantly been undertaken using manual methods, which have had considerable shortcomings. Manual methods of inspection at source and manual tracking are subject to abuse and often result in failure to detect illegal consignments. The use of modern methods of verification and subsequent tracking has become necessary in order to adequately deal with the challenges experienced by governments. This paper seeks to address these challenges by providing a design for a Radio Frequency Identification (RFID) based method of inspection. The system supports tracking of cargo from source to destination with a Real Time Location Tracking System (RTLS) using Active RFID technology, GPS and satellite modems. The design, functionality and workings are provided together with the results of tests undertaken on the model using three simulation tools. While the design provides an effective method of intelligent inspection, additional benefits comprise the provision of detailed electronic reports that include RFID tag IDs and descriptions, information that would be useful to other government departments or agencies interested in detailed statistics.

Journal ArticleDOI
TL;DR: This paper proposes a context-aware system, which can provide more specific information and services for the situation of arranging a business meeting as a case study, and focuses on ontologies, which make it possible to reason over semantic notions.
Abstract: Context-awareness has been considered a promising topic in the pervasive computing area, and an analysis of the existing approaches in the literature shows that developing diverse application services in this area remains well motivated. This paper proposes a context-aware system which can provide more specific information and services for the situation of arranging a business meeting, presented as a case study. In developing the context-awareness application, we adopted an approach that selects and combines the strengths of the reviewed methodologies. As an infrastructure for supporting this development, a framework architecture prototype was constructed. In particular, we focused on ontologies, which make it possible to reason over semantic notions.

Journal ArticleDOI
TL;DR: A new algorithm, the Cluster-Based Multi-Objective Genetic Algorithm (CBMOGA), which optimizes the support counting phase by clustering the database based on the number of items in each transaction.
Abstract: The Multi-Objective Genetic Algorithm (MOGA) is a new approach for association rule mining in market-basket type databases. Finding the frequent itemsets is the most resource-consuming phase in association rule mining and always incurs extra comparisons against the whole database. This paper proposes a new algorithm, the Cluster-Based Multi-Objective Genetic Algorithm (CBMOGA), which optimizes the support counting phase by clustering the database. Clusters are based on the number of items in each transaction. Experiments on two different market-basket type databases show that CBMOGA outperforms MOGA. However, the speedup highly depends on the distribution of transactions in the cluster tables; hence, the optimization ratio is dataset-dependent.
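
A hedged sketch of the clustering idea used to cut down support counting: transactions are grouped by their length, and the support of a candidate itemset is counted only over groups whose transactions are long enough to possibly contain it. The genetic search itself is not shown, and the data layout is an assumption.

from collections import defaultdict

def cluster_by_length(transactions):
    # Group transactions by their item count.
    clusters = defaultdict(list)
    for t in transactions:
        clusters[len(t)].append(set(t))
    return clusters

def support(itemset, clusters, n_transactions):
    # Count the itemset only in clusters whose transactions are long enough.
    itemset = set(itemset)
    hits = sum(1
               for length, group in clusters.items()
               if length >= len(itemset)          # shorter transactions cannot match
               for t in group
               if itemset <= t)
    return hits / n_transactions

# usage (illustrative data):
# tx = [["milk", "bread"], ["milk", "bread", "eggs"], ["eggs"]]
# clusters = cluster_by_length(tx)
# print(support(["milk", "bread"], clusters, len(tx)))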

Journal ArticleDOI
TL;DR: Improvements made on SCADA systems which allow them to be successfully used for automated surveillance are explored, finding that even telecontrol operators who have limited experience with computers were able to employ the system without any difficulties.
Abstract: This article explores a number of improvements made on Supervisory Control and Data Acquisition (SCADA) systems which allow them to be successfully used for automated surveillance. Even telecontrol operators who have limited experience with computers were able to employ the system without any difficulties. Other advances made by taking advantage of the strongest features of embedded and multi-agent system technologies are also featured in this article. These developments have been tested in a true industrial environment. Positive results and feedback have been provided by the tests.

Journal ArticleDOI
TL;DR: A quality enhancement methodology is proposed for inclusion in the CT image denoising technique using window-based multi-wavelet transformation and thresholding, comprising an edge detection technique based on the Canny algorithm that is performed on the gradient images so that the images are better visualized for diagnosis.
Abstract: Denoising CT images removes noise and so makes the disease diagnosis procedure more efficient. The denoised images show a notable rise in their PSNR values, ensuring a smoother image for diagnosis purposes. In previous work, a CT image denoising technique using window-based multi-wavelet transformation and thresholding was proposed, and its performance was improved by a Genetic Algorithm (GA)-based window selection methodology. However, from the perspective of diagnosis, PSNR values have little significance; what matters is the quality of the images for medical diagnosis. In this paper, a quality enhancement methodology is proposed for inclusion in the CT image denoising technique using window-based multi-wavelet transformation and thresholding. The methodology comprises an edge detection technique based on the Canny algorithm that is performed on the gradient images so that the images are better visualized for diagnosis. A pair of micro block sets is generated from the edge-detected image and subjected to an unsharp filter to obtain a sharper image. The smoothness of the image is improved by applying a Gaussian filter to the sharper image. Implementation results are given to demonstrate the superior performance of the proposed quality enhancement technique on various CT images from a medical perspective.
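
A rough sketch of the enhancement chain (Canny edge detection, unsharp masking, Gaussian smoothing) using standard OpenCV calls is shown below; the micro-block processing, the GA-based window selection, and all parameter values are assumptions rather than the paper's settings.

import cv2

def enhance(denoised):
    # denoised: 8-bit grayscale CT slice.
    edges = cv2.Canny(denoised, 50, 150)                        # Canny edge map
    blurred = cv2.GaussianBlur(denoised, (5, 5), 0)
    unsharp = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)  # unsharp masking
    smoothed = cv2.GaussianBlur(unsharp, (3, 3), 0)             # final Gaussian smoothing
    return edges, smoothed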

Journal ArticleDOI
TL;DR: A FAR (Fuzzy logic based Adaptive Routing) algorithm that provides energy efficiency by dynamically changing the protocols installed at the sensor nodes based on the output of the fuzzy logic, which is the fitness level of the protocols for the environment.
Abstract: Recent advances in WSNs (Wireless Sensor Networks) have led to many routing protocols designed for more efficient energy utilization in the WSN field. While many routing protocols have been proposed in this field, a single routing protocol cannot be energy-efficient if the environment of the sensor network varies. This paper presents a FAR (Fuzzy logic based Adaptive Routing) algorithm that provides energy efficiency by dynamically changing the protocols installed at the sensor nodes. The algorithm changes protocols based on the output of the fuzzy logic, which is the fitness level of each protocol for the environment. A simulation is performed to show the usefulness of the proposed algorithm.

Journal ArticleDOI
TL;DR: An empirical comparison of kernel selection for SVM on text-independent speaker identification using the TIMIT corpus showed that the best performance was achieved with the polynomial kernel, at a speaker identification rate of 82.47%.
Abstract: The support vector machine (SVM) was the first proposed kernel-based method. It uses a kernel function to transform data from the input space into a high-dimensional feature space in which it searches for a separating hyperplane. SVM aims to maximise the generalisation ability, which depends on the empirical risk and the complexity of the machine. SVM has been widely adopted in real-world applications including speech recognition. In this paper, an empirical comparison of kernel selection for SVM is presented and discussed for text-independent speaker identification using the TIMIT corpus. We focus on SVMs trained using linear, polynomial and radial basis function (RBF) kernels. Results showed that the best performance was achieved using the polynomial kernel, with a speaker identification rate of 82.47%.
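
The kernel comparison itself reduces to a few lines with scikit-learn, assuming speaker-labelled feature vectors (e.g. MFCC-style features) have already been extracted; the feature pipeline, cross-validation setup and parameters below are placeholders, not the TIMIT experiment.

from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_kernels(X, y):
    # X: (n_samples, n_features) feature vectors, y: speaker labels.
    scores = {}
    for kernel in ("linear", "poly", "rbf"):
        clf = SVC(kernel=kernel, degree=3, gamma="scale")
        scores[kernel] = cross_val_score(clf, X, y, cv=5).mean()
    return scores   # mean cross-validated identification rate per kernel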

Journal ArticleDOI
TL;DR: A fast, computationally inexpensive solution which uses any type of computer video camera to control a cursor through hand movements and gesticulations is proposed.
Abstract: Hand detection is a fundamental step in many practical applications such as gesture recognition, video surveillance, multimodal machine interfaces and so on. The aim of this paper is to present a methodology for hand detection and to propose a hand motion detection method. Skin color is used to segment the hand region from the background, and the hand blob is extracted from the segmented finger blobs. Analysis of the finger blobs gives us the location of the hand even when hand and head blobs are visible in the same image. In this paper, we propose a fast, computationally inexpensive solution which uses any type of computer video camera to control a cursor through hand movements and gesticulations. The design and evaluation phases are presented in detail. We have performed extensive experiments and achieved very encouraging results. Finally, we discuss the effectiveness of the proposed method through several experimental results.
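
A small sketch of the skin-colour segmentation step: threshold in HSV, clean the mask, and take the centroid of the largest blob as the hand position. The HSV bounds are commonly used illustrative values rather than the paper's calibration, and the finger-blob analysis is not reproduced.

import cv2
import numpy as np

LOWER = np.array([0, 48, 80], dtype=np.uint8)      # illustrative skin-tone bounds
UPPER = np.array([20, 255, 255], dtype=np.uint8)

def hand_centre(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)       # largest skin-coloured blob
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])   # (x, y) of the blob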

Journal ArticleDOI
TL;DR: The simulation results show that the proposed intelligent algorithm can extend the network lifetime for different node positioning methods.
Abstract: The recent area of Wireless Sensor Networks (WSNs) has brought new challenges to developers of network protocols. Energy consumption and network coverage are two important challenges in wireless sensor networks. We investigate intelligent techniques for node positioning to reduce energy consumption while preserving coverage in wireless sensor networks. A genetic algorithm is used to create energy-efficient node positioning in wireless sensor networks. The simulation results show that the proposed intelligent algorithm can extend the network lifetime for different node positioning methods.
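
The abstract does not detail the fitness function, so the following is a hedged sketch of one a genetic algorithm of this kind might use: coverage approximated on a grid of sample points, and energy cost approximated by the squared node-to-sink distances. All constants and the weighting are assumptions.

import numpy as np

def fitness(positions, sink=(50, 50), field=100, sense_r=15, grid=20, alpha=0.7):
    # positions: (n_nodes, 2) array of candidate node coordinates.
    xs = np.linspace(0, field, grid)
    pts = np.array([(x, y) for x in xs for y in xs])            # coverage sample points
    d = np.linalg.norm(pts[:, None, :] - positions[None, :, :], axis=2)
    coverage = np.mean(d.min(axis=1) <= sense_r)                # fraction of points covered
    energy = np.sum((positions - np.array(sink)) ** 2)          # crude transmission cost
    energy_norm = energy / (len(positions) * 2 * field ** 2)    # scale roughly into [0, 1]
    return alpha * coverage - (1 - alpha) * energy_norm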

Journal ArticleDOI
TL;DR: The research conceptualizes strategies and operational marketing policies from an original standpoint, for the purpose of reconstructing and comprehending the dynamics of "integrated" marketing when examining potential interaction between communication and distribution in tourism.
Abstract: The wide diffusion of new technologies in communication and business has changed how consumer and product/store knowledge has to be managed and represented digitally. This issue has an important influence in many fields of activity, especially tourism, one of the most successful areas on the Web. The research conceptualizes strategies and operational marketing policies from an original standpoint, for the purpose of reconstructing and comprehending the dynamics of "integrated" marketing when examining potential interaction between communication and distribution in tourism. Starting from a brief analysis of the literature, the purpose of our research is to devise an exploratory model of integration involving the main marketing mix levers and to recommend the Internet as a "point of synergy" in the "promo-distribution" process of tourism. The research tests the proposed model through a specific case study of the Bravofly group, which can be considered a core contribution in terms of practical implications for corporations.

Journal ArticleDOI
TL;DR: BPCLC adopts a new block padding technique to improve the write performance of flash-based SSDs and outperforms its competitors with respect to write count, erase count, merge count, and overall I/O overhead.
Abstract: Flash memory has been widely used for storage devices in various embedded systems and enterprise computing environments, due to its shock resistance, low power consumption, non-volatility, and high I/O speed. However, its physical characteristics impose several limitations on the design of flash-based solid state disks (SSDs). For example, its write operation costs much more time than its read operation, and data in flash memory cannot be overwritten before being erased. In particular, random write operations in flash memory have very poor performance. To overcome these limitations, we propose a page-clustered LRU write buffer management scheme for flash-based SSDs, named BPCLC (Block Padding Cold and Large Cluster first). BPCLC adopts a new block padding technique to improve the write performance of flash-based SSDs. We conduct a trace-driven experiment and use two types of workloads to compare the performance of BPCLC with three competitors: FAB, BPLRU, and CLC. The results show that in both types of workloads, BPCLC outperforms its competitors with respect to write count, erase count, merge count, and overall I/O overhead.
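
As a very reduced illustration of the page-clustering idea behind such buffer schemes, the sketch below groups dirty pages by logical block and evicts a large cluster from the cold (least recently touched) end so that a whole block could be flushed at once. Block padding and the exact BPCLC policy are not reproduced; the class and its eviction rule are assumptions.

from collections import OrderedDict

class ClusteredWriteBuffer:
    def __init__(self, capacity, pages_per_block):
        self.capacity = capacity
        self.ppb = pages_per_block
        self.clusters = OrderedDict()        # block id -> set of buffered page ids

    def write(self, page_id):
        block = page_id // self.ppb
        pages = self.clusters.pop(block, set())
        pages.add(page_id)
        self.clusters[block] = pages         # re-insert to mark the cluster as hot
        if sum(len(p) for p in self.clusters.values()) > self.capacity:
            self.evict()

    def evict(self):
        # Among the least recently used half of the clusters, flush the largest one.
        cold = list(self.clusters.keys())[: max(1, len(self.clusters) // 2)]
        victim = max(cold, key=lambda b: len(self.clusters[b]))
        self.clusters.pop(victim)            # would be written to flash as one block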

Journal ArticleDOI
TL;DR: The experimental result proves that the fault characteristic extracted by improved wavelet packet and Hilbert transform is in accord with the one analyzed from theory, and the fault feature extraction method is effective.
Abstract: In order to fill a gap in current resonance vibration and STFT demodulation methods applied to rolling bearing fault feature extraction for city rail vehicles, a fault diagnosis method for rolling bearings is presented, based on the integration of an improved wavelet packet, frequency energy analysis and the Hilbert marginal spectrum. When faults occur in a rolling bearing, the energy of the bearing vibration signal changes correspondingly, while the Hilbert energy spectrum can exactly provide the energy distribution of the signal at certain frequencies as time and frequency change. Thus, the fault information of the rolling bearing can be extracted effectively from the improved wavelet packet and Hilbert energy spectrum. The experimental result proves that the fault characteristic extracted by the improved wavelet packet and Hilbert transform accords with the one derived from theory, and that the fault feature extraction method is effective. The research results provide a theoretical foundation for the extraction of fault features in rotary machines and have important practical value.
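
A rough sketch of the analysis chain using the pywt and scipy libraries: decompose the vibration signal with a wavelet packet, take one sub-band, and inspect its Hilbert envelope spectrum for fault-related frequencies. The wavelet, depth, and band choice are illustrative, and the paper's improved wavelet packet is not reproduced.

import numpy as np
import pywt
from scipy.signal import hilbert

def envelope_spectrum(signal, fs, wavelet="db4", level=3, node="aaa"):
    # Wavelet-packet decomposition; the chosen node holds one sub-band's coefficients.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    band = wp[node].data
    envelope = np.abs(hilbert(band))                     # Hilbert envelope of the sub-band
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    # level-`level` coefficients are downsampled, so the effective rate is fs / 2**level
    freqs = np.fft.rfftfreq(len(envelope), d=2 ** level / fs)
    return freqs, spectrum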

Journal ArticleDOI
Anan Liu, Dong Han
TL;DR: A model-free method for human action recognition via sparse spatiotemporal representation that can handle action recognition under different scenarios and by multiple persons, and which outperforms most of the state-of-the-art methods for human behavior recognition.
Abstract: In this paper, we propose a model-free method for human action recognition via sparse spatiotemporal representation. Similar to the template matching method, the philosophy of the proposed method is to decompose each video sample containing one kind of human action as an l1-sparse linear combination of several video samples containing multiple kinds of human actions. To realize this goal, we mainly focus on three problems: feature point detection and description for human action, spatiotemporal feature construction for action video representation, and action video decomposition via sparse representation. The contributions of this method are three-fold: 1) the proposed method does not depend on complicated model selection and learning; 2) it can handle action recognition under different scenarios and by multiple persons; 3) the generalization ability of the method can be easily extended by simply adding bases, i.e., newly labeled action videos. To demonstrate the superiority of the proposed method, we evaluate it on the KTH dataset, a well known dataset for human action recognition. Large-scale experiments show the accuracy and robustness of the method. Moreover, the proposed method outperforms most of the state-of-the-art methods for human behavior recognition.
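
The classification step can be sketched as standard sparse-representation classification: a test descriptor is coded as a sparse combination of training descriptors and assigned to the class with the smallest reconstruction residual. Feature extraction is assumed to have produced fixed-length vectors already, and the l1 solver and its parameter are assumptions, not the paper's exact formulation.

import numpy as np
from sklearn.linear_model import Lasso

def classify(test_vec, train_vecs, train_labels, alpha=0.01):
    # train_vecs: (n_samples, n_features); the dictionary uses samples as columns.
    D = np.asarray(train_vecs).T
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, test_vec)
    x = coder.coef_                                  # sparse coefficients over the dictionary
    residuals = {}
    for label in set(train_labels):
        mask = np.array([l == label for l in train_labels])
        x_c = np.where(mask, x, 0.0)                 # keep only this class's coefficients
        residuals[label] = np.linalg.norm(test_vec - D @ x_c)
    return min(residuals, key=residuals.get)         # class with the smallest residual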

Journal ArticleDOI
TL;DR: A power control algorithm based on a non-cooperative game for wireless sensor networks is presented to regulate transmit power appropriately and maximize network utility.
Abstract: How to improve the energy efficiency of WSNs (Wireless Sensor Networks) is an important problem in WSN design. To meet the demand of multiple services, a non-cooperative game power control model is proposed based on a CDMA WSN model, and the existence and uniqueness of the Nash equilibrium in the game are proved. A distributed power control algorithm is then proposed based on the game model. Simulation results show that, because the design of the game model takes the node residual energy factor into account, the algorithm can effectively reduce the total transmitting power of nodes, save node energy and prolong network lifetime efficiently.
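
As a simplified stand-in for the game-based update, the sketch below iterates a Foschini-Miljanic style power adjustment toward a target SINR given the interference from the other nodes; it is not the paper's utility function or its residual-energy term, and the gains, noise and target values are illustrative.

import numpy as np

def iterate_powers(gains, target_sinr=5.0, noise=1e-3, steps=50):
    # gains[i, j]: channel gain from transmitter j to receiver i (square matrix).
    n = gains.shape[0]
    p = np.ones(n) * 0.1
    for _ in range(steps):
        signal = np.diag(gains) * p                  # power received from own transmitter
        interference = gains @ p - signal + noise    # power from everyone else plus noise
        sinr = signal / interference
        p = p * target_sinr / sinr                   # raise power if below target, lower if above
    return p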

Journal ArticleDOI
TL;DR: A new channel estimation technique is proposed for multiband orthogonal frequency division multiplexing (MB-OFDM) ultra wideband (UWB) systems in multipath time-varying wireless channels that shows accurate channel tracking and provides better symbol error rate (SER) performance.
Abstract: A new channel estimation technique is proposed for multiband orthogonal frequency division multiplexing (MB-OFDM) ultra wideband (UWB) systems in multipath time-varying wireless channels. A two-stage approach is used to achieve this purpose. In the first stage, Wiener-Hopf filtering is employed for the interpolation of unknown channel state information (CSI) using comb-type known pilots. In the second stage, the interpolated channel statistics are modeled as an autoregressive (AR) process and fed into a Kalman filter. Moreover, in order to suppress intercarrier interference (ICI), an ICI mitigation filter works jointly with the Kalman filter. A mathematical framework is given for the realization of our proposed system. Link level simulation (LLS) shows that this new technique achieves accurate channel tracking and provides better symbol error rate (SER) performance.
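
A very reduced sketch of the second stage: a single channel tap modelled as an AR(1) process and tracked with a scalar Kalman filter from noisy pilot-based observations. The Wiener-Hopf interpolation stage and the ICI mitigation filter are omitted, and the AR coefficient and noise variances are illustrative (real-valued here for simplicity).

import numpy as np

def track_tap(observations, a=0.98, q=1e-3, r=1e-2):
    # observations: noisy pilot-based estimates of one channel tap over time.
    h_est, p = 0.0, 1.0
    track = []
    for z in observations:
        # predict with the AR(1) model h_k = a * h_{k-1} + w_k
        h_pred, p_pred = a * h_est, a * a * p + q
        # correct with the new observation
        k = p_pred / (p_pred + r)
        h_est = h_pred + k * (z - h_pred)
        p = (1 - k) * p_pred
        track.append(h_est)
    return np.array(track)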

Journal ArticleDOI
TL;DR: This special issue of the Journal of Digital Content Technology and its Applications was planned with the aim of publishing research in the fields of knowledge management and consumer behaviour, in order to better understand the applications of new technologies in retailing and their impact on the design, development and modelling of consumers' and products' knowledge.
Abstract: In recent years, many studies have focused on the best practices which make stores (both physical and virtual) more attractive, interesting and trustworthy for consumers [1]. The use of customized and efficient communication strategies addressed to specific consumers is an important topic of research [2] [3]. Furthermore, a growing interest has been placed on the ways producers' and retailers' knowledge is used to add value to customers' experiences [4]. While the use of new technological tools in online shops is often studied in academic research, the application of new technologies at the point of sale is a promising and relatively unexplored field of study. In particular, the use of digital content and technologies allowing consumers to interact with products in new ways has been under-investigated by scholars and underestimated by practitioners. For instance, many e-retailers already exploit the opportunities offered by interactive technologies, such as 3D virtual models, in order to enhance consumers' shopping experience [5]. Their use in stores, however, is still limited. Hence, the development and use of new technologies supporting and influencing consumers during their shopping experience is a promising area of development for both academics and practitioners. Moreover, the study of digital technologies in web-based retail offers opportunities to single out approaches and practices useful also in traditional shops. Among the few existing applications there are shopping assistant systems based on shopping trolleys and handheld devices [6]. Most of them use mobile and ubiquitous computing [7] [1]. Adding digital content to these tools can be a powerful means to influence customers' experience. The aim is to support consumers, through a user-friendly interface, by giving them information related to products, promotions, new arrivals and so on. The main characteristics are interactivity and multimodality, which support an efficient, flexible and meaningful feeling of human-computer interaction [8] [9]. As a consequence, this powerful interaction is capable of influencing consumer satisfaction, as well as loyalty to retailers and consumers' buying behaviour [10]. In this scenario, it becomes very useful to understand how consumers' and products/stores' knowledge has to be managed and digitally represented. The study of online applications offers opportunities to investigate functioning tools based on the digital representation of consumers', products' and stores' knowledge. Besides, it is important to understand how this knowledge is capable of influencing customers' experience once converted into digital content. This special issue of the Journal of Digital Content Technology and its Applications was planned with the aim of publishing research in the fields of knowledge management and consumer behaviour, in order to better understand the applications of new technologies in retailing and their impact on the design, development and modelling of consumers' and products' knowledge, so as to improve consumers' shopping experience and, as a consequence, influence their buying behaviour. Indeed, the overall goal was to encourage in-depth research on digital content management capable of supporting and affecting consumer behaviour in different retail environments (both online and offline).
A secondary purpose was to encourage consumer researchers who might not usually consider the Journal of Digital Content Technology and its Applications as an alternative publication outlet for their studies. The following mix of 6 papers emerged from the review process. We organized these manuscripts into two general categories: (1) consumers', products' and shops' knowledge in online and offline retail, and (2) the impact of new technologies on consumers' behaviour in the retail context. We are thankful to the editor of the Journal of Digital Content Technology and its Applications for his encouragement and for initiating the Special Issue. In addition, we are most appreciative of the many reviewers who assisted in the review process. Their work helped us improve the quality of the papers and the development of this issue.