scispace - formally typeset

Showing papers on "Upload published in 2017"


Journal ArticleDOI
TL;DR: The VISION dataset is currently composed of 34,427 images and 1914 videos, in both their native format and their social versions (Facebook, YouTube, and WhatsApp are considered), from 35 portable devices of 11 major brands, and can be exploited as a benchmark for the exhaustive evaluation of several image and video forensic tools.
Abstract: The forensic research community keeps proposing new techniques to analyze digital images and videos. However, the performance of proposed tools is usually tested on data that are far from reality in terms of resolution, source device, and processing history. Remarkably, in recent years, portable devices have become the preferred means to capture images and videos, and contents are commonly shared through social media platforms (SMPs, for example, Facebook, YouTube, etc.). These facts pose new challenges to the forensic community: for example, most modern cameras feature digital stabilization, which has been proved to severely hinder the performance of video source identification technologies; moreover, the strong re-compression enforced by SMPs during upload threatens the reliability of multimedia forensic tools. On the other hand, portable devices capture both images and videos with the same sensor, opening new forensic opportunities. The goal of this paper is to propose the VISION dataset as a contribution to the development of multimedia forensics. The VISION dataset is currently composed of 34,427 images and 1914 videos, in both their native format and their social versions (Facebook, YouTube, and WhatsApp are considered), from 35 portable devices of 11 major brands. VISION can be exploited as a benchmark for the exhaustive evaluation of several image and video forensic tools.

206 citations


Patent
23 Jan 2017
TL;DR: A distributed database management system as discussed by the authors provides a central database, resident on a server, that contains database objects; objects to be replicated, e.g., program guide data, are gathered into distribution packages called "slices" that are transmitted to client devices.
Abstract: A distributed database management system provides a central database resident on a server that contains database objects. Objects, e.g., program guide data, to be replicated are gathered together into distribution packages called “slices,” which are transmitted to client devices. A slice is a subset of the central database that is relevant to clients within a specific domain, such as a geographic region, or under the footprint of a satellite transmitter. The viewer selects television programs and Web content from displayed sections of the program guide data, which are recorded to a storage device. The program guide data are used to determine when to start and end recordings. Client devices periodically connect to the server using a phone line and upload information of interest, which is combined with information uploaded from other client devices for statistical, operational, or viewing models.

204 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: A system that leverages blockchain technology to provide a secure distributed data-storage service with keyword search, allowing clients to upload their data in encrypted form; the system distributes the data content to cloud nodes and ensures data availability using cryptographic techniques.
Abstract: Traditional cloud storage has relied almost exclusively on large storage providers, who act as trusted third parties to transfer and store data. This model poses a number of issues, including data availability, high operational cost, and data security. In this paper, we introduce a system that leverages blockchain technology to provide a secure distributed data-storage service with keyword search. The system allows clients to upload their data in encrypted form, distributes the data content to cloud nodes, and ensures data availability using cryptographic techniques. It also gives the data owner the capability to grant others permission to search her data. Finally, the system supports private keyword search over the encrypted dataset.
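The "private keyword search over the encrypted dataset" idea can be sketched with deterministic keyword tokens: the client derives an HMAC tag per keyword, so the storage node can match query tokens against an index without ever seeing the keywords themselves. This is a minimal illustration, not the paper's actual construction; the XOR stream cipher below is a toy (a real deployment would use an authenticated cipher such as AES-GCM), and all names are hypothetical.

```python
import hashlib
import hmac
import secrets

def keyword_token(key: bytes, keyword: str) -> str:
    # Deterministic per-keyword token: the server can match equal tokens
    # without learning the underlying keyword.
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy XOR stream cipher with a hash-derived keystream -- illustration only.
    nonce = secrets.token_bytes(16)
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    return nonce + ct

key = secrets.token_bytes(32)
record = encrypt(key, b"quarterly report")
index = {keyword_token(key, "report"): record}  # server-side encrypted index
query = keyword_token(key, "Report")            # client re-derives the token
assert query in index                           # server matches without decrypting
```

Granting another user search permission then amounts to sharing (or re-encrypting) the token-derivation key, which is where the paper's cryptographic machinery comes in.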

108 citations


Journal ArticleDOI
TL;DR: A cloud storage auditing scheme for group users that greatly reduces the computation burden on the user side and blinds data using simple operations in the data uploading and data auditing phases to protect data privacy against the TPM.

102 citations


Journal ArticleDOI
TL;DR: Security analysis and experimental evaluation indicate that the proposed identity-based data outsourcing (IBDO) scheme provides strong security with desirable efficiency in securing outsourced data.
Abstract: Cloud storage systems provide convenient file storage and sharing services for distributed clients. To address integrity, controllable outsourcing, and origin-auditing concerns over outsourced files, we propose an identity-based data outsourcing (IBDO) scheme equipped with desirable features advantageous over existing proposals in securing outsourced data. First, our IBDO scheme allows a user to authorize dedicated proxies to upload data to the cloud storage server on her behalf; e.g., a company may authorize some employees to upload files to the company's cloud account in a controlled way. The proxies are identified and authorized by their recognizable identities, which eliminates the complicated certificate management of typical secure distributed computing systems. Second, our IBDO scheme facilitates comprehensive auditing: it not only permits regular integrity auditing, as in existing schemes for securing outsourced data, but also allows auditing of the origin, type, and consistency of outsourced files. Security analysis and experimental evaluation indicate that our IBDO scheme provides strong security with desirable efficiency.

93 citations


DissertationDOI
01 Jan 2017
TL;DR: A number of problems have arisen as a consequence of the rapid increase in the sharing of personal images online, because personal images uploaded online are, more now than ever, prone to misuse.
Abstract: Social networks have changed the nature of communication in the modern world: they have changed how people communicate, the frequency and mode of communication, and how people relate to those communications. Social networks have also changed the type of information that is communicated. One of the notable developments has been a proliferation of the sharing of images that people have taken themselves. From the ubiquitous selfie through to group shots, personal images are now a key part of modern social communication. A number of problems have arisen as a consequence of the rapid increase in the sharing of personal images online. This is because personal images uploaded online are, more now than ever, prone to misuse. Third parties are easily able to reuse, distort and alter images that are uploaded on social networks. As a result, people are at risk of losing control over the images that they upload online.

81 citations


Journal ArticleDOI
TL;DR: A Privacy-Preserving Data Processing (PPDP) system with the support of a Homomorphic Re-Encryption Scheme (HRES), which extends partial HE from a single-user system to a multi-user one by offering ciphertext re-encryption to allow multiple users to access processed ciphertexts.

79 citations


Journal ArticleDOI
TL;DR: A verifiable keyword search over encrypted data in multi-owner settings (VKSE-MO) scheme, built by exploiting the multisignature technique, that is secure against a chosen-keyword attack in the random oracle model.
Abstract: Searchable encryption (SE) techniques allow cloud clients to easily store and search encrypted data in a privacy-preserving manner, where most SE schemes treat the cloud server as honest-but-curious. In practice, however, the cloud server may be a semi-honest-but-curious third party that executes only a fraction of the search operations and returns a fraction of false search results to save its computational and bandwidth resources. Thus, it is important to provide a result-verification method to guarantee the correctness of the search results. Existing SE schemes allow multiple data owners to upload different records to the cloud server, but these schemes have very high computational and storage overheads when applied in a different but more practical setting where each record is co-owned by multiple data owners. To address this problem, we develop a verifiable keyword search over encrypted data in multi-owner settings (VKSE-MO) scheme by exploiting the multisignature technique. Our scheme thus requires only a single index for each record, and data users are assured of the correctness of the search results in challenging settings. Our formal security analysis proves that the VKSE-MO scheme is secure against a chosen-keyword attack in the random oracle model. In addition, our empirical study using a real-world dataset demonstrates the efficiency and feasibility of the proposed scheme in practice.

73 citations


Journal ArticleDOI
Jiahao Dai1, Jiajia Liu1, Yongpeng Shi1, Shubin Zhang1, Jianfeng Ma1 
TL;DR: A framework based on stochastic geometry for D2D multichannel overlaying uplink cellular networks is presented, able to model and analyze how different parameters affect the coverage probability and ergodic rate of users in the cellular network.
Abstract: Device-to-device (D2D) communication, which enables two closely located users to communicate with each other without traversing the base station (BS), has become an emerging technology for network engineers seeking to optimize network performance. This paper presents a framework based on stochastic geometry for D2D multichannel overlaying uplink cellular networks. In this framework, some mobile devices and machines (cellular users) upload data to the nearest BS directly through cellular channels, while the others (D2D users) must upload data to their own relays through D2D channels; the relays then communicate with the nearest BSs through cellular channels. D2D users upload data with a fixed transmit power, while cellular users and D2D relays adopt channel-inversion power control with a maximum transmit power limit. This tractable framework can model and analyze how different parameters affect the coverage probability and ergodic rate of users in the cellular network. As validated by extensive numerical results, the framework can help determine the optimal channel allocation to achieve the best network performance efficiently.
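The "channel-inversion power control with maximum transmit power limit" mentioned above admits a one-line formulation: the transmitter compensates the path loss d^alpha so the receiver sees a target power, but is capped at a maximum. A minimal sketch, with hypothetical parameter values (the paper's actual parameters are not given here):

```python
import math

def transmit_power(distance_m: float, alpha: float = 3.5,
                   p0_dbm: float = -70.0, p_max_dbm: float = 23.0) -> float:
    """Truncated channel-inversion power control (all parameter values hypothetical).

    The transmitter inverts the distance-based path loss d^alpha so the BS
    receives p0_dbm on average, but never exceeds p_max_dbm.
    """
    path_loss_db = 10 * alpha * math.log10(distance_m)
    return min(p0_dbm + path_loss_db, p_max_dbm)
```

Near the BS the required power is tiny (the inversion dominates); far from the BS the cap binds, which is exactly the truncation that the coverage-probability analysis has to account for.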

71 citations


Patent
24 Apr 2017
TL;DR: In this article, the authors present a volume-based block storage service and application programming interfaces (APIs) to the service; the APIs provide a standard interface to volume-based block storage operations on a remote data store.
Abstract: Methods, apparatus, and computer-accessible storage media for providing a volume-based block storage service and application programming interfaces (APIs) to the service. A block storage service and block storage service APIs may allow processes (applications or appliances) on the service client network to leverage remote, volume-based block storage provided by the service provider. The APIs may provide a standard interface to volume-based block storage operations on a remote data store. The service provider, the service clients, and/or third parties may develop various applications and/or appliances that may, for example, be instantiated in service clients' local networks and that leverage the block storage service via the APIs to create and manage volumes and snapshots on the remote data store and to upload and download data from the volumes and snapshots on the remote data store.

64 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: The importance of creating public, open "smart city" data repositories for the research community is argued and privacy preserving techniques for the anonymous uploading of urban sensor data from vehicles are proposed.
Abstract: In the Intelligent Vehicle Grid, the car is becoming a formidable sensor platform, absorbing information from the environment, from other cars (and from the driver) and feeding it to other cars and infrastructure to assist in safe navigation, pollution control and traffic management. The Vehicle Grid essentially becomes an Internet of Things (IoT), which we call the Internet of Vehicles (IoV), capable of making its own decisions about driving customers to their destinations. Like other important IoT examples (e.g., smart buildings), the Internet of Vehicles will not merely upload data to the Internet using V2I. It will also use V2V communications between peers to complement on-board sensor inputs and provide safe and efficient navigation. In this paper, we first describe several vehicular applications that leverage V2V and V2I. Communications with infrastructure and with other vehicles, however, can create privacy and security violations. In the second part of the paper we address these issues and, more specifically, focus on the need to guarantee location privacy to mobile users. We argue for the importance of creating public, open "smart city" data repositories for the research community and propose privacy-preserving techniques for the anonymous uploading of urban sensor data from vehicles.

Proceedings ArticleDOI
12 Apr 2017
TL;DR: A system for online assessment of handwritten homework assignments and exams finds that the time spent grading an individual response to a question rapidly decays with the number of responses to that question that the grader has already graded.
Abstract: We present a system for online assessment of handwritten homework assignments and exams. First, either instructors or students scan and upload handwritten work. Instructors then grade the work and distribute the results using a web-based platform. Our system optimizes for three key dimensions: speed, consistency, and flexibility. The primary innovation enabling improvements in all three dimensions is a dynamically evolving rubric for each question on an assessment. We also describe how the system minimizes the overhead incurred in the digitization process. Our system has been in use for four years, with instructors at 200 institutions having graded over 10 million pages of student work. We present results as user-reported data and feedback regarding time saved grading, enjoyment, and student experience. Two-thirds of responders report saving 30% or more time relative to their traditional workflow. We also find that the time spent grading an individual response to a question rapidly decays with the number of responses to that question that the grader has already graded.

Patent
29 Jun 2017
TL;DR: In this paper, a method, a device, and a non-transitory storage medium provides an installation of an IoT device in which the installation includes to store Internet of Things (IoT) management information, which includes IoT device information of the IoT device.
Abstract: A method, a device, and a non-transitory storage medium provides an installation of an IoT device in which the installation includes to store Internet of Things (IoT) management information, which includes IoT device information of the IoT device; upload the IoT management information to a network device in response to the storing of the IoT management information; store the IoT management information at the IoT device in response to the upload; present a map of the location; receive a designation of a location point on the map that indicates where the IoT device is to be installed; determine whether the IoT device is to be updated; update the IoT device in response to a determination that an update for the IoT device is available; calibrate one or more sensors of the IoT device; and configure the IoT device to transmit IoT data to another network device.

Proceedings ArticleDOI
01 Apr 2017
TL;DR: The proposed unit is designed to collect information using a variety of sensors and an on-board camera to be uploaded to a central server for actions such as speed limit adjustment, metering routes to reduce vehicle congestion and emissions, and issuing weather advisory warnings.
Abstract: This paper presents the design of a modular, Scalable Enhanced Road Side Unit for use as part of a comprehensive Intelligent Transportation System based on the concept of the Internet of Things (IoT). The proposed unit is designed to collect information using a variety of sensors and an on-board camera. The collected information can then be uploaded to a central server for actions such as speed limit adjustment, metering routes to reduce vehicle congestion and emissions, and issuing weather advisory warnings. Communication between an individual unit and the central server is performed using existing wireless cellular networks. Each module also contains an RF module for communication with both other nearby units and other parties equipped with an appropriate receiver module. Lab testing with an initial prototype confirmed the feasibility of the proposed communication links, and evaluation of the current real-time software implementation showed an average total CPU resource usage of 34%, giving room to expand functionality with additional tasks.

Journal ArticleDOI
TL;DR: A platform for sharing medical imaging data between clinicians and researchers that automates anonymisation of pixel data and metadata at the clinical site and maintains subject data groupings while preserving anonymity.

Journal ArticleDOI
TL;DR: An overview of WebMeV is provided and two simple use cases are demonstrated that illustrate the value of putting data analysis in the hands of those looking to explore the underlying biology of the systems being studied.
Abstract: Although large, complex genomic datasets are increasingly easy to generate, and the number of publicly available datasets in cancer and other diseases is rapidly growing, the lack of intuitive, easy-to-use analysis tools has remained a barrier to the effective use of such data. WebMeV (http://mev.tm4.org) is an open-source, web-based tool that gives users access to sophisticated tools for analysis of RNA-Seq and other data in an interface designed to democratize data access. WebMeV combines cloud-based technologies with a simple user interface to allow users to access large public datasets, such as that from The Cancer Genome Atlas or to upload their own. The interface allows users to visualize data and to apply advanced data mining analysis methods to explore the data and draw biologically meaningful conclusions. We provide an overview of WebMeV and demonstrate two simple use cases that illustrate the value of putting data analysis in the hands of those looking to explore the underlying biology of the systems being studied. Cancer Res; 77(21); e11-14. ©2017 AACR.

Patent
07 Jul 2017
TL;DR: In this article, the authors propose a system that uses an evaluation of geographic locations, transaction times, and device identities to control the upload of consent data, and evaluate the location of restricted equipment such as ATMs and kiosks.
Abstract: Method and apparatus for a system to harden digital consents. The system uses an evaluation of geographic locations, transaction times, and device identities to control the upload of consent data. Evaluations occur using numerous techniques including MAC address evaluation, IP address evaluation, meta-data evaluation, and physical location of restricted equipment such as ATMs and kiosks. Reliability of consent data entered into the system may be enhanced by strictly evaluating geographic locations, transaction times, and/or device identities.

Journal ArticleDOI
TL;DR: This paper proves that the authentication key cannot be forged and that the message bound to this key cannot be denied, and shows that the proposed method is stable in cost and leakage-resilient.

Patent
17 May 2017
TL;DR: In this paper, the authors propose a picture copyright protection method based on blockchain technology: a picture maker uploads a picture file to a server via an uploading client, and the server calculates the hash value of the picture file as its unique identification, initiates a transaction to a blockchain-based digital-currency network, adds the hash value as additional information to the transaction, and saves the related information to a database.
Abstract: The invention relates to a picture copyright protection method and system based on blockchain technology. The method includes: (1) a picture maker uploads a picture file to a server via an uploading client; (2) the server calculates the hash value of the picture file as a unique identification of the picture file, initiates a transaction to a digital-currency network based on blockchain technology, adds the hash value as additional information to the transaction, and saves the related information to a database; (3) a picture user downloads the picture file from the server via a downloading client, initiates a transaction, adds the hash value of the downloaded picture file as additional information to the transaction, and saves the related information to the database. With this method and system, the copyrights of picture makers can be declared and protected when original pictures are released on the internet.
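Step (2) of the method hinges on a file hash serving as the picture's unique identification. A minimal sketch of that fingerprinting step (the chunked read and SHA-256 choice are illustrative assumptions; the patent does not specify a hash algorithm here):

```python
import hashlib

def picture_fingerprint(path: str) -> str:
    """Hash the raw file bytes to obtain the picture's unique identifier,
    i.e., the value the scheme embeds as transaction metadata on the chain."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so arbitrarily large pictures fit in constant memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Because the hash changes if even one byte of the picture changes, matching the downloaded file's hash against the on-chain value in step (3) ties the copy back to the original upload.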

Journal ArticleDOI
TL;DR: Results show that Facebook and Twitter compressed HD videos more than the other clouds; however, Facebook gives better quality in its compressed videos than Twitter, and users assigned Twitter low ratings for online video quality compared to Tumblr, which provided high-quality online play of videos with less compression.
Abstract: Video sharing on social clouds is popular among users around the world. High-Definition (HD) videos have large file sizes, so storing them in cloud storage and streaming high-quality video from the cloud to the client are a big problem for service providers. Social clouds compress the videos to save storage and to stream over slow networks while providing quality of service (QoS). Compression decreases the quality compared to the original video, and parameters are changed during online play as well as after download. Degradation of video quality due to compression decreases the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective (QoE) experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected for uploading and playing videos online. The QoE was recorded via a questionnaire in which users reported their experience of the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than the other clouds. However, Facebook gives better quality in its compressed videos than Twitter. Users therefore assigned Twitter low ratings for online video quality compared to Tumblr, which provided high-quality online play of videos with less compression.

Proceedings Article
12 Jul 2017
TL;DR: A video prediction service, ChessVPS, is built using the first popularity prediction algorithm that is both scalable and accurate, and enables a higher percentage of total user watch time to benefit from intensive encoding, with less overhead than a recent production heuristic.
Abstract: Streaming video algorithms dynamically select between different versions of a video to deliver the highest quality version that can be viewed without buffering over the client's connection. To improve the quality for viewers, the backing video service can generate more and/or better versions, but at a significant computational overhead. Processing all videos uploaded to Facebook in the most intensive way would require a prohibitively large cluster. Facebook's video popularity distribution is highly skewed, however, with analysis on sampled videos showing 1% of them accounting for 83% of the total watch time by users. Thus, if we can predict the future popularity of videos, we can focus the intensive processing on those videos that improve the quality of the most watch time. To address this challenge, we designed Chess, the first popularity prediction algorithm that is both scalable and accurate. Chess is scalable because, unlike the state-of-the-art approaches, it requires only constant space per video, enabling it to handle Facebook's video workload. Chess is accurate because it delivers superior predictions using a combination of historical access patterns with social signals in a unified online learning framework. We have built a video prediction service, ChessVPS, using our new algorithm that can handle Facebook's workload with only four machines. We find that re-encoding popular videos predicted by ChessVPS enables a higher percentage of total user watch time to benefit from intensive encoding, with less overhead than a recent production heuristic, e.g., 80% of watch time with one-third as much overhead.
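Chess's constant-space-per-video property can be illustrated with an exponentially decayed access counter: one float plus one timestamp summarize an arbitrarily long access history, with recent views weighted more heavily. This is a simplified stand-in for the paper's model (which also fuses social signals in an online learning framework), and the half-life value below is hypothetical:

```python
import math

class DecayingCounter:
    """Constant-space popularity estimate per video (a sketch, not Chess itself).

    Each access adds weight 1; past weight decays with half-life `half_life_s`,
    so the full access history is summarized by one float and one timestamp.
    """

    def __init__(self, half_life_s: float = 86400.0):
        self.rate = math.log(2) / half_life_s
        self.score = 0.0   # decayed access count
        self.last_t = 0.0  # timestamp of the last update

    def record_access(self, t: float) -> None:
        # Decay the old score to time t, then add this access.
        self.score = self.score * math.exp(-self.rate * (t - self.last_t)) + 1.0
        self.last_t = t

    def popularity(self, t: float) -> float:
        # Current estimate, decayed to time t without mutating state.
        return self.score * math.exp(-self.rate * (t - self.last_t))
```

Ranking videos by such a counter lets a service pick the head of the skewed popularity distribution for intensive re-encoding while storing only O(1) state per video.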

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The proposed web platform can be used to download datasets, learn how some well-known algorithms work, study the implementation of those algorithms, test the methods, and even upload indoor positioning estimations of the user's methods to check the accuracy when comparing against the results provided by other methods already included in a ranking.
Abstract: This paper presents the IndoorLoc Platform, a public repository for comparing and evaluating indoor positioning algorithms and sharing datasets. The proposed web platform can be used to download datasets, learn how some well-known algorithms work, study the implementation of those algorithms, test the methods, and even upload indoor positioning estimations of the user's methods to check the accuracy when comparing against the results provided by other methods already included in a ranking, among other functionalities. This paper also presents a comparative study of the accuracy of two well-known fingerprinting-based indoor localization algorithms using the datasets included in the platform. This comparative study can be performed using the tools included in the platform.

Book ChapterDOI
11 Sep 2017
TL;DR: In this paper, a classification engine for the reconstruction of the history of an image is presented, using machine learning techniques and a-priori knowledge acquired through image analysis, which can understand which social network platform has processed an image and the software application used to perform the image upload.
Abstract: Image forensics has already achieved great results for the source camera identification task on images. Standard approaches cannot be applied to data coming from Social Network Platforms due to the different processes involved (e.g., scaling, compression, etc.). In this paper, a classification engine for the reconstruction of the history of an image is presented. Specifically, using machine learning techniques and a-priori knowledge acquired through image analysis, we propose an automatic approach that can understand which Social Network Platform has processed an image and which software application was used to perform the image upload. The engine uses the characteristic alterations introduced by each platform as features. Results, in terms of global accuracy on a dataset of 2720 images, confirm the effectiveness of the proposed strategy.

Patent
06 Jul 2017
TL;DR: Disclosed as mentioned in this paper is a zero-knowledge distributed application configured to securely share information among groups of users having various roles, such as doctors and patients, with private keys that reside solely client side.
Abstract: Disclosed is a zero-knowledge distributed application configured to securely share information among groups of users having various roles, such as doctors and patients. Confidential information may be encrypted client-side, with private keys that reside solely client side. Encrypted collections of data may be uploaded to, and hosted by, a server that does not have access to keys suitable to decrypt the data. Other users may retrieve encrypted data from the server and decrypt some or all of the data with keys suitable to gain access to at least part of the encrypted data. The system includes a key hierarchy with multiple entry points to a top layer by which access is selectively granted to various users and keys may be recovered.

Journal ArticleDOI
TL;DR: This paper presents a general framework to model the information diffusion and utility function of each user on the proposed architecture, formulates the problem as a decentralized social utility maximization game, and develops two decentralized algorithms to solve it.
Abstract: Mobile social video sharing enables mobile users to create ultra-short video clips and instantly share them with social friends, which poses significant pressure to the content distribution infrastructure. In this paper, we propose a public cloud-assisted architecture to tackle this problem. In particular, by motivating mobile users to upload videos to the local public cloud to serve requests, and, therefore, having a permission to access friends’ videos stored in the cloud, our method can alleviate the traffic burden to the social service providers, while reducing the service latency of mobile users. First, we present a general framework to model the information diffusion and utility function of each user on the proposed architecture, and formulate the problem as a decentralized social utility maximization game. Second, we show that this problem is a supermodular game and there exists at least one socially aware Nash equilibrium (SNE). We then develop two decentralized algorithms to solve this problem. The first algorithm can find an SNE with less computation complexity, and the second algorithm can find the Pareto-optimal SNE with better performance. Finally, through extensive experiments, we demonstrate that the overall system performance can be significantly improved by exploiting the selflessness among social friends.

Proceedings ArticleDOI
01 May 2017
TL;DR: Extensive experiments demonstrate that ALSense can indeed achieve higher classification accuracy given fixed data acquisition budgets for both applications, namely, WiFi fingerprint-based indoor localization and IMU-based human activity recognition.
Abstract: An important category of mobile crowdsensing applications involve collecting sensor measurements from mobile devices and querying mobile users for annotations to build machine learning models for inference and prediction. Trade-offs between inference performance and the costs of data acquisition (both unlabeled and labeled) are not yet well understood. In this paper, we develop, ALSense, a distributed active learning framework for mobile crowdsensing. The goal is to minimize prediction errors for classification-based mobile crowdsensing tasks subject to upload and query cost constraints. Novel stream-based active learning strategies are developed to orchestrate queries of annotation data and the upload of unlabeled data from mobile devices. We evaluate the effectiveness of ALSense through two applications that can benefit from mobile crowdsensing, namely, WiFi fingerprint-based indoor localization and IMU-based human activity recognition. Extensive experiments demonstrate that ALSense can indeed achieve higher classification accuracy given fixed data acquisition budgets for both applications.
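The stream-based active learning policy described above, querying annotations only while an upload/query budget remains, can be sketched as a simple confidence-threshold rule. This is a simplified stand-in for ALSense's strategy, and the threshold and budget values are hypothetical:

```python
def select_queries(stream, budget: int, threshold: float = 0.6):
    """Stream-based selective sampling sketch (not ALSense's actual policy).

    stream: iterable of (sample_id, model_confidence) pairs arriving online.
    A sample is uploaded and sent to the user for annotation only when the
    current model is uncertain (confidence below threshold) and budget remains.
    """
    queried = []
    for sample_id, confidence in stream:
        if budget > 0 and confidence < threshold:
            queried.append(sample_id)  # upload sample + request a label
            budget -= 1
    return queried
```

Spending the budget on low-confidence samples is what lets such a policy reach higher accuracy than random querying under the same data-acquisition cost.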

Journal ArticleDOI
TL;DR: A parallel downloading approach that replicates data segments and downloads the replicated fragments in parallel to enhance overall performance is introduced; extensive experiments demonstrate the effectiveness of DPRS under most access patterns.

Patent
19 Dec 2017
TL;DR: In this paper, the authors describe systems and methods for deploying a new code block on a blockchain, where an application server may provide a user with a graphical user interface (GUI) with contract components and document components.
Abstract: Embodiments disclosed herein describe systems and methods for deploying a new code block on a blockchain. In an embodiment, an application server may provide a user with a graphical user interface (GUI) with contract components and document components. The application server may generate an assembled contract text based on the user selecting the contract and document components. The application server may determine the blockchain addresses or local addresses of smart contract components corresponding to the contract components and the documents components. The application server may generate a code block including references to the addresses of the smart contracts and the document components or containing the executable code itself and may deploy the code block to the latest valid blockchain. The application server may execute the smart contract in the code block based in response to a digital event trigger.

Proceedings ArticleDOI
22 Mar 2017
TL;DR: A privacy-preserving system that allows users to upload their resources encrypted, and a collaborative multi-party access control model allowing all the users related to a resource to participate in the specification of the access control policy is proposed.
Abstract: According to the current design of content sharing services, such as Online Social Networks (OSNs), typically (i) the service provider has unrestricted access to the uploaded resources and (ii) only the user uploading the resource is allowed to define access control permissions over it. This results in a lack of control from other users that are associated, in some way, with that resource. To cope with these issues, in this paper, we propose a privacy-preserving system that allows users to upload their resources encrypted, and we design a collaborative multi-party access control model allowing all the users related to a resource to participate in the specification of the access control policy. Our model employs a threshold-based secret sharing scheme, and by exploiting users' social relationships, sets the trusted friends of the associated users responsible to partially enforce the collective policy. Through replication of the secret shares and delegation of the access control enforcement role, our model ensures that resources are timely available when requested. Finally, our experiments demonstrate that the performance overhead of our model is minimal and that it does not significantly affect user experience.
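The threshold-based secret sharing the model employs can be illustrated with a textbook Shamir scheme over a prime field: any k of the n trusted friends holding shares can jointly reconstruct the secret that unlocks the resource, while any fewer than k learn nothing. A minimal sketch (the field prime and parameters are illustrative, not from the paper):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it (Shamir)."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Replicating shares among additional trusted friends, as the paper proposes, keeps the resource available even when some share-holders are offline, since only k of them need to respond.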

Journal ArticleDOI
TL;DR: In this article, the authors proposed the use of simple and low cost piezoelectric patch type force sensors for logistic applications to ensure safety of the package while also detecting damage suffered.
Abstract: We propose the use of simple, low-cost piezoelectric patch-type force sensors for logistics applications to ensure the safety of a package while also detecting any damage suffered. The sensors are connected to a prototype readout system that can record the data and transfer it wirelessly via a Bluetooth module. The data can be received by an in-vehicle telematics device that can upload it directly to the cloud database, thus allowing real-time monitoring of package condition during transportation. This logistics management model would help improve the quality of the services provided by the logistics company while also earning consumer credibility. We therefore believe that these patch-type force sensors can realistically be implemented in logistics in the near future.