
Showing papers on "Upload" published in 2018


Journal ArticleDOI
TL;DR: Improvements include the annotation of the context genes, which is now based on a fast blast against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt.
Abstract: Interest in secondary metabolites such as RiPPs (ribosomally synthesized and posttranslationally modified peptides) is increasing worldwide. To facilitate the research in this field we have updated our mining web server. BAGEL4 is faster than its predecessor and is now fully independent from ORF-calling. Gene clusters of interest are discovered using the core-peptide database and/or through HMM motifs that are present in associated context genes. The databases used for mining have been updated and extended with literature references and links to UniProt and NCBI. Additionally, we have included automated promoter and terminator prediction and the option to upload RNA expression data, which can be displayed along with the identified clusters. Further improvements include the annotation of the context genes, which is now based on a fast blast against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt. Overall BAGEL4 provides the user with more information through a user-friendly web-interface which simplifies data evaluation. BAGEL4 is freely accessible at http://bagel4.molgenrug.nl.

433 citations


Proceedings ArticleDOI
TL;DR: In this article, a federated learning (FL) protocol for heterogeneous clients in a mobile edge computing (MEC) network is proposed. The authors consider the inefficiency that resource-limited clients introduce into the overall training process and propose a new client-selection protocol, FedCS, to mitigate it.
Abstract: We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol are free from disclosing own private data, the overall training process can become inefficient when some clients are with limited computational resources (i.e. requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly-available large-scale image datasets to train deep neural networks on MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time compared to the original FL protocol.
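The FedCS round described above can be sketched as follows. The scalar "model", the toy gradient step, and the per-client `est_time` field (update time plus upload time) are illustrative assumptions, not the paper's actual formulation:

```python
def client_update(model, data, lr=0.1):
    # Toy local training: one gradient step fitting a scalar model to the
    # client's data mean (a real client trains a neural network locally).
    grad = sum(model - x for x in data) / len(data)
    return model - lr * grad

def fedcs_round(model, clients, deadline):
    # Resource-aware client selection: only clients whose estimated
    # update-plus-upload time fits the round deadline participate.
    selected = [c for c in clients if c["est_time"] <= deadline]
    updates = [client_update(model, c["data"]) for c in selected]
    if not updates:
        return model, 0
    # Server aggregates the client updates (plain federated averaging).
    return sum(updates) / len(updates), len(selected)

clients = [
    {"data": [1.0, 2.0], "est_time": 5},
    {"data": [3.0], "est_time": 50},   # slow client, skipped this round
    {"data": [2.0, 4.0], "est_time": 8},
]
model, n_selected = fedcs_round(0.0, clients, deadline=10)
```

The slow client is excluded rather than stalling the round, which is the source of FedCS's speedup over the original FL protocol.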

369 citations


Posted Content
TL;DR: Federated Dropout is introduced, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation.
Abstract: Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation. To address this issue, we introduce two novel strategies to reduce communication costs: (1) the use of lossy compression on the global model sent server-to-client; and (2) Federated Dropout, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation. We empirically show that these strategies, combined with existing compression approaches for client-to-server communication, collectively provide up to a $14\times$ reduction in server-to-client communication, a $1.7\times$ reduction in local computation, and a $28\times$ reduction in upload communication, all without degrading the quality of the final model. We thus comprehensively reduce FL's impact on client device resources, allowing higher capacity models to be trained, and a more diverse set of users to be reached.
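A minimal sketch of the sub-model extraction and merge that Federated Dropout relies on; representing the model as a flat weight list and choosing the kept indices by hand are illustrative assumptions:

```python
def extract_submodel(weights, keep_idx):
    # Server sends only the weights for the kept units, shrinking the
    # server-to-client download and the client's local computation.
    return [weights[i] for i in keep_idx]

def merge_submodel(weights, keep_idx, sub):
    # The client's trained sub-weights are written back into the
    # corresponding positions of the global model.
    out = list(weights)
    for i, w in zip(keep_idx, sub):
        out[i] = w
    return out

global_w = [0.0, 0.0, 0.0, 0.0]
keep = [0, 2]                      # this client trains on half the units
sub = extract_submodel(global_w, keep)
sub = [w + 1.0 for w in sub]       # toy stand-in for local training
global_w = merge_submodel(global_w, keep, sub)
```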

274 citations


Journal ArticleDOI
TL;DR: A blockchain-based security architecture for distributed cloud storage, where users can divide their own files into encrypted data chunks, and upload those data chunks randomly into the P2P network nodes that provide free storage capacity is proposed.

155 citations


Proceedings ArticleDOI
11 Oct 2018
TL;DR: IONN (Incremental Offloading of Neural Network), a partitioning-based DNN offloading technique for edge computing that divides a client's DNN model into a few partitions and uploads them to the edge server one by one.
Abstract: Current wisdom to run computation-intensive deep neural network (DNN) on resource-constrained mobile devices is allowing the mobile clients to make DNN queries to central cloud servers, where the corresponding DNN models are pre-installed. Unfortunately, this centralized, cloud-based DNN offloading is not appropriate for emerging decentralized cloud infrastructures (e.g., cloudlet, edge/fog servers), where the client may send computation requests to any nearby server located at the edge of the network. To use such a generic edge server for DNN execution, the client should first upload its DNN model to the server, yet it can seriously delay query processing due to long uploading time. This paper proposes IONN (Incremental Offloading of Neural Network), a partitioning-based DNN offloading technique for edge computing. IONN divides a client's DNN model into a few partitions and uploads them to the edge server one by one. The server incrementally builds the DNN model as each DNN partition arrives, allowing the client to start offloading partial DNN execution even before the entire DNN model is uploaded. To decide the best DNN partitions and the uploading order, IONN uses a novel graph-based algorithm. Our experiments show that IONN significantly improves query performance in realistic hardware configurations and network conditions.
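The upload-ordering decision can be illustrated with a greedy ratio rule: send first the partitions that buy the most offloading speedup per byte uploaded. This is a stand-in assumption for exposition; IONN's actual algorithm solves a shortest-path problem over an execution graph, and the partition names and numbers below are invented:

```python
def upload_order(partitions):
    # Greedy heuristic: highest offloading benefit per byte goes first,
    # so partial offloading pays off as early as possible.
    return sorted(partitions, key=lambda p: p["speedup"] / p["size"],
                  reverse=True)

parts = [
    {"name": "conv1-3", "size": 4.0, "speedup": 8.0},   # ratio 2.0
    {"name": "conv4-7", "size": 10.0, "speedup": 5.0},  # ratio 0.5
    {"name": "fc", "size": 1.0, "speedup": 3.0},        # ratio 3.0
]
order = [p["name"] for p in upload_order(parts)]
```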

134 citations


Proceedings ArticleDOI
01 Dec 2018
TL;DR: A Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training, and which outperforms other learning-based caching algorithms such as m-epsilon-greedy and Thompson sampling in terms of cache efficiency.
Abstract: Content caching is a promising approach in edge computing to cope with the explosive growth of mobile data on 5G networks, where contents are typically placed on local caches for fast and repetitive data access. Due to the capacity limit of caches, it is essential to predict the popularity of files and cache those popular ones. However, the fluctuating popularity of files makes the prediction a highly challenging task. To tackle this challenge, many recent works propose learning-based approaches which gather the users' data centrally for training, but they bring a significant issue: users may not trust the central server and thus hesitate to upload their private data. In order to address this issue, we propose a Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training. The FPCC is based on a hierarchical architecture in which the server aggregates the users' updates using federated averaging, and each user performs training on its local data using hybrid filtering on stacked autoencoders. The experimental results demonstrate that, without gathering users' private data, our scheme still outperforms other learning-based caching algorithms such as m-epsilon-greedy and Thompson sampling in terms of cache efficiency.
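The server-side aggregation step mentioned above (federated averaging) can be sketched in a few lines; representing each user's autoencoder update as a flat weight vector paired with its local sample count is an illustrative simplification:

```python
def federated_average(updates):
    # FedAvg-style aggregation: weight each client's vector by its local
    # sample count, as the FPCC server does with users' model updates.
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Two users: one with 1 local sample, one with 3.
avg = federated_average([([1.0, 2.0], 1), ([3.0, 4.0], 3)])
```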

116 citations


Journal ArticleDOI
TL;DR: A novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space and a rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis.
Abstract: Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.

113 citations


Journal ArticleDOI
TL;DR: This paper proposes a fine-grained EHR access control scheme which is proven secure in the standard model under the decisional parallel bilinear Diffie–Hellman exponent assumption and is very suitable for mobile cloud computing.

113 citations


Posted Content
TL;DR: The simulation results show that the integrated network significantly outperforms the non-integrated ones in terms of the sum data rate, and the influence of the traffic load and LEO constellation on the system performance is also discussed.
Abstract: In this paper, we propose a terrestrial-satellite network (TSN) architecture to integrate the ultra-dense low earth orbit (LEO) networks and the terrestrial networks to achieve efficient data offloading. In TSN, each ground user can access the network over C-band via a macro cell, a traditional small cell, or a LEO-backhauled small cell (LSC). Each LSC is then scheduled to upload the received data via multiple satellites over Ka-band. We aim to maximize the sum data rate and the number of accessed users while satisfying the varying backhaul capacity constraints jointly determined by the LEO satellite based backhaul links. The optimization problem is then decomposed into two closely connected subproblems and solved by our proposed matching algorithms. Simulation results show that the integrated network significantly outperforms the non-integrated ones in terms of the sum data rate. The influence of the traffic load and LEO constellation on the system performance is also discussed.

107 citations


Journal ArticleDOI
TL;DR: The adoption and use of SynBioHub, a community-driven effort, has the potential to overcome the reproducibility challenge across laboratories by helping to address the current lack of information about published designs.
Abstract: The SynBioHub repository (https://synbiohub.org) is an open-source software project that facilitates the sharing of information about engineered biological systems. SynBioHub provides computational access for software and data integration, and a graphical user interface that enables users to search for and share designs in a Web browser. By connecting to relevant repositories (e.g., the iGEM repository, JBEI ICE, and other instances of SynBioHub), the software allows users to browse, upload, and download data in various standard formats, regardless of their location or representation. SynBioHub also provides a central reference point for other resources to link to, delivering design information in a standardized format using the Synthetic Biology Open Language (SBOL). The adoption and use of SynBioHub, a community-driven effort, has the potential to overcome the reproducibility challenge across laboratories by helping to address the current lack of information about published designs.

96 citations


Proceedings ArticleDOI
20 May 2018
TL;DR: An efficient privacy-preserving contact tracing for infection detection (EPIC) scheme, which enables users to securely upload their data to the server; if one user later becomes infected, other users can check whether they have ever been in contact with the infected user in the past.
Abstract: The world has experienced many epidemic diseases in the past; SARS, H1N1, and Ebola are some examples of these diseases. When those diseases break out, they spread very quickly among people and it becomes a challenge to trace the source in order to control the disease. In this paper, we propose an efficient privacy-preserving contact tracing for infection detection (EPIC) scheme which enables users to securely upload their data to the server; if one user later becomes infected, other users can check whether they have ever been in contact with the infected user in the past. The process is done privately and without disclosing any unnecessary information to the server. Our scheme uses a matching score to represent the result of the contact tracing, and uses a weight-based matching method to increase the accuracy of the score. In addition, we have developed an adaptive scanning method to optimize the power consumption of the wireless scanning process. Further, we evaluate our scheme in a real experiment and show that the user's privacy is preserved, and that it achieves 93% accuracy in detecting contacts based on the matching score in an energy-efficient way.
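The weight-based matching score can be illustrated as a weighted overlap between two users' scan histories. The beacon identifiers and weights below are invented for illustration; EPIC's actual protocol messages and weighting (e.g. signal-strength-derived) differ:

```python
def contact_score(user_scans, infected_scans, weights):
    # Each beacon observed by BOTH users contributes its weight to the
    # score; the score is normalized by the infected user's total weight.
    common = set(user_scans) & set(infected_scans)
    matched = sum(weights.get(b, 1.0) for b in common)
    possible = sum(weights.get(b, 1.0) for b in set(infected_scans))
    return matched / possible if possible else 0.0

# User and infected person both saw beacon "b"; only the infected
# person saw "c". Weights model proximity confidence.
score = contact_score(["a", "b"], ["b", "c"], {"b": 2.0, "c": 1.0})
```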

Proceedings ArticleDOI
20 May 2018
TL;DR: It is shown how the adversary can infer users' full phone numbers knowing just their email address, determine whether a particular user visited a website, and de-anonymize all the visitors to a website by inferring their phone numbers en masse.
Abstract: Sites like Facebook and Google now serve as de facto data brokers, aggregating data on users for the purpose of implementing powerful advertising platforms. Historically, these services allowed advertisers to select which users see their ads via targeting attributes. Recently, most advertising platforms have begun allowing advertisers to target users directly by uploading the personal information of the users who they wish to advertise to (e.g., their names, email addresses, phone numbers, etc.); these services are often known as custom audiences. Custom audiences effectively represent powerful linking mechanisms, allowing advertisers to leverage any PII (e.g., from customer data, public records, etc.) to target users. In this paper, we focus on Facebook's custom audience implementation and demonstrate attacks that allow an adversary to exploit the interface to infer users' PII as well as to infer their activity. Specifically, we show how the adversary can infer users' full phone numbers knowing just their email address, determine whether a particular user visited a website, and de-anonymize all the visitors to a website by inferring their phone numbers en masse. These attacks can be conducted without any interaction with the victim(s), cannot be detected by the victim(s), and do not require the adversary to spend money or actually place an ad. We propose a simple and effective fix to the attacks based on reworking the way Facebook de-duplicates uploaded information. Facebook's security team acknowledged the vulnerability and has put into place a fix that is a variant of the fix we propose. Overall, our results indicate that advertising platforms need to carefully consider the privacy implications of their interfaces.

Journal ArticleDOI
TL;DR: A lightweight SPE (LSPE) scheme with semantic security for CWSNs, which reduces a large number of the computation-intensive operations that are adopted in previous works; thus, LSPE has search performance close to that of some practical searchable symmetric encryption schemes.
Abstract: The industrial Internet of Things is flourishing, which is unprecedentedly driven by the rapid development of wireless sensor networks (WSNs) with the assistance of cloud computing. The new wave of technology will give rise to new risks to cyber security, particularly the data confidentiality in cloud-assisted WSNs (CWSNs). Searchable public-key encryption (SPE) is a promising method to address this problem. In theory, it allows sensors to upload public-key ciphertexts to the cloud, and the owner of these sensors can securely delegate a keyword search to the cloud and retrieve the intended data while maintaining data confidentiality. However, all existing and semantically secure SPE schemes have expensive costs in terms of generating ciphertexts and searching keywords. Hence, this paper proposes a lightweight SPE (LSPE) scheme with semantic security for CWSNs. LSPE reduces a large number of the computation-intensive operations that are adopted in previous works; thus, LSPE has search performance close to that of some practical searchable symmetric encryption schemes. In addition, LSPE saves considerable time and energy costs of sensors for generating ciphertexts. Finally, we experimentally test LSPE and compare the results with some previous works to quantitatively demonstrate the above advantages.

Journal ArticleDOI
TL;DR: The proposed Select Maximum Saved Energy First (SMSEF) algorithm can effectively help mobiles to save energy in the MEC system.
Abstract: Mobile edge computing (MEC) is a novel technique that can reduce mobiles' computational burden by task offloading, which emerges as a promising paradigm to provide computing capabilities in close proximity to mobile users. In this paper, we study the scenario where multiple mobiles upload tasks to a MEC server in a single cell, where allocating the limited server resources and wireless channels among mobiles becomes a challenge. We formulate the optimization problem for the energy saved on mobiles with the tasks being divisible, and utilize a greedy choice to solve the problem. A Select Maximum Saved Energy First (SMSEF) algorithm is proposed to realize the solving process. We examined the saved energy for different numbers of nodes and channels, and the results show that the proposed scheme can effectively help mobiles to save energy in the MEC system.
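The greedy choice named in the abstract can be sketched directly: offload tasks in decreasing order of saved energy until the wireless channels run out. This is a simplified sketch; the paper's formulation also exploits task divisibility and server resource limits, which are omitted here:

```python
def smsef(tasks, channels):
    # Select Maximum Saved Energy First: greedily pick the task that saves
    # the most energy, one channel per offloaded task, until channels end.
    chosen = []
    for t in sorted(tasks, key=lambda t: t["saved"], reverse=True):
        if channels == 0:
            break
        chosen.append(t["id"])
        channels -= 1
    return chosen

tasks = [{"id": "t1", "saved": 5}, {"id": "t2", "saved": 9},
         {"id": "t3", "saved": 3}]
selected = smsef(tasks, channels=2)
```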

Posted Content
TL;DR: In this article, the authors proposed a new capacity-achieving code for the private information retrieval (PIR) problem, and showed that it has the minimum message size and the minimum upload cost (being roughly linear in the number of messages).
Abstract: We propose a new capacity-achieving code for the private information retrieval (PIR) problem, and show that it has the minimum message size (being one less than the number of servers) and the minimum upload cost (being roughly linear in the number of messages) among a general class of capacity-achieving codes, and in particular, among all capacity-achieving linear codes. Different from existing code constructions, the proposed code is asymmetric, and this asymmetry appears to be the key factor leading to the optimal message size and the optimal upload cost. The converse results on the message size and the upload cost are obtained by a strategic analysis of the information theoretic proof of the PIR capacity, from which a set of critical properties of any capacity-achieving code in the code class of interest is extracted. The symmetry structure of the PIR problem is then analyzed, which allows us to construct symmetric codes from asymmetric ones, yielding a meaningful bridge between the proposed code and existing ones in the literature.

Journal ArticleDOI
TL;DR: Ms2lda.org is a web application that allows users to upload their data, run MS2LDA analyses and explore the results through interactive visualizations, and the user can also decompose a data set onto predefined Mass2Motifs.
Abstract: Motivation We recently published MS2LDA, a method for the decomposition of sets of molecular fragment data derived from large metabolomics experiments. To make the method more widely available to the community, here we present ms2lda.org, a web application that allows users to upload their data, run MS2LDA analyses and explore the results through interactive visualizations. Results Ms2lda.org takes tandem mass spectrometry data in many standard formats and allows the user to infer the sets of fragment and neutral loss features that co-occur together (Mass2Motifs). As an alternative workflow, the user can also decompose a data set onto predefined Mass2Motifs. This is accomplished through the web interface or programmatically from our web service. Availability and implementation The website can be found at http://ms2lda.org, while the source code is available at https://github.com/sdrogers/ms2ldaviz under the MIT license. Supplementary information Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: This paper proposes a key-updating and authenticator-evolving mechanism with zero-knowledge privacy of the stored files for secure cloud data auditing, which incorporates zero knowledge proof systems, proxy re-signatures and homomorphic linear authenticators and develops a prototype implementation of the protocol.

Journal ArticleDOI
27 Dec 2018
TL;DR: This work proposes a novel approach, called DeepType, to personalize text input with better privacy, and proposes a set of techniques that effectively reduce the computation cost of training deep learning models on mobile devices at the cost of negligible accuracy loss.
Abstract: Mobile users spend an extensive amount of time on typing. A more efficient text input instrument brings a significant enhancement of user experience. Deep learning techniques have been recently applied to suggesting the next words of input, but to achieve more accurate predictions, these models should be customized for individual users. Personalization, however, often comes at the expense of privacy. Existing solutions require users to upload the historical logs of their input text to the cloud so that a deep learning predictor can be trained. In this work, we propose a novel approach, called DeepType, to personalize text input with better privacy. The basic idea is intuitive: train deep learning predictors on the device instead of on the cloud, so that the model is personalized while private data never leaves the device. With DeepType, a global model is first trained on the cloud using massive public corpora, and personalization is done by incrementally customizing the global model with data on individual devices. We further propose a set of techniques that effectively reduce the computation cost of training deep learning models on mobile devices at the cost of negligible accuracy loss. Experiments using real-world text input from millions of users demonstrate that DeepType significantly improves the input efficiency for individual users, and its incurred computation and energy costs are within the performance and battery restrictions of typical COTS mobile devices.
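The cloud-global, device-local split can be illustrated with a toy next-word predictor: a global bigram table stands in for the cloud-trained model, and on-device counts stand in for incremental personalization. This is purely illustrative; DeepType fine-tunes a neural model, not count tables:

```python
class NextWordSketch:
    # Toy stand-in for DeepType's two-stage training: start from a
    # "cloud-trained" global bigram table, then personalize on device.
    def __init__(self, global_bigrams):
        self.counts = {k: dict(v) for k, v in global_bigrams.items()}

    def personalize(self, local_text):
        # Incremental on-device update: local text never leaves the device.
        words = local_text.split()
        for a, b in zip(words, words[1:]):
            self.counts.setdefault(a, {})
            self.counts[a][b] = self.counts[a].get(b, 0) + 1

    def predict(self, word):
        cands = self.counts.get(word, {})
        return max(cands, key=cands.get) if cands else None

predictor = NextWordSketch({"good": {"day": 2}})
predictor.personalize("good morning good morning good morning")
```

After personalization, "morning" (local count 3) outweighs the global suggestion "day" (count 2) for this user.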

Proceedings ArticleDOI
15 Nov 2018
TL;DR: This work proposes a distributed DNN architecture that learns end-to-end how to represent the raw sensor data and send it over the network such that it meets the eventual sensing task's needs.
Abstract: We believe that most future video uploaded over the network will be consumed by machines for sensing tasks such as automated surveillance and mapping rather than for human consumption. Today's systems typically collect raw data from distributed sensors, such as drones, with the computer vision logic implemented in the cloud using deep neural networks (DNNs). They use standard video encoding techniques, send it over the network, and then decompress it at the cloud before using the vision DNN. In other words, data encoding and distribution is decoupled from the sensing goal. This is bandwidth inefficient because video encoding schemes, such as MPEG4, might send data tailored for human perception but irrelevant for the overall sensing goal. We argue that data collection and distribution mechanisms should be co-designed with the eventual sensing objective. Specifically, we propose a distributed DNN architecture that learns end-to-end how to represent the raw sensor data and send it over the network such that it meets the eventual sensing task's needs. Such a design naturally adapts to varying network bandwidths between the sensors and the cloud, as well as automatically sends task-appropriate data features.

Journal ArticleDOI
TL;DR: xiSPEC as discussed by the authors is a web-based spectrum viewer for visualizing, analyzing and sharing mass spectrometry data, which supports peptide-spectrum matches from standard proteomics and cross-linking experiments.
Abstract: We present xiSPEC, a standard compliant, next-generation web-based spectrum viewer for visualizing, analyzing and sharing mass spectrometry data. Peptide-spectrum matches from standard proteomics and cross-linking experiments are supported. xiSPEC is to date the only browser-based tool supporting the standardized file formats mzML and mzIdentML defined by the proteomics standards initiative. Users can either upload data directly or select files from the PRIDE data repository as input. xiSPEC allows users to save and share their datasets publicly or password protected for providing access to collaborators or readers and reviewers of manuscripts. The identification table features advanced interaction controls and spectra are presented in three interconnected views: (i) annotated mass spectrum, (ii) peptide sequence fragmentation key and (iii) quality control error plots of matched fragments. Highlighting or selecting data points in any view is represented in all other views. Views are interactive scalable vector graphic elements, which can be exported, e.g. for use in publication. xiSPEC allows for re-annotation of spectra for easy hypothesis testing by modifying input data. xiSPEC is freely accessible at http://spectrumviewer.org and the source code is openly available on https://github.com/Rappsilber-Laboratory/xiSPEC.

Journal ArticleDOI
TL;DR: Launched in July 2017, HEAT Plus allows users to upload their own databases and assess inequalities at the global, national or subnational level for a range of (health) indicators and dimensions of inequality.
Abstract: As a key step in advancing the sustainable development goals, the World Health Organisation (WHO) has placed emphasis on building capacity for measuring and monitoring health inequalities. A number of resources have been developed, including the Health Equity Assessment Toolkit (HEAT), a software application that facilitates the assessment of within-country health inequalities. Following user demand, an Upload Database Edition of HEAT, HEAT Plus, was developed. Launched in July 2017, HEAT Plus allows users to upload their own databases and assess inequalities at the global, national or subnational level for a range of (health) indicators and dimensions of inequality. The software is open-source, operates on Windows and Macintosh platforms and is readily available for download from the WHO website. The flexibility of HEAT Plus makes it a suitable tool for both global and national inequality assessments. Further developments will include interactive graphs, maps and translation into different languages.

Journal ArticleDOI
TL;DR: This paper considers a multi-source CB-PHR system in which multiple data providers are authorized by individual data owners to upload their personal health data to an untrusted public cloud and proposes a novel Multi-Source Order-Preserving Symmetric Encryption (MOPSE) scheme, which enables efficient and privacy-preserving query processing.
Abstract: Cloud-based Personal Health Record systems (CB-PHR) have great potential in facilitating the management of individual health records. Security and privacy concerns are among the main obstacles for the wide adoption of CB-PHR systems. In this paper, we consider a multi-source CB-PHR system in which multiple data providers, such as hospitals and physicians are authorized by individual data owners to upload their personal health data to an untrusted public cloud. The health data are submitted in an encrypted form to ensure data security, and each data provider also submits encrypted data indexes to enable queries over the encrypted data. We propose a novel Multi-Source Order-Preserving Symmetric Encryption (MOPSE) scheme whereby the cloud can merge the encrypted data indexes from multiple data providers without knowing the index content. MOPSE enables efficient and privacy-preserving query processing in that a data user can submit a single data query, the cloud can process over the encrypted data from all related data providers without knowing the query content. We also propose an enhanced scheme, MOPSE+, to more efficiently support the data queries by hierarchical data providers. Extensive analysis and experiments over real data sets demonstrate the efficacy and efficiency of MOPSE and MOPSE+.
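The core property MOPSE relies on, that ciphertext comparisons mirror plaintext order so the cloud can merge and query indexes without seeing the data, can be illustrated with a toy order-preserving map. This is an illustrative sketch only: `key * v` plus deterministic noise smaller than `key` is strictly increasing over the integers, but real OPE constructions (and the paper's multi-source MOPSE scheme) are far more involved and far stronger:

```python
import hashlib

def toy_ope(v, key=1000):
    # Deterministic noise in [0, key) keeps the map strictly increasing:
    # for u < v, key*(v-u) >= key > noise difference, so order survives.
    noise = int(hashlib.sha256(f"{key}:{v}".encode()).hexdigest(), 16) % key
    return key * v + noise

plain = [3, 1, 7, 5]
cts = [toy_ope(v) for v in plain]
# Sorting by ciphertext recovers plaintext order without decryption.
recovered = [p for _, p in sorted(zip(cts, plain))]
```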

Proceedings ArticleDOI
01 Oct 2018
TL;DR: An edge-based attack detection in ridesharing services, namely SafeShareRide, which can detect dangerous events happening in the vehicle in near real time and is implemented on both drivers' and passengers' smartphones.
Abstract: Ridesharing services, such as Uber and Didi, are enjoying great popularity; however, a big challenge remains in guaranteeing the safety of passenger and driver. State-of-the-art work has primarily adopted the cloud model, where data collected through end devices on vehicles are uploaded to and processed in the cloud. However, data such as video can be too large to be uploaded onto the cloud in real time. When a vehicle is moving, the network communication can become unstable, leading to high latency for data uploading. In addition, the cost of huge data transfer and storage is a big concern from a business point of view. As edge computing enables more powerful computing end devices, it is possible to design a latency-guaranteed framework to ensure in-vehicle safety. In this paper, we propose an edge-based attack detection in ridesharing services, namely SafeShareRide, which can detect dangerous events happening in the vehicle in near real time. SafeShareRide is implemented on both drivers' and passengers' smartphones. The detection of SafeShareRide consists of three stages: speech recognition, driving behavior detection, and video capture and analysis. Abnormal events detected during the stages of speech recognition or driving behavior detection will trigger the video capture and analysis in the third stage. The video data processing is also redesigned: video compression is conducted at the edge to save upload bandwidth while video analysis is conducted in the cloud. We implement the SafeShareRide system by leveraging open source algorithms. Our experiments include a performance comparison between SafeShareRide and other edge-based and cloud-based approaches, CPU usage and memory usage of each detection stage, and a performance comparison between stationary and moving scenarios. Finally, we summarize several insights into smartphone based edge computing systems.

Patent
28 Aug 2018
TL;DR: A blockchain-based method for secure file storage and sharing: a user encrypts and uploads files to obtain a file pointer, and obtains part of the file as an incentive after an accounting node writes information such as the formulated access policy and the pointer into a blockchain ledger.
Abstract: The invention belongs to the technical field of information retrieval and database structures, and discloses a method for storing and sharing secure files based on a blockchain, using blockchain technology to realize secure storage and sharing of files. A user encrypts and uploads files to obtain a file pointer, and obtains part of the file as an incentive after the accounting node writes information such as the formulated access policy and the pointer into a blockchain ledger. Other users who satisfy the access policy may acquire the file key from an adjacent accounting node or the file owner to decrypt the files and finally obtain the plaintext file. The invention ensures the security of user data and is simple and convenient for the user, while the public-key cryptography employed makes the files more secure. The immutability of the blockchain ledger further ensures the complete availability of the files, allows the user to formulate different access policies for different files, and gives the user full control of the files while sharing them.
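The encrypt-upload-pointer-policy flow described above can be sketched in a few lines. This is a deliberately simplified model under stated assumptions: the ledger is a plain list, the "cipher" is a SHA-256 counter-mode keystream (illustrative only; a real system would use an AEAD cipher such as AES-GCM), and all function names are invented.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Counter-mode keystream from SHA-256 (for illustration only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

ledger = []  # stand-in for the blockchain account book

def upload(owner, key, data, policy):
    # Encrypt, derive a pointer to the (off-chain) ciphertext, and
    # record {pointer, policy} on the ledger.
    ct = encrypt(key, data)
    pointer = hashlib.sha256(ct).hexdigest()
    ledger.append({"owner": owner, "pointer": pointer, "policy": policy})
    return ct, pointer

def retrieve(user, ct, pointer, key):
    # A user who satisfies the access policy gets the key and decrypts.
    entry = next(e for e in ledger if e["pointer"] == pointer)
    if not entry["policy"](user):
        raise PermissionError("access policy not satisfied")
    return decrypt(key, ct)

ct, ptr = upload("alice", b"k1", b"secret report", policy=lambda u: u == "bob")
plain = retrieve("bob", ct, ptr, b"k1")   # → b"secret report"
```

The hash-of-ciphertext pointer also gives integrity for free: any tampering with the stored ciphertext changes its hash and no longer matches the on-ledger pointer.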

Proceedings ArticleDOI
17 Jun 2018
TL;DR: In this paper, a simple PIR protocol for graph-based replication systems is proposed, which guarantees perfect secrecy against any set of colluding servers that does not induce a cycle.
Abstract: Replication is prevalent in both theory and practice as a means for obtaining robustness in distributed storage systems. A system in which every data entry is stored on two separate servers gives rise to a graph structure in a natural way, and the combinatorial properties of this graph shed light on the possible features of the system. One possible feature of interest, which has recently gained renewed attention, is private information retrieval (PIR). A PIR protocol enables a user to obtain a data entry from a storage system without revealing the identity of the requested entry to sets of colluding servers. In this paper, we suggest a simple PIR protocol for graph-based replication systems, which guarantees perfect secrecy against any set of colluding servers that does not induce a cycle. Furthermore, it is shown that the secrecy deteriorates gracefully with the number of cycles in the colluding set, and that the upload complexity can be reduced for graphs of certain specialized structure.
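For readers new to PIR, the classic textbook two-server scheme (not the paper's graph-based protocol) illustrates the core idea: the user sends each server a query that is individually uniformly random, yet the two answers combine to reveal the desired entry.

```python
import secrets

def pir_query(n: int, i: int):
    # Pick a uniformly random subset S of indices (as a 0/1 mask) for
    # server 1, and S with index i flipped for server 2. Each mask
    # alone is uniformly random, so neither server learns i.
    mask1 = [secrets.randbelow(2) for _ in range(n)]
    mask2 = mask1.copy()
    mask2[i] ^= 1
    return mask1, mask2

def pir_answer(db, mask):
    # Each server XORs together the entries its mask selects.
    acc = 0
    for bit, m in zip(db, mask):
        if m:
            acc ^= bit
    return acc

# Toy usage: recover db[3] without either server learning the index.
db = [1, 0, 1, 1, 0, 1, 0, 0]
m1, m2 = pir_query(len(db), 3)
recovered = pir_answer(db, m1) ^ pir_answer(db, m2)   # → db[3] == 1
```

The two answers differ only in whether entry `i` was included in the XOR, so their XOR is exactly `db[i]`; secrecy breaks only if the two servers collude, which is precisely the failure mode the paper's graph-structured protocol is designed to reason about.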

Proceedings ArticleDOI
01 Nov 2018
TL;DR: This study uses NS3 to compare, under a selfish-mining attack, how different block sizes and block intervals affect block propagation time, setup time, and average upload/download speed, in order to find out what influences the transaction process.
Abstract: Bitcoin is one of the first implementations of cryptocurrency, or digital currency. Its use has increased in recent years along with the growing volume of online transactions that require digital currency. A blockchain is a digital ledger that allows parties to transact without a central authority acting as a trusted intermediary. A number of consensus protocols have been proposed for blockchains, including Proof of Stake and Proof of Elapsed Time, but most existing blockchains use the compute-based Proof of Work (PoW) mechanism. Transactions in Bitcoin are secured by using blocks with a hash-based PoW mechanism. PoW is a functional protocol that validates every incoming datum to counter spam attacks and Distributed Denial of Service (DDoS) attacks. Blockchain technology can store transaction history in a decentralized manner, where each connected computer stores exactly the same data. To achieve an optimal transaction process, it is necessary to evaluate the performance of the PoW blockchain and find out what influences the transaction process. In this study, we compare simulation results for different block sizes and block intervals with respect to block propagation time, setup time, and average upload/download speed under a selfish-mining attack, using NS3. The experimental results show that the smaller the block interval and block size, the smaller the block propagation time. This means transactions are confirmed to peers on the network faster, and it also affects the upload/download speeds.
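The hash-based PoW mechanism the abstract refers to can be shown in a minimal sketch: mining searches for a nonce whose hash falls below a difficulty target, and verification is a single hash. The block format and difficulty value below are toy assumptions, not Bitcoin's actual parameters (Bitcoin uses double SHA-256 and a much higher difficulty).

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    # Search for a nonce whose SHA-256 digest has at least
    # `difficulty_bits` leading zero bits, i.e. is below the target.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Verification costs one hash, regardless of how hard mining was.
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Toy usage: ~2^12 hash attempts expected at 12 bits of difficulty.
nonce = mine(b"block#1|txs...", difficulty_bits=12)
assert verify(b"block#1|txs...", nonce, 12)
```

The asymmetry (expensive to produce, one hash to check) is what makes PoW useful against spam and DDoS: each additional difficulty bit doubles the expected mining work while leaving verification cost unchanged.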

Proceedings ArticleDOI
10 Jun 2018
TL;DR: The TILES Audio Recorder (TAR) is proposed: an unobtrusive and scalable solution to track audio activity using an affordable miniature mobile device with an open-source app; experiments show that performing feature extraction only during speech segments greatly extends battery life.
Abstract: Most existing speech activity trackers used in human-subject studies are bulky, record raw audio content which invades participant privacy, have complicated hardware and non-customizable software, and are too expensive for large-scale deployment. The present effort seeks to overcome these challenges by proposing the TILES Audio Recorder (TAR): an unobtrusive and scalable solution to track audio activity using an affordable miniature mobile device with an open-source app. For this recorder, we make use of the Jelly Pro, a pocket-sized Android smartphone, and employ two open-source toolkits: openSMILE and Tarsos-DSP. Tarsos-DSP provides a Voice Activity Detection capability that triggers openSMILE to extract and save audio features only when the subject is speaking. Experiments show that performing feature extraction only during speech segments greatly increases battery life, enabling the subject to wear the recorder up to 10 hours at a time. Furthermore, recording experiments with ground-truth clean speech show minimal distortion of the recorded features, as measured by root mean-square error and cosine distance. The TAR app further provides subjects with a simple user interface that allows them to pause feature extraction at any time and easily upload data to a remote server.
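The VAD-gated extraction design (the battery-saving core of TAR) can be sketched with a simple energy-based voice activity detector. This is a generic illustration, not TAR's actual pipeline: Tarsos-DSP's VAD and openSMILE's feature set are far richer, and the threshold and frame size here are arbitrary assumptions.

```python
import math

def frame_energy(frame):
    # Mean squared amplitude of one audio frame.
    return sum(s * s for s in frame) / len(frame)

def gated_extract(frames, threshold, extract):
    # Run the expensive feature extractor only on frames the VAD marks
    # as speech; silent frames are skipped entirely, saving compute.
    feats = []
    for frame in frames:
        if frame_energy(frame) >= threshold:
            feats.append(extract(frame))
    return feats

# Toy usage: a sinusoidal "speech" frame passes the gate; near-silent
# frames do not, so the extractor runs once instead of three times.
silence = [0.01] * 160
speech = [0.5 * math.sin(0.3 * t) for t in range(160)]
feats = gated_extract([silence, speech, silence], threshold=0.01,
                      extract=lambda f: round(frame_energy(f), 4))
```

Because raw audio is discarded and only features of speech frames survive, the same gate that saves battery also limits how much private content is ever stored, which matches the privacy motivation in the abstract.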

Journal ArticleDOI
TL;DR: In this article, the authors argue for a client-server cooperation-based approach to achieve the three objectives of efficiency, fairness, and stability in dynamic adaptive streaming over HTTP.
Abstract: Many studies have shown that the dynamic adaptive streaming over HTTP scheme is limited in achieving efficiency, fairness, and stability. Solutions proposed in the recent literature target these objectives, but tackle them from either the client or the server side separately. This paper argues for a client–server cooperation-based approach to achieve all three objectives. Effectively, information available at the client side, such as buffer occupancy, available throughput, and previously played representation levels, can be used to better control the video streaming efficiency and stability at the client side. On the other hand, information available at the server side, such as the server’s shared bandwidth capacity and the number of connected clients and their corresponding downloading bitrates, can be leveraged to better tune the system fairness at the server side. Furthermore, the envisioned client–server cooperation aims at shortening the convergence time of the different clients to the fair bitrate allocation without affecting the overall system smoothness while increasing or decreasing the bitrates. The proposed approach is evaluated through extensive simulations using the Network Simulator, NS-3. Its performance is compared against that of notable algorithms, such as the FESTIVE [1] and PANDA [2] schemes. The obtained results show that cooperation between the client and the server is a promising approach for enhancing efficiency, fairness, and stability, as well as shortening the convergence time to the fair bandwidth share.
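The client-side signals the abstract mentions (buffer occupancy, current representation level) drive bitrate adaptation in practice. A minimal buffer-based selection heuristic, shown below, illustrates the kind of client logic being coordinated; it is a generic sketch, not the paper's cooperative algorithm, and the buffer thresholds and bitrate ladder are invented.

```python
def select_bitrate(buffer_s, ladder, current=None, low=5.0, high=15.0):
    """Pick the next segment's bitrate from `ladder` (ascending kbps).

    Below `low` seconds of buffer: drop to the lowest rung to avoid a
    rebuffer. Above `high` seconds: step up one rung (stepping, rather
    than jumping, is what keeps playback stable). Otherwise hold."""
    if current is None:
        current = ladder[0]
    idx = ladder.index(current)
    if buffer_s < low:
        return ladder[0]
    if buffer_s > high and idx + 1 < len(ladder):
        return ladder[idx + 1]
    return current

# Toy usage with an invented four-rung ladder (kbps).
ladder = [400, 800, 1600, 3200]
r1 = select_bitrate(20.0, ladder, current=400)   # healthy buffer: step up → 800
r2 = select_bitrate(3.0, ladder, current=1600)   # near rebuffer: floor → 400
r3 = select_bitrate(10.0, ladder, current=800)   # mid-range: hold → 800
```

A purely client-side rule like this is exactly where the fairness problem arises: each client greedily climbs the ladder, so the paper's server-side signals (shared capacity, number of connected clients) are needed to steer all clients toward a fair share quickly.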

Journal ArticleDOI
TL;DR: A differential privacy (DP)-based privacy-preserving indoor localization scheme, called DP3, which is composed of four phases: access point (AP) fuzzification and location retrieval on the client side, and DP-based finger clustering and finger permutation on the server side; theoretical and experimental results show that DP3 can simultaneously protect the location privacy of the TBL client and the data privacy of the localization server.
Abstract: Wi-Fi fingerprint-based indoor localization is regarded as one of the most promising techniques for location-based services. However, it faces a serious problem of privacy disclosure of both clients’ location data and the provider’s fingerprint database. To address this issue, this letter proposes a differential privacy (DP)-based privacy-preserving indoor localization scheme, called DP3, which is composed of four phases: access point (AP) fuzzification and location retrieval on the client side, and DP-based finger clustering and finger permutation on the server side. Specifically, in AP fuzzification, instead of providing the measured full finger (including the AP sequence and the corresponding received signal strength), a to-be-localized (TBL) client only uploads the AP sequence to the server. The localization server then uses DP-enabled clustering to build the fingerprints related to the AP sequence into $k$ clusters, permutes the reference points in each cluster with the exponential mechanism to mask the real positions of these fingerprints, and sends the modified data set to the TBL client. At the client side, the location retrieval phase estimates the location of the client. Theoretical and experimental results show that DP3 can simultaneously protect the location privacy of the TBL client and the data privacy of the localization server.
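The exponential mechanism used in DP3's permutation phase is a standard differential-privacy primitive: it selects a candidate with probability proportional to exp(ε·score / (2·sensitivity)), biasing toward high-scoring candidates while keeping the choice noisy. The sketch below shows the generic mechanism; the candidate set, scores, and ε value are toy assumptions, not the paper's parameters.

```python
import math
import random

def exponential_mechanism(candidates, score, epsilon, sensitivity=1.0,
                          rng=random):
    # Sample candidate c with probability proportional to
    # exp(epsilon * score(c) / (2 * sensitivity)).
    weights = [math.exp(epsilon * score(c) / (2.0 * sensitivity))
               for c in candidates]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]   # guard against floating-point round-off

# Toy usage: candidate "b" scores much higher, so it is selected most
# of the time, but "a" and "c" still have nonzero probability, which
# is the source of the privacy guarantee.
rng = random.Random(42)
picks = [exponential_mechanism(["a", "b", "c"],
                               score={"a": 0.0, "b": 5.0, "c": 0.0}.get,
                               epsilon=2.0, rng=rng)
         for _ in range(200)]
```

Smaller ε flattens the weights toward uniform (more privacy, less utility); larger ε concentrates mass on the best-scoring candidate, which is the usual DP privacy/utility trade-off.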

Journal ArticleDOI
TL;DR: This article considers the problem of designing a secure, robust, high-fidelity, storage-efficient image-sharing scheme over Facebook, a representative OSN that is widely accessed, and proposes a DCT-domain image encryption/decryption framework that is robust against Facebook's lossy image manipulations.
Abstract: Sharing images online has become extremely easy and popular due to the ever-increasing adoption of mobile devices and online social networks (OSNs). The privacy issues arising from image sharing over OSNs have received significant attention in recent years. In this article, we consider the problem of designing a secure, robust, high-fidelity, storage-efficient image-sharing scheme over Facebook, a representative OSN that is widely accessed. To accomplish this goal, we first conduct an in-depth investigation on the manipulations that Facebook performs to the uploaded images. Assisted by such knowledge, we propose a DCT-domain image encryption/decryption framework that is robust against these lossy operations. As verified theoretically and experimentally, superior performance in terms of data privacy, quality of the reconstructed images, and storage cost can be achieved.
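The idea of encrypting in the DCT domain can be illustrated on a 1-D signal: transform, apply a key-seeded permutation to the coefficients, and invert on decryption. This is a bare-bones sketch of the general technique, not the paper's framework (which operates on 2-D JPEG blocks and is engineered specifically around Facebook's recompression); all function names and the permutation-only "cipher" are illustrative assumptions.

```python
import math
import random

def dct(x):
    # Orthonormal DCT-II of a 1-D signal.
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def idct(X):
    # Inverse of the orthonormal DCT-II (its transpose).
    N = len(X)
    return [sum((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N))
            for n in range(N)]

def encrypt(signal, key):
    # Permute DCT coefficients with a key-seeded shuffle. Staying in
    # the transform domain is what lets JPEG-like requantization
    # degrade the data gracefully instead of destroying it.
    coeffs = dct(signal)
    perm = list(range(len(coeffs)))
    random.Random(key).shuffle(perm)
    return [coeffs[p] for p in perm]

def decrypt(cipher, key):
    # Regenerate the same permutation, undo it, and inverse-transform.
    perm = list(range(len(cipher)))
    random.Random(key).shuffle(perm)
    coeffs = [0.0] * len(cipher)
    for i, p in enumerate(perm):
        coeffs[p] = cipher[i]
    return idct(coeffs)

signal = [10.0, 12.0, 9.0, 7.0, 8.0, 11.0, 13.0, 10.0]
recovered = decrypt(encrypt(signal, key=1234), key=1234)
```

With the correct key the round trip is lossless up to floating-point error; a wrong key yields scrambled coefficients and therefore an unintelligible reconstruction, while the ciphertext remains a valid coefficient array that survives transform-domain processing.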