Topic

Block (data storage)

About: Block (data storage) is a research topic. Over the lifetime, 33,998 publications have been published within this topic, receiving 372,145 citations. The topic is also known as: data block & logical block.


Papers
Posted Content
TL;DR: The Squeeze-and-Excitation (SE) block adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels; these blocks can be stacked together to form SENet architectures.
Abstract: The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.
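The SE mechanism described above is small enough to sketch end to end. Below is a minimal PyTorch sketch of a single SE block, assuming the paper's default reduction ratio of 16; the class name `SEBlock` and the exact layer layout are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal Squeeze-and-Excitation unit: squeeze spatial information
    into per-channel descriptors, then learn per-channel gates."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pool
        self.fc = nn.Sequential(             # excitation: channel bottleneck
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),                     # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)           # (B, C) channel descriptors
        w = self.fc(s).view(b, c, 1, 1)       # (B, C, 1, 1) channel weights
        return x * w                          # recalibrate the feature maps

y = SEBlock(64)(torch.randn(2, 64, 32, 32))  # shape is preserved: (2, 64, 32, 32)
```

Stacking one such unit after each convolutional block of an existing backbone is what the abstract means by forming SENet architectures; the only added parameters are the two small linear layers, which is why the computational overhead stays slight.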

5,411 citations

Journal ArticleDOI
TL;DR: A modified maximum likelihood procedure is proposed for estimating intra-block and inter-block weights in the analysis of incomplete block designs with block sizes not necessarily equal; the method maximizes the likelihood not of all the data but of a set of selected error contrasts.
Abstract: A method is proposed for estimating intra-block and inter-block weights in the analysis of incomplete block designs with block sizes not necessarily equal. The method consists of maximizing the likelihood, not of all the data, but of a set of selected error contrasts. When block sizes are equal, results are identical with those obtained by the method of Nelder (1968) for generally balanced designs. Although mainly concerned with incomplete block designs, the paper also gives in outline an extension of the modified maximum likelihood procedure to designs with a more complicated block structure.

In this paper we consider the estimation of weights to be used in the recovery of inter-block information in incomplete block designs with possibly unequal block sizes. The problem can also be thought of as one of estimating constants and components of variance from data arranged in a general two-way classification when the effects of one classification are regarded as fixed and the effects of the second classification are regarded as random. Nelder (1968) described the efficient estimation of weights in generally balanced designs, in which the blocks are usually, although not always, of equal size. Lack of balance resulting from unequal block sizes is, however, common in some experimental work, for example in animal breeding experiments. The maximum likelihood procedure described by Hartley & Rao (1967) can be used but does not give the same estimates as Nelder's method in the balanced case. As will be shown, the two methods in effect use the same weighted sums of squares of residuals but assign different expectations. In the maximum likelihood approach, expectations are taken over a conditional distribution with the treatment effects fixed at their estimated values. In contrast, Nelder uses unconditional expectations. The difference between the two methods is analogous to the well-known difference between two methods of estimating the variance σ² of a normal distribution, given a random sample of n values. Both methods use the same total sum of squares of deviations, but one divides it by n while the other divides it by n − 1.
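The closing analogy can be made concrete with a toy computation. The NumPy sketch below is purely illustrative (it is not the paper's estimation procedure): both estimators of σ² share the same sum of squared deviations but divide by different quantities, mirroring how the two procedures in the paper use the same weighted sums of squares yet assign different expectations.

```python
import numpy as np

# Draw a sample from a normal distribution with known variance.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=20)   # n = 20, true sigma^2 = 4.0

# Both estimators use the same total sum of squares of deviations...
ss = np.sum((x - x.mean()) ** 2)

# ...but divide by n (maximum likelihood, biased downward because the
# mean was estimated from the same data) versus n - 1 (unbiased, the
# one-parameter analogue of the error-contrast, i.e. REML, approach).
sigma2_ml = ss / len(x)
sigma2_unbiased = ss / (len(x) - 1)

print(f"ML: {sigma2_ml:.3f}, unbiased: {sigma2_unbiased:.3f}")
```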

3,855 citations

Proceedings ArticleDOI
21 May 2015
TL;DR: A decentralized personal data management system that ensures users own and control their data is described, along with a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party.
Abstract: The recent increase in reported incidents of surveillance and security breaches compromising users' privacy calls into question the current model, in which third parties collect and control massive amounts of personal data. Bitcoin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party. Unlike Bitcoin, transactions in our system are not strictly financial -- they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to blockchains that could harness them into a well-rounded solution for trusted computing problems in society.
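To make the idea of a blockchain as an automated access-control manager concrete, here is a hypothetical Python sketch: transactions carry grant/revoke instructions instead of payments, and any party can decide access by replaying the hash-linked ledger. All names (`Transaction`, `Ledger`, `can_access`) are invented for illustration; the paper's actual protocol additionally involves off-chain storage and cryptographic identities, which are elided here.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Transaction:
    sender: str      # identity issuing the instruction
    op: str          # "grant" or "revoke" (the paper also carries store/query/share)
    subject: str     # party the permission applies to
    resource: str    # identifier of the governed data

@dataclass
class Ledger:
    chain: list = field(default_factory=list)

    def append(self, tx: Transaction) -> str:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"tx": tx.__dict__, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()                       # hash-link each block to its predecessor
        self.chain.append(record)
        return record["hash"]

    def can_access(self, subject: str, resource: str) -> bool:
        allowed = False                     # replay the grant/revoke history in order
        for record in self.chain:
            tx = record["tx"]
            if tx["subject"] == subject and tx["resource"] == resource:
                allowed = (tx["op"] == "grant")
        return allowed

ledger = Ledger()
ledger.append(Transaction("alice", "grant", "service-x", "profile-data"))
assert ledger.can_access("service-x", "profile-data")
ledger.append(Transaction("alice", "revoke", "service-x", "profile-data"))
assert not ledger.can_access("service-x", "profile-data")
```

Because every node replays the same append-only history, all nodes reach the same access decision without trusting a third party, which is the property the abstract emphasizes.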

1,953 citations

Proceedings ArticleDOI
21 Oct 2001
TL;DR: The Cooperative File System is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval with a completely decentralized architecture that can scale to large systems.
Abstract: The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers. CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.
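The block-placement idea behind DHash can be sketched in a few lines. The Python sketch below is illustrative, not CFS code: it splits a file into content-addressed blocks and assigns each block to a server with a consistent-hash ring, the placement rule that Chord's lookup protocol navigates. The `Ring` class and server names are invented for the example.

```python
import hashlib
from bisect import bisect_left

def block_key(data: bytes) -> int:
    # Content address: the block's key is the hash of its bytes,
    # as in DHash (Chord uses SHA-1 identifiers).
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

class Ring:
    """Consistent-hash ring: a block belongs to the first server whose
    point is at or after the block's key, wrapping around the ring."""
    def __init__(self, servers):
        self.points = sorted((block_key(s.encode()), s) for s in servers)
        self.keys = [p for p, _ in self.points]

    def owner(self, key: int) -> str:
        i = bisect_left(self.keys, key) % len(self.points)  # wrap past the end
        return self.points[i][1]

ring = Ring([f"server-{i}" for i in range(8)])
file_data = b"example file contents" * 1000
blocks = [file_data[i:i + 8192] for i in range(0, len(file_data), 8192)]
placement = {block_key(b): ring.owner(block_key(b)) for b in blocks}
```

The sketch scans the ring directly; Chord's contribution is resolving `owner` in a number of messages logarithmic in the server count by routing through finger tables, which is why the abstract's 4,096-server lookup touches only about seven servers.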

1,733 citations

Journal ArticleDOI
TL;DR: Res2Net constructs hierarchical residual-like connections within a single residual block, representing multi-scale features at a granular level and increasing the range of receptive fields for each network layer.
Abstract: Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available at https://mmcheng.net/res2net/.
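The hierarchical connections inside a Res2Net block are easy to sketch. Below is a minimal PyTorch sketch assuming four equal channel groups (the paper's scale parameter); it omits the surrounding 1x1 convolutions and batch normalization of the full bottleneck, and the class name `Res2NetUnit` is illustrative rather than the released code.

```python
import torch
import torch.nn as nn

class Res2NetUnit(nn.Module):
    """Split channels into `scale` groups; each group's 3x3 conv also
    sees the previous group's output, so later groups accumulate
    progressively larger receptive fields within one block."""
    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        assert channels % scale == 0
        self.scale = scale
        width = channels // scale
        # One 3x3 conv per group; the first group is an identity path.
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(scale - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scale, dim=1)  # equal channel groups
        outs = [splits[0]]                          # group 0: passed through
        prev = None
        for i, conv in enumerate(self.convs, start=1):
            inp = splits[i] if prev is None else splits[i] + prev
            prev = torch.relu(conv(inp))            # hierarchical residual-like link
            outs.append(prev)
        return torch.cat(outs, dim=1)               # same shape as the input

y = Res2NetUnit(64)(torch.randn(2, 64, 32, 32))     # (2, 64, 32, 32)
```

Because the unit preserves the input shape, it can replace the 3x3 convolution inside an existing bottleneck block, which is how the abstract describes plugging Res2Net into ResNet-style backbones.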

1,553 citations


Network Information
Related Topics (5)
Server
79.5K papers, 1.4M citations
88% related
Network packet
159.7K papers, 2.2M citations
87% related
Wireless sensor network
142K papers, 2.4M citations
86% related
Wireless network
122.5K papers, 2.1M citations
85% related
Scheduling (computing)
78.6K papers, 1.3M citations
85% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2022	21
2021	1,347
2020	2,112
2019	2,604
2018	1,824
2017	1,448