
Showing papers on "Block (data storage) published in 2001"


Proceedings ArticleDOI
21 Oct 2001
TL;DR: The Cooperative File System is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval with a completely decentralized architecture that can scale to large systems.
Abstract: The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers. CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.
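To make the block-storage idea above concrete, here is a minimal single-process sketch (not the CFS code) of how DHash-style storage names blocks: a file is split into fixed-size blocks, each block is stored under the SHA-1 hash of its content, and a root block lists the block keys. The DHashStore class and the 8 KB block size are illustrative assumptions; Chord routing, replication, and caching are omitted entirely.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # hypothetical block granularity

class DHashStore:
    """Toy in-process stand-in for the distributed hash table."""
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha1(data).hexdigest()   # content-hashed block key
        self._blocks[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blocks[key]
        assert hashlib.sha1(data).hexdigest() == key  # blocks are self-verifying
        return data

def store_file(store: DHashStore, contents: bytes) -> str:
    """Store a file as data blocks plus a root block listing the block keys."""
    keys = [store.put(contents[i:i + BLOCK_SIZE])
            for i in range(0, len(contents), BLOCK_SIZE)]
    return store.put("\n".join(keys).encode())   # root block key names the file

def fetch_file(store: DHashStore, root_key: str) -> bytes:
    keys = store.get(root_key).decode().splitlines()
    return b"".join(store.get(k) for k in keys)
```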

1,733 citations


Journal ArticleDOI
TL;DR: The state of the art in the design and analysis of external memory algorithms and data structures, where the goal is to exploit locality in order to reduce the I/O costs is surveyed.
Abstract: Data sets in large applications are often too massive to fit completely inside the computer's internal memory. The resulting input/output communication (or I/O) between fast internal memory and slower external memory (such as disks) can be a major performance bottleneck. In this article we survey the state of the art in the design and analysis of external memory (or EM) algorithms and data structures, where the goal is to exploit locality in order to reduce the I/O costs. We consider a variety of EM paradigms for solving batched and online problems efficiently in external memory. For the batched problem of sorting and related problems such as permuting and fast Fourier transform, the key paradigms include distribution and merging. The paradigm of disk striping offers an elegant way to use multiple disks in parallel. For sorting, however, disk striping can be nonoptimal with respect to I/O, so to gain further improvements we discuss distribution and merging techniques for using the disks independently. We also consider useful techniques for batched EM problems involving matrices (such as matrix multiplication and transposition), geometric data (such as finding intersections and constructing convex hulls), and graphs (such as list ranking, connected components, topological sorting, and shortest paths). In the online domain, canonical EM applications include dictionary lookup and range searching. The two important classes of indexed data structures are based upon extendible hashing and B-trees. The paradigms of filtering and bootstrapping provide a convenient means in online data structures to make effective use of the data accessed from disk. We also reexamine some of the above EM problems in slightly different settings, such as when the data items are moving, when the data items are variable-length (e.g., text strings), or when the allocated amount of internal memory can change dynamically. Programming tools and environments are available for simplifying the EM programming task. During the course of the survey, we report on some experiments in the domain of spatial databases using the TPIE system (transparent parallel I/O programming environment). The newly developed EM algorithms and data structures that incorporate the paradigms we discuss are significantly faster than methods currently used in practice.
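As a rough illustration of the I/O model used throughout the survey (my own sketch, not taken from the article), the functions below compare the scanning bound of about N/B I/Os with the standard sorting bound of about (N/B)·log_{M/B}(N/B) I/Os, where N is the input size, M the internal memory size, and B the block size, all counted in items.

```python
import math

def scan_ios(N: int, B: int) -> int:
    """I/Os for one sequential pass over N items in blocks of B items."""
    return math.ceil(N / B)

def sort_ios(N: int, M: int, B: int) -> int:
    """Rough external-sort cost: one pass per level of an (M/B)-way merge."""
    n_blocks = math.ceil(N / B)
    fan = max(2, M // B)                      # merge/distribution fan-out
    passes = max(1, math.ceil(math.log(n_blocks, fan)))
    return n_blocks * passes

# Example: a billion items, a million-item memory, thousand-item blocks.
print(scan_ios(10**9, 10**3), sort_ios(10**9, 10**6, 10**3))
```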

751 citations


Journal ArticleDOI
TL;DR: Experimental results from trace-driven simulations show that the performance of the LRFU is at least competitive with that of previously known policies for the workloads the authors considered.
Abstract: Efficient and effective buffering of disk blocks in main memory is critical for better file system performance due to a wide speed gap between main memory and hard disks. In such a buffering system, one of the most important design decisions is the block replacement policy that determines which disk block to replace when the buffer is full. In this paper, we show that there exists a spectrum of block replacement policies that subsumes the two seemingly unrelated and independent Least Recently Used (LRU) and Least Frequently Used (LFU) policies. The spectrum is called the LRFU (Least Recently/Frequently Used) policy and is formed by how much more weight we give to the recent history than to the older history. We also show that there is a spectrum of implementations of the LRFU that again subsumes the LRU and LFU implementations. This spectrum is again dictated by how much weight is given to recent and older histories, and the time complexity of the implementations lies between O(1) (the time complexity of LRU) and O(log2 n) (the time complexity of LFU), where n is the number of blocks in the buffer. Experimental results from trace-driven simulations show that the performance of the LRFU is at least competitive with that of previously known policies for the workloads we considered.
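As a concrete illustration of the recency/frequency spectrum described above, the following is a minimal sketch (not the authors' implementation) of LRFU bookkeeping: each cached block carries a Combined Recency and Frequency (CRF) value that is updated incrementally using the weighting function F(x) = (1/2)^(λx), and the block with the smallest decayed CRF is evicted. With λ = 0 the policy degenerates to LFU; as λ grows it approaches LRU. The class name and default λ are assumptions for the example.

```python
class LRFUCache:
    """Toy LRFU replacement policy using the incremental CRF update
    C_new = F(0) + F(delta) * C_old with F(x) = (1/2) ** (lam * x)."""
    def __init__(self, capacity: int, lam: float = 0.1):
        self.capacity = capacity
        self.lam = lam
        self.blocks = {}          # block id -> (crf, time of last reference)
        self.clock = 0

    def _weight(self, age: int) -> float:
        return 0.5 ** (self.lam * age)

    def reference(self, block_id):
        self.clock += 1
        if block_id in self.blocks:
            crf, last = self.blocks[block_id]
            crf = 1.0 + self._weight(self.clock - last) * crf
        else:
            if len(self.blocks) >= self.capacity:
                self._evict()
            crf = 1.0
        self.blocks[block_id] = (crf, self.clock)

    def _evict(self):
        # Evict the block whose CRF, decayed to the current time, is smallest.
        def decayed(b):
            crf, last = self.blocks[b]
            return self._weight(self.clock - last) * crf
        del self.blocks[min(self.blocks, key=decayed)]
```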

593 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: The Dead-Block Predictors (DBPs) are proposed, trace-based predictors that accurately identify “when” an L1 data cache block becomes evictable or “dead”, and a DBCP enables effective data prefetching in a wide spectrum of pointer-intensive, integer, and floating-point applications.
Abstract: Effective data prefetching requires accurate mechanisms to predict both “which” cache blocks to prefetch and “when” to prefetch them. This paper proposes the Dead-Block Predictors (DBPs), trace-based predictors that accurately identify “when” an L1 data cache block becomes evictable or “dead”. Predicting a dead block significantly enhances prefetching lookahead and opportunity, and enables placing data directly into L1, obviating the need for auxiliary prefetch buffers. This paper also proposes Dead-Block Correlating Prefetchers (DBCPs), which use address correlation to predict “which” subsequent block to prefetch when a block becomes evictable. A DBCP enables effective data prefetching in a wide spectrum of pointer-intensive, integer, and floating-point applications. We use cycle-accurate simulation of an out-of-order superscalar processor and memory-intensive benchmarks to show that: (1) dead-block prediction enhances prefetching lookahead by at least an order of magnitude compared to previous techniques, (2) a DBP can predict dead blocks on average with a coverage of 90% while mispredicting only 4% of the time, (3) a DBCP offers an address prediction coverage of 86% while mispredicting only 3% of the time, and (4) DBCPs improve performance by 62% on average and 282% at best in the benchmarks we studied.
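The trace-based prediction mechanism lends itself to a compact software model. Below is a minimal sketch (an assumed simplification, not the paper's hardware design): each cache block accumulates a signature of the instruction PCs that touch it; when the block is evicted, that signature is recorded as one that ends a block's lifetime, and a later block whose live signature matches a recorded one is predicted dead, making it a candidate slot for prefetched data.

```python
class DeadBlockPredictor:
    """Toy dead-block predictor keyed by a truncated sum of touching PCs."""
    def __init__(self):
        self.history = {}     # trace signature -> 2-bit saturating counter
        self.live_trace = {}  # block address -> signature accumulated so far

    def _update_signature(self, sig: int, pc: int) -> int:
        return (sig + pc) & 0xFFFF            # truncated-sum signature (assumed)

    def on_access(self, block_addr: int, pc: int) -> bool:
        """Record an access; return True if the block is now predicted dead."""
        sig = self._update_signature(self.live_trace.get(block_addr, 0), pc)
        self.live_trace[block_addr] = sig
        return self.history.get(sig, 0) >= 2  # confident "dead" prediction

    def on_evict(self, block_addr: int):
        """Learn that the accumulated trace signature ended this block's life."""
        sig = self.live_trace.pop(block_addr, 0)
        self.history[sig] = min(self.history.get(sig, 0) + 1, 3)
```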

359 citations


Book ChapterDOI
Jun Furukawa, Kazue Sako
19 Aug 2001
TL;DR: A novel and efficient protocol is proposed for proving the correctness of a shuffle, without leaking how the shuffle was performed, which will be a building block of an efficient, universally verifiable mix-net, whose application to voting systems is prominent.
Abstract: In this paper, we propose a novel and efficient protocol for proving the correctness of a shuffle, without leaking how the shuffle was performed. Using this protocol, we can prove the correctness of a shuffle of n data with roughly 18n exponentiations, whereas the protocol of Sako-Kilian [SK95] required 642n and that of Abe [Ab99] required 22n log n. The length of the proof will be only 211n bits in our protocol, as opposed to 218n bits and 214n log n bits required by Sako-Kilian and Abe, respectively. The proposed protocol will be a building block of an efficient, universally verifiable mix-net, whose application to voting systems is prominent.

278 citations


Journal Article
TL;DR: In this paper, the authors propose a novel and efficient protocol for proving the correctness of a shuffle, without leaking how the shuffle was performed, which is a building block of an efficient, universally verifiable mix-net, whose application to voting systems is prominent.
Abstract: In this paper, we propose a novel and efficient protocol for proving the correctness of a shuffle, without leaking how the shuffle was performed. Using this protocol, we can prove the correctness of a shuffle of n data with roughly 18n exponentiations, whereas the protocol of Sako-Kilian [SK95] required 642n and that of Abe [Ab99] required 22n log n. The length of the proof will be only 211n bits in our protocol, as opposed to 218n bits and 214n log n bits required by Sako-Kilian and Abe, respectively. The proposed protocol will be a building block of an efficient, universally verifiable mix-net, whose application to voting systems is prominent.

266 citations


Patent
16 May 2001
TL;DR: In this article, a system and method is provided for detecting, tracking and blocking denial of service (DoS) attacks, which can occur between local computer systems and/or between remote computer systems, network links, and routing systems over a computer network.
Abstract: A system and method is provided for detecting, tracking and blocking denial of service (“DoS”) attacks, which can occur between local computer systems and/or between remote computer systems, network links, and/or routing systems over a computer network. The system includes a collector adapted to receive a plurality of data statistics from the computer network and to process the plurality of data statistics to detect one or more data packet flow anomalies. The collector is further adapted to generate a plurality of signals representing the one or more data packet flow anomalies. The system further includes a controller that is coupled to the collector and is adapted to receive the plurality of signals from the collector. The controller is constructed and arranged to respond to the plurality of signals by tracking attributes related to the one or more data packet flow anomalies to at least one source, and to block the one or more data packet flow anomalies using a filtering mechanism executed in close proximity to the at least one source.

263 citations


Patent
07 May 2001
TL;DR: In this paper, a nonvolatile semiconductor mass storage system and architecture can be substituted for a rotating hard disk by using several flags, and a map to correlate a logical block address of a block to a physical address of that block.
Abstract: A nonvolatile semiconductor mass storage system and architecture can be substituted for a rotating hard disk. The system and architecture avoid an erase cycle each time information stored in the mass storage is changed. Erase cycles are avoided by programming an altered data file into an empty mass storage block rather than over itself as a hard disk would. Periodically, the mass storage will need to be cleaned up. These advantages are achieved through the use of several flags, and a map to correlate a logical block address of a block to a physical address of that block. In particular, flags are provided for defective blocks, used blocks, and old versions of a block. An array of volatile memory is addressable according to the logical address and stores the physical address.
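A minimal sketch of the mapping scheme described above (hypothetical data layout, not the patented circuit): an update is programmed into an empty physical block, the superseded copy is only flagged as an old version for later cleanup, and a volatile map ties each logical block address to its current physical address.

```python
FREE, USED, OLD, DEFECTIVE = "free", "used", "old", "defective"

class FlashStore:
    def __init__(self, n_blocks: int):
        self.flags = [FREE] * n_blocks      # per-physical-block status flags
        self.data = [None] * n_blocks
        self.l2p = {}                       # volatile logical -> physical map

    def write(self, logical: int, payload: bytes):
        new_phys = self.flags.index(FREE)   # pick any empty block (no erase)
        self.data[new_phys] = payload
        self.flags[new_phys] = USED
        old_phys = self.l2p.get(logical)
        if old_phys is not None:
            self.flags[old_phys] = OLD      # superseded; erased during cleanup
        self.l2p[logical] = new_phys

    def read(self, logical: int) -> bytes:
        return self.data[self.l2p[logical]]

    def cleanup(self):
        """Periodic clean-up: reclaim blocks that hold old versions."""
        for phys, flag in enumerate(self.flags):
            if flag == OLD:
                self.data[phys] = None
                self.flags[phys] = FREE
```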

256 citations


14 Sep 2001
TL;DR: Block algorithms have been developed to acquire very weak Global Positioning System coarse/acquisition signals in a software receiver in order to enable the use of weak GPS signals in applications such as geostationary orbit determination.
Abstract: Block algorithms have been developed to acquire very weak Global Positioning System (GPS) coarse/acquisition (C/A) signals in a software receiver. These algorithms are being developed in order to enable the use of weak GPS signals in applications such as geostationary orbit determination. The algorithms average signals over multiple GPS data bits after a squaring operation that removes the bits’ signs. Methods have been developed to ensure that the pre-squaring summation intervals do not contain data bit transitions. The algorithms make judicious use of Fast Fourier Transform (FFT) and inverse FFT (IFFT) techniques in order to speed up operations. Signals have been successfully acquired from 4 seconds’ worth of bit-grabbed data with signal-to-noise ratios (SNRs) as low as 21 dB-Hz.
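A rough numpy sketch of the FFT-based block processing described above (an assumed signal layout, not the authors' receiver code): each code-period block is circularly correlated with a local C/A replica via FFT/IFFT, the correlation is squared so that data-bit sign flips cancel, and the squared correlations are averaged over many blocks so a weak peak accumulates above the noise. The replica is assumed to be exactly one block long, and Doppler search is omitted.

```python
import numpy as np

def acquire(samples: np.ndarray, replica: np.ndarray, block_len: int):
    """Return (code phase index, averaged squared-correlation peak)."""
    replica_fft_conj = np.conj(np.fft.fft(replica))   # replica: block_len samples
    accum = np.zeros(block_len)
    n_blocks = len(samples) // block_len
    for k in range(n_blocks):
        block = samples[k * block_len:(k + 1) * block_len]
        corr = np.fft.ifft(np.fft.fft(block) * replica_fft_conj)  # circular corr.
        accum += np.abs(corr) ** 2            # squaring removes the bits' signs
    code_phase = int(np.argmax(accum))
    return code_phase, accum[code_phase] / n_blocks
```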

202 citations


Journal ArticleDOI
TL;DR: A new fast algorithm based on the winner-update strategy is presented, which utilizes an ascending lower bound list of the matching error to determine the temporary winner; two lower bound lists, derived by using partial distance and by using Minkowski's inequality, are described.
Abstract: Block matching is a widely used method for stereo vision, visual tracking, and video compression. Many fast algorithms for block matching have been proposed in the past, but most of them do not guarantee that the match found is the globally optimal match in a search range. This paper presents a new fast algorithm based on the winner-update strategy which utilizes an ascending lower bound list of the matching error to determine the temporary winner. Two lower bound lists derived by using partial distance and by using Minkowski's inequality are described. The basic idea of the winner-update strategy is to avoid, at each search position, the costly computation of the matching error when there exists a lower bound larger than the global minimum matching error. The proposed algorithm can significantly speed up the computation of the block matching because (1) the computational cost of the lower bound we use is less than that of the matching error itself; (2) an element in the ascending lower bound list is calculated only when its preceding element is already smaller than the minimum matching error computed so far; (3) for many search positions, only the first several lower bounds in the list need to be calculated. Our experiments have shown that, when applied to motion vector estimation for several widely used test videos, 92% to 98% of operations can be saved while still guaranteeing global optimality. Moreover, the proposed algorithm can be easily modified either to meet a limited time requirement or to provide an ordered list of best candidate matches.
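The winner-update strategy itself is short enough to sketch. The version below (an assumed SAD formulation, simplified from the paper) keeps every candidate position in a heap keyed by a partial-distance lower bound built one template row at a time; only the current temporary winner ever has its bound refined, so most positions never pay for a full matching-error computation, yet the first fully evaluated candidate popped from the heap is the global optimum.

```python
import heapq
import numpy as np

def winner_update_match(template: np.ndarray, image: np.ndarray):
    """Globally optimal SAD block matching via the winner-update strategy."""
    th, tw = template.shape
    ih, iw = image.shape
    heap = []   # entries: (lower bound, rows accumulated, y, x)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            heapq.heappush(heap, (0.0, 0, y, x))
    while True:
        bound, rows, y, x = heapq.heappop(heap)
        if rows == th:                        # bound equals the full SAD: winner
            return (y, x), bound
        # Refine the temporary winner's lower bound with one more row of SAD.
        row_sad = float(np.abs(image[y + rows, x:x + tw].astype(int)
                               - template[rows].astype(int)).sum())
        heapq.heappush(heap, (bound + row_sad, rows + 1, y, x))
```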

184 citations


Proceedings ArticleDOI
25 Nov 2001
TL;DR: Two decoding schedules and the corresponding serialized architectures for low-density parity-check (LDPC) decoders are presented and the performance of these decoding schedules is evaluated through simulations on a magnetic recording channel.
Abstract: Two decoding schedules and the corresponding serialized architectures for low-density parity-check (LDPC) decoders are presented. They are applied to codes with parity-check matrices generated either randomly or using geometric properties of elements in Galois fields. Both decoding schedules have low computational requirements. The original concurrent decoding schedule has a large storage requirement that is dependent on the total number of edges in the underlying bipartite graph, while a new, staggered decoding schedule which uses an approximation of the belief propagation, has a reduced memory requirement that is dependent only on the number of bits in the block. The performance of these decoding schedules is evaluated through simulations on a magnetic recording channel.

Patent
04 Apr 2001
TL;DR: In this article, a block key to encrypt block data is generated using an ATS (arrival time stamp) appended to each of TS (transport stream) packets included in a transport stream correspondingly to the arrival time of the TS packet.
Abstract: A block key to encrypt block data is generated using an ATS (arrival time stamp) appended to each of the TS (transport stream) packets included in a transport stream, corresponding to the arrival time of the TS packet. The ATS is random data depending upon the arrival time, and so a block-unique key can be generated, which enhances the protection against data cryptanalysis. A block key is generated from a combination of an ATS with a key unique to a device, recording medium, or the like, such as a master key, disc-unique key, or title-unique key. Since an ATS is used to generate a block key, no area for storing a per-block encryption key needs to be provided on the recording medium.
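As a purely illustrative sketch of the idea (the hash-based derivation below is an assumption; the patent does not specify this construction), a per-block key can be derived from the block's ATS together with a title-unique key, so that no per-block key material ever needs to be stored on the medium: re-reading the ATS from the recorded block regenerates the same key.

```python
import hashlib

def block_key(title_unique_key: bytes, ats: int) -> bytes:
    """Derive a per-block key from a title-unique key and the block's ATS
    (assumed 32-bit arrival time stamp of the block's first TS packet)."""
    return hashlib.sha256(title_unique_key + ats.to_bytes(4, "big")).digest()

# The same ATS read back from the recording regenerates the same block key,
# so no per-block key storage area is needed on the medium.
k1 = block_key(b"\x00" * 16, 0x1A2B3C4D)
```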

Patent
Brady L. Keays
24 Aug 2001
TL;DR: In this article, an improved Flash memory device with a distributed erase block management (EBM) scheme is detailed that enhances operation and helps minimize write fatigue of the floating gate memory cells of the flash memory device.
Abstract: An improved Flash memory device with a distributed erase block management (EBM) scheme is detailed that enhances operation and helps minimize write fatigue of the floating gate memory cells of the Flash memory device. In the prior art, erase block management of a Flash memory device, which provides logical sector to physical sector mapping and provides a virtual rewriteable interface for the host, requires that erase block management data be kept in specialized EBM data tables to preserve the state of the Flash memory device in case of loss of power. This placement of EBM data in a separate erase block location from the user data slows Flash memory operation by requiring up to two writes and/or block erasures for every update of the user data. Additionally, one of the goals of EBM control is to minimize write fatigue of the non-volatile floating gate memory cells of the Flash memory device erase blocks by re-mapping and distributing heavily rewritten user data sectors in a process called load leveling, so that no one erase block gets overused too quickly and reduces the expected lifespan of the Flash memory device. The EBM data structures, however, are some of the most heavily rewritten non-volatile floating gate memory cells in the device and thus, while helping to reduce write fatigue in the Flash memory device, are some of the data structures most susceptible to fatigue. The Flash memory device of the invention combines the EBM data in a user data erase block by placing it in an EBM data field of the control data section of the erase block sectors, thereby distributing the EBM data within the Flash memory erase block structure. This allows the Flash memory to update and/or erase the user data and the EBM data in a single operation, reducing overhead and speeding operation. The Flash memory also reduces EBM data structure write fatigue by allowing the EBM data fields to be load leveled by rotating them with the erase blocks they describe.

Patent
13 Nov 2001
TL;DR: In this paper, a nonvolatile memory system, such as a flash EEPROM system, is disclosed to be divided into a plurality of blocks and each of the blocks into one or more pages, with sectors of data being stored therein that are of a different size than either the pages or blocks.
Abstract: A non-volatile memory system, such as a flash EEPROM system, is disclosed to be divided into a plurality of blocks and each of the blocks into one or more pages, with sectors of data being stored therein that are of a different size than either the pages or blocks. One specific technique packs more sectors into a block than pages provided for that block. Error correction codes and other attribute data for a number of user data sectors are preferably stored together in different pages and blocks than the user data.

Journal ArticleDOI
TL;DR: A new dimension, called the data span dimension, is introduced, which allows user-defined selections of a temporal subset of the database, and a generic algorithm is described that takes any traditional incremental model maintenance algorithm and transforms it into an algorithm that allows restrictions on the data span dimension.
Abstract: Data mining algorithms have been the focus of much research. In practice, the input data to a data mining process resides in a large data warehouse whose data is kept up-to-date through periodic or occasional addition and deletion of blocks of data. Most data mining algorithms have either assumed that the input data is static, or have been designed for arbitrary insertions and deletions of data records. We consider a dynamic environment that evolves through systematic addition or deletion of blocks of data. We introduce a new dimension, called the data span dimension, which allows user-defined selections of a temporal subset of the database. Taking this new degree of freedom into account, we describe efficient model maintenance algorithms for frequent item sets and clusters. We then describe a generic algorithm that takes any traditional incremental model maintenance algorithm and transforms it into an algorithm that allows restrictions on the data span dimension. We also develop an algorithm for automatically discovering a specific class of interesting block selection sequences. In a detailed experimental study, we examine the validity and performance of our ideas on synthetic and real datasets.

Patent
31 Dec 2001
TL;DR: In this paper, a flash memory management method is presented, where a request to write the predetermined data to a page to which data has been written is made, and the data is written to a log block corresponding to a data block containing the page.
Abstract: A flash memory management method is provided. According to the method, when a request to write predetermined data to a page to which data has already been written is made, the predetermined data is written to a log block corresponding to the data block containing the page. When a request to write the predetermined data to the page is received again, the predetermined data is written to an empty free page in the log block. Even if the same page is repeatedly requested to be written to, the management method allows this to be handled in one log block, thereby improving the efficiency with which flash memory resources are used.
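A small sketch of the log-block idea (the geometry and merge policy are assumed for illustration): rewrites of pages within a data block are appended to the next free page of a log block tied to that data block, so repeated writes to the same page are absorbed by one log block; only when the log block fills is it merged back and freed.

```python
PAGES_PER_BLOCK = 4   # assumed block geometry

class LogBlockFTL:
    def __init__(self):
        self.data_blocks = {}   # block no -> list of page contents
        self.log_blocks = {}    # block no -> list of (page no, payload)

    def write(self, block_no: int, page_no: int, payload: bytes):
        data = self.data_blocks.setdefault(block_no, [None] * PAGES_PER_BLOCK)
        if data[page_no] is None:
            data[page_no] = payload            # first write goes in place
            return
        log = self.log_blocks.setdefault(block_no, [])
        if len(log) == PAGES_PER_BLOCK:        # log block full: merge, then reuse
            for p, v in log:
                data[p] = v
            log.clear()
        log.append((page_no, payload))         # rewrite lands in the log block

    def read(self, block_no: int, page_no: int) -> bytes:
        for p, v in reversed(self.log_blocks.get(block_no, [])):
            if p == page_no:
                return v                       # newest copy lives in the log
        return self.data_blocks[block_no][page_no]
```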

Patent
23 Oct 2001
TL;DR: In this article, the authors present a data communication system that includes a content server and a proxy server that store and mark multimedia data content as single-use or multi-use data.
Abstract: Data communication systems, methods, and devices for transmitting multimedia data content to wireless devices and, more particularly, methods, systems, and devices to deliver, store, and play back multimedia content on a handheld wireless device. The data communication system includes a content server and a proxy server that store and mark multimedia data content as single-use or multi-use data. The marked data is transmitted to a handheld wireless device where the data is stored and provided with an indicator based on whether it is single-use or multi-use data, and then is routed to a media player for playback. After playback, the data is either deleted or stored, depending on the indicator attached thereto. The data may also be marked as restricted data and stored in a restricted access area. A block retransmission program is also provided to restore data transmission from the proxy server in the event transmission is prematurely lost. The data communication systems, methods, and devices according to the present invention provide more efficient and better quality of service in the delivery, storage, and playback of multimedia data on a mobile device platform.

Proceedings ArticleDOI
08 Jul 2001
TL;DR: This work proposes an algorithm based on the numerical definition of entire domain basis functions to be used in the full-wave method of moments (MoM) solution for large printed antennas that permits a strong reduction of memory occupation and computation time.
Abstract: This work proposes an algorithm based on the numerical definition of entire domain basis functions to be used in the full-wave method of moments (MoM) solution for large printed antennas. After a block partitioning of the structure, entire domain basis functions are generated and then employed in the global solution process. The generation algorithm and a possible selection model are presented in detail. The method permits a strong reduction of memory occupation and computation time. A numerical example for a 4 × 2 array of stacked patches is presented.

Patent
26 Oct 2001
TL;DR: In this article, a scalable content delivery network (SCDN) employs a parallel download mechanism to ensure that a demanded file is present at a station in time for user consumption, which is used in solving the content caching and storage problem for applications such as video-on-demand.
Abstract: A scalable content delivery network (SCDN) employs a parallel download mechanism to ensure that a demanded file is present at a station in time for user consumption. This mechanism is used in solving the content caching and storage problem for applications such as video-on-demand, which is commonly perceived as a tough problem in the industry. In the network, files are divided into smaller units called tracks according to the nature of data contained in each of them. Tracks are further divided into smaller equally sized units called block files. This division builds the foundation for parallel download. A sequence server provides a lock-free mechanism for multiple threads or processes to access data atomically. The sequence server allows clients to gain sequential access to data, or to find out whether the sequence has been violated so that they can retry their operation or take corrective action. Advantages of the invention include the ability to handle distribution of large files and process sequencing.

Journal ArticleDOI
TL;DR: Two approaches are presented that significantly reduce the computational cost of applying the EM algorithm to databases with a large number of cases, including databases with large dimensionality.
Abstract: The EM algorithm is a popular method for parameter estimation in a variety of problems involving missing data. However, the EM algorithm often requires significant computational resources and has been dismissed as impractical for large databases. We present two approaches that significantly reduce the computational cost of applying the EM algorithm to databases with a large number of cases, including databases with large dimensionality. Both approaches are based on partial E-steps for which we can use the results of Neal and Hinton (In Jordan, M. (Ed.), Learning in Graphical Models, pp. 355–371. The Netherlands: Kluwer Academic Publishers) to obtain the standard convergence guarantees of EM. The first approach is a version of the incremental EM algorithm, described in Neal and Hinton (1998), which cycles through data cases in blocks. The number of cases in each block dramatically affects the efficiency of the algorithm. We provide a method for selecting a near optimal block size. The second approach, which we call lazy EM, will, at scheduled iterations, evaluate the significance of each data case and then proceed for several iterations actively using only the significant cases. We demonstrate that both methods can significantly reduce computational costs through their application to high-dimensional real-world and synthetic mixture modeling problems for large databases.
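A condensed sketch of the incremental (blocked) EM idea for a one-dimensional Gaussian mixture (an illustrative model choice, not the paper's experiments): a partial E-step recomputes responsibilities for a single block of cases, that block's contribution is swapped into running global sufficient statistics, and the M-step is applied immediately from those statistics.

```python
import numpy as np

def incremental_em(x: np.ndarray, k: int, n_blocks: int, sweeps: int = 5):
    """Blocked/incremental EM for a 1-D Gaussian mixture with k components."""
    rng = np.random.default_rng(0)
    mu, var, pi = rng.choice(x, k), np.full(k, x.var()), np.full(k, 1.0 / k)
    blocks = np.array_split(x, n_blocks)
    stats = [np.zeros((3, k)) for _ in blocks]   # per-block counts, sums, sq-sums
    total = np.zeros((3, k))
    for _ in range(sweeps):
        for b, xb in enumerate(blocks):
            # Partial E-step on one block only.
            d = xb[:, None] - mu
            r = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
            r /= r.sum(axis=1, keepdims=True)
            new = np.stack([r.sum(0), (r * xb[:, None]).sum(0),
                            (r * xb[:, None]**2).sum(0)])
            total += new - stats[b]              # swap this block's contribution
            stats[b] = new
            # M-step from the running global sufficient statistics.
            n, s1, s2 = total
            n = np.maximum(n, 1e-9)
            pi, mu = n / n.sum(), s1 / n
            var = np.maximum(s2 / n - mu**2, 1e-6)
    return pi, mu, var
```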

Book ChapterDOI
21 May 2001
TL;DR: A new approach to transparent embedding of data into digital images is proposed that provides a high rate of the embedded data and is robust to common and some intentional distortions and can be used both for hidden communication and watermarking.
Abstract: A new approach to transparent embedding of data into digital images is proposed. It provides a high rate of embedded data and is robust to common and some intentional distortions. The developed technique employs properties of the singular value decomposition (SVD) of a digital image. According to these properties, each singular value (SV) specifies the luminance of an SVD image layer, whereas the respective pair of singular vectors specifies image geometry. Therefore, slight variations of SVs cannot affect the visual perception of the cover image. The proposed approach is based on embedding a bit of data through slight modifications of the SVs of a small block of the segmented cover. The approach is robust because it embeds the extra data into low bands of the cover in a distributed way. The size of the small blocks is used as an attribute to achieve a tradeoff between the embedded data rate and robustness. An advantage of the approach is that it is blind. Simulation has demonstrated its robustness to JPEG compression up to 40%. The approach can be used both for hidden communication and watermarking.
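A toy numpy sketch of the block-SVD embedding idea (the quantization rule below is an assumed simplification, not the authors' exact modification scheme): one bit is carried per image block by nudging the block's largest singular value onto an even or odd multiple of a quantization step, which only slightly changes the block's luminance.

```python
import numpy as np

def embed_bit(block: np.ndarray, bit: int, q: float = 8.0) -> np.ndarray:
    """Embed one bit into a small image block via its largest singular value."""
    u, s, vt = np.linalg.svd(block.astype(float), full_matrices=False)
    level = np.round(s[0] / q)
    if int(level) % 2 != bit:                  # move to a level of the right parity
        level += 1 if s[0] / q >= level else -1
    s[0] = level * q
    return u @ np.diag(s) @ vt

def extract_bit(block: np.ndarray, q: float = 8.0) -> int:
    s = np.linalg.svd(block.astype(float), compute_uv=False)
    return int(np.round(s[0] / q)) % 2
```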

Patent
18 Jul 2001
TL;DR: In this paper, a new digital configurable macro architecture is described, where the configuration of the programmable digital circuit block is determined by its small number of configuration registers, and changes in configuration are accomplished by changing the contents of the configuration registers.
Abstract: A new digital configurable macro architecture is described. The digital configurable macro architecture is well suited for microcontroller or controller designs. In particular, the foundation of the digital configurable macro architecture is a programmable digital circuit block. The programmable digital circuit blocks can be configured to be coupled in series or in parallel to handle more complex digital functions. More importantly, the configuration of the programmable digital circuit block is determined by its small number of configuration registers. This provides much flexibility. In particular, configuration of the programmable digital circuit block is fast and easy, since changes in configuration are accomplished by changing the contents of the configuration registers, and the contents are generally a small number of configuration data bits. Thus, the programmable digital circuit block is dynamically configurable from one predetermined digital function to another predetermined digital function for real-time processing.

Proceedings ArticleDOI
07 May 2001
TL;DR: A method for DCT-domain blind measurement of blocking artifacts is proposed; by constituting a new block across any two adjacent blocks, the blocking artifact is modeled as a 2-D step function.
Abstract: A method for DCT-domain blind measurement of blocking artifacts is proposed. By constituting a new block across any two adjacent blocks, the blocking artifact is modeled as a 2-D step function. A fast DCT-domain algorithm has been derived to constitute the new block and extract all the parameters needed. Then a human visual system (HVS) based measurement of blocking artifacts is conducted. Experimental results have shown the effectiveness and stability of our method. The proposed technique can be used for online image/video quality monitoring and control in applications of DCT-domain image/video processing.

Patent
28 Sep 2001
TL;DR: In this article, a distributed election of a shared transmission schedule within an ad hoc network is proposed, where each node is given a ring number according to its location within the network topology and maintains local neighbor information along with its own part number and message digest.
Abstract: A system and method of providing distributed election of a shared transmission schedule within an ad hoc network. The invention includes a collision-free access protocol which resolves channel access contentions for time division multiple access (TDMA) of a single channel. Time-slots are organized into part numbers, which are included within sections, a sequence of which defines a block. Each node is given a ring number according to its location within the network topology and maintains local neighbor information along with its own part number and message digest. Collision-free channel access is automatically scheduled, and repetitious contention phases are resolved by a random permutation algorithm operating on message digests. An empty time-slot utilization method is also described, and data packets may also be transmitted subject to a non-zero collision probability within a blind section of the block.

Patent
01 Jun 2001
TL;DR: In this article, the authors propose a virtual storage system that generally uses larger segments, but divides large segments into smaller sub-segments during data movement operations, such that administration costs are generally low, but latencies caused by the movement of large data blocks are avoided.
Abstract: The present invention provides a virtual storage system that generally uses larger segments, but divides large segments into smaller sub-segments during data movement operations. The present invention provides a method and system having this hierarchy of segment sizes, namely a large segment for the normal case, while breaking the large segment into single disk blocks during data movement. The mapping has large segments except for those segments undergoing data movement. For those segments, it would be desirable to have the smallest segment size possible, namely, a single disk block. In this way, the administration costs are generally low, but latencies caused by the movement of large data blocks are avoided.

Patent
Dan Eylon, Amit Ramon, Yehuda Volk, Uri Raz, Shmuel Melamed
25 Sep 2001
TL;DR: In this paper, a method and system for streaming software applications (100) to a client (14) uses an application server having a library with the application files stored therein, a streaming manager is configured to send application files to the client as a plurality of streamlets, each streamlet corresponding to a particular data block in a respective application file.
Abstract: A method and system for streaming software applications (100) to a client (14) uses an application server having a library with the application files stored therein. A streaming manager is configured to send the application files to a client (14) as a plurality of streamlets, each streamlet corresponding to a particular data block in a respective application file. A streaming prediction engine (172) is provided to identify at least one streamlet which is predicted to be most appropriate to send to a given client at a particular time in accordance with a prediction model reflecting the manner in which the application files are loaded and used by the application. In the preferred implementation, the application files are preprocessed and stored as a set of compressed streamlets, each of which corresponds to a file data block having a size equal to a code page size, such as 4k, used during file reads by an operating system expected to be present on a client (14) system. In addition, the server is configured to send a startup block to a new streaming client containing a file structure specification of the application files and a set of streamlets comprising at least those streamlets containing the portions of the application required to enable execution of the application to be initiated.

Patent
13 Apr 2001
TL;DR: In this paper, a client agent program is configured with a NAS software component to enable selected distributed devices from the multiplicity of distributed devices to appear to client devices coupled to the network as dedicated NAS devices.
Abstract: Software-based network attached storage (NAS) services are hosted on a massively distributed processing system configured by coupling a multiplicity of distributed devices with a network, wherein each of the distributed devices are enabled to process workloads for the distributed processing system by a client agent program. More particularly, the client agent program is configured with a NAS software component to enable selected distributed devices from the multiplicity of distributed devices to appear to client devices coupled to the network as dedicated NAS devices. The NAS software component allocates an available amount of storage resources in the selected distributed devices to provide NAS services to the client devices. Storage priority controls, including user specified constraints, standard bit, block and file priority levels, and direct bit, block or file priority markings may be utilized to facilitate the full use of the available amounts of unused storage in the selected distributed devices.

Patent
Kenichi Ueda
20 Jun 2001
TL;DR: In this article, a USB device controller is applied to a peripheral device that performs data communications with a host by using a transmission endpoint and a reception endpoint via a USB interface, which contributes to downsizing of the circuit scale.
Abstract: A USB device controller is applied to a peripheral device that performs data communications with a host by using a transmission endpoint and a reception endpoint via a USB interface. Herein, a USB endpoint controller performs data transmission and data reception by using a reduced number of memories, which contributes to downsizing of the circuit scale of the USB device controller. The USB endpoint controller contains a transmission control block, a reception control block and a buffer switch control block as well as the memories. The buffer switch control block controls allocation of the memories to a transmission endpoint and a reception endpoint respectively in response to the type of token issued from the host. In response to an OUT token, data transmission is performed on the transmission endpoint, which actualizes a double buffer configuration, while the reception endpoint is also available for data reception with a single buffer configuration. In response to an IN token, data reception is performed on the reception endpoint, which actualizes a double buffer configuration, while the transmission endpoint is also available for data transmission with a single buffer configuration. Because of the double buffer configuration, it is possible to perform high-speed processing of data communications, particularly of transaction data based on the updated USB 2.0 standard.

Journal ArticleDOI
TL;DR: Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control.
Abstract: This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array and integer weights, in order to allow easier implementation with reconfigurable hardware such as field programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded with bit strings which correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control.

Patent
10 Aug 2001
TL;DR: In this paper, a database system that can synchronize all or a part of its contents over a limited bandwidth link is described, where the lowest layer, the bedrock layer, implements a transactional block store and the top level is a communication protocol that directs the synchronization process such that minimization of bits communicated, rounds of communication, and local computation are simultaneously addressed.
Abstract: A database system that can synchronize all or a part of its contents over a limited bandwidth link is described. The lowest layer of the system, the bedrock layer, implements a transactional block store. On top of this is a B+-tree that can efficiently compute a digest (hash) of the records within any range of key values in O(log n) time. The top level is a communication protocol that directs the synchronization process such that minimization of bits communicated, rounds of communication, and local computation are simultaneously addressed.
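A minimal sketch of the synchronization idea (a recursive divide-and-compare over key ranges; the bedrock store and B+-tree are replaced here by plain dicts, so digests cost O(n) rather than O(log n)): if the digests of a key range agree on both sides, nothing is transferred; otherwise the range is split and each half is compared, so only the differing records ever cross the limited-bandwidth link.

```python
import hashlib

def digest(db: dict, lo: int, hi: int) -> str:
    """Hash of all records with keys in [lo, hi), in key order."""
    h = hashlib.sha1()
    for k in sorted(k for k in db if lo <= k < hi):
        h.update(f"{k}:{db[k]}".encode())
    return h.hexdigest()

def sync(src: dict, dst: dict, lo: int, hi: int, leaf: int = 8):
    """Make dst match src over the key range [lo, hi)."""
    if digest(src, lo, hi) == digest(dst, lo, hi):
        return                                   # range already identical
    if hi - lo <= leaf:                          # small range: ship the records
        for k in [k for k in dst if lo <= k < hi and k not in src]:
            del dst[k]
        dst.update({k: v for k, v in src.items() if lo <= k < hi})
        return
    mid = (lo + hi) // 2
    sync(src, dst, lo, mid, leaf)
    sync(src, dst, mid, hi, leaf)
```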