
Showing papers on "Block (data storage) published in 2004"


Book ChapterDOI
19 Jan 2004
TL;DR: The efficient subdivision of a sensor network into uniform, mostly non-overlapping clusters of physically close nodes is an important building block in the design of efficient upper layer network functions such as routing, broadcast, data aggregation, and query processing.

417 citations


Proceedings Article
27 Jun 2004
TL;DR: The scheme, called Redundancy Elimination at the Block Level (REBL), leverages the benefits of compression, duplicate block suppression, and delta-encoding to eliminate a broad spectrum of redundant data in a scalable and efficient manner.
Abstract: Ongoing advancements in technology lead to ever-increasing storage capacities. In spite of this, optimizing storage usage can still provide rich dividends. Several techniques based on delta-encoding and duplicate block suppression have been shown to reduce storage overheads, with varying requirements for resources such as computation and memory. We propose a new scheme for storage reduction that reduces data sizes with an effectiveness comparable to the more expensive techniques, but at a cost comparable to the faster but less effective ones. The scheme, called Redundancy Elimination at the Block Level (REBL), leverages the benefits of compression, duplicate block suppression, and delta-encoding to eliminate a broad spectrum of redundant data in a scalable and efficient manner. REBL generally encodes more compactly than compression (up to a factor of 14) and a combination of compression and duplicate suppression (up to a factor of 6.7). REBL also encodes similarly to a technique based on delta-encoding, reducing overall space significantly in one case. Furthermore, REBL uses super-fingerprints, a technique that reduces the data needed to identify similar blocks while dramatically reducing the computational requirements of matching the blocks: it turns O(n²) comparisons into hash table lookups. As a result, using super-fingerprints to avoid enumerating matching data objects decreases computation in the resemblance detection phase of REBL by up to a couple of orders of magnitude.
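The super-fingerprint idea can be sketched in a few lines: each block yields a set of window hashes ("features"), groups of features are coalesced into super-fingerprints, and a hash table keyed on super-fingerprints turns pairwise resemblance comparison into lookups. The Python sketch below is illustrative, not REBL's implementation: the window size, feature count, and grouping are arbitrary choices, and CRC32 stands in for Rabin fingerprints.

```python
import hashlib
import zlib
from collections import defaultdict

def features(block: bytes, window: int = 8, k: int = 8) -> list:
    """Deterministic sliding-window hashes; keep the k smallest as features."""
    hs = {zlib.crc32(block[i:i + window]) for i in range(len(block) - window + 1)}
    return sorted(hs)[:k]

def super_fingerprints(feats: list, group: int = 4) -> list:
    """Coalesce groups of features into compact super-fingerprints."""
    return [hashlib.sha1(repr(feats[i:i + group]).encode()).hexdigest()
            for i in range(0, len(feats), group)]

def find_similar(blocks: list) -> set:
    """Hash-table lookups replace O(n^2) pairwise resemblance comparisons."""
    index = defaultdict(list)          # super-fingerprint -> ids of earlier blocks
    candidates = set()
    for bid, blk in enumerate(blocks):
        for sfp in super_fingerprints(features(blk)):
            for other in index[sfp]:
                candidates.add((other, bid))
            index[sfp].append(bid)
    return candidates
```

Two blocks sharing any super-fingerprint surface as a candidate pair without enumerating all pairs; duplicate blocks always match since their feature sets are identical.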

344 citations


Journal ArticleDOI
TL;DR: This work addresses optimal estimation of correlated multiple-input multiple-output (MIMO) channels using pilot signals, assuming knowledge of the second-order channel statistics at the transmitter and designing the transmitted signal to optimize two criteria: MMSE and the conditional mutual information between the MIMO channel and the received signal.
Abstract: We address optimal estimation of correlated multiple-input multiple-output (MIMO) channels using pilot signals, assuming knowledge of the second-order channel statistics at the transmitter. Assuming a block fading channel model and minimum mean square error (MMSE) estimation at the receiver, we design the transmitted signal to optimize two criteria: MMSE and the conditional mutual information between the MIMO channel and the received signal. Our analysis is based on the recently proposed virtual channel representation, which corresponds to beamforming in fixed virtual directions and exposes the structure and the true degrees of freedom in the correlated channel. However, our design framework is applicable to more general channel models, which include known channel models, such as the transmit and receive correlated model, as special cases. We show that optimal signaling is in a block form, where the block length depends on the signal-to-noise ratio (SNR) as well as the channel correlation matrix. The block signal corresponds to transmitting beams in successive symbol intervals along fixed virtual transmit angles, whose powers are determined by (nonidentical) water filling solutions based on the optimization criteria. Our analysis shows that these water filling solutions identify exactly which virtual transmit angles are important for channel estimation. In particular, at low SNR, the block length reduces to one, and all the power is transmitted on the beam corresponding to the strongest transmit angle, whereas at high SNR, the block length has a maximum length equal to the number of active virtual transmit angles, and the power is assigned equally to all active transmit angles. Consequently, from a channel estimation viewpoint, a faster fading rate can be tolerated at low SNRs relative to higher SNRs.
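The water filling solutions described above can be illustrated numerically. The sketch below solves the standard water-filling problem p_i = max(0, mu - 1/g_i) with sum(p_i) = P for channel gains g_i; it is a generic textbook routine, not the paper's criterion-specific (nonidentical) variants.

```python
def water_filling(gains, total_power):
    """Allocate power p_i = max(0, mu - 1/g_i) so that sum(p_i) == total_power.

    Sort the inverse gains and shrink the active set until the water level mu
    sits above the largest active inverse gain.
    """
    inv = sorted(1.0 / g for g in gains)
    active = len(inv)
    while active > 0:
        mu = (total_power + sum(inv[:active])) / active
        if mu > inv[active - 1]:       # all 'active' channels get positive power
            break
        active -= 1
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

With a small power budget only the strongest gain receives power, mirroring the low-SNR behavior described in the abstract; as the budget grows, more virtual angles become active.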

297 citations


01 Jan 2004
TL;DR: 7 different types of block matching algorithms used for motion estimation in video compression are implemented and compared, ranging from the very basic Exhaustive Search to the recent fast adaptive algorithms like Adaptive Rood Pattern Search.
Abstract: This paper is a review of the block matching algorithms used for motion estimation in video compression. It implements and compares 7 different types of block matching algorithms that range from the very basic Exhaustive Search to the recent fast adaptive algorithms like Adaptive Rood Pattern Search. The algorithms that are evaluated in this paper are widely accepted by the video compressing community and have been used in implementing various standards, ranging from MPEG1 / H.261 to MPEG4 / H.263. The paper also presents a very brief introduction to the entire flow of video compression.
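As a baseline for the algorithms compared above, the Exhaustive Search can be sketched directly: for one block, every candidate displacement within a ±p search range is scored with the sum of absolute differences (SAD), and the minimum wins. A minimal Python sketch with frames as lists of lists; the block size and search range are arbitrary:

```python
def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between an n x n block and its displaced candidate."""
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(n) for j in range(n))

def exhaustive_search(cur, ref, bx, by, n=4, p=2):
    """Try every displacement within +/-p pixels; return (dx, dy) minimizing SAD."""
    h, w = len(ref), len(ref[0])
    best, best_mv = float('inf'), (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            if 0 <= by + dy and by + dy + n <= h and 0 <= bx + dx and bx + dx + n <= w:
                cost = sad(cur, ref, bx, by, dx, dy, n)
                if cost < best:
                    best, best_mv = cost, (dx, dy)
    return best_mv
```

The fast algorithms surveyed in the paper (e.g., Adaptive Rood Pattern Search) evaluate only a small, adaptively chosen subset of these candidate displacements.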

296 citations


Patent
Rajiv Vijayan1, Aamod Khandekar1, Fuyun Ling1, Gordon Kent Walker1, Ramaswamy Murali1 
02 Sep 2004
TL;DR: In this paper, a technique for multiplexing and transmitting multiple data streams is described, where each super-frame has a predetermined time duration and is further divided into multiple (e.g., four) frames.
Abstract: Techniques for multiplexing and transmitting multiple data streams are described. Transmission of the multiple data streams occurs in “super-frames”. Each super-frame has a predetermined time duration and is further divided into multiple (e.g., four) frames. Each data block for each data stream is outer encoded to generate a corresponding code block. Each code block is partitioned into multiple subblocks, and each data packet in each code block is inner encoded and modulated to generate modulation symbols for the packet. The multiple subblocks for each code block are transmitted in the multiple frames of the same super-frame, one subblock per frame. Each data stream is allocated a number of transmission units in each super-frame and is assigned specific transmission units to achieve efficient packing. A wireless device can select and receive individual data streams.

289 citations


Patent
21 Dec 2004
TL;DR: A nonvolatile memory system is organized in physical groups of physical memory locations and each physical group (metablock) is erasable as a unit and can be used to store a logical group of data.
Abstract: A non-volatile memory system is organized in physical groups of physical memory locations. Each physical group (metablock) is erasable as a unit and can be used to store a logical group of data. A memory management system allows for update of a logical group of data by allocating a metablock dedicated to recording the update data of the logical group. The update metablock records update data in the order received and has no restriction on whether the recording is in the correct logical order as originally stored (sequential) or not (chaotic). Eventually the update metablock is closed to further recording. One of several processes will take place, but will ultimately end up with a fully filled metablock in the correct order which replaces the original metablock. In the chaotic case, directory data is maintained in the non-volatile memory in a manner that is conducive to frequent updates. The system supports multiple logical groups being updated concurrently.

245 citations


Proceedings ArticleDOI
07 Mar 2004
TL;DR: Medium access control protocols are developed to enable users in a wireless network to opportunistically transmit when they have favorable channel conditions, without requiring a centralized scheduler.
Abstract: In this paper, we develop medium access control protocols to enable users in a wireless network to opportunistically transmit when they have favorable channel conditions, without requiring a centralized scheduler. We consider approaches that use splitting algorithms to resolve collisions over a sequence of minislots, and determine the user with the best channel. First, we present a basic algorithm for a system with i.i.d. block fading and a fixed number of backlogged users. We give an analysis of the throughput of this system and show that the average number of minislots required to find the user with the best channel is less than 2.5, independent of the number of users or the fading distribution. We then extend this algorithm to a channel with memory and also develop a reservation based scheme that offers improved performance as the channel memory increases. Finally we consider a model with random arrivals and propose a modified algorithm for this case. Simulation results are given to illustrate the performance in each of these settings.
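The splitting idea is easy to simulate. In the hedged sketch below, each user's channel quality is mapped through its CDF to a uniform value u_i; users whose u_i falls in the current window transmit, and ternary feedback (idle / success / collision) drives how the window is narrowed. This is a simplified variant for illustration, not the paper's exact algorithm.

```python
import random

def best_user_splitting(us):
    """Resolve contention among users with uniform channel-quality values us
    (u_i = F(h_i)); return (index of the best user, minislots used)."""
    n = len(us)
    lo, hi = 1.0 - 1.0 / n, 1.0
    split = None                       # lower edge of the last collided interval
    slots = 0
    while True:
        slots += 1
        active = [i for i, u in enumerate(us) if lo <= u < hi]
        if len(active) == 1:           # success: exactly one transmitter
            return active[0], slots
        if len(active) > 1:            # collision: keep the upper half
            split, lo = lo, (lo + hi) / 2.0
        elif split is None:            # idle, nothing saved: widen the window downward
            lo, hi = max(0.0, 1.0 - 2.0 * (1.0 - lo)), lo
        else:                          # idle after a collision: search its lower part
            lo, hi = (split + lo) / 2.0, lo
```

Because the procedure always resolves the uppermost occupied interval first, the first success is the user with the best channel, and the average number of minislots stays small regardless of the number of users.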

227 citations


Journal ArticleDOI
TL;DR: Efficient optimization algorithms based on simplified EXIT chart construction are devised to find irregular codes improving the convergence of iterative decoding, which yields systems performing very well for short block lengths, too.
Abstract: Based on extrinsic information transfer (EXIT) charts, the convergence behavior of iterative decoding is studied for a number of serially concatenated systems, such as a serially concatenated code, coded data transmission over an intersymbol interference channel, bit-interleaved coded modulation, or trellis-coded modulation. Efficient optimization algorithms based on simplified EXIT chart construction are devised to find irregular codes improving the convergence of iterative decoding. One optimization criterion is to find concatenated systems exhibiting thresholds of successful decoding convergence, which are close to information-theoretic limits. However, these thresholds are approached only for very long block lengths. To overcome this problem, the decoding convergence after a fixed, finite number of iterations is optimized, which yields systems performing very well for short block lengths, too. As an example, optimal system configurations for communication over an additive white Gaussian noise channel are presented.

206 citations


Proceedings Article
31 Mar 2004
TL;DR: C-Miner is proposed, an algorithm which uses a data mining technique called frequent sequence mining to discover block correlations in storage systems and runs reasonably fast with feasible space requirement, indicating that it is a practical tool for dynamically inferring correlations in a storage system.
Abstract: Block correlations are common semantic patterns in storage systems. These correlations can be exploited for improving the effectiveness of storage caching, prefetching, data layout and disk scheduling. Unfortunately, information about block correlations is not available at the storage system level. Previous approaches for discovering file correlations in file systems do not scale well enough to be used for discovering block correlations in storage systems. In this paper, we propose C-Miner, an algorithm which uses a data mining technique called frequent sequence mining to discover block correlations in storage systems. C-Miner runs reasonably fast with feasible space requirement, indicating that it is a practical tool for dynamically inferring correlations in a storage system. Moreover, we have also evaluated the benefits of block correlation-directed prefetching and data layout through experiments. Our results using real system workloads show that correlation-directed prefetching and data layout can reduce average I/O response time by 12-25% compared to the base case, and 7-20% compared to the commonly used sequential prefetching scheme.
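To make the idea concrete, the toy sketch below mines only length-two correlations from a block access trace: ordered pairs (a, b) where b follows a within a small window, kept once they reach a minimum support. C-Miner mines general frequent sequences with a closed-sequence algorithm; this is merely the simplest instance of the technique, with arbitrary window and support parameters.

```python
from collections import Counter

def mine_block_pairs(trace, window=2, min_support=3):
    """Count ordered block pairs (a, b) where b follows a within `window`
    accesses; keep pairs seen at least `min_support` times."""
    counts = Counter()
    for i, a in enumerate(trace):
        seen = set()
        for b in trace[i + 1:i + 1 + window]:
            if b != a and b not in seen:    # count each pair once per occurrence of a
                counts[(a, b)] += 1
                seen.add(b)
    return {pair: c for pair, c in counts.items() if c >= min_support}
```

A surviving rule such as (1 → 2) suggests prefetching block 2 whenever block 1 is read, or laying the two blocks out close together on disk.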

204 citations


Journal ArticleDOI
TL;DR: Local execution of the embedded damage detection method is shown to save energy by avoiding utilization of the wireless channel to transmit raw time-history data.
Abstract: A low-cost wireless sensing unit is designed and fabricated for deployment as the building block of wireless structural health monitoring systems. Finite operational lives of portable power supplies, such as batteries, necessitate optimization of the wireless sensing unit design to attain overall energy efficiency. This is in conflict with the need for wireless radios that have far-reaching communication ranges that require significant amounts of power. As a result, a penalty is incurred by transmitting raw time-history records using scarce system resources such as battery power and bandwidth. Alternatively, a computational core that can accommodate local processing of data is designed and implemented in the wireless sensing unit. The role of the computational core is to perform interrogation tasks of collected raw time-history data and to transmit via the wireless channel the analysis results rather than time-history records. To illustrate the ability of the computational core to execute such embedded engineering analyses, a two-tiered time-series damage detection algorithm is implemented as an example. Using a lumped-mass laboratory structure, local execution of the embedded damage detection method is shown to save energy by avoiding utilization of the wireless channel to transmit raw time-history data.

204 citations


Patent
15 Dec 2004
TL;DR: A nonvolatile memory system of a type having blocks of memory cells erased together and which are programmable from an erased state in units of a large number of pages per block is described in this article.
Abstract: A non-volatile memory system of a type having blocks of memory cells erased together and which are programmable from an erased state in units of a large number of pages per block. If the data of only a few pages of a block are to be updated, the updated pages are written into another block provided for this purpose. Updated pages from multiple blocks are programmed into this other block in an order that does not necessarily correspond with their original address offsets. The valid original and updated data are then combined at a later time, when doing so does not impact on the performance of the memory. If the data of a large number of pages of a block are to be updated, however, the updated pages are written into an unused erased block and the unchanged pages are also written to the same unused block. By handling the updating of a few pages differently, memory performance is improved when small updates are being made. The memory controller can dynamically create and operate these other blocks in response to usage by the host of the memory system.

Proceedings ArticleDOI
26 Jun 2004
TL;DR: This paper proposes a software based adaptive incremental checkpoint technique which uses a secure hash function to uniquely identify changed blocks in memory, and is the first self-optimizing algorithm that dynamically computes the optimal block boundaries, based on the history of changed blocks.
Abstract: Given the scale of massively parallel systems, occurrence of faults is no longer an exception but a regular event. Periodic checkpointing is becoming increasingly important in these systems. However, huge memory footprints of parallel applications place severe limitations on scalability of normal checkpointing techniques. Incremental checkpointing is a well researched technique that addresses scalability concerns, but most of the implementations require paging support from hardware and the underlying operating system, which may not be always available. In this paper, we propose a software-based adaptive incremental checkpoint technique which uses a secure hash function to uniquely identify changed blocks in memory. Our algorithm is the first self-optimizing algorithm that dynamically computes the optimal block boundaries, based on the history of changed blocks. This provides better opportunities for minimizing checkpoint file size. Since the hash is computed in software, we do not need any system support for this. We have implemented and tested this mechanism on the BlueGene/L system. Our results on several well-known benchmarks are encouraging, both in terms of reduction in average checkpoint file size and adaptivity toward the application's memory access patterns.
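The core mechanism (hash each memory block, write only those whose hash changed since the last checkpoint) can be sketched as below. The fixed 64-byte block size is a placeholder: the paper's contribution is precisely to adapt block boundaries from the history of changes, which is omitted here.

```python
import hashlib

BLOCK = 64  # fixed block size; the paper adapts boundaries dynamically

def changed_blocks(memory: bytes, prev_hashes: dict):
    """Hash fixed-size blocks; return (blocks to write, updated hash table).

    Only blocks whose SHA-256 digest differs from the previous checkpoint
    are included in the incremental checkpoint.
    """
    new_hashes, delta = {}, {}
    for off in range(0, len(memory), BLOCK):
        chunk = memory[off:off + BLOCK]
        h = hashlib.sha256(chunk).digest()
        new_hashes[off] = h
        if prev_hashes.get(off) != h:
            delta[off] = chunk
    return delta, new_hashes
```

Because the comparison uses hashes rather than page-protection bits, no paging support from the hardware or operating system is needed.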

Proceedings ArticleDOI
22 Jun 2004
TL;DR: This scheme takes advantage of integer DCT coefficients' Laplacian-shape-like distribution, which permits low distortion between the watermarked image and the original one caused by the bit-shift operations of the companding technique in the embedding process.
Abstract: We present a high capacity reversible watermarking scheme using companding technique over integer DCT coefficients of image blocks. This scheme takes advantage of integer DCT coefficients' Laplacian-shape-like distribution, which permits low distortion between the watermarked image and the original one caused by the bit-shift operations of the companding technique in the embedding process. In our scheme, we choose AC coefficients in the integer DCT domain for the bit-shift operation, and therefore the capacity and the quality of the watermarked image can be adjusted by selecting different numbers of coefficients of different frequencies. To prevent overflows and underflows in the spatial domain caused by modification of the DCT coefficients, we design a block discrimination structure to find suitable blocks that can be used for embedding without overflow or underflow problems. We can also use this block discrimination structure to embed an overhead of location information of all blocks suitable for embedding. With this scheme, watermark bits can be embedded in the saved LSBs of coefficient blocks, and retrieved correctly during extraction, while the original image can be restored perfectly.
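The bit-shift operation at the heart of such reversible schemes is tiny: a coefficient c becomes 2c + b, which hides bit b in the LSB and can be undone exactly. The sketch below shows just this step on a list of integer coefficients; the paper's companding, block discrimination, and overflow/underflow handling are omitted.

```python
def embed(coeffs, bits):
    """Reversible bit-shift embedding: each selected coefficient c becomes 2*c + b."""
    assert len(bits) <= len(coeffs)
    out = list(coeffs)
    for i, b in enumerate(bits):
        out[i] = 2 * out[i] + b
    return out

def extract(coeffs, nbits):
    """Recover the watermark bits and restore the original coefficients exactly."""
    bits, orig = [], list(coeffs)
    for i in range(nbits):
        b = orig[i] % 2            # Python's % yields 0/1 for negatives too
        bits.append(b)
        orig[i] = (orig[i] - b) // 2
    return bits, orig
```

Extraction inverts embedding bit-for-bit, which is what makes the watermark reversible: the restored coefficients are identical to the originals.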

Patent
20 Dec 2004
TL;DR: In this paper, the authors leverage the parity computation that exists in RAID systems to reduce the amount of data to be stored and transferred in a networked storage system, by caching, transferring, and storing the parity or delta bytes of changes on a block as opposed to the data block itself.
Abstract: A method dramatically reduces the amount of data to be stored and transferred in a networked storage system. Preferably, the network storage system provides continued data protection through mirroring/replication, disk-to-disk backup, data archiving for future retrieval, and Information Lifecycle Management (ILM). The idea is to leverage the parity computation that exists in RAID systems. By caching, transferring, and storing the parity or delta bytes of changes on a block, as opposed to the data block itself, substantial data reduction is possible without using sophisticated compression algorithms at the production side, minimizing performance impacts upon production servers. Data can be computed using the parity/delta and previously existing data at the mirror side, replication side, or backup storage, or at retrieval time upon events such as failures or ILM operations.
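The parity/delta idea reduces to XOR, which is its own inverse: ship old XOR new instead of the new block, and reconstruct by XOR-ing the delta with the version already held at the mirror, replica, or backup side. A minimal sketch:

```python
def xor_delta(old: bytes, new: bytes) -> bytes:
    """RAID parity-style delta of two equal-sized block versions.

    Because XOR is self-inverse, the same function reconstructs:
    new == xor_delta(old, xor_delta(old, new)).
    """
    assert len(old) == len(new)
    return bytes(a ^ b for a, b in zip(old, new))
```

When only a few bytes of a block change, the delta is mostly zero bytes, so it transfers and stores far more cheaply than the block itself, even without heavyweight compression.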

Proceedings ArticleDOI
05 Apr 2004
TL;DR: A modified version of the min-sum algorithm is used, which has the advantage of simpler computations compared to the sum-product algorithm without any loss in performance.
Abstract: This paper presents a semi-parallel architecture for decoding low density parity check (LDPC) codes. A modified version of the min-sum algorithm is used, which has the advantage of simpler computations compared to the sum-product algorithm without any loss in performance. The special structure of the parity check matrix of the proposed code leads to an efficient semi-parallel implementation of the decoder for a family of (3, 6) LDPC codes. A prototype architecture has been implemented in VHDL on programmable hardware. The design is easily scalable and reconfigurable for larger block sizes. Simulation results show that our proposed decoder for a block length of 1536 bits can achieve data rates up to 127 Mbps.
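The simplification min-sum brings is visible in the check-node update: instead of the sum-product's tanh products, each outgoing message is just a sign product and a minimum of magnitudes over the other incoming LLRs. A plain (unmodified, unscaled) Python sketch:

```python
def min_sum_check_node(msgs):
    """Min-sum check-node update: the outgoing LLR on edge i is the product of
    the signs and the minimum magnitude over all OTHER incoming edges."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out
```

"Modified" min-sum variants typically apply a normalization or offset to that minimum to close the small performance gap to sum-product, while keeping the hardware to comparators and sign logic.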

Patent
15 Jan 2004
TL;DR: In this paper, the layout area of X-system peripheral circuits is reduced by configuring the sub-decoders 30 of one block, the control unit of the bit lines, to be controlled by two main decoders.
Abstract: PROBLEM TO BE SOLVED: To reduce the layout area of X-system peripheral circuits and to reduce erasing disturbance of memory cells in a semiconductor memory device such as a flash memory. SOLUTION: The number of wiring lines from the gate decoders to the sub-decoders, which determines the layout area, is decreased, and the layout area of the X-system peripheral circuits is thereby reduced, by configuring the sub-decoders 30 of one block, the control unit of the bit lines, to be controlled by two main decoders 10. As a result, the number of non-selected sectors of the selected main decoders is decreased, and erasing disturbance is reduced.

Book ChapterDOI
Shai Halevi1
20 Dec 2004
TL;DR: The EME* mode as mentioned in this paper is a refinement of the EME mode of Halevi and Rogaway, and inherits the efficiency and parallelism from the original EME.
Abstract: This work describes a mode of operation, EME*, that turns a regular block cipher into a length-preserving enciphering scheme for messages of (almost) arbitrary length. Specifically, the resulting scheme can handle any bit-length, not shorter than the block size of the underlying cipher, and it also handles associated data of arbitrary bit-length. Such a scheme can either be used directly in applications that need encryption but cannot afford length expansion, or serve as a convenient building block for higher-level modes. The mode EME* is a refinement of the EME mode of Halevi and Rogaway, and it inherits the efficiency and parallelism from the original EME.

Patent
Tamer Kadous1
09 Sep 2004
TL;DR: In this article, an incremental redundancy (IR) transmission in a MIMO system is considered, where the receiver detects a received symbol block to obtain a detected symbol block, processes all detected symbol blocks obtained for the data packet, and provides a decoded packet.
Abstract: For an incremental redundancy (IR) transmission in a MIMO system (Fig. 1), a transmitter (110) processes a data packet based on a selected rate to obtain multiple data symbol blocks. The transmitter transmits one data symbol block at a time until a receiver correctly recovers the data packet or all blocks are transmitted. Whenever a data symbol block is received from the transmitter, the receiver (150) detects a received symbol block to obtain a detected symbol block, processes all detected symbol blocks obtained for the data packet, and provides a decoded packet. If the decoded packet is in error, then the receiver repeats the processing when another data symbol block is received for the data packet. The receiver may also perform iterative detection and decoding on the received symbol blocks for the data packet multiple times to obtain the decoded packet.

Patent
Tamer Kadous1
13 Aug 2004
TL;DR: In this paper, an incremental redundancy transmission on multiple parallel channels in a MIMO system is considered, where the receiver performs detection and obtains symbol blocks transmitted on the parallel channels independently or in a designated order.
Abstract: For incremental redundancy transmission on multiple parallel channels in a MIMO system, a transmitter processes (e.g., encodes, partitions, interleaves, and modulates) each data packet for each parallel channel based on a rate selected for the parallel channel and obtains multiple symbol blocks for the packet. For each data packet, the transmitter transmits one symbol block at a time on its parallel channel until a receiver recovers the packet or all blocks have been transmitted. The receiver performs detection and obtains symbol blocks transmitted on the parallel channels. The receiver recovers the data packets transmitted on the parallel channels independently or in a designated order. The receiver processes (e.g., demodulates, deinterleaves, re-assembles, and decodes) all symbol blocks obtained for each data packet and provides a decoded packet. The receiver may estimate and cancel interference due to recovered data packets so that data packets recovered later can achieve higher SINRs.

Patent
11 Feb 2004
TL;DR: In this paper, the original digital data stored in a memory is searched for on the basis of an input image, and difference information is extracted by comparing the retrieved original data and the input image.
Abstract: Original digital data stored in a memory is searched for on the basis of an input image, difference information is extracted by comparing the retrieved original digital data and the input image, and the difference information is composited with the original digital data. Furthermore, digital data generated by composition is stored in the memory. Also, original digital data stored in a memory is searched for on the basis of an input image, and when no original digital data is retrieved, the input image is converted into vector data, and the image that has been converted into the vector data is stored as digital data in the memory. Moreover, region segmentation information obtained in a block selection step and an input image are composited, the composite image is displayed on an operation screen of an MFP, and a rectangular block to be vectorized is designated as a specific region from the displayed region segmentation information. As a method of designating the specific region, for example, the user designates one or a plurality of rectangular blocks in an image using a pointing device.

Proceedings Article
31 Mar 2004
TL;DR: D-GRAID as mentioned in this paper is a gracefully-degrading and quickly-recovering RAID storage array that ensures that most files within the file system remain available even when an unexpectedly high number of faults occur.
Abstract: We present the design, implementation, and evaluation of D-GRAID, a gracefully-degrading and quickly-recovering RAID storage array. D-GRAID ensures that most files within the file system remain available even when an unexpectedly high number of faults occur. D-GRAID also recovers from failures quickly, restoring only live file system data to a hot spare. Both graceful degradation and live-block recovery are implemented in a prototype SCSI-based storage system underneath unmodified file systems, demonstrating that powerful "file-system like" functionality can be implemented behind a narrow block-based interface.

Book ChapterDOI
05 Feb 2004
TL;DR: The resulting design offers better hardware efficiency than other recent 128-key-bit block ciphers, and resistance against side-channel cryptanalysis was also considered as a design criterion for ICEBERG.
Abstract: We present a fast involutional block cipher optimized for reconfigurable hardware implementations. ICEBERG uses 64-bit text blocks and 128-bit keys. All components are involutional and allow very efficient combinations of encryption/decryption. Hardware implementations of ICEBERG allow the key to be changed at every clock cycle without any performance loss, and its round keys are derived “on-the-fly” in encryption and decryption modes (no storage of round keys is needed). The resulting design offers better hardware efficiency than other recent 128-key-bit block ciphers. Resistance against side-channel cryptanalysis was also considered as a design criterion for ICEBERG.
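The involution property that lets ICEBERG share encryption and decryption hardware can be illustrated with a toy nibble cipher (not ICEBERG's actual components): the S-box is its own inverse, so decryption reuses exactly the same substitution and key-XOR logic, only mirroring the round order.

```python
def make_involution(pairs, n=16):
    """Build a permutation that is its own inverse from disjoint swap pairs."""
    s = list(range(n))
    for a, b in pairs:
        s[a], s[b] = s[b], s[a]
    return s

# A toy involutional 4-bit S-box: SBOX[SBOX[x]] == x for every x.
SBOX = make_involution([(0, 5), (1, 12), (2, 9), (3, 7),
                        (4, 13), (6, 11), (8, 14), (10, 15)])

def encrypt(nibbles, round_keys):
    x = list(nibbles)
    for k in round_keys:
        x = [SBOX[v ^ k] for v in x]   # key mix, then involutional substitution
    return x

def decrypt(nibbles, round_keys):
    x = list(nibbles)
    for k in reversed(round_keys):
        x = [SBOX[v] ^ k for v in x]   # same S-box (its own inverse), mirrored order
    return x
```

Because every component is an involution, the decryption datapath is the encryption datapath run with the round operations mirrored, which is why one hardware circuit suffices for both directions.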

Journal ArticleDOI
TL;DR: From the simulation results, the proposed methods show better performance in estimating "true motion" and reducing blocking artifacts with less complexity in calculation compared to a conventional method.
Abstract: In this paper, a motion compensated frame interpolation (MCI) algorithm adapting new block-based motion estimation (BME), which is overlapped block-based motion estimation (OBME), is proposed for frame rate up-conversion (FRC). Unlike conventional BME algorithms, where a video frame is divided into many non-overlapping square blocks of pixels, the proposed BME is executed using an overlapped matching block in order to get more accurate motion trajectory. To reduce computational complexity caused by the OBME, a sub-sampled pixel block is used for the OBME. The proposed OBME is executed with the various overlapped blocks that have different block sizes and sub-sampling ratios. In this paper, instead of mean absolute difference (MAD) used for general BME, a modified MAD is applied for the OBME. The MAD proposed is weighted using the magnitude of MV that corresponds to the estimating position. From the simulation results, the proposed methods show better performance in estimating "true motion" and reducing blocking artifacts with less complexity in calculation compared to a conventional method.

Patent
Svend Frolund1, Arif Merchant1, Yasusuhi Saito1, Susan Spence1, Alistair Veitch1 
13 May 2004
TL;DR: In this article, the read, write and recovery operations for replicated data are provided, using first and second timestamps to coordinate the operations among the designated devices (101, 102).
Abstract: Read, write and recovery operations for replicated data are provided. In one aspect, a system for redundant storage of data includes a plurality of storage devices (102) and a communication medium (104) for interconnecting the storage devices (102). At least two of the storage devices (102) are designated devices (102) for storing a block of data. Each designated device (102) has a version of the data, a first timestamp that is indicative of when the version of data was last updated, and a second timestamp that is indicative of any pending update to the block of data. The read, write and recovery operations are performed on the data using the first and second timestamps to coordinate the operations among the designated devices (102).

Proceedings ArticleDOI
05 Oct 2004
TL;DR: An adaptive mechanism that relies on a logical tree data structure, the range search tree (RST), to support range queries efficiently and avoids bottleneck problems encountered in traditional tree-based systems is described.
Abstract: In recent years, distributed hash tables (DHTs) have been proposed as a fundamental building block for large scale distributed applications. Important functionalities such as searching have been added to the DHTs basic lookup capability. However, supporting range queries efficiently remains a difficult problem. We describe an adaptive mechanism that relies on a logical tree data structure, the range search tree (RST), to support range queries efficiently. Nodes in the RST automatically group registrations based on their values. Queries are decomposed into a small number of sub-queries for efficient resolution. The system dynamically optimizes itself to minimize the registration and query cost based on observed load. The system is fully distributed and avoids bottleneck problems encountered in traditional tree-based systems. Extensive simulation results validate the effectiveness of the system.
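The decomposition of a range query into a small number of sub-queries works like canonical-node decomposition in a segment tree: recurse from the root and stop at nodes fully contained in the query. A sketch over a fixed domain [0, 16); the RST additionally grows, shrinks, and load-balances the tree, which is omitted here.

```python
def decompose(lo, hi, node_lo=0, node_hi=16):
    """Decompose the half-open query [lo, hi) into O(log N) canonical nodes."""
    if hi <= node_lo or node_hi <= lo:        # node disjoint from the query
        return []
    if lo <= node_lo and node_hi <= hi:       # node fully inside: one sub-query
        return [(node_lo, node_hi)]
    mid = (node_lo + node_hi) // 2
    return decompose(lo, hi, node_lo, mid) + decompose(lo, hi, mid, node_hi)
```

Each returned node corresponds to one sub-query routed through the DHT, so a range touching many values still costs only a logarithmic number of lookups.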

Journal ArticleDOI
TL;DR: In this article, a six-port phase/frequency discriminator (SPD), composed of four 90° hybrid couplers, is used as the receiver front-end of a collision avoidance radar sensor operating at 94 GHz.
Abstract: A new 94-GHz collision avoidance radar sensor is proposed. The receiver front-end module is based on a six-port phase/frequency discriminator (SPD). The SPD, composed of four 90° hybrid couplers, is manufactured in a metal block of brass using a computer numerically controlled milling machine. Simulated and measured S-parameters of the SPD are presented in the frequency band. New SPD computer models are generated and used in the system simulations. Preliminary measurements and system simulations performed to obtain the relative velocity of the target and its distance are presented. Statistical evaluations show an acceptable measurement error for this radar sensor.

Patent
20 May 2004
TL;DR: In this article, addresses are pipelined to multibank memories on both rising and falling edges of a clock and the Global Address Supervisor pipelines these addresses optimally without causing bank or block or subarray operational conflicts.
Abstract: The invention describes and provides pipelining of addresses to memory products. Addresses are pipelined to multibank memories on both rising and falling edges of a clock. A Global Address Supervisor pipelines these addresses optimally without causing bank, block, or subarray operational conflicts. Enhanced data throughput and bandwidth, together with substantially improved bus utilization, can be realized simultaneously. In peer-to-peer connected systems, significant random data access throughput can be obtained.
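The scheduling constraint described above can be sketched as a greedy loop: one address may issue per half clock cycle (rising or falling edge), but never to a bank that is still busy with a previous operation. A toy model (parameters and names are my assumptions, not the patent's):

```python
BANK_BUSY_HALF_CYCLES = 4  # assumed bank cycle time, in half-clock ticks

def schedule(addresses, num_banks=4):
    """Issue addresses in order on half-cycle ticks, avoiding bank conflicts.

    Returns (tick, address) pairs; even ticks model rising edges,
    odd ticks falling edges.
    """
    free_at = [0] * num_banks        # earliest half-cycle each bank is free
    issued, tick = [], 0
    pending = list(addresses)
    while pending:
        addr = pending[0]
        bank = addr % num_banks      # simple low-bits bank mapping
        if free_at[bank] <= tick:    # bank idle: issue on this clock edge
            issued.append((tick, addr))
            free_at[bank] = tick + BANK_BUSY_HALF_CYCLES
            pending.pop(0)
        tick += 1                    # advance half a clock cycle
    return issued

print(schedule([0, 1, 2, 3, 4]))
# -> [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

Addresses striped across distinct banks issue back-to-back on consecutive edges, while two addresses hitting the same bank (e.g. `schedule([0, 4])`) force a stall until the bank cycle completes.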

Patent
30 Jun 2004
TL;DR: In this paper, a method of reading a data block from a sector of a recording media is described, where the data block is read from the same sector of the recording channel using the adjusted timing recovery block that is adjusted based on the re-decoded data block.
Abstract: A method of reading a data block from a sector of a recording media is described. The data block from the sector of the recording channel is decoded with an ECC decoder (first trial). The data block is re-decoded (second trial) using an adjusted timing recovery block that is adjusted based on the decoded data block, if the number of errors exceeded an error correction capability of the ECC decoder on the first trial. In one embodiment, the data block is reread from the same sector of the recording channel using the adjusted timing recovery block that is adjusted based on the re-decoded data block. The data block is subsequently jointly decoded with the waveforms obtained from the second trial by a possibly modified sequence detector, if the number of errors exceeded the error correction capability of the ECC decoder during the second trial.
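The retry logic described above can be summarized as a small control-flow sketch (function names and the capability constant are placeholders of mine, not taken from the patent):

```python
ECC_CAPABILITY = 2  # assumed maximum number of correctable errors

def read_block(decode):
    """decode(trial) -> (data, error_count); trial 1 is the plain read,
    trial 2 re-decodes with timing recovery adjusted from the first trial.
    Returns (data, trial_that_succeeded), or (None, 2) if both fail and
    the joint sequence-detector path described in the patent would run."""
    data, errors = decode(trial=1)          # first trial: plain ECC decode
    if errors <= ECC_CAPABILITY:
        return data, 1
    data, errors = decode(trial=2)          # second trial: adjusted timing
    if errors <= ECC_CAPABILITY:
        return data, 2
    return None, 2

# Toy channel: the first decode fails, the adjusted retry succeeds.
def toy_decode(trial):
    return ("DATA", 5 if trial == 1 else 1)

print(read_block(toy_decode))   # -> ('DATA', 2)
```

The key point is that the second trial is not a blind re-read: the timing recovery block is re-tuned using information recovered from the failed first decode, so the retry sees a cleaner signal.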

Patent
20 Jan 2004
TL;DR: In this paper, the authors propose a block level data storage service that provides differentiated pools of storage on a single storage device by leveraging the different performance characteristics across the logical block name (LBN) space of the storage device (or devices).
Abstract: The systems and methods described herein include among other things, systems for providing a block level data storage service. More particularly, the systems and methods of the invention provide a block level data storage service that provides differentiated pools of storage on a single storage device. To this end, the systems and methods described herein leverage the different performance characteristics across the logical block name (LBN) space of the storage device (or devices). These different performance characteristics may be exploited to support two or more classes of storage on a single device.
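The underlying observation is that on many disks, lower logical block numbers map to outer tracks with higher sequential bandwidth, so the LBN space can be partitioned into classes of differing performance. A minimal sketch (class names and fractions are invented for illustration):

```python
DEVICE_BLOCKS = 1_000_000

# (class name, fraction of LBN space), ordered from the fastest region on.
CLASSES = [("gold", 0.25), ("silver", 0.45), ("bronze", 0.30)]

def storage_class(lbn):
    """Return the storage class serving a given logical block number."""
    boundary = 0
    for name, fraction in CLASSES:
        boundary += int(DEVICE_BLOCKS * fraction)
        if lbn < boundary:
            return name
    raise ValueError("LBN outside device")

print(storage_class(100_000), storage_class(500_000), storage_class(990_000))
# -> gold silver bronze
```

Allocation policy then becomes a placement decision: latency-sensitive volumes are carved from the low-LBN "gold" region, bulk or archival volumes from the rest, all on one physical device.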

Journal ArticleDOI
TL;DR: An algorithm for the segmentation of fingerprints and a criterion for evaluating the block feature are presented and experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background.
Abstract: An algorithm for the segmentation of fingerprints and a criterion for evaluating the block features are presented. The segmentation uses three block features: the block cluster degree, the block mean information, and the block variance. An optimal linear classifier has been trained for per-block classification, using the criterion of a minimal number of misclassified samples. Morphology has been applied as postprocessing to reduce the number of classification errors. The algorithm is tested on the FVC2002 database: only 2.45% of the blocks are misclassified, and the postprocessing further reduces this ratio. Experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background.
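The per-block linear classification can be illustrated with a toy example. The weights below are made up, and only two of the paper's three features (block mean and block variance) are used; the real classifier also uses the cluster degree and is trained rather than hand-set:

```python
def block_features(block):
    """Mean and variance of a block of grayscale pixel values."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    return mean, var

# Invented weights: foreground ridge blocks tend to have lower mean
# (dark ridges) and high variance; uniform bright background has a
# high mean and low variance.
W_MEAN, W_VAR, BIAS = -0.01, 0.002, 1.0

def is_foreground(block):
    mean, var = block_features(block)
    return W_MEAN * mean + W_VAR * var + BIAS > 0

ridge_block = [30, 220, 40, 210, 35, 215, 45, 205]   # alternating ridges/valleys
background  = [230, 235, 240, 228, 232, 238, 234, 236]
print(is_foreground(ridge_block), is_foreground(background))
# -> True False
```

Because each block is classified independently, isolated mistakes are common; the morphological postprocessing mentioned in the abstract (e.g. opening/closing on the block mask) removes such speckle.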