
Showing papers on "Block (data storage) published in 1998"


Patent
31 Dec 1998
TL;DR: In this paper, a method and apparatus for copying, transferring, backing up, and restoring data are disclosed, where data can be copied, backed up or restored in segment sizes larger than the data blocks which comprise a logical object.
Abstract: Method and apparatus for copying, transferring, backing up and restoring data are disclosed. The data can be copied, backed up or restored in segment sizes larger than the data blocks which comprise a logical object. In some embodiments, the segment can correspond to a track of a primary storage device and the data blocks to a fixed size block. In some instances, copying, storage and transfer of the segments which include multiple data blocks can result in transfer of a data block not in a logical object.

552 citations


Proceedings ArticleDOI
23 May 1998
TL;DR: In this paper, the authors introduce a model of symmetrically private information retrieval (SPIR), where the privacy of the data, as well as the privacy of the user, is guaranteed.
Abstract: Private information retrieval (PIR) schemes allow a user to retrieve the ith bit of an n-bit data string x, replicated in k≥2 databases (in the information-theoretic setting) or in k≥1 databases (in the computational setting), while keeping the value of i private. The main cost measure for such a scheme is its communication complexity. In this paper we introduce a model of symmetrically-private information retrieval (SPIR), where the privacy of the data, as well as the privacy of the user, is guaranteed. That is, in every invocation of a SPIR protocol, the user learns only a single physical bit of x and no other information about the data. Previously known PIR schemes severely fail to meet this goal. We show how to transform PIR schemes into SPIR schemes (with information-theoretic privacy), paying a constant factor in communication complexity. To this end, we introduce and utilize a new cryptographic primitive, called conditional disclosure of secrets, which we believe may be a useful building block for the design of other cryptographic protocols. In particular, we get a k-database SPIR scheme of complexity O(n^(1/(2k−1))) for every constant k≥2 and an O(log n)-database SPIR scheme of complexity O(log²n · log log n). All our schemes require only a single round of interaction, and are resilient to any dishonest behavior of the user. These results also yield the first implementation of a distributed version of (n choose 1)-OT (1-out-of-n oblivious transfer) with information-theoretic security and sublinear communication complexity.

485 citations


Patent
06 Oct 1998
TL;DR: In this paper, a scrambled data transmission is descrambled by communicating encrypted program information and authentication information between an external storage device and block buffers of a secure circuit, where the program information is communicated in block chains to reduce the overhead of the authentication information.
Abstract: A scrambled data transmission is descrambled by communicating encrypted program information and authentication information between an external storage device and block buffers of a secure circuit. The program information is communicated in block chains to reduce the overhead of the authentication information. The program information is communicated a block at a time, or even a chain at a time, and stored temporarily in block buffers and a cache, then provided to a CPU to be processed. The blocks may be stored in the external storage device according to a scrambled address signal, and the bytes, blocks, and chains may be further randomly re-ordered and communicated to the block buffers non-sequentially to obfuscate the processing sequence of the program information. Program information may also be communicated from the secure circuit to the external memory. The program information need not be encrypted but only authenticated for security.

302 citations
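
As a rough illustration of why chaining reduces authentication overhead, here is a minimal sketch that computes one message authentication code per chain of blocks rather than one per block. The block size, chain length, and use of HMAC-SHA256 are illustrative assumptions, not the patent's scheme.

```python
import hmac, hashlib

BLOCK_SIZE = 64          # illustrative block size in bytes (assumption)
CHAIN_LENGTH = 16        # blocks per chain (assumption)

def chain_tags(blocks, key):
    """Authenticate blocks in chains: one MAC per CHAIN_LENGTH blocks
    instead of one MAC per block, reducing authentication overhead."""
    tags = []
    for i in range(0, len(blocks), CHAIN_LENGTH):
        chain = b"".join(blocks[i:i + CHAIN_LENGTH])
        tags.append(hmac.new(key, chain, hashlib.sha256).digest())
    return tags

blocks = [bytes([i]) * BLOCK_SIZE for i in range(64)]
key = b"secret key"
print(len(chain_tags(blocks, key)), "authentication tags for", len(blocks), "blocks")
```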


Journal ArticleDOI
TL;DR: In this article, a detailed traffic analysis of optical packet switch design is performed, where special consideration is given to the complexity of the optical buffering and the overall switch block structure.
Abstract: A detailed traffic analysis of optical packet switch design is performed. Special consideration is given to the complexity of the optical buffering and the overall switch block structure is considered in general. Wavelength converters are shown to improve the traffic performance of the switch blocks for both random and bursty traffic. Furthermore, the traffic performance of switch blocks with add-drop switches has been assessed in a Shufflenetwork showing the advantage of having converters at the inlets. Finally, the aspect of synchronization is discussed through a proposal to operate the packet switch block asynchronously, i.e. without packet alignment at the input.

223 citations


Journal ArticleDOI
TL;DR: In this paper, a wavemaker curve of a characteristic water wave amplitude is constructed from the landslide length, the initial landslide submergence, the incline angle measured from the horizontal, the characteristic distance of landslide motion, and the characteristic duration of the landslide motion.
Abstract: A nondimensional wavemaker curve of a characteristic water wave amplitude is constructed from the landslide length, the initial landslide submergence, the incline angle measured from the horizontal, the characteristic distance of landslide motion, and the characteristic duration of landslide motion. This wavemaker curve applies broadly to water waves generated by unsteady motion of a submerged object, provided the motion is governed by only one characteristic distance scale and one characteristic time scale. An analytical solution of solid block motion provides the characteristic distance scale and time scale. Two-dimensional experimental results on a 45° incline confirm that a wavemaker curve for solid block landslides exists as a function of the nondimensional initial submergence and what is called the Hammack number. Water wave amplitudes generated by solid block landslides can be predicted from the wavemaker curve if the solid block motion is known. Criteria for the generation of linear water waves are given.

186 citations


Patent
31 Dec 1998
TL;DR: In this paper, a method and apparatus for generating partial backups of logical objects in a computer storage system is described, where changed data blocks are identified and stored as differential abstract block sets.
Abstract: Method and apparatus for generating partial backups of logical objects in a computer storage system are disclosed. Changed data blocks are identified and stored as differential abstract block sets. The differential abstract block set may include data blocks in any order and metadata identifying the relative position of the data block in the logical object. The invention includes methods for formatting updated backups using the differential backups.

184 citations


Journal ArticleDOI
TL;DR: A postprocessing algorithm consisting of three stages is proposed to reduce the blocking artifacts of Joint Photographic Experts Group (JPEG) decompressed images, and it reduces these blocking artifacts efficiently.
Abstract: A postprocessing algorithm is proposed to reduce the blocking artifacts of Joint Photographic Experts Group (JPEG) decompressed images. The reconstructed images from JPEG compression produce noticeable image degradation near the block boundaries, in particular for highly compressed images, because each block is transformed and quantized independently. The blocking effects are classified into three types of noises in this paper: grid noise, staircase noise, and corner outlier. The proposed postprocessing algorithm, which consists of three stages, reduces these blocking artifacts efficiently. A comparison study between the proposed algorithm and other postprocessing algorithms is made by computer simulation with several JPEG images.

180 citations
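
As a loose illustration of smoothing across block boundaries to soften grid noise (not the paper's three-stage algorithm, which also treats staircase noise and corner outliers), a minimal sketch:

```python
import numpy as np

def smooth_block_boundaries(img, block=8):
    """Average pixel pairs straddling each 8x8 block boundary to soften grid noise."""
    out = img.astype(float)
    h, w = out.shape
    for x in range(block, w, block):               # vertical boundaries
        left, right = out[:, x - 1].copy(), out[:, x].copy()
        mean = (left + right) / 2.0
        out[:, x - 1] = (left + mean) / 2.0
        out[:, x] = (right + mean) / 2.0
    for y in range(block, h, block):               # horizontal boundaries
        top, bot = out[y - 1, :].copy(), out[y, :].copy()
        mean = (top + bot) / 2.0
        out[y - 1, :] = (top + mean) / 2.0
        out[y, :] = (bot + mean) / 2.0
    return np.clip(out, 0, 255).astype(img.dtype)
```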


Patent
25 Aug 1998
TL;DR: In this paper, a computer operated apparatus for generating a visual information system is disclosed, where a virtual world associated with an application is built using building blocks such as scenes, data sources, global parameters, and resources.
Abstract: A computer operated apparatus for generating a visual information system is disclosed. A virtual world associated with an application is built using building blocks such as scenes, data sources, global parameters, and resources. A scene is a visual display of information much like a presentation slide, except that the information may be linked to data stored in a database or other data storage systems. Within a scene, values resulting from a data source are represented graphically as user-defined data elements. Data sources are built with a block diagraming tool which generates one or more database queries. The queries may be SQL queries. Scenes are created with a drawing editor which transparently binds data sources to the graphical elements of the scenes. When the virtual world is completed, an execution image of the virtual world may be represented as byte code. The byte code representing the virtual world may be executed by a runtime control to provide desired information to users.

177 citations


Patent
30 Nov 1998
TL;DR: In this article, a motion estimation process is used to improve coding efficiency by using a modified search criterion, which takes into account the error signal needed to encode a block of pixels as well as the motion data when selecting a matching block in a target frame.
Abstract: A motion estimation process improves coding efficiency by using a modified search criterion. The modified search criterion takes into account the error signal needed to encode a block of pixels as well as the motion data when selecting a matching block in a target frame. This approach reduces the combined overhead of both the motion and error signal data for the encoded block of pixels. When used in conjunction with a spiral search path in the target frame, the modified search criterion improves the speed of the search because it eliminates the need for an exhaustive search. A predicted motion vector is used to optimize the search location. Preferably, the search order is selected so that target pixels closer to the predicted point are searched before pixels farther away in the target frame.

156 citations
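
A minimal sketch of ranking candidate blocks by a combined cost of matching error plus motion-vector coding cost, visiting candidates near the predicted vector first. The SAD metric, the bit-cost proxy, and the weight lam are illustrative assumptions, not the patent's exact criterion.

```python
import numpy as np

def mv_bit_cost(dx, dy):
    """Rough proxy for the bits needed to code a motion vector (assumption)."""
    return abs(dx).bit_length() + abs(dy).bit_length() + 2

def spiral_offsets(radius):
    """Offsets ordered by distance from the center, approximating a spiral scan."""
    pts = [(dx, dy) for dx in range(-radius, radius + 1)
                    for dy in range(-radius, radius + 1)]
    return sorted(pts, key=lambda p: p[0] * p[0] + p[1] * p[1])

def search(block, target, px, py, pred=(0, 0), radius=8, lam=4.0):
    """Pick the match minimizing SAD + lam * motion bits, starting at the
    predicted vector so good candidates are visited early."""
    n = block.shape[0]
    best, best_cost = pred, float("inf")
    for dx, dy in spiral_offsets(radius):
        x, y = px + pred[0] + dx, py + pred[1] + dy
        if x < 0 or y < 0 or x + n > target.shape[1] or y + n > target.shape[0]:
            continue
        cand = target[y:y + n, x:x + n]
        cost = np.abs(block.astype(int) - cand.astype(int)).sum() \
               + lam * mv_bit_cost(pred[0] + dx, pred[1] + dy)
        if cost < best_cost:
            best_cost, best = cost, (pred[0] + dx, pred[1] + dy)
    return best, best_cost
```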


Patent
Yung-Iyul Lee1, HyunWook Park1
18 Jun 1998
TL;DR: In this article, a signal adaptive filtering method for reducing blocking effect and ringing noise is proposed, which adaptively filters the image data that has passed through inverse quantization and the inverse discrete cosine transform according to the generated blocking information and ringing information.
Abstract: A signal adaptive filtering method for reducing blocking effect and ringing noise, a signal adaptive filter, and a computer readable medium. The signal adaptive filtering method capable of reducing blocking effect and ringing noise of image data when a frame is composed of blocks of a predetermined size includes the steps of: (a) generating blocking information for reducing the blocking effect and ringing information for reducing the ringing noise, from coefficients of predetermined pixels of the upper and left boundary regions of the data block when a frame obtained by deconstructing bitstream image data for inverse quantization is an intraframe; and (b) adaptively filtering the image data passed through inverse quantization and inverse discrete cosine transform according to the generated blocking information and ringing information. Therefore, the blocking effect and ringing noise can be eliminated from the image restored from the block-based image, thereby enhancing the image restored from compression.

144 citations


Patent
Takaaki Hayashi1, Minoru Etoh1
15 Jul 1998
TL;DR: In this article, a smoothing filter whose weighted mean uses the reciprocal of the difference between a target pixel and its surrounding pixels is used to reduce block distortion; a value controlled by a parameter is added to each difference before taking its reciprocal.
Abstract: Greatly reducing the step at block boundaries, i.e., block distortion such as unnatural, abnormal or artifact brightness and color, leads to much visual improvement of picture quality. For a picture that has been encoded and decoded block by block, the smoothing filter characteristics are changed according to the quantizing parameter, the block activity of the decoded picture, the activity of each pixel of the decoded picture, and the position of a pixel with respect to a block boundary. The smoothing filter uses a weighted mean whose weights are the reciprocals of the differences between a target pixel and its surrounding pixels. A certain value, controlled by the parameter, is added to each difference before its reciprocal is taken. Similarly, the mixing ratio of the smoothed pixel value and the decoded pixel value is changed depending on the parameter. A filter having a strong edge-preserving property is used for the inside of a block, and a filter having a weak edge-preserving property is used for a block boundary. A filter having an edge-preserving property is used for the post/loop filter.
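
A minimal sketch of a weighted-mean smoothing filter in this spirit, where each neighbour's weight is the reciprocal of its difference from the target pixel after adding a constant. The 3x3 window, the constant c, and the fixed mixing ratio are illustrative assumptions; the patent varies these with the quantizing parameter, activity, and pixel position.

```python
import numpy as np

def adaptive_smooth(img, c=4.0, mix=0.5):
    """Weighted-mean smoothing sketch: each 3x3 neighbour is weighted by
    1 / (|difference from the centre pixel| + c); larger c smooths more.
    The result is mixed with the decoded pixel by the ratio `mix`."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            nb = pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            w = 1.0 / (np.abs(nb - img) + c)     # reciprocal-of-difference weight
            num += w * nb
            den += w
    smoothed = num / den
    return mix * smoothed + (1.0 - mix) * img
```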

Patent
29 Dec 1998
TL;DR: In this article, a steganographic method is disclosed to embed an invisible watermark into an image, which can be used for copyright protection, content authentication, or content annotation.
Abstract: A steganographic method is disclosed to embed an invisible watermark into an image. It can be used for copyright protection, content authentication or content annotation. The technique is mainly based on the K-L transform. First, a block-and-cluster step (106) and a cluster selection step (108) are performed to enhance the optimization of the K-L transform (110) for a given image. Then a watermark is embedded (114) into the selected eigen-clusters. ECC (Error Correction Code) can be employed to reduce the embedded code error rate. The proposed method is characterized by robustness despite degradation or modification of the watermarked content. Furthermore, the method can be extended to video, audio or other multimedia, especially for multimedia databases in which the stored multimedia are categorized by their contents or classes.

Patent
Kyle R. Johns1
17 Jul 1998
TL;DR: The virtual frame buffer controller maintains a data structure, called a pointer list, to keep track of the physical memory location and compression state of each block of pixels in the virtual buffer as discussed by the authors.
Abstract: A virtual frame buffer controller in a computer's display system manages accesses to a display image stored in discrete compressed and uncompressed blocks distributed in physical memory. The controller maps conventional linear pixel addresses of a virtual frame buffer to pixel locations within blocks stored at arbitrary places in physical memory. The virtual frame buffer controller maintains a data structure, called a pointer list, to keep track of the physical memory location and compression state of each block of pixels in the virtual frame buffer. The virtual frame buffer controller initiates a decompression process to decompress a block when a pixel request maps to a pixel in a compressed block. The block remains decompressed until physical memory needs to be reclaimed to free up memory. A software driver for the virtual frame buffer controller performs memory management functions, including adding to a free memory list when the virtual frame buffer requires more memory and reclaiming memory previously allocated to a block of pixels whose state has changed from a compressed to an uncompressed state, or from a decompressed back to a compressed state.
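
A toy sketch of a pointer-list style data structure that maps linear pixel coordinates to per-block entries recording physical location and compression state, and triggers decompression on demand. The block size, address layout, and decompress callback are assumptions, not the controller's actual format.

```python
from dataclasses import dataclass

@dataclass
class BlockEntry:
    phys_addr: int      # where the pixel block lives in physical memory
    compressed: bool    # current compression state of the block

class PointerList:
    """Toy pointer list mapping virtual pixel addresses to block entries."""
    def __init__(self, width, height, block=8):
        self.width, self.block = width, block
        self.blocks_per_row = width // block
        self.entries = {}                       # block index -> BlockEntry

    def lookup(self, x, y):
        idx = (y // self.block) * self.blocks_per_row + (x // self.block)
        return idx, self.entries.get(idx)

    def read_pixel(self, x, y, decompress):
        idx, entry = self.lookup(x, y)
        if entry is not None and entry.compressed:
            # pixel request maps into a compressed block: decompress it first
            entry.phys_addr = decompress(entry.phys_addr)
            entry.compressed = False
        return entry
```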

Journal ArticleDOI
TL;DR: Augmenting a conventional systolic-architecture-based VLSI motion estimator with the estimation technique reduces the power consumption by a factor of 2, while still preserving the optimal solution and the throughput.
Abstract: This paper presents an architectural enhancement to reduce the power consumption of full-search block-matching (FSBM) motion estimation. Our approach is based on eliminating unnecessary computation using conservative approximation. Augmenting a conventional systolic-architecture-based VLSI motion estimator with the estimation technique reduces the power consumption by a factor of 2, while still preserving the optimal solution and the throughput. A register-transfer level implementation as well as simulation results on benchmark video clips are presented.
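
A software analogue of eliminating unnecessary computation during full-search block matching: accumulate the SAD row by row and abandon a candidate once it can no longer beat the current best. The paper's scheme uses conservative approximation inside a systolic array; this sketch only conveys the general idea.

```python
import numpy as np

def sad_with_early_exit(block, cand, best_so_far):
    """Accumulate SAD row by row and stop once it cannot improve on the
    best candidate found so far; returns None if the candidate is abandoned."""
    total = 0
    for r in range(block.shape[0]):
        total += np.abs(block[r].astype(int) - cand[r].astype(int)).sum()
        if total >= best_so_far:        # cannot improve; skip remaining rows
            return None
    return total
```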

Journal ArticleDOI
TL;DR: A procedure for the recursive approximation of the feasible parameter set of a linear model with a set membership uncertainty description is provided and several approximation strategies for polytopes are presented.

Patent
Robert A. DeMoss1
31 Mar 1998
TL;DR: In this article, the authors propose a method, system, and data structure for encoding a block of data with redundancy information and for correction of erasure type errors in the block using the redundancy data.
Abstract: A method, system, and data structure for encoding a block of data with redundancy information and for correction of erasure type errors in the block using the redundancy data. In particular, the invention is applicable to disk array storage subsystems which are capable of recovering from total or partial failures of one or two disks in the disk array. Still more specifically, the invention is applicable to RAID level 6 storage devices. A given block of data is translated into a code block of n² elements including 2n XOR parity elements for redundancy. Each code block is manipulated as an n×n square matrix of n² elements with parity elements along the major diagonals of the matrix and data elements in the remainder of the matrix. Each parity element is a dependent variable whose value is the XOR sum of the (n−2) data elements in a minor diagonal which intersects it. If the elements in any one or two columns or one or two rows are erased, their values can be generated from the other elements in the matrix. The invention therefore allows for recovery from data loss resulting from complete failure of any one or two disks in the disk array. Further, since the invention recovers all erased elements in any one or two rows, it allows recovery from data loss resulting from correlated partial failure of all disks in the disk array. Still further, the invention allows recovery from many uncorrelated failure patterns in the storage domain of disk drives in a disk array.
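
The following toy sketch is not the patent's diagonal layout; it only illustrates, under simplified assumptions, how XOR parity appended to a data matrix lets an erased row be regenerated from the remaining rows.

```python
import numpy as np

def encode(data):
    """Simplified stand-in for the patent's diagonal parity: append one XOR
    parity row and one XOR parity column to a square data matrix."""
    n1 = data.shape[0]
    code = np.zeros((n1 + 1, n1 + 1), dtype=np.uint8)
    code[:n1, :n1] = data
    code[n1, :n1] = np.bitwise_xor.reduce(data, axis=0)          # column parity
    code[:, n1] = np.bitwise_xor.reduce(code[:, :n1], axis=1)    # row parity
    return code

def recover_row(code, r):
    """Rebuild an erased row r from all the other rows (single-erasure case)."""
    rows = [code[i] for i in range(code.shape[0]) if i != r]
    return np.bitwise_xor.reduce(np.array(rows), axis=0)

data = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
code = encode(data)
assert np.array_equal(recover_row(code, 2), code[2])
```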

Journal ArticleDOI
TL;DR: A column generation, branch-and-bound algorithm, in which attractive paths for each shipment are generated by solving a shortest path problem, is developed and applied to the blocking problem of a large domestic railroad in which the paths that shipments may take in the physical network are restricted.
Abstract: On major domestic railroads, a typical general merchandise shipment may pass through many classification yards on its route from origin to destination. At these yards, the incoming traffic, which may consist of a number of individual shipments, is reclassified (sorted and grouped together) to be placed on outgoing trains. Each reclassification incurs costs due to handling and delay. To prevent shipments from being reclassified at every yard they pass through, several shipments may be grouped together to form a block. A block has associated with it an origin-destination pair that may or may not be the origin or destination of any of the individual cars contained in the block. The objective of the railroad blocking problem is to choose which blocks to build at each yard and to assign sequences of blocks to deliver each shipment to minimize total mileage, handling, and delay costs. We model the railroad blocking problem as a network design problem in which yards are represented by nodes and blocks by arcs. Our model is intended as a strategic decision-making tool. We develop a column generation, branch-and-bound algorithm in which attractive paths for each shipment are generated by solving a shortest path problem. Our solution approach is unique in constraining the classification resources of each yard and simultaneously solving for different priority classes of shipments. We implement our algorithm and find near-optimal solutions in about one hour for the blocking problem of a large domestic railroad, in which the paths that shipments may take in the physical network are restricted. The resulting network design problem has 150 nodes, 1300 commodities, and 6800 possible arcs (blocks). We test the robustness of our solution on 19 test instances that are variations of the data for the real-world problems. If shipments are restricted to following one of a limited number of paths in the rail network, then, in four hours or less, our algorithm finds solutions within 0.4% of optimal for all test cases. Furthermore, the solutions obtained are no more than 3.9% from optimal even if all possible paths are allowed.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a matching algorithm based on a kernel estimate of the conditional lag one distribution or on a fitted autoregression of small order to align with higher likelihood those blocks which match at their ends.
Abstract: The block bootstrap for time series consists in randomly resampling blocks of consecutive values of the given data and aligning these blocks into a bootstrap sample. Here we suggest improving the performance of this method by aligning with higher likelihood those blocks which match at their ends. This is achieved by resampling the blocks according to a Markov chain whose transitions depend on the data. The matching algorithms that we propose take some of the dependence structure of the data into account. They are based on a kernel estimate of the conditional lag one distribution or on a fitted autoregression of small order. Numerical and theoretical analysis in the case of estimating the variance of the sample mean show that matching reduces bias and, perhaps unexpectedly, has relatively little effect on variance. Our theory extends to the case of smooth functions of a vector mean.
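
A minimal sketch of a matched-block bootstrap in this spirit: blocks are resampled via a Markov chain whose transition weights come from a Gaussian kernel comparing the current block's end value with each candidate block's preceding value, so blocks that match at their ends are aligned with higher likelihood. The kernel form, bandwidth, and block length are illustrative assumptions.

```python
import numpy as np

def matched_block_bootstrap(x, block_len=10, h=0.5, rng=None):
    """Resample blocks, weighting each candidate block by how well the value
    just before it matches the end of the block placed so far."""
    rng = rng or np.random.default_rng()
    n = len(x)
    starts = np.arange(1, n - block_len + 1)            # candidate block starts
    sample = list(x[rng.integers(0, n - block_len):][:block_len])
    while len(sample) < n:
        last = sample[-1]
        # kernel weight based on the value preceding each candidate block
        w = np.exp(-0.5 * ((x[starts - 1] - last) / h) ** 2)
        p = w / w.sum()
        s = rng.choice(starts, p=p)
        sample.extend(x[s:s + block_len])
    return np.array(sample[:n])

x = np.cumsum(np.random.default_rng(0).normal(size=200)) * 0.1
print(matched_block_bootstrap(x, rng=np.random.default_rng(1))[:5])
```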

Patent
29 Jul 1998
TL;DR: In this paper, row syndromes are generated on-the-fly as the data is written from the DVD disk to the memory buffer, and a multi-byte fetch then supplies a multi-column syndrome generator with bytes for two or more columns at a time.
Abstract: A digital-versatile disk (DVD) playback-controller integrated circuit (IC) writes data to a block in an embedded memory buffer. The block has rows and columns. Row syndromes are generated on-the-fly as the data is written from the DVD disk to the memory buffer. Row syndrome generation thus requires no memory access cycles. Once errors in the rows identified by the row syndromes are corrected, column syndromes are generated. A multi-byte fetch supplies a multi-column syndrome generator with bytes in the row for two or more columns. The fetched bytes for the two or more columns are accumulated into intermediate syndromes. Fetched bytes are accumulated for other rows until all of the column's bytes in all rows have been fetched and accumulated. The final accumulated syndromes are output to an error corrector that detects, locates, and corrects any errors in the columns. The same error corrector can be used for row and column syndromes, even though a three-block-deep pipeline is used. Only one memory access cycle is required during column-syndrome generation for each row, even though two or more column syndromes are simultaneously generated. Pipelined registers for the intermediate syndrome bytes in the column-syndrome generator allow syndrome-calculation circuits to be shared for all column syndromes.

Patent
29 Jul 1998
TL;DR: In this article, an embedded DRAM is incorporated inside a digital-versatile-disk (DVD) playback-controller integrated circuit, and data from the DVD optical disk is written to a data block in the embedded DRAM.
Abstract: An embedded DRAM is incorporated inside a digital-versatile-disk (DVD) playback-controller integrated circuit. Data from the DVD optical disk is written to a data block in the embedded DRAM. Error correction is performed by reading the data block to generate syndromes and over-writing errors in the data block with corrections. Once the data block is corrected, it is copied or moved to a different area of the embedded memory, a host-buffer area. As the data block is moved, de-scrambling is performed to decrypt the data. The re-ordered data is stripped of overhead such as ECC bytes and written to the host-buffer area of the embedded DRAM. A checksum is generated as the data is moved, and the checksum is compared to a stored checksum to ensure that all errors were corrected. The data block in the host-buffer area is then transferred to a host. The embedded DRAM has a very wide data-access width of 16 bytes. The full width is used for writing data from the optical disk to the ECC data block buffer, and for reading data from the host-buffer area to the host. Narrower access widths are used by the error correction and de-scrambler blocks.

Patent
30 Dec 1998
TL;DR: In this paper, the authors present a method and apparatus for encoding and decoding a turbo code, where an interleaver interleaves and delays a block of input bits to generate interleaved input bits and delayed input bits.
Abstract: The present invention is a method and apparatus for encoding and decoding a turbo code. In the encoder, an interleaver interleaves and delays a block of input bits to generate interleaved input bits and delayed input bits. A first encoder generates first, second, and third encoded bits. A second encoder generates a fourth encoded bit. A symbol generator generates a plurality of symbols which correspond to the input bits. In a decoder, a sync search engine detects a synchronizing pattern and extracts symbols from the encoded bits. An input buffer is coupled to the sync search engine to store the extracted symbols. A first soft-in-soft-out decoder (SISO1) is coupled to the input buffer to generate a first soft decision set based on the extracted symbols. An interleaver is coupled to SISO1 to interleave the first soft decision set. A second soft-in-soft-out decoder (SISO2) is coupled to the input buffer and the interleaver to generate a second soft decision set. A de-interleaver is coupled to SISO2 to de-interleave the second soft decision set. An adder is coupled to SISO1 and the de-interleaver to generate a hard decision set.

Journal ArticleDOI
TL;DR: Based on the simulation results obtained in this study, the proposed detection and concealment approach to transmission errors in H.261 images can indeed recover high-quality H.261 images from their corresponding corrupted H.261 images, without increasing the transmission bit rate.
Abstract: The detection and concealment approach to transmission errors in H.261 images is proposed. For entropy-coded H.261 images, a transmission error in a codeword will not only affect the underlying codeword, but also may affect subsequent codewords, resulting in a great degradation of the received images. Here a transmission error may be a single-bit error or a burst error containing N successive error bits. The objective of the proposed approach is to recover high-quality H.261 images from the corresponding corrupted H.261 images, without increasing the transmission bit rate. In the proposed approach, using the constraints imposed on compressed image data, all the groups of blocks (GOBs) within an H.261 picture can be correctly located. After a GOB is located, transmission errors within the GOB are detected by two successive procedures: (1) whether the GOB is corrupted or not is determined by checking a set of error-checking conditions under decoding and (2) the precise location (block-based) of the first transmission error (i.e., the first corrupted block) within the GOB is located by a block-based backtracking procedure. For a corrupted block, a set of concealed block candidates, SC, is generated, and a proposed fitness function for error concealment is used to select the "best" concealed block candidate among SC as the concealed block of the corrupted block. Based on the simulation results obtained in this study, the proposed approach can indeed recover high-quality H.261 images from their corresponding corrupted H.261 images.

Patent
Erik Hagersten1
18 Dec 1998
TL;DR: In this article, a computer system optimized for block copy operations is provided, in which a processor within a local node of the computer system performs a specially coded write operation, which is indicated using certain most significant bits of the address of the write operation.
Abstract: A computer system optimized for block copy operations is provided. In order to perform a block copy from a remote source block to a local destination block, a processor within a local node of the computer system performs a specially coded write operation. The local node, upon detection of the specially coded write operation, performs a read operation to the source block in the remote node. Concurrently, the write operation is allowed to complete in the local node such that the processor may proceed with subsequent computing tasks while the local node completes the copy operation. The read from the remote node and subsequent storage of the data in the local node is completed by the local node, not by the processor. In one specific embodiment, the specially coded write operation is indicated using certain most significant bits of the address of the write operation. The address identifies the destination coherency unit within the local node, and a translation of the address to a global address identifies the source coherency unit. Subsequent to completion of the copy operation, the destination coherency unit may be accessed in the local node.

Patent
Haruo Tomita1
06 Aug 1998
TL;DR: In this article, a disk storage system with a RAID architecture includes a write buffer having a storage capacity corresponding to one stripe, a buffer management table, and a controller that stores, in the write buffer, logical blocks whose data lengths are changed, as needed, and delays updating of the logical blocks stored in the write buffer in data update processing until the number of stored logical blocks reaches N*K−1.
Abstract: A disk storage system with a RAID architecture includes a write buffer having a storage capacity corresponding to one stripe, a buffer management table, and a controller. The controller stores, in the write buffer, logical blocks whose data lengths are changed, as needed, and delays updating of the logical blocks stored in the write buffer in data update processing until the number of stored logical blocks reaches N*K−1. The controller then performs a continuous write operation to sequentially write N*K logical blocks, obtained by adding a logical address tag block to the N*K−1 logical blocks, in contiguous areas of empty areas different from the areas in which the old data are stored.
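
A minimal sketch of the delayed, stripe-at-a-time write idea: updates accumulate in a write buffer until N*K−1 logical blocks are pending, a logical-address tag block is appended, and the full stripe is written sequentially. The disk interface and the tag-block format are assumptions.

```python
class StripeWriteBuffer:
    """Sketch of delayed stripe writes: buffer N*K-1 logical blocks, append a
    logical-address tag block, then write the full stripe to a fresh area."""
    def __init__(self, disk, n_disks=4, blocks_per_disk=8):
        self.disk = disk                       # assumed to expose write_stripe(tags, blocks)
        self.capacity = n_disks * blocks_per_disk - 1
        self.pending = []                      # list of (logical_addr, data)

    def update(self, logical_addr, data):
        self.pending.append((logical_addr, data))
        if len(self.pending) == self.capacity:
            self.flush()

    def flush(self):
        tag_block = [addr for addr, _ in self.pending]   # logical address tag block
        blocks = [data for _, data in self.pending]
        self.disk.write_stripe(tag_block, blocks)        # one sequential write to empty areas
        self.pending.clear()
```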

Patent
30 Jun 1998
TL;DR: In this article, a World Wide Web browser software is implemented in a processing system housed in a set-top box connected to a television and communicating over a wide-area network with one or more servers.
Abstract: World Wide Web browser software is implemented in a processing system housed in a set-top box connected to a television and communicating over a wide-area network with one or more servers. The browser software allows a user to navigate using a remote control through World-Wide Web pages in which a number of hypertext anchors are displayed on the television. User inputs are entered from a remote input device using an infrared (IR) link. The processing system includes a mask read-only memory (ROM) and a flash memory. The mask ROM and the flash memory are assigned adjacent memory spaces in the memory map of the processing system. Browser software and configuration data are stored in the flash memory. Other software and configuration data are stored in the mask ROM. The browser is upgraded or reconfigured by downloading to the box replacement software or data transmitted from a server over the network and then writing the replacement software or data into the flash memory. A mechanism is provided to temporarily maintain power to the processing system in the event power to the box is lost during downloading. The mechanism allows the writing of a current block to be completed. An indication of the current block is maintained while power is absent so that downloading can be resumed, once power is restored, from the last block that was written.
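
A minimal sketch of resumable block writing: the index of the block being written is persisted so that, if power is lost and later restored, downloading resumes from the last block that was completed. The state file and the in-memory flash stand-in are illustrative assumptions, not the set-top box mechanism.

```python
import json, os

STATE_FILE = "download_state.json"   # illustrative persistent record of progress

def write_block(flash, index, data):
    flash[index] = data               # stand-in for a flash block write

def download(blocks, flash):
    """Resume from the block recorded as next after an interruption; otherwise
    start at block 0, recording progress after each completed block write."""
    start = 0
    if os.path.exists(STATE_FILE):
        start = json.load(open(STATE_FILE))["next_block"]
    for i in range(start, len(blocks)):
        write_block(flash, i, blocks[i])
        json.dump({"next_block": i + 1}, open(STATE_FILE, "w"))
```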

Patent
09 Dec 1998
TL;DR: In this article, a scalable FIR filter architecture that requires fewer computations, less storage registers, and is capable of parallel processing, is presented, which reduces the number of computations (e.g., multiplication) by utilizing the inherent symmetry.
Abstract: A scalable FIR filter architecture that requires fewer computations, less storage registers, and is capable of parallel processing, is presented. The scalable filter architecture reduces the number of computations (e.g., multiplication) by utilizing the inherent symmetry and reduces the number of storage elements required by utilizing what is known as the transpose-form (as compared to direct-form) filter architecture. The filter architecture is scalable to accommodate different complexity levels. In accordance to the present invention, a filter can be scaled up/down by adding/subtracting a processing block to/from the existing structure. Because these processing blocks can process signals independently and simultaneously, the filter architecture in accordance to the present invention allows for parallel and distributive processing thereby meeting the required performance requirements.
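
A minimal sketch of the coefficient-symmetry part of the idea: for a linear-phase FIR filter with h[k] == h[N-1-k], symmetric input samples are added before multiplying, roughly halving the multiplications. The transpose-form register saving and the hardware scaling by processing blocks are not shown; this is a plain software analogue.

```python
import numpy as np

def symmetric_fir(x, h):
    """Linear-phase FIR: add each symmetric sample pair before multiplying,
    so only about N/2 multiplications are needed per output sample."""
    N = len(h)
    assert np.allclose(h, h[::-1]), "requires symmetric coefficients"
    xp = np.concatenate([np.zeros(N - 1), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for n in range(len(x)):
        win = xp[n:n + N][::-1]                 # x[n], x[n-1], ..., x[n-N+1]
        head = win[:N // 2]
        tail = win[(N + 1) // 2:][::-1]         # symmetric partners of head
        acc = np.dot(head + tail, h[:N // 2])
        if N % 2:
            acc += h[N // 2] * win[N // 2]      # unpaired middle tap
        y[n] = acc
    return y

x = np.random.default_rng(0).normal(size=32)
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 9.0
assert np.allclose(symmetric_fir(x, h), np.convolve(x, h)[:len(x)])
```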

Patent
09 Nov 1998
TL;DR: A programmable image transform processor has programmable addressing and arithmetic blocks as mentioned in this paper, where an input address generator has an input addressing microsequencer and an input addressing memory that stores the input addressing procedure.
Abstract: A programmable image transform processor has programmable addressing and arithmetic blocks. In the programmable addressing block, an input address generator has an input addressing microsequencer and an input addressing memory that stores an input addressing procedure. The microsequencer executes the input addressing procedure to generate addresses from which to request image data. In the programmable arithmetic block, an arithmetic block memory stores an image processing procedure and a microsequencer executes the image processing procedure using the image data to generate transformed image data.

Patent
28 May 1998
TL;DR: In this article, a block distortion reduction device is proposed that includes a degree-of-difficulty-of-encoding detection means, a parameter calculation means, a block distortion decision means, a correction value calculation means, and a correction means; the block distortion decision is based on the detected degree of difficulty of encoding and on the result of the parameter calculation.
Abstract: A block distortion reduction device including: a degree-of-difficulty-of-encoding detection means (3) to detect from input image data a parameter representing the degree of difficulty of encoding; a parameter calculation means (4) to calculate from input image data a parameter necessary to make a block distortion decision; a block distortion decision means (6) to make a decision on the block distortion based on the result of detection of the parameter representing the degree of difficulty of encoding and on the result of parameter calculation; a correction value calculation means (7) to calculate a correction value for reducing the block distortion; and a means to perform correction on the input image data by using the correction value according to the block distortion decision result and then to output the corrected image data.

Patent
15 Jun 1998
TL;DR: In this article, the file system employs a three-part block state indicator (V, A, U), where V is a volume indication, A is an allocation sequence indication, and U is an update sequence indication.
Abstract: A shared persistent memory (e.g., disk) file system provides persistent memory block allocation with multiple redo logging of memory blocks. The file system employs a three-part block state indicator (V, A, U), where V is a volume indication, A is an allocation sequence indication, and U is an update sequence number indication. The file system (a) generates the indication of the allocation sequence in the allocation map in a manner free of initially reading the block from storage memory, (b) records the indication of volume, allocation sequence and update sequence in an entry of the transaction log of the requesting computer node, and (c) sets indications of volume, allocation sequence and update sequence on the subject block in storage memory. Subsequent transactions on the subject block by the requesting node are recorded in respective entries in the transaction log. Each respective entry reflects the state of the subject block by indicating in the block state indicator the volume, allocation sequence and order of update sequence. The file system includes redo recovery means for updating blocks in the storage memory upon a failure in the computer system. For each block being updated, the recovery means utilizes one transaction log and the block state indicators recorded therein corresponding to indications of volume and update sequence in the block in storage memory.
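
A minimal sketch of a three-part block state indicator and a redo decision based on it; the comparison rules (same volume, same allocation generation, newer update sequence) are plausible assumptions for illustration, not necessarily the patent's exact logic.

```python
from dataclasses import dataclass

@dataclass
class BlockState:
    volume: int        # V: volume indication
    alloc_seq: int     # A: allocation sequence indication
    update_seq: int    # U: update sequence indication

def needs_redo(on_disk: BlockState, log_entry: BlockState) -> bool:
    """Re-apply a logged update only if it targets the same volume and
    allocation generation and is newer than what the block already carries."""
    return (on_disk.volume == log_entry.volume
            and on_disk.alloc_seq == log_entry.alloc_seq
            and log_entry.update_seq > on_disk.update_seq)

print(needs_redo(BlockState(1, 7, 3), BlockState(1, 7, 5)))   # True: redo the update
print(needs_redo(BlockState(1, 8, 3), BlockState(1, 7, 5)))   # False: block was reallocated
```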

Journal ArticleDOI
TL;DR: This paper describes a data-interlacing architecture with two-dimensional (2-D) data-reuse for the full-search block-matching algorithm that achieves 100% hardware utilization and a high throughput rate.
Abstract: This paper describes a data-interlacing architecture with two-dimensional (2-D) data-reuse for the full-search block-matching algorithm. Based on a one-dimensional processing element (PE) array and two data-interlacing shift-register arrays, the proposed architecture can efficiently reuse data to decrease external memory accesses and save the pin counts. It also achieves 100% hardware utilization and a high throughput rate. In addition, the same chips can be cascaded for different block sizes, search ranges, and pixel rates.