Author

Thomas A. Dye

Bio: Thomas A. Dye is an academic researcher affiliated with Cirrus Logic. He has contributed to research on memory controllers and display lists, has an h-index of 35, and has co-authored 58 publications receiving 4,058 citations. His previous affiliations include Motorola and Wilmington University.


Papers
Patent
14 Apr 2000
TL;DR: The Compression Enhanced Dual In-line Memory Module of the present invention uses parallel lossless compression and decompression engines embedded into the ASIC device for improved system memory page density and I/O subsystem data bandwidth.
Abstract: An ASIC device is embedded into the memory subsystem of a computing device to accelerate the transfer of active memory pages for usage by the system CPU, from either a compressed memory cache buffer or an added compressed disk subsystem, for improved system cost and performance. The Compression Enhanced Dual In-line Memory Module of the present invention uses parallel lossless compression and decompression engines embedded into the ASIC device for improved system memory page density and I/O subsystem data bandwidth. In addition, the operating system software optimizes page transfers between compressed disk partitions, compressed cache memory and inactive/active page memory within the computer system. The disclosure also indicates preferred methods for initialization, recognition and operation of the ASIC device transparently within industry standard memory interfaces and subsystems. The system can interface to present operating system software and applications, enabling optimal usage of the compressed paging system memory environment. The integrated parallel data compression and decompression capabilities of the compactor ASIC mounted on industry standard memory modules, along with the software drivers and filters of the present invention, keep recently used pages compressed in the system memory. Additional performance is gained by the transfer of compressed pages between the system memory and the disk and network subsystems. In addition, the present invention may reduce the amount of data transferred between distributed computers across the LAN or WAN by the transmission of compressed page data between remote systems or distributed databases.
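The compressed-cache behavior described above, keeping recently used pages compressed in system memory and decompressing them on access, can be sketched as a toy Python model. The class and method names are hypothetical, and zlib stands in for the patent's parallel hardware compression engines.

```python
import zlib

PAGE_SIZE = 4096  # typical 4 KB page

class CompressedPageCache:
    """Toy model of the compressed-cache idea: inactive pages are kept
    compressed in memory; touching a page decompresses it on demand."""

    def __init__(self):
        self.active = {}      # page number -> raw bytes
        self.compressed = {}  # page number -> compressed bytes

    def evict(self, page_no):
        # Rather than paging out to disk, compress the page in place.
        raw = self.active.pop(page_no)
        self.compressed[page_no] = zlib.compress(raw)

    def touch(self, page_no):
        # Page-fault path: decompress if the page sits in the compressed cache.
        if page_no in self.compressed:
            self.active[page_no] = zlib.decompress(self.compressed.pop(page_no))
        return self.active[page_no]

cache = CompressedPageCache()
cache.active[0] = b"A" * PAGE_SIZE           # a highly compressible page
cache.evict(0)
assert len(cache.compressed[0]) < PAGE_SIZE  # density gain while cached
assert cache.touch(0) == b"A" * PAGE_SIZE    # lossless round trip
```

The density gain is exactly what lets more "inactive" pages stay resident instead of being paged to disk.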

301 citations

Patent
26 Apr 1999
TL;DR: The Compression Enhanced Flash Memory Controller (CEFMC) as discussed by the authors uses parallel lossless compression and decompression engines embedded into the flash memory controller unit for improved memory density and data bandwidth.
Abstract: A flash memory controller and/or embedded memory controller including MemoryF/X Technology that uses data compression and decompression for improved system cost and performance. The Compression Enhanced Flash Memory Controller (CEFMC) of the present invention preferably uses parallel lossless compression and decompression engines embedded into the flash memory controller unit for improved memory density and data bandwidth. In addition, the invention includes a Compression Enhanced Memory Controller (CEMC) where the parallel compression and decompression engines are introduced into the memory controller of the microprocessor unit. The Compression Enhanced Memory Controller (CEMC) invention improves system-wide memory density and data bandwidth. The disclosure also indicates preferred methods for specific applications such as usage of the invention for solid-state disks, embedded memory and Systems on Chip (SOC) environments. The disclosure also indicates a novel memory control method for the execute in place (XIP) architectural model. The integrated parallel data compression and decompression capabilities of the CEFMC and CEMC inventions remove system bottlenecks and increase performance by matching the data access speed of the memory subsystem to that of the microprocessor. Thus, the invention allows lower-cost systems due to smaller data storage, reduced bandwidth requirements, and reduced power and noise.

225 citations

Patent
08 Jun 1994
TL;DR: In this paper, a nonlinear feedback method for data synchronization was proposed, which periodically queries each driver for the current audio and video position (or frame number) and calculates the synchronization error, which is used to determine a tempo value adjustment to one of the data streams designed to place the video and audio back in sync.
Abstract: A method and apparatus for synchronizing audio and video data streams in a computer system during a multimedia presentation to produce a correctly synchronized presentation. The preferred embodiment of the invention utilizes a nonlinear feedback method for data synchronization. The method of the present invention periodically queries each driver for the current audio and video position (or frame number) and calculates the synchronization error. The synchronization error is used to determine a tempo value adjustment to one of the data streams designed to place the video and audio back in sync. The method then adjusts the audio or video tempo to maintain the audio and video data streams in synchrony. In the preferred embodiment of the invention, the video tempo is changed nonlinearly over time to achieve a match between the video position and the equivalent audio position. The method applies a smoothing function to the determined tempo value to prevent overcompensation. The method of the present invention can operate in any hardware system and in any software environment and can be adapted to existing systems with only minor modifications.
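The feedback loop described above can be sketched in a few lines of Python. All function names, gains, and constants here are illustrative assumptions, not taken from the patent: the sketch only shows the shape of the technique (measure error, apply a clamped correction, smooth it).

```python
def sync_step(video_pos, audio_pos, tempo, gain=0.05, alpha=0.3, max_adj=0.2):
    """One iteration of a feedback loop in the spirit of the abstract:
    measure the sync error, derive a clamped (hence nonlinear) tempo
    correction, and smooth it to prevent overcompensation."""
    error = audio_pos - video_pos                             # positive: video lags audio
    target = 1.0 + max(-max_adj, min(max_adj, gain * error))  # clamped correction
    return (1 - alpha) * tempo + alpha * target               # smoothing function

# Simulate a video stream that starts 2 time units behind the audio.
video, audio, tempo = 0.0, 2.0, 1.0
for _ in range(200):
    tempo = sync_step(video, audio, tempo)
    video += tempo  # video advances at the adjusted tempo
    audio += 1.0    # audio advances at its natural rate
assert abs(audio - video) < 0.5  # the streams converge back into sync
```

The clamp keeps a large error from producing a jarring tempo jump, and the exponential smoothing damps oscillation around the sync point.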

205 citations

Patent
20 Oct 1999
TL;DR: In this article, a parallel compression engine is proposed that processes stream data more than a single byte or symbol (character) at a time, using a history table comprising entries, each entry comprising at least one symbol.
Abstract: A system and method for performing parallel data compression which processes stream data more than a single byte or symbol (character) at a time. The parallel compression engine modifies a single-stream dictionary-based (or history-table-based) data compression method, such as that described by Lempel and Ziv, to provide scalable, high-bandwidth compression. The parallel compression method examines a plurality of symbols in parallel, thus providing greatly increased compression performance. The method first involves receiving uncompressed data, wherein the uncompressed data comprises a plurality of symbols. The method maintains a history table comprising entries, wherein each entry comprises at least one symbol. The method operates to compare a plurality of symbols with entries in the history table in a parallel fashion, wherein this comparison produces compare results. The method then determines match information for each of the plurality of symbols based on the compare results. The step of determining match information involves determining zero or more matches of the plurality of symbols with each entry in the history table. The method then outputs compressed data in response to the match information.
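The history-table comparison at the heart of the method can be sketched as follows. This is a minimal sequential simulation of the parallel compare step, not the patented engine itself; the function name and return shape are assumptions for illustration.

```python
def parallel_match(history, window):
    """Toy model of the comparison step: every history position is
    compared against the incoming window of symbols. In a hardware
    engine these compares happen concurrently; here they are simulated
    with a loop. Returns (back_offset, length) of the longest match,
    or None if no symbol matches."""
    best = None
    for pos in range(len(history)):  # each compare is independent, hence parallelizable
        length = 0
        while (length < len(window)
               and pos + length < len(history)
               and history[pos + length] == window[length]):
            length += 1
        if length and (best is None or length > best[1]):
            best = (len(history) - pos, length)
    return best

assert parallel_match(bytearray(b"abracad"), b"abra") == (7, 4)  # "abra" found 7 symbols back
assert parallel_match(bytearray(b"xyz"), b"abc") is None
```

Because each history-position compare is independent of the others, the whole window of incoming symbols can be matched in one hardware cycle rather than one symbol per cycle, which is where the bandwidth scaling comes from.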

200 citations

Patent
14 Dec 1999
TL;DR: An integrated memory controller (IMC) which includes data compression and decompression engines for improved performance, as discussed by the authors, allows lower-cost systems due to smaller data storage and reduced bandwidth requirements, and is a significant advance over current memory controllers.
Abstract: An integrated memory controller (IMC) which includes data compression and decompression engines for improved performance. The memory controller (IMC) of the present invention preferably sits on the main CPU bus or a high-speed system peripheral bus such as the PCI bus and couples to system memory. The IMC preferably uses a lossless data compression and decompression scheme. Data transfers to and from the integrated memory controller of the present invention can thus be in either of two formats, these being compressed or normal (non-compressed). The IMC also preferably includes microcode for specific decompression of particular data formats such as digital video and digital audio. Compressed data from system I/O peripherals such as the hard drive, floppy drive, or local area network (LAN) are decompressed in the IMC and stored into system memory or saved in the system memory in compressed format. Thus, data can be saved in either normal or compressed format, retrieved from the system memory for CPU usage in normal or compressed format, or transmitted and stored on a medium in normal or compressed format. Internal memory mapping allows for format definition spaces which define the format of the data and the data type to be read or written. Software overrides may be placed in applications software in systems that desire to control data decompression at the software application level. The integrated data compression and decompression capabilities of the IMC remove system bottlenecks and increase performance. This allows lower cost systems due to smaller data storage requirements and reduced bandwidth requirements. This also increases system bandwidth and hence increases system performance. Thus the IMC of the present invention is a significant advance over the operation of current memory controllers.

190 citations


Cited by
Patent
21 Dec 1998
TL;DR: In this article, the data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored.
Abstract: Multiple applications request data from multiple storage units over a computer network. The data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored. At least one additional copy of each segment also is distributed randomly over the storage units, such that each segment is stored on at least two storage units. This random distribution of multiple copies of segments of data improves both scalability and reliability. When an application requests a selected segment of data, the request is processed by the storage unit with the shortest queue of requests. Random fluctuations in the load applied by multiple applications on multiple storage units are balanced nearly equally over all of the storage units. This combination of techniques results in a system which can transfer multiple, independent high-bandwidth streams of data in a scalable manner in both directions between multiple applications and multiple storage units.
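The two techniques combined above, random replica placement and shortest-queue selection, can be sketched in Python. All class and method names are illustrative assumptions; the point is the structure of the scheme, not a faithful implementation.

```python
import random

random.seed(0)  # deterministic placement for this sketch

class RandomSegmentStore:
    """Toy model of the scheme above: each segment and one replica land on
    randomly chosen storage units; a read goes to whichever replica holder
    has the shortest request queue."""

    def __init__(self, n_units):
        self.queues = [0] * n_units  # pending requests per storage unit
        self.placement = {}          # segment id -> [unit, unit]

    def store(self, segment_id):
        # Two distinct units chosen at random, independently of other segments.
        self.placement[segment_id] = random.sample(range(len(self.queues)), 2)

    def read(self, segment_id):
        # Shortest-queue selection among the units holding a copy.
        unit = min(self.placement[segment_id], key=lambda u: self.queues[u])
        self.queues[unit] += 1
        return unit

store = RandomSegmentStore(n_units=4)
for seg in range(100):
    store.store(seg)
for seg in range(100):
    store.read(seg)
assert max(store.queues) - min(store.queues) <= 10  # load stays nearly balanced
```

Placing every segment on two independently random units means any single unit failure loses no data, and letting reads chase the shortest queue is what evens out the random load fluctuations the abstract describes.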

1,427 citations

Patent
14 May 2002
TL;DR: In this article, the data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored.
Abstract: Multiple applications request data from multiple storage units over a computer network. The data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored. Redundancy information corresponding to each segment also is distributed randomly over the storage units. The redundancy information for a segment may be a copy of the segment, such that each segment is stored on at least two storage units. The redundancy information also may be based on two or more segments. This random distribution of segments of data and corresponding redundancy information improves both scalability and reliability. When a storage unit fails, its load is distributed evenly over the remaining storage units and its lost data may be recovered because of the redundancy information. When an application requests a selected segment of data, the request may be processed by the storage unit with the shortest queue of requests. Random fluctuations in the load applied by multiple applications on multiple storage units are balanced nearly equally over all of the storage units. Small data files also may be stored on storage units that combine small files into larger segments of data using a log structured file system. This combination of techniques results in a system which can transfer both multiple, independent high-bandwidth streams of data and small data files in a scalable manner in both directions between multiple applications and multiple storage units.

1,195 citations

Patent
16 Dec 2005
TL;DR: In this article, the authors present methods, systems and apparatuses for use in managing content on at least a local network, where the change is additional content on a first client device.
Abstract: The present embodiments provide methods, systems and apparatuses for use in managing content on at least a local network. Some embodiments provide a method for use in managing content that detects that there is a change to content on a local network, determines whether the change is additional content on a first client device, determines whether the additional content can be identified, determines whether there is a predictive distribution scheme when the additional content is identified, distributes the additional content over the local network according to the predictive distribution scheme when a predictive distribution scheme applies to the additional content, determines whether a new predictive distribution scheme can be defined when a predictive distribution scheme does not apply to the additional content, and saves the new predictive distribution scheme when a new predictive scheme can be defined.

1,152 citations

Patent
01 Dec 2014
TL;DR: In this article, a system and method for data storage by shredding and deshredding of the data allows for various combinations of processing of data to provide various resultant storage of data.
Abstract: A system and method for data storage by shredding and deshredding of the data allows various combinations of processing of the data to provide various resultant forms of storage. Data storage and retrieval functions include various combinations of data redundancy generation, data compression and decompression, data encryption and decryption, and data integrity by signature generation and verification. Data shredding is performed by shredders and data deshredding by deshredders; some implementations allocate processing internally in the shredder and deshredder either in parallel across multiple processors or sequentially on a single processor, while other implementations achieve multiple stages of processing through multi-level shredders and deshredders. Redundancy generation includes implementations using non-systematic encoding, systematic encoding, or a hybrid combination. Shredder-based tag generators and deshredder-based tag readers are used in some implementations to allow the deshredders to adapt to various versions of the shredders.
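A minimal sketch of the shred/deshred round trip follows. It uses simple byte interleaving with a single XOR parity stripe as the redundancy generation step, which is far simpler than the patent's systematic and non-systematic encoders; the function names and stripe layout are assumptions for illustration only.

```python
def shred(data, n=3):
    """Toy shredder: interleave the data into n stripes plus one XOR
    parity stripe, so any single lost stripe can be rebuilt."""
    stripes = [bytearray() for _ in range(n)]
    for i, byte in enumerate(data):
        stripes[i % n].append(byte)
    parity = bytearray(max(len(s) for s in stripes))
    for s in stripes:
        for i, b in enumerate(s):
            parity[i] ^= b
    return [bytes(s) for s in stripes], bytes(parity)

def deshred(stripes, parity, total_len):
    """Toy deshredder: rebuild the single stripe marked None from the
    XOR parity, then re-interleave the stripes into the original bytes."""
    n = len(stripes)
    lost = stripes.index(None)
    rebuilt = bytearray(parity)
    for s in stripes:
        if s is None:
            continue
        for i, b in enumerate(s):
            rebuilt[i] ^= b  # parity XOR surviving stripes = lost stripe
    stripes = list(stripes)
    stripes[lost] = bytes(rebuilt[:len(range(lost, total_len, n))])
    return bytes(stripes[i % n][i // n] for i in range(total_len))

data = b"shred and deshred example payload"
stripes, parity = shred(data)
stripes[1] = None  # simulate losing one stripe
assert deshred(stripes, parity, len(data)) == data  # data recovered intact
```

In the patent's terms, the interleaving plays the role of allocating processing across stripes and the parity stripe plays the role of redundancy generation; compression, encryption, and signature steps would slot in as additional stages of the same pipeline.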

901 citations

Patent
15 Nov 2012
TL;DR: In this paper, the authors propose a method comprising providing a plurality of links to end-user devices communicatively coupled to a network system, a particular link of the plurality supporting control-plane communications between the network system and a particular end-user device over one or more wireless access networks; a message received from a server carries a payload for delivery to that device and an identifier identifying a particular device agent on the particular end-user device.
Abstract: A method comprising providing a plurality of links to a plurality of end-user devices communicatively coupled to a network system, a particular link of the plurality of links supporting control-plane communications between the network system and a particular end-user device of the plurality of end-user devices over one or more wireless access networks; receiving a message from a server communicatively coupled to the network system, the message comprising payload for delivery to the particular end-user device; generating an encrypted message comprising the payload and an identifier identifying a particular device agent of a plurality of device agents on the particular end-user device, the identifier configured to assist in delivering at least a portion of the payload to the particular device agent on the particular end-user device; and sending the encrypted message to the particular end-user device over the particular link.

483 citations