
Showing papers on "Cache published in 1981"


Proceedings ArticleDOI
12 May 1981
TL;DR: A cache organization is presented that essentially eliminates the penalty imposed on subsequent cache references following a cache miss; the organization has been incorporated in a cache/memory interface subsystem design, which has been implemented and prototyped.
Abstract: In the past decade, there has been much literature describing various cache organizations that exploit general programming idiosyncrasies to obtain maximum hit rate (the probability that a requested datum is now resident in the cache). Little, if any, has been presented to exploit: (1) the inherent dual input nature of the cache and (2) the many-datum reference type central processor instructions. No matter how high the cache hit rate is, a cache miss may impose a penalty on subsequent cache references. This penalty is the necessity of waiting until the missed requested datum is received from central memory and, possibly, for cache update. For the two cases above, the cache references following a miss do not require the information of the datum not resident in the cache, and are therefore penalized in this fashion. In this paper, a cache organization is presented that essentially eliminates this penalty. This cache organizational feature has been incorporated in a cache/memory interface subsystem design, and the design has been implemented and prototyped. An existing simple instruction set machine has verified the advantage of this feature; future, more extensive and sophisticated instruction set machines may obviously take more advantage. Prior to prototyping, simulations verified the advantage.
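
The organization described above is an early form of what is now called hit-under-miss servicing: references that arrive after a miss, and that do not need the missing datum, are satisfied while the miss is still outstanding. The sketch below is only an illustration of that idea, not the paper's design; the class, its names, and the single-outstanding-miss limit are assumptions made for the example.

```python
# Hypothetical sketch (not the paper's design): a cache that keeps servicing
# hits while one miss is outstanding, instead of stalling every reference
# until the missed datum returns from central memory.

from collections import OrderedDict

class HitUnderMissCache:
    def __init__(self, capacity, miss_latency):
        self.lines = OrderedDict()          # address -> datum, LRU order
        self.capacity = capacity
        self.miss_latency = miss_latency    # cycles to fetch from central memory
        self.outstanding = None             # (address, ready_cycle) of pending miss

    def reference(self, address, now):
        """Return (datum_or_None, stalled) for a reference issued at cycle `now`."""
        # Complete a pending miss whose data has arrived.
        if self.outstanding is not None and now >= self.outstanding[1]:
            missed_addr = self.outstanding[0]
            self._fill(missed_addr, f"mem[{missed_addr}]")
            self.outstanding = None
        if address in self.lines:
            self.lines.move_to_end(address)         # hit: no penalty, even under a miss
            return self.lines[address], False
        if self.outstanding is None:                # miss: record it and keep going
            self.outstanding = (address, now + self.miss_latency)
            return None, False
        return None, True                           # second outstanding miss: must stall

    def _fill(self, address, datum):
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)          # evict the LRU line
        self.lines[address] = datum
```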

504 citations


Patent
27 Nov 1981
TL;DR: In this article, a buffered cache memory subsystem is described which features a solid-state cache memory connected to a storage director which interfaces a host channel with a control module controlling operation of a long-term data storage device such as a disk drive.
Abstract: A buffered cache memory subsystem is disclosed which features a solid-state cache memory connected to a storage director which interfaces a host channel with a control module controlling operation of a long-term data storage device such as a disk drive. The solid-state cache memory is connected to plural directors which in turn may be connected to differing types of control modules, whereby the cache is usable with more than one type of long-term data storage means within a given system. The cache memory may be field-installed in a preexisting disk drive storage system and is software transparent to the host computer, while providing improvements in overall operating efficiency. In a preferred embodiment, data is only cached when it is expected to be the subject of a future host request.

128 citations


Patent
Robert Percy Fletcher1
31 Mar 1981
TL;DR: In this article, the authors propose a control system for interlocking processors in a multiprocessing organization where each processor has its own high speed store in buffer (SIB) cache and each processor shares a common cache with the other processors.
Abstract: A control system for interlocking processors in a multiprocessing organization. Each processor has its own high speed store-in-buffer (SIB) cache and each processor shares a common cache with the other processors. The control system ensures that all processors access the most up-to-date copy of memory information with a minimal performance impact. The design allows read-only copies of the same shared memory block (line) to exist simultaneously in all private caches. Lines that are both shared and changed are stored in the common shared cache, which each processor can directly fetch from and store into. The shared cache system dynamically detects and moves lines which are both shared and changed to the common shared cache, and moves lines from the shared cache once sharing has ceased.

124 citations


Patent
03 Aug 1981
TL;DR: In this article, a plurality of addressable data storage devices are selectively directly accessed or accessed via a cache memory, and accesses to the devices are queued on a device basis.
Abstract: A plurality of addressable data storage devices are selectively directly accessed or accessed via a cache memory. Access via the cache memory uses one of a plurality of logical addresses; each of the data storage devices is represented by a plurality of the logical addresses. Each of the data storage devices can be reserved for direct access; such reservation does not apply to device accesses via the cache. Accesses to the devices are queued on a device basis.

105 citations


Patent
23 Mar 1981
TL;DR: In this paper, a lock array is provided with bit positions corresponding to each line entry in an associated cache directory, and a replacement selection circuit is used to eliminate each locked line from being a replacement candidate in its congruence class in a set-associative store-in-cache in a multiprocessor (MP).
Abstract: A lock array is provided with bit positions corresponding to each line entry in an associated cache directory. When a lock bit is on, it inhibits the castout, replacement, or invalidation of the associated cache line; these operations are allowed when the lock bit is off. The lock bit may be in an off state while an associated valid bit is set on, but once the lock bit is set on the valid bit cannot be set off until the lock bit is first set off. Lock array controls operate with a replacement selection circuit (which may be conventional) to eliminate each locked line from being a replacement candidate in its congruence class in a set-associative store-in-cache in a multiprocessor (MP). The lock array enables simultaneous reset of all lock bits at each checkpoint without disturbing the status of the associated cache directory. A special type of IE operand request, called a store-interrogate (SI) request, is used to lock the accessed line, whether the SI request hits or misses in the cache. Any locked line can continue to receive any fetch, SI, or store cache request from its own IE. Any line remains unlocked as long as it is not accessed by an SI request; that is, a line remains unlocked as long as it only receives fetch requests, and fetch requests are generally much more numerous than SI requests. Line locking enables the castout or invalidation of unlocked cache lines during a checkpoint interval.
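
A rough sketch of the replacement-filtering idea in the abstract above: a lock bit per directory entry removes that line from the replacement candidates of its congruence class, and a checkpoint clears every lock bit at once. This is an invented illustration, not the patent's circuit; the class and method names are assumptions.

```python
# One congruence class of a set-associative directory with per-way lock bits.

class CongruenceClass:
    def __init__(self, ways):
        self.valid = [False] * ways
        self.locked = [False] * ways
        self.lru_order = list(range(ways))   # front = least recently used

    def touch(self, way, lock=False):
        """Record a reference to `way`; an SI-style access also sets its lock bit."""
        self.valid[way] = True
        if lock:
            self.locked[way] = True
        self.lru_order.remove(way)
        self.lru_order.append(way)           # most recently used goes to the back

    def checkpoint(self):
        """Simultaneous reset of all lock bits; directory status is untouched."""
        self.locked = [False] * len(self.locked)

    def choose_victim(self):
        """LRU replacement restricted to unlocked ways."""
        for way in self.lru_order:
            if not self.locked[way]:
                return way
        return None                          # every way locked: no candidate yet
```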

94 citations


Patent
05 Jun 1981
TL;DR: In this paper, a cache memory is provided for storing blocks of data which are most likely to be needed by the host processor in the near future, and a directory table is maintained wherein all data in cache is listed at a "home" position.
Abstract: In a data processing system of the type wherein a host processor transfers data to or from a plurality of attachment devices, a cache memory is provided for storing blocks of data which are most likely to be needed by the host processor in the near future. The host processor can then merely retrieve the necessary information from the cache memory without the necessity of accessing the attachment devices. When transferring data to cache from an attachment disk, additional unrequested information can be transferred at the same time if it is likely that this additional data will soon be requested. Further, a directory table is maintained wherein all data in cache is listed at a "home" position and, if more than one block of data in cache has the same home position, a conflict chain is set up so that checking the contents of the cache can be done simply and quickly.
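
The "home position" and "conflict chain" amount to a hashed directory with chained collision resolution. Below is a minimal sketch under that reading; the hash function, entry fields, and names are assumptions for illustration only.

```python
# Hashed cache directory: each block hashes to a "home" slot, and blocks that
# collide at the same home are linked on a conflict chain.

class DirectoryEntry:
    def __init__(self, block_id, cache_slot):
        self.block_id = block_id
        self.cache_slot = cache_slot
        self.next = None                     # next entry on the conflict chain

class CacheDirectory:
    def __init__(self, size):
        self.table = [None] * size           # index = home position

    def home(self, block_id):
        return hash(block_id) % len(self.table)

    def insert(self, block_id, cache_slot):
        entry = DirectoryEntry(block_id, cache_slot)
        slot = self.home(block_id)
        entry.next = self.table[slot]        # chain onto entries already at home
        self.table[slot] = entry

    def lookup(self, block_id):
        entry = self.table[self.home(block_id)]
        while entry is not None:             # walk the conflict chain
            if entry.block_id == block_id:
                return entry.cache_slot
            entry = entry.next
        return None                          # block is not in the cache
```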

80 citations


Patent
Robert Percy Fletcher1
06 Jul 1981
TL;DR: In this paper, the replacement selection of entries in a second-level (L2) cache directory of a storage hierarchy is controlled using the replaced and hit addresses of a dynamic look-aside translation buffer (DLAT) at the first level (L1) in the hierarchy, which receives CPU storage requests along with the CPU cache and its directory.
Abstract: The disclosure controls the replacement selection of entries in a second level (L2) cache directory of a storage hierarchy using replaced and hit addresses of a dynamic look-aside translation buffer (DLAT) at the first level (L1) in the hierarchy which receives CPU storage requests along with the CPU cache and its directory. The DLAT entries address page size blocks in main storage (MS). The disclosure provides a replacement (R) flag for each entry in the L2 directory, which represents a page size block in the L2 cache. An R bit is selected and turned on by the address of a DLAT replaced page which is caused by a DLAT miss to indicate its associated page is a candidate for replacement in the L2 cache. However, the page may continue to be accessed in the L2 cache until it is actually replaced. An R bit is selected and turned off by a CPU request address causing a DLAT hit and an L1 cache miss to indicate its associated L2 page is not a candidate for replacement.

78 citations


Patent
05 Jun 1981
TL;DR: In this article, the average quantity of data transferred to the cache memory in each operation can be automatically and continually varied in order to maximize the performance advantage provided by the cache memory, with the average number of blocks transferred in any one operation being varied by adjusting threshold position values at which second or third data blocks are transferred.
Abstract: When transferring data to a cache memory from an attachment data storage device, additional unrequested information can be transferred at the same time if it is likely that this additional data will soon be requested. The average quantity of data transferred to the cache memory in each operation can be automatically and continually varied in order to maximize the performance advantage provided by the cache memory. When a record of data is requested by the host processor, data is transferred to the cache memory from an attachment data storage device in increments of fixed-length data blocks each containing a sequence of data records, with the number of transferred blocks being determined by the position of a requested data record in its respective data block, and the average number of blocks transferred in any one operation being varied by adjusting threshold position values at which second or third data blocks are transferred.
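
One way to picture the threshold mechanism: the further into its block the requested record falls, the more follow-on blocks are staged, and shifting the thresholds up or down changes the average number of blocks moved per operation. The function below is a hypothetical illustration of that rule, not the patent's algorithm; the parameter names and the two/three-block limits are assumptions.

```python
# Stage more look-ahead blocks when the requested record sits late in its block,
# since fewer records then remain in the block already being transferred.

def blocks_to_stage(record_pos, two_block_threshold, three_block_threshold):
    """record_pos: 0-based position of the requested record within its block."""
    if record_pos >= three_block_threshold:
        return 3
    if record_pos >= two_block_threshold:
        return 2
    return 1

# Example: with thresholds of 6 and 12 records, a request for record 13 of a
# block stages that block plus the next two; raising the thresholds lowers the
# average number of blocks transferred per operation.
print(blocks_to_stage(13, two_block_threshold=6, three_block_threshold=12))  # -> 3
```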

78 citations


Patent
24 Mar 1981
TL;DR: In this paper, the integrity of data in each cache with respect to the shared memory modules is maintained by providing each shared memory with a cache monitoring and control capability which monitors processor reading and writing requests and, in response to this monitoring, maintains an accurate, updatable record of the data addresses in cache while also providing for invalidating data in a cache when it is no longer valid.
Abstract: A data processing system having a plurality of processors and a plurality of dedicated and shared memory modules. Each processor includes a cache for speeding up data transfers between the processor and its dedicated memory and also between the processor and one or more shared memories. The integrity of the data in each cache with respect to the shared memory modules is maintained by providing each shared memory with a cache monitoring and control capability which monitors processor reading and writing requests and, in response to this monitoring, maintains an accurate, updatable record of the data addresses in each cache while also providing for invalidating data in a cache when it is no longer valid.

72 citations


Patent
22 Jan 1981
TL;DR: In this article, a data processing system includes at least two processors, each having a cache memory containing an index section and a memory section, each of which can respond to an external request derived from the other processor which is simultaneously processing a task.
Abstract: A data processing system includes at least two processors, each having a cache memory containing an index section and a memory section. A first processor performs a task by deriving internal requests for its cache memory which also may respond to an external request derived from the other processor which is simultaneously processing a task. To avoid a conflict between the simultaneous processing of an internal request and of an external request by the same cache memory, one request may act on the other by delaying its enabling or by suspending its processing from the instant at which these requests are required to operate simultaneously on the index section or the memory section of the cache memory of the processor affected by these requests. Thereby, the tasks are performed by the system at an increased speed.

71 citations


Patent
27 Nov 1981
TL;DR: In this paper, a method for detection of a sequential data stream which can be performed without host computer intervention is disclosed featuring examination of a data record and channel program during read operations for signals indicative that the data is not part of sequential data streams, for example, embedded seek instructions.
Abstract: A method for detection of a sequential data stream which can be performed without host computer intervention is disclosed featuring examination of a data record and channel program during read operations for signals indicative that the data is not part of a sequential data stream, for example, embedded seek instructions. If a particular sought for record does not contain such indications, the successive record or records may then be staged to a faster access memory device such as a solid-state cache. The invention is described in a plug-compatible, software-transparent configuration.

Patent
17 Aug 1981
TL;DR: In this paper, a cache is accessed based upon addresses to a backing store having a larger address space than the cache, and the cache accessing is based upon a hashing method and system derived from the arrangement of the backing store.
Abstract: A cache is accessed based upon addresses to a backing store having a larger address space than the cache. The backing store consists of a plurality of devices exhibiting delay access boundaries. The cache accessing is based upon a hashing method and system derived from the arrangement of the backing store, ordered to accommodate the delay access boundaries and to enable rapid adjustment of the hash parameters in accordance with changes in backing store capability and other hardware changes.

Journal ArticleDOI
TL;DR: It is suggested that urine-marking may enhance foraging efficiency in wolves by signalling that a site contains no more edible food despite the presence of lingering food odors.
Abstract: The relationship between urine-marking and caching was studied in two captive groups of wolves (Canis lupus). It was found that urine-marking never occurred when a cache was stocked, rarely occurred during later investigations if some food was still present, but usually occurred soon after the cache was emptied. The animal marking an empty cache was often not the one which had exploited it. Once an empty cache was marked it received little further attention, as opposed to caches that were empty but not urine-marked. These results suggest that urine-marking may enhance foraging efficiency in wolves by signalling that a site contains no more edible food despite the presence of lingering food odors.

Patent
22 May 1981
TL;DR: A fast cache flush mechanism includes, associated with the cache, an auxiliary portion (termed a flush count memory) that references a flush counter during the addressing of the cache as mentioned in this paper.
Abstract: A fast cache flush mechanism includes, associated with the cache, an auxiliary portion (termed a flush count memory) that references a flush counter during the addressing of the cache. This flush counter preferably has a count capacity of the same size as the number of memory locations in the cache. Whenever the cache is updated, the current value of the flush counter is written into the location in the flush count memory associated with the memory location in the cache pointed to by an accessing address, and the valid bit is set. Whenever it is desired to flush the cache, the contents of the flush counter are changed (e.g. incremented) to a new value which is then written as the new cache index into the location of the flush count memory associated with that flush count, and the associated valid bit is cleared or reset. Any access to the cache by the address requires that the cache index in the associated flush count memory location match the current contents of the flush counter and that the valid bit be set. When the cache is flushed by the above procedure, these conditions cannot be fulfilled, since the current contents of the flush counter do not match any cache index or the valid bit has been reset. As a result, for each addressed memory location that has not been accessed since the last cache flush command (corresponding to the latest incrementing of the flush counter), the total contents of that memory location (i.e. data, cache index and validity bit) are updated in the manner described above. Through this procedure, once the contents of the flush counter have recycled back to a previous value, it is guaranteed that each memory location in the cache will have been flushed and, in many instances, updated with valid data.
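
The core of the mechanism can be shown compactly: each cache slot is stamped with the flush counter's value when written, and a flush is just one increment of the counter, which makes every previously written stamp mismatch on later reads. The sketch below is a simplified, direct-mapped illustration with invented names; it ignores the counter wrap-around case the abstract addresses.

```python
# Fast-flush idea: compare a per-slot stored count against a global flush counter
# on every read, so a single counter increment invalidates the whole cache.

class FastFlushCache:
    def __init__(self, size):
        self.size = size
        self.flush_counter = 0
        self.data = [None] * size
        self.flush_count_memory = [0] * size      # per-slot copy of the counter
        self.valid = [False] * size

    def write(self, address, datum):
        slot = address % self.size
        self.data[slot] = datum
        self.flush_count_memory[slot] = self.flush_counter   # stamp with current count
        self.valid[slot] = True

    def read(self, address):
        slot = address % self.size
        # A slot hits only if it is valid AND was written since the last flush.
        if self.valid[slot] and self.flush_count_memory[slot] == self.flush_counter:
            return self.data[slot]
        return None                                           # treated as a miss

    def flush(self):
        self.flush_counter += 1       # one increment makes every stale slot miss
```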

Patent
03 Aug 1981
TL;DR: In this paper, data is promoted from a backing store (disk storage apparatus termed DASD) to a random access cache in a peripheral data storage system by sending a sequential access bit to the storage system.
Abstract: Data is promoted from a backing store (disk storage apparatus termed DASD) to a random access cache in a peripheral data storage system. When a sequential access bit is sent to the storage system, all data specified in a read command is fetched to the cache from DASD. If such prefetched data is replaced from cache and the sequential bit is on, a subsequent host access request for such data causes all related data, up to a predetermined maximum, not yet read to be promoted to cache.

Patent
15 Oct 1981
TL;DR: In this article, a peripheral data storage system is described, where data promotion occurs after completion of a series of storage access requests, based on the status of the cache and activity of a last storage reference.
Abstract: In a storage hierarchy, promotion of data from a backing store to a caching buffer store is restricted based upon status of the cache and activity of a last storage reference. Observed writing activity selectively inhibits data promotion. Data promotion occurs after completion of a series of storage access requests. A peripheral data storage system is described.

Patent
03 Aug 1981
TL;DR: In this article, the cache directory is used to determine if an invalid signal group is stored in the associated cache storage unit, and when an invalid group is found in the cache storage, this group is rendered unavailable to the data processing unit during the present cache memory cycle without interrupting the normal cache memory operation during succeeding cache memory cycles.
Abstract: In a cache memory unit including a cache directory identifying signal groups stored in an associated cache storage unit, apparatus and method are disclosed for searching the cache directory during a second portion of the cache memory cycle, when the cache directory is not needed for normal operation, to determine if an invalid signal group is stored in the associated cache storage. When an invalid signal group is found in the cache storage, this signal group is rendered unavailable to the data processing unit during the present cache memory cycle without interrupting the normal cache memory operation during succeeding cache memory cycles.

Patent
Robert Percy Fletcher1
30 Dec 1981
TL;DR: The hybrid cache control as mentioned in this paper provides a sharing (SH) flag with each line representation in each private CP cache directory in a multiprocessor (MP) to uniquely indicate for each line in the associated cache whether it is to be handled as a store-in-cache (SIC) line when its SH flag is in non-sharing state, and as a ST line in sharing state.
Abstract: The hybrid cache control provides a sharing (SH) flag with each line representation in each private CP cache directory in a multiprocessor (MP) to uniquely indicate for each line in the associated cache whether it is to be handled as a store-in-cache (SIC) line when its SH flag is in non-sharing state, and as a store-through (ST) cache line when its SH flag is in sharing state. At any time the hybrid cache can have some lines operating as ST lines, and other lines as SIC lines. A newly fetched line (resulting from a cache miss) has its SH flag set to non-sharing (SIC) state in a location determined by cache replacement selection circuits, unless a cross-interrogation (XI) hit in another cache is found by the XI controls, in which case the SH flag for the requested line is dynamically set to sharing (ST) state. The XI controls cross-interrogate all other cache directories in the MP for every store or fetch cache miss and for every store cache hit to a ST line (having SH = 1). An XI hit signals that a conflicting copy of the line has been found in another cache. If the conflicting cache line is changed from its corresponding MS line, the cache line is cast out to MS. The sharing (SH) flag for the conflicting line is set to sharing state for a fetch miss, but the conflicting line is invalidated for a store miss.

Patent
15 Oct 1981
TL;DR: In this paper, a storage hierarchy has a caching buffer and a backing store, the backing store preferably having disk-type data-storage apparatus, and a directory indicates data stored in the caching buffer.
Abstract: A storage hierarchy has a caching buffer and a backing store, the backing store preferably having disk-type data-storage apparatus. A directory indicates data stored in the caching buffer. Upon a data-storage access, read or write, within a series of such accesses, resulting in a cache miss, all subsequent data storage accesses in the series are made to the backing store to the exclusion of the caching buffer, even though the caching buffer has storage space allocated for such a data transfer. Selected limits are placed on the series to the backing store, such as receiving an end of series (end of command chain) indication from a using unit, crossing DASD cylinder boundaries, receiving an out of bounds address, or receiving certain device oriented commands.

Patent
19 Aug 1981
TL;DR: In this paper, first and second coincidence circuits are coupled to the cache buffer circuit to compare all of the memorized store address data with the accompanying readout address data and to make the first and second cache control circuits preferentially process the buffer store request prior to each of the readout requests.
Abstract: In a cache memory arrangement used between a central processor (21) and a main memory (22) and comprising operand and instruction cache memories (31, 32), a cache buffer circuit (40) is responsive to storage requests from the central processor to individually memorize the accompanying storage data and store address data and to produce the memorized storage data and store address data as buffer output data and buffer output address data together with a buffer store request. Responsive to the buffer store request, first and second cache control circuits (36, 37) transfer the accompanying buffer output data and buffer output address data to the operand and the instruction cache memories, if each of the operand and the instruction cache memories is not supplied with any readout request. Preferably, first and second coincidence circuits (51, 52) are coupled to the cache buffer circuit and responsive to the readout requests to compare all of the memorized store address data with the accompanying readout address data and to make the first and the second cache control circuits preferentially process the buffer store request prior to each of the readout requests. The buffer circuit may comprise two pairs of buffers (41, 42; 63, 64), each pair being for memorizing each of the store address data and the storage data. An address converter (70) may be attached to the arrangement to convert a logical address represented by each address data into a physical address.

Patent
28 Sep 1981
TL;DR: In this article, direct access storage devices (DASD) are connected to a host via a cache; each device can be independently addressed by any one of a plurality of addresses, also termed logical devices and exposures. Since operations between DASD and cache are combined for all of the independent logical devices, resetting operations related to one independent logical device can inadvertently interfere with operations of another independent logical device.
Abstract: Direct access storage devices (DASD) are connected to a host via a cache. Each device can be independently addressed by any one of a plurality of addresses, also termed logical devices and exposures. Since operations between DASD and cache are combined for all of the independent logical devices, resetting operations related to one independent logical device can inadvertently interfere with operations of another independent logical device. To maintain data integrity, a programmed control accommodates logical device independence by using queues and control blocks relating to the DASD and logical devices, respectively.

Patent
Wing N. Toy1
02 Feb 1981
TL;DR: In this paper, the cache memory comprises a cache address unit which stores the subset of the real address bits from the address translation buffer (ATB) in order to increase cache memory performance.
Abstract: In a computer system having a cache memory and using virtual addressing, effectiveness of the cache is improved by storing a subset of the least significant real address bits obtained by translation of a previous virtual address and by using this subset in subsequent cache addressing operations. The system functions in the following manner. In order to access a memory location in either the main memory or cache memory, a processor generates and transmits virtual address bits to the memories. The virtual address bits comprise segment, page and word address bits. The word address bits do not have to be translated, but an address translation buffer (ATB) translates the segment and page address into real address bits. A subset of the least significant bits of the latter, together with the word address bits, represents the address needed for accessing the cache. In order to increase cache memory performance, the cache memory comprises a cache address unit which stores the subset of the real address bits from the ATB. These stored address bits are used in subsequent operations along with the word address bits for accessing the cache memory until the stored address bits no longer equal the current subset of least significant real address bits transmitted from the ATB. When the stored address bits no longer equal the current subset, the cache address unit then stores the current subset; and the cache memory is reaccessed utilizing the word address bits and current subset.
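
In effect the cache address unit latches the few low-order translated bits and keeps reusing them, re-accessing the cache only when a fresh translation shows the latch has gone stale. The sketch below is a hypothetical rendering of that check, not the patent's hardware; all names are invented.

```python
# Latch a subset of low-order real address bits and reuse it for cache indexing;
# re-access only when the ATB's current translation no longer matches the latch.

class CacheAddressUnit:
    def __init__(self):
        self.stored_real_bits = None        # latched low-order real address bits

    def form_index(self, word_bits, translated_real_bits):
        """Return ((real_bits, word_bits), reaccess_needed).

        The cache is first addressed with the latched bits; once the ATB
        delivers `translated_real_bits`, a mismatch forces a re-access."""
        if self.stored_real_bits is None or translated_real_bits != self.stored_real_bits:
            self.stored_real_bits = translated_real_bits
            # First use, or the latched subset went stale: re-access with new bits.
            return (self.stored_real_bits, word_bits), True
        return (self.stored_real_bits, word_bits), False
```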

Patent
15 Oct 1981
TL;DR: A storage hierarchy has a backing store and a caching buffer store as discussed by the authors, and during a series of accesses to the hierarchy by a user, data being selectively removed from the buffer store increases the probability of writing data to the backing store.
Abstract: A storage hierarchy has a backing store and a caching buffer store. During a series of accesses to the hierarchy by a user, writing data to the hierarchy results in data being selectively removed from the buffer store. Space in said buffer store not being allocated to data being written results in such data being written to the backing store to the exclusion of the buffer store. Removal of data increases the probability of writing data to the backing store. In a preferred implementation, the backing store is one or more disk type data storage apparatus and the buffer store is an electronic random access memory.

Journal ArticleDOI
TL;DR: The memory system of the Dorado, a compact high-performance personal computer, has very high I/O bandwidth, a large paged virtual memory, a cache, and heavily pipelined control; this paper discusses all of these in detail.
Abstract: The memory system of the Dorado, a compact high-performance personal computer, has very high I/O bandwidth, a large paged virtual memory, a cache, and heavily pipelined control; this paper discusses all of these in detail. Relatively low-speed I/O devices transfer single words to or from the cache; fast devices, such as a color video display, transfer directly to or from main storage while the processor uses the cache. Virtual addresses are used in the cache and for all I/O transfers. The memory is controlled by a seven-stage pipeline, which can deliver a peak main-storage bandwidth of 533 million bits/s to service fast I/O devices and cache misses. Interesting problems of synchronization and scheduling in this pipeline are discussed. The paper concludes with some performance measurements that show, among other things, that the cache hit rate is over 99 percent.

Journal ArticleDOI
TL;DR: This paper summarizes research by the author on two topics: cache disks and file migration, by which files are migrated between disk and mass storage as needed in order to effectively maintain on-line a much larger amount of information than the disks can hold.

Patent
08 Dec 1981
TL;DR: A buffer memory system comprises a buffer memory and a fetch directory, both of which are accessible by a concatenation of at least the least significant bit of the logical or physical page field of a logical or a physical address signal and a selected number of bits lower than that least significant bit, as mentioned in this paper.
Abstract: A buffer memory system comprises a buffer memory and a fetch directory. Both are accessible by a concatenation of at least the least significant bit of the logical or physical page field of a logical or a physical address signal and a selected number of bits lower than that least significant bit. Physical page fields stored in the fetch directory are used to control an access to a data block stored in the buffer memory even at a plurality of addresses accessible by logical and physical address signals for one and the same instruction for accessing the memory. The system may or may not comprise an inverse translation table for translating a physical address signal for accessing a main memory into the concatenation to be used in accessing the buffer memory and the control table.
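
The indexing rule described above concatenates one or more low-order page-field bits with the bits immediately below them. A small sketch of that bit selection follows, with invented parameter names and sizes chosen purely for illustration.

```python
# Form a buffer-memory / fetch-directory index from the low bit(s) of the page
# field concatenated with the top bits of the in-page offset.

def buffer_index(address, page_offset_bits, offset_bits_used, page_bits_used=1):
    """Concatenate `page_bits_used` low bits of the page field with the
    `offset_bits_used` bits immediately below them."""
    shift = page_offset_bits - offset_bits_used      # discard unused low offset bits
    width = offset_bits_used + page_bits_used        # total index width
    return (address >> shift) & ((1 << width) - 1)

# Example: 4 KiB pages (12 offset bits), one page-field bit plus 6 offset bits
# gives a 7-bit index. Because a page-field bit is included, the logical and
# physical forms of an address can yield different indexes, which is why the
# abstract notes that a block may be accessible at a plurality of addresses.
print(buffer_index(0x12345678, page_offset_bits=12, offset_bits_used=6))
```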

Journal ArticleDOI
J. Voldman1, Lee Windsor Hoevel1
TL;DR: This paper describes an adaptation of standard Fourier analysis techniques to the study of software-cache interactions, in which the cache is viewed as a "black box" boolean signal generator, where "ones" correspond to cache misses and "zeros" correspond to cache hits.
Abstract: This paper describes an adaptation of standard Fourier analysis techniques to the study of software-cache interactions. The cache is viewed as a "black box" boolean signal generator, where "ones" correspond to cache misses and "zeros" correspond to cache hits. The spectrum of this time sequence is used to study the dynamic characteristics of complex systems and workloads with minimal a priori knowledge of their internal organization. Line spectra identify tight loops accessing regular data structures, while the overall spectral density reveals the general structure of instruction localities.
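
The analysis is straightforward to reproduce on any miss/hit trace: form a 0/1 sequence, subtract the mean, and look at the power spectrum; a sharp line indicates a loop that misses at a regular period. The snippet below runs the idea on a synthetic trace (the trace and every parameter are invented for illustration).

```python
# Spectral analysis of a boolean miss signal: 1 = cache miss, 0 = cache hit.

import numpy as np

# Synthetic miss signal: a miss every 16 references plus a little random noise.
rng = np.random.default_rng(0)
n = 4096
signal = np.zeros(n)
signal[::16] = 1.0
signal[rng.integers(0, n, 40)] = 1.0

# Power spectrum of the mean-removed signal.
spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)           # cycles per reference

peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant frequency ~ {peak:.4f} cycles/reference (period ~ {1/peak:.1f})")
```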

Patent
Dana R Spencer1
15 Jun 1981
TL;DR: In this article, the cache allocates a line for LS use by sending a special signal with an address for a line in a special area in main storage which is non-program addressable (i.e. not addressable by any of the architected instructions of the processor).
Abstract: The disclosure pertains to a relatively small local storage (LS) in a processor's IE which can be effectively expanded by utilizing a portion of a processor's store-in-cache. The cache allocates a line (i.e. block) for LS use by the instruction unit sending a special signal with an address for a line in a special area in main storage which is non-program addressable (i.e. not addressable by any of the architected instructions of the processor). The special signal suppresses the normal line fetch operation of the cache from main storage caused when the cache does not have a requested line. After the initial allocation of the line space in the cache to LS use, the normal cache operation is again enabled, and the LS line can be cast out to the special area in main storage and be retrieved therefrom to the cache for LS use.

Proceedings ArticleDOI
18 Oct 1981
TL;DR: To remedy the lack of an ability to store and retrieve a computed value under programmer control, “caching functionals” are proposed which allow the programmer to selectively avoid recomputation without overt use of assignment.
Abstract: The “referential transparency” of applicative language expressions demands that all occurrences of an expression in a given context yield the same value. In principle, that value therefore needs to be computed only once. However, in recursive programming, a context usually unfolds dynamically, precluding textual recognition of multiple occurrences, so that such occurrences are recomputed. To remedy the lack, in applicative languages, of an ability to store and retrieve a computed value under programmer control, “caching functionals” are proposed which allow the programmer to selectively avoid recomputation without overt use of assignment. The uses and implementation of such mechanisms are discussed, including reasons and techniques for purging the underlying cache. Our approach is an extension of the early notion of “memo function”, enabling improved space utilization and a “building-block” approach.
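
The Python decorator below is only a familiar analogue of the proposal, not the applicative-language mechanism itself: repeated applications of a wrapped function are looked up instead of recomputed, and an explicit purge gives the programmer the kind of control over the underlying cache that the paper discusses. The names are invented.

```python
# A memo-function style "caching functional": wrap a function so equal argument
# tuples are computed once, with a programmer-controlled purge of the cache.

def cached(f):
    table = {}
    def wrapper(*args):
        if args not in table:            # recompute only on first occurrence
            table[args] = f(*args)
        return table[args]
    wrapper.purge = table.clear          # explicit purging of the underlying cache
    return wrapper

@cached
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))     # each distinct argument is computed exactly once
fib.purge()        # release the cached values when space matters
```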

Patent
16 Sep 1981
TL;DR: In this article, a data processing system includes a plurality of central processing units (CPO to CP3) each including an instruction execution unit (IE) and a buffer control unit (BCE).
Abstract: A data processing system includes a plurality of central processing units (CP0 to CP3) each including an instruction execution unit (IE) and a buffer control unit (BCE). Each buffer control unit includes a store-in cache, a cache directory and cache controls. The processing units are coupled to a shared main storage system (MS) through system control means (SC0, SC1). The system control means includes a plurality of copy directories, each corresponding to an associated one of the cache directories. When a processing unit issues a clear storage command, the address thereof is directed to all of the copy directories. For each one found to refer to the address, an invalidate signal is sent to the associated cache directory to invalidate its corresponding entry, if permitted by its processing unit, and a resend signal is sent to the processor which issued the clear storage command. The entries in the copy directories corresponding to invalidated entries in the cache directories are then also invalidated. The procedure continues until all corresponding entries in the directories are invalidated, at which time the directory search results in an accept signal which enables the main storage area defined by the command to be cleared.