
Showing papers on "Heap (data structure) published in 2002"


Proceedings ArticleDOI
22 Jul 2002
TL;DR: An extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure is developed, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures.
Abstract: In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a "separating conjunction" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related "separating implication". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions.
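As a brief illustration of the assertion language described above (standard separation-logic notation; the example is ours, not drawn verbatim from the paper):

```latex
% x \mapsto a, y  asserts that x points to a two-field record holding (a, y);
% the separating conjunction \ast requires its conjuncts to hold on
% disjoint parts of the heap, so this describes a two-element list:
(x \mapsto a, y) \ast (y \mapsto b, \mathrm{nil})
% Predicates on abstract data structures are defined inductively, e.g.
\mathrm{list}\,\epsilon\,i \iff i = \mathrm{nil}
\qquad
\mathrm{list}\,(a \cdot \alpha)\,i \iff \exists j.\; (i \mapsto a, j) \ast \mathrm{list}\,\alpha\,j
```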

2,348 citations


Proceedings ArticleDOI
20 Jun 2002
TL;DR: The Event Heap is proposed, a coordination model most similar to tuplespaces, as being appropriate for interactive workspace environments, which will contain a heterogeneous collection of both new and legacy applications and devices.
Abstract: Coordinating the interactions of applications running on the diversity of both mobile and embedded devices that will be common in ubiquitous computing environments is still a difficult and not completely solved problem. We look at one such environment, an interactive workspace, where groups come together to collaborate on solving problems. Such a space will contain a heterogeneous collection of both new and legacy applications and devices. We propose the Event Heap, a coordination model most similar to tuplespaces, as being appropriate for such environments. We also present a prototype implementation of the Event Heap, and show that the system has performed well in actual use over the last two years in our prototype interactive workspace, the iRoom.
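The coordination style described above can be sketched as a toy tuplespace variant with expiring events; the class and method names are illustrative and are not the iRoom/Event Heap API:

```python
import time

class EventHeap:
    """Minimal tuplespace-style coordination sketch: events are dicts,
    consumers retrieve them by matching a template of field/value pairs."""
    def __init__(self):
        self._events = []

    def put(self, event, ttl=60.0):
        # Events carry an expiration time, one way an Event Heap departs
        # from classic tuplespaces (so stale events do not accumulate).
        self._events.append((time.time() + ttl, dict(event)))

    def get(self, template):
        # Return and remove the first unexpired event whose fields are a
        # superset of the template's field/value pairs, or None.
        now = time.time()
        self._events = [(exp, e) for exp, e in self._events if exp > now]
        for i, (_, e) in enumerate(self._events):
            if all(e.get(k) == v for k, v in template.items()):
                return self._events.pop(i)[1]
        return None
```

A legacy application could, for instance, post `{"type": "ButtonPress", "id": 3}` while another device waits on the template `{"type": "ButtonPress"}` without either knowing about the other.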

210 citations


Proceedings ArticleDOI
01 Jan 2002
TL;DR: This work proposes a parameterizable framework for data-layout optimization of general-purpose applications that can synthesize layouts that outperform existing non-iterative heuristics, tune application-specific memory allocators, as well as compose multiple data- layout optimizations.
Abstract: Data-layout optimizations rearrange fields within objects, objects within objects, and objects within the heap, with the goal of increasing spatial locality. While the importance of data-layout optimizations has been growing, their deployment has been limited, partly because they lack a unifying framework. We propose a parameterizable framework for data-layout optimization of general-purpose applications. Acknowledging that finding an optimal layout is not only NP-hard, but also poorly approximable, our framework finds a good layout by searching the space of possible layouts, with the help of profile feedback. The search process iteratively prototypes candidate data layouts, evaluating them by "simulating" the program on a representative trace of memory accesses. To make the search process practical, we develop space-reduction heuristics and optimize the expensive simulation via memoization. Equipped with this iterative approach, we can synthesize layouts that outperform existing non-iterative heuristics, tune application-specific memory allocators, as well as compose multiple data-layout optimizations.

144 citations


Journal ArticleDOI
TL;DR: A study on the chemical stability of municipal solid waste (MSW) bottom ash submitted to weathering was carried out in order to identify and quantify the physico-chemical maturation mechanisms in a large heap over a period of about 18 months.

130 citations


Journal ArticleDOI
TL;DR: The organization of the workforce involved in outside-nest tasks (foraging, waste disposal) is investigated, and task switching and heap location are quantified, to test hypotheses that these tasks are organized to minimize contact between the heap and foraging entrances and trails.
Abstract: Unlike most leaf-cutting ants, which have underground waste dumps, the leaf-cutting ant Atta colombica dumps waste in a heap outside the nest. Waste is hazardous, as it is contaminated with pathogens. We investigated the organization of the workforce involved in outside-nest tasks (foraging, waste disposal) and quantified task switching and heap location to test hypotheses that these tasks are organized to minimize contact between the heap and foraging entrances and trails. Waste management is an important task: 11% of externally working ants were either transporting waste or manipulating waste on the heap, and the other 89% were foragers. There is strict division of labor between foragers and waste workers, with no task switching. Waste management also has division of labor and is undertaken by transporters that carry waste to the heap margins and heap workers that manage the heap. Waste heaps are always located downhill from nest entrances. The distance to the waste heap is positively related to colony size and negatively related to slope. Foraging trails avoid the heap, with 92% of trails going away from the heap. This avoidance behavior is costly, increasing foraging trail length by at least 6%. Waste management in A. colombica is a sophisticated system that encompasses both work and spatial organization. This organization is probably adaptive in reducing disease transmission. Division of labor separates waste management from foraging, reducing the likelihood of foragers becoming contaminated with waste. The downhill location of heaps reduces waste entering entrances during rain. The orientation of foraging trails reduces the possibility of foragers becoming accidentally contaminated with waste.

125 citations


Proceedings ArticleDOI
20 Jun 2002
TL;DR: Measurements of thread-local heaps with direct global allocation on a 4-way multiprocessor IBM Netfinity server show that the overall garbage collection times have been substantially reduced, and that most long pauses have been eliminated.
Abstract: We present a memory management scheme for Java based on thread-local heaps. Assuming most objects are created and used by a single thread, it is desirable to free the memory manager from redundant synchronization for thread-local objects. Therefore, in our scheme each thread receives a partition of the heap in which it allocates its objects and in which it does local garbage collection without synchronization with other threads. We dynamically monitor to determine which objects are local and which are global. Furthermore, we suggest using profiling to identify allocation sites that almost exclusively allocate global objects, and allocate objects at these sites directly in a global area. We have implemented the thread-local heap memory manager and a preliminary mechanism for direct global allocation on an IBM prototype of JDK 1.3.0 for Windows. Our measurements of thread-local heaps with direct global allocation on a 4-way multiprocessor IBM Netfinity server show that the overall garbage collection times have been substantially reduced, and that most long pauses have been eliminated.
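A minimal sketch of the allocation scheme, assuming per-thread partitions held as plain lists and a hypothetical profiler that routes global-bound allocation sites to `alloc_global` (the names are illustrative, not the JDK prototype's):

```python
import threading

class ThreadLocalHeaps:
    """Sketch of thread-local heap allocation: each thread allocates into
    its own partition with no lock, and only the global area is synchronized."""
    def __init__(self):
        self._local = threading.local()   # one partition per thread
        self._global = []
        self._global_lock = threading.Lock()

    def alloc_local(self, obj):
        # No synchronization: this partition belongs to the calling thread.
        if not hasattr(self._local, "heap"):
            self._local.heap = []
        self._local.heap.append(obj)
        return obj

    def alloc_global(self, obj):
        # Sites that profiling shows almost always allocate global objects
        # go directly to the shared area, under a lock.
        with self._global_lock:
            self._global.append(obj)
        return obj

    def local_collect(self, is_live):
        # Local GC over this thread's partition, with no cross-thread sync.
        heap = getattr(self._local, "heap", [])
        self._local.heap = [o for o in heap if is_live(o)]
```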

112 citations


Proceedings ArticleDOI
17 May 2002
TL;DR: The generality of Beltway enables the design and implementation of new collectors that are robust to variations in heap size and improve total execution time over the best generational copying collectors of which the author is aware.
Abstract: We present the design and implementation of a new garbage collection framework that significantly generalizes existing copying collectors. The Beltway framework exploits and separates object age and incrementality. It groups objects in one or more increments on queues called belts, collects belts independently, and collects increments on a belt in first-in-first-out order. We show that Beltway configurations, selected by command line options, act and perform the same as semi-space, generational, and older-first collectors, and encompass all previous copying collectors of which we are aware. The increasing reliance on garbage collected languages such as Java requires that the collector perform well. We show that the generality of Beltway enables us to design and implement new collectors that are robust to variations in heap size and improve total execution time over the best generational copying collectors of which we are aware by up to 40%, and on average by 5 to 10%, for small to moderate heap sizes. New garbage collection algorithms are rare, and yet we define not just one, but a new family of collectors that subsumes previous work. This generality enables us to explore a larger design space and build better collectors.
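The belt/increment organization can be sketched roughly as follows; the names and the liveness callback are illustrative, and the real Beltway copies objects between memory regions rather than Python lists:

```python
from collections import deque

class Beltway:
    """Sketch of the Beltway framework's organization: objects live in
    increments, increments sit on belts, and each belt is collected in
    first-in-first-out (oldest-increment-first) order."""
    def __init__(self, n_belts):
        self.belts = [deque() for _ in range(n_belts)]

    def allocate(self, obj, belt=0):
        # Allocate into the current (youngest) increment of the belt.
        if not self.belts[belt]:
            self.belts[belt].append([])
        self.belts[belt][-1].append(obj)

    def collect(self, belt, is_live, promote_to):
        # Collect the oldest increment on `belt` independently; survivors
        # are copied to `promote_to`, as a generational configuration would,
        # and the increment's space is reclaimed.
        if not self.belts[belt]:
            return []
        increment = self.belts[belt].popleft()
        survivors = [o for o in increment if is_live(o)]
        for o in survivors:
            self.allocate(o, promote_to)
        return survivors
```

Choosing the number of belts and increment sizes is what lets one configuration behave like a semi-space collector and another like a generational or older-first collector.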

110 citations


Journal ArticleDOI
TL;DR: In this article, the GEOCOAT™ process was studied in sets of small heated columns and the temperature was gradually increased to 70 °C, while successively introducing various mesophile and thermophile cultures.

108 citations


Proceedings ArticleDOI
20 Jun 2002
TL;DR: This work explores whether the connectivity of objects can yield useful partitions or improve existing partitioning schemes and indicates that connectivity correlates strongly with object lifetimes and deathtimes and is therefore likely to be useful for partitioning objects.
Abstract: Modern garbage collectors partition the set of heap objects to achieve the best performance. For example, generational garbage collectors partition objects by age and focus their efforts on the youngest objects. Partitioning by age works well for many programs because younger objects usually have short lifetimes and thus garbage collection of young objects is often able to free up many objects. However, generational garbage collectors are typically much less efficient for longer-lived objects, and thus prior work has proposed many enhancements to generational collection. Our work explores whether the connectivity of objects can yield useful partitions or improve existing partitioning schemes. We look at both direct (e.g., object A points to object B) and transitive (e.g., object A is reachable from object B) connectivity. Our results indicate that connectivity correlates strongly with object lifetimes and deathtimes and is therefore likely to be useful for partitioning objects.

83 citations


Proceedings ArticleDOI
01 Jan 2002
TL;DR: A new type-based approach to garbage collection that has similar attributes but lower cost than generational collection is presented, and the short type pointer technique for reducing memory requirements of objects (data) used by the program is described.
Abstract: In this paper, we introduce the notion of prolific and non-prolific types, based on the number of instantiated objects of those types. We demonstrate that distinguishing between these types enables a new class of techniques for memory management and data locality, and facilitates the deployment of known techniques. Specifically, we first present a new type-based approach to garbage collection that has similar attributes but lower cost than generational collection. Then we describe the short type pointer technique for reducing memory requirements of objects (data) used by the program. We also discuss techniques to facilitate the recycling of prolific objects and to simplify object co-allocation decisions. We evaluate the first two techniques on a standard set of Java benchmarks (SPECjvm98 and SPECjbb2000). An implementation of the type-based collector in the Jalapeno VM shows improved pause times, elimination of unnecessary write barriers, and reduction in garbage collection time (compared to the analogous generational collector) by up to 15%. A study to evaluate the benefits of the short-type pointer technique shows a potential reduction in the heap space requirements of programs by up to 16%.
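The prolific/non-prolific split can be sketched as a simple allocation-profile pass; the 1% cutoff below is an illustrative assumption, not the paper's exact criterion:

```python
from collections import Counter

def classify_types(allocations, threshold=0.01):
    """Classify types as prolific or non-prolific from an allocation
    profile. `allocations` is a sequence of type names, one per allocated
    object; a type is prolific here if it accounts for more than
    `threshold` of all allocations (an illustrative cutoff)."""
    counts = Counter(allocations)
    total = sum(counts.values())
    prolific = {t for t, c in counts.items() if c / total > threshold}
    return prolific, set(counts) - prolific
```

A type-based collector could then focus frequent collections on instances of prolific types, much as a generational collector focuses on young objects.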

83 citations


Proceedings ArticleDOI
04 Nov 2002
TL;DR: An allocation time object placement technique based on the recently introduced notion of prolific (frequently instantiated) types that attempts to co-locate, at allocation time, objects of prolific types that are connected via object references and a novel locality based graph traversal technique.
Abstract: The growing gap between processor and memory speeds is motivating the need for optimization strategies that improve data locality. A major challenge is to devise techniques suitable for pointer-intensive applications. This paper presents two techniques aimed at improving the memory behavior of pointer-intensive applications with dynamic memory allocation, such as those written in Java. First, we present an allocation time object placement technique based on the recently introduced notion of prolific (frequently instantiated) types. We attempt to co-locate, at allocation time, objects of prolific types that are connected via object references. Then, we present a novel locality based graph traversal technique. The benefits of this technique, when applied to garbage collection (GC), are twofold: (i) it improves the performance of GC due to better locality during a heap traversal and (ii) it restructures surviving objects in a way that enhances locality. On multiprocessors, this technique can further reduce overhead due to synchronization and false sharing. The experimental results, on a well-known suite of Java benchmarks (SPECjvm98 [26], SPECjbb2000 [27], and jOlden [4]), from an implementation of these techniques in the Jikes RVM [1], are very encouraging. The object co-allocation technique improves application performance by up to 21% (10% on average) in the Jikes RVM configured with a non-copying mark-and-sweep collector. The locality-based traversal technique reduces GC times by up to 20% (10% on average) and improves the performance of applications by up to 14% (6% on average) in the Jikes RVM configured with a copying semi-space collector. Both techniques combined can improve application performance by up to 22% (10% on average) in the Jikes RVM configured with a non-copying mark-and-sweep collector.

Proceedings ArticleDOI
Yoav Ossia, Ori Ben-Yitzhak, Irit Goft, Elliot K. Kolodner, Victor Leikehman, Avi Owshanko
17 May 2002
TL;DR: This work designed and implemented a fully parallel, incremental, mostly concurrent collector, which employs several novel techniques to meet the challenges of multi-gigabyte heaps and good scaling on multiprocessor hardware.
Abstract: Multithreaded applications with multi-gigabyte heaps running on modern servers provide new challenges for garbage collection (GC). The challenges for "server-oriented" GC include: ensuring short pause times on a multi-gigabyte heap, while minimizing throughput penalty, good scaling on multiprocessor hardware, and keeping the number of expensive multi-cycle fence instructions required by weak ordering to a minimum. We designed and implemented a fully parallel, incremental, mostly concurrent collector, which employs several novel techniques to meet these challenges. First, it combines incremental GC to ensure short pause times with concurrent low-priority background GC threads to take advantage of processor idle time. Second, it employs a low-overhead work packet mechanism to enable full parallelism among the incremental and concurrent collecting threads and ensure load balancing. Third, it reduces memory fence instructions by using batching techniques: one fence for each block of small objects allocated, one fence for each group of objects marked, and no fence at all in the write barrier. When compared to the mature well-optimized parallel stop-the-world mark-sweep collector already in the IBM JVM, our collector prototype reduces the maximum pause time from 284 ms to 101 ms, and the average pause time from 266 ms to 66 ms while only losing 10% throughput when running the SPECjbb2000 benchmark on a 256 MB heap on a 4-way 550 MHz Pentium multiprocessor.

Book ChapterDOI
11 Apr 2002
TL;DR: A framework for concisely defining and evaluating two symmetry reductions currently used in software model checking, involving heap objects and processes, is presented and an on-the-fly state space exploration algorithm combining both techniques is presented.
Abstract: Symmetry reduction techniques exploit symmetries that occur during the execution of a system, in order to minimize its state space for efficient verification of temporal logic properties. This paper presents a framework for concisely defining and evaluating two symmetry reductions currently used in software model checking, involving heap objects and, respectively, processes. An on-the-fly state space exploration algorithm combining both techniques is also presented. Second, the relation between symmetry and partial order reductions is investigated, showing how one's strengths can be used to compensate for the other's weaknesses. The symmetry reductions presented here were implemented in the dSPIN model checking tool. We performed a number of experiments that show significant progress in reducing the cost of finite state software verification.

Patent
26 Feb 2002
TL;DR: In this article, a partitioned scheduling heap data structure is proposed for data packet transmission scheduling using a plurality of levels for storing scheduling values for data packets according to their relative priorities.
Abstract: The present invention is directed toward methods and apparatus for data packet transmission scheduling using a partitioned scheduling heap data structure. The scheduling heap data structure has a plurality of levels for storing scheduling values for data packets according to their relative priorities. A highest level in the heap has a single position and each succeeding lower level has twice the number of positions as the preceding level. The data structure may be adapted to store a plurality of logical heaps within the heap data structure by assigning a highest level of each logical heap to a level in the heap data structure that is lower than the highest level. Thus, a single physical memory may be adapted to store plural logical heaps. This is useful because a single physical memory can be adapted to prioritize packets of various different transmission protocols and speeds.
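A rough sketch of the logical-heap idea: one implicit array holds the full binary-tree shape, and each logical heap is rooted at a slot below the top level, so sibling subtrees stay disjoint. The layout and operations here are illustrative, not the patent's hardware design:

```python
class PartitionedHeap:
    """One physical array storing several logical min-heaps. Level 0 is the
    single physical root; level k has 2**k slots; a logical heap rooted at
    slot r occupies only r's subtree under the children-at-2i+1/2i+2 rule."""
    def __init__(self, levels):
        self.a = [None] * (2 ** levels - 1)

    def roots_at_level(self, level):
        # The slots at `level` that can serve as logical-heap roots.
        start = 2 ** level - 1
        return list(range(start, 2 * start + 1))

    def push(self, root, key):
        # Place the key in the first free slot of root's subtree (BFS order),
        # then restore the heap property up to (but not past) root.
        frontier = [root]
        while frontier:
            nxt = []
            for i in frontier:
                if i >= len(self.a):
                    continue
                if self.a[i] is None:
                    self.a[i] = key
                    self._sift_up(i, root)
                    return True
                nxt += [2 * i + 1, 2 * i + 2]
            frontier = nxt
        return False  # this logical heap's subtree is full

    def _sift_up(self, i, root):
        while i != root:
            p = (i - 1) // 2
            if self.a[p] <= self.a[i]:
                break
            self.a[i], self.a[p] = self.a[p], self.a[i]
            i = p

    def peek_min(self, root):
        # Highest-priority scheduling value of one logical heap.
        return self.a[root]
```

Rooting two logical heaps at level 1, say, lets one physical memory prioritize packets of two different transmission protocols independently.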

Book ChapterDOI
21 Nov 2002
TL;DR: An alternative optimal cache oblivious priority queue based only on binary merging is devised and it is shown that the structure can be made adaptive to different usage profiles.
Abstract: The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory model. Arge et al. recently presented the first optimal cache oblivious priority queue, and demonstrated the importance of this result by providing the first cache oblivious algorithms for graph problems. Their structure uses cache oblivious sorting and selection as subroutines. In this paper, we devise an alternative optimal cache oblivious priority queue based only on binary merging. We also show that our structure can be made adaptive to different usage profiles.

Book ChapterDOI
08 Apr 2002
TL;DR: A class of transformations which modify the representation of dynamic data structures used in programs with the objective of compressing their sizes is introduced, and the common-prefix and narrow-data transformations, which respectively compress a 32 bit address pointer and a 32 bit integer field into 15 bit entities, are developed.
Abstract: We introduce a class of transformations which modify the representation of dynamic data structures used in programs with the objective of compressing their sizes. We have developed the common-prefix and narrow-data transformations that respectively compress a 32 bit address pointer and a 32 bit integer field into 15 bit entities. A pair of fields which have been compressed by the above compression transformations are packed together into a single 32 bit word. The above transformations are designed to apply to data structures that are partially compressible, that is, they compress portions of data structures to which transformations apply and provide a mechanism to handle the data that is not compressible. The accesses to compressed data are efficiently implemented by designing data compression extensions (DCX) to the processor's instruction set. We have observed average reductions in heap allocated storage of 25% and average reductions in execution time and power consumption of 30%. If DCX support is not provided the reductions in execution times fall from 30% to 12.5%.
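The packing idea can be sketched as follows, assuming a common-prefix check against a base address for the pointer and a 15-bit two's-complement encoding for the narrow integer; the exact bit layout and escape mechanism are illustrative, not DCX's:

```python
MASK15 = (1 << 15) - 1

def compress_pair(ptr, val, base):
    """Pack a pointer field and an integer field into one 32-bit word:
    the pointer contributes its low 15 bits (its high bits must match
    `base`, the common prefix), the integer its 15-bit two's-complement
    form. Returns None when either field is incompressible, standing in
    for the escape mechanism that handles incompressible data."""
    if (ptr >> 15) != (base >> 15):          # pointer prefix differs: escape
        return None
    if not (-(1 << 14) <= val < (1 << 14)):  # integer needs > 15 bits: escape
        return None
    return ((ptr & MASK15) << 15) | (val & MASK15)

def decompress_pair(word, base):
    # Rebuild the pointer from the shared prefix plus the stored low bits,
    # and sign-extend the 15-bit integer field.
    ptr = (base & ~MASK15) | ((word >> 15) & MASK15)
    val = word & MASK15
    if val >= (1 << 14):
        val -= 1 << 15
    return ptr, val
```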

Journal ArticleDOI
TL;DR: It is shown that liveness accuracy reduces the reachable heap size by up to 62% for the authors' benchmark programs, and type accuracy has an insignificant impact on a garbage collector's ability to find unreachable objects in their benchmark runs.
Abstract: The effectiveness of garbage collectors and leak detectors in identifying dead objects depends on the accuracy of their reachability traversal. Accuracy has two orthogonal dimensions: (i) whether the reachability traversal can distinguish between pointers and nonpointers (type accuracy), and (ii) whether the reachability traversal can identify memory locations that will be dereferenced in the future (liveness accuracy). This article presents an experimental study of the importance of type and liveness accuracy for reachability traversals. We show that liveness accuracy reduces the reachable heap size by up to 62% for our benchmark programs. However, the simpler liveness schemes (e.g., intraprocedural analysis of local variables) are largely ineffective for our benchmark runs: one must analyze global variables using interprocedural analysis to obtain significant benefits. Type accuracy has an insignificant impact on a garbage collector's ability to find unreachable objects in our benchmark runs. We report results for programs written in C, C++, and Eiffel.

Patent
17 Oct 2002
TL;DR: In this article, a heap is divided into logical regions, such as a garbage-collected heap in a Java environment, and an incremental compaction cycle is commenced, where the first region of the heap is compacted, with subsequent regions being compacted during subsequent time periods.
Abstract: A system and method for incrementally compacting a computer system heap is presented. A heap, such as a garbage-collected heap in a Java environment, is divided into logical regions. When the heap is becoming fragmented, an incremental compaction cycle is commenced. During a first time period, the first region of the heap is compacted, with subsequent regions being compacted during subsequent time periods. A time period commences when a garbage collection event occurs. In a multiprocessor environment the regions can be divided into a number of sections which are each compacted using a different processor. One or more break tables are constructed indicating how far contiguous groups of moveable objects should be moved to better group objects and eliminate interspersed free spaces. References throughout the heap that point to objects within the compacted region are then adjusted so that the references point to the new object locations.

Journal ArticleDOI
TL;DR: Data-layout optimizations rearrange fields within objects, objects withinObjects, and objects within the heap, with the goal of increasing spatial locality.
Abstract: Data-layout optimizations rearrange fields within objects, objects within objects, and objects within the heap, with the goal of increasing spatial locality. While the importance of data-layout opt...

Proceedings ArticleDOI
19 May 2002
TL;DR: It is shown that for this problem one of the operations find-min, decrease-key, or meld must take non-constant time, and it is proved that the lower bounds which hold for union- find in the cell probe model hold for Boolean union-find as well.
Abstract: In the classical meldable heap data type we maintain an item-disjoint collection of heaps under the operations find-min, insert, delete, decrease-key, and meld. In the usual definition decrease-key and delete get the item and the heap containing it as parameters. We consider the modified problem where decrease-key and delete get only the item but not the heap containing it. We show that for this problem one of the operations find-min, decrease-key, or meld must take non-constant time. This is in contrast with the original data type in which data structures supporting all these three operations in constant time are known (both in an amortized and a worst-case setting). To establish our results for meldable heaps we consider a weaker version of the union-find problem that is of independent interest, which we call Boolean union-find. In the Boolean union-find problem the find operation is a binary predicate that gets an item x and a set A and answers positively if and only if x ∈ A. We prove that the lower bounds which hold for union-find in the cell probe model hold for Boolean union-find as well. We also suggest new heap data structures implementing the modified meldable heap data type that are based on redundant binary counters. Our data structures have good worst-case bounds. The best of our data structures matches the worst-case lower bounds which we establish for the problem. The simplest of our data structures is an interesting generalization of binomial queues.

Proceedings ArticleDOI
04 Nov 2002
TL;DR: GCspy is an architectural framework for the collection, transmission, storage and replay of memory management behaviour that has been used to analyse production Java virtual machines running applications of realistic sizes and has revealed important insights into the interaction between application program and JVM.
Abstract: GCspy is an architectural framework for the collection, transmission, storage and replay of memory management behaviour. It makes new contributions to the understanding of the dynamic memory behaviour of programming languages (and especially object-oriented languages that make heavy demands on the performance of memory managers). GCspy's architecture allows easy incorporation into any memory management system: it is not limited to garbage-collected languages. It requires only small changes to the system in which it is incorporated but provides a simple to use yet powerful data-gathering API. GCspy scales to allow very large heaps to be visualised effectively and efficiently. It allows already-running, local or remote systems to be visualised and those systems to run at full speed outside the points at which data is gathered. GCspy's visualisation tool presents this information in a number of novel ways. Deep understanding of program behaviour is essential to the design of the next generation of garbage collectors and explicit allocators. Until now, no satisfactory tools have been available to assist the implementer in gaining an understanding of heap behaviour. GCspy has been demonstrated to be a practical solution to this dilemma. It has been used to analyse production Java virtual machines running applications of realistic sizes. Its use has revealed important insights into the interaction between application program and JVM and has led to the development of better garbage collectors.

Book ChapterDOI
17 Sep 2002
TL;DR: The main idea is to use different data structures such as BDDs, arithmetic constraints and shape graphs as type specific symbolic representations in automated verification to conservatively verify properties of concurrent linked lists.
Abstract: We present an automated verification technique for verification of concurrent linked lists with integer variables. We show that using our technique one can automatically verify invariants that relate (unbounded) integer variables and heap variables such as head ≠ null ⇒ numItems > 0. The presented technique extends our previous work on composite symbolic representations with shape analysis. The main idea is to use different data structures such as BDDs, arithmetic constraints and shape graphs as type specific symbolic representations in automated verification. We show that polyhedra based widening operation can be integrated with summarization operation in shape graphs to conservatively verify properties of concurrent linked lists.

Journal ArticleDOI
TL;DR: An implementation of the Older-First algorithm in the Jikes RVM for Java can perform as well as the simulation results suggested, and greatly improves total program performance when compared to using a fixed-size nursery generational collector.
Abstract: Until recently, the best performing copying garbage collectors used a generational policy which repeatedly collects the very youngest objects, copies any survivors to an older space, and then infrequently collects the older space. A previous study that used garbage-collection simulation pointed to potential improvements by using an Older-First copying garbage collection algorithm. The Older-First algorithm sweeps a fixed-sized window through the heap from older to younger objects, and avoids copying the very youngest objects which have not yet had sufficient time to die. We describe and examine here an implementation of the Older-First algorithm in the Jikes RVM for Java. This investigation shows that Older-First can perform as well as the simulation results suggested, and greatly improves total program performance when compared to using a fixed-size nursery generational collector. We further compare Older-First to a flexible-size nursery generational collector in which the nursery occupies all of the heap that does not contain older objects. In these comparisons, the flexible-nursery collector is occasionally the better of the two, but on average the Older-First collector performs the best.

Journal ArticleDOI
TL;DR: Preliminary experimental results are presented showing that the data structure analysis and pool allocation are effective for a set of pointer intensive programs in the Olden benchmark suite.
Abstract: This paper presents an analysis technique and a novel program transformation that can enable powerful optimizations for entire linked data structures. The fully automatic transformation converts ordinary programs to use pool (aka region) allocation for heap-based data structures. The transformation relies on an efficient link-time interprocedural analysis to identify disjoint data structures in the program, to check whether these data structures are accessed in a type-safe manner, and to construct a Disjoint Data Structure Graph that describes the connectivity pattern within such structures. We present preliminary experimental results showing that the data structure analysis and pool allocation are effective for a set of pointer intensive programs in the Olden benchmark suite. To illustrate the optimizations that can be enabled by these techniques, we describe a novel pointer compression transformation and briefly discuss several other optimization possibilities for linked data structures.
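Pool (region) allocation can be sketched as follows; the API is illustrative, and the real transformation rewrites C-level allocation sites after the interprocedural analysis proves a data structure disjoint and type-safe:

```python
class Pool:
    """Sketch of a pool (region): every node of one disjoint linked data
    structure is placed in the same pool, so the nodes sit close together
    and the whole structure can be released in one operation."""
    def __init__(self):
        self.objects = []

    def alloc(self, obj):
        self.objects.append(obj)
        return obj

    def release(self):
        # Free the entire data structure at once, instead of node by node.
        self.objects.clear()

def build_list(pool, values):
    """Build a singly linked list whose nodes all come from `pool`, as the
    transformation would arrange once the list is known to be disjoint."""
    head = None
    for v in reversed(values):
        head = pool.alloc({"val": v, "next": head})
    return head
```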

Patent
06 Dec 2002
TL;DR: In this article, a garbage collector collects at least a generation of a dynamically allocated heap in increments, and identifies references located outside a collection set that refer to objects that belong to the collection set, and evacuates the objects thus referred to before it reclaims the memory space that the collection sets occupies.
Abstract: A garbage collector collects at least a generation of a dynamically allocated heap in increments. In each increment, it identifies references located outside a collection set that refer to objects that belong to the collection set, and it evacuates the objects thus referred to before it reclaims the memory space that the collection set occupies. In some collection increments, references to collection-set objects are located both inside and outside the generation. The collector locates all such references, both those inside the generation and those outside it, before it evacuates any objects in response to any of them. By doing so, it is able to reduce the cost of locating references and evacuating objects.

Proceedings ArticleDOI
17 Sep 2002
TL;DR: A direct implementation of the shift and reset control operators in the SFE system is presented, based upon the popular incremental stack/heap strategy for representing continuations, which improves upon the traditional technique of simulating shift and reset via callcc.
Abstract: We present a direct implementation of the shift and reset control operators in the SFE system. The new implementation improves upon the traditional technique of simulating shift and reset via callcc. Typical applications of these operators exhibit space savings and a significant overall performance gain. Our technique is based upon the popular incremental stack/heap strategy for representing continuations. We present implementation details as well as some benchmark measurements for typical applications.
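The semantics of shift and reset (though not the stack/heap continuation representation the paper implements) can be sketched with a small continuation-passing encoding in Python. Everything here is illustrative; in particular, the real shift operator also delimits the body passed to it, which this toy version elides:

```python
# A "computation" is a function that takes a continuation k and returns
# the final answer at the nearest enclosing reset.

def ret(v):
    """Lift a plain value into a computation."""
    return lambda k: k(v)

def bind(c, f):
    """Sequence computation c into f, which maps a value to a computation."""
    return lambda k: c(lambda v: f(v)(k))

def reset(c):
    """Delimit: run c with the identity continuation and return its answer."""
    return c(lambda v: v)

def shift(f):
    """Capture the continuation up to the nearest reset as a plain
    function and hand it to f; f's result becomes the reset's value."""
    return lambda k: f(k)

# 1 + reset(2 + shift(lambda k: k(k(3)))) == 1 + (2 + (2 + 3)) == 8
body = bind(shift(lambda k: k(k(3))), lambda v: ret(2 + v))
result = 1 + reset(body)
```

The interesting case is the one the paper's implementation must make fast: the captured continuation `k` may be invoked zero times (aborting to the delimiter) or several times, which is exactly what forces a copied or heap-resident stack segment in a direct implementation.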

Patent
01 Jul 2002
TL;DR: In this paper, the authors present a system that facilitates performing generational garbage collection on a heap by dividing an old generation of the heap into segments and associating a separate card table with each segment.
Abstract: One embodiment of the present invention provides a system that facilitates performing generational garbage collection on a heap. The system operates by dividing an old generation of the heap into segments. Next, the system divides each segment into a series of cards and associates a separate card table with each segment. This card table has an entry for each card in the segment. In a variation on this embodiment, while updating a pointer within an object in the old generation, the system locates the segment containing the object and accesses the card table for the segment. The system then marks the entry in the card table associated with the card containing the object.
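A minimal sketch of the scheme (Python; the card and segment sizes below are assumed for illustration, not taken from the patent): the write barrier locates the segment containing an updated address, then marks the corresponding entry in that segment's own card table so the collector need rescan only dirty cards:

```python
CARD_SIZE = 512          # bytes per card (assumed)
SEGMENT_SIZE = 4096      # bytes per segment (assumed)
CARDS_PER_SEGMENT = SEGMENT_SIZE // CARD_SIZE

class OldGeneration:
    def __init__(self):
        self.card_tables = {}    # segment index -> per-segment card table

    def write_barrier(self, address):
        """Called when a pointer field at `address` is updated: mark the
        card containing the update in the owning segment's card table."""
        segment = address // SEGMENT_SIZE
        card = (address % SEGMENT_SIZE) // CARD_SIZE
        table = self.card_tables.setdefault(
            segment, [False] * CARDS_PER_SEGMENT)
        table[card] = True
```

Splitting one global card table into per-segment tables keeps each table small and lets segments be added or dropped independently as the old generation grows.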

Patent
04 Dec 2002
TL;DR: In this article, a garbage collector divides the garbage-collected heap into cards and maintains a table containing a card-object table entry for each card, which contains information from which the collector can determine where any references in the card are located and identify objects that may be reachable.
Abstract: A garbage collector divides the garbage-collected heap into “cards.” It maintains a table containing a card-object table entry for each card. A card's entry contains information from which the collector can determine where any references in the card are located and thereby identify objects that may be reachable. The encoding of a card's table entry is not restricted to values that indicate the location of the object in which the card begins. Instead, its possible values additionally include ones that indicate that the card begins with a certain number of references or that an object begins at a given location in the middle of the card. The collector thereby avoids consulting objects' class information unnecessarily.
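One way to picture such an entry encoding (the concrete value ranges below are invented for illustration; the patent does not fix them) is a single byte whose range selects among the three kinds of information an entry can carry:

```python
# Assumed encoding of one card-object table entry (one small integer):
#   0..127    -> the card begins with that many references
#   128..191  -> an object header begins (value - 128) words into the card
#   192..255  -> the object covering the card's start began (value - 192)
#                cards earlier
REFS_BASE, OBJ_BASE, BACK_BASE = 0, 128, 192

def encode_refs(n):
    """Card begins with n references; no class lookup needed to scan it."""
    assert 0 <= n < 128
    return REFS_BASE + n

def encode_object_start(words):
    """An object begins `words` words into the card."""
    assert 0 <= words < 64
    return OBJ_BASE + words

def encode_back(cards):
    """The object covering the card's start began `cards` cards earlier."""
    assert 0 <= cards < 64
    return BACK_BASE + cards

def decode(entry):
    if entry < OBJ_BASE:
        return ("refs", entry)
    if entry < BACK_BASE:
        return ("object_at", entry - OBJ_BASE)
    return ("back", entry - BACK_BASE)
```

The first kind of entry is the one that saves work: when the entry already says "this card starts with N references," the collector can process those slots directly instead of walking back to an object header and consulting its class.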

Patent
05 Nov 2002
TL;DR: In this paper, a garbage collector collects a dynamically allocated heap by employing the train algorithm, in which “car” sections of a heap generation are organized in groups, or trains.
Abstract: A garbage collector collects a dynamically allocated heap by employing the train algorithm, in which “car” sections of a heap generation are organized in groups, or “trains.” When a car section comes up for collection, objects that it contains are evacuated if they are referred to by references located in cars not currently being collected. The cars to which they are evacuated belong to the trains that contain the references. The trains form a sequence in which their constituent cars are to be collected, and objects that are directly allocated in the generation are placed into trains that precede some existing train in the collection sequence.
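A toy model of the train/car organization (names are illustrative; real train collectors also handle remembered sets, popular objects, and train-creation policy): collecting a car evacuates each externally referenced object into the train that holds the reference, then reclaims the car, and a train whose last car is reclaimed disappears with it:

```python
from collections import deque

class Car:
    def __init__(self):
        self.objects = []

class Train:
    def __init__(self):
        self.cars = deque([Car()])

class TrainHeap:
    def __init__(self):
        self.trains = deque([Train()])  # collection order: front first

    def collect_one_car(self, referrers):
        """Collect the first car of the first train. `referrers` maps an
        object to the Train containing a reference to it (absent if the
        object is unreferenced from outside the car)."""
        car = self.trains[0].cars.popleft()
        for obj in car.objects:
            dest = referrers.get(obj)
            if dest is not None:
                dest.cars[-1].objects.append(obj)  # evacuate the survivor
        if not self.trains[0].cars:
            self.trains.popleft()  # the whole train has been reclaimed
```

The point of the grouping is that a garbage cycle eventually migrates into a single train, which can then be reclaimed wholesale once nothing outside it refers in.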

Patent
26 Feb 2002
TL;DR: In this paper, a hierarchical heap data structure is proposed for data packet transmission scheduling, where the highest level has a single position and each succeeding lower level has twice the number of positions as the preceding level.
Abstract: The present invention is directed toward data packet transmission scheduling. Scheduling values, such as priority or other scheduling criteria assigned to data packets, are placed in a heap data structure (700). Packets percolate up through the heap by comparing their assigned values in pairs (816). Operations in the heap may be pipelined so as to provide for high-speed sorting (1000). Thus, a few relatively simple operations can be performed repeatedly to quickly percolate packets up through the heap. Another aspect of the invention provides for fast traversal of the scheduling heap data structure. The hierarchical heap may include a highest level having a single position and each succeeding lower level having twice the number of positions as the preceding level (700). A binary number may represent each position in the heap (806). To traverse the heap, the relative movements necessary to move from one position to another may be determined from the binary number (818).
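The binary-number traversal in the last claim can be sketched in software (Python; the patent targets pipelined hardware, and `heapq` here merely stands in for the percolation logic): in a 1-based implicit heap, a node's index, read bit by bit after the leading 1, gives the sequence of left/right moves from the root to that node:

```python
import heapq

def path_from_root(index):
    """Relative moves from the root to `index` in a 1-based binary heap:
    drop the leading 1 bit, then 0 = go left, 1 = go right."""
    bits = bin(index)[3:]             # strip '0b' and the leading 1 bit
    return ["L" if b == "0" else "R" for b in bits]

class PacketScheduler:
    """Priority scheduler: lowest scheduling value transmits first."""
    def __init__(self):
        self._heap = []

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[1]
```

Because the path is derivable from the index alone, a hardware traversal needs no parent or child pointers, which is what makes the pipelined percolation in the patent feasible.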