
Showing papers on "Heap (data structure) published in 2000"


Journal ArticleDOI
12 Nov 2000
TL;DR: Hoard as mentioned in this paper combines one global heap and per-processor heaps with a novel discipline that provably bounds memory consumption and has very low synchronization costs in the common case, making it the first allocator to simultaneously achieve scalability, avoidance of false sharing, and bounded memory consumption.
Abstract: Parallel, multithreaded C and C++ programs such as web servers, database managers, news servers, and scientific applications are becoming increasingly prevalent. For these applications, the memory allocator is often a bottleneck that severely limits program performance and scalability on multiprocessor systems. Previous allocators suffer from problems that include poor performance and scalability, and heap organizations that introduce false sharing. Worse, many allocators exhibit a dramatic increase in memory consumption when confronted with a producer-consumer pattern of object allocation and freeing. This increase in memory consumption can range from a factor of P (the number of processors) to unbounded memory consumption. This paper introduces Hoard, a fast, highly scalable allocator that largely avoids false sharing and is memory efficient. Hoard is the first allocator to simultaneously solve the above problems. Hoard combines one global heap and per-processor heaps with a novel discipline that provably bounds memory consumption and has very low synchronization costs in the common case. Our results on eleven programs demonstrate that Hoard yields low average fragmentation and improves overall program performance over the standard Solaris allocator by up to a factor of 60 on 14 processors, and up to a factor of 18 over the next best allocator we tested.
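The core design above — per-thread heaps served without locking in the common case, one global heap that bounds total memory — can be illustrated with a toy model. Everything below (class name, block counts, the refill and release thresholds) is a hypothetical sketch, not Hoard's actual algorithm:

```python
import threading

class ToyHoard:
    """Toy model of the Hoard design: per-thread heaps served without
    locking in the common case, backed by one lock-protected global heap
    that bounds total memory. Names and thresholds are illustrative."""

    def __init__(self, global_blocks=1024, refill=4):
        self.global_free = global_blocks   # blocks held by the global heap
        self.refill = refill               # blocks moved per refill/release
        self.lock = threading.Lock()       # guards the global heap only
        self.local = threading.local()     # per-thread free-block count

    def _local_free(self):
        return getattr(self.local, "free", 0)

    def malloc(self):
        if self._local_free() == 0:        # rare case: refill from global
            with self.lock:
                take = min(self.refill, self.global_free)
                self.global_free -= take
            self.local.free = take
        if self.local.free == 0:
            raise MemoryError("global heap exhausted")
        self.local.free -= 1               # common case: no lock taken
        return object()                    # stand-in for a real block

    def free(self, _block):
        self.local.free = self._local_free() + 1
        if self.local.free > 2 * self.refill:   # bound per-thread hoarding
            with self.lock:
                self.global_free += self.refill
            self.local.free -= self.refill
```

The release rule in `free` is what keeps a producer-consumer pattern from blowing up memory: a thread that only frees must eventually hand surplus blocks back to the global heap.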

476 citations


Patent
08 Jun 2000
TL;DR: In this paper, a method and system for detecting memory leaks in an object-oriented environment during real-time trace processing is provided, where an object allocator allocates objects during the execution of the program and modifies object allocation metrics in the profile data structure.
Abstract: A method and system for detecting memory leaks in an object-oriented environment during real-time trace processing is provided. During the profiling of a program executing in a data processing system, a profiler processes events caused by the execution of the program, and the profiler maintains a profile data structure containing execution-related metrics for the program. The execution-related metrics may include object allocation and deallocation metrics that are associated with object processing initiated on behalf of an executing method. An object allocator allocates objects during the execution of the program and modifies object allocation metrics in the profile data structure. Object metrics are stored in a particular location and a pointer to that location is stored in a hash table associated with the object's ID. In another embodiment the pointer to the location is stored in a shadow heap in the same relative position as the position of the object in the heap. The object allocation metrics and the object deallocation metrics may be compared to identify memory leaks.
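The comparison of allocation and deallocation metrics can be sketched as follows; the class and method names are illustrative, not from the patent:

```python
from collections import defaultdict

class LeakProfiler:
    """Toy version of the patent's idea: record object allocation and
    deallocation metrics in a hash table, then compare the two sides to
    flag likely leaks. All names here are illustrative."""

    def __init__(self):
        self.metrics = defaultdict(lambda: {"alloc": 0, "dealloc": 0})

    def on_alloc(self, class_name):
        self.metrics[class_name]["alloc"] += 1

    def on_dealloc(self, class_name):
        self.metrics[class_name]["dealloc"] += 1

    def suspected_leaks(self):
        # A class whose live count keeps growing is a leak candidate.
        return {cls: m["alloc"] - m["dealloc"]
                for cls, m in self.metrics.items()
                if m["alloc"] > m["dealloc"]}
```

In the patent the table is keyed by object ID (or mirrored in a shadow heap); keying by class name here keeps the sketch short.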

218 citations


Patent
David Wallman1
09 Jun 2000
TL;DR: In this article, the authors describe a virtual machine that supports the execution of more than one application per virtual machine process, where the heap manager manages substantially all heaps in the memory that are created by the virtual machine.
Abstract: Methods and apparatus for implementing a virtual machine that supports the execution of more than one application per virtual machine process are described. According to one aspect of the present invention, a computing system includes a processor, a memory, and a virtual machine that is in communication with the processor. The virtual machine is arranged to enable one or more jobs to run on the virtual machine, and is further arranged to create a heap in the memory for each job that runs on the virtual machine. In one embodiment, the virtual machine includes a jobs manager, a class manager, and a heap manager. In such an embodiment, the heap manager manages substantially all heaps in the memory that are created by the virtual machine.

171 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: In this article, a modular interprocedural pointer analysis algorithm based on access-paths for C programs is presented, which can reduce the overhead of representing context-sensitive transfer functions and effectively distinguish non-recursive heap objects.
Abstract: In this paper we present a modular interprocedural pointer analysis algorithm based on access-paths for C programs. We argue that access paths can reduce the overhead of representing context-sensitive transfer functions and effectively distinguish non-recursive heap objects. And when the modular analysis paradigm is used together with other techniques to handle type casts and function pointers, we are able to handle significant programs like those in the SPECcint92 and SPECcint95 suites. We have implemented the algorithm and tested it on a Pentium II 450 PC running Linux. The observed resource consumption and performance improvement are very encouraging.

167 citations


Proceedings ArticleDOI
01 Aug 2000
TL;DR: A method for finding bugs in code whose models represent all execution traces that involve at most j heap cells and k loop iterations is presented.
Abstract: A method for finding bugs in code is presented. For given small numbers j and k, the code of a procedure is translated into a relational formula whose models represent all execution traces that involve at most j heap cells and k loop iterations. This formula is conjoined with the negation of the procedure's specification. The models of the resulting formula, obtained using a constraint solver, are counterexamples: executions of the code that violate the specification. The method can analyze millions of executions in seconds, and thus rapidly expose quite subtle flaws. It can accommodate calls to procedures for which specifications but no code is available. A range of standard properties (such as absence of null pointer dereferences) can also be easily checked, using predefined specifications.
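The approach can be mimicked in miniature by replacing the constraint solver with brute-force enumeration over a small bounded domain; the function names and the buggy example below are hypothetical:

```python
from itertools import product

def find_counterexample(procedure, spec, domain, arity):
    """Toy analogue of bounded checking: instead of a constraint solver,
    exhaustively enumerate all inputs drawn from a small domain and
    return the first execution that violates the specification."""
    for args in product(domain, repeat=arity):
        if not spec(args, procedure(*args)):
            return args            # a counterexample: spec violated
    return None                    # no bug within the bound

def buggy_max(a, b):
    # Bug: the else-branch returns a instead of b.
    return a if a >= b else a

def max_spec(args, result):
    a, b = args
    return result >= a and result >= b and result in (a, b)

cex = find_counterexample(buggy_max, max_spec, range(-2, 3), 2)
```

As in the paper, a `None` result only means no violation exists within the bound, not that the procedure is correct in general.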

139 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: It is shown that a concurrent skiplist structure, following a simple set of modifications, provides a concurrent priority queue with a higher level of parallelism and significantly less contention than the fastest known heap-based algorithms.
Abstract: This paper addresses the problem of designing scalable concurrent priority queues for large scale multiprocessor machines with up to several hundred processors. Priority queues are fundamental in the design of modern multiprocessor algorithms, with many classical applications ranging from numerical algorithms through discrete event simulation and expert systems. While highly scalable approaches have been introduced for the special case of queues with a fixed set of priorities, the most efficient designs for the general case are based on the parallelization of the heap data structure. Though numerous intricate heap-based schemes have been suggested in the literature, their scalability seems to be limited to small machines in the range of ten to twenty processors. This paper proposes an alternative approach: to base the design of concurrent priority queues on the probabilistic skiplist data structure, rather than on a heap. To this end, we show that a concurrent skiplist structure, following a simple set of modifications, provides a concurrent priority queue with a higher level of parallelism and significantly less contention than the fastest known heap-based algorithms. Our initial empirical evidence, collected on a simulated 256 node shared memory multiprocessor architecture similar to the MIT Alewife, suggests that the new skiplist based priority queue algorithm scales significantly better than heap based schemes throughout most of the concurrency range. With 256 processors, it is about twice as fast in performing deletions and up to 8 times faster in performing insertions.
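A sequential sketch of the skiplist-as-priority-queue idea follows; the concurrent version's fine-grained synchronization is omitted, and all parameters (max level, promotion probability) are illustrative:

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.next = [None] * level

class SkiplistPQ:
    """Sequential sketch: a skiplist kept ordered by priority, so the
    minimum is always the first node at the bottom level. A concurrent
    version would add per-node locks or CAS, omitted here."""

    MAX_LEVEL = 8          # illustrative bound on tower height

    def __init__(self):
        self.head = SkipNode(float("-inf"), self.MAX_LEVEL)

    def insert(self, key):
        level = 1
        while level < self.MAX_LEVEL and random.random() < 0.5:
            level += 1                         # randomized tower height
        node = SkipNode(key, level)
        cur = self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while cur.next[i] is not None and cur.next[i].key < key:
                cur = cur.next[i]
            if i < level:                      # splice in at this level
                node.next[i] = cur.next[i]
                cur.next[i] = node

    def delete_min(self):
        first = self.head.next[0]
        if first is None:
            return None
        for i in range(len(first.next)):       # unlink from every level
            self.head.next[i] = first.next[i]
        return first.key
```

The appeal for concurrency is visible even here: `delete_min` touches only the head's forward pointers, while inserts of different keys tend to touch disjoint nodes.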

117 citations


Journal ArticleDOI
TL;DR: It is found that microscopic grain dynamics during an avalanche are similar to those in the continuous flow just above the transition, and there is a minimum jamming time, even arbitrarily close to the transition.
Abstract: Surface flows are excited by steadily adding spherical glass beads to the top of a heap. To simultaneously characterize the fast single-grain dynamics and the much slower collective intermittency of the flow, we extend photon-correlation spectroscopy via fourth-order temporal correlations in the scattered light intensity. We find that microscopic grain dynamics during an avalanche are similar to those in the continuous flow just above the transition. We also find that there is a minimum jamming time, even arbitrarily close to the transition.

105 citations


Journal ArticleDOI
TL;DR: In this paper, a new mode of heap behavior called "evaporative autocatalysis" is proposed, in which air is blown upward through the heap at a rate sufficient to drive the net advection of heat upward, resulting in much higher and more uniform internal heap temperatures than can be achieved in the absence of forced aeration.

100 citations


Patent
02 Jun 2000
TL;DR: In this paper, a method and system for garbage collecting a virtual heap using nursery regions for newly created objects to reduce flushing of objects from an in-memory heap to a store heap is provided.
Abstract: A method and system for garbage collecting a virtual heap using nursery regions for newly created objects to reduce flushing of objects from an in-memory heap to a store heap is provided. The garbage collection method is suited for use with small consumer and appliance devices that have a small amount of memory and may be using flash devices as persistent storage. The garbage collection method may provide good performance where only a portion of the virtual heap may be cached in the physical heap. The virtual heap may use a single address space for both objects in the store and the in-memory heap. In one embodiment, a single garbage collector is run on the virtual heap address space. The garbage collection method may remove non-referenced objects from the virtual heap. The garbage collection method may also include a compaction phase to reduce or eliminate fragmentation, and to improve locality of objects within the virtual heap. In one embodiment, the garbage collector for the virtual heap may be implemented as a generational garbage collector using working sets in the virtual heap, where each generation is confined to a working set of the heap. The generational garbage collector may allow the flushing of changes after each garbage collection cycle for each working set region. Heap regions with different flushing policies may be used. An object nursery region without flushing where objects are initially created may be used. When a garbage collection cycle is run, objects truly referenced in the object nursery may be copied back into heap regions to be flushed, while short-lived objects no longer referenced may be deleted without flushing.
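The nursery scheme can be sketched as a toy: newly created objects live in an unflushed nursery, and a collection cycle promotes only the truly referenced ones into flushable heap regions. All names below are illustrative:

```python
class NurseryHeap:
    """Toy sketch of the nursery scheme: new objects live in an unflushed
    nursery; at collection time, objects still referenced from the root
    set are promoted into a flushable heap region, while dead nursery
    objects vanish without ever being flushed to the store."""

    def __init__(self):
        self.nursery = {}      # object id -> value, never flushed
        self.heap = {}         # promoted objects, subject to flushing
        self.store = {}        # persistent store (e.g. flash)

    def new(self, oid, value):
        self.nursery[oid] = value

    def collect(self, roots):
        for oid, value in self.nursery.items():
            if oid in roots:           # truly referenced: promote
                self.heap[oid] = value
        self.nursery.clear()           # short-lived objects die here
        self.store.update(self.heap)   # flush promoted regions
```

The point of the design is visible in the sketch: short-lived objects never cause a write to the (slow, wear-limited) persistent store.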

91 citations


Proceedings ArticleDOI
01 Jun 2000
TL;DR: This paper studies the memory system behavior of Java programs by analyzing memory reference traces of several SPECjvm98 applications running with a Just-In-Time (JIT) compiler and finds that the overall cache miss ratio is increased due to garbage collection, which suffers from higher cache misses compared to the application.
Abstract: This paper studies the memory system behavior of Java programs by analyzing memory reference traces of several SPECjvm98 applications running with a Just-In-Time (JIT) compiler. Trace information is collected by an exception-based tracing tool called JTRACE, without any instrumentation to the Java programs or the JIT compiler. First, we find that the overall cache miss ratio is increased due to garbage collection, which suffers from higher cache misses compared to the application. We also note that going beyond 2-way cache associativity improves the cache miss ratio marginally. Second, we observe that Java programs generate a substantial amount of short-lived objects. However, the size of frequently-referenced long-lived objects is more important to the cache performance, because it tends to determine the application's working set size. Finally, we note that the default heap configuration which starts from a small initial heap size is very inefficient since it invokes a garbage collector frequently. Although the direct costs of garbage collection decrease as we increase the available heap size, there exists an optimal heap size which minimizes the total execution time due to the interaction with the virtual memory performance.

91 citations


Patent
25 Oct 2000
TL;DR: In this article, a multiprocessor, multiprogram, stop-the-world garbage collection program is described, which initially over partitions the root sources, and then iteratively employs static and dynamic work balancing.
Abstract: A multiprocessor, multiprogram, stop-the-world garbage collection program is described. The system initially over-partitions the root sources, and then iteratively employs static and dynamic work balancing. Garbage collection threads compete dynamically for the initial partitions. Work-stealing double-ended queues, where contention is reduced, are described to provide dynamic load balancing among the threads. Contention is resolved by atomic instructions. The heap is broken into a young and an old generation, where semi-space copying employing local allocation buffers is used to collect the reachable objects in the young generation and to promote objects from the young to the old generation, and parallel mark-compacting is used for collecting the old generation. Efficiency of collection is enhanced by use of card tables and linking objects, and overflow conditions are efficiently handled by linking using class pointers. Garbage collection termination is handled using a global status word.
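The work-stealing double-ended queues can be sketched sequentially; a real implementation resolves owner/thief races with atomic instructions, which this toy omits:

```python
from collections import deque

class WorkStealingDeque:
    """Sequential sketch of a work-stealing deque: the owning GC thread
    pushes and pops at one end (LIFO, good locality), while idle threads
    steal from the opposite end (FIFO), so the two kinds of access
    contend at different ends of the queue."""

    def __init__(self, tasks=()):
        self.tasks = deque(tasks)

    def push(self, task):
        self.tasks.append(task)                        # owner end

    def pop(self):
        return self.tasks.pop() if self.tasks else None

    def steal(self):
        return self.tasks.popleft() if self.tasks else None
```

Stealing the oldest task tends to transfer the largest remaining subtree of work, which is part of why this shape balances load well.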

Proceedings ArticleDOI
Bjarne Steensgaard1
16 Oct 2000
TL;DR: An escape analysis and a sample memory management system using thread-specific heaps are presented, which allow concurrent garbage collection of different thread- specific heaps with minimal synchronization overhead on multi-processor computers.
Abstract: Garbage collection for a multi-threaded program typically involves either stopping all threads while doing the collection or copious amounts of synchronization between threads. However, a lot of data is only ever visible to a single thread, and such data should ideally be collected without involving other threads. Given an escape analysis, a memory management system may allocate thread-specific data in thread-specific heaps and allocate shared data in a shared heap. Garbage collection of data in a thread-specific heap can be done independent of other threads and of data in their thread-specific heaps. For multi-threaded programs, thread-specific heaps allow reduced garbage collection latency for active threads. On multi-processor computers, thread-specific heaps allow concurrent garbage collection of different thread-specific heaps with minimal synchronization overhead. We present an escape analysis and a sample memory management system using thread-specific heaps.

Patent
02 Jun 2000
TL;DR: In this article, a method and system for performing generational garbage collection on a virtual heap in a virtual machine is described, where only a portion of the virtual heap may be cached in the physical heap.
Abstract: A method and system for performing generational garbage collection on a virtual heap in a virtual machine is provided. The garbage collection method is suited for use with small consumer and appliance devices that have a small amount of memory and may be using flash devices as persistent storage. The garbage collection method may provide good performance where only a portion of the virtual heap may be cached in the physical heap. The virtual heap may use a single address space for both objects in the store and the in-memory heap. In one embodiment, a single garbage collector is run on the virtual heap address space. The garbage collection method may remove non-referenced objects from the virtual heap. The garbage collection method may also include a compaction phase to reduce or eliminate fragmentation, and to improve locality of objects within the virtual heap. In one embodiment, the garbage collector for the virtual heap may be implemented as a generational garbage collector using working sets in the virtual heap, where each generation is confined to a working set of the heap. The generational garbage collector may allow the flushing of changes after each garbage collection cycle for each working set region. Heap regions with different flushing policies may be used. An object nursery region without flushing where objects are initially created may be used. When a garbage collection cycle is run, objects truly referenced in the object nursery may be copied back into heap regions to be flushed, while short-lived objects no longer referenced may be deleted without flushing.

Patent
Elliot K. Kolodner1, Erez Petrank1
11 Dec 2000
TL;DR: In this article, a method for performing garbage collection of memory objects in a memory heap is presented, which includes the steps of partitioning the heap into old and young generations, and then applying on-the-fly garbage collection to memory objects of the young generation.
Abstract: A method for performing garbage collection of memory objects in a memory heap is provided. The method includes the steps of partitioning the heap into old and young generations, followed by applying on-the-fly garbage collection to memory objects in the young generation while program threads run concurrently.

Patent
12 Jan 2000
TL;DR: In this paper, each instruction that creates an object (i.e., allocation instruction) is first analyzed to determine whether it is one of the following three types: no escape, global escape, and arg escape.
Abstract: An object oriented mechanism and method allow allocating a greater number of objects on a method's invocation stack. Each instruction that creates an object (i.e., allocation instruction) is first analyzed to determine whether it is one of the following three types: no escape, global escape, and arg escape. If an allocation instruction is global escape, the object must be allocated from the heap. If an allocation instruction is no escape, it can be allocated on the method's invocation stack frame. If an allocation instruction is arg escape, further analysis is required to determine whether the object can be allocated on an invoking method's stack or must be allocated from the heap. If the method that contains an arg escape allocation instruction can be inlined into a method from which the lifetime of the object does not escape, the object can be allocated on the invoking method's stack. This inlining can be done for several layers up, if needed and possible. This allows for nested objects to be potentially allocated on a method's stack, instead of forcing each of these objects to be allocated from the heap.
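The three-way classification and the resulting allocation decision can be sketched as a single function; the string labels are illustrative stand-ins for the patent's categories:

```python
def allocation_site(escape_kind, inlinable=False):
    """Sketch of the decision rule described above. escape_kind is one of
    'no_escape', 'global_escape', 'arg_escape'; inlinable says whether an
    arg-escape allocation's method can be inlined far enough that the
    object's lifetime stays within some caller's stack frame."""
    if escape_kind == "global_escape":
        return "heap"                       # may outlive the method
    if escape_kind == "no_escape":
        return "stack"                      # current invocation frame
    if escape_kind == "arg_escape":
        return "caller_stack" if inlinable else "heap"
    raise ValueError(escape_kind)
```

The interesting case is `arg_escape`: the object escapes the method only through its arguments, so inlining can turn a heap allocation into a stack allocation several frames up.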

Patent
02 Jun 2000
TL;DR: A database store method and system for a virtual persistent heap may include an Application Programming Interface (API) that provides a mechanism to cache portions of the virtual heap into an in-memory heap for use by an application.
Abstract: A database store method and system for a virtual persistent heap may include an Application Programming Interface (API) that provides a mechanism to cache portions of the virtual heap into an in-memory heap for use by an application. The virtual heap may be stored in a persistent store that may include one or more virtual persistent heaps, with one virtual persistent heap for each application running in the virtual machine. Each virtual persistent heap may be subdivided into cache lines. The store API may provide atomicity on the store transaction to substantially guarantee the consistency of the information stored in the database. The database store API provides several calls to manage the virtual persistent heap in the store. The calls may include, but are not limited to: opening the store, closing the store, atomic read transaction, atomic write transaction, and atomic delete transaction.

Book ChapterDOI
John Iacono1
05 Jul 2000
TL;DR: It is shown that pairing heaps have a distribution sensitive behavior whereby the cost to perform an extract-min on an element x is O(log min(n, k)) where k is the number of heap operations performed since x's insertion.
Abstract: Pairing heaps are shown to have constant amortized time insert and zero amortized time meld, thus improving the previous O(log n) amortized time bound on these operations. It is also shown that pairing heaps have a distribution sensitive behavior whereby the cost to perform an extract-min on an element x is O(log min(n, k)) where k is the number of heap operations performed since x's insertion. Fredman has observed that pairing heaps can be used to merge sorted lists of varying sizes optimally, within constant factors. Utilizing the distribution-sensitive behavior of pairing heaps, an alternative method that employs pairing heaps for optimal list merging is derived.
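A minimal pairing heap showing the operations the bounds refer to — constant amortized time insert and meld, and the two-pass pairing in extract-min — might look like this (a textbook sketch, not the paper's analysis):

```python
class PairingHeap:
    """Minimal pairing heap: insert and meld are single comparisons,
    while extract-min does the two-pass pairing of the root's children."""

    class Node:
        __slots__ = ("key", "children")
        def __init__(self, key):
            self.key = key
            self.children = []

    def __init__(self):
        self.root = None

    def _meld(self, a, b):
        if a is None:
            return b
        if b is None:
            return a
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)       # larger root becomes a child
        return a

    def insert(self, key):         # constant amortized time
        self.root = self._meld(self.root, self.Node(key))

    def meld(self, other):         # absorbs the other heap in O(1)
        self.root = self._meld(self.root, other.root)
        other.root = None

    def extract_min(self):
        if self.root is None:
            return None
        smallest = self.root.key
        kids = self.root.children
        # Two-pass pairing: meld adjacent pairs left-to-right...
        paired = [self._meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
                  for i in range(0, len(kids), 2)]
        # ...then meld the results right-to-left into a single tree.
        root = None
        for tree in reversed(paired):
            root = self._meld(root, tree)
        self.root = root
        return smallest
```

All restructuring cost is deferred to extract-min, which is exactly why insert and meld admit the improved amortized bounds.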

Patent
19 May 2000
TL;DR: In this article, a heap is scanned in order until a sequence of a requested quantity of free contiguous memory blocks is found or NULL is returned, and the scan then continues from the beginning of the heap.
Abstract: A system and method for memory allocation from a heap comprising memory blocks of a uniform fixed size. Each memory block has a status bit. A binary status key stores a Boolean value indicating free memory. The heap is scanned in order until a sequence of a requested quantity of free contiguous memory blocks is found or NULL is returned. Each scanned free memory block is marked un-free by assigning its status bit to the logical negative of the binary status key. If the end of the heap is reached before a sequence of sufficient quantity is found, all reachable blocks are marked as free. The binary status key is flipped such that all memory blocks which were marked free are now un-free, and vice versa. Any memory block whose corresponding structure has become unreferenced is reclaimed for future use. The scan then continues from the beginning of the heap. In another embodiment, a memory allocation for a partitioned data structure from a heap of fixed-size memory blocks may be used. The quantity of memory blocks required to store a data structure is determined. The required quantity of the memory blocks, which may be noncontiguous, is allocated from the heap. The allocated memory blocks are linked in a list such that the components of the data structure are partitioned in the proper order across the allocated quantity of memory blocks.
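The status-bit-plus-key scheme can be sketched as follows. The sketch assumes, as the description above does, that collection runs after a failed scan has marked every block un-free; the names and the list-based bitmap are illustrative:

```python
class FlipBitAllocator:
    """Toy sketch of the scheme above: each fixed-size block has one
    status bit, and a single binary status key says which bit value
    currently means 'free'."""

    def __init__(self, nblocks):
        self.key = True              # bit == key means "free"
        self.bits = [True] * nblocks

    def alloc(self, n):
        """Scan for n contiguous free blocks, marking each scanned free
        block un-free (the logical negative of the key) along the way."""
        run = 0
        for i, bit in enumerate(self.bits):
            if bit == self.key:
                self.bits[i] = not self.key
                run += 1
                if run == n:
                    return i - n + 1          # start index of the run
            else:
                run = 0
        return None                           # NULL: heap exhausted

    def collect(self, live):
        """After a failed scan every block is un-free. Mark the blocks of
        each reachable allocation free, then flip the key: reachable
        blocks become un-free again and every other block is reclaimed
        in one step."""
        for start, size in live:
            for j in range(start, start + size):
                self.bits[j] = self.key
        self.key = not self.key
```

Flipping the key reclaims all unreachable blocks without touching their bits, which is the trick that makes reclamation cheap.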

Patent
Steven S. Greenberg1
21 Mar 2000
TL;DR: In this paper, the earliest scheduled time is removed from the heap and the events in the associated bucket are performed, and the remaining scheduled times are reorganized into a new heap.
Abstract: In the simulation of an analog and mixed-signal analog-digital physical circuit, events are assigned scheduled times. The events are stored in buckets in a hash table, with the scheduled times of the events in each bucket associated with the bucket. The scheduled times are organized into a heap, with the earliest scheduled time at the root of the heap. The earliest scheduled time is removed from the heap, and the events in the associated bucket are performed. Performing the scheduled events can cause new events to be scheduled, and existing events to be de-scheduled. When all the events in the bucket associated with the earliest scheduled time are simulated, the remaining scheduled times are re-organized into a new heap, and the steps of removing the earliest scheduled time, performing the scheduled events, and re-organizing the remaining scheduled times are repeated.
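The hash-table-of-buckets plus heap-of-times structure maps naturally onto a dictionary and a binary heap; this sketch uses Python's heapq in place of the patent's explicit re-organization step, so details differ:

```python
import heapq
from collections import defaultdict

class EventScheduler:
    """Sketch of the structure above: events are bucketed in a hash table
    keyed by scheduled time, and the distinct times are kept in a heap so
    the earliest bucket can be found and removed quickly."""

    def __init__(self):
        self.buckets = defaultdict(list)   # scheduled time -> events
        self.times = []                    # heap of scheduled times

    def schedule(self, time, event):
        if time not in self.buckets:       # each time enters the heap once
            heapq.heappush(self.times, time)
        self.buckets[time].append(event)

    def run_next(self):
        if not self.times:
            return None
        t = heapq.heappop(self.times)      # earliest scheduled time
        for ev in self.buckets.pop(t):     # performing events may schedule more
            ev(self)
        return t
```

As in the simulator described above, performing an event may schedule new events, which simply pushes new times onto the heap before the next cycle.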

Journal ArticleDOI
29 Jun 2000-Nature
TL;DR: The Human Genome Project and Celera Genomics announce that they have compiled the ‘working draft’ of the human genome sequence.
Abstract: Washington The Human Genome Project and Celera Genomics announce that they have compiled the ‘working draft’ of the human genome sequence.

Proceedings ArticleDOI
01 Sep 2000
TL;DR: An operational semantics for parallel lazy evaluation is presented that accurately models the parallel behaviour of the non-strict parallel functional language GpH and is the first semantics that models such thread states.
Abstract: We present an operational semantics for parallel lazy evaluation that accurately models the parallel behaviour of the non-strict parallel functional language GpH. Parallelism is modelled synchronously, that is, single reductions are carried out separately then combined before proceeding to the next set of reductions. Consequently the semantics has two levels, with transition rules for individual threads at one level and combining rules at the other. Each parallel thread is modelled by a binding labelled with an indication of its activity status. To the best of our knowledge this is the first semantics that models such thread states. A set of labelled bindings corresponds to a heap and is used to model sharing. The semantics is set at a higher level of abstraction than an abstract machine and is therefore more manageable for proofs about programs rather than implementations. At the same time, it is sufficiently low level to allow us to reason about programs in terms of parallelism (i.e. the number of processors used) as well as work and run-time with different numbers of processors. The framework used by the semantics is sufficiently flexible and general that it can easily be adapted to express other evaluation models such as sequential call-by-need, speculative evaluation, non-deterministic choice and others.

Patent
31 Oct 2000
TL;DR: In this paper, a multiprocessor, multi-program, stop-the-world garbage collection program is described, which initially over partitions the root sources, and then iteratively employs static and dynamic work balancing.
Abstract: A multiprocessor, multi-program, stop-the-world garbage collection program is described. The system initially over-partitions the root sources, and then iteratively employs static and dynamic work balancing. Garbage collection threads compete dynamically for the initial partitions. Work-stealing double-ended queues, where contention is reduced, are described to provide dynamic load balancing among the threads. Contention is resolved by using atomic instructions. The heap is broken into a young and an old generation, where parallel semi-space copying is used to collect the young generation and parallel mark-compaction is used for the old generation. The old generation heap is divided into a number of contiguous cards that are partitioned into subsets. The cards are arranged into the subsets so that non-contiguous cards are contained in each subset. Speed and efficiency of collection is enhanced by use of card tables and linking objects, and overflow conditions are efficiently handled by linking using class pointers. Garbage collection termination employs a global status word.

Patent
31 Oct 2000
TL;DR: In this article, a method for detecting data races in the execution of multi-threaded, strictly object oriented programs is presented, whereby objects on a heap are classified into a set of global objects, containing objects that can be reached by more than one thread, and sets of local objects, containing objects that can only be reached by one thread.
Abstract: The present invention relates to concurrently executing program threads in computer systems, and more particularly to detecting data races. A computer implemented method for detecting data races in the execution of multi-threaded, strictly object oriented programs is provided, whereby objects on a heap are classified into a set of global objects, containing objects that can be reached by more than one thread, and sets of local objects, containing objects that can only be reached by one thread. Only the set of global objects is observed for determining occurrence of data races.

Proceedings Article
30 Jul 2000
TL;DR: This work considers the problem of algorithm selection: dynamically choose an algorithm to attack an instance or subinstances of a problem with the goal of minimizing the overall execution time, and uses ideas from reinforcement learning to solve it.
Abstract: Many computational problems can be solved by multiple algorithms, with different algorithms fastest for different problem sizes, input distributions, and hardware characteristics. We consider the problem of algorithm selection: dynamically choose an algorithm to attack an instance or subinstances (due to recursive calls) of a problem with the goal of minimizing the overall execution time. We formulate the problem as a kind of Markov Decision Process (MDP), and use ideas from reinforcement learning (RL) to solve it. The process's state consists of a set of instance features, such as problem size. Actions are the different algorithms we can choose from. Non-recursive algorithms are terminal in that they solve the problem completely (terminal state). Recursive algorithms create subproblems and therefore cause transitions to other states, making the task a sequential decision task. The immediate cost of a decision is the real time taken for executing the selected algorithm on the current instance, excluding time taken in recursive calls. Thus, the total (undiscounted) cost during an episode is the time taken to solve the problem. The goal is a policy that minimizes the total cost/time. This process differs from a standard MDP as it allows one-to-many state transitions (multiple recursive calls at one level). Our initial experiments focus on the problem of order statistic selection: given an array of (unordered) numbers and some index i, select the number that would rank i-th if the array were sorted. We picked two algorithms such that neither is best in all cases, otherwise learning would not help: DETERMINISTIC SELECT is a recursive algorithm, and HEAP SELECT is a non-recursive algorithm.
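The selection setup can be sketched with two stand-in algorithms and a fixed policy; the RL approach in the paper learns the choice from instance features, rather than hard-coding a threshold as done here:

```python
import heapq

def heap_select(arr, k):
    """Return the k-th smallest element (1-based) using a heap,
    which tends to be fast when k is small."""
    return heapq.nsmallest(k, arr)[-1]

def sort_select(arr, k):
    """Simple stand-in for a second selection algorithm: sort and index.
    Tends to win when k is large relative to the array."""
    return sorted(arr)[k - 1]

def select(arr, k, threshold=32):
    """A fixed policy standing in for the learned one: pick the algorithm
    by comparing k to a hypothetical threshold. The paper's point is that
    this choice can be learned from instance features instead."""
    algo = heap_select if k <= threshold else sort_select
    return algo(arr, k)
```

Both algorithms return the same answer on every input; only their running times differ, which is exactly the situation where algorithm selection pays off.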

Patent
06 Nov 2000
TL;DR: In this article, the use of three heaps enables garbage collection to be selectively targeted to one heap at a time in between applications, thus avoiding this overhead during the life of an application.
Abstract: In a virtual machine environment, a method and apparatus for the use of multiple heaps to retain persistent data and transient data, wherein the multiple heaps enable a single virtual machine to be easily resettable, thus avoiding the need to terminate and start a new virtual machine, as well as enabling a single virtual machine to retain data and objects across multiple applications, thus avoiding the computing resource overhead of relinking, reloading, reverifying, and recompiling classes. The memory hierarchy includes a System Heap, a Middleware Heap and a Transient Heap. The use of three heaps enables garbage collection to be selectively targeted to one heap at a time in between applications, thus avoiding this overhead during the life of an application.

Journal Article
TL;DR: The details of the memory model underlying the verification of sequential Java programs in the “LOOP” project are explained, via several examples of Java programs, involving various subtleties of the language (wrt. memory storage).
Abstract: This paper explains the details of the memory model underlying the verification of sequential Java programs in the LOOP project ([14,20]). The building blocks of this memory are cells, which are untyped in the sense that they can store the contents of the fields of an arbitrary Java object. The main memory is modeled as three infinite series of such cells: one for storing instance variables on a heap, one for local variables and parameters on a stack, and one for static (or class) variables. Verification on the basis of this memory model is illustrated both in PVS and in Isabelle/HOL, via several examples of Java programs, involving various subtleties of the language (wrt. memory storage).

Patent
Richard J. Houldsworth1
09 Mar 2000
TL;DR: In this paper, a method of scheduling instructions to be executed concurrently by a processor, the processor being capable of executing a predetermined number of instructions concurrently, is presented, where instructions from a first process and a second process are interleaved according to a predetermined rule to give a third process.
Abstract: A method of scheduling instructions to be executed concurrently by a processor, the processor being capable of executing a predetermined number of instructions concurrently. Instructions from a first process and a second process are interleaved according to a predetermined rule to give a third process. Instructions from the third process are then scheduled for execution at a first time point by the processor. Instructions of the first process generate data structures comprising data objects linked by identifying pointers in a memory heap. The second process comprises a garbage collection process for traversing the memory heap and reclaiming memory allocated to data structures unused by the first process.
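A toy version of the interleaving step; the fixed mutator-to-collector ratio is an assumption, since the patent says only that the rule is predetermined:

```python
from itertools import islice

def interleave(mutator, collector, ratio=3):
    """Sketch of the scheduling idea: merge two instruction streams
    into one ('third') process according to a fixed rule, here
    'ratio' mutator instructions per one collector instruction."""
    mutator, collector = iter(mutator), iter(collector)
    out = []
    while True:
        chunk = list(islice(mutator, ratio))   # up to 'ratio' mutator instructions
        out.extend(chunk)
        gc = next(collector, None)             # then one GC instruction, if any remain
        if gc is not None:
            out.append(gc)
        if not chunk and gc is None:
            return out
```

The resulting merged stream is what would be handed to the processor for concurrent issue, letting garbage collection make steady progress alongside the mutator.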

Patent
12 May 2000
TL;DR: In this paper, a heap leaching method for chalcopyrite is described, in which an oxygen-containing gas is introduced to the heap as a source of oxygen for the bacteria, with the heap's saturation and temperature controlled to maintain a substantial portion of the heap at a temperature at which thermophilic bacteria leach the chalcopyrite at an economically acceptable rate.
Abstract: Disclosed is a method for heap leaching an ore containing chalcopyrite. Acidic liquor containing iron or sulphur oxidising bacteria is introduced to a heap containing chalcopyrite ore for contacting with that ore. Such contacting will liberate copper from the ore. During this process, an oxygen-containing gas is introduced to the heap as a source of oxygen for the bacteria. The oxygen-containing gas is introduced to a heap having saturation and temperature controlled to maintain a substantial portion of the heap at a temperature such that thermophilic bacteria leach the chalcopyrite at an economically acceptable rate. A heap leaching system (1) operating in accordance with the method is also disclosed.

Proceedings ArticleDOI
05 Jan 2000
TL;DR: A sweeping method which traverses only the live objects so that sweeping can be done in time dependent only on the number of live objects in the heap, which allows each collection to use time independent of the size of the heap.
Abstract: Mark and sweep garbage collectors are known for using time proportional to the heap size when sweeping memory, since all objects in the heap, regardless of whether they are live or not, must be visited in order to reclaim the memory occupied by dead objects. This paper introduces a sweeping method which traverses only the live objects, so that sweeping can be done in time dependent only on the number of live objects in the heap. This allows each collection to use time independent of the size of the heap, which can result in a large reduction of overall garbage collection time in largely empty heaps. Unfortunately, the algorithm used may slow down overall garbage collection if the heap is not so empty, so a way to select the sweeping algorithm depending on the heap occupancy is introduced, which can avoid any significant slowdown.
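The core idea, sweeping in time proportional to the number of live objects rather than the heap size, can be sketched as follows. The address-sorted live list and the free-block representation are assumptions about how such a sweep might be organised, not the paper's exact data structures:

```python
def sweep_live_only(heap_start, heap_end, live):
    """Walk the (address-sorted) list of live objects and turn each
    gap between consecutive live objects into one free block, never
    visiting dead objects individually.
    'live' is a list of (address, size) pairs; returns free blocks."""
    free = []
    cursor = heap_start
    for addr, size in sorted(live):
        if addr > cursor:
            free.append((cursor, addr - cursor))  # gap = reclaimed dead space
        cursor = addr + size
    if cursor < heap_end:
        free.append((cursor, heap_end - cursor))  # tail of the heap
    return free
```

The loop runs once per live object, so an almost-empty heap is swept almost instantly; conversely, a densely populated heap gains little, which is why the paper switches algorithms based on occupancy.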

Patent
David Wallman1
28 Dec 2000
TL;DR: In this article, the authors propose a data structure including one or more addresses of source code that creates local objects, and determine whether an address of the obtained next source code is in the data structure.
Abstract: Methods and apparatus for executing a method so that memory associated with objects not referenced outside the executed method can be reclaimed upon completion of the method. The methods include obtaining a data structure including one or more addresses of source code that creates local objects, obtaining the next source code in the method, and determining whether the address of the obtained next source code is in the data structure. When it is, a local object is created on a local heap of memory using the source code associated with the address, such that local objects are stored in memory separately from non-local objects.
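A toy rendering of the allocation-site check; the bytecode format, the `new` opcode, and the site addresses are invented for illustration:

```python
def run_method(bytecode, local_sites):
    """'local_sites' stands in for the patent's data structure of
    source-code addresses whose objects never escape the method.
    Objects created at those addresses go to a local heap that is
    reclaimed wholesale when the method returns."""
    local_heap, global_heap = [], []
    for address, op in enumerate(bytecode):
        if op == "new":
            target = local_heap if address in local_sites else global_heap
            target.append(object())
    counts = (len(local_heap), len(global_heap))
    local_heap.clear()  # entire local heap reclaimed on method exit
    return counts
```

Because the local heap is dropped in one step at method exit, none of its objects ever need to be traced or swept by the general-purpose collector.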