
Showing papers on "Heap (data structure) published in 2023"


Journal ArticleDOI
TL;DR: In this article, the factors determining the formation of and changes in the filtration properties of a heap leaching stack formed from pelletized poor sandy-clay ores are studied.
Abstract: This paper presents the results of a study of the factors determining the formation of and changes in the filtration properties of a heap leaching stack formed from pelletized poor sandy-clay ores. Methods for investigating the filtration properties of ore material at different stages of heap leaching plot operation are analyzed. Experimental and filtration work established the influence of segregation during stack dumping on the formation of zones with very different ore permeability. The construction and application of a numerical model of filtration processes in pelletized ores, based on laboratory experiments, is shown. Simulation of solution percolation at different irrigation intensities justifies optimal stack parameters in terms of geomechanical stability and prevention of solution level rise above the drainage layer.

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide recommendations for classifying and using various types of patterns in a holistic enterprise architecture pattern (HEAP) that can support transformation projects, based on a concise, composite, and layered patterns model.
Abstract: This chapter provides recommendations for classifying and using various types of patterns in a holistic enterprise architecture pattern (HEAP) that can support transformation projects. The HEAP is based on a concise, composite, and layered patterns model. A composite patterns model can be used as a template to instantiate building blocks (BBs) that implement a variety of transformation projects. The focus of this chapter is on various pattern standards that can be used in holistic and adaptable BBs to support an optimal set of enterprise architectures (EAs). A patterns-based EA offers BBs that, as in civil engineering, can support colossal and complex projects. For such complex projects, there is a need to create a common denominator pattern that integrates other standard patterns in the form of a HEAP, where the HEAP is the backbone of this research and development project (RDP). This RDP proves that the HEAP can be used for building dynamic and flexible EAs.

3 citations


Journal ArticleDOI
TL;DR: In this article, an unresolved discrete element method (DEM) coupled with computational fluid dynamics (CFD) is used to model in detail the detachment of individual adhered dust particles in a free-stream flow.

2 citations


Journal ArticleDOI
TL;DR: Novel "stackable" assertions are proposed that keep track of the existence of stack-to-heap pointers without explicitly recording their origin, together with a way to reason about closures: concrete heap-allocated data structures that implement the abstract concept of a first-class function.
Abstract: We present a Separation Logic with space credits for reasoning about heap space in a sequential call-by-value lambda-calculus equipped with garbage collection and mutable state. A key challenge is to design sound, modular, lightweight mechanisms for establishing the unreachability of a block. Prior work in this area uses pointed-by assertions to keep track of the predecessors of every block, but is carried out in the setting of an assembly-like programming language. We take up the challenge in the setting of a high-level language, where a key problem is to identify and reason about the memory locations that the garbage collector considers as roots. For this purpose, we propose novel "stackable" assertions, which keep track of the existence of stack-to-heap pointers without explicitly recording their origin. Furthermore, we explain how to reason about closures -- concrete heap-allocated data structures that implement the abstract concept of a first-class function. We demonstrate the expressiveness and tractability of our program logic via a range of examples, including recursive functions on linked lists, objects implemented using closures and mutable internal state, recursive functions in continuation-passing style, and three stack implementations that exhibit different space bounds. These last three examples illustrate reasoning about the reachability of the items stored in a container as well as amortized reasoning about space. All of our results are proved in Coq on top of Iris.

2 citations


Journal ArticleDOI
TL;DR: In this paper, three models are defined: the agglomeration model with intra-particle pore networks, the crushed ore model with inter-particle pores, and the mixed segregated model with both intra- and inter-particle pores.

2 citations



Journal ArticleDOI
Florencia Acuña Ramírez
TL;DR: In this paper, the hydraulic conductivity function (HCF) is modelled as a discontinuous J-curve, with the HCF equal to the saturated hydraulic conductivity (Ks) above the air-entry point, rather than a continuous J-curve with a fitted Ks value.

1 citation



Proceedings ArticleDOI
17 Jan 2023
TL;DR: A high-performance parallel agent-based simulation engine for large-scale studies is presented, using an optimized grid to search for neighbors and parallelizing the merging of thread-local results.
Abstract: Agent-based modeling plays an essential role in gaining insights into biology, sociology, economics, and other fields. However, many existing agent-based simulation platforms are not suitable for large-scale studies due to the low performance of the underlying simulation engines. To overcome this limitation, we present a novel high-performance simulation engine. We identify three key challenges for which we present the following solutions. First, to maximize parallelization, we present an optimized grid to search for neighbors and parallelize the merging of thread-local results. Second, we reduce the memory access latency with a NUMA-aware agent iterator, agent sorting with a space-filling curve, and a custom heap memory allocator. Third, we present a mechanism to omit the collision force calculation under certain conditions. Our evaluation shows an order of magnitude improvement over Biocellion, three orders of magnitude speedup over Cortex3D and NetLogo, and the ability to simulate 1.72 billion agents on a single server. Supplementary Materials, including instructions to reproduce the results, are available at: https://doi.org/10.5281/zenodo.6463816

1 citation
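
One of the memory-latency techniques the abstract names is agent sorting with a space-filling curve. The following is a minimal Java sketch of that idea using Morton (Z-order) codes, assuming agents live on an integer 3D grid; it illustrates the technique only and is not the engine's actual code.

```java
import java.util.Arrays;
import java.util.Comparator;

/** Sorts agents along a Morton (Z-order) space-filling curve so that agents
 *  that are close in space end up close in memory. */
public class MortonSort {

    /** Spreads the low 21 bits of v so they occupy every third bit position. */
    static long spreadBits(long v) {
        v &= 0x1FFFFFL;
        v = (v | (v << 32)) & 0x1F00000000FFFFL;
        v = (v | (v << 16)) & 0x1F0000FF0000FFL;
        v = (v | (v << 8))  & 0x100F00F00F00F00FL;
        v = (v | (v << 4))  & 0x10C30C30C30C30C3L;
        v = (v | (v << 2))  & 0x1249249249249249L;
        return v;
    }

    /** Interleaves three 21-bit grid coordinates into a 63-bit Morton code. */
    static long mortonCode(int x, int y, int z) {
        return spreadBits(x) | (spreadBits(y) << 1) | (spreadBits(z) << 2);
    }

    record Agent(int x, int y, int z) {}

    public static void main(String[] args) {
        Agent[] agents = {
            new Agent(5, 9, 2), new Agent(90, 1, 77), new Agent(5, 9, 3)
        };
        // Spatial neighbors now sort next to each other, which improves cache
        // locality when the simulation iterates over agents.
        Arrays.sort(agents,
            Comparator.comparingLong((Agent a) -> mortonCode(a.x(), a.y(), a.z())));
        System.out.println(Arrays.toString(agents));
    }
}
```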


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the extent to which a debris bed reaches dryout during ex-vessel cooling in a pre-flooded reactor cavity if a postulated severe accident occurs.

1 citation


Proceedings ArticleDOI
25 Mar 2023
TL;DR: TeraHeap as mentioned in this paper extends the managed runtime to use a second high-capacity heap (H2) over a fast storage device, allowing big data analytics frameworks to leverage knowledge about objects to populate H2.
Abstract: Big data analytics frameworks, such as Spark and Giraph, need to process and cache massive amounts of data that do not always fit on the managed heap. Therefore, frameworks temporarily move long-lived objects outside the managed heap (off-heap) on a fast storage device. However, this practice results in (1) high serialization/deserialization (S/D) cost and (2) high memory pressure when off-heap objects are moved back to the heap for processing. In this paper, we propose TeraHeap, a system that eliminates S/D overhead and expensive GC scans for a large portion of the objects in big data frameworks. TeraHeap relies on three concepts. (1) It eliminates S/D cost by extending the managed runtime (JVM) to use a second high-capacity heap (H2) over a fast storage device. (2) It offers a simple hint-based interface, allowing big data analytics frameworks to leverage knowledge about objects to populate H2. (3) It reduces GC cost by fencing the garbage collector from scanning H2 objects while maintaining the illusion of a single managed heap. We implement TeraHeap in OpenJDK and evaluate it with 15 widely used applications in two real-world big data frameworks, Spark and Giraph. Our evaluation shows that for the same DRAM size, TeraHeap improves performance by up to 73% and 28% compared to native Spark and Giraph, respectively. Also, it provides better performance by consuming up to 4.6× and 1.2× less DRAM capacity than native Spark and Giraph, respectively. Finally, it outperforms Panthera, a state-of-the-art garbage collector for hybrid memories, by up to 69%.
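
The abstract describes "a simple hint-based interface" for placing long-lived objects on H2, but not its concrete API. The sketch below imagines what such an interface could look like; every name in it (H2Heap, hint, CachedPartition) is invented for illustration and is not TeraHeap's real interface.

```java
/** Hypothetical sketch of a hint-based placement interface. The names are
 *  invented for illustration; TeraHeap's actual API is not given in the
 *  abstract. */
final class H2Heap {
    /** No-op stand-in: a real runtime would record the hint so the JVM can
     *  migrate the object graph to the device-backed heap (H2) and fence the
     *  garbage collector from scanning it. */
    static void hint(Object root) { }
}

class CachedPartition {
    private final long[] payload = new long[1_000_000];

    void cache() {
        // A framework like Spark could hint its long-lived cached partitions
        // instead of serializing them to an off-heap store.
        H2Heap.hint(this);
    }
}
```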

Journal ArticleDOI
TL;DR: In this paper, a symbolic abstraction approach for modeling the heap in Java-like programs is presented, parameterized with a family of relations among references to offer various levels of precision based on user preferences.
Abstract: In the realm of sound object-oriented program analyses for information-flow control, very few approaches adopt flow-sensitive abstractions of the heap that enable a precise modeling of implicit flows. To tackle this challenge, we advance a new symbolic abstraction approach for modeling the heap in Java-like programs. We use a store-less representation that is parameterized with a family of relations among references to offer various levels of precision based on user preferences. This enables us to automatically infer polymorphic information-flow guards for methods via a co-reachability analysis of a symbolic finite-state system. We instantiate the heap abstraction with three different families of relations. We prove the soundness of our approach and compare the precision and scalability obtained with each instantiated heap domain by using the IFSpec benchmarks and real-life applications.

Journal ArticleDOI
TL;DR: In this paper, a continuum approach to modeling segregation of size-bidisperse granular materials in unsteady bounded heap flow is presented as a prototype for modeling segregation in other time-varying flows.

Journal ArticleDOI
TL;DR: In this paper, a task scheduling problem for a cloud computing environment is formulated using the M/M/n queuing model, and a priority assignment algorithm is designed that employs a new data structure, the waiting time matrix, to assign priority to individual tasks upon arrival.
Abstract: In this paper, a task scheduling problem for a cloud computing environment is formulated by using the M/M/n queuing model. A priority assignment algorithm is designed to employ a new data structure named the waiting time matrix to assign priority to individual tasks upon arrival. In addition to this, the waiting queue implements a unique concept based on the principle of the Fibonacci heap for extracting the task with the highest priority. This work introduces a parallel algorithm for task scheduling in which the priority assignment to task and building of heap is executed in parallel with respect to the non-preemptive and preemptive nature of tasks. The proposed work is illustrated in a step-by-step manner with an appropriate number of tasks. The performance of the proposed model is compared in terms of overall waiting time and CPU time against some existing techniques like BATS, IDEA, and BATS+BAR to determine the efficacy of our proposed algorithms. Additionally, three distinct scenarios have been considered to demonstrate the competency of the task scheduling method in handling tasks with different priorities. Furthermore, the task scheduling algorithm is also applied in a dynamic cloud computing environment.
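
As a minimal sketch of the extract-highest-priority pattern the abstract describes: Java's standard library has no Fibonacci heap, so java.util.PriorityQueue (a binary heap) stands in below, and the priority rule (class first, then arrival time) is an illustrative assumption, not the paper's waiting-time-matrix formula.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/** Illustrative priority-based task dispatch. PriorityQueue (a binary heap)
 *  stands in for the paper's Fibonacci heap; both support inserting tasks
 *  and extracting the one with the highest priority. */
public class TaskQueue {
    record Task(String id, int priorityClass, long arrivalMillis) {}

    private final PriorityQueue<Task> waiting = new PriorityQueue<>(
        Comparator.comparingInt(Task::priorityClass)       // lower class = higher priority
                  .thenComparingLong(Task::arrivalMillis)  // FIFO within a class
    );

    void submit(Task t) { waiting.add(t); }      // O(log n) insert
    Task dispatch()     { return waiting.poll(); } // extract highest-priority task

    public static void main(String[] args) {
        TaskQueue q = new TaskQueue();
        q.submit(new Task("backup", 2, 10));
        q.submit(new Task("query",  1, 20));
        System.out.println(q.dispatch().id()); // prints "query"
    }
}
```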

Posted ContentDOI
20 May 2023
TL;DR: Locksynth as discussed by the authors is a tool that automatically derives synchronization needed for destructive updates to concurrent data structures that involve a constant number of shared heap memory write operations, such as dictionary operations for linked lists and binary search trees.
Abstract: We present Locksynth, a tool that automatically derives the synchronization needed for destructive updates to concurrent data structures that involve a constant number of shared heap memory write operations. Locksynth serves as the implementation of our prior work on deriving abstract synchronization code. Designing concurrent data structures involves inferring correct synchronization code, starting with a prior understanding of the sequential data structure's operations; an understanding of the shared memory model and the synchronization primitives is also required. The reasoning involved in transforming a sequential data structure into its concurrent version can be performed using Answer Set Programming (ASP), and we mechanized our approach in previous work. The reasoning involves deduction and abduction, which can be succinctly modeled in ASP. We assume that the abstract sequential code of the data structure's operations is provided, alongside axioms that describe concurrent behavior. This information is used to automatically derive concurrent code for that data structure, such as dictionary operations for linked lists and binary search trees that involve a constant number of destructive update operations. We are also able to infer the correct set of locks (though without code synthesis) for external height-balanced binary search trees that involve left/right tree rotations. Locksynth performs the analyses required to infer correct sets of locks and, as a final step, also derives the C++ synchronization code for the synthesized data structures. We also provide a performance analysis comparing the C++ code synthesized by Locksynth with the hand-crafted versions available from the Synchrobench microbenchmark suite. To the best of our knowledge, our tool is the first to employ ASP as a backend reasoner for concurrent data structure synthesis.
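
For context, the kind of hand-written synchronization such a tool aims to derive for linked-list dictionary operations is the classic hand-over-hand (lock-coupling) scheme. The Java sketch below is an illustrative hand-crafted version, not Locksynth output (which is C++):

```java
import java.util.concurrent.locks.ReentrantLock;

/** Sorted linked list with hand-over-hand (lock-coupling) insertion: the
 *  thread holds at most two node locks at a time while walking the list.
 *  Illustrative sketch, not code synthesized by Locksynth. */
public class ConcurrentSortedList {
    private static final class Node {
        final int key;
        Node next;
        final ReentrantLock lock = new ReentrantLock();
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    // Sentinel head avoids special-casing insertion at the front.
    private final Node head = new Node(Integer.MIN_VALUE, null);

    public void insert(int key) {           // duplicates are simply inserted
        Node pred = head;
        pred.lock.lock();
        Node curr = pred.next;
        if (curr != null) curr.lock.lock();
        try {
            while (curr != null && curr.key < key) {
                pred.lock.unlock();          // release the trailing lock...
                pred = curr;
                curr = curr.next;
                if (curr != null) curr.lock.lock(); // ...after taking the next one
            }
            pred.next = new Node(key, curr); // pred is locked: safe to link
        } finally {
            if (curr != null) curr.lock.unlock();
            pred.lock.unlock();
        }
    }
}
```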

Book ChapterDOI
01 Jan 2023
TL;DR: A detailed description of ancient garbage collectors is beyond the scope of this book, so this chapter limits its description to the three garbage collectors most often in use: the G1 garbage collector, the Shenandoah GC, and the Zero garbage collector, as mentioned in this paper.
Abstract: Garbage collection is the cleanup process that Java uses to get rid of unused objects, freeing associated heap memory. Over time, Java garbage collectors went through many changes, and the historical development of garbage collectors is actually quite interesting. However, a detailed description of ancient garbage collectors is beyond the scope of this book, so this chapter limits its description to the three garbage collectors most often in use: the G1 Garbage Collector, the Shenandoah GC, and the Zero Garbage Collector. The inclined reader can find information about the other garbage collectors on the web.
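
Collectors are chosen with JVM flags at launch (G1 via -XX:+UseG1GC, the default since JDK 9; Shenandoah via -XX:+UseShenandoahGC), and -Xlog:gc streams collection events. A small probe using only the standard java.lang.management API reports which collectors the running JVM actually picked; the class name GcProbe is mine, not the chapter's:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

/** Prints which garbage collectors the running JVM is using. Launch with,
 *  for example:
 *    java -XX:+UseG1GC         GcProbe
 *    java -XX:+UseShenandoahGC GcProbe
 *  Add -Xlog:gc to watch collection events on stdout. */
public class GcProbe {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + " collections=" + gc.getCollectionCount());
        }
    }
}
```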

Book ChapterDOI
01 Jun 2023

Journal ArticleDOI
TL;DR: In this article, a self-aerated composting technique is designed and developed to manage and mitigate the environmental and health hazards arising from poultry litter; the technique is found to be environmentally safe, functional, and cost-effective.
Abstract: This study elucidates the present scenario of poultry litter management practices and the development of a technique for safe management of litter at the farmer's level. Survey-based data were collected through pre-tested questionnaires from 42 purposively selected poultry farms within the Mymensingh, Gazipur, Netrokona, and Jamalpur districts. Large amounts of poultry litter are generated daily by broiler and layer farms, and half of the farmers dumped this litter in open places, causing serious environmental and health hazards. A self-aerated composting technique was designed and developed to effectively manage and mitigate these hazards. A compost heap was prepared from rice straw, water hyacinth, and poultry litter in the proportion 1:2:4 by weight, at the optimum C:N ratio of 30:1, with provision for entraining air into the bulk compost heap. Temperature and moisture content were observed at three-day intervals, and pH, C:N ratio, volume, and microbial properties at seven-day intervals, throughout the 60-day composting period. Analysis was performed on representative samples taken from the compost heap by random sampling. The quality of the compost in terms of nitrogen, phosphorus, potassium, and organic carbon was evaluated in accordance with the Indian and Australian Standards. The technique is found to be environmentally safe, functional, and cost-effective. The developed self-aerated composting technique would be an alternative option for safe management of poultry litter for farmers in rural areas.
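
As a worked illustration of the arithmetic behind hitting the stated optimum C:N of 30:1 from a 1:2:4 mix (a standard mass-balance, not taken from the paper, and ignoring moisture correction):

```latex
% C:N ratio of a blended heap: mass-weighted carbon over nitrogen.
% m_i = mass of ingredient i; C_i, N_i = its carbon and nitrogen fractions.
\[
  (\mathrm{C{:}N})_{\text{mix}} \;=\; \frac{\sum_i m_i\,C_i}{\sum_i m_i\,N_i},
  \qquad
  m_{\text{straw}} : m_{\text{hyacinth}} : m_{\text{litter}} = 1 : 2 : 4 .
\]
```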

Journal ArticleDOI
TL;DR: In this article, the fundamental heap of framed links is defined using group presentations, a cocycle invariant is constructed using ternary self-distributive cohomology, and, conversely, the invariant values can be used to derive algebraic properties of the heap cohomology.
Abstract: A heap is a set with a certain ternary operation that is self-distributive and exemplified by a group with the operation xy⁻¹z. We introduce and investigate framed link invariants using heaps. In analogy with the knot group, we define the fundamental heap of framed links using group presentations. The fundamental heap is determined for some classes of links such as certain families of torus and pretzel links. We show that for these families of links there exist epimorphisms from fundamental heaps to Vinberg and Coxeter groups, implying that the corresponding groups are infinite. A relation to the Wirtinger presentation is also described. The cocycle invariant is defined using ternary self-distributive (TSD) cohomology, by means of a state sum that uses ternary heap cocycles as weights. This invariant corresponds to a rack cocycle invariant for the rack constructed by doubling of a heap, while colorings can be regarded as heap morphisms from the fundamental heap. For the construction of the invariant, computational methods for the heap cohomology are first developed. It is shown that the cohomology splits into two types, called degenerate and nondegenerate, and that the degenerate part is one-dimensional. Subcomplexes are constructed based on group cosets, which allow computations of the nondegenerate part. Computations of the cocycle invariants are presented using the cocycles constructed, and conversely, it is proved that the invariant values can be used to derive algebraic properties of the cohomology.
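
For reference, the standard definitions behind this abstract (well-known facts, not specific to this paper): a heap is a set with a ternary bracket satisfying para-associativity and the Mal'cev identities, every group gives one via the operation above, and that bracket is ternary self-distributive:

```latex
% A heap is a set H with a ternary bracket [-,-,-] satisfying
\[
  [[a,b,c],d,e] = [a,b,[c,d,e]], \qquad [a,a,b] = b = [b,a,a].
\]
% Every group G becomes a heap under
\[
  [x,y,z] = x y^{-1} z,
\]
% and this bracket is ternary self-distributive (TSD):
\[
  [[x,y,z],u,v] = [[x,u,v],[y,u,v],[z,u,v]],
\]
% since both sides expand to x y^{-1} z u^{-1} v.
```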

Journal ArticleDOI
TL;DR: In this article, the authors propose a new memory optimization method, called dynamic code compression, which dynamically compresses JavaScript source code and keeps its compressed form instead of the source string, re-designing the internal structure of the JavaScript engine to compress source code efficiently without incurring any conflict.
Abstract: Web applications created using web languages such as HTML, CSS, and JavaScript are widely used regardless of execution environment, owing to their portability and popularity. Since the JavaScript language is conventionally used for complex computations in large-scale web apps, many optimization techniques have been proposed to accelerate JavaScript performance. However, most of these optimizations speed up JavaScript engines at the expense of consuming more memory, so memory consumption becomes an additional concern in the JavaScript field. From our research on memory status, we found that a substantial portion of the heap memory is allocated for JavaScript source code, particularly in lightweight JavaScript engines, ranging from 13.2% up to a maximum of 52.5% of the entire heap. To resolve this memory issue, this article suggests a new memory optimization method, called dynamic code compression, which dynamically compresses the source code and keeps its compressed form instead of the source string. A novel heuristic is proposed to compress the source code in a timely manner. We also re-design the internal structure of the JavaScript engine to compress the source code efficiently without incurring any conflict. Using our code compression method, we could reduce the entire heap memory by up to 43.3% and consistently downsize the overall heap. In an evaluation on standard benchmarks, our approach showed just 2.7% degradation in performance with negligible compression overhead, proving the feasibility of the code compression technique.
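
To make the core idea concrete (keep source compressed, inflate only on demand), here is a minimal Java sketch using java.util.zip; the class name and the decompress-on-access policy are my illustrative assumptions, since the real system lives inside a JavaScript engine's string storage:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Illustrative sketch of "dynamic code compression": store a function's
 *  source compressed and inflate it only when it is actually needed
 *  (e.g., for lazy compilation). Models only the idea, not the engine. */
public class CompressedSource {
    private final byte[] compressed;
    private final int originalLength;

    CompressedSource(String source) {
        byte[] raw = source.getBytes(StandardCharsets.UTF_8);
        this.originalLength = raw.length;
        Deflater d = new Deflater(Deflater.BEST_SPEED); // cheap: runs often
        d.setInput(raw);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!d.finished()) out.write(buf, 0, d.deflate(buf));
        d.end();
        this.compressed = out.toByteArray();
    }

    /** Decompresses on demand; a real engine would cache hot functions. */
    String source() throws DataFormatException {
        Inflater inf = new Inflater();
        inf.setInput(compressed);
        byte[] raw = new byte[originalLength];
        int n = 0;
        while (n < raw.length && !inf.finished()) {
            n += inf.inflate(raw, n, raw.length - n);
        }
        inf.end();
        return new String(raw, 0, n, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        CompressedSource f = new CompressedSource("function add(a, b) { return a + b; }");
        System.out.println(f.source()); // inflated back on demand
    }
}
```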

Posted ContentDOI
23 Mar 2023
TL;DR: In this paper, it is shown that any Hopf-Galois co-object has the natural structure of a Hopf heap, with the translation Hopf algebra isomorphic to the acting Hopf algebra.
Abstract: To every Hopf heap or quantum cotorsor of Grunspan a Hopf algebra of translations is associated. This translation Hopf algebra acts on the Hopf heap making it a Hopf-Galois co-object. Conversely, any Hopf-Galois co-object has the natural structure of a Hopf heap with the translation Hopf algebra isomorphic to the acting Hopf algebra. It is then shown that this assignment establishes an equivalence between categories of Hopf heaps and Hopf-Galois co-objects.

Proceedings ArticleDOI
06 Jun 2023
TL;DR: In this paper, the authors explore the ability of neural network models to predict heap allocation properties from statically available code, mapping the trade-off space of this approach, investigating promising directions, motivating those directions with data analysis and experiments, and highlighting challenges that future work needs to overcome.
Abstract: Memory allocators and runtime systems can leverage dynamic properties of heap allocations – such as object lifetimes, hotness or access correlations – to improve performance and resource consumption. A significant amount of work has focused on approaches that collect this information in performance profiles and then use it in new memory allocator or runtime designs, both offline (e.g., in ahead-of-time compilers) and online (e.g., in JIT compilers). This is a special instance of profile-guided optimization. This approach introduces significant challenges: 1) The profiling oftentimes introduces substantial overheads, which are prohibitive in many production scenarios, 2) Creating a representative profiling run adds significant engineering complexity and reduces deployment velocity, and 3) Profiles gathered ahead of time or during the warm-up phase of a server are often not representative of all workload behavior and may miss important corner cases. In this paper, we investigate a fundamentally different approach. Instead of deriving heap allocation properties from profiles, we explore the ability of neural network models to predict them from the statically available code. As an intellectual abstract, we do not offer a conclusive answer but describe the trade-off space of this approach, investigate promising directions, motivate these directions with data analysis and experiments, and highlight challenges that future work needs to overcome.

Book ChapterDOI
01 Jan 2023
TL;DR: In this paper, the authors develop and test an algorithm for the economic feasibility assessment of combine harvester improvement, based on a comprehensive approach that analyzes efficiency for both manufacturers and consumers of the machine.
Abstract: The algorithm for estimating the effectiveness of improving a combine harvester is developed and tested in this study, including the identification of efficiency indicators for both the manufacturer and the consumer of the machine. The study applies a comprehensive approach in order to improve the methodology for the economic feasibility assessment of enhancing agricultural machinery by analyzing efficiency for manufacturers and consumers of the machine. The proposed method includes the use of suggested coefficients of improved design efficiency and consumer satisfaction. The suggested algorithm is tested on the example of evaluating a project for improving the combine harvester design by enhancing the straw heap cleaning system in a straw separator. The analysis shows the economic viability of the technical solution under consideration and indicates that the suggested method is promising.
Keywords: Combine harvester improvement; Economic feasibility; Investment effectiveness evaluation; Consumer benefits from harvester improvement

Journal ArticleDOI
09 Jan 2023-Leonardo
TL;DR: In this article, the authors focus on three digitization projects for Mani heaps, which leverage digitization methods to introduce, analyze, and represent Mani heaps in support of scholarly analysis and casual appreciation.
Abstract: A kind of cultural heritage, a Mani heap is a pile of many carved stones used as a religious altar for prayer in daily life in Tibet. It has distinctive characteristics and high research value, providing extensive content and abundant information. This paper presents an overview of three digitization projects for Mani heaps. Based on data collected in five field surveys, the three projects leverage digitization methods to introduce, analyze, and represent Mani heaps to support scholarly analysis and casual appreciation. The authors explore these projects to study, represent, and conserve Mani heaps, which are often ignored by researchers.

Book ChapterDOI
08 Feb 2023

Posted ContentDOI
24 Apr 2023
TL;DR: The first worst-case linear-time algorithm for selection was discovered in 1973, whereas linear-time binary heap construction was first published in 1964; as discussed by the authors, another worst-case linear selection algorithm can be built on binary heap construction, implemented in place, and shown to perform similarly to in-place median of medians.
Abstract: The first worst-case linear-time algorithm for selection was discovered in 1973; however, linear-time binary heap construction was first published in 1964. Here we describe another worst-case linear selection algorithm, which is simply implemented and uses binary heap construction as its principal engine. The algorithm is implemented in place and shown to perform similarly to in-place median of medians.
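
The 1964 result the abstract leans on is Floyd's bottom-up heap construction. Below is a Java sketch of that building block only (the "principal engine"), not the paper's full selection procedure:

```java
/** Floyd's 1964 bottom-up heap construction: sift down each internal node,
 *  from the last one to the root. Total work is O(n) because most nodes sit
 *  near the leaves and sift only a short distance. */
public class Heapify {
    static void buildMinHeap(int[] a) {
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i, a.length);
    }

    static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;                                 // left child
            if (child + 1 < n && a[child + 1] < a[child]) child++; // smaller child
            if (a[i] <= a[child]) break;                           // heap property holds
            int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
        }
    }

    public static void main(String[] args) {
        int[] a = {9, 4, 7, 1, 3, 8};
        buildMinHeap(a);
        System.out.println(a[0]); // 1: the minimum sits at the root
    }
}
```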


Journal ArticleDOI
TL;DR: In this article, the authors define a sequence of numbers to be heapable if its elements can be sequentially inserted to form a binary heap, i.e., a binary tree in which every child is greater than its parent.
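
A sketch of how such a definition can be checked: track the open child slots, keyed by the parent's value, and attach each arriving element to the largest eligible parent. This greedy choice is known from earlier work on heapable sequences; the Java below is my illustration under the strict child-greater-than-parent rule, not this paper's code.

```java
import java.util.TreeMap;

/** Greedy heapability test: each element, in arrival order, must attach
 *  below some earlier, strictly smaller element with a free child slot.
 *  Attaching to the largest eligible parent is a known safe greedy choice. */
public class Heapable {
    static boolean isHeapable(int[] seq) {
        if (seq.length == 0) return true;
        TreeMap<Integer, Integer> slots = new TreeMap<>(); // open slots by parent value
        slots.put(seq[0], 2);                              // the root opens two child slots
        for (int i = 1; i < seq.length; i++) {
            Integer parent = slots.lowerKey(seq[i]);       // largest parent value < seq[i]
            if (parent == null) return false;              // no legal slot: not heapable
            if (slots.merge(parent, -1, Integer::sum) == 0) slots.remove(parent);
            slots.merge(seq[i], 2, Integer::sum);          // the new node opens two slots
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isHeapable(new int[]{1, 3, 2, 4})); // true
        System.out.println(isHeapable(new int[]{3, 2, 1}));    // false
    }
}
```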

Journal ArticleDOI
TL;DR: In this paper, a methodology is proposed to study the dynamics of copper recovery in heap leaching by fitting analytical models that capture the leaching dynamics produced by variations in leaching agents as a function of the feed.
Abstract: Analytical models are of vital importance for studying the dynamics of complex systems, including the heap leaching process. In this work, a methodology is proposed to study the dynamics of copper recovery in heap leaching by fitting analytical models that capture the leaching dynamics produced by variations in leaching agents as a function of the feed. A first mode of operation keeps the leaching agent (H2SO4) fixed, and a second mode adds Cl− to accelerate the reaction kinetics of sulfide minerals (secondary sulfides). Mineral recovery was modeled for the different modes of operation as a function of the control parameters: time, heap height, leach flow rate, and feed granulometry. The results indicate that the recovery of ore from sulfide minerals is proportional to the addition of Cl−, reaching recovery levels of approximately 60%, very close to the 65% recovery of conventional oxide leaching using only H2SO4 as the leaching agent. Additionally, high copper recoveries from sulfide ores are achieved at medium Cl− concentrations, but the increase in recovery at high Cl− concentrations is marginal.