
Answers from top 9 papers

Benchmarking on a set of task parallel programs using a work-stealing scheduler demonstrates that our approach is generally effective.
Experimental evaluation shows that the unified scheduler is as efficient as the Cilk scheduler when tasks have no dependencies.
Experimental results indicate that our scheduler outperforms prior work in terms of task schedulability and analysis time complexity.
This information also allows the scheduler to determine the feasibility of task migrations, which is critical for the safety of any hard real-time system.
In the experiments, our scheduler clearly outperforms conventional run-time schedulers based on as-soon-as-possible techniques.
Our experimental results show that the scheduler can achieve substantial energy savings over a device that is always on.
This scheduler achieves the tight schedulability bound for this problem.
Results of the experimental study of the scheduler demonstrate its high performance.
Our scheduler can be implemented with reasonable hardware overhead.

See what other people are reading

What can I use to learn about the ROCKY program, which deals with DEM?
5 answers
To learn about the ROCKY program, which improves the robustness of the STT-MRAM cache memory hierarchy against write failures, you can refer to the research by Talebi et al. The ROCKY architecture proposes efficient replacement policies that enhance the reliability of STT-MRAM memories by reducing susceptible transitions during write operations. The study demonstrates that ROCKY can decrease the Write Error Rate (WER) of STT-MRAM cache memories by up to 35.4% with minimal performance overhead. This research offers valuable insight into the critical reliability challenges facing STT-MRAM technology, particularly the mitigation of write failures in cache memory hierarchies; studying it gives a deeper understanding of strategies for hardening STT-MRAM-based systems against reliability issues.
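To make "susceptible transitions" concrete, here is a minimal Python sketch, not taken from the paper: it counts the bit flips a cache write causes and picks the victim line that minimizes them. Real STT-MRAM error rates are asymmetric between 0-to-1 and 1-to-0 flips; this toy model treats all flips alike.

```python
def susceptible_transitions(old_line: int, new_line: int, width: int = 64) -> int:
    """Count bit positions that flip when new_line overwrites old_line.
    In STT-MRAM each flipped bit carries a chance of a write error, so
    fewer flips means a lower expected Write Error Rate (WER)."""
    mask = (1 << width) - 1
    return bin((old_line ^ new_line) & mask).count("1")

def pick_victim(cache_set: list, incoming: int) -> int:
    """Toy replacement policy: evict the way whose replacement by
    `incoming` causes the fewest susceptible transitions."""
    return min(range(len(cache_set)),
               key=lambda i: susceptible_transitions(cache_set[i], incoming))

ways = [0xFFFF0000FFFF0000, 0x123456789ABCDEF0, 0x00000000000000FF]
print(pick_victim(ways, 0x0000000000000FFF))  # -> 2 (only 4 bits differ)
```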
How does parallel tuning affect the memory bandwidth of a computer system?
5 answers
Parallel tuning significantly impacts the memory bandwidth of a computer system by optimizing the utilization of available resources and improving the efficiency of memory operations. Auto-tuning models that leverage active learning and Bayesian optimization can recommend optimal parameter values for parallel I/O operations, yielding substantial increases in I/O bandwidth; improvements of up to 11× over default parameters have been reported for scientific applications and benchmarks. Similarly, empirical auto-tuning methods that adjust blocking in Sparse BLAS operations based on memory-footprint considerations enhance performance by fine-tuning memory access patterns.

Statistical modeling and neural-network algorithms further reduce the space of possible parameter combinations, enabling more efficient exploration of the tuning parameters that affect memory bandwidth; this approach has been applied successfully to parallel sorting programs. Control points in parallel programs allow dynamic reconfiguration of application behavior, including adjustments to memory-related parameters, thereby directly influencing memory bandwidth and overall application performance. Techniques that emphasize adaptability and transparency in the tuning process, such as adjusting thread counts and processor operating frequencies, also have a direct impact on memory access patterns and efficiency.

Tools like MemSpy help identify memory bottlenecks and guide program transformations that better exploit the memory hierarchy. At the hardware level, memory devices equipped with mode registers for tuning data-signal delays show how bandwidth can be adjusted below the software layer. Automatic performance-analysis tools and dynamic tuning systems that measure execution online provide a framework for continuously improving memory bandwidth at runtime, and machine-learning techniques such as artificial neural networks can forecast MPI-IO bandwidth, offering a way to auto-tune configuration parameters that significantly affect I/O and memory bandwidth.
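As a concrete illustration of the empirical auto-tuning loop described above, the sketch below measures the effective memory bandwidth of a parallel array copy for several thread counts and keeps the fastest. The benchmark kernel and parameter space are minimal stand-ins of my own; a real tuner would replace the exhaustive loop with Bayesian optimization or active learning to prune the search space.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import numpy as np

N = 1 << 25                      # ~256 MiB of float64 per array
src = np.ones(N)
dst = np.empty_like(src)

def copy_chunk(bounds):
    lo, hi = bounds
    dst[lo:hi] = src[lo:hi]      # NumPy releases the GIL for this slice copy

def measure_bandwidth(threads: int) -> float:
    """Time a full src->dst copy split across `threads` workers and
    return the achieved bandwidth in GB/s (counting read + write)."""
    edges = np.linspace(0, N, threads + 1, dtype=int)
    chunks = list(zip(edges[:-1], edges[1:]))
    with ThreadPoolExecutor(max_workers=threads) as pool:
        t0 = time.perf_counter()
        list(pool.map(copy_chunk, chunks))
        elapsed = time.perf_counter() - t0
    return 2 * src.nbytes / elapsed / 1e9

best = max([1, 2, 4, 8, 16], key=measure_bandwidth)
print("best thread count:", best)
```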
What are the negative impacts of drones' limited battery life on the film and media industry?
4 answers
Drones' limited battery life poses significant challenges for the film and media industry. This limitation restricts the duration of aerial shots and requires drones to land for battery replacement, disrupting filming continuity. To address this issue, innovative concepts like the Flying Hot-Swap Battery (FHSB) system have been developed, enabling drones to stay airborne indefinitely by replacing batteries mid-flight. Additionally, research focuses on strategies to ensure persistent drone operation by determining the minimum number of spare drones needed and implementing efficient replacement strategies to prevent battery drainage. These advancements aim to enhance the efficiency and effectiveness of drone operations in the film and media industry despite the challenges posed by limited battery life.
What are physical resources?
4 answers
Physical resources refer to tangible assets or elements utilized in various systems, such as classrooms, computer equipment, and base stations. These resources can include sensor-equipped items, time-slot and code-channel allocations, and resources allocated to user equipment (UE). Methods and devices have been developed to manage and optimize the allocation of physical resources efficiently. For instance, in classrooms, physical resources are adjusted based on sensor readings to enhance student learning. Similarly, in telecommunications, physical resources like time slots and code channels are allocated to improve user peak rates. The dynamic allocation and reallocation of physical resources play a crucial role in optimizing system performance and resource utilization across various domains.
What is Slite?
5 answers
The term covers various processes and devices related to cutting or slitting materials: forming asymmetric gaps on rolls for stable rolling, designing slit blades with adjustable cutting-blade surfaces for longevity and versatility, and slitting machines with support frames and cutting devices for the precise cutting of rolled materials. It can also denote devices such as a blade holder with a cutting blade for slitting metallic foils, which is moved along a linear guide path at a controllable speed. Separately, Slite is the name of a system for near zero-cost scheduling of system-level threads at user level, enabling configurable scheduling policies for real-time systems with improved isolation and parallelism.
Does the PUE metric cover rack level energy losses?
5 answers
The Power Usage Effectiveness (PUE) metric, while successful in driving data center energy efficiency, does not fully capture rack-level energy losses. PUE is defined as the ratio of total facility energy to the energy delivered to IT equipment, so it ignores factors such as server fan power and can therefore overstate efficiency. To address these limitations, the ITUE and TUE metrics have been introduced, accounting for energy usage inside the IT equipment as well as outside it. Other work emphasizes that significant savings come from reducing energy consumption at the rack component level. Therefore, while PUE is valuable, metrics like ITUE and TUE that capture rack-level energy losses are needed for a comprehensive evaluation of data center energy efficiency.
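The relationship between the metrics is easy to state: PUE = total facility energy / IT equipment energy, ITUE = total IT equipment energy / energy reaching the compute components, and TUE = ITUE × PUE. A short numeric sketch with illustrative figures (my own, not measurements from any paper) shows how PUE alone hides rack-level losses:

```python
# Illustrative energy figures over some period (MWh), not measured data.
facility_energy = 1.50   # everything entering the data center
it_energy       = 1.00   # everything entering the IT racks
compute_energy  = 0.85   # what reaches CPUs/memory after fans, PSUs, VRMs

pue  = facility_energy / it_energy   # 1.50: looks reasonably efficient
itue = it_energy / compute_energy    # ~1.18: overhead inside the racks
tue  = itue * pue                    # ~1.76: true facility-to-compute overhead

print(f"PUE={pue:.2f}  ITUE={itue:.2f}  TUE={tue:.2f}")
# PUE alone hides the 15% of IT power lost inside the racks
# (server fans, power supplies, voltage regulators).
```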
What are the current research trends in dependent task performance?
5 answers
Current research trends in dependent task performance focus on enhancing scheduling algorithms to improve efficiency in distributed environments. Various studies propose innovative approaches like combining table scheduling with task replication to create heuristic algorithms that prioritize critical tasks efficiently. Additionally, research emphasizes the use of Directed Acyclic Graphs to optimize task prioritization and reduce execution costs in cloud environments. Furthermore, advancements in task-based programming models aim to accelerate task dependence management and scheduling through hardware accelerators, showing significant performance improvements over software-only implementations. Moreover, research highlights the importance of task duplication in scheduling algorithms to ensure timely satisfaction of task dependencies, ultimately enhancing resource utilization efficiency for large-scale applications in grid computing. Overall, the trends indicate a shift towards more efficient, cost-effective, and performance-driven approaches in handling dependent tasks.
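Most of the approaches above schedule tasks whose dependencies form a Directed Acyclic Graph. The following minimal list-scheduling sketch, over a hypothetical four-task diamond graph, shows the core mechanism; production algorithms layer priority heuristics such as critical-path ranking and task duplication on top of it.

```python
import heapq
from collections import deque

def list_schedule(tasks, deps, cost, workers):
    """Greedy list scheduler: dispatch each task, in topological order,
    to the worker that is free earliest, starting it only after all of
    its dependencies have finished. Returns per-task finish times."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            indeg[t] += 1
            children[d].append(t)
    ready = deque(t for t in tasks if indeg[t] == 0)
    free = [(0.0, w) for w in range(workers)]   # (time worker is free, id)
    heapq.heapify(free)
    finish = {}
    while ready:
        t = ready.popleft()
        free_at, w = heapq.heappop(free)
        start = max([free_at] + [finish[d] for d in deps.get(t, [])])
        finish[t] = start + cost[t]
        heapq.heappush(free, (finish[t], w))
        for c in children[t]:                   # release newly ready tasks
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return finish

# Hypothetical diamond DAG: A -> {B, C} -> D, scheduled on 2 workers.
tasks = ["A", "B", "C", "D"]
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
cost = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 1.0}
print(list_schedule(tasks, deps, cost, workers=2))
```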
How has the use of FIFOs (named pipes) impacted modern operating systems?
5 answers
The use of FIFOs, also known as named pipes, has significantly impacted modern operating systems by facilitating efficient data transfer and communication between different devices and processes. FIFOs play a crucial role in real-time digital system design, especially for data streaming applications like multimedia devices, and are integral to Unix and Linux interprocess communication architectures. They provide a powerful model for transferring data between devices, utilizing memory controllers, CPUs, and data controllers for seamless data delivery. Additionally, FIFOs with integrated error management ensure reliable data transfer by reacting to errors and maintaining known operational states, preventing data gaps or overlaps during communication. The efficient integration of FIFOs in modern operating systems enhances performance, resource utilization, and overall system design.
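A minimal demonstration of the mechanism on a POSIX system (the path below is arbitrary): the kernel buffers whatever the writer process sends until the reader drains it.

```python
import os

FIFO_PATH = "/tmp/demo_fifo"       # arbitrary path for this example

if os.path.exists(FIFO_PATH):
    os.unlink(FIFO_PATH)
os.mkfifo(FIFO_PATH)               # create the named pipe in the filesystem

pid = os.fork()
if pid == 0:                       # child: the writer
    with open(FIFO_PATH, "w") as fifo:
        fifo.write("hello through the kernel FIFO\n")
    os._exit(0)
else:                              # parent: the reader blocks until data arrives
    with open(FIFO_PATH) as fifo:
        print(fifo.read(), end="")
    os.waitpid(pid, 0)
    os.unlink(FIFO_PATH)           # remove the pipe's filesystem entry
```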
What's direct memory access?
4 answers
Direct Memory Access (DMA) is a method that allows peripherals to access the system's memory directly without involving the CPU. DMA controllers facilitate this process by managing data transfers between peripherals and memory, enhancing system performance by reducing CPU involvement in data transfer operations. DMA operations involve transferring data in blocks, optimizing memory access efficiency and reducing latency. DMA controllers can execute microprograms for data reorganization in shared local memory, enhancing processing capabilities. Additionally, DMA management includes cache memory address handling and TCE cache flag utilization for efficient data transfer operations. Overall, DMA streamlines data movement, enhances system performance, and minimizes CPU workload in managing memory operations.
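The descriptor-driven block transfer can be sketched in software. This is a toy model of my own, not real hardware: the "controller" walks a descriptor chain and copies each block in a single operation, leaving the CPU only setup and completion handling.

```python
from dataclasses import dataclass

@dataclass
class DmaDescriptor:
    """One entry in a (simulated) scatter-gather chain: copy `length`
    bytes from offset `src` to offset `dst`."""
    src: int
    dst: int
    length: int

def dma_run(memory: bytearray, chain: list) -> None:
    """Toy DMA controller: performs each block copy in one operation;
    real hardware would then raise a completion interrupt so the CPU
    is involved only at setup and completion."""
    for d in chain:
        memory[d.dst:d.dst + d.length] = memory[d.src:d.src + d.length]

mem = bytearray(64)
mem[0:4] = b"DATA"                                     # peripheral buffer
dma_run(mem, [DmaDescriptor(src=0, dst=32, length=4)])
print(bytes(mem[32:36]))                               # b'DATA' delivered
```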
What are the best practices for implementing backups in an OpenNebula environment?
4 answers
Best practices for implementing backups in an OpenNebula environment involve utilizing methods like creating snapshots and backups of application output in virtual machines. Additionally, employing a system backup solution can prevent data loss by duplicating and storing server data periodically. Implementing hardware-based mirroring technology can ensure direct mirroring of NVRAM data between servers at an atomic level, enhancing data redundancy and protection. For backing up encrypted data in backend storage without exposing it, a frontend client system with client-controlled keys and minimal server-side processes is recommended. Furthermore, a computer-implemented method for backups may involve identifying data volumes, locating data objects, and backing up references to data objects in archival data stores, optimizing backup efficiency. These practices collectively enhance data protection and recovery capabilities in an OpenNebula environment.
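As one concrete, hedged example of the snapshot practice: the sketch below periodically snapshots a set of VMs through OpenNebula's standard `onevm` CLI. The VM IDs are hypothetical, and the subcommand names should be verified against your OpenNebula version (newer releases also ship a dedicated `onevm backup` action).

```python
"""Periodic VM snapshot sketch for an OpenNebula environment."""
import subprocess
from datetime import datetime, timezone

VM_IDS = [42, 43]          # hypothetical IDs of the VMs to protect

def snapshot_vm(vm_id: int) -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    subprocess.run(
        ["onevm", "snapshot-create", str(vm_id), f"backup-{stamp}"],
        check=True,        # fail loudly so a backup is never silently skipped
    )

if __name__ == "__main__":
    for vm_id in VM_IDS:
        snapshot_vm(vm_id)
```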
What are the performance limitations of replicating database systems into a database for analytics?
4 answers
Replicating database systems for analytics can lead to performance limitations due to challenges such as maintaining consistent state for real-time analytics, dealing with cold-cache misses during reconfigurations causing high read-performance impact, and facing trade-offs between consistency and latency in distributed storage systems. While modern streaming systems like Apache Flink struggle to efficiently expose state to analytical queries, proposed solutions involve sending read hints to non-serving replicas to keep caches warm and maintain performance levels during reconfigurations. Additionally, managing data distribution transparently while ensuring scalability remains a challenge, with techniques like sharding impacting system complexity and inter-process communication. These factors collectively highlight the intricate balance required to optimize performance when replicating database systems for analytics.
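The read-hint idea can be sketched with hypothetical interfaces of my own: each replica keeps an in-memory cache in front of slow storage, and during a planned reconfiguration the router duplicates each read key to the incoming replica, ignoring its reply, so its cache is already warm at switchover.

```python
class Replica:
    """Hypothetical replica: an in-memory cache in front of slow storage."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def read(self, key):
        if key not in self.cache:      # cold miss: goes to slow storage
            self.cache[key] = self.store[key]
        return self.cache[key]

def serve_read(serving, warming, key):
    """Route the read to the serving replica; if a reconfiguration is in
    progress, also forward the key as a hint to the incoming replica."""
    if warming is not None:
        warming.read(key)              # fire-and-forget in a real system
    return serving.read(key)

store = {"user:1": "alice", "user:2": "bob"}
old, new = Replica(store), Replica(store)
serve_read(old, new, "user:1")
print("user:1" in new.cache)           # True: warm before switchover
```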