Conference

International Workshop on OpenMP 

About: International Workshop on OpenMP is an academic conference that publishes mainly in the areas of Compiler & Shared memory. Over its lifetime, the conference has published 372 papers, which have received 4,894 citations.


Papers
Book Chapter (DOI)
30 Jul 2001
TL;DR: Presents SPEComp, a new benchmark suite for parallel computers that targets mid-size parallel servers and includes a number of science/engineering and data processing applications.
Abstract: We present a new benchmark suite for parallel computers. SPEComp targets mid-size parallel servers. It includes a number of science/engineering and data processing applications. Parallelism is expressed in the OpenMP API. The suite includes two data sets, Medium and Large, of approximately 1.6 and 4 GB in size. Our overview also describes the organization developing SPEComp, issues in creating OpenMP parallel benchmarks, the benchmarking methodology underlying SPEComp, and basic performance characteristics.

239 citations

Book Chapter (DOI)
12 May 2008
TL;DR: Work-first schedules are found to have the best performance, but because of the restrictions that OpenMP imposes, a breadth-first scheduler is a better default choice for an OpenMP runtime.

Abstract: OpenMP is in the process of adding a tasking model that allows the programmer to specify independent units of work, called tasks, but does not specify how the scheduling of these tasks should be done (although it imposes some restrictions). We have evaluated different scheduling strategies (schedulers and cut-offs) with several applications, and we found that work-first schedules seem to have the best performance, but because of the restrictions that OpenMP imposes, a breadth-first scheduler is a better default choice for an OpenMP runtime.

133 citations

Book Chapter (DOI)
16 Sep 2013
TL;DR: The tools working group of the OpenMP Language Committee has designed OMPT—a performance tools API for OpenMP that enables performance tools to gather useful performance information from applications with low overhead and to map this information back to a user-level view of applications.
Abstract: A shortcoming of OpenMP standards to date is that they lack an application programming interface (API) to support construction of portable, efficient, and vendor-neutral performance tools. To address this issue, the tools working group of the OpenMP Language Committee has designed OMPT—a performance tools API for OpenMP. OMPT enables performance tools to gather useful performance information from applications with low overhead and to map this information back to a user-level view of applications. OMPT provides three principal capabilities: (1) runtime state tracking, which enables a sampling-based performance tool to understand what an application thread is doing, (2) callbacks and inquiry functions that enable sampling-based performance tools to attribute application performance to complete calling contexts, and (3) additional callback notifications that enable construction of more full-featured monitoring capabilities. The earnest hope of the tools working group is that OMPT be adopted as part of the OpenMP standard and supported by all standard-compliant OpenMP implementations.

100 citations

Book Chapter (DOI)
12 May 2008
TL;DR: Proposes an extension allowing runtime detection of dependencies between generated tasks, broadening the range of applications that can benefit from tasking and improving performance when load balancing or locality are critical.

Abstract: Tasking in OpenMP 3.0 has been conceived to handle the dynamic generation of unstructured parallelism. New directives have been added allowing the user to identify units of independent work (tasks) and to define points to wait for the completion of tasks (task barriers). In this paper we propose an extension to allow the runtime detection of dependencies between generated tasks, broadening the range of applications that can benefit from tasking and improving performance when load balancing or locality are critical issues. Furthermore, the paper describes our proof-of-concept implementation (SMP Superscalar) and shows preliminary performance results on an SGI Altix 4700.

89 citations

Book Chapter (DOI)
22 May 2009
TL;DR: This paper investigates whether OpenMP could still survive in this new scenario and proposes a possible way to extend the current specification to reasonably integrate heterogeneity while preserving simplicity and portability.

Abstract: OpenMP has evolved recently towards expressing unstructured parallelism, targeting the parallelization of a broader range of applications in the current multicore era. Homogeneous multicore architectures from major vendors have become mainstream, but with clear indications that a better performance/power ratio can be achieved using more specialized hardware (accelerators), such as SSE-based units or GPUs, clearly deviating from the easy-to-understand shared-memory homogeneous architectures. This paper investigates whether OpenMP could still survive in this new scenario and proposes a possible way to extend the current specification to reasonably integrate heterogeneity while preserving simplicity and portability. The paper builds on a previous proposal that extended tasking with dependencies. The runtime is in charge of data movement, task scheduling based on these data dependencies, and the appropriate selection of the target accelerator depending on system configuration and resource availability.

85 citations

Performance
Metrics
No. of papers from the Conference in previous years
Year    Papers
2022    10
2021    13
2020    22
2019    22
2018    15
2017    24