scispace - formally typeset
Author

Herb Sutter

Bio: Herb Sutter is an academic researcher. He has contributed to research on generic programming and templates. He has an h-index of 8 and has co-authored 17 publications receiving 983 citations.

Papers
01 Jan 2013
TL;DR: Looking back, it’s not much of a stretch to call 2004 the year of multicore, as many companies showed new or updated multicore processors.
Abstract: The major processor manufacturers and architectures, from Intel and AMD to Sparc and PowerPC, have run out of room with most of their traditional approaches to boosting CPU performance. Instead of driving clock speeds and straight-line instruction throughput ever higher, they are instead turning en masse to hyperthreading and multicore architectures. Both of these features are already available on chips today; in particular, multicore is available on current PowerPC and Sparc IV processors, and is coming in 2005 from Intel and AMD. Indeed, the big theme of the 2004 InStat/MDR Fall Processor Forum was multicore devices, as many companies showed new or updated multicore processors. Looking back, it’s not much of a stretch to call 2004 the year of multicore.

683 citations

Book
01 Jan 2004
TL;DR: From type definition to error handling, this book presents C++ best practices, including some that have only recently been identified and standardized-techniques you may not know even if you've used C++ for years.
Abstract: Preface.
1. Organizational and Policy Issues. Don't sweat the small stuff. (Or: Know what not to standardize.) Compile cleanly at high warning levels. Use an automated build system. Use a version control system. Invest in code reviews.
2. Design Style. Give one entity one cohesive responsibility. Correctness, simplicity, and clarity come first. Know when and how to code for scalability. Don't optimize prematurely. Don't pessimize prematurely. Minimize global and shared data. Hide information. Know when and how to code for concurrency. Ensure resources are owned by objects. Use explicit RAII and smart pointers.
3. Coding Style. Prefer compile- and link-time errors to run-time errors. Use const proactively. Avoid macros. Avoid magic numbers. Declare variables as locally as possible. Always initialize variables. Avoid long functions. Avoid deep nesting. Avoid initialization dependencies across compilation units. Minimize definitional dependencies. Avoid cyclic dependencies. Make header files self-sufficient. Always write internal #include guards. Never write external #include guards.
4. Functions and Operators. Take parameters appropriately by value, (smart) pointer, or reference. Preserve natural semantics for overloaded operators. Prefer the canonical forms of arithmetic and assignment operators. Prefer the canonical form of ++ and --. Prefer calling the prefix forms. Consider overloading to avoid implicit type conversions. Avoid overloading &&, ||, or , (comma). Don't write code that depends on the order of evaluation of function arguments.
5. Class Design and Inheritance. Be clear what kind of class you're writing. Prefer minimal classes to monolithic classes. Prefer composition to inheritance. Avoid inheriting from classes that were not designed to be base classes. Prefer providing abstract interfaces. Public inheritance is substitutability. Inherit, not to reuse, but to be reused. Practice safe overriding. Consider making virtual functions nonpublic, and public functions nonvirtual. Avoid providing implicit conversions. Make data members private, except in behaviorless aggregates (C-style structs). Don't give away your internals. Pimpl judiciously. Prefer writing nonmember nonfriend functions. Always provide new and delete together. If you provide any class-specific new, provide all of the standard forms (plain, in-place, and nothrow).
6. Construction, Destruction, and Copying. Define and initialize member variables in the same order. Prefer initialization to assignment in constructors. Avoid calling virtual functions in constructors and destructors. Make base class destructors public and virtual, or protected and nonvirtual. Destructors, deallocation, and swap never fail. Copy and destroy consistently. Explicitly enable or disable copying. Avoid slicing. Consider Clone instead of copying in base classes. Prefer the canonical form of assignment. Whenever it makes sense, provide a no-fail swap (and provide it correctly).
7. Namespaces and Modules. Keep a type and its nonmember function interface in the same namespace. Keep types and functions in separate namespaces unless they're specifically intended to work together. Don't write namespace usings in a header file or before an #include. Avoid allocating and deallocating memory in different modules. Don't define entities with linkage in a header file. Don't allow exceptions to propagate across module boundaries. Use sufficiently portable types in a module's interface.
8. Templates and Genericity. Blend static and dynamic polymorphism judiciously. Customize intentionally and explicitly. Don't specialize function templates. Don't write unintentionally nongeneric code.
9. Error Handling and Exceptions. Assert liberally to document internal assumptions and invariants. Establish a rational error handling policy, and follow it strictly. Distinguish between errors and non-errors. Design and write error-safe code. Prefer to use exceptions to report errors. Throw by value, catch by reference. Report, handle, and translate errors appropriately. Avoid exception specifications.
10. STL: Containers. Use vector by default. Otherwise, choose an appropriate container. Use vector and string instead of arrays. Use vector (and string::c_str) to exchange data with non-C++ APIs. Store only values and smart pointers in containers. Prefer push_back to other ways of expanding a sequence. Prefer range operations to single-element operations. Use the accepted idioms to really shrink capacity and really erase elements.
11. STL: Algorithms. Use a checked STL implementation. Prefer algorithm calls to handwritten loops. Use the right STL search algorithm. Use the right STL sort algorithm. Make predicates pure functions. Prefer function objects over functions as algorithm and comparer arguments. Write function objects correctly.
12. Type Safety. Avoid type switching; prefer polymorphism. Rely on types, not on representations. Avoid using reinterpret_cast. Avoid using static_cast on pointers. Avoid casting away const. Don't use C-style casts. Don't memcpy or memcmp non-PODs. Don't use unions to reinterpret representation. Don't use varargs (ellipsis). Don't use invalid objects. Don't use unsafe functions. Don't treat arrays polymorphically.
Bibliography. Summary of Summaries. Index.
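Two of the book's guidelines, "Ensure resources are owned by objects (RAII)" and "Whenever it makes sense, provide a no-fail swap", combine in the canonical copy-and-swap form of assignment. A minimal sketch (the `Buffer` class here is illustrative, not from the book):

```cpp
#include <algorithm>  // std::copy, std::swap
#include <cstddef>

// RAII: the object owns its array; the destructor releases it and never fails.
class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}
    Buffer(const Buffer& other)
        : size_(other.size_), data_(new int[other.size_]) {
        std::copy(other.data_, other.data_ + size_, data_);
    }
    ~Buffer() { delete[] data_; }

    // No-fail swap: exchanging two pointers and two sizes cannot throw.
    void swap(Buffer& other) noexcept {
        std::swap(size_, other.size_);
        std::swap(data_, other.data_);
    }

    // Canonical assignment via copy-and-swap: the parameter is copied by
    // value; if that copy throws, *this is untouched (strong guarantee).
    Buffer& operator=(Buffer other) {
        swap(other);
        return *this;
    }

    std::size_t size() const { return size_; }
    int& operator[](std::size_t i) { return data_[i]; }

private:
    std::size_t size_;
    int* data_;
};
```

Self-assignment is handled for free: the by-value parameter makes the copy before the swap ever touches `*this`.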

70 citations

Book
01 Jan 2004
TL;DR: The Japanese edition of C++ Coding Standards; its table of contents mirrors the chapters of the English edition.
Abstract: Organizational and policy issues. Design style. Coding style. Functions and operators. Class design and inheritance. Constructors, destructors, and copy assignment operators. Namespaces and modules. Templates and genericity. Error handling and exception handling. STL: containers [etc.]

68 citations

01 Jan 2008
TL;DR: 2004 was the year of multicore; in particular, multicore is available on current PowerPC and Sparc IV processors, and is coming in 2005 from Intel and AMD.
Abstract: Your free lunch will soon be over. What can you do about it? What are you doing about it? The major processor manufacturers and architectures, from Intel and AMD to Sparc and PowerPC, have run out of room with most of their traditional approaches to boosting CPU performance. Instead of driving clock speeds and straight-line instruction throughput ever higher, they are instead turning en masse to hyperthreading and multicore architectures. Both of these features are available on chips today; in particular, multicore is available on current PowerPC and Sparc IV processors, and is coming in 2005 from Intel and AMD. Indeed, the big theme of the 2004 In-Stat/MDR Fall Processor Forum (http://www.mdronline.com/fpf04/index.html) was multicore devices, with many companies showing new or updated multicore processors. Looking back, it's not much of a stretch to call 2004 the year of multicore.
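The article's answer to "what can you do about it?" is explicit concurrency: once clock speeds stop rising, programs must split work across cores themselves. A minimal sketch of that shift, summing a vector in parallel chunks (std::thread postdates the article and is used here purely as an illustration):

```cpp
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Serial code no longer speeds up for free on multicore hardware, so the
// work is partitioned explicitly: each thread sums one chunk, and the
// partial sums are combined at the end.
long long parallel_sum(const std::vector<int>& v, unsigned nthreads) {
    std::vector<long long> partial(nthreads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = v.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        const std::size_t lo = t * chunk;
        const std::size_t hi = (t + 1 == nthreads) ? v.size() : lo + chunk;
        workers.emplace_back([&partial, &v, t, lo, hi] {
            partial[t] = std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
        });
    }
    for (auto& w : workers) w.join();  // wait for every chunk
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

Each thread writes only its own slot in `partial`, so no locking is needed; the join before the final accumulate is what makes the combination safe.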

65 citations

Journal Article

36 citations


Cited by
Journal ArticleDOI
TL;DR: This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.
Abstract: The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

662 citations

Proceedings ArticleDOI
03 Oct 2002
TL;DR: A new extension to the purely functional programming language Haskell that supports compile-time meta-programming; generating code at compile time lets the programmer implement features such as polytypic programs, macro-like expansion, user-directed optimization, and the generation of supporting data structures and functions from existing ones.
Abstract: We propose a new extension to the purely functional programming language Haskell that supports compile-time meta-programming. The purpose of the system is to support the algorithmic construction of programs at compile-time. The ability to generate code at compile time allows the programmer to implement such features as polytypic programs, macro-like expansion, user-directed optimization (such as inlining), and the generation of supporting data structures and functions from existing data structures and functions. Our design is being implemented in the Glasgow Haskell Compiler, ghc.
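The paper's theme, computing at compile time, has a rough analogue in C++ (the language of this profile's author): constexpr functions that the compiler evaluates during translation. A loose sketch of the idea, not Template Haskell itself:

```cpp
// The compiler evaluates this function at translation time whenever its
// arguments are compile-time constants, so the result can feed into
// static_assert, array bounds, template arguments, and so on.
constexpr long long power(long long base, unsigned exp) {
    long long r = 1;
    while (exp--) r *= base;
    return r;
}

// Evaluated during compilation: a wrong value here is a compile error,
// not a runtime failure.
static_assert(power(2, 10) == 1024, "computed by the compiler");
```

Unlike Template Haskell, which constructs arbitrary program fragments, constexpr only computes values; the comparison is offered only to ground the idea of moving work from run time to compile time.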

572 citations

Book
31 Oct 2007
TL;DR: Using OpenMP describes how to use OpenMP in full-scale applications to achieve high performance on large-scale architectures, and explains how OpenMP is translated into explicitly multithreaded code, providing a valuable behind-the-scenes account of OpenMP program performance.
Abstract: "I hope that readers will learn to use the full expressibility and power of OpenMP. This book should provide an excellent introduction to beginners, and the performance section should help those with some experience who want to push OpenMP to its limits." --from the foreword by David J. Kuck, Intel Fellow, Software and Solutions Group, and Director, Parallel and Distributed Solutions, Intel Corporation OpenMP, a portable programming interface for shared memory parallel computers, was adopted as an informal standard in 1997 by computer scientists who wanted a unified model on which to base programs for shared memory systems. OpenMP is now used by many software developers; it offers significant advantages over both hand-threading and MPI. Using OpenMP offers a comprehensive introduction to parallel programming concepts and a detailed overview of OpenMP. Using OpenMP discusses hardware developments, describes where OpenMP is applicable, and compares OpenMP to other programming interfaces for shared and distributed memory parallel architectures. It introduces the individual features of OpenMP, provides many source code examples that demonstrate the use and functionality of the language constructs, and offers tips on writing an efficient OpenMP program. It describes how to use OpenMP in full-scale applications to achieve high performance on large-scale architectures, discussing several case studies in detail, and offers in-depth troubleshooting advice. It explains how OpenMP is translated into explicitly multithreaded code, providing a valuable behind-the-scenes account of OpenMP program performance. Finally, Using OpenMP considers trends likely to influence OpenMP development, offering a glimpse of the possibilities of a future OpenMP 3.0 from the vantage point of the current OpenMP 2.5. With multicore computer use increasing, the need for a comprehensive introduction and overview of the standard interface is clear. 
Using OpenMP provides an essential reference not only for students at both undergraduate and graduate levels but also for professionals who intend to parallelize existing codes or develop new parallel programs for shared memory computer architectures.
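The individual features the book introduces are compiler directives layered on ordinary C/C++/Fortran code. A minimal sketch of the most common construct, a parallel loop with a reduction; if the compiler is not invoked with OpenMP support, the pragma is simply ignored and the loop runs serially with the same result:

```cpp
#include <vector>

// Dot product: iterations are distributed across threads, and the
// reduction clause gives each thread a private copy of `sum` that is
// combined safely at the end of the parallel region.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(a.size()); ++i)
        sum += a[i] * b[i];
    return sum;
}
```

This graceful serial fallback is part of OpenMP's appeal over hand-threading: the annotated code remains a correct sequential program.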

465 citations

Journal ArticleDOI
01 Mar 2017
TL;DR: The field-effect transistor is approaching physical limits to further miniaturization, and rising costs with reduced return on investment appear to be slowing the pace of development; this gradual "end of Moore's law" will shift research toward new devices, new integration technologies, and new computing architectures.
Abstract: The insights contained in Gordon Moore's now famous 1965 and 1975 papers have broadly guided the development of semiconductor electronics for over 50 years. However, the field-effect transistor is approaching some physical limits to further miniaturization, and the associated rising costs and reduced return on investment appear to be slowing the pace of development. Far from signaling an end to progress, this gradual "end of Moore's law" will open a new era in information technology as the focus of research and development shifts from miniaturization of long-established technologies to the coordinated introduction of new devices, new integration technologies, and new architectures for computing.

461 citations