Shared-Memory Parallelism Can Be Simple, Fast, and Scalable

Julian Shun
ISBN: 9781970001884 | PDF ISBN: 9781970001891
Hardcover ISBN: 9781970001914
Copyright © 2017 | 444 Pages | Publication Date: June 2017

DOI: 10.1145/3018787

Ordering Options:
Paperback: $99.95
E-book: $79.96
Paperback & E-book Combo: $124.94
Hardcover: $129.95
Hardcover & E-book Combo: $162.44


Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to allow the solutions developed to run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.

The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel; together, these yield deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using short, concise code; delivers performance competitive with that of highly optimized code; and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques that simultaneously reduce space usage and improve parallel performance, making it the first graph processing system to support in-memory graph compression.

The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.

This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.

Table of Contents

1. Introduction
2. Preliminaries and Notation

Part I: Programming Techniques for Deterministic Parallelism
3. Internally Deterministic Parallelism: Techniques and Algorithms
4. Deterministic Parallelism in Sequential Iterative Algorithms
5. A Deterministic Phase-Concurrent Parallel Hash Table
6. Priority Updates: A Contention-Reducing Primitive for Deterministic Programming

Part II: Large-Scale Shared-Memory Graph Analytics
7. Ligra: A Lightweight Graph Processing Framework for Shared Memory
8. Ligra+: Adding Compression to Ligra

Part III: Parallel Graph Algorithms
9. Linear-Work Parallel Graph Connectivity
10. Parallel and Cache-Oblivious Triangle Computations

Part IV: Parallel String Algorithms
11. Parallel Cartesian Tree and Suffix Tree Construction
12. Parallel Computation of Longest Common Prefixes
13. Parallel Lempel-Ziv Factorization
14. Parallel Wavelet Tree Construction
15. Conclusion and Future Work

References
Index

About the Author(s)

Julian Shun, University of California, Berkeley
Julian Shun obtained his Ph.D. in Computer Science from Carnegie Mellon University, advised by Guy Blelloch. He obtained his undergraduate degree in Computer Science from UC Berkeley. During his Ph.D., Julian developed Ligra, a framework for large-scale graph processing in shared memory, as well as algorithms for graph and text analytics that are efficient both in theory and in practice. He also developed methods for writing deterministic parallel programs and created the Problem Based Benchmark Suite for benchmarking parallel programs. Julian is currently a Miller Research Fellow at UC Berkeley.
