
The Art of Concurrency - Preface


Preface

Why Should You Read This Book?

“Multicore processors made a big splash when they were first introduced. Bowing to the physics of heat and power, processor clock speeds could not keep doubling every 18 months as they had been doing for the past three decades or more. In order to keep increasing the processing power of the next generation over the current generation, processor manufacturers began producing chips with multiple processor cores. More processors running at a reduced speed generate less heat and consume less power than single-processor chips continuing on the path of simply doubling clock speeds.” (ArtConc 2009)

“But how can we use those extra cores? We can run more than one application at a time, and each program could have a separate processor core devoted to its execution. This would give us truly parallel execution. However, there are only so many apps that we can run simultaneously. If those apps aren't very compute-intensive, we're probably wasting compute cycles, but now we're doing it in more than one processor.” (ArtConc 2009)

“Another option is to write applications that will utilize the additional cores to execute portions of the code that have a need to perform lots of calculations and whose computations are independent of each other. Writing such programs is known as concurrent programming. With any programming language or methodology, there are techniques, tricks, traps, and tools to design and implement such programs. I've always found that there is more “art” than “science” to programming. So, this book is going to give you the knowledge and one or two of the “secret handshakes” you need to successfully practice the art of concurrent programming.” (ArtConc 2009)
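
To make the idea concrete, here is a minimal sketch of my own (not code from the book): two independent calculations, a sum and a maximum, each run on their own POSIX thread. Because the two tasks write to separate results and share no mutable state, they can execute in parallel on different cores without any synchronization.

  /* Minimal sketch (not from the book): two independent computations
     run concurrently on POSIX threads. Compile: gcc demo.c -lpthread */
  #include <pthread.h>
  #include <stdio.h>

  #define N 1000000
  static double data[N];
  static double sum_result, max_result;

  static void *compute_sum(void *arg)    /* independent task 1 */
  {
      double s = 0.0;
      for (int i = 0; i < N; i++)
          s += data[i];
      sum_result = s;
      return NULL;
  }

  static void *compute_max(void *arg)    /* independent task 2 */
  {
      double m = data[0];
      for (int i = 1; i < N; i++)
          if (data[i] > m)
              m = data[i];
      max_result = m;
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;

      for (int i = 0; i < N; i++)
          data[i] = (double)(i % 100);

      /* The tasks share no writable state, so no locks are needed. */
      pthread_create(&t1, NULL, compute_sum, NULL);
      pthread_create(&t2, NULL, compute_max, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);

      printf("sum = %f, max = %f\n", sum_result, max_result);
      return 0;
  }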

“In the past, parallel and concurrent programming was the domain of a very small set of programmers who were typically involved in scientific and technical computing arenas. From now on, concurrent programming is going to be mainstream. Parallel programming will eventually become synonymous with “programming.” Now is your time to get in on the ground floor, or at least somewhere near the start of the concurrent programming evolution.” (ArtConc 2009)

Who Is This Book For?

This book is for programmers everywhere.

“I work for a computer technology company, but I'm the only computer science degree-holder on my team. There is only one other person in the office within the sound of my voice who would know what I was talking about if I said I wanted to parse an LR(1) grammar with a deterministic pushdown automaton. So, CS students and graduates aren't likely to make up the bulk of the interested readership for this text. For that reason, I've tried to keep the geeky CS material to a minimum. I assume that readers have some basic knowledge of data structures and algorithms and asymptotic efficiency of algorithms (Big-Oh notation) that is typically taught in an undergraduate computer science curriculum. For whatever else I've covered, I've tried to include enough of an explanation to get the idea across. If you've been coding for more than a year, you should do just fine.” (ArtConc 2009)

“I've written all the codes using C. Meaning no disrespect, I figured this was the lowest common denominator of programming languages that supports threads. Other languages, like Java and C#, support threads, but if I wrote this book using one of those languages and you didn't code with the one I picked, you wouldn't read my book. I think most programmers who will be able to write concurrent programs will be able to at least read C code. Understanding the concurrency methods illustrated is going to be more important than being able to write code in one particular language. You can take these ideas back to C# or Java and implement them there.” (ArtConc 2009)

“I'm going to assume that you have read a book on at least one threaded programming method. There are many available, and I don't want to cover the mechanics and detailed syntax of multithreaded programming here (since it would take a whole other book or two). I'm not going to focus on using one programming paradigm here, since, for the most part, the functionality of these overlaps. I will present a revolving usage of threading implementations across the wide spectrum of algorithms that are featured in the latter portion of the book. Where one method would differ significantly from the one used in an example, those differences will be noted.” (ArtConc 2009)

“I've included a review of the threaded programming methods that are utilized in this book to refresh your memory or to be used as a reference for any methods you have not had the chance to study. I'm not implying that you need to know all the different ways to program with threads. Knowing one should be sufficient. However, if you change jobs or find that what you know about programming with threads cannot easily solve a programming problem you have been assigned, it's always good to have some awareness of what else is available — this may help you learn and apply a new method quickly.” (ArtConc 2009)

What's in This Book?

“Chapter 1, Want to Go Faster? Raise Your Hands if You Want to Go Faster!, anticipates and answers some of the questions you might have about concurrent programming. This chapter explains the differences between parallel and concurrent, and describes the four-step threading methodology. The chapter ends with a bit of background on concurrent programming and some of the differences and similarities between distributed-memory and shared-memory programming and execution models.” (ArtConc 2009)

“Chapter 2, Concurrent or Not Concurrent?, contains a lot of information about designing concurrent solutions from serial algorithms. Two concurrent design models — task decomposition and data decomposition — are each given a thorough elucidation. This chapter gives examples of serial coding that you may not be able to make concurrent. In cases where there is a way around this, I've given some hints and tricks to find ways to transform the serial code into a more amenable form.” (ArtConc 2009)
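
To illustrate data decomposition (my own sketch, not the book's code), the loop below is split into contiguous blocks and each POSIX thread updates only the elements of its own block, so no two threads ever write the same array index.

  /* Data decomposition sketch (my illustration, not the book's code):
     the array is divided into contiguous blocks and each thread
     updates only its own block. Compile: gcc decomp.c -lpthread */
  #include <pthread.h>
  #include <stdio.h>

  #define N        1000000
  #define NTHREADS 4
  static double a[N];

  typedef struct { int start, end; } range_t;

  static void *scale_block(void *arg)
  {
      range_t *r = (range_t *)arg;
      for (int i = r->start; i < r->end; i++)
          a[i] *= 2.0;    /* each index is written by exactly one thread */
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];
      range_t   ranges[NTHREADS];
      int       chunk = N / NTHREADS;

      for (int i = 0; i < N; i++)
          a[i] = 1.0;

      for (int t = 0; t < NTHREADS; t++) {
          ranges[t].start = t * chunk;
          ranges[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
          pthread_create(&tid[t], NULL, scale_block, &ranges[t]);
      }
      for (int t = 0; t < NTHREADS; t++)
          pthread_join(tid[t], NULL);

      printf("a[0] = %f, a[N-1] = %f\n", a[0], a[N - 1]);
      return 0;
  }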

“Chapter 3, Proving Correctness and Measuring Performance, first deals with ways to demonstrate that your concurrent algorithms won't encounter common threading errors and to point out what problems you might see (so you can fix them). The second part of this chapter gives you ways to judge how much faster your concurrent implementations are running compared to the original serial execution. At the very end, since it didn't seem to fit anywhere else, is a brief retrospective of how hardware has progressed to support the current multicore processors.” (ArtConc 2009)
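
For a taste of the measurement side, the usual yardsticks are speedup and efficiency (the numbers below are my own illustration, not the book's):

  speedup    = T_serial / T_parallel = 12 s / 4 s = 3.0
  efficiency = speedup / cores       = 3.0 / 4    = 0.75 (75%)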

“Chapter 4, Eight Simple Rules for Designing Multithreaded Applications, says it all in the title. Use of these simple rules is pointed out at various points in the text.” (ArtConc 2009)

“Chapter 5, Threading Libraries, is a review of OpenMP, Intel Threading Building Blocks, POSIX threads, and Windows Threads libraries. Some words on domain-specific libraries that have been threaded are given at the end.” (ArtConc 2009)

“Chapter 6, Parallel Sum and Prefix Scan, details two concurrent algorithms. This chapter also leads you through a concurrent version of a selection algorithm that uses both of the titular algorithms as components.” (ArtConc 2009)
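
As a preview of the first of those algorithms (a sketch of mine, not the book's code), here is a parallel sum written with an OpenMP reduction: each thread accumulates a private partial sum over its share of the iterations, and the partials are combined when the loop ends.

  /* Parallel sum via an OpenMP reduction (my sketch, not the book's).
     Compile: gcc -fopenmp sum.c */
  #include <stdio.h>

  #define N 1000000
  static double a[N];

  int main(void)
  {
      double sum = 0.0;

      for (int i = 0; i < N; i++)
          a[i] = 0.5;

      /* Each thread keeps a private partial sum; the reduction
         clause combines the partials into 'sum' at the end. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < N; i++)
          sum += a[i];

      printf("sum = %f\n", sum);    /* expect 500000.000000 */
      return 0;
  }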

“Chapter 7, MapReduce, examines the MapReduce algorithmic framework; shows how to implement a handcoded, fully concurrent reduction operation; and finishes with an application of the MapReduce framework in a code to identify friendly numbers.” (ArtConc 2009)

“Chapter 8, Sorting, demonstrates some of the ins and outs of concurrent versions of Bubblesort, odd-even transposition sort, Shellsort, Quicksort, and two variations of radix sort algorithms.” (ArtConc 2009)

“Chapter 9, Searching, covers concurrent designs of search algorithms to use when your data is unsorted and when it is sorted.” (ArtConc 2009)

“Chapter 10, Graph Algorithms, looks at depth-first and breadth-first search algorithms. Also included is a discussion of computing all-pairs shortest path and the minimum spanning tree concurrently.” (ArtConc 2009)

“Chapter 11, Threading Tools, gives you an introduction to software tools that are available and on the horizon to assist you in finding threading errors and performance bottlenecks in your concurrent programs. As your concurrent code gets more complex, you will find these tools invaluable in diagnosing problems in minutes instead of days or weeks.” (ArtConc 2009)
