The Art of Concurrency - Glossary

Adjacency matrix - “A graph representation method that uses a 2D array. Each row and column corresponds to a node from the graph. If an edge exists between two nodes, a 1 value is placed at the intersection of the two nodes; otherwise, a 0 is stored to show that no edge exists.” (ArtConc 2009)
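
As a sketch in Python (illustrative; the node labels and edge list are made up), the 2D-array representation looks like this:

```python
# Build the adjacency matrix for a small undirected graph with 4 nodes
# (labeled 0..3) and edges (0,1), (0,2), (1,2), (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = 1  # a 1 at the intersection of row u, column v: edge exists
    matrix[v][u] = 1  # undirected graph: the matrix is symmetric

print(matrix[0][1])  # 1 -> an edge exists between nodes 0 and 1
print(matrix[0][3])  # 0 -> no edge between nodes 0 and 3
```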

Asynchronous - “Separate execution streams that can run concurrently in any order relative to each other are asynchronous.” (ArtConc 2009)

Atomic - “A type of operation that cannot be divided into smaller components or is allowed to complete execution before the operating system swaps the executing thread out of the processor.” (ArtConc 2009)

Barrier - “A synchronization object that holds all threads until every participating thread has arrived. Upon the arrival of the last thread, all threads are released to continue execution.” (ArtConc 2009)
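
A minimal sketch using Python's `threading.Barrier` (the thread names and list logging are illustrative only): every participating thread blocks at `wait()` until the last one arrives, after which all are released.

```python
import threading

barrier = threading.Barrier(3)  # all 3 participating threads must arrive
results = []                    # list.append is atomic under CPython's GIL

def phase_worker(name):
    results.append(f"{name} arrived")
    barrier.wait()              # block here until the last thread arrives
    results.append(f"{name} released")

threads = [threading.Thread(target=phase_worker, args=(f"t{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every "arrived" entry precedes every "released" entry.
```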

Benign data race - “A data race that has no adverse consequences. For example, threads may write the same value into a shared location or a flag may be updated by one thread as another is reading, leading to a little extra work for the reading thread if it does not see the updated flag value.” (ArtConc 2009)

Bus overload - “A situation in which data from I/O or memory cannot be moved at the rate of the thread or process requests. This overloading of the bus capacity will cause threads to wait for memory requests to be satisfied.” (ArtConc 2009)

Cache prefetching - “Processors may have special hardware that allows them to predict which line of cache will be needed next by an executing process. The processor can retrieve (fetch) these cache lines prior to the thread requiring them.” (ArtConc 2009)

Chunk - “A portion of a larger data set that can be assigned to a thread for processing.” (ArtConc 2009)

Compiler pragma - “A programming notation that a compiler can interpret, designed to provide information or hints about the code around the pragma and how to handle it.” (ArtConc 2009)

Complete binary tree - “A binary tree structure in which each level of the tree, except perhaps the deepest, is full. Leaf nodes are placed as far left as possible, if that level of the tree is not full.” (ArtConc 2009)

Concurrency - “The capability of having more than one computation in progress at the same time. These computations may be on separate cores or they may be sharing a single core by being swapped in and out by the operating system at intervals.” (ArtConc 2009)

Connected components - “A set of connected subgraphs with no shared nodes between any two subgraphs. A graph with a number of nodes and no edges is one extreme example of a graph built from connected components.” (ArtConc 2009)

Connected graph - “A graph in which there is a path from any node to any other node.” (ArtConc 2009)

CRCW PRAM - “Concurrent Read, Concurrent Write PRAM. A theoretical model of the architecture of a shared-memory machine with more than one processor. Multiple threads are allowed to read an item within the shared memory simultaneously with other threads, and are allowed to update the item at the same time as other threads. The model itself specifies the value that is written to a shared memory location by multiple threads.” (ArtConc 2009)

CREW PRAM - “Concurrent Read, Exclusive Write PRAM. A theoretical model of the architecture of a shared-memory machine with more than one processor. Multiple threads can read an item within the shared memory simultaneously, but access to update a shared location must be restricted to a single thread at any time.” (ArtConc 2009)

Critical region - “A portion of code from a concurrent algorithm where shared variables are accessed and updated. Mutual exclusion is needed for execution of a critical region by threads to prevent a data race.” (ArtConc 2009)

Critical section - “Alternate name for 'critical region.' Also, the name of a synchronization object type in the Windows Threads API.” (ArtConc 2009)

Data decomposition - “A method for identifying independent work that focuses on dividing large data sets that can be processed concurrently. For each chunk of data that is assigned to a thread, the computation required to process that data is assigned to that thread.” (ArtConc 2009)

Data flow parallelism - “Style of concurrency that executes an instruction when the arguments for that instruction are ready for use, as opposed to the original sequence in which the instruction was written.” (ArtConc 2009)

Data race - “The result of having two or more threads accessing the same shared resource or memory location, where at least one of the threads is attempting to update the shared resource. The threads are 'racing' to deposit their value into the contended memory location or to read the memory location before or after a thread updating the value.” (ArtConc 2009)
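
A sketch in Python of the standard fix: without the lock, `counter += 1` is a read-modify-write sequence, so two threads may both read the same old value and one update is lost; holding a mutex around the update makes the result deterministic. (Thread count and iteration counts are arbitrary choices.)

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:      # mutual exclusion: the read-modify-write is now atomic
            counter += 1

# Without the lock, interleaved reads and writes of 'counter' form a data
# race and some increments may be lost.
threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- no updates lost while the lock is held
```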

Deadlock - “A situation in which one or more threads are waiting for an event that will never occur. The most common situation is when two threads are each holding a synchronization object that the other thread wants and there is no way for one thread to release the object it holds in order to allow the other thread to proceed.” (ArtConc 2009)
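
The classic two-lock cycle described above can be prevented by imposing a single global acquisition order, sketched here in Python (lock names and counts are illustrative): if every thread takes `lock_a` before `lock_b`, the circular wait can never form.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

# Deadlock would arise if thread 1 held lock_a waiting for lock_b while
# thread 2 held lock_b waiting for lock_a. Acquiring the locks in one
# agreed order (always lock_a first) makes that cycle impossible.
def transfer(n, log):
    for _ in range(n):
        with lock_a:          # every thread takes the locks in the same order
            with lock_b:
                log.append(1)

log = []
threads = [threading.Thread(target=transfer, args=(1000, log))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))  # 2000 -- both threads ran to completion; no deadlock
```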

Dependency - “In a serial program, a coding sequence or property of the algorithm that may prevent parallelization of the code. There are two broad categories of dependencies: data, where reference order to the same memory location is vital to the proper execution of the algorithm; and execution, where two blocks of code must execute in the same relative order for correct execution of the algorithm. In some cases, dependencies can be removed or rewritten to be able to parallelize the affected code.” (ArtConc 2009)

Deterministic - “Given the same inputs, a deterministic application will present the same (observable) results each and every time.” (ArtConc 2009)

Directed graph - “A graph whose edges are ordered pairs of nodes; this allows connections between nodes in one direction. When drawn, the edges of a directed graph are commonly shown as arrows to indicate the 'direction' of the edge.” (ArtConc 2009)

Distributed-memory model - “A configuration of processors connected to each other through a network. Memory attached to each processor is directly accessible to that processor only. Data can be shared between processors by executing processes utilizing API functions designed for moving data across the network.” (ArtConc 2009)

Dynamic allocation - “The allocation of work to threads as needed during the execution of the concurrent computation.” (ArtConc 2009)

Edge - “The part of a graph data structure that connects two nodes in the graph.” (ArtConc 2009)

Edge weight - “The real value associated with an edge in a graph.” (ArtConc 2009)

Efficiency - “A quasimetric used throughout this book to describe how well memory and other resources of the processor and platform are utilized by a concurrent implementation.” (ArtConc 2009)

Efficiency (Speedup) - “The ratio of speedup to the number of threads. This metric is expressed as a percentage that reflects on the average amount of the execution time a thread is actually doing computation.” (ArtConc 2009)

Enchantingly parallel - “The state in which decomposition into tasks does not create any dependencies between tasks. This used to be called 'embarrassingly parallel' until it was pointed out that there was nothing to be embarrassed about.” (ArtConc 2009)

ERCW PRAM - “Exclusive Read, Concurrent Write PRAM. A theoretical model of the architecture of a shared-memory machine with more than one processor. A single thread can read an item within the shared memory at any time, but multiple threads are allowed to update the item at the same time. The model itself specifies the value written to a shared memory location by multiple threads.” (ArtConc 2009)

EREW PRAM - “Exclusive Read, Exclusive Write PRAM. A theoretical model of the architecture of a shared-memory machine with more than one processor. Items within the shared memory must be read and written by a single thread at any time.” (ArtConc 2009)

Exclusive prefix scan - “A prefix scan that excludes the corresponding vector item's value in the computation.” (ArtConc 2009)

Execution stream - “The portion of a process launched by the operating system that executes the instructions of the process (application).” (ArtConc 2009)

False sharing - “Updating different elements of the same cache line by different threads not sharing a cache. The updates aren't a storage conflict, but cause cache lines to be moved back and forth between the separate caches. This thrashing of cache lines will cause threads to wait for the cache to be reloaded.” (ArtConc 2009)

Fork-join parallelism - “Form of concurrency in which threads are spawned (forked), threads execute concurrent tasks, and when done, threads wait for all other threads to complete before terminating (join).” (ArtConc 2009)
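
A minimal fork-join sketch in Python (the task, summing squares of chunks, is an arbitrary example): threads are forked, each executes its task, and the parent joins them all before using the results.

```python
import threading

def work(chunk, out, idx):
    out[idx] = sum(x * x for x in chunk)   # each thread's independent task

data = list(range(8))
chunks = [data[:4], data[4:]]
out = [0] * len(chunks)

# Fork: spawn one thread per task ...
threads = [threading.Thread(target=work, args=(c, out, i))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
# ... join: wait for every thread to complete before combining results.
for t in threads:
    t.join()
print(sum(out))  # 140 = 0^2 + 1^2 + ... + 7^2
```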

Game tree - “A tree structure in which nodes are positions within the game, and edges connect nodes that can be reached by a legal move within the rules of the game. Successive levels in the tree are the alternation of moves between players.” (ArtConc 2009)

Ghost cells - “Extra array elements that surround a chunk of data. These are used to hold copies of values stored in array elements that would be in the same relative positions as the cells if the entire data array were considered as a whole.” (ArtConc 2009)

Gustafson-Barsis's Law - “A speedup metric that takes into account an increase in the data size in proportion to the increase in the number of cores. The speedup is computed as if the larger data set could be executed in serial.” (ArtConc 2009)

Granularity - “Loosely defined as the amount of computation done before synchronization is needed. The longer the time between synchronizations, the coarser the granularity will be. Fine-grained parallelism runs the danger of not having enough work assigned to threads to overcome the overhead costs of using threads. Adding more threads, when the amount of computation doesn't change, only exacerbates the problem. Coarse-grained parallelism has lower overhead costs and tends to be more readily scalable to an increase in the number of threads.” (ArtConc 2009)

Graph - “A computation object that is used to model relationships among things. A graph is defined by two finite sets: a set of nodes and a set of edges. Each node has a label to identify it and distinguish it from other nodes. Edges in a graph connect exactly two nodes and are denoted by the pair of labels of nodes that are related.” (ArtConc 2009)

Gray code - “An ordering of 2^n binary numbers such that a single bit is different between successive numbers in the sequence.” (ArtConc 2009)
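
The standard construction `i XOR (i >> 1)` produces such an ordering; a small sketch in Python:

```python
def gray_code(n):
    """Return the 2**n-element Gray code sequence via i XOR (i >> 1)."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

seq = gray_code(3)
print([format(g, "03b") for g in seq])
# ['000', '001', '011', '010', '110', '111', '101', '100']
# Each successive pair differs in exactly one bit.
```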

Heisenberg Uncertainty Principle - “Knowing the position of something disallows the chance to measure the momentum of that thing, and conversely, being able to measure a thing's momentum precludes discovering that thing's position. This principle is typically applied to particles in small regions of space.” (ArtConc 2009)

Helper function - “A function that is threaded but calls another function to do the actual computations. A helper function can unpack the multiple parameters from the single parameter of a threaded function to be used in the call to the function doing the processing. The helper function can also provide a 'waiting area' for threads in a pool and receive signals to direct the threads back to work when needed.” (ArtConc 2009)

Hotspot - “A portion of the code that has a significant amount of activity. Typically, this activity is clock cycles or execution time spent in that code portion. This activity could also be cache misses, I/O accesses, floating-point operations, etc.” (ArtConc 2009)

Inclusive prefix scan - “A prefix scan that includes the corresponding vector item's value in the computation.” (ArtConc 2009)

Intel Threading Building Blocks - “A template-based threading library for C++ programs. The library includes parallel algorithms, concurrent containers, and synchronization primitives.” (ArtConc 2009)

Length - “See Path length” (ArtConc 2009)

Linear speedup - “A speedup curve that is drawn as a straight line on a speedup graph. This results when the speedup increases at a steady rate as more threads (cores) are utilized.” (ArtConc 2009) - See Also Perfect speedup

Livelock - “A situation in which threads are doing some computation but are unable to proceed past the current computation phase due to the actions of some other thread. This is typically caused by resource starvation where there's constant change, with no thread progressing. Two cars that meet in a narrow alley can be in livelock if there is not enough room for them to get past one another. The cars jockey back and forth and side to side, but are unable to proceed. The first meeting of Robin Hood and Little John on the log bridge is another example of livelock (solved when Little John pushed Robin off the bridge into the water).” (ArtConc 2009)

Load balance - “The measure of how the overall work is equitably divided among the threads. For best performance, each thread should be assigned the same or roughly the same amount of work. This will ensure that threads finish assigned work at the same time and threads aren't going to sit idle waiting for other threads that have been given more work.” (ArtConc 2009)

Local variable - “A variable that is accessible to only a single thread. Most common is for each thread in the application to have a copy of a variable that is given the same name.” (ArtConc 2009)

Message-passing - “A parallel programming method in which processes share data and synchronize with each other by moving data (messages) from the memory of one process to another through API calls designed for such communication operations.” (ArtConc 2009)

Modulo locks - “An algorithmic technique to associate multiple indexed objects with a single lock object. This method is a compromise between having a single lock to protect the entire set of objects and using a single lock object per object to be protected.” (ArtConc 2009)

Mutex - “Common name for a programming object that is used to enforce mutual exclusion.” (ArtConc 2009)

Mutual exclusion - “The process of allowing only one thread at a time to execute a given portion of code.” (ArtConc 2009)

Nodes - “In a graph, nodes represent the objects whose relationship is being modeled by the graph.” (ArtConc 2009)

Nondeterministic - “An application whose state of execution cannot be predicted is said to be nondeterministic. Since the operating system schedules threads for execution on processor resources and there are too many factors that influence the OS scheduling, the state of execution of a concurrent application cannot be reliably predicted. Typically, incorrect use of nondeterminism in concurrent applications will be evidenced by the application returning different results from the same inputs.” (ArtConc 2009)

OpenMP - “A directive- and pragma-based threading library. An OpenMP-compliant compiler transforms the pragmas into threaded code. A set of APIs is available to give the programmer more control over how threads execute.” (ArtConc 2009)

Out-of-order execution - “Processor technology that allows the execution of instructions to be initiated when all operands for the instruction are available.” (ArtConc 2009)

Overhead - “Additional work and execution time that was not in a corresponding serial code. In concurrent programs, this typically consists of thread management and synchronization API calls.” (ArtConc 2009)

Parallel - “Executing more than one computation at the same time. These computations must be on separate cores to be running in parallel.” (ArtConc 2009)

Parallel region - “In OpenMP, a code segment that has been prefixed with a parallel pragma, which will cause the compiler to create/use threads that will then execute the code segment concurrently.” (ArtConc 2009)

Parallel sum - “A reduction algorithm that computes the sum of all the elements within a data collection.” (ArtConc 2009)
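
A sketch in Python of a chunked parallel sum (chunking scheme and thread count are illustrative): each thread reduces its own chunk, and the partial sums are combined in a final step.

```python
import threading

def parallel_sum(data, num_threads=4):
    """Sum 'data' by giving each thread one chunk, then combine partials."""
    partials = [0] * num_threads
    size = -(-len(data) // num_threads)    # ceiling division for chunk size

    def worker(i):
        partials[i] = sum(data[i * size:(i + 1) * size])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)                   # final reduction of partial sums

print(parallel_sum(list(range(1, 101))))  # 5050
```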

Path - “A sequence of nodes in which successive nodes are connected by edges in the graph.” (ArtConc 2009)

Path length - “The sum of the weights of the constituent edges along a path.” (ArtConc 2009)

Perfect speedup - “The speedup that is equal to the number of threads applied to the parallel execution. Graphically, this is a 45° line on a speedup curve. This is typically the maximum achievable speedup, and is the execution goal of every concurrent application.” (ArtConc 2009)

POSIX threads - “The explicit threading library available on UNIX and Linux platforms.” (ArtConc 2009)

Portability - “A quasimetric used throughout this book to describe how easily a concurrent algorithm could be translated from the given threading library to an alternative.” (ArtConc 2009)

PRAM - “Parallel Random Access Machine. A theoretical model of the architecture of a shared-memory machine with more than one processor. Execution of instructions is done in lockstep fashion (SIMD) across all processors that are defined for the model.” (ArtConc 2009)

Prefix scan - “An algorithm that computes all partial results of an associative operation applied to all elements of a vector that precede the position in the vector that stores the result. For example, if the operation is addition, the sum of all items with a lower index value is computed for each item in the vector.” (ArtConc 2009)
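
A serial sketch in Python of both scan variants for addition (see the "Inclusive prefix scan" and "Exclusive prefix scan" entries), using the standard-library `itertools.accumulate`:

```python
from itertools import accumulate
import operator

v = [3, 1, 4, 1, 5]

# Inclusive scan: result[i] includes v[i] itself.
inclusive = list(accumulate(v, operator.add))    # [3, 4, 8, 9, 14]

# Exclusive scan: result[i] sums only the elements strictly before i.
exclusive = [0] + inclusive[:-1]                 # [0, 3, 4, 8, 9]

print(inclusive)
print(exclusive)
```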

Private variable - “A variable that is accessible to only a single thread. Most common is for each thread in the application to have a copy of a variable that is given the same name.” (ArtConc 2009)

Process - “The operating system's spawned and controlled entity that encapsulates an executing application. A process has two main functions. The first is to act as the resource holder for the application, and the second is to execute the instructions of the application.” (ArtConc 2009)

Pthreads - “See POSIX threads” (ArtConc 2009)

Quicksort - “A recursive sorting algorithm that first partitions the elements to be sorted such that the lesser elements are found to the left and greater elements to the right of a pivot element. Each subset is then sorted recursively.” (ArtConc 2009)
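
A minimal serial sketch in Python; note that the two recursive calls are independent of each other, which is what makes quicksort a natural candidate for task decomposition:

```python
def quicksort(items):
    """Recursive quicksort: partition around a pivot, then sort each side."""
    if len(items) <= 1:
        return items
    pivot = items[0]
    lesser = [x for x in items[1:] if x < pivot]     # left of the pivot
    greater = [x for x in items[1:] if x >= pivot]   # right of the pivot
    return quicksort(lesser) + [pivot] + quicksort(greater)

print(quicksort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```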

Race condition - “A flaw in a concurrent application in which the result is dependent on the timing or sequence of multiple threads' execution.” (ArtConc 2009)

Readers/writer lock - “A synchronization object that allows either multiple threads to read the data in a critical region or a single thread to update the data within the critical region.” (ArtConc 2009)

Reduction - “A computation that takes a large data set and computes a value based on the data. Typical reduction operations include addition, multiplication, minimum, and maximum. The operation involved with a reduction computation executed concurrently must be associative and commutative.” (ArtConc 2009)

Reentrant code - “Code that multiple threads can execute concurrently without any adverse side effects. The most common way to ensure that a function is reentrant is to not update shared variables.” (ArtConc 2009)

Removable dependence - “A dependence that can be eliminated by rewriting the serial code. Examples of potential removable dependencies include recurrences and induction variables.” (ArtConc 2009)

Rendezvous - “A 'meeting' of two threads to exchange data or information.” (ArtConc 2009)

Reordering buffer - “A location where instructions wait for other instructions to complete so that they can be retired in the proper order. When instructions start and execute out of order, they will wait in the reordering buffer for the completion of the instructions that originally precede them.” (ArtConc 2009)

Round-off error - “The truncation of floating-point result accuracy to fit a result into a fixed-sized data type. With concurrent execution, the order of arithmetic computations can be different, which can lead to different results than the serial code due to round-off error.” (ArtConc 2009)

Scalable - “The capability of a process or application to smoothly handle changes in the number of threads and size of data without adverse performance side effects. For example, if you add threads to execute an application and the speedup increases, that is a scalable application. If the speedup decreases, the application is not considered scalable.” (ArtConc 2009)

Scalability - “A quasimetric used throughout this book to describe an application's capability to handle changes, typically increases in system resources (e.g., number of cores, memory size, bus speed) or data set sizes.” (ArtConc 2009)

Scaled speedup - “See Gustafson-Barsis's Law” (ArtConc 2009)

Semaphore - “A synchronization object that has an associated nonnegative count. Two operations that are defined on a semaphore are known as wait (if the count is nonzero, decrement count; otherwise, block until count is positive) and post (increment count).” (ArtConc 2009)
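
A sketch in Python of the wait/post behavior (thread counts and the `peak` bookkeeping are illustrative): with a count of 2, at most two threads can hold the semaphore at once.

```python
import threading

permits = threading.Semaphore(2)   # count starts at 2: at most 2 holders
active = []
peak = 0
guard = threading.Lock()           # protects the bookkeeping lists

def worker():
    global peak
    with permits:                  # wait: block until count > 0, then decrement
        with guard:
            active.append(1)
            peak = max(peak, len(active))
        with guard:
            active.pop()
    # leaving the 'with permits' block posts (increments) the count

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 2)  # True -- never more than 2 threads held a permit at once
```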

Separable dependence - “A dependence that can be eliminated by rewriting the serial code to move the dependence outside of the code to be parallelized. Reduction computations are examples of a potential separable dependence.” (ArtConc 2009)

Simple path - “A graph path that has no repeated nodes.” (ArtConc 2009)

Sequential consistency - “The property of parallel and concurrent programs to obtain the same results as an equivalent sequential application on the same input data.” (ArtConc 2009)

Shared-memory - “A configuration of processors or cores that can read and write into the same memory address space without any special functions or APIs.” (ArtConc 2009)

SIMD - “Single Instruction, Multiple Data stream. This is a category of parallel hardware that executes the same instruction on all processors in lockstep fashion. The processors will all have different data on which to apply that instruction. The most common parallel programming model for SIMD processors is data parallelism.” (ArtConc 2009)

Simplicity - “A quasimetric used throughout this book to describe how much or how little a concurrent implementation resembles the original serial code.” (ArtConc 2009)

Speedup - “The ratio of serial execution time of the best serial algorithm to the parallel execution time for a given number of threads.” (ArtConc 2009)
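
A worked example with hypothetical timings (the 64 s and 10 s figures are invented for illustration), combining this definition with the "Efficiency (Speedup)" entry above:

```python
# Assumed measurements: serial run takes 64 s; the parallel version on
# 8 threads takes 10 s (hypothetical numbers, not from the book).
serial_time = 64.0
parallel_time = 10.0
num_threads = 8

speedup = serial_time / parallel_time   # 6.4x faster than serial
efficiency = speedup / num_threads      # 0.8 -> threads computing 80% of the time
print(speedup, efficiency)
```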

Starvation - “A condition of execution in which a thread is not allowed to proceed with assigned computations. This term applies to threads that are blocked from being scheduled into a processor by the operating system due to execution of higher priority threads.” (ArtConc 2009)

Static allocation - “Allocation of all the work to threads at the outset of the concurrent computation.” (ArtConc 2009)

Storage conflict - “A situation in which two or more threads access the same memory location and at least one of the threads attempts to update the shared memory location.” (ArtConc 2009)

Streaming SIMD execution (SSE) - “An instruction set and built-in, special hardware that can execute the same instruction on multiple arguments at the same time.” (ArtConc 2009)

Superlinear speedup - “Speedup of an application that is higher than the number of threads used. This atypical speedup is only achieved through some artifact of parallel execution, for example, splitting the data into chunks that easily fit into cache, which cannot be duplicated by the serial application.” (ArtConc 2009)

Synchronization - “Coordination in the execution of multiple threads. The most common cases of synchronization occur when you provide mutually exclusive access to shared resources or gather all threads at a point in the code before they are allowed to proceed.” (ArtConc 2009)

Synchronization object - “A programming object that can control how threads execute in relation to each other or how they interact with shared resources. Locks, mutexes, and semaphores are examples of synchronization objects.” (ArtConc 2009)

Task decomposition - “A method for identifying independent work that focuses on the computations to be performed by threads. At the time a task is assigned to a thread, the data for that task needs to be available to the thread.” (ArtConc 2009)

TBB - “See Intel Threading Building Blocks” (ArtConc 2009)

Thread - “The operating system object that executes the instructions of a process.” (ArtConc 2009)

Thread-local storage (TLS) - “An API available within explicit threading libraries that creates private storage per thread, which will persist across function call boundaries.” (ArtConc 2009)

Thread monkey - “A programmer with wicked mad skills, capable of designing multithreaded, concurrent, and parallel software, as well as grinding out correct and efficient code to implement those designs.” (ArtConc 2009)

Thread pool - “A group of threads that are typically spawned once and assigned work as the need arises. When not needed, threads in the pool are blocked, waiting for some 'wakeup' signal. By recycling threads in the pool, the overhead of repeatedly spawning and terminating threads every time they are to be used is avoided.” (ArtConc 2009)
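
The Python standard library provides this pattern directly via `concurrent.futures.ThreadPoolExecutor` (pool size and task chosen arbitrarily for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# A pool of 4 threads is spawned once; each submitted task is handed to an
# idle pool thread, avoiding a spawn/terminate cycle for every task.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda x: x * x, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```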

Thread-safe - “The characteristic of a function that can execute correctly when two or more threads call that function concurrently.” (ArtConc 2009)

Undirected graph - “A graph in which the nodes of an edge are unordered. This implies that the edge can be thought of as a two-way path.” (ArtConc 2009)

Vertices - “An alternate name for nodes in a graph.” (ArtConc 2009)

Wavefront algorithm - “A parallel algorithm method that allows threads to proceed with computations at regular intervals. Typically, some large data structure is the basis for using this technique, and threads sweep across the structure one after the other like waves washing up on shore.” (ArtConc 2009)

Weight - “See Edge weight” (ArtConc 2009)

Weighted graph - “A graph that has a value assigned to every edge. These values can represent some measure of the relationship between the nodes connected by the edge. When speaking about weighted graphs, it is normal to refer to the weight in terms of a distance.” (ArtConc 2009)

Windows Threads - “The explicit threading library available on Microsoft Windows platforms.” (ArtConc 2009)

Worksharing - “In OpenMP, a construct that distributes the computation of the associated region to the members of the thread team.” (ArtConc 2009)

Fair Use Sources

ArtConc 2009: Breshears, Clay. The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications. O'Reilly Media, 2009.



Cloud Monk is Retired (for now). Buddha with you. © 2005 - 2024 Losang Jinpa or Fair Use. Disclaimers

SYI LU SENG E MU CHYWE YE. NAN. WEI LA YE. WEI LA YE. SA WA HE.