
Parallel Programming Fundamentals

Return to Parallel Programming, Parallelism, Asynchronous Programming, Asynchrony, Concurrency Fundamentals, Concurrency, Functional Programming Fundamentals, Functional Programming and Concurrency, Fundamentals, Programming Fundamentals, Awesome Async

Parallelism

Snippet from Wikipedia: Parallelism

Parallelism may refer to:

  • Angle of parallelism, in hyperbolic geometry, the angle at one vertex of a right hyperbolic triangle that has two hyperparallel sides
  • Axial parallelism, a type of motion characteristic of a gyroscope and astronomical bodies
  • Conscious parallelism (also called tacit parallelism), price-fixing between competitors that occurs without any communication between the parties
  • Parallel computing, the simultaneous execution on multiple processors of different parts of a program
    • In the analysis of parallel algorithms, the maximum possible speedup of a computation
  • Parallel evolution, the independent emergence of a similar trait in different unrelated species
  • Parallel (geometry), the property of parallel lines
  • Parallelism (grammar), a balance of two or more similar words, phrases, or clauses
  • Parallelism (rhetoric), the chief rhetorical device of Biblical poetry in Hebrew
  • Parallelism (laboratory test), the relationship between the concentration of an analyte in a sample and the signal produced by the reference standard of the laboratory test used to measure that analyte
  • Psychophysical parallelism, the theory that the conscious and nervous processes vary concomitantly
  • Parallel harmony/doubling, or harmonic parallelism, in music

Snippet from Wikipedia: Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
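
For illustration (not part of the snippet above), here is a minimal Python sketch of the "divide a large problem into smaller ones" idea: the work is split into independent chunks, processed by several worker processes, and the partial results are combined. The function and values are made up for the example.

  # A minimal data-parallel sketch (hypothetical example):
  # independent chunks are processed by worker processes,
  # then the partial results are combined.
  from multiprocessing import Pool

  def square(n):
      return n * n

  if __name__ == "__main__":
      data = range(1_000_000)          # the "large problem"
      with Pool(processes=4) as pool:  # 4 worker processes
          results = pool.map(square, data, chunksize=10_000)
      print(sum(results))              # combine the partial results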

Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU). In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.
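
To make the distinction concrete, the following sketch (an illustrative example, not from the quoted text) shows concurrency without parallelism: two generator-based tasks are interleaved cooperatively on a single thread, so both make progress but no two steps ever execute at the same time.

  # Concurrency without parallelism (hypothetical sketch):
  # two tasks share one thread by yielding control cooperatively,
  # so their steps interleave but never run simultaneously.
  def task(name, steps):
      for i in range(steps):
          print(f"{name}: step {i}")
          yield                      # hand control back to the scheduler

  def run_round_robin(tasks):
      while tasks:
          current = tasks.pop(0)
          try:
              next(current)          # run one step of the task
              tasks.append(current)  # re-queue it for another turn
          except StopIteration:
              pass                   # task finished

  run_round_robin([task("A", 3), task("B", 3)])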

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks.

In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance.
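
A race condition of the kind mentioned above can be sketched in a few lines of Python (a hypothetical example; the counter and thread count are made up): two threads perform a read-modify-write on a shared counter, which can lose updates unless the critical section is protected by a lock.

  # A classic race condition and its fix (hypothetical sketch).
  import threading

  counter = 0
  lock = threading.Lock()

  def unsafe_increment(n):
      global counter
      for _ in range(n):
          counter += 1          # read-modify-write: not atomic, may lose updates

  def safe_increment(n):
      global counter
      for _ in range(n):
          with lock:            # mutual exclusion around the critical section
              counter += 1

  threads = [threading.Thread(target=safe_increment, args=(100_000,))
             for _ in range(2)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()
  print(counter)  # always 200000 with the lock; unsafe_increment may print less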

A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that the speed-up is limited by the fraction of time for which the parallelization can be utilised.
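
Written out, Amdahl's law gives the speed-up S(n) = 1 / ((1 - p) + p / n), where p is the parallelizable fraction of the program and n is the number of processors. The short sketch below (with illustrative values for p and n) evaluates this bound.

  # Amdahl's law: S(n) = 1 / ((1 - p) + p / n),
  # where p is the fraction of the program that can be parallelized
  # and n is the number of processors (values below are illustrative).
  def amdahl_speedup(p, n):
      return 1.0 / ((1.0 - p) + p / n)

  print(amdahl_speedup(0.95, 8))      # ~5.9x on 8 processors
  print(amdahl_speedup(0.95, 1_000))  # ~19.6x: capped near 1 / (1 - p) = 20x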

Snippet from Wikipedia: Asynchrony (computer programming)

Asynchrony, in computer programming, refers to the occurrence of events independent of the main program flow and ways to deal with such events. These may be "outside" events such as the arrival of signals, or actions instigated by a program that take place concurrently with program execution, without the program hanging to wait for results. Asynchronous input/output is an example of the latter case of asynchrony, and lets programs issue commands to storage or network devices that service these requests while the processor continues executing the program. Doing so provides a degree of parallelism.
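
As a purely illustrative sketch of asynchronous I/O, the Python asyncio fragment below issues two simulated requests, lets them proceed concurrently without blocking the program, and collects both results; asyncio.sleep stands in for a real storage or network operation.

  # Asynchronous I/O sketch: the program issues two "requests" and
  # does not block while they are in flight (asyncio.sleep is a
  # stand-in for a real network or storage operation).
  import asyncio

  async def fetch(name, seconds):
      print(f"{name}: request issued")
      await asyncio.sleep(seconds)    # control returns to the event loop here
      print(f"{name}: response received")
      return name

  async def main():
      # Both operations overlap; total time is ~2s, not 3s.
      results = await asyncio.gather(fetch("A", 2), fetch("B", 1))
      print(results)

  asyncio.run(main())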

A common way for dealing with asynchrony in a programming interface is to provide subroutines that return a future or promise that represents the ongoing operation, and a synchronizing operation that blocks until the future or promise is completed. Some programming languages, such as Cilk, have special syntax for expressing an asynchronous procedure call.
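
Python's standard concurrent.futures module follows this pattern: submit() returns a Future immediately, and result() is the synchronizing operation that blocks until the Future completes. The function and values below are illustrative.

  # Future/promise pattern: submit() returns immediately with a Future,
  # and result() blocks until the ongoing operation has completed.
  from concurrent.futures import ThreadPoolExecutor
  import time

  def slow_add(a, b):
      time.sleep(1)        # stands in for a long-running operation
      return a + b

  with ThreadPoolExecutor() as executor:
      future = executor.submit(slow_add, 2, 3)  # returns a Future at once
      print(future.done())                      # likely False: still running
      print(future.result())                    # blocks, then prints 5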

Examples of asynchrony include the following:

  • Asynchronous procedure call, a method to run a procedure concurrently, a lightweight alternative to threads.
  • Ajax, a set of client-side web technologies used to build web applications that perform asynchronous I/O.
  • Asynchronous method dispatch (AMD), a data communication method used when the server side must handle a large number of long-lasting client requests. With synchronous method dispatch (SMD), such a load can leave the server busy and unavailable, so that connection requests time out and fail. With AMD, the servicing of a client request is immediately dispatched to an available thread from a thread pool while the client is put in a blocking state; upon completion of the task, the server is notified by a callback, unblocks the client, and transmits the response back to the client. In case of thread starvation, clients remain blocked until threads become available (a simplified sketch follows this list).
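
The dispatch-to-a-thread-pool-with-callback flow described in the last item can be sketched with Python's concurrent.futures; this is a simplified, hypothetical stand-in for a real server, with made-up request handling.

  # Simplified sketch of dispatch-to-a-thread-pool with completion callbacks
  # (a hypothetical stand-in for a server handling client requests).
  from concurrent.futures import ThreadPoolExecutor
  import time

  def handle_request(request_id):
      time.sleep(0.5)                 # long-lasting work for one client
      return f"response for request {request_id}"

  def notify_client(future):
      # Called by the pool when the task completes ("the server is
      # notified by a callback"); here we just print the response.
      print(future.result())

  with ThreadPoolExecutor(max_workers=4) as pool:  # limited pool: starvation possible
      for request_id in range(8):
          future = pool.submit(handle_request, request_id)
          future.add_done_callback(notify_client)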

Concurrency: Concurrency Programming Best Practices, Concurrent Programming Fundamentals, Parallel Programming Fundamentals, Asynchronous I/O, Asynchronous programming (Async programming, Asynchronous flow control, Async / await), Asymmetric Transfer, Akka, Atomics, Busy waiting, Channels, Concurrent, Concurrent system design, Concurrency control (Concurrency control algorithms‎, Concurrency control in databases, Atomicity (programming), Distributed concurrency control, Data synchronization), Concurrency pattern, Concurrent computing, Concurrency primitives, Concurrency problems, Concurrent programming, Concurrent algorithms, Concurrent programming languages, Concurrent programming libraries‎, Java Continuations, Coroutines, Critical section, Deadlocks, Decomposition, Dining philosophers problem, Event (synchronization primitive), Exclusive or, Execution model (Parallel execution model), Fibers, Futures, Inter-process communication, Linearizability, Lock (computer science), Message passing, Monitor (synchronization), Computer multitasking (Context switch, Pre-emptive multitasking - Preemption (computing), Cooperative multitasking - Non-preemptive multitasking), Multi-threaded programming, Multi-core programming, Multi-threaded, Mutual exclusion, Mutually exclusive events, Mutex, Non-blocking algorithm (Lock-free), Parallel programming, Parallel computing, Process (computing), Process state, Producer-consumer problem (Bounded-buffer problem), Project Loom, Promises, Race conditions, Read-copy update (RCU), Readers–writer lock, Readers–writers problem, Recursive locks, Reducers, Reentrant mutex, Scheduling (computing)‎, Semaphore (programming), Seqlock (Sequence lock), Serializability, Shared resource, Sleeping barber problem, Spinlock, Synchronization (computer science), System resource, Thread (computing), Tuple space, Volatile (computer programming), Yield (multithreading), Concurrency bibliography, Manning Concurrency Async Parallel Programming Series, Concurrency glossary, Awesome Concurrency, Concurrency topics, Functional programming. (navbar_concurrency - see also navbar_async, navbar_python_concurrency, navbar_golang_concurrency, navbar_java_concurrency)

Async Programming: Async Programming Best Practices, Asynchronous Programming Fundamentals, Promises and Futures, Async C, Async C++, Async Clojure, Async Dart, Async Golang, Async Haskell, Async Java (RxJava), Async JavaScript, Async Kotlin, Async PowerShell, Async Python, Async Ruby, Async Scala, Async TypeScript, Async Programming Bibliography, Manning Concurrency Async Parallel Programming Series. (navbar_async - see also navbar_concurrency, navbar_python_concurrency, navbar_golang_concurrency, navbar_java_concurrency)


© 1994 - 2024 Cloud Monk Losang Jinpa or Fair Use. Disclaimers

SYI LU SENG E MU CHYWE YE. NAN. WEI LA YE. WEI LA YE. SA WA HE.

