Parallel computing information


Large supercomputers such as IBM's Blue Gene/P are designed to heavily exploit parallelism.

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously.[1] Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling.[2] As power consumption (and consequently heat generation) by computers has become a concern in recent years,[3] parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.[4]
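
As a concrete illustration of one of these forms, data parallelism, the sketch below (a hypothetical example, not drawn from any source cited here) applies the same function to chunks of a dataset on several worker processes using Python's standard multiprocessing module; the worker count and chunk size are arbitrary.

    # Data parallelism: the same operation is applied to different pieces of
    # the data at the same time, and the partial results are combined afterwards.
    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        data = range(1_000_000)
        # Four worker processes, chosen arbitrarily; Pool.map splits the input
        # into chunks and hands each chunk to a worker.
        with Pool(processes=4) as pool:
            results = pool.map(square, data, chunksize=10_000)
        print(sum(results))  # same result as the sequential sum of squares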

Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU).[5][6] In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.
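
The distinction can be made concrete with a small sketch (illustrative only, in Python): the two tasks below are concurrent because their steps interleave during overlapping time periods, yet nothing runs in parallel, since a single-threaded event loop executes them by time-sharing.

    # Concurrency without parallelism: two tasks interleave on one thread.
    import asyncio

    async def worker(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            await asyncio.sleep(0)  # yield so the other task can run

    async def main():
        # Both tasks make progress, but only one instruction stream
        # is ever active at any instant.
        await asyncio.gather(worker("A", 3), worker("B", 3))

    asyncio.run(main())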

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs (massively parallel processors), and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks.
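
As a sketch of how separate computers can cooperate on one task, the fragment below assumes the third-party mpi4py package (Python bindings for the Message Passing Interface, which appears later on this page): each process sums a disjoint slice of a range, and the partial results are combined with a reduction. The problem size and data split are arbitrary.

    # Run with, for example: mpiexec -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's identifier
    size = comm.Get_size()   # total number of cooperating processes

    # Each process sums a disjoint slice of 0..999999.
    n = 1_000_000
    local = sum(range(rank, n, size))

    # Combine the partial sums on process 0.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(total)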

In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones,[7] because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance.
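
A minimal sketch of the most common such bug, a race condition (a hypothetical example using only Python's standard threading module): several threads update a shared counter, and without a lock the read-modify-write sequence can interleave so that updates are lost.

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n, use_lock):
        global counter
        for _ in range(n):
            if use_lock:
                with lock:       # the critical section is serialised
                    counter += 1
            else:
                counter += 1     # read-modify-write is not atomic

    threads = [threading.Thread(target=increment, args=(100_000, True))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000 with the lock; may fall short without it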

A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that the achievable speed-up is limited by the fraction of the program that cannot be parallelized.
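
In its usual form, Amdahl's law says that if a fraction p of a program's running time can be parallelized and that part is sped up by a factor s, the overall speed-up is

    S(s) = 1 / ((1 - p) + p / s)

so even as s grows without bound, the speed-up cannot exceed 1 / (1 - p). For example, if 95% of a program can be parallelized (p = 0.95), the overall speed-up can never exceed 20, no matter how many processors are added.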

  1. ^ Gottlieb, Allan; Almasi, George S. (1989). Highly Parallel Computing. Redwood City, Calif.: Benjamin/Cummings. ISBN 978-0-8053-0177-9.
  2. ^ Adve, S. V.; et al. (November 2008). "Parallel Computing Research at Illinois: The UPCRC Agenda" (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. Archived 2018-01-11 at the Wayback Machine. "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."
  3. ^ Asanovic, Krste; et al. "Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are 'free'."
  4. ^ Asanovic, Krste; et al. (December 18, 2006). "The Landscape of Parallel Computing Research: A View from Berkeley" (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance… Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limits."
  5. ^ Pike, Rob (January 11, 2012). "Concurrency is not Parallelism". Waza conference (slides archived 2015-07-30 at the Wayback Machine; video).
  6. ^ "Parallelism vs. Concurrency". Haskell Wiki.
  7. ^ Hennessy, John L.; Patterson, David A.; Larus, James R. (1999). Computer Organization and Design: The Hardware/Software Interface (2nd ed., 3rd printing). San Francisco: Kaufmann. ISBN 978-1-55860-428-5.

25 related results for: Parallel computing information

Parallel computing

of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but...

Word Count : 8561

Massively parallel

computations in parallel. GPUs are massively parallel architecture with tens of thousands of threads. One approach is grid computing, where the processing...

Word Count : 372

Distributed computing

common goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction...

Word Count : 5463

Vectorization

Look up vectorization in Wiktionary, the free dictionary. Vectorization may refer to: Array programming, a style of computer programming where operations...

Word Count : 115

Embarrassingly parallel

In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel...

Word Count : 955

Grid computing

Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system...

Word Count : 4799

David Gelernter

Gelernter is known for contributions to parallel computation in the 1980s, and for books on topics such as computed worlds (Mirror Worlds). Gelernter is...

Word Count : 2807

Computer cluster

and scheduled by software. The newest manifestation of cluster computing is cloud computing. The components of a cluster are usually connected to each other...

Word Count : 3747

Concurrent computing

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with...

Word Count : 2908

Supercomputer

High-performance computing High-performance technical computing Jungle computing Nvidia Tesla Personal Supercomputer Parallel computing Supercomputing in...

Word Count : 7945

Theoretical computer science

(used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine...

Word Count : 4543

Data parallelism

Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different...

Word Count : 1878

Radix sort

Proceedings of International Conference on Parallel Computing Technologies. Novosibirsk. 1991. David M. W. Powers, Parallel Unification: Practical Complexity,...

Word Count : 2603

IPython

Tools for parallel computing. IPython is a NumFOCUS fiscally sponsored project. IPython is based on an architecture that provides parallel and distributed...

Word Count : 1034

Message Passing Interface

standardized and portable message-passing standard designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of...

Word Count : 6321

Computer science

and databases. In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications...

Word Count : 7040

Fifth Generation Computer Systems

International Trade and Industry (MITI) to create computers using massively parallel computing and logic programming. It aimed to create an "epoch-making computer"...

Word Count : 2301

CUDA

Compute Unified Device Architecture (CUDA) is a proprietary parallel computing platform and application programming interface (API) that allows software...

Word Count : 4146

Unconventional computing

Unconventional computing is computing by any of a wide range of new or unusual methods. It is also known as alternative computing. The term unconventional...

Word Count : 4566

Parallel processing

Parallel processing may refer to: Parallel computing Parallel processing (DSP implementation) – Parallel processing in digital signal processing Parallel...

Word Count : 56

Optical computing

Optical computing or photonic computing uses light waves produced by lasers or incoherent sources for data processing, data storage or data communication...

Word Count : 3426

DNA computing

DNA computing is an emerging branch of unconventional computing which uses DNA, biochemistry, and molecular biology hardware, instead of the traditional...

Word Count : 4916

Cloud computing

Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct...

Word Count : 7998

Computing

Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic...

Word Count : 5156

List of distributed computing conferences

academic conferences in the fields of distributed computing, parallel computing, and concurrent computing. The conferences listed here are major conferences...

Word Count : 773
