CUDA
Developer(s): Nvidia
Initial release: June 23, 2007
Stable release: 12.4.1 / April 12, 2024
Operating system: Windows, Linux
Platform: Supported GPUs
Type: GPGPU
License: Proprietary
Website: developer.nvidia.com/cuda-zone
Compute Unified Device Architecture (CUDA) is a proprietary[1] parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism and GPU-specific operations, such as moving data between the CPU and the GPU.[2] CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels.[3] In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools that help programmers accelerate their applications.
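The pattern described above, a C-like kernel expressing thread-level parallelism plus host code that moves data between CPU and GPU, can be sketched with a minimal vector-addition example (illustrative only; not drawn from this article's sources):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) buffers.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);

    // Move input data from CPU to GPU.
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back to the CPU and inspect one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The `__global__` qualifier, `<<<blocks, threads>>>` launch syntax, and `cudaMemcpy` transfers are the core language extensions the lead refers to; the kernel body itself is ordinary C.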
CUDA is designed to work with programming languages such as C, C++, Fortran and Python. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which required advanced skills in graphics programming.[4] CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL.[5][3]
CUDA was created by Nvidia in 2006.[6] When it was first introduced, the name was an acronym for Compute Unified Device Architecture,[7] but Nvidia later dropped the common use of the acronym and no longer uses it.
^ Shah, Agam. "Nvidia not totally against third parties making CUDA chips". The Register. Retrieved 2024-04-25.
^ Nvidia. "What is CUDA?". Nvidia. Retrieved 2024-03-21.
^ Abi-Chahla, Fedy (2008-06-18). "Nvidia's CUDA: The End of the CPU?". Tom's Hardware. Retrieved 2015-05-17.
^ Zunitch, Peter (2018-01-24). "CUDA vs. OpenCL vs. OpenGL". Videomaker. Retrieved 2018-09-16.