CUDA

Stands for "Compute Unified Device Architecture." CUDA is a parallel computing platform developed by NVIDIA and introduced in 2006. It enables software programs to perform calculations using both the CPU and GPU. By offloading the highly parallel portions of a workload to the GPU (instead of running everything on the CPU), CUDA-enabled programs can achieve significant performance gains.

CUDA is one of the most widely used GPGPU (General-Purpose computation on Graphics Processing Units) platforms. Unlike OpenCL, another popular GPGPU platform, CUDA is proprietary and only runs on NVIDIA graphics hardware. However, most CUDA-enabled video cards also support OpenCL, so programmers can choose to write code for either platform when developing applications for NVIDIA hardware.

While CUDA only supports NVIDIA hardware, it can be used with several different programming languages. For example, NVIDIA provides APIs and compilers for C and C++ as well as Fortran, and third-party libraries expose the platform to Python. The CUDA Toolkit, a development environment for C/C++ developers, is available for Windows, OS X, and Linux.
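To illustrate the CPU/GPU division of labor described above, here is a minimal sketch of a CUDA C program that adds two arrays on the GPU. It is a simplified example, not production code: it assumes an NVIDIA GPU, omits error checking on the CUDA runtime calls, and would be compiled with NVIDIA's `nvcc` compiler from the CUDA Toolkit.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: runs on the GPU. Each thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) {
        ha[i] = (float)i;
        hb[i] = 2.0f * i;
    }

    // Device (GPU) buffers, plus host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads: one GPU thread per array element.
    vectorAdd<<<4, 256>>>(da, db, dc, n);

    // Copy the result back to the CPU.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %.1f\n", hc[10]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The CPU code sets up the data and copies it to GPU memory; the `__global__` kernel then runs across thousands of lightweight GPU threads in parallel, which is where the performance gain over a CPU-only loop comes from.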

Updated July 3, 2015

Definitions by TechTerms.com

The definition of CUDA on this page is an original TechTerms.com definition.