CUDA



CUDA refers to both a hardware architecture and a collection of software development tools for running general-purpose computation code (in C/C++) on a GPU (Graphics Processing Unit). The architecture and many of the tools were developed by NVidia, a leading company in the graphics card business.

GPUs are modern graphics cards that efficiently support a high degree of parallelism. This parallelism was originally intended for the extensive vector and matrix math needed for graphics processing; CUDA now makes it available to scientific programming as well.

The main website for CUDA is NVidia's CUDA developer site.

Installing CUDA

Linux Install

Download the CUDA toolkit from the NVidia CUDA web site. Make sure you download the CUDA toolkit installation file. There is another installation file called "GPU Computing SDK - complete package including all code samples", but despite its name it does not include the required CUDA toolkit. If you try to install it without first installing the CUDA toolkit you will get an error message like "Could not locate CUDA. Enter the full path to CUDA", because it is looking for a CUDA toolkit that does not exist. The "GPU Computing SDK - complete package including all code samples" can be installed after the CUDA toolkit (see below).

MAKE SURE TO USE THE CORRECT VERSION! It must match both the OS and the hardware architecture - 32 or 64 bit. If you install the wrong architecture, then when you try to run any of the installed executables, such as the nvcc compiler, you will get a message like "cannot execute binary file". If that happens, delete the directory where CUDA was installed (e.g. /usr/local/cuda - this requires root privileges, i.e. use sudo) and re-install using the correct install package.
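For example, you can check the machine architecture before downloading, and remove a mis-installed toolkit, with commands along these lines (/usr/local/cuda is the default install location used throughout this page):

  uname -m                      # prints x86_64 on a 64-bit system, i686 on a 32-bit system
  sudo rm -rf /usr/local/cuda   # remove a toolkit installed for the wrong architecture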

The CUDA toolkit install file will be a ".run" file. To run it, open a terminal window, cd to the directory containing the install file, and execute it with root privileges (it installs into /usr/local). If it is called cudatoolkit_4.0.17_linux_64_ubuntu10.10.run then use the following commands:
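Assuming the file was downloaded to ~/Downloads (adjust the path to wherever you saved it), something like this should work:

  cd ~/Downloads
  sudo sh cudatoolkit_4.0.17_linux_64_ubuntu10.10.run   # run the installer as root; /usr/local/cuda is the default install location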

This will create a directory structure in /usr/local/cuda.

Follow these instructions, displayed by the installer at the end of the installation, to complete the CUDA setup:

* Please make sure your PATH includes /usr/local/cuda/bin
* Please make sure your LD_LIBRARY_PATH
*   for 32-bit Linux distributions includes /usr/local/cuda/lib
*   for 64-bit Linux distributions includes /usr/local/cuda/lib64:/usr/local/cuda/lib
* OR
*   for 32-bit Linux distributions add /usr/local/cuda/lib
*   for 64-bit Linux distributions add /usr/local/cuda/lib64 and /usr/local/cuda/lib
* to /etc/ld.so.conf and run ldconfig as root

* Please read the release notes in /usr/local/cuda/doc/

* To uninstall CUDA, delete /usr/local/cuda
* Installation Complete

You can do this by using the following commands:
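For a 64-bit system, adding lines like these to the end of your ~/.bashrc should work (on a 32-bit system use /usr/local/cuda/lib alone for LD_LIBRARY_PATH):

  # in ~/.bashrc
  export PATH=/usr/local/cuda/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/lib:$LD_LIBRARY_PATH

Alternatively, as the installer message says, you can add the library directories to /etc/ld.so.conf and run ldconfig as root.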

Once installed correctly, the command nvcc --version should work. Note that you must run source ~/.bashrc (or open a new terminal) for the path settings to take effect in an already-open terminal window. You should also be able to compile and run CUDA code. If you get an error like "error while loading shared libraries: libcudart.so.4: cannot open shared object file: No such file or directory" then the run-time library setup described above was not completed or not done correctly.
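As a quick end-to-end check, you can compile and run a trivial test program; hello.cu below is just an illustrative name. If it compiles and prints its message, nvcc is on your PATH and the run-time library is being found:

  // hello.cu - minimal test that the toolkit and run-time library are set up
  #include <cstdio>

  __global__ void dummy_kernel() { }   // empty kernel, just to exercise the toolchain

  int main() {
      dummy_kernel<<<1,1>>>();         // launch the kernel on the GPU
      cudaDeviceSynchronize();         // wait for it to finish; this call uses libcudart
      printf("CUDA appears to be working\n");
      return 0;
  }

Then compile and run it:

  nvcc hello.cu -o hello
  ./hello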

Once the CUDA Toolkit is installed you can optionally install the "GPU Computing SDK". It is not entirely clear what this is - it appears to be a set of example programs. To install it, download the install file, which is also a ".run" file. If it is called gpucomputingsdk_4.0.17_linux.run then do the following:
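Assuming the file was downloaded to ~/Downloads, something like this should work (the SDK installer does not need root and typically installs into your home directory; it will ask for the path to the CUDA toolkit, normally /usr/local/cuda):

  cd ~/Downloads
  sh gpucomputingsdk_4.0.17_linux.run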

Mac Install

We have not installed CUDA on Macs.

Windows Install

We have not installed CUDA on Windows.

Using CUDA

Using CUDA is too complex to give a short overview here. Here are some CUDA references. Many of these, as well as some others, are available from the CUDA SDK download page on the NVidia CUDA web site. Introduction to CUDA is a very good overview that comprises about 6 hours of online presentations. Nvidia's CUDA: The End of the CPU? and CUDA, Supercomputing for the Masses are two useful introductory web articles (really series of articles) on CUDA and the NVidia GPU architecture. CUDA by Example is also a very good and very easy to follow introduction to using CUDA. The other references from NVidia are more detailed reference material. Programming Massively Parallel Processors is a detailed presentation on the issues in programming GPUs and algorithms to address them.