
Initial draft, subject to change

CUDA and OpenCL

We have a Linux server with an NVIDIA Tesla C1060 available for course use. The server has an Intel Core i7 2.67 GHz CPU and 6 GB of memory. We have installed the NVIDIA CUDA Toolkit 3.0 Beta, which provides both CUDA and OpenCL support.
The NVIDIA kernel driver version is Linux-64 195.17.

Note! The latest official version of the CUDA Toolkit is 2.3, which has some driver version issues when running both OpenCL and CUDA code, so we have chosen to use this unofficial beta version. However, if you want to install NVIDIA drivers on your home computer, you might want to use the latest official version.

Getting an account and logging in

Our server name is miranda, and in order to get an account there you have to fill in the CSE department's account application form and deliver it personally to the course staff. We will fill in these application forms during the first meeting of the seminar.

Once you get your account, you can log in via ssh (replace username with your own account name):

ssh username@miranda

Developing, Compiling and Running


First you have to set up the environment by executing

use cuda
[cuda is in use]

The compiler binary name is nvcc. See the manual page for command line options.

The script sets PATH, MANPATH and LD_LIBRARY_PATH:
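The exact values come from the server's setup script, but assuming a standard /usr/local/cuda toolkit layout (these paths are an assumption, not necessarily what miranda uses), the settings look roughly like:

```shell
# Hypothetical environment settings for a /usr/local/cuda install;
# the actual paths set by the "use cuda" script may differ.
export PATH=/usr/local/cuda/bin:$PATH
export MANPATH=/usr/local/cuda/man:$MANPATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```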


Once these are set, you can compile CUDA programs by issuing

nvcc -o vec vec.cu

The CUDA SDK includes some additional libraries and header files which provide utility
functions for initializing the CUDA device, checking the return values of CUDA API calls, etc. To use these, you have to set the include and linker paths accordingly.
See the vector addition example from the introduction presentation for a sample Makefile.
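As a rough sketch of what such a vec.cu might look like (a minimal illustration, not the course's actual example; the SDK utility functions for error checking are omitted here):

```cuda
/* Minimal CUDA vector addition sketch (hypothetical vec.cu). */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    /* Allocate and fill host buffers. */
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = i; hb[i] = 2.0f * i; }

    /* Allocate device buffers and copy the inputs over. */
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    /* Launch one thread per element, 256 threads per block. */
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    /* Copy the result back and print one element. */
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", hc[10]); /* 10 + 20, so 30.000000 on a working device */

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compiled with nvcc -o vec vec.cu as above.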


To compile OpenCL programs on miranda, say

gcc -o vec vec.c -lOpenCL

As with CUDA, the SDK has some utility libraries for OpenCL. See the example's Makefiles for details.

Compiling the provided example programs

We have installed NVIDIA's GPU Computing SDK on miranda. A prebuilt copy is available in /usr/local/gpu-computing-sdk-3.0, but the following instructions apply if you want to compile it yourself. (You should probably run make clean first before issuing the compilation commands.) To copy the SDK into your home directory:

cp -a /usr/local/gpu-computing-sdk-3.0 ~

To compile the NVIDIA CUDA example programs:

use cuda
make -C gpu-computing-sdk-3.0/sdk/C

To compile the NVIDIA OpenCL example programs:

use cuda
make -C gpu-computing-sdk-3.0/OpenCL

The CUDA sample programs are built into the ~/gpu-computing-sdk-3.0/sdk/C/bin/linux/release directory. To run the sample program called deviceQuery, which reports some status information about the devices currently present:

cd gpu-computing-sdk-3.0/sdk/C/bin/linux/release
./deviceQuery

Likewise, the OpenCL programs are located in ~/gpu-computing-sdk-3.0/OpenCL/bin/linux/release. To run the corresponding OpenCL device query program (the binary is named oclDeviceQuery):

cd ~/gpu-computing-sdk-3.0/OpenCL/bin/linux/release
./oclDeviceQuery

Note that some of the sample programs use graphics and require a local X connection to work properly, so they cannot be used remotely, for example through SSH X forwarding.
