CUDA Programming: A Developer's Guide to Parallel Computing with GPUs
Newnes, December 28, 2012 - 600 pages

If you need to learn CUDA but don't have experience with parallel computing, CUDA Programming: A Developer's Guide to Parallel Computing with GPUs offers a detailed introduction to CUDA with a grounding in parallel fundamentals. It starts by introducing CUDA, bringing you up to speed on GPU parallelism and hardware, and then walks through CUDA installation. Chapters on core concepts, including threads, blocks, grids, and memory, focus on both general parallel and CUDA-specific issues. Later, the book demonstrates CUDA in practice: optimizing applications, adjusting to new hardware, and solving common problems.
From this book
Results 1-5 of 57
Page 3
... parameter i would be loaded into another register. The loop exit condition, 100, is loaded into another register or possibly encoded into the instruction stream as a literal value. The computer would then iterate around the same ...
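A minimal sketch of the kind of serial loop this excerpt describes; the function and array names are illustrative only, not taken from the book:

    /* The counter i is typically kept in a register, and the exit
     * condition 100 is encoded as a literal (immediate) value in the
     * compare instruction; the CPU iterates around the same body. */
    void scale_array(int *data)
    {
        for (int i = 0; i < 100; i++)
        {
            data[i] *= 2;   /* work done on each iteration */
        }
    }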
Page 23
... parameter. This is one of the unfortunate areas of thread-based operations; they operate with a shared memory space. This can be both an advantage in terms of not having to formally exchange data via messages, and a disadvantage in the ...
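A brief sketch of the trade-off the excerpt points at: because all threads see the same address space, no explicit message passing is needed, but concurrent updates must be coordinated. The kernel and its names below are illustrative assumptions, not the book's example:

    #include <cuda_runtime.h>

    // Every thread reads the shared (global) array directly, but the
    // shared counter must be updated atomically to avoid a data race.
    __global__ void count_positive(const int *data, int num_elements, int *counter)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid < num_elements && data[tid] > 0)
        {
            atomicAdd(counter, 1);   // a plain (*counter)++ would race
        }
    }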
Page 36
... parameters to be pushed onto the stack along with any local variables. GPUs and CPUs implement a stack in the same way, simply an area of memory from the global memory space. Although CPUs and the Fermi-class GPUs cache this area ...
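As a hypothetical illustration of device-side stack use (the area of memory the passage refers to), here is a small recursive device function; recursion requires a Fermi-class (compute 2.x) or later GPU, and the function is made up for this sketch:

    // Each call pushes its parameter and return address onto the
    // per-thread stack, which on the GPU is carved out of (cached)
    // local/global memory rather than dedicated hardware.
    __device__ int factorial(int n)
    {
        if (n <= 1)
            return 1;
        return n * factorial(n - 1);   // each recursive call grows the stack
    }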
Page 74
... parameter, different for each thread, which defines the thread ID or number. You can use this to directly index into the array. This is very similar to MPI, where you get the process rank for each process. The thread information is ...
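A minimal sketch of the idea in this excerpt: each thread reads its own ID from the built-in threadIdx variable and uses it to index the array directly, much as an MPI process would use its rank. The kernel name is illustrative:

    // One thread per element: the thread ID selects which element to write.
    __global__ void fill_array(unsigned int *data)
    {
        unsigned int idx = threadIdx.x;   // unique per thread within the block
        data[idx] = idx;                  // each thread writes its own element
    }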
Page 77
... parameters you can pass, and we'll come back to this, but for now you have two important parameters to look at ... parameter is simply the number of threads you wish to launch into the kernel. For this simple example, this directly ...
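A hedged sketch of the launch syntax this excerpt refers to: the execution configuration between <<< and >>> takes, among others, the number of blocks and the number of threads per block, and for a single-block case the thread count maps directly onto the array elements. Sizes and names below are assumptions for illustration:

    #include <cuda_runtime.h>

    // Trivial kernel so the launch below is self-contained.
    __global__ void fill_array(unsigned int *data)
    {
        data[threadIdx.x] = threadIdx.x;
    }

    int main(void)
    {
        const int num_threads = 128;            // threads launched into the kernel
        unsigned int *gpu_data;

        cudaMalloc((void **)&gpu_data, num_threads * sizeof(unsigned int));

        // Execution configuration: <<<number of blocks, threads per block>>>
        fill_array<<<1, num_threads>>>(gpu_data);

        cudaDeviceSynchronize();                // wait for the kernel to finish
        cudaFree(gpu_data);
        return 0;
    }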
Contents
Chapter 8 Multi-CPU and Multi-GPU Solutions | 267 |
Chapter 9 Optimizing Your Application | 305 |
Chapter 10 Libraries and SDK | 441 |
Chapter 11 Designing GPU-Based Systems | 503 |
Chapter 12 Common Problems, Causes, and Solutions | 527 |
Index | 565 |
Other editions
CUDA Programming: A Developer's Guide to Parallel Computing with GPUs, Shane Cook, Limited preview - 2012
Common terms and phrases
256 threads algorithm allocate application array atomic atomic operations blockDim.x blockIdx.x bytes calculation compiler compute 2.x const int const u32 constant memory copy CUDA CALL cuda CUDA cores dataset device device_num elements example execution Fermi Figure function GB/s GeForce GTX 470:GMEM global memory GMEM hardware host memory ID:0 GeForce GTX InfiniBand instruction issue iterations Kepler kernel L1 cache latency Linux look loop malloc Memcpy memory access memory bandwidth memory fetch merge sort node num_elem num_elements num_threads number of blocks number of threads NVIDIA OpenMP operation optimization output Parallel Nsight parameter PCI-E performance pointer prefix sum problem processor radix sort reduce registers result serial shared memory SIMD simply single SP SP SP speedup stream synchronization Tesla threadIdx.x threads per block transfer typically uint4 unsigned int usage version is faster void warp write