CUDA Programming: A Developer's Guide to Parallel Computing with GPUs
Newnes, 28 Dec 2012, 600 pages

If you need to learn CUDA but don't have experience with parallel computing, CUDA Programming: A Developer's Guide to Parallel Computing with GPUs offers a detailed guide to CUDA with a grounding in parallel fundamentals. It starts by introducing CUDA and bringing you up to speed on GPU parallelism and hardware, then delves into CUDA installation. Chapters on core concepts, including threads, blocks, grids, and memory, focus on both parallel and CUDA-specific issues. Later, the book demonstrates CUDA in practice for optimizing applications, adjusting to new hardware, and solving common problems.
From inside the book
Results 1-5 of 89
Page 22
... threads or POSIX threads) is taken care of for you by OpenMP. The MPI ... number of nodes in the network. The Achilles' heel of any network is the ... threads to allow the multiple cores on the CPU to be exploited. A thread is a separate ...
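The excerpt above contrasts OpenMP, where the compiler manages the underlying Windows or POSIX threads for you, with the explicit threading needed to exploit multiple CPU cores. As a minimal sketch of that idea (not taken from the book; compiled with an OpenMP-enabled compiler flag such as -fopenmp), a single pragma spreads a loop across the CPU cores:

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const int N = 1024;
        static float a[1024], b[1024];

        /* OpenMP creates a team of threads, typically one per CPU core,
           and divides the loop iterations among them. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = 2.0f * b[i];

        printf("ran with up to %d threads\n", omp_get_max_threads());
        return 0;
    }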
Page 24
... threads which operate cooperatively in batches called warps. We will look at ... number of steps. For these, consider each step or iteration individually. Can ... number of problems are known as “embarrassingly parallel,” a term that ...
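An embarrassingly parallel problem maps naturally onto CUDA threads because no iteration depends on another. A minimal sketch of such a kernel (the kernel name and parameters are illustrative, not from the book); threads are scheduled in warps of 32, and each thread handles exactly one element:

    __global__ void scale(float *out, const float *in, float factor, int n)
    {
        /* Compute this thread's global index from its block and thread IDs. */
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      /* guard against a partially full last block */
            out[i] = in[i] * factor;
    }

    /* Launch: enough blocks of 256 threads to cover n elements, e.g.
       scale<<<(n + 255) / 256, 256>>>(d_out, d_in, 2.0f, n); */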
Page 25
... threads. To perform some action, central command (the kernel/host program) must provide some action plus some data ... number of the jobs he will attend are similar, so the same four tools are repeatedly used (a cache hit). However, a ...
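In the excerpt above, the host program supplies both the action (the kernel) and the data it operates on. A hedged sketch of that pattern, reusing the illustrative scale kernel from the earlier example:

    #include <cuda_runtime.h>

    /* assumes: __global__ void scale(float*, const float*, float, int); */

    void run_on_gpu(const float *h_in, float *h_out, int n)
    {
        float *d_in, *d_out;
        size_t bytes = n * sizeof(float);

        cudaMalloc(&d_in, bytes);                                /* data: device buffers  */
        cudaMalloc(&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);   /* data: copy to device  */

        scale<<<(n + 255) / 256, 256>>>(d_out, d_in, 2.0f, n);   /* action: launch kernel */

        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost); /* results back to host  */
        cudaFree(d_in);
        cudaFree(d_out);
    }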
Page 30
... number of cores (SMs) per device. Let's assume for simplicity that we implement a kernel as four blocks. Thus, we have four kernels on the GPU and four processes or threads on the CPU. The CPU may support mechanisms such as ...
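A hedged sketch of the four-block configuration described above, again reusing the illustrative scale kernel; the hardware scheduler distributes the four blocks across the available SMs:

    dim3 grid(4);     /* four blocks, as in the book's simplified example */
    dim3 block(256);  /* 256 threads per block (illustrative choice)      */
    scale<<<grid, block>>>(d_out, d_in, 2.0f, n);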
Page 32
... number of logical hardware threads available. For the GPU, this is the number of SMs multiplied by the maximum load we can give to each SM, 1 to 16 blocks depending on resource usage. Notice we use the term logical and not physical ...
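The SM count referred to above can be queried at runtime; multiplying it by the number of resident blocks an SM can hold (1 to 16 on Fermi-class hardware, depending on resource usage) gives the logical capacity the text describes. A small sketch using the CUDA runtime API:

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);   /* query device 0 */

        /* Up to 16 resident blocks per SM on Fermi, fewer if registers
           or shared memory limit occupancy. */
        int max_blocks_per_sm = 16;
        printf("SMs: %d, logical block capacity: %d\n",
               prop.multiProcessorCount,
               prop.multiProcessorCount * max_blocks_per_sm);
        return 0;
    }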
Contents
Chapter 8 Multi-CPU and Multi-GPU Solutions | 267
Chapter 9 Optimizing Your Application | 305
Chapter 10 Libraries and SDK | 441
Chapter 11 Designing GPU-Based Systems | 503
Chapter 12 Common Problems, Causes, and Solutions | 527
Index | 565
Other editions
CUDA Programming: A Developer's Guide to Parallel Computing with GPUs, Shane Cook, Limited preview - 2012
Common terms and phrases
256 threads algorithm allocate application array atomic atomic operations blockDim.x blockIdx.x bytes calculation compiler compute 2.x const int const u32 constant memory copy CUDA_CALL CUDA cores dataset device device_num elements example execution Fermi Figure function GB/s GeForce GTX 470 global memory GMEM hardware host memory InfiniBand instruction issue iterations Kepler kernel L1 cache latency Linux look loop malloc Memcpy memory access memory bandwidth memory fetch merge sort node num_elem num_elements num_threads number of blocks number of threads NVIDIA OpenMP operation optimization output Parallel Nsight parameter PCI-E performance pointer prefix sum problem processor radix sort reduce registers result serial shared memory SIMD simply single SP speedup stream synchronization Tesla threadIdx.x threads per block transfer typically uint4 unsigned int usage version is faster void warp write