CUDA Programming: A Developer's Guide to Parallel Computing with GPUs
Newnes, Dec 28, 2012 - 600 pages
If you need to learn CUDA but have no experience with parallel computing, CUDA Programming: A Developer's Introduction offers a detailed guide to CUDA with a grounding in parallel fundamentals. It starts by introducing CUDA and bringing you up to speed on GPU parallelism and hardware, then delves into CUDA installation. Chapters on core concepts, including threads, blocks, grids, and memory, focus on both parallel and CUDA-specific issues. Later, the book demonstrates CUDA in practice: optimizing applications, adjusting to new hardware, and solving common problems.
Results 1-5 of 95
This is a common problem with any pipeline-based model of execution. The alternative of putting everything on one SPE and then having each SPE process a small chunk of data is often more efficient.
This is not to say one is better than the other, for the traditional CPUs are aimed at serial code execution and are extremely good at it. They contain special hardware such as branch prediction units, multiple caches, etc., ...
The SPs execute work as parallel sets of up to 32 units. They eliminate a lot of the complex circuitry needed on CPUs to achieve high-speed serial execution through instruction-level parallelism.
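The excerpt above describes the SPs executing threads in parallel sets of up to 32 units (a warp). A minimal sketch of how a thread can derive its warp and lane from its flat index; the kernel and buffer names are illustrative, not from the book:

```cuda
#include <cuda_runtime.h>

// Each thread computes which 32-wide warp it belongs to and its
// position (lane) within that warp, from its flat global index.
__global__ void show_warp_layout(int *warp_id, int *lane_id, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        warp_id[i] = i / 32;  // threads 0..31 form warp 0, 32..63 warp 1, ...
        lane_id[i] = i % 32;  // lane of this thread within its warp
    }
}
```

Because all 32 lanes of a warp issue the same instruction together, divergent branches within a warp serialize, which is one reason the GPU can omit the branch-prediction hardware mentioned above.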
In a completely unrelated section of the code a pointer was not initialized under certain conditions. Due to the way the program ran, some of the time, depending on the thread execution order, the pointer would point to our ...
With GPUs we see coarse-grained parallelism only at the level of the GPU card and the execution of GPU kernels. GPUs support the pipeline ... First, kernels can be pushed into a single stream and separate streams executed concurrently.
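The snippet's point about streams can be sketched as follows: work pushed into one stream runs in issue order, while work in different streams may overlap. This is a hedged illustration, not code from the book; kernelA, kernelB, and N are hypothetical names:

```cuda
#include <cuda_runtime.h>

__global__ void kernelA(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

__global__ void kernelB(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;
}

int main(void)
{
    const int N = 1 << 20;
    float *bufA, *bufB;
    cudaMalloc(&bufA, N * sizeof(float));
    cudaMalloc(&bufB, N * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Kernels launched into the same stream serialize; these two are
    // in separate streams, so the hardware is free to run them
    // concurrently if resources allow.
    kernelA<<<N / 256, 256, 0, s0>>>(bufA, N);
    kernelB<<<N / 256, 256, 0, s1>>>(bufB, N);

    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(bufA);
    cudaFree(bufB);
    return 0;
}
```

Pushing both kernels into s0 instead would recover the single-stream, in-order pipeline behavior the excerpt contrasts against.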
Chapter 8 Multi-CPU and Multi-GPU Solutions
Chapter 9 Optimizing Your Application
Chapter 10 Libraries and SDK
Chapter 11 Designing GPU-Based Systems
Chapter 12 Common Problems, Causes, and Solutions