The CUDA Handbook: A Comprehensive Guide to GPU Programming [English] [Paperback]

Nicholas Wilt


Formats

Format          Amazon Price  New from  Used from
Kindle Edition  EUR 23.06
Paperback       EUR 47.73

Frequently bought together

The CUDA Handbook: A Comprehensive Guide to GPU Programming + Programming Massively Parallel Processors: A Hands-on Approach + CUDA Programming: A Developer's Guide to Parallel Computing with GPUs


Product Description

From the Publisher

 

The CUDA Handbook begins where CUDA by Example (Addison-Wesley, 2011) leaves off, discussing CUDA hardware and software in greater detail and covering both CUDA 5.0 and Kepler. Every CUDA developer, from the casual to the most sophisticated, will find something here of interest and immediate usefulness. Newer CUDA developers will see how the hardware processes commands and how the driver checks progress; more experienced CUDA developers will appreciate the expert coverage of topics such as the driver API and context migration, as well as the guidance on how best to structure CPU/GPU data interchange and synchronization.

 

The accompanying open source code (more than 25,000 lines of it, freely available at www.cudahandbook.com) is specifically intended to be reused and repurposed by developers.

 

Designed to be both a comprehensive reference and a practical cookbook, the text is divided into the following three parts:

Part I, Overview, gives high-level descriptions of the hardware and software that make CUDA possible.


Part II, Details, provides thorough descriptions of every aspect of CUDA, including

  • Memory
  • Streams and events
  • Models of execution, including the dynamic parallelism feature, new with CUDA 5.0 and SM 3.5
  • The streaming multiprocessors, including descriptions of all features through SM 3.5
  • Programming multiple GPUs
  • Texturing

The source code accompanying Part II is presented as reusable microbenchmarks and microdemos, designed to expose specific hardware characteristics or highlight specific use cases.


Part III, Select Applications, details specific families of CUDA applications and key parallel algorithms, including

  • Streaming workloads
  • Reduction
  • Parallel prefix sum (Scan)
  • N-body
  • Image processing

These algorithms cover the full range of potential CUDA applications.

 


Customer Reviews

There are no customer reviews on Amazon.fr yet.

Most helpful customer reviews on Amazon.com (beta)
Amazon.com: 4.3 out of 5 stars  6 reviews
11 of 12 people found this review helpful
5.0 out of 5 stars  Excellent but NOT for beginners!  September 4, 2013
By Timothy Masters - Published on Amazon.com
Format: Paperback | Verified Purchase
As one slowly learns CUDA programming, numerous questions arise concerning the internal workings of the GPU. The beginning programmer does many things on faith: the documentation says to do it this way, so you do it that way, and it works. Why was that way necessary? Not clear.

The documentation supplied by nVidia is very good, and several excellent beginners' books are available. But these things fail to answer the many subtle issues that arise. That's where this book comes in. Over and over as I read it, I said, "Ohhh, that's why I have to do it that way." This book was written by a real insider, someone who knows CUDA as only an insider can. So this book is MANDATORY for anyone who wants to become an expert in CUDA programming.

However, be warned that this book is NOT for beginners! It presupposes extensive experience in CUDA programming. If this is the first CUDA book you pick up, you'll be hopelessly lost. Tackle this book only after you have a lot of CUDA under your belt.
8 of 10 people found this review helpful
5.0 out of 5 stars  Put a (Bara)CUDA in your programming.  August 5, 2013
By Robin T. Wernick - Published on Amazon.com
Format: Paperback | Verified Purchase
"The CUDA Handbook" is the largest (480 pp.) and latest (June 2013) of NVIDIA's series of GPU programming books. It is also the most comprehensive and useful GPU programming reference for programmers to date. It's a tough world out there for programmers who are trying to keep up with changes in technology, and this reference makes the future a much more comfortable place to live. Learn about GPGPU programming and get ahead of the crowd.

For those programmers who haven't had the time to perceive the changes, GPU programming is a current change in programming design that is sweeping the world of network VOIP management, parallel analysis and simulation, and even supercomputing in a single box. I have personally run a Starfield Simulation on a portable with an i7 processor that increased in speed 112 times by using the internal NVIDIA GeForce 570M. The Starfield frame time dropped from about 2 seconds to about 0.015 sec. Imagine what I could do with a GeForce 690! Charts indicate that it might exceed 700 times the computing speed! This book not only tells me how to arrange the software to work with the NVIDIA SDK, but it also shows me the important differences in the architecture of many of the NVIDIA cards to obtain optimum performance.

The world of computing is still filled with 32-bit machines (or OSes) using most of their memory to get their assigned tasks completed. Many of these machines do not have even four CPU cores, let alone more than 4GB of memory. They fill computers in production devices, desktops in database support companies, and the racks of IT departments everywhere. The need for faster and more computing does not slow down or stop for these hardware limits. And the cost to replace them outright is prohibitive. Now, a demand to manage 5000 computer domains arrives, or a messaging demand for 1500 VOIP channels to be mixed in a hundred groups is brought on board, or a control simulation to manage six robotic arms in an assembly line needs to be run. Without clustering a dozen to a hundred other computers to manage the computing load, the only practical solution is to employ one or two GPUs. Projects that ignore this message are destined to fail, and along with that comes damaged careers and lost jobs.

The solution to avoiding the trap of limited legacy hardware is to use GPUs to take up the load and stop overloading the limited memory and CPU cores with the increased workload. Each GPU can add 2300 streaming processors to perform the work. And each GPU card can add 4GB of high-speed memory to the limited program memory on the motherboard, which may only be 2GB.

The book introduces the GPU architecture, GPU device memory usage and loading, and kernel code design. Once you have mastered the terminology and run some of the examples, you will be able to start developing code for specific solutions. The first chapters introduce you to NVIDIA GPU devices. The meat of the book starts in Chapter 5 with proper memory handling procedures. Chapter 7 expands the material on blocks, threads, warps, and lanes; it will straighten out the terminology and get you headed into constructive code to manage the upcoming design.

If your task goes beyond the capabilities of a single GPU, Chapter 9 introduces multiple-GPU programming management. The choice of one of the later client motherboards provides up to four PCIe sockets, with the potential of holding four GPUs. That kind of supercomputing ability for about $500 a GPU can meet even a gamer's budget. Be aware, though, that added complexity requires added design refinement. Routines need to be optimized; Chapter 11 will help you reduce memory usage, and Chapter 12 will help you increase the efficiency of warp usage.

Three more chapters involve reductions for routines used in specialized applications that may become of interest to you and are also helpful in further mastering the concepts needed to master GPU computing.

Personally, I have a financial program that exceeded my i7 CPU capability for prediction using neural networking because it took more than all night to determine ranking for 400,000 stocks. And I thought that the one hour download time off the internet was onerous. Now I have an affordable solution that won't require me to build a shed out in the backyard to hold all the computers that would normally be required to add this feature to my design. All I have to pay for is a bigger power supply and a single GPU card. Happy computing!
4 of 5 people found this review helpful
5.0 out of 5 stars  Fantastic Book  August 27, 2013
By pafluxa - Published on Amazon.com
Format: Paperback | Verified Purchase
This book is a must have if you want to dive into the GPU programming world. It is written in a user-friendly language; it is not a "CUDA manual", because even if it describes certain functions and technical aspects of CUDA, the book explains the main features of it by addressing (simplified) real life problems in a very pedagogical way. The book also includes a not-so-extensive review of Dynamic parallelism (which is why I bought the book in the first place), but it should be more than sufficient for most CUDA "newbies" like me.

I can't say much more about this book except this: if you really want to learn CUDA, buy it. You won't be disappointed.
3 of 4 people found this review helpful
5.0 out of 5 stars  It'll be a classic  September 12, 2013
By cuda.geek - Published on Amazon.com
Format: Paperback | Verified Purchase
I know good books about C++, template metaprogramming, and C#. They have become classics for people devoted to CS. For CUDA we have only a few books, and basically none of them answers the question "why." But Nicholas does!

I really love it.

The only thing that is not so good, from my point of view, is the last part about common algorithms; I think people who read this book already know them. But anyway, that's only my feeling.
4.0 out of 5 stars  Not about the car.  July 13, 2014
By Sundadar - Published on Amazon.com
Format: Paperback | Verified Purchase
I bought this expecting to read about how to work on my 1970 Plymouth Barracuda, but discovered it was actually detailing the operation of a proprietary language for writing scientific computation on nvidia GPUs. Despite the surprise, I settled in to read a gripping tale of tile caches, processor groups, and efficient matrix formats. Definitely helpful if that is the sort of thing you are into. I used it to write a CNN learner for engine tuning, and now my 'cuda runs smoothly and with its original 220hp, and I can get my smog check.