Photos show the actual copy; the book is in average condition, please examine the pictures closely.
Pacheco (US) / China Machine Press (机械工业出版社) / 2011-11 / Paperback
Price: ¥5.00
Condition: 7/10
Promotion: free shipping on qualifying orders
Delayed-shipping notice
Listed: 2023-07-22
The seller has not logged in for more than 10 days
An Introduction to Parallel Programming (English Edition)
The book takes a tutorial approach, starting from short programming examples and working step by step toward more challenging programs. It focuses on the design, debugging, and performance evaluation of distributed-memory and shared-memory programs, using the MPI, Pthreads, and OpenMP programming models, with an emphasis on hands-on development of parallel code. Parallel programming is no longer a discipline reserved for specialists: to fully exploit the computing power of clusters and multicore processors, learning both distributed-memory and shared-memory parallel programming is indispensable. An Introduction to Parallel Programming (English Edition), by Peter S. Pacheco, shows step by step how to use MPI, Pthreads, and OpenMP to develop efficient parallel programs, and teaches readers how to develop and debug distributed-memory and shared-memory programs and evaluate their performance.
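To give a flavor of the short programs the book starts from, here is a minimal MPI "hello world" sketch in C. It is an illustrative example, not code taken from the book: each process reports its rank and the total number of processes.

/* Minimal MPI sketch (illustrative, not from the book). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down MPI */
    return 0;
}

Compiled with mpicc and launched with mpiexec -n <p>, each of the p processes prints one line; Chapter 3 of the book (see the contents below) builds from programs of roughly this size toward the trapezoidal rule and a parallel sorting algorithm.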
Pacheco (Peter S. Pacheco) holds a PhD in mathematics from Florida State University. He has served as chair of the Computer Science Department at the University of San Francisco and is currently chair of its Mathematics Department. For nearly 20 years he has taught parallel computing courses to undergraduate and graduate students.
Contents
CHAPTER 1 Why Parallel Computing?
  1.1 Why We Need Ever-Increasing Performance
  1.2 Why We're Building Parallel Systems
  1.3 Why We Need to Write Parallel Programs
  1.4 How Do We Write Parallel Programs?
  1.5 What We'll Be Doing
  1.6 Concurrent, Parallel, Distributed
  1.7 The Rest of the Book
  1.8 A Word of Warning
  1.9 Typographical Conventions
  1.10 Summary
  1.11 Exercises
CHAPTER 2 Parallel Hardware and Parallel Software
  2.1 Some Background
    2.1.1 The von Neumann architecture
    2.1.2 Processes, multitasking, and threads
  2.2 Modifications to the von Neumann Model
    2.2.1 The basics of caching
    2.2.2 Cache mappings
    2.2.3 Caches and programs: an example
    2.2.4 Virtual memory
    2.2.5 Instruction-level parallelism
    2.2.6 Hardware multithreading
  2.3 Parallel Hardware
    2.3.1 SIMD systems
    2.3.2 MIMD systems
    2.3.3 Interconnection networks
    2.3.4 Cache coherence
    2.3.5 Shared-memory versus distributed-memory
  2.4 Parallel Software
    2.4.1 Caveats
    2.4.2 Coordinating the processes/threads
    2.4.3 Shared-memory
    2.4.4 Distributed-memory
    2.4.5 Programming hybrid systems
  2.5 Input and Output
  2.6 Performance
    2.6.1 Speedup and efficiency
    2.6.2 Amdahl's law
    2.6.3 Scalability
    2.6.4 Taking timings
  2.7 Parallel Program Design
    2.7.1 An example
  2.8 Writing and Running Parallel Programs
  2.9 Assumptions
  2.10 Summary
    2.10.1 Serial systems
    2.10.2 Parallel hardware
    2.10.3 Parallel software
    2.10.4 Input and output
    2.10.5 Performance
    2.10.6 Parallel program design
    2.10.7 Assumptions
  2.11 Exercises
CHAPTER 3 Distributed-Memory Programming with MPI
  3.1 Getting Started
    3.1.1 Compilation and execution
    3.1.2 MPI programs
    3.1.3 MPI_Init and MPI_Finalize
    3.1.4 Communicators, MPI_Comm_size and MPI_Comm_rank
    3.1.5 SPMD programs
    3.1.6 Communication
    3.1.7 MPI_Send
    3.1.8 MPI_Recv
    3.1.9 Message matching
    3.1.10 The status_p argument
    3.1.11 Semantics of MPI_Send and MPI_Recv
    3.1.12 Some potential pitfalls
  3.2 The Trapezoidal Rule in MPI
    3.2.1 The trapezoidal rule
    3.2.2 Parallelizing the trapezoidal rule
  3.3 Dealing with I/O
    3.3.1 Output
    3.3.2 Input
  3.4 Collective Communication
    3.4.1 Tree-structured communication
    3.4.2 MPI_Reduce
    3.4.3 Collective vs. point-to-point communications
    3.4.4 MPI_Allreduce
    3.4.5 Broadcast
    3.4.6 Data distributions
    3.4.7 Scatter
    3.4.8 Gather
    3.4.9 Allgather
  3.5 MPI Derived Datatypes
  3.6 Performance Evaluation of MPI Programs
    3.6.1 Taking timings
    3.6.2 Results
    3.6.3 Speedup and efficiency
    3.6.4 Scalability
  3.7 A Parallel Sorting Algorithm
    3.7.1 Some simple serial sorting algorithms
    3.7.2 Parallel odd-even transposition sort
    3.7.3 Safety in MPI programs
    3.7.4 Final details of parallel odd-even sort
  3.8 Summary
  3.9 Exercises
  3.10 Programming Assignments
CHAPTER 4 Shared-Memory Programming with Pthreads
  4.1 Processes, Threads, and Pthreads
  4.2 Hello, World
    4.2.1 Execution
    4.2.2 Preliminaries
    4.2.3 Starting the threads
    4.2.4 Running the threads
    4.2.5 Stopping the threads
    4.2.6 Error checking
    4.2.7 Other approaches to thread startup
  4.3 Matrix-Vector Multiplication
  4.4 Critical Sections
  4.5 Busy-Waiting
  4.6 Mutexes
  4.7 Producer-Consumer Synchronization and Semaphores
  4.8 Barriers and Condition Variables
    4.8.1 Busy-waiting and a mutex
    4.8.2 Semaphores
    4.8.3 Condition variables
    4.8.4 Pthreads barriers
  4.9 Read-Write Locks
    4.9.1 Linked list functions
    4.9.2 A multi-threaded linked list
    4.9.3 Pthreads read-write locks
    4.9.4 Performance of the various implementations
    4.9.5 Implementing read-write locks
  4.10 Caches, Cache Coherence, and False Sharing
  4.11 Thread-Safety
    4.11.1 Incorrect programs can produce correct output
  4.12 Summary
  4.13 Exercises
  4.14 Programming Assignments
CHAPTER 5 Shared-Memory Programming with OpenMP
  5.1 Getting Started
    5.1.1 Compiling and running OpenMP programs
    5.1.2 The program
    5.1.3 Error checking
  5.2 The Trapezoidal Rule
    5.2.1 A first OpenMP version
  5.3 Scope of Variables
  5.4 The Reduction Clause
  5.5 The parallel for Directive
    5.5.1 Caveats
    5.5.2 Data dependences
    5.5.3 Finding loop-carried dependences
    5.5.4 Estimating π
    5.5.5 More on scope
  5.6 More About Loops in OpenMP: Sorting
    5.6.1 Bubble sort
    5.6.2 Odd-even transposition sort
  5.7 Scheduling Loops
    5.7.1 The schedule clause
    5.7.3 The dynamic and guided schedule types
    5.7.4 The runtime schedule type
    5.7.5 Which schedule?
  5.8 Producers and Consumers
    5.8.1 Queues
    5.8.2 Message-passing
    5.8.3 Sending messages
    5.8.4 Receiving messages
    5.8.5 Termination detection
    5.8.6 Startup
    5.8.7 The atomic directive
    5.8.8 Critical sections and locks
    5.8.9 Using locks in the message-passing program
    5.8.10 critical directives, atomic directives, or locks?
    5.8.11 Some caveats
  5.9 Caches, Cache Coherence, and False Sharing
  5.10 Thread-Safety
    5.10.1 Incorrect programs can produce correct output
  5.11 Summary
  5.12 Exercises
  5.13 Programming Assignments
CHAPTER 6 Parallel Program Development
  6.1 Two n-Body Solvers
    6.1.1 The problem
    6.1.2 Two serial programs
    6.1.3 Parallelizing the n-body solvers
    6.1.4 A word about I/O
    6.1.5 Parallelizing the basic solver using OpenMP
    6.1.6 Parallelizing the reduced solver using OpenMP
    6.1.7 Evaluating the OpenMP codes
    6.1.8 Parallelizing the solvers using pthreads
    6.1.9 Parallelizing the basic solver using MPI
    6.1.10 Parallelizing the reduced solver using MPI
    6.1.11 Performance of the MPI solvers
  6.2 Tree Search
    6.2.1 Recursive depth-first search
    6.2.2 Nonrecursive depth-first search
    6.2.3 Data structures for the serial implementations
    6.2.6 A static parallelization of tree search using pthreads
    6.2.7 A dynamic parallelization of tree search using pthreads
    6.2.8 Evaluating the pthreads tree-search programs
    6.2.9 Parallelizing the tree-search programs using OpenMP
    6.2.10 Performance of the OpenMP implementations
    6.2.11 Implementation of tree search using MPI and static partitioning
    6.2.12 Implementation of tree search using MPI and dynamic partitioning
  6.3 A Word of Caution
  6.4 Which API?
  6.5 Summary
    6.5.1 Pthreads and OpenMP
    6.5.2 MPI
  6.6 Exercises
  6.7 Programming Assignments
CHAPTER 7 Where to Go from Here
References
Index
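As a further illustration of topics in the contents above (the trapezoidal rule, the reduction clause, and the parallel for directive in Chapter 5), here is a small OpenMP sketch in C. It is our own example, not the book's code; the integrand f and the integration bounds are arbitrary choices.

/* Illustrative OpenMP trapezoidal-rule sketch (not from the book). */
#include <stdio.h>
#include <omp.h>

/* Example integrand; any smooth function would do. */
static double f(double x) { return x * x; }

int main(void) {
    const double a = 0.0, b = 1.0;     /* integration interval [a, b] */
    const long   n = 1000000;          /* number of trapezoids */
    const double h = (b - a) / n;      /* width of each trapezoid */
    double sum = (f(a) + f(b)) / 2.0;  /* endpoint contributions */

    /* Each thread accumulates a private partial sum; the reduction
       clause combines the partial sums when the loop finishes. */
#pragma omp parallel for reduction(+:sum)
    for (long i = 1; i < n; i++)
        sum += f(a + i * h);

    printf("Estimated integral: %.10f\n", sum * h);
    return 0;
}

Built with a compiler flag such as gcc's -fopenmp, the loop runs across however many threads the runtime provides; without the reduction clause, the concurrent updates to sum would race.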
Shipping information
...