FiringSquad Editors Challenge Round...
| GPGPU and the High Performance Computing arena (5 comments)|
by: Techno+ (1) | Posted in cluster FiringSquad Editors Challenge Round 1 Prelim 2
*The concrete definition of GPGPU
For those of you who still haven't heard of it, GPGPU stands for General Purpose Computation on Graphics Processing Units. In the last twenty years, GPUs have evolved from fixed-function processors into massively parallel floating point engines; hence the idea of GPGPU was born: take advantage of the processing power of GPUs for non-graphics tasks.
*Do we really need GPGPU for HPC?
The idiom 'jack of all trades, master of none' applies perfectly to today's CPUs. They are completely general purpose, able to handle any kind of workload (graphics, sound, scientific computing, etc.), but they are slow and inefficient enough at each of these that specialized hardware can outperform them by orders of magnitude. Graphics processing is a floating-point-heavy and embarrassingly parallel workload, and since GPUs are designed for and thus well suited to graphics, they outperform CPUs in FP32 applications today and, in the near future, will do so in FP64 applications. These applications include scientific research, CAD/CAM, digital content creation, and other fields that share many inner features with graphics.
GPUs dedicate most of their transistors to floating point execution units, so they offer far more FLOPS, and their performance roughly doubles from generation to generation, unlike CPUs, which spend most of their transistors on branch prediction, caches, and other control logic. The difference is most noticeable with current hardware: Intel's latest quad-core product, the Core 2 Extreme QX6700, runs at 2.66 GHz and has a peak floating point performance of about 50 GFLOPS, while NVIDIA's latest GPU, the G80, has a peak floating point performance of about 330 GFLOPS.
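To see where headline numbers like these come from, here is a back-of-the-envelope peak-FLOPS model: peak = units × clock × FLOPs per cycle per unit. The per-cycle figures below are illustrative assumptions on my part (e.g. 4 SSE FLOPs/cycle/core for the CPU, a 2-FLOP multiply-add per stream processor for a G80-class GPU), not vendor specifications:

```python
# Peak FLOPS = execution units * clock (GHz) * FLOPs per cycle per unit.
# All per-cycle figures are illustrative assumptions, not vendor specs.

def peak_gflops(units, clock_ghz, flops_per_cycle):
    return units * clock_ghz * flops_per_cycle

# Quad-core CPU: assume 4 cores at 2.66 GHz, 4 SSE FLOPs/cycle/core
cpu = peak_gflops(4, 2.66, 4)      # ~42.6 GFLOPS

# G80-class GPU: assume 128 stream processors at 1.35 GHz,
# 2 FLOPs/cycle (one multiply-add)
gpu = peak_gflops(128, 1.35, 2)    # ~345.6 GFLOPS

print(cpu, gpu)
```

Even with generous assumptions for the CPU, the GPU's sheer number of execution units puts it several times ahead on raw peak throughput, which is the whole premise of GPGPU.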
What does all this mean? Putting it simply, GPUs are more efficient than CPUs, from both the cost and the power consumption perspectives, for FP32 (and, in the near future, FP64) applications. Now you answer the question: do we really need GPGPU?
An impediment that until today has kept GPUs out of the HPC arena is their inability to perform 64-bit (double precision) floating point operations, which almost all scientific applications require; this puts them at a clear disadvantage. On the other hand, they are a lot cheaper than the dedicated vector processors that current supercomputers use, which is a major point in their favor. FP64-capable GPUs are on both AMD's and NVIDIA's roadmaps: NVIDIA promises FP64 GPUs by late 2007, while AMD hasn't announced anything. When FP64 GPUs are released, I expect them to have a big effect on the HPC industry.
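Why is FP64 such a hard requirement? FP32 carries only about 7 decimal digits of precision, so small contributions simply vanish when added to larger sums, which poisons the long accumulations typical of scientific codes. A minimal sketch, using Python's `struct` module to round values to FP32 (Python's own floats are FP64):

```python
import struct

def to_f32(x):
    # round a Python float (FP64) to the nearest representable FP32 value
    return struct.unpack('f', struct.pack('f', x))[0]

small = 1e-8            # below FP32's machine epsilon (~1.19e-7) relative to 1.0
f64_sum = 1.0 + small                           # FP64 keeps the contribution
f32_sum = to_f32(to_f32(1.0) + to_f32(small))   # FP32 loses it entirely

print(f64_sum > 1.0)    # FP64 resolves the update
print(f32_sum == 1.0)   # FP32 swallowed it
```

In an iterative solver, updates like this happen millions of times; in FP32 they can disappear entirely, which is why the scientific community insists on double precision.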
If you have been tracking the latest hardware news, you will have seen a lot of debate about how GPUs perform against STI's Cell in the HPC arena. Up until now, there haven't been any practical supercomputer implementations of the Cell processor, so I cannot really comment on this. Keep in mind, though, that GPUs and the Cell chip share many inner workings, and both provide large amounts of FLOPS.
Recently, AGEIA, the makers of the PhysX chip, said they are targeting PPUs at non-gaming applications, i.e. GPPPU (I made this name up). If AGEIA plays this game well, and I'm sure they will try their best to, they will be a big threat to both the Cell chip and GPUs. One problem with using this chip for supercomputing is that it is more specialized than GPUs and Cell chips, but it sports a very low power consumption of 20 W, and undoubtedly any supercomputer engineer would love that.
*AMD Fusion: 'ready or not, here I come'
When AMD completed its acquisition of ATI, they announced a CGPU program called 'Fusion', which would integrate the CPU and the GPU at the silicon level. How can this help supercomputing? The on-die GPU can be used as a floating point stream processor, which would mean a significant boost in FLOPS per chip. And because the CPU and GPU will be connected by a crossbar, CPU-to-GPU bandwidth would increase dramatically while power consumption drops sharply compared to a CPU and GPU on separate chips. Surely AMD will make a lot of money out of this; however, Intel isn't going to stand still and watch AMD gobble up market share. They are currently developing their own CPU-GPU product but have been very tight-lipped about it, and there are rumours that NVIDIA, too, is developing its own CGPU.
*Folding@Home: a perfect example of the capabilities of GPUs
Stanford University has done extensive research on using GPGPU for Folding@Home (FAH), and has observed a 40x-50x performance increase using GPUs over CPUs in its protein folding research.
*AMD's and NVIDIA's progress in the GPGPU space
Months ago, AMD released a product it calls the 'Stream Processor' for stream computing, built on an X1900 chip with the ROPs and TMUs disabled. Unfortunately it didn't gain much popularity, but AMD isn't giving up and is planning a 'Stream Processor 2' product based on its new R600 GPU; hopefully this will be a killer GPGPU product.
Both AMD and NVIDIA have released GPGPU SDKs. AMD's Close To Metal API gives developers low-level access to their GPU hardware through a thin hardware-software interface, providing over a 10x increase in performance compared to writing GPGPU applications on top of normal 3D APIs.
NVIDIA have recently released their CUDA SDK, which allows developers to write applications for their G80 GPU using the C programming language. CUDA is only supported by the G80 for now, but developers have NVIDIA's word that applications written using CUDA will work on all future GPU products made by NVIDIA.
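The core idea behind the CUDA model is simple: you write one scalar "kernel" function that is logically executed once per data element, and the GPU runs thousands of those invocations in parallel. A toy sketch of that model in Python (the names `saxpy_kernel` and `launch` are my own illustrations; this runs the "threads" sequentially and only mimics the shape of a real CUDA program, not its API):

```python
# Toy sketch of the data-parallel kernel model behind CUDA.
# A kernel is one scalar function applied independently at every index;
# on a GPU, each index would be executed by its own hardware thread.

def saxpy_kernel(i, a, x, y, out):
    # body "executed by one GPU thread" for element i: out = a*x + y
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # stand-in for a GPU kernel launch over n threads;
    # here we just loop, since plain Python has no GPU
    for i in range(n):
        kernel(i, *args)

n = 4
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * n
launch(saxpy_kernel, n, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

Because every index is independent, the work is embarrassingly parallel, which is exactly the property that lets a GPU's hundreds of stream processors chew through it at once.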
The GPGPU concept is gaining a lot of momentum in the computer industry, but IMO it still isn't clear whether it will be accepted as an industry standard. Rest assured, though, that AMD, NVIDIA, and in the near future Intel will be doing their best to make it a success.
|5 User Comment(s) • 5 root comment(s)|
| GrapeApe (36) Mar 05, 2007 - 01:12 am|
|Interesting, but I think you're confusing some of the implications of GPGPU and what it can and can't do. |
Also, FP64 isn't required to be up to the task of a CPU, as FP64 refers to 'per channel', meaning it's 256-bit overall.
Older FP16 VPUs were 64-bit computational.
The main problem with them is the programming, and while C and a few other direct compiler apps can be run through the GPUs, the main issue is X86 and X64 extensions (be it AMD64 or Intel EM64T) to give them wider application beyond just being math co-processors. The future for high-end arrays and such will likely go towards massively parallel smaller cores, and while multi-VPUs would work too, the move towards all-purpose units in Intel's and AMD's future designs likely means that something closer to IBM's Power series is the future IMO, with many cores instead of two disparate units needing to balance loads. Folding the FP power into the cores makes sense long term, not to the level of the 'CELL' but to something more like a partial CELL.
But interesting take on it, just needs some polishing IMO.
| Salman (4) Feb 26, 2007 - 11:21 am | Edited on Feb 26, 2007 - 11:28 am|
|This article has potential, but it needs to be fleshed out. If you look closely at firingsquad's and anandtech's hardware articles, you'll see that they're written for the lowest common denominator of readership that visits their site.|
Dumbing things down is important, don't make assumptions about your readership.
What is the significance of GPGPU to your reader? Keep this at the forefront of your mind while you write this article.
You can still revise and repost it. I'd be glad to re-vote then.
| Techno+ (1) Feb 26, 2007 - 07:51 am|
|Thanks for your comment GX-Alan, now I know what I have to work on the next time I post anything. I'm really happy with the votes I got; it isn't too bad for my first review ever.|
| GX-Alan (78) Feb 24, 2007 - 06:13 pm|
|Good concept, solid research, but you will need to walk someone unfamiliar with the concept of GPGPU through. I think it's the organization that's off...|