Update: this article in BYTE magazine (subscription required) describes how Intel really wants programmers to write parallel code.
This EE Times report from the Hot Chips conference, "NVIDIA scientist calls for expanded research into parallelism," raises one of the "dirty little secrets" of all the hype about multi-core CPUs -- it is hard to make applications multi-threaded! Do we already have a good programming language for describing an application's parallelism, or is a new language needed?
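To make that difficulty concrete, here is a minimal sketch of my own (not from the article) showing the coordination a programmer must get right for even a trivially shared counter. It is standard C++11 (also valid as CUDA host code): every thread has to agree on the lock, and forgetting it produces a silent data race rather than a compile error.

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    long counter = 0;          // shared state: the root of the difficulty
    std::mutex counter_lock;   // every thread must cooperate on this lock

    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&counter, &counter_lock] {
            for (int i = 0; i < 100000; ++i) {
                // Omit this lock and the program still compiles and
                // often "works" -- the data race only shows up later.
                std::lock_guard<std::mutex> guard(counter_lock);
                ++counter;
            }
        });
    }
    for (auto& w : workers) w.join();

    std::cout << counter << '\n';  // 400000 only because of the lock
    return 0;
}
```

Nothing in the language marks `counter` as shared or forces the lock to be taken, which is exactly the kind of gap the "new language" question is about.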
Meanwhile, those working in the graphics arena are ideally suited to taking advantage of Moore's Law:
Kirk contrasted this situation against the entirely different structure inside the GPU. "Graphics has been called embarrassingly parallel," he said. "In effect, each stage in our pipeline, each vertex in the scene and each pixel in the image is independent. And we have put a lot of effort into not concealing this parallelism with programming." This allows a GPU developer to simply add more vertex processors and shading engines to handle more vertices and more pixels in parallel, as process technology allows. "We are limited by chip area, not by parallelism," Kirk observed.
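A minimal CUDA sketch (my own illustration; the kernel and buffer names are invented, not from the article) of what Kirk is describing: each pixel's work is its own independent thread, no thread reads another thread's output, so the hardware is free to run as many at once as the chip area allows, with no change to the code.

```cuda
#include <cuda_runtime.h>

// Each thread shades exactly one pixel. Because no pixel depends on
// any other, the GPU can schedule any number of these threads in
// parallel -- the "embarrassingly parallel" property.
__global__ void shade(float* image, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Toy "shader": brightness as a function of position only.
    image[y * width + x] = (float)(x + y) / (float)(width + height);
}

int main() {
    const int width = 1024, height = 768;
    float* image;
    cudaMalloc(&image, width * height * sizeof(float));

    dim3 block(16, 16);  // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    shade<<<grid, block>>>(image, width, height);
    cudaDeviceSynchronize();

    cudaFree(image);
    return 0;
}
```

Note that nothing here mentions how many processors the GPU has: a part with twice the shading engines runs the same binary, just faster, which is the scaling-with-process-technology point Kirk makes above.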