For many years, CPUs handled all non-graphics calculations, while GPUs were used only for rendering graphics. As GPU performance increased, hardware manufacturers and software developers realized that GPUs had a great deal of untapped potential, and they began finding ways to offload certain general-purpose calculations to the GPU. This strategy, called “parallel processing” (also known as general-purpose GPU computing), enables the GPU to perform calculations alongside the CPU, improving overall performance.
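The core idea of this offloading is to split a large, uniform workload so that part of it runs on another processor while the CPU keeps working. Real GPU offload is typically expressed through frameworks such as CUDA or OpenCL; as a simple stand-in, the sketch below uses a second CPU process (via Python's standard library) to show the same split-the-work pattern. The function names and the 50/50 split are illustrative choices, not part of any real GPU API.

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_sum(chunk):
    # Stand-in for a data-parallel task a GPU would excel at:
    # the same simple operation applied to many values.
    return sum(x * x for x in chunk)

def main():
    data = list(range(1_000_000))
    mid = len(data) // 2

    with ProcessPoolExecutor(max_workers=1) as pool:
        # "Offload" half the work to another processor...
        future = pool.submit(heavy_sum, data[:mid])
        # ...while this process keeps computing alongside it.
        local = heavy_sum(data[mid:])
        offloaded = future.result()

    # Combine the partial results, just as a GPGPU program
    # gathers results back from the graphics processor.
    return local + offloaded

if __name__ == "__main__":
    print(main())
```

In a real GPGPU program the "offloaded" half would be copied across the bus to the GPU's memory, processed by many GPU cores at once, and copied back; that round trip over the bus is the overhead an APU is designed to reduce.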
The APU takes parallel computing one step further by removing the bus between the CPU and GPU and integrating both units on the same chip. Since the bus is often the main bottleneck in parallel processing, an APU can move data between the CPU and GPU more efficiently than a separate CPU and graphics card can. While this design may offer little benefit for desktop computers with dedicated video cards, it can provide significant performance gains for laptops and other mobile devices that rely on integrated graphics.
NOTE: While Intel processors are not called APUs, modern Intel architectures, such as Sandy Bridge and Ivy Bridge, integrate the CPU and GPU on the same chip. These chips are sometimes called “hybrid processors,” since they contain both a central processing unit and a graphics processing unit.
Updated: November 14, 2013