Hadoop + GPU: Boost Performance of Your Big Data Project by 50x-200x?

Vladimir Starostenkov

Hadoop, an open-source framework for distributed computing, has changed the way we deal with big data. Parallel processing with this set of tools can improve performance several times over. The question is, can we make it work even faster? What about offloading calculations from the CPU to a graphics processing unit (GPU), a chip designed for massively parallel 3D and mathematical workloads? In theory, if the process is optimized for parallel computing, a GPU could perform calculations 50-100 times faster than a CPU.
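
To give a feel for why such speed-ups are plausible, here is a minimal CUDA sketch of the data-parallel pattern a GPU handles well: thousands of lightweight threads, each processing one element of a large array. This is an illustrative example only, not the setup described in the article; the kernel, array sizes, and constants are all assumptions made for the sake of the demo.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one output element; thousands of threads run
// concurrently, which is where the potential speed-up over a CPU loop comes from.
__global__ void scale_and_add(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = 2.0f * a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // ~1 million elements (illustrative size)
    size_t bytes = n * sizeof(float);

    float *a, *b, *out;
    cudaMallocManaged(&a, bytes);          // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_and_add<<<blocks, threads>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);       // expect 4.0

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

In a Hadoop job, a mapper or reducer would hand its partition of the data to a kernel like this and collect the results; whether that pays off depends on how much computation there is per byte moved between host and device.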

Read my article at NetworkWorld to find out what is possible and how you can try this for your large-scale system.
