Intel introduces a new family of chips: the Nervana Neural Network Processor (NNP).
Nvidia’s GPUs have dominated the tech market for nearly two decades, from the GeForce 256 SDR in 1999 to the Titan XP in 2017. GPUs use many cores, compared with the few cores in the CPUs that power our computers, and they have changed the face of computational applications across many fields. But now a supposed game-changer enters the market.
Intel has launched its first family of artificial intelligence chips, which will ship by the end of 2017. With artificial intelligence still a budding field, Intel has decided not to stay behind. In fact, it aims to conquer AI with the Nervana processors.
Neuromorphic chips attempt to model the workings of the human brain, in which information captured by billions of sensory receptors is processed in parallel by neurons and synapses. Over time, the connections between neurons change according to their inputs; that is, they learn from experience. The Nervana processors work along the lines of this neuromorphic concept. Unlike standard chips, the on-chip memory of the NNP is managed directly by software rather than by a hardware cache, Intel says, which increases memory bandwidth, allowing greater parallelization and lower power consumption at the same time.
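The difference between a hardware cache and software-managed memory can be sketched in a few lines of Python. This is an illustrative sketch, not Intel's actual design: the names and tile size are hypothetical. The point is that with software-managed memory, the program (or compiler) decides exactly which data is on chip at each step, instead of relying on a cache's implicit replacement policy.

```python
# Illustrative sketch (not Intel's design): software-managed scratchpad.
# The program explicitly stages fixed-size tiles of a large array into
# fast on-chip memory, computes on each tile, then loads the next one.
# A hardware cache would instead decide residency and eviction implicitly.

TILE_SIZE = 4  # hypothetical on-chip tile width


def tiled_sum(data):
    """Sum a large array by explicitly staging fixed-size tiles."""
    total = 0
    for start in range(0, len(data), TILE_SIZE):
        tile = data[start:start + TILE_SIZE]  # explicit "load" on chip
        total += sum(tile)                    # compute while the tile is resident
        # tile goes out of scope here: an explicit, predictable "eviction"
    return total


print(tiled_sum(list(range(16))))  # -> 120
```

Because every load and eviction is scheduled explicitly, memory traffic becomes predictable, which is what enables the bandwidth and power gains Intel describes.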
In this endeavor, Intel has collaborated with Facebook for technical insights. “We are thrilled to have Facebook in close collaboration, sharing their technical insights as we bring this new generation of AI hardware to market,” said Intel CEO Brian Krzanich.
But how is it going to challenge Nvidia’s GPUs?
As Naveen Rao, vice president of Intel’s AI unit, said, these chips will start off running simple algorithms and then progress to the intensive computational workloads in which Nvidia’s GPUs specialize. Nvidia’s graphics processors have taken off in deep learning because of their ability to compute in parallel. A few years ago, researchers discovered that GPUs were almost ideally suited to running deep learning algorithms, which require thousands of parallel computations. Intel says the NNP will also do well at parallel computing, using a number of new chip designs. Nvidia has already contributed to, and raised the bar for, research in artificial intelligence and deep learning. Let’s see how high Intel’s Nervana soars.

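Why deep learning parallelizes so well can be shown with a minimal sketch in plain Python (not a real GPU kernel; the function names are illustrative): in a matrix-vector product, every output row is an independent dot product, so all rows can in principle be computed at the same time, which is exactly the structure that thousands of GPU cores exploit.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of why deep learning workloads parallelize well (illustrative,
# not a real GPU kernel): each row of a matrix-vector product is an
# independent dot product, so the rows can be computed concurrently.


def dot(row, vec):
    return sum(a * b for a, b in zip(row, vec))


def matvec_parallel(matrix, vec):
    # Each row's dot product depends only on that row and the vector,
    # so the work is trivially parallel across rows.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda row: dot(row, vec), matrix))


matrix = [[1, 2], [3, 4], [5, 6]]
vec = [10, 1]
print(matvec_parallel(matrix, vec))  # -> [12, 34, 56]
```

A neural network layer is essentially this operation repeated at enormous scale, which is why hardware with many simple cores outperforms a few fast ones here.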
Article By Etisha Gurav