In previous posts I have discussed the algorithms and applications of machine learning. One of the most sophisticated forms of machine learning is building a neural network that mimics the flesh-and-blood brain – with software- or hardware-based versions of neurons and synapses. But silicon operates differently than the stuff in our brains. In the past, researchers have suspected an entirely new type of computer chip would be needed – something radically different from our current silicon- and transistor-based CPUs and GPUs.
But leave it to creative engineers to find a way. IBM has invented a new type of computer chip, dubbed TrueNorth, that could get us closer to the sci-fi dream of a silicon brain. Using traditional semiconductor manufacturing techniques (instead of trying to mimic the actual physics and chemistry of the brain), IBM has gotten as close as it can to modeling the way a real brain works.
IBM’s ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart – Wired.com
… I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.
He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.
… The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars.
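The neuron-and-synapse model that both deep-learning algorithms and chips like TrueNorth approximate can be sketched in a few lines. This is a minimal illustration, not IBM's actual design: the function, weights, and threshold below are hypothetical, chosen only to show the basic idea of a neuron summing weighted synaptic inputs and firing past a threshold.

```python
def neuron(inputs, weights, bias, threshold=0.0):
    """A single artificial neuron: each weight plays the role of a synapse.

    Sums the weighted inputs plus a bias, and 'fires' (returns 1)
    if the total exceeds the threshold, otherwise stays silent (returns 0).
    """
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > threshold else 0

# Three synapses feeding one neuron; two inputs are active:
output = neuron(inputs=[1, 0, 1], weights=[0.4, 0.9, 0.3], bias=-0.5)
print(output)  # fires: 0.4 + 0.3 - 0.5 = 0.2 > 0, so output is 1
```

A chip like TrueNorth executes many such units in parallel in hardware rather than simulating them one by one in software, which is where the efficiency Mars describes comes from.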
The implications of IBM’s new chip are far-reaching. Many modern technologies, like face recognition in Facebook photos or voice recognition in Android or Siri, could benefit from a small, low-power neural network running directly on your phone. Currently, those technologies require communication with powerful servers in the cloud to operate. If neural network-based chips end up in our phones, we might be able to get rid of the dreaded “I’m really sorry about this, but I can’t take any requests right now.”