IBM Unveils a ‘Brain-Like’ Chip With 4,000 Processor Cores. The TrueNorth chip mimics 1 million neurons, which IBM calls “spiking neurons,” and 256 million synapses.
…the chip can encode data as patterns of pulses, which is similar to one of the many ways neuroscientists think the brain stores information.
IBM Research: Neurosynaptic chips provides more information on the low-power system architecture and potential applications:
This is similar to Qualcomm’s Brain-Inspired Computing effort.
Bringing artificial intelligence to mobile computing is a significant challenge. That’s the goal of Qualcomm’s new Zeroth Processors.
Mimicking the human nervous system and brain, so that computers can learn about their environment and modify their behavior accordingly, has long been the goal of artificial neural networks. Whatever computing model is used to achieve this capability, the real problem is one of scale. The human brain is estimated to have 100 billion neurons with roughly 100 trillion connections. That is at least 1,000 times the number of stars in our galaxy.
These computational models can be implemented in software (e.g. Grok), but the ability to scale to the levels required for even simple human-like interactions is severely limited on conventional computing platforms. The Zeroth Neural Processing Unit (NPU) is a hardware implementation of the brain’s spiking neural network (SNN) method of information transmission. Integrating the NPU into computing platforms at the chip level would begin to address the computational and power requirements of these types of applications.
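To give a feel for what “spiking” means here, below is a minimal leaky integrate-and-fire neuron, the textbook building block of SNN models. This is an illustrative sketch only (the function name, threshold, and leak constant are my own choices, not Qualcomm’s design): the neuron accumulates input current, leaks charge each time step, and emits a discrete pulse when its potential crosses a threshold, so information is carried in the timing of pulses rather than in continuous values.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: returns a binary spike train.

    inputs:    list of input currents, one per time step.
    threshold: membrane potential at which the neuron fires.
    leak:      fraction of potential retained each step (decay).
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)   # fire a discrete spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input makes the neuron fire periodically:
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note how a constant analog input is converted into a pulse pattern whose rate encodes the input strength, which is the kind of pulse-based data encoding described in the IBM piece above.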
The goals of the Zeroth* platform are:
- Biologically Inspired Learning
- Enable Devices To See and Perceive the World as Humans Do
- Creation and definition of a Neural Processing Unit (NPU)
Achieving “human-like interaction and behavior” is an ambitious goal, but it seems like this is a good first step.
UPDATE (25-Oct-13): Good overview here: Chips ‘Inspired’ By The Brain Could Be Computing’s Next Big Thing.
UPDATE (1-Jan-14): CES 2014: Intel launches RealSense brand, aims to interface with your brain in the long run
* The name Zeroth comes from Isaac Asimov’s science-fiction Three Laws of Robotics. The First Law was that “A robot may not harm a human being.”
Asimov later added a “Zeroth Law,” so named to continue the pattern in which lower-numbered laws supersede higher-numbered ones, stating that a robot may not harm humanity.
We’ll have to wait and see, but let’s hope so!
I found two related posts today:
The content of the Larry Lessig talk is interesting, but it’s the presentation that’s unique and engaging. The remixed videos are great.
While looking through some of the other TED offerings, I ran across a 2003 Jeff Hawkins presentation on brain theory. I’ve been interested in his software company, Numenta, for a while now. They have implemented a hierarchical temporal memory (HTM) model, which is “a new computing paradigm that replicates the structure and function of the human neocortex.” The talk is a broader look at why it has taken so long to develop a framework for how the brain works.
Jeff Hawkins has had an interesting career in non-neuroscience areas (pen-based and tablet computing, Handspring). Hopefully his memory-prediction model of human intelligence will lead to improved artificial intelligence software systems.
In this month’s IEEE Spectrum magazine there’s an interesting article about Microsoft’s efforts in robotics, Robots, Incorporated by Steven Cherry.
The article describes the team that created Microsoft Robotics Studio, how the group came to be, some of the software technologies, and an overview of Microsoft’s strategy in the robotics marketplace.
What prompted this post is an example of how robotics might be used for medical purposes:
Imagine a robot helping a recovering heart-attack patient get some exercise by walking her down a hospital corridor, carrying her intravenous medicine bag, monitoring her heartbeat and other vital signs, and supporting her weight if she weakens.
Also, in the discussion about multi-threaded task management:
Or there might arise two unrelated but equally critical tasks, such as walking beside a hospital patient and simultaneously regulating the flow of her intravenous medications.
It’s clear that these are just illustrative examples; there’s no attempt to delve into the complexities of achieving these types of tasks. What I find enlightening is that they show what the expectations are for robotics in medicine.
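The multitasking requirement in that quote can be sketched with ordinary threads. This is an illustrative Python sketch, not Microsoft Robotics Studio’s actual API (Robotics Studio uses its own Concurrency and Coordination Runtime in .NET); the task names and timings are invented for the hospital example. The idea is simply that two equally critical control loops each get their own thread so neither can block the other, with a lock protecting shared state and an event signaling shutdown.

```python
import threading
import time

log = []                      # shared record of actions taken
log_lock = threading.Lock()   # protects the shared log
stop = threading.Event()      # signals both tasks to shut down

def walk_beside_patient():
    # Illustrative stand-in for a gait-matching control loop.
    while not stop.is_set():
        with log_lock:
            log.append("step")    # keep pace with the patient
        time.sleep(0.01)

def regulate_iv_flow():
    # Illustrative stand-in for a medication-flow control loop.
    while not stop.is_set():
        with log_lock:
            log.append("dose")    # adjust the intravenous drip
        time.sleep(0.01)

threads = [threading.Thread(target=walk_beside_patient),
           threading.Thread(target=regulate_iv_flow)]
for t in threads:
    t.start()
time.sleep(0.1)   # let both critical tasks run concurrently
stop.set()
for t in threads:
    t.join()
```

Even this toy version shows why the article dwells on multi-threaded task management: the hard part isn’t running two loops, it’s coordinating them safely when both are safety-critical.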
There are many research efforts in this area, but not a lot of commercialization yet. Numerous efforts in robotic surgery and robotic prosthetics (e.g. see iWalk) hold a lot of promise for improving lives. It’s not exactly robotics, but the integration of an insulin pump with real-time continuous glucose monitoring for diabetes management (see the MiniMed device) can certainly be considered an application of “intelligent” technology.
I think that the expectations for the future use of robots for medical purposes are as realistic as any other potential use. There are some areas where the technological hurdles are very high, e.g. neural interfacing (see BrainGate), but many practical medical uses will face the same set of challenges as any other robotic application. Human safety will have to be a primary concern any time a robot interacts with people. Manufacturers of medical devices have the advantage that risk analysis and regulatory requirements are already part of their development process. Cost is certainly the other major challenge for the use of robots in both the consumer and medical markets. No matter how good the solution is, it must still be affordable.