If you’ve followed future-tech reporting at all in the past few years, you’ve probably come across an emerging term: the singularity.
The “singularity,” as it’s being called, is the prediction that at some point in the future computer technology will catch up to the processing power of the human brain. What happens at that point, nobody really knows.
Will consciousness move from being born to being invented? Will computers have a sense of self? Or will we simply see mobile devices of the future become as powerful as supercomputers today?
Futurists love to speculate about when (or if) the singularity will happen. But the path toward it just got a major push forward.
Recently, IBM scientist Dr. Dharmendra S. Modha reported a new development in computer processing. He and his team have found a way to combine cutting-edge research in neuroscience, supercomputing, and nanotechnology to create the first cognitive-computing chip.
Measuring 4.22 square millimeters (for you non-metric people, that works out to “really small” in US terms!), this chip contains “256 leaky integrate-and-fire neurons, 1024 axons, and 256x1024 synapses using an SRAM crossbar memory.” If that doesn’t sound like standard computer language to you, you’re not alone. Dr. Modha’s chip is built to mimic a brain rather than a standard computer.
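To get a rough feel for what a “leaky integrate-and-fire neuron” does, here’s a minimal sketch in plain Python. This is just an illustration of the general concept – the leak rate, threshold, and inputs are invented for the example, and it has nothing to do with IBM’s actual hardware design:

```python
# A minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative values only, not IBM's implementation.

LEAK = 0.9        # fraction of charge retained each time step
THRESHOLD = 1.0   # membrane potential at which the neuron "fires"

def lif_step(potential, weighted_inputs):
    """Advance the neuron one time step; return (new_potential, fired)."""
    potential = potential * LEAK + sum(weighted_inputs)  # leak, then integrate
    if potential >= THRESHOLD:                           # fire and reset
        return 0.0, True
    return potential, False

# Feed the neuron small inputs; charge leaks away between steps,
# so only sustained input pushes it over the threshold.
v = 0.0
for t, spikes in enumerate([[0.3], [0.4], [0.5], []]):
    v, fired = lif_step(v, spikes)
    print(f"step {t}: potential={v:.2f}, fired={fired}")
```

The key idea: the neuron accumulates input over time, constantly “leaks” charge away, and only produces output when enough input arrives close enough together – very different from a logic gate that computes an answer on every cycle.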
Part of the problem with achieving the singularity lies in the fundamental difference between how a mammalian brain operates and how a computer does. Computers evaluate logical functions one at a time. Over the years they’ve gotten extremely fast at evaluating those functions, but the process has remained largely unchanged: 1 + 1 = 2, and IF X = i THEN DO Y. It’s pretty standard … except that the human brain operates nothing like this at all.
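In code terms, that paragraph’s examples look like this – each statement finishes completely before the next one begins:

```python
# Conventional, sequential evaluation: one step at a time.
x = 1 + 1        # 1 + 1 = 2, evaluated first
i = 2
if x == i:       # IF X = i THEN DO Y
    y = "do Y"   # only runs after the comparison finishes
```

No matter how fast the clock gets, the model is the same: finish one operation, then start the next.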
The beauty of the brain is in its complex, multi-functioning design. Things don’t happen one at a time – instead, the brain is constantly doing many things at once. You may be working through a math problem or deciding what to say next, but you are also subconsciously evaluating that strange odor reaching your nose and determining whether it signals something good or something bad. Memory is not perfect, and we don’t receive an input and immediately know what to do with it. Instead, we make lots and lots of educated guesses, each one getting better as more input arrives.
Think of it this way: pretend I start to draw something. I begin by drawing a small vertical line. Then I slowly add a semi-circle connected to the lower half of the line on its right-hand side. Next I draw a smaller vertical line to the right of the first shape and put a dot over it. Already you’re getting the idea that I’m writing letters – in this case, “bi”. Your brain starts evaluating whether I’m spelling out “big” or “binary” or something else, based on the context clues of what we’ve been talking about. As more data comes in, your guesses get better. You don’t have to wait for the full input before guessing. This creates an extraordinary amount of speed and efficiency.
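Here’s a toy sketch of that kind of incremental guessing – my own illustration, not cognitive-chip code. The vocabulary is invented for the example; the point is that the candidate list narrows with every new “stroke”:

```python
# Toy illustration of incremental guessing: narrow a candidate list
# as each new letter arrives, instead of waiting for the whole word.

vocabulary = ["big", "binary", "bit", "brain", "chip"]

def guesses(prefix, words):
    """Return the words still consistent with the letters seen so far."""
    return [w for w in words if w.startswith(prefix)]

seen = ""
for letter in "bin":
    seen += letter
    print(f"after '{seen}': candidates = {guesses(seen, vocabulary)}")

# after 'b':   ['big', 'binary', 'bit', 'brain']
# after 'bi':  ['big', 'binary', 'bit']
# after 'bin': ['binary']
```

At every step you have a usable (if imperfect) answer, and each new piece of input just refines it – rather than computing nothing until all the data is in.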
Enter cognitive computing. Dr. Modha’s chip is designed to “think” in this way. It receives partial input and guesses at the rest based on context and history. It works side by side with other processes to allow multitasking. And it’s already starting to approximate the computing power of small animal brains.
What will the singularity look like? Time will tell. But we can probably rest assured that it will include cognitive computing. Check out this video of Dr. Modha explaining more about his chip:
[tentblogger-youtube gQ3HEVelBFY]
Amazing.
Thoughts?
[Image via kurzweilai.net]