Wednesday, June 10, 2015

The Coming Merge of Human and Machine Intelligence

 



It is certainly a process, but it is also definable.  We really do know what we want: we want our consciousness to be augmented, able to operate devices external to ourselves, and able to communicate easily.  Much of this will also arrive through biological enhancement of the brain itself.  After all, we already know that the brain is able to share images mind to mind.  Doing that properly and universally will also allow image transmission to physical devices, and make it all simpler.


I do not think this is too far away.  Our problem is that research is targeted at brain waves, which are derivative phenomena.  We need to intercept neuronal signalling at least, and possibly photonic signalling as well, although the latter is likely to interfere with information flow.  At a minimum, we need to observe engagement.


I also think that we can already do all of this, and that it is simply a matter of untying our hands, easily rather than with difficulty.


The coming merge of human and machine intelligence

http://archaeologynewsnetwork.blogspot.ca/2015/05/the-coming-merge-of-human-and-machine.html

For most of the past two million years, the human brain has been growing steadily. But something has recently changed. In a surprising reversal, human brains have actually been shrinking for the last 20,000 years or so. We have lost nearly a baseball-sized amount of matter from a brain that isn’t any larger than a football.

“Just as the Wright brothers didn’t learn to fly by dissecting birds, we will not learn to create intelligence by recreating a brain,” says Jeff Stibel [Credit: David Plunkert]

The descent is rapid and pronounced. The anthropologist John Hawks describes it as a “major downsizing in an evolutionary eyeblink.” If this pace is maintained, scientists predict that our brains will be no larger than those of our forebears, Homo erectus, within another 2,000 years. The reason that our brains are shrinking is simple: our biology is focused on survival, not intelligence. Larger brains were necessary to allow us to learn to use language, tools and all of the innovations that allowed our species to thrive. But now that we have become civilized—domesticated, if you will—certain aspects of intelligence are less necessary.


This is actually true of all animals: domesticated animals, including dogs, cats, hamsters and birds, have 10 to 15 percent smaller brains than their counterparts in the wild. Because brains are so expensive to maintain, large brain sizes are selected out when nature sees no direct survival benefit. 

It is an inevitable fact of life. Fortunately, another influence has evolved over the past 20,000 years that is making us smarter even as our brains are shrinking: technology. Technology has allowed us to leapfrog evolution, enabling our brains and bodies to do things that were otherwise impossible biologically. We weren’t born with wings, but we’ve created airplanes, helicopters, hot air balloons and hang gliders. We don’t have sufficient natural strength or speed to bring down big game, but we’ve created spears, rifles and livestock farms. Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature’s intent and grow larger by proxy. 


It is not a stretch of the imagination to believe we will one day have all of the world’s information embedded in our minds via the Internet.

Psychics and Physics

In the late 1800s, a German astronomer named Hans Berger fell off a horse and was nearly trampled by cavalry. He narrowly escaped injury, but was forever changed by the incident, owing to the reaction of his sister. Though she was miles away at the time, Berger’s sister was instantly overcome with a feeling that Hans was in trouble. Berger took this as evidence of the mind’s psychic ability and dedicated the rest of his life to finding certain proof.


Berger abandoned his study of astronomy and enrolled in medical school to gain an understanding of the brain that would allow him to prove a “correlation between objective activity in the brain and subjective psychic phenomena.” He later joined the University of Jena in Germany as professor of neurology to pursue his quest. At the time, psychic interest was relatively high. There were numerous academics devoted to the field, studying at prestigious institutions such as Stanford and Duke, Oxford and Cambridge. Still, it was largely considered bunk science, with most credible academics focused on dispelling, rather than proving, claims of psychic ability.

But one of those psychic beliefs happened to be true. That belief is the now well-understood notion that our brains communicate electrically. This was a radical idea at the time; after all, the electromagnetic field had only been discovered in 1865. But Berger found proof. He invented a device called the electroencephalogram (you probably know it as an EEG) that recorded brain waves. Using his new EEG, Berger was the first to demonstrate that our neurons actually talk to one another, and that they do so with electrical pulses. He published his results in 1929.
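Berger’s measurement is still the core of every EEG today: a voltage trace from the scalp, filtered to expose rhythmic activity such as the alpha band (roughly 8 to 12 Hz) that he first described. As a rough illustration, here is a minimal Python sketch that recovers a synthetic alpha rhythm from noise; the sampling rate and the signal itself are assumptions standing in for a real recording.

```python
# Minimal sketch of pulling a Berger-style "brain wave" out of a noisy
# voltage trace: band-pass filter the recording around the alpha band.
# The signal here is synthetic; a real EEG would come from an amplifier.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                       # assumed sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)  # four seconds of "recording"

# Synthetic scalp signal: a 10 Hz alpha rhythm buried in broadband noise.
alpha = 20e-6 * np.sin(2 * np.pi * 10 * t)   # ~20 microvolts
noise = 30e-6 * np.random.randn(t.size)
raw = alpha + noise

# 4th-order Butterworth band-pass over the alpha band (8-12 Hz).
b, a = butter(4, [8.0, 12.0], btype="bandpass", fs=fs)
alpha_band = filtfilt(b, a, raw)

# Alpha-band power is the kind of feature modern headsets report.
power = np.mean(alpha_band ** 2)
print(f"alpha-band power: {power:.3e} V^2")
```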


The New Normal

As often happens with revolutionary ideas, Berger’s EEG results were either ignored or lambasted as trickery. This was, after all, preternatural activity. But over the next decade, enough independent scholars verified the results that they became widely accepted. Berger saw his findings as evidence of the mind’s potential for “psychic” activity, and he continued searching for more evidence until the day he hanged himself in frustration. The rest of the scientific community went back to what it had always been doing, “good science,” and largely forgot about the electric neuron.

That was the case until the biophysicist Eberhard Fetz came along in 1969 and elaborated on Berger’s discovery. Fetz reasoned that if brains were controlled by electricity, then perhaps we could use our brains to control electrical devices. In a small primate lab at the University of Washington in Seattle, he connected the brain of a rhesus monkey to an electrical meter and then watched in amazement as the monkey learned how to control the level of the meter with nothing but its thoughts.
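Fetz’s experiment was, at bottom, a feedback loop: show a meter driven by a neuron’s firing rate, reward the animal when the meter crosses a threshold, and let operant conditioning do the rest. The toy simulation below captures that loop; every constant in it is invented for illustration, not taken from the 1969 paper.

```python
# Toy rendition of Fetz's biofeedback loop: a unit's firing rate drives
# a meter, and reward arrives when the meter crosses a threshold.
# Rewarded excursions are reinforced, so the baseline rate climbs.
import random

random.seed(42)
baseline = 10.0    # spikes per second, arbitrary starting rate
threshold = 15.0   # meter level that triggers the reward

for trial in range(200):
    # Trial-to-trial variability: the unit explores around its baseline.
    rate = baseline + random.gauss(0, 3)
    meter = rate                  # the meter just displays the rate
    if meter > threshold:         # reward delivered on this trial
        # Operant conditioning: pull the baseline toward the rewarded rate.
        baseline += 0.2 * (rate - baseline)

print(f"baseline after training: {baseline:.1f} spikes/s")  # climbs toward the threshold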


While incredible, this insight didn’t have much application in 1969. But with the rapid development of silicon chips, computers and data networks, the technology now exists to connect people’s brains to the Internet, and it’s giving rise to a new breed of intelligence. Scientists in labs across the globe are busy perfecting computer chips that can be implanted in the human brain. In many ways, the results, if successful, fit squarely in the realm of “psychics.” There may be no such thing as paranormal activity, but make no mistake that all of the following are possible and on the horizon: telepathy, no problem; telekinesis, absolutely; clairvoyance, without question; ESP, oh yeah. While not psychic, Hans Berger may have been right all along.

The Six Million Dollar Man, For Real

Jan Scheuermann lifted a chocolate bar to her mouth and took a bite. A grin spread across her face as she declared, “One small nibble for a woman, one giant bite for BCI.” BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain.


The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004. These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is. BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts.

The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann’s case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts. A smart, vibrant woman in her early 50s, Scheuermann has been unable to use her arms and legs since she was diagnosed with a rare genetic disease at the age of 40. “I have not moved things for about 10 years . . . . This is the ride of my life,” she said.


“This is the roller coaster. This is skydiving.” Other patients use brain-controlled implants to communicate, control wheelchairs, write emails and connect to the Internet.

The technology is surprisingly simple to understand. BrainGate is merely tapping into the brain’s electrical signals in the same way that Berger’s EEG and Fetz’s electrical meter did. The BrainGate chip, once attached to the motor cortex, reads the brain’s electrical signals and sends them to a computer, which interprets them and sends along instructions to other electrical devices like a robotic arm or a wheelchair. In that respect, it’s not much different from using your television remote to change the channel. Potentially the technology will enable bionics, restore communication abilities and give disabled people previously unimaginable access to the world.
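The published literature on implants of this kind describes that “interprets them” step as a decoding problem: map the firing rates of many recorded channels onto an intended movement. The sketch below uses the simplest textbook approach, an ordinary least-squares linear decoder on simulated data; the 96-channel count mirrors the Utah arrays used in BrainGate trials, but the actual systems use more elaborate decoders (Kalman filters, for example), so treat this only as an illustration of the shape of the problem.

```python
# Sketch of the decoding step, assuming a plain linear decoder: fit
# weights that map a population of motor-cortex firing rates to a 2-D
# velocity command. Data are simulated; this is not BrainGate's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 96   # 96 channels, as on a Utah array

# Pretend ground truth: each channel responds linearly to intended
# velocity (the classic cosine-tuning picture of motor cortex).
true_weights = rng.normal(size=(n_channels, 2))
velocity = rng.normal(size=(n_samples, 2))          # intended velocities
rates = velocity @ true_weights.T + rng.normal(scale=0.5,
                                               size=(n_samples, n_channels))

# Calibration session: least-squares fit from rates to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online use: a fresh vector of firing rates becomes a velocity command
# for the robotic arm or cursor.
new_rates = rng.normal(size=n_channels)
vx, vy = new_rates @ decoder
print(f"decoded velocity command: ({vx:+.2f}, {vy:+.2f})")
```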


Mind Meld

But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers. Computers have been creeping closer to our brains since their invention. What started as large mainframes became desktops, then laptops, then tablets and smartphones that we hold only inches from our faces, and now Google Glass, which (albeit undergoing a redesign) delivers the Internet in a pair of eyeglasses. Back in 2004, Google’s founders told Playboy magazine that one day we’d have direct access to the Internet through brain implants, with “the entirety of the world’s information as just one of our thoughts.” A decade later, the road map is taking shape.

While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe.
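What might a driver-alertness cap actually compute? One heuristic that appears in the drowsiness-detection literature is the ratio of slow (theta and alpha) to fast (beta) EEG band power, which tends to rise as a person gets sleepy. The sketch below implements that ratio; the band edges, the cutoffs and the synthetic input are all assumptions, not any shipping product’s algorithm.

```python
# Sketch of a drowsiness index from one EEG channel, assuming the
# common (theta + alpha) / beta power-ratio heuristic. Band edges and
# the synthetic input are assumptions for illustration only.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Total power in one frequency band of a power spectral density."""
    mask = (freqs >= lo) & (freqs < hi)
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df

def drowsiness_index(eeg, fs=250.0):
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return (theta + alpha) / beta   # higher = drowsier, on this heuristic

# Synthetic stand-in for 20 seconds of one headband channel at 250 Hz.
rng = np.random.default_rng(2)
eeg = rng.normal(size=5000)
print(f"drowsiness index: {drowsiness_index(eeg):.2f}")
```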



Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s. The pursuit of artificial intelligence has been plagued by problems. For one, we keep changing the definition of intelligence.

In the 1960s, we said a computer that could beat a backgammon champion would surely be intelligent. But in the 1970s, when Gammonoid beat Luigi Villa—the world champion backgammon player—by a score of 7-1, we decided that backgammon was too easy, requiring only straightforward calculations. We changed the rules to focus on games of sophisticated rules and strategies, like chess.


Yet when IBM’s Deep Blue computer beat the reigning chess champion, Garry Kasparov, in 1997, we changed the rules again. No longer were sophisticated calculations or logical decision-making acts of intelligence. Perhaps when computers could answer human knowledge questions, then they’d be intelligent. Of course, we had to revise that theory in 2011 when IBM’s Watson computer soundly beat the best humans at Jeopardy! But all of these computers were horribly bad sports: they couldn’t say hello, shake hands or make small talk of any kind. Each time a machine defies our definition of intelligence, we move to a new definition.

What Makes Us Human?

We’ve done the same thing in nature. We once argued that what set us apart from other animals was our ability to use tools. Then we saw primates and crows using tools. So we changed our minds and said that what makes us intelligent is our ability to use language. Then biologists taught the first chimpanzee how to use sign language, and we decided that intelligence couldn’t be about language after all. Next came self-consciousness and awareness, until experiments unequivocally proved that dolphins are self-aware. With animal intelligence, as with machine intelligence, we keep changing the goalposts.


There are those who believe we can transcend the moving goalposts. These bold adventurers have most recently focused on brain science, attempting to reverse engineer the brain. As the theory goes, once we understand all of the brain’s parts, we can recreate them to build an intelligent system. But there are two problems with this approach. First, the inner workings of the brain are largely a mystery. Neuroscience is making tremendous progress, but it is still early.

The second issue with reverse engineering the brain is more fundamental. Just as the Wright brothers didn’t learn to fly by dissecting birds, we will not learn to create intelligence by recreating a brain. It is pretty clear that an intelligent machine will look nothing like a three-pound wrinkly lump of clay, nor will it have cells or blood or fat.

Daniel Dennett, University Professor and Austin B. Fletcher Professor of Philosophy at Tufts—whom I consider a mentor and a guide on the quest to solve the mysteries of the mind—was an advocate of reverse engineering at one point. But he recently changed course, saying, “I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart.” Dennett’s mistake was to reduce the brain to the neuron in an attempt to rebuild it. That is reducing the brain one step too far, pushing us from the edge of the forest deep into the trees. This is the danger in any kind of reverse engineering. Biologists reduced ant colonies down to individuals, but we have now learned that the ant network, the colony, is the critical level.


Reducing flight to the feathers of a bird would not have worked, but reducing it to wingspan did the trick. Feathers are one step too far, just as are ants and neurons. Scientists have oversimplified the function of a neuron, treating it as a predictable switching device that fires on and off. That would be incredibly convenient if it were true. But neurons are only logical when they work—and a neuron misfires up to 90 percent of the time. Artificial intelligence almost universally ignores this fact.

The New Intelligence

Focusing on a single neuron’s on/off switch misses what is happening with the network of neurons, which performs amazing feats. The faultiness of the individual neuron allows for the plasticity and adaptive nature of the network as a whole. Intelligence cannot be replicated by creating a bunch of switches, faulty or not. Instead, we must focus on the network. Neurons may be good analogs for transistors and maybe even computer chips, but they’re not good building blocks of intelligence.
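The claim that a network can be reliable even when its units misfire most of the time is easy to demonstrate. In the simulation below, each model neuron responds to a stimulus only 10 percent of the time (a 90 percent "misfire" rate, in the article's terms, plus a small spontaneous rate otherwise): a single neuron is a near-useless detector, but the pooled count across a thousand of them is almost perfect. All the numbers are invented for illustration.

```python
# Unreliable units, reliable network: detect a stimulus from neurons
# that respond to it only 10% of the time. One neuron is near chance;
# the pooled population is nearly perfect. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 1000, 2000
p_hit, p_spont = 0.10, 0.02   # firing probability with / without stimulus

signal = rng.random(n_trials) < 0.5           # stimulus on half the trials
p_fire = np.where(signal, p_hit, p_spont)
spikes = rng.random((n_trials, n_neurons)) < p_fire[:, None]

# Single neuron: call "stimulus" whenever that one neuron spikes.
single_acc = np.mean(spikes[:, 0] == signal)

# Population: call "stimulus" when the pooled spike count is high.
counts = spikes.sum(axis=1)
threshold = n_neurons * (p_hit + p_spont) / 2  # midway between the two rates
pop_acc = np.mean((counts > threshold) == signal)

print(f"single neuron accuracy: {single_acc:.2f}")  # barely above chance (~0.54)
print(f"population accuracy:    {pop_acc:.2f}")     # ~1.00
```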



The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn’t allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network.

It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes. Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.
