Hacking The Brain: The Future Computer Chips In Your Head

Over the past twenty years, neuroscientists have been quietly building a revolutionary technology called BrainGate that wirelessly connects the human mind to computers, and it has just hit the world stage.

Entrepreneurs such as Elon Musk and Mark Zuckerberg have entered the race with goals of figuring out how to get computer chips into everyone’s brains. The attention of Musk and Zuckerberg means the potential for giant leaps forward. But the question no one seems to be asking is whether our dependence on machines and technology has finally gone too far.  Countries annually celebrate their independence from other countries, but it now seems we should start asking deeper questions about our personal independence.

60 Minutes recently ran a piece showing how engineers are using what scientists have learned about the brain to manipulate us into staying perpetually addicted to our smartphones. The anxiety most of us feel when we are away from our phone is real: During the 60 Minutes piece, researchers at California State University Dominguez Hills connected electrodes to reporter Anderson Cooper’s fingers to measure changes in heart rate and perspiration. Then they sent text messages to his phone, which was out of his reach, and watched his anxiety spike with each notification.

The segment revealed that virtually every app on your phone is calibrated to keep you using it as often and as long as possible. The show made an important point: a relatively small number of Silicon Valley engineers are experimenting with, and changing in a significant way, human behavior and brain function. And they’re doing it with little insight into the long-term consequences. It seems the fight for independence has gone digital.

While smartphone dependency — and its effects on the brain — is in itself a cause for concern, it’s merely the next logical step in computing, and that logic is becoming increasingly worrisome. We’re standing on the precipice of major developments in what scientists call brain computer interfaces (BCIs, sometimes called brain machine interfaces or BMIs). BCIs are electronic microchips that are embedded into the brain to literally connect our minds to computers — basically, brain chips.

Before you dismiss it out of hand, consider two things. First, computers have been creeping closer to our brains since their creation. What started as large mainframes became desktops, then laptops, and now tablets and smartphones that we hold only inches from our faces. Second, brain computer interfaces already exist, and have for almost twenty years.

I watched from the sidelines during the development of one of the earliest versions in the late 1990s, when the effort was just a handful of professors at Brown University. I was a graduate student studying the brain, but even with that background, I had trouble believing brain chips could be anything but science fiction.

Now, brain chips are going mainstream in a huge way. We’ve recently learned that no less than the likes of both Elon Musk and Mark Zuckerberg are deeply involved in developing BCIs. Imagine using Facebook without typing, or driving a Tesla with only your mind.

Zuckerberg hasn’t revealed many details yet, but he has said that he envisions a platform that “lets you communicate using only your mind.” We know that about a year ago, Zuckerberg hired Regina Dugan, former director of the Defense Advanced Research Projects Agency (DARPA), to run his company’s experimental technologies division, known as Building 8. The division now has 60 full-time scientists and engineers and hundreds of millions in funding. That’s a big commitment of resources that could produce rapid advancements in technology.

Elon Musk’s vision goes beyond Zuckerberg’s (or at least beyond what Zuckerberg has said publicly). While Zuckerberg has described external brain sensors that would allow you to “type” 100 words per minute with your mind, Musk wants to merge brains and computers much more deeply, through a brain chip that both sends and receives information.

The possibilities of brain computer interfaces are both exciting and frightening. The ability to communicate with others via thought, for example, is exciting, but giving others the ability to read your mind is frightening. Controlling a light switch or driving a car with one’s mind is exciting; the potential of others controlling your mind is frightening. It might be cool to have a perfect memory but it would be terrifying if your memory could be hacked. Leveraging artificial intelligence to make us smarter would be great; creating artificial intelligence that could grow much smarter and more powerful than us is the stuff of nightmares.

That last fear has raised eyebrows among heavyweights, from Stephen Hawking to Bill Gates and even Elon Musk. Yet Musk has entered the AI race perhaps without fully realizing it: brain computer interfaces will inevitably bring us closer to artificial intelligence. Musk argues that for us to compete with AI, we need to have some of our own machine intelligence, but has he fully considered the cost? What few understand is that the path to AI is through creating artificial brains, not “artificial intelligence.” The machine intelligence we create will be real. We are already in the process of digitally mapping the human brain. Fully charting the map will require data, which will ultimately come from recording the brain through brain chips.

To understand the significance of brain chips, it would behoove us to take a quick look at where we started. I am chairman of BrainGate, Inc., the company that holds many of the relevant patents on brain computer interfaces. I first encountered BrainGate in 1998 as a graduate student at Brown University. A handful of neuroscientists under the leadership of professor John Donoghue had invented technology that could connect a computer chip to the human mind, allowing it to act as a remote control for other devices. It was a clunky system that required brain surgery and wires connecting your skull to a mainframe, but it worked. They called the technology BrainGate.

The possibilities were unbelievably exciting, and the Brown team set out to turn BrainGate into a business. They raised a pile of money, pushed the science and technology further, and eventually took the company public under the auspicious name of Cyberkinetics. Fast forward a decade and Cyberkinetics continued to show promise but no profits. BrainGate had entered clinical trials, proved viability with human patients, and showed real signs of potential success. By 2008, a BrainGate patient controlling a wheelchair with her mind was featured on 60 Minutes, but it was clear that commercialization was still far off.

Earlier that year, I approached Cyberkinetics about buying BrainGate personally.  Having just sold Web.com, I had the means to buy the BrainGate business, and we came to an agreement. My team took the company private again with the belief that BrainGate held great technological promise and could do great things for society.

BrainGate has always spent far more money than it brought in. After all, a product that requires brain surgery to use has extremely limited commercial viability. Like everyone else at the time, we thought of BrainGate as purely positive, a tool to help the physically disabled walk, talk, use their hands, and control wheelchairs, cars, and computers. It was cutting-edge technology that allowed paralyzed people to live more normal lives. It was a noble pursuit with no real downside. Or so we thought.

I have spent most of my career in engineering and business, despite having received my academic degrees studying the brain across science and philosophy. None of these disciplines speak the same language, nor do they overlap in their goals. This is largely a good thing, as each field serves as a sort of check and balance as new ideas come to light. Scientists determine whether something is possible. Philosophers ask and answer questions such as whether we should do it. Engineers focus on the nuts and bolts of how to make those things that are possible and desirable into reality.

Engineers are not scientists or philosophers, so it always concerns me when they get involved in things too early. The most extreme examples have devastating consequences. The most egregious that comes to mind is Thomas Midgley, the engineer who created toxic leaded gasoline for General Motors. If the company had been science-focused, they would have been more inclined to examine the risks before engineering a “solution” that caused hallucinations and death. Absurd as it seems, Midgley moved on from poison fuel to the Frigidaire division of General Motors and created chlorofluorocarbons (CFCs), those pesky chemicals that went into hairspray and destroyed the ozone layer. Again, science would have thought beyond engineering, and any philosopher or ethicist would have warned against something likely to do more harm than good.

There was a reason the Manhattan Project was led by scientists and not engineers. The risks were so severe that it was imperative that what was engineered was capable of doing exactly and only what was intended. Many philosophers were against developing the atom bomb, concerned that the short-term benefits (ending a world war) would not outweigh the long-term consequences (ending the world). But then the allies learned that Germany was attempting to develop its own atom bomb, and that was the end of the “should we” discussion. We sure as hell had to do it before they did.

Beating the Germans before they beat us is one thing… adding profit into science is a whole other thing. In business, profit is often reason enough to throw “should we” out the window. While Jurassic Park is fiction, is there anyone among us who honestly believes that some entrepreneur would not recreate dinosaurs if it were possible? In the movie, mathematician and chaos theory specialist Ian Malcolm is the voice of reason. He gives creator John Hammond an ethics lecture: “You stood on the shoulders of geniuses to accomplish something as fast as you could and before you even knew what you had, you patented it and packaged it and slapped it on a plastic lunchbox, and now you’re selling it.” In one of the most memorable and prescient scenes of the movie, Dr. Malcolm correctly points out that Hammond’s team was so preoccupied with whether or not they could, they didn’t stop to think if they should.

For many of these reasons, we moved BrainGate back to the domain of academics until we had answers to questions that cannot be resolved by executives or engineers. Our team delayed the business venture and allowed academics to continue to use our patented innovations to determine whether we should build BrainGate. But with Musk, Zuckerberg, and other engineers attempting to move forward, there is now a critical void of deliberation, and an acute risk of doing the wrong thing.

We are on the verge of connecting the human mind to machines, and it seems the engineers are leading the charge, figuring out techniques to make it happen. But brain chips have not yet been properly vetted. Where are the philosophers and scientists? Even the uninitiated should be able to see the dire consequences of not asking some hard questions before building something like BrainGate for the masses.

Philosophers and ethicists, could you please tell us what the risks are to humanity? Would a brain chip make us something other than human, and do the benefits of becoming non-human outweigh the risks? What is the cost of allowing direct access into our brains? Could this type of access be used by governments, militaries, or deviants to make us do things that we would not have done otherwise? Could this whole enterprise accelerate artificial intelligence in such a way that threatens humanity?

Scientists, is this technology possible without physical and mental consequence? Do we know how our wet brains will react long-term to hardware implants? Will the efficacy decay over time? Can our biology handle the speed at which computers interact? If Anderson Cooper’s cortisol levels spiked in response to his smartphone’s text notifications, what would be the effect of that notification going straight into his brain? Beyond physical consequences, what will happen to our understanding of selfhood, our proprioception, our cognitive maps?

The dedicated resources of the likes of Elon Musk and Mark Zuckerberg will make a difference. I have no doubt that a convergence of the world’s best engineering minds can overcome the challenges that have plagued brain computer interfaces. It could take another decade, or it could take far less time. Just a few “small” breakthroughs — a higher capacity chip, a more effective sensor, a less invasive implantation technique — combined with engineers smart enough to connect the dots, could yield results much faster than we expect. The danger, of course, is that we may find out that we can, long before we have carefully considered whether we should. As it stands now, we are literally giving up our minds to a group of engineers without any consideration of the consequences.

You may not believe it from this writing, but I am a technology optimist. I deeply believe that technology usually delivers progress. I am also a capitalist and would not mind seeing BrainGate worth trillions of dollars. But more importantly, I’m a human being with friends, family, and others I care deeply about. I would never place them in harm’s way for the sake of profit or technological curiosity. For our part, BrainGate will keep itself out of the headlines and leave its patents, data, and intellectual property where they belong, in the ivory tower of academia, until we know it is safe.

Personally, I am not smart enough to answer many of the questions I have raised. I can merely ask, and hope that we find answers before we build something that could be far worse than leaded gas, chlorofluorocarbons, a dinosaur park, or even an atom bomb.

Jeff Stibel, Contributor
