'It's been a dream ever since the early days of science fiction': 5 tech CEOs on the rise of AI

by Seth Archer on Jun 29, 2016, 9:21 AM


Artificial intelligence has been a popular topic for decades.

"2001: A Space Odyssey" came out in 1968, and introduced the world to HAL 9000, a smart computer that ended up being homicidal.

Evil artificial intelligence is mostly a product of science fiction so far, but as technology advances, it is starting to seem more and more plausible.

Siri is just a harmless assistant, and Poncho is a cute little weather cat, but soon we could have highways full of driverless cars and skies full of autonomous drones that we wouldn't want to go rogue. 

This has led several tech CEOs to speak openly about the potential negative effects of AI, and many of them have started research and education projects aimed at making AI as useful and human-friendly as possible.

"The Matrix," "Terminator" and even Auto, the pilot robot in "WALL-E," depict a future where AI has gone bad. So let's hope these CEOs listed below know what they are doing.

Eric Schmidt - executive chairman of Alphabet, former CEO of Google

Google has a bit of a sunnier outlook on AI than most.

"Some voices have fanned fears of AI and called for urgent measures to avoid a hypothetical dystopia," Schmidt wrote in Fortune.

"We take a much more optimistic view."

The Fortune op-ed, written with Google X founder Sebastian Thrun, was headlined "Let's stop freaking out about AI" and painted a rosier picture than the other CEOs on this list offer.

Google's AI research is particularly worrisome to some because of the amount of data it collects from its users. 

Google recently mastered the notoriously complicated game of Go, with its AlphaGo program beating one of the world's top players.

Google set up an AI ethics board to help quell some of its critics, but refuses to reveal who sits on the board.



Elon Musk - CEO of Tesla and SpaceX

Musk is perhaps the most prominent voice warning us about the impact of AI. He helped start OpenAI, a non-profit research company that hopes to develop benevolent, open-source AI technology.

He told the audience at Recode's Code Conference that there is only one company whose artificial intelligence worries him, strongly hinting at Google without confirming which company he meant.

Musk is perhaps so vocal because of his view of how humans and technology will interact in the future.

Musk said that he expects a "neural lace" technology to help humans interface directly with computers, and he thinks humans are already cyborgs.

Both ideas are the kind of far-out concepts we have come to expect from the eccentric CEO, but if he's right, AI could have even more access to human minds and bodies.

That means making sure future AI is nothing like the machines in "Terminator" or "The Matrix" is probably in our best interest.

"It's really just trying to increase the probability that the future will be good," Musk said about his AI research at the Code Conference.



Satya Nadella - CEO of Microsoft

"Depending on whom you listen to, the so-called “singularity,” that moment when computer intelligence will surpass human intelligence, might occur by the year 2100—or it’s simply the stuff of science fiction," Nadella said in a recent Slate op-ed.

"I would argue that perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology."

One of Microsoft's most recent forays into AI ended up becoming a genocidal racist. The Twitter bot, Tay, lived only within the context of social media, which rendered it fairly harmless, but it demonstrated how quickly things can go wrong.

As AI enters more areas of our lives, it has the potential to impact us in untold ways. This is why Nadella says we have to be compassionate toward AI as we teach it to be smarter.

He lays out four rules humans should follow as we create more and more AI: empathy, education, creativity, and judgment and accountability.

Read the details about his rules on Slate.



See the rest of the story at Business Insider


 