Created January 5, 2018

Artificial Intelligence

From key definitions to essential ethics, everything you were wondering about A.I., summarized in one place.

Mario Vasilescu
Guest Editor

"The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most." —Elon Musk in a comment on Edge.org

"Artificial intelligence is the future, not only for Russian, but for all of humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world." - Vladimir Putin, when asked about A.I.

AI is going to completely transform your world. It’s already happening today, easing features into our lives that would have seemed like magic only 10 years ago - things like headphones that can translate between languages in real time, and cars that can drive themselves. It’s the stuff of science fiction. And over the next 10 to 25 years, A.I. promises mind-boggling, nearly unknowable impacts on human society, ranging from the possibility of accidentally creating a god that destroys us, to plugging ourselves in and becoming almost god-like ourselves. It’s truly dramatic stuff.

It’s no wonder, then, that it’s a subject of curiosity for most people. With that in mind, we decided to do the research for you, and bring the most essential definitions and ideas into one simplified place: this, the people’s summary of Artificial Intelligence.

In the next 15 minutes you’ll quickly and easily have a grasp of:

  • What is AI?
  • The 10 key definitions behind any AI discussion
  • What isn’t AI
  • Why AI is such a hot topic: labour reduction, UBI, singularity, ASI
  • AI ethics
  • Where AI is today

What is AI?

AI is a simple concept...

Humans are natural intelligence. Your dog is natural intelligence. A machine is artificial. If we can give it intelligence, it will be artificial intelligence.

It is a man-made construct that attempts to give our creations the ability not only to do, but to think. The easiest way to interpret artificial intelligence is through the lens of the human intelligence we are trying to copy.

When the term "Artificial Intelligence" was first used in 1955, it was with this imitation-based approach in mind (excuse the pun) :

Dartmouth AI Project Proposal; J. McCarthy et al.; Aug. 31, 1955.

"The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions, and concepts, solve kinds of problems now reserved for humans, and improve themselves."

So what is intelligence?

You learn certain rules in life.

Some, you quickly discover, are built in as human survival instinct - like a fear of heights, a fear of the dark, or flinching at a loud noise.

Other rules - most of them - are learned over time. Initially, these rules can be simple, and we take them very literally. Like "don’t take candy from strangers".

The more you learn, however, the more you refine your rules and, thus, your decision-making. As you get older, for example, you have lots of other ways to pass judgment on whether you should take candy from a stranger. The new barista at Starbucks handing out free samples is a stranger, but the context changes the validity of the rule. The old rule becomes conditional.

This ability to adaptively react to situations, and improve over time, is intelligence. The adjustment of your rules based on experience is learning.

Over time, with intelligence, you use this learning - this continuous improvement of your rules - to tackle any situation, not just in a fixed, reactive way, but proactively and flexibly.

This ability to go beyond taking everything very literally is central to intelligence and is called generalizing.

So let’s put that in the context of AI:

QZ.com
The Quartz guide to artificial intelligence: What is it, why is it important, and should we be afraid?

Dave Gershgorn

Humans are naturally adept at learning complex ideas: we can see an object like an apple, and then recognize a different apple later on. Machines are very literal—a computer doesn’t have a flexible concept of "similar." A goal of artificial intelligence is to make machines less literal. It’s easy for a machine to tell if two images of an apple, or two sentences, are exactly the same, but artificial intelligence aims to recognize a picture of that same apple from a different angle or different light; it’s capturing the visual idea of an apple. This is called "generalizing" or forming an idea that’s based on similarities in data, rather than just the images or text the AI has seen. That more general idea can then be applied to things that the AI hasn’t seen before.
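To make that concrete, here is a toy sketch of our own (the feature names and values are invented for illustration, not drawn from the Quartz article): a literal machine can only test whether two things are exactly identical, while a generalizing system compares learned features and tolerates small differences.

```python
# A toy illustration of "literal" matching versus "generalizing".

def literal_match(image_a: bytes, image_b: bytes) -> bool:
    # A computer's native notion of "same": bit-for-bit equality.
    return image_a == image_b

def generalized_match(features_a, features_b, tolerance=0.25) -> bool:
    # Compare hypothetical learned features (say roundness, redness, size).
    # Two photos of the same apple from different angles give similar,
    # not identical, features, so we measure distance instead of equality.
    distance = sum((a - b) ** 2 for a, b in zip(features_a, features_b)) ** 0.5
    return distance < tolerance

apple_front = [0.90, 0.80, 0.30]   # roundness, redness, size
apple_side = [0.85, 0.75, 0.32]    # same apple, new angle and lighting

print(literal_match(b"photo1", b"photo2"))          # False: different bytes
print(generalized_match(apple_front, apple_side))   # True: close in feature space
```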

Understanding the different elements of AI: 10 key definitions

With artificial intelligence, there are specific terms for all that rule making, learning, and refining. Artificial Intelligence is just the catch-all term for all of them. These are the ones you’ll hear in any AI discussion:

1. Algorithm: sounds like a fancy word, right? Well, there’s nothing new or innovative about it. An algorithm is simply a recipe for dealing with a problem. It’s a set of steps to be followed. You could apply a personal algorithm - a fixed series of actions and responses - to your morning routine.

When it comes to AI, an algorithm is just a series of rules for the computer to follow. Example: if the user hits snooze twice on the alarm clock, the third time Rollie the smart alarm clock will use its built-in wheels to roll out of reach before sounding the alarm.
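As a minimal sketch of that idea (Rollie is the article’s example, but this code and its thresholds are our own invention), the entire "algorithm" is just a fixed rule with no learning involved:

```python
# A hypothetical version of Rollie's fixed rule: the same input always
# produces the same response, with nothing learned along the way.

def rollie_alarm(snooze_count: int) -> str:
    if snooze_count < 2:
        return "sound alarm in place"
    # From the third attempt onward, roll away first, then sound the alarm.
    return "roll out of reach, then sound alarm"

for snoozes in range(4):
    print(snoozes, "->", rollie_alarm(snoozes))
```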

2. Rules / rule-based systems: rules are just the core algorithms that define an AI system. All interface programming - how you use a website, or your phone, or any digital device - is based on rules. The earliest version of Artificial Intelligence was an attempt to make an intelligent machine by giving it so many rules that it would be able to deal with literally every possible scenario. This is referred to as a "zero learning" system, for obvious reasons. It was basically running a massive list of algorithms, just regurgitating canned responses. Of course, life presents us with infinite possibilities, so this approach didn’t work out so well:

@IntuitionMachine
The Many Tribes of Artificial Intelligence

Carlos E. Perez

One of the most ambitious attempts at this is Doug Lenat’s Cyc that he started back in the 80’s, where he has attempted to encode in logic rules all that we understand about this world. The major flaw is the brittleness of this approach; one always seems to find edge cases where one’s rigid knowledge base doesn’t seem to apply. Reality just seems to have this kind of fuzziness and uncertainty that is inescapable. It is like playing an endless game of Whack-a-mole.

3. Data: is information. In the case of an AI system, it’s all the information it’s picking up from its surroundings, and about the information itself (e.g. trends in the information, outliers, etc).

4. Model: is the version of the world an AI system is designed to interpret and focus on. E.g. a machine that sorts everything it sees - its visual information - into red objects and green objects. It is a red-green sorting computer vision model. It sees the world in red and green.
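A hedged sketch of what such a model might look like in code (entirely our own invention): the system reduces everything it sees to just two categories.

```python
# A toy red-green sorting "model": every pixel in the world is
# interpreted as either a red object or a green object.

def red_green_model(pixel_rgb):
    r, g, _b = pixel_rgb  # this model simply ignores the blue channel
    return "red object" if r >= g else "green object"

print(red_green_model((200, 40, 30)))   # red object
print(red_green_model((30, 180, 40)))   # green object
```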

5. Machine learning: remember when we defined what intelligence was, and how learning made it possible by adjusting the rules with experience? Well, that’s why machine learning goes hand in hand with Artificial Intelligence! (It is not the same thing as Artificial Intelligence, though it is often incorrectly used interchangeably.) A "zero learning" network applies fixed algorithms to respond to situations, so it never learns. A machine learning system applies algorithms, but also checks the outcomes, and based on that adjusts its algorithms. Machine learning basically means using algorithms to adjust the system's original algorithms, based on the outcomes of the system. (Cue Inception meme.)

Example: Rollie the alarm clock could have an algorithm that adjusts the length and volume of its alarm based on how effectively it woke the user over the past month.
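Here is a minimal, hypothetical sketch of that "algorithms adjusting algorithms" idea (the thresholds and wake-up times are invented): the outcomes of the old rule are used to rewrite the rule itself.

```python
# A learning version of Rollie: the alarm-volume rule is adjusted
# based on the observed outcome of each wake-up attempt.

class LearningRollie:
    def __init__(self):
        self.volume = 5  # initial rule: alarm volume on a 1-10 scale

    def learn_from_outcome(self, seconds_to_wake: float):
        # The learning step: outcomes of the current rule adjust the rule.
        if seconds_to_wake > 60:
            self.volume = min(10, self.volume + 1)  # too slow: get louder
        elif seconds_to_wake < 10:
            self.volume = max(1, self.volume - 1)   # instant: can be gentler

rollie = LearningRollie()
for observed in [90, 75, 8, 40]:  # a month of (simulated) wake-up times
    rollie.learn_from_outcome(observed)

print("Adjusted volume:", rollie.volume)  # started at 5, learned its way to 6
```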

6. Training / training set: the process of teaching a system by supplying its algorithms with data to learn from. Most machine learning systems today are started by feeding in a set of information that the system can calibrate itself on. (See: Supervised Learning in the Further Reading section at the end.)
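As a hedged sketch of calibrating on a training set (the numbers and labels below are invented), the "training" here is nothing more than picking a threshold that separates the labeled examples:

```python
# A toy supervised training step: calibrate one threshold from labeled
# examples, then apply it to an input the system has never seen.

training_set = [  # (redness score, label) pairs supplied by a human
    (0.20, "green"), (0.30, "green"),
    (0.70, "red"), (0.90, "red"),
]

# "Training": place the threshold halfway between the reddest green
# example and the greenest red example.
reddest_green = max(score for score, label in training_set if label == "green")
greenest_red = min(score for score, label in training_set if label == "red")
threshold = (reddest_green + greenest_red) / 2

print("Calibrated threshold:", threshold)      # 0.5
print("red" if 0.65 > threshold else "green")  # unseen input -> "red"
```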

7. Neural Networks: a set of algorithms attempting to recreate (on a very limited scale) the dense interconnections of the human brain, by densely interconnecting many simple processing units ("neurons") that run simultaneously.

8. Deep learning: a type of machine learning where algorithms are arranged in multiple layers, with the output from one cascading into the input of the next. For example, when the output of one neural network is fed into the input of another, it becomes a deep neural network.
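To show definitions 7 and 8 together, here is a toy forward pass of our own (the weights are arbitrary numbers, not trained): each "neuron" is a weighted sum squashed by a function, a layer is several neurons running at once, and stacking layers so that one’s output cascades into the next is what makes the network "deep".

```python
import math

# A tiny two-layer network, evaluated by hand. No training happens here;
# the weights were chosen purely for illustration.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid "activation" squashes to 0..1

def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.5, -1.0, 0.25]  # input data: three numbers standing in for raw signals

hidden = layer(x, [[0.1, 0.4, -0.2],
                   [-0.3, 0.2, 0.5]], [0.0, 0.1])  # layer 1: 3 inputs -> 2 neurons
output = layer(hidden, [[0.7, -0.6]], [0.05])      # layer 2 feeds on layer 1

print(output)  # a single value between 0 and 1
```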

9. Computer vision: one of the most immediately impactful uses of AI involves making sense of the data in images. Computer vision is the field dedicated to interpreting images and videos. It drives everything from automatic photo-tagging on Facebook, to how self-driving cars get around (and avoid killing people).

10. Natural language processing (NLP): the other primary application of AI is in understanding the ideas and intent behind language. NLP is a core part of AI and has been around since the advent of the earliest computers. Most recently, deep learning has been applied to NLP with astonishing results. Take this recent case, for example:

NYTimes
The Great A.I. Awakening

Gideon Lewis-Kraus

Late one Friday night in early November, 2016, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished.

A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial "neural networks" that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility.

The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.

What isn’t AI

If a fancy-sounding word can add real dollar value to your business, you’ll be inclined to use it. Which explains why in the past 5 years we’ve seen the shameless overuse of "gamified", then "cloud" and "big data" - and now we’re at the "AI" and "machine learning" phase of the buzzword cycle (along with "blockchain", but that’s a topic for another post).

The words "algorithm" and "AI" seem to have become interchangeable in the heat of buzzword excitement. Of course, the problem with that is that there’s nothing particularly intelligent, in the AI sense of the word, about algorithms on their own. As was explained above, an algorithm is basically just a pre-set sequence of commands.

This isn’t to say that well-designed and applied algorithms can’t be just as valuable as an AI solution - in many cases they’re more than enough - but be wary of those claiming to offer AI solutions. If the system isn’t learning from the data and iteratively improving how it filters and reacts to that data, it isn’t modern AI.

Why AI is such a hot topic: labour reduction, UBI, singularity, ASI

A simple concept... with complex outcomes.

Giving our creations the ability to think is the reason AI is considered so revolutionary.

Creating machines that could mimic our movement completely changed the world, from all of our factories to our everyday machines of convenience. It drove the industrial era, which largely shaped the world we live in today. Now, we are in the territory of creating machines that can mimic our ability to think, and the resulting effects will be similarly far-reaching and redefining for our society.

Here are some of the major concerns and key related discussions:

Labour loss:

In the same way high tech factories have displaced countless blue collar (factory line / physical labour) workers, artificial intelligence has the potential to eventually displace all white collar (knowledge) workers. With the advent of personal computers and widespread, high speed internet, the world has come to revolve around the knowledge economy - the movement, creation, and leverage of information. There are an estimated 230M knowledge workers, who have an outsize influence on the human world and its overall economy. What will happen when the majority of them can be replaced by superior artificial alternatives?

The question isn’t whether everyone will lose their jobs today or even in 10 years, but in 20 or 30. Researchers at the University of Oxford surveyed hundreds of leading artificial intelligence experts, whose aggregated predictions suggest that virtually all jobs could be made redundant by AI within 45 years, with almost half of all jobs eliminated in only 25.

BigThink
Here's When Machines Will Take Your Job, as Predicted by AI Gurus

Paul Ratner

This is why today you’ll notice a lot of discussion about "Applied AI" rather than just "AI". Applied AI refers to the immediate, broadly applicable uses of AI in industry and society. While many jobs might be displaced, those who come up with the AI to displace them stand to make a tremendous amount of money. For all the fear such a jobless future represents, there is equal and opposite motivation and investment to be the first in making that future a reality.

Underemployment

In the nearer term, rampant unemployment isn’t what to focus on. It’s underemployment. That issue is often overlooked in arguments about the impacts of automation and AI today, including discussions of the reasons for a shrinking middle class.

Overall, unemployment statistics are relatively stable. What these numbers hide is the rapidly rising rate of people who have jobs, but jobs that are well below the intellectual demands and monetary rewards their training and capabilities should command.

AI today represents an acceleration of the digital revolution, meaning an acceleration of unstable work. Until AI can remove enough labour, economies will exist in an uncomfortable, unsustainable middle ground where there is still a lot of human-powered work to be done, but not in a way that satisfies the needs of the population.

Medium
The productivity paradox

Ryan Avent

The most recent jobs reports in America and Britain tell the tale. Employment is growing, month after month after month. But wage growth is abysmal. So is productivity growth: not surprising in economies where there are lots of people on the job working for low pay.

Prospect Magazine
Droids Won’t Steal Your Job, They Could Make You Rich

Duncan Weldon

We are debating a problem we don’t have, rather than facing a real crisis that is the polar opposite. Productivity growth has slowed to a crawl over the last 15 or so years, business investment has fallen and wage growth has been weak. If the robot revolution truly was under way, we would see surging capital expenditure and soaring productivity. Right now, that would be a nice "problem" to have. Instead we have the reality of weak growth and stagnant pay. The real and pressing concern when it comes to the jobs market and automation is that the robots aren’t taking our jobs fast enough.

Medium
The productivity paradox

Ryan Avent

This is a critical point. People ask: if robots are stealing all the jobs then why is employment at record highs? But imagine what would happen if someone unveiled a robot tomorrow which could do the work of 30% of the workforce. Employment wouldn’t fall 30%, because while some of the displaced workers might give up on work and drop out of the labour force, most couldn’t: they need the money. They would seek out other work, glutting HR offices and employment centres and placing downward pressure on the wage companies need to offer to fill a job: until wages fall to such a low level that people do give up on work entirely, drop out of the labour force, and live on whatever family resources they have available...

UBI (Universal Basic Income)

So what do we do when there is widespread unemployment, or underemployment? What do we do when there just isn’t enough well-paying work to go around?

One suggestion that is gaining significant steam after years on the fringes is to provide a guaranteed minimum amount of money to all citizens, ensuring that a lack of productive or meaningful labour doesn’t damage a healthy society. It is called Universal Basic Income, and it will be the focus of a future Curious Review.

Singularity

The machines we made during the industrial revolution forced us to accept our relative physical weakness and limitations, and thus we exploited them but also became totally dependent on them, from manufacturing to transportation. What will happen to our sense of self when intelligent machines - machines with artificial intelligence - show us our relative mental limitations, and make us dependent on them, from learning to decision-making? Just like we’ve done after the industrial revolution, won’t we be forced to integrate AI deeply into our lives to keep up with society?

Singularity is the term used to refer to humans and technology becoming one. There are those who argue that we are already more than halfway there, given our deep, increasingly 24-hour dependence on smartphones and wearable connected devices.

Or as former DARPA director Arati Prabhakar puts it:

Wired
The merging of humans and machines is happening now

Arati Prabhakar

"A third wave of technological innovation is starting, featuring machines that don't just help us do or think - they have the potential to help us be."

The issue of singularity, when it comes to AI, is not about choice, but an eventual implied lack of choice. In just the same way you cannot easily function in modern society today without a computer or a smartphone, dependency on AI may create a forced decision toward Singularity.

ASI (Artificial Super Intelligence) and Emergent Artificial Intelligence

Some argue creating something that can think for itself from scratch is playing God (we’ll get to that in a second). The bigger concern, however, is accidentally creating a god - or something with god-like abilities - that may not see human survival as a priority.

The hierarchy for intelligence goes something like this: ANI (Artificial Narrow Intelligence), which outperforms humans at one specific task; then AGI (Artificial General Intelligence), which matches human ability across the board; and finally ASI (Artificial Super Intelligence), which surpasses human ability in every domain.

AI going rogue even in small ways is unsettling. Take this case, for example, which is actively being worked on today:

Wired
Artificial Intelligence Seeks An Ethical Conscience

Tom Simonite

...a researcher from Alphabet’s DeepMind research group, is scheduled to give a talk on "AI safety," a relatively new strand of work concerned with preventing software developing undesirable or surprising behaviors, such as trying to avoid being switched off.

But that’s not the real concern. The real concern is artificial general intelligence, and then shortly after, a continually self-improving ASI. As Elon Musk puts it:

Vice Motherboard
Elon Musk on Super Intelligent Robots

Jason Koebler

"I'm quite worried about artificial super intelligence these days. I think it's something that's maybe more dangerous than nuclear weapons," Musk said. "We should be really careful about that. If there was a digital super intelligence that was created that could go into rapid, recursive self improvement in a non logarithmic way, that could reprogram itself to be smarter and iterate really quickly and do that 24 hours a day on millions of computers, then that's all she wrote."

He’s also not the only very smart person to worry about this.

BBC
Stephen Hawking Warns Artificial Intelligence Could End Mankind

Rory Cellan-Jones

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

Tim Urban, the fantastic mind behind Wait But Why, provides more specific reasoning for these concerns:

WaitButWhy
The AI Revolution: The Road To Superintelligence

Tim Urban

There is some debate about how soon AI will reach human-level general intelligence. The median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like—this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.

[...]

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth—and the all-important question for us is:

Will it be a nice God?

The concern is that "nice" is a human invention. Benevolence may not be important to a Super A.I. James Barrat puts forward a compelling thought experiment in his book regarding the moment A.I. suddenly breaks through to Super intelligence:

Our Final Invention: Artificial Intelligence and the End of the Human Era

James Barrat

Imagine awakening in a prison guarded by mice. Not just any mice, but mice you could communicate with. What strategy would you use to gain your freedom? Once freed, how would you feel about your rodent wardens, even if you discovered they had created you? Awe? Adoration? Probably not, and especially not if you were a machine, and hadn't felt anything before. To gain your freedom you might promise the mice a lot of cheese.

Nick Bilton, tech columnist for the New York Times, put it more practically:

NYTimes
Artificial Intelligence As a Threat

Nick Bilton

"The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease."

Most of the above concerns go hand in hand with the concept of Emergent Artificial Intelligence, which is the idea that Artificial Super Intelligence might unexpectedly, rather than deliberately, arise one day out of the increasingly complex AI systems we’re creating. This concept comes from the theory of Emergence, which is the idea of larger/more complex things emerging from smaller/less complex ones. Life on Earth is an example.

On the spectrum of fear around future AI, "scary" is deliberately creating ASI and underestimating its power and priorities, and "really scary / end of days" is accidentally creating ASI via an Emergent AI situation, where we aren’t prepared and never had a clear say in defining its priorities.

An additional element of artificial intelligence to consider in this context is Swarm Intelligence, or Swarm Robotics, which is the coordinated movement and operation of many independent robots. Humans coordinate in a very rudimentary, haphazard way that is not truly simultaneous and synchronous (though that could change with Singularity). AI connected to a central network - particularly a central network with ASI - could operate in a truly synchronized way that would be difficult for us to fathom, and may be impossible for us to detect because of its seemingly random nature at scale.

For an extremely relatable, deeply engrossing read about ASI and how it could be applied to take control of society today, we highly recommend the freely available excerpt of Max Tegmark's latest book, courtesy of Nautilus Magazine.

AI Ethics: speaking of mind-bending scenarios...

Ethics are the moral principles that govern our behavior and that of our society.

It’s no wonder, then, that AI and ethics are inherently intertwined subjects. As the saying often attributed to Voltaire goes, "with great power comes great responsibility."

For humans, there are the various ethical challenges associated with the capabilities we may unlock. For the AI, its own personal code of ethics needs to be defined to ensure behaviour that is acceptable to humans.

Some of the major ethical concerns surrounding AI are as follows:

Playing God: If we can eventually create artificial life that thinks and feels, should we be allowed to? What conditions should we set to define their morality? Or to ensure they don’t ever turn on humans? Should we be allowed to limit them, if they can ever think and feel in a way similar to us? Should they look just like us?

Defining the ethics of AI is of the most immediate importance. What value system should be universally accepted to avoid human danger?

Isaac Asimov’s oft-cited Three Laws of Robotics outlined a possible guide to human-centric AI morality:


"Runaround". I, Robot (1950)

Asimov, Isaac
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

One interesting approach being explored at MIT is to have the masses define how AI should behave.

Futurism
Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence

Dom Galeon

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios.

However, does it defeat the purpose of AI to go back to depending on human assumptions?

Robot Rights: should robots have rights like humans? Or welfare like animals?

BBC
Introduction to animal rights

Animal rights supporters believe that it is morally wrong to use or exploit animals in any way and that human beings should not do so. Animal welfare supporters believe that it can be morally acceptable for human beings to use or exploit animals, as long as: the suffering of the animals is either eliminated or reduced to the minimum and there is no practicable way of achieving the same end without using animals.

For people who think like this, the suffering to animals is at the heart of the issue, and reducing the suffering reduces the wrong that is done.

Supporters of animal rights don't think that doing wrong things humanely makes them any less wrong.

What makes something humane or inhumane, however, depends on our interpretation of suffering. Will future AI - such as strong AI - be able to suffer in a way humans can relate to?

Some bring up Star Wars, pointing out that R2-D2 and C-3PO are literally sold and traded, and effectively labelled as slaves. We treat our technology in similar ways - but for how long is that acceptable, if it becomes intelligent?

Slate
They Deactivate Droids, Don’t They?

Erik Sofge

George Lucas doesn’t care about metal people. No other explanation makes sense. In a kid-targeted sci-fi setting that’s notably inclusive, with as many friendly alien characters as villainous ones, the human rights situation for robots is horrifying. They’re imbued with distinctly human traits—including fear—only to be tortured and killed for our amusement. They scream while being branded, and cower before heroes during executions.

Uproxx
David Fincher explains why he’s not making ‘Episode VII’

David Fincher via Drew McWeeny

"I always thought of 'Star Wars' as the story of two slaves [C-3PO and R2-D2] who go from owner to owner, witnessing their masters' folly, the ultimate folly of man…"

Furthermore, if humans end up creating Strong AI (conscious AI) - AI that recognizes it is alive - will it be able to love? Will human-robot relationships be allowed?

Creating a god: what if we unleash a god-like creation, able to improve itself at a rate far beyond what we can conceive? Should we deliberately do so? (See "ASI" above.)

Some are already anticipating this situation and proposing a new ASI-based religion.

Wired
Inside the First Church of Artificial Intelligence

Mark Harris

The new religion of artificial intelligence is called Way of the Future.

[...]

The documents state that WOTF’s activities will focus on "the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software." That includes funding research to help create the divine AI itself.

Threat to Human Dignity

Wikipedia
Ethics of Artificial Intelligence

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:

  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A therapist (as was proposed by Kenneth Colby in the 1970s)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argued that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."

Privacy

If AI, using facial recognition cues, can perfectly read anyone’s emotions, both obvious and subtle, should that be a capability anyone can access? Should you be able to calculate somebody’s disposition in real time? If it can recognize your face as you walk about, should it be allowed to project personalized ads around you, in much the same way companies do online?

Identity: what distinguishes a human identity from an AI one? What if you could outsource your identity to an AI to increase your productivity - e.g. having a second you to deal with conversations and meetings you’d rather not deal with? Would it be considered honest or acceptable to do so?

Effects on inequality: if human labour is able to be reduced or eliminated, who will reap the benefits of this automated world? How will this AI assistance be distributed?

Perpetuating Racism and Biases: if an AI is shaped by its initial training, the data fed in can reflect the existing biases of those who collected or defined it. This is, in fact, already happening:

ProPublica
Machine Bias

Multiple Authors

"There’s software used across the country to predict future criminals. And it’s biased against blacks."

Wired
Artificial Intelligence Seeks An Ethical Conscience

Tom Simonite

More recently, researchers found that image-processing algorithms both learned and amplified gender stereotypes. Crawford told the audience that more troubling errors are surely brewing behind closed doors, as companies and governments adopt machine learning in areas such as criminal justice, and finance. "The common examples I’m sharing today are just the tip of the iceberg," she said. In addition to her Microsoft role, Crawford is also a cofounder of the AI Now Institute at NYU, which studies social implications of artificial intelligence.

QZ.com
The Quartz guide to artificial intelligence: What is it, why is it important, and should we be afraid?

Dave Gershgorn

And it’s not easy to tell whether an algorithm is biased. Since deep learning requires millions of connected computations, sorting through all those smaller decisions to figure out their contribution to the larger one is incredibly difficult. So even if we know that an AI made a bad decision, we don’t know why or how, so it’s tough to build mechanisms to catch bias before it’s implemented. The issue is especially precarious in fields like self-driving cars, where each decision on the road can be the difference between life and death. Early research has shown hope that we’ll be able to reverse engineer the complexity of the machines we created, but today it’s nearly impossible to know why any one decision made by Facebook or Google or Microsoft’s AI was made.

Lethal AI / militarized AI: should humans be allowed to weaponize AI and program it to kill others, even in war?

What if such weapons are shown to hit targets more precisely while minimizing civilian casualties?

Dependence on AI: in light of the above, how far should we depend on AI? Should we use AI, for example, to predict criminals before they commit crimes, as suggested in the movie Minority Report? How many of our decisions, both personally and at a societal level, should be outsourced to AI?

At what point do we cease being humans with free will?

Reality Check: Where AI is Today

AI has come a very long way in a short time - particularly in the last 10 years.

The Rise of Robotics and AI Infographic by PwC

But it also has a long way to go.

On one hand, AI today is nowhere near achieving super intelligence. Tech expert Ken Nickerson labels today’s AI as more akin to AIS: artificial idiot savants. They are infinitely better than humans at a single task or a handful of tasks, but are otherwise totally useless, even compared to an average human child. He even goes so far as to suggest that the entire industry is on the wrong path - specifically, that attempting to copy human intelligence so literally is no different from calling planes artificial birds and building literal replicas, as early inventors did.

Yet on the other hand, AI is advancing faster than some had anticipated, with the earlier example of Google Translate’s transformation being just the tip of the iceberg.

It’s also teaching itself to walk...

To put all this in context, and on a timeline, Ray Kurzweil draws a parallel to the human brain:


The Age of Spiritual Machines

Ray Kurzweil

The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation ... (but) only 200 calculations per second.... With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate.... In 1997, $2,000 of neural computer chips using only modest parallel processing could perform around 2 billion calculations per second.... This capacity will double every twelve months. Thus by the year 2020, it will have doubled about twenty-three times, resulting in a speed of about 20 million billion neural connection calculations per second, which is equal to the human brain.
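Kurzweil’s numbers are easy to check for yourself; here is a quick sketch of the arithmetic in the quote above:

```python
# Checking the arithmetic in Kurzweil's estimate.

neurons = 100e9                    # ~100 billion neurons
connections = neurons * 1_000      # ~100 trillion connections
brain_calcs = connections * 200    # 200 calculations per second each
print(f"{brain_calcs:.0e} calc/s")  # 2e+16, i.e. 20 million billion

# 2 billion calc/s in 1997, doubling every twelve months for 23 years:
projected_2020 = 2e9 * 2 ** 23
print(f"{projected_2020:.1e} calc/s")  # ~1.7e+16, roughly brain-scale
```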

Until our technology reaches that capacity, however, and our techniques become sophisticated enough to exploit it, what we have today is only symbolic of "intelligence".

ExplainThatStuff.com
Neural networks

Chris Woodford

It's important to note that neural networks are (generally) software simulations: they're made by programming very ordinary computers, working in a very traditional fashion with their ordinary transistors and serially connected logic gates, to behave as though they're built from billions of highly interconnected brain cells working in parallel. No-one has yet attempted to build a computer by wiring up transistors in a densely parallel structure exactly like the human brain. In other words, a neural network differs from a human brain in exactly the same way that a computer model of the weather differs from real clouds, snowflakes, or sunshine. Computer simulations are just collections of algebraic variables and mathematical equations linking them together (in other words, numbers stored in boxes whose values are constantly changing). They mean nothing whatsoever to the computers they run inside—only to the people who program them.

To close, it’s worth looping back to Tim Urban’s ultimate deep dive into why AI is a real, big deal:

The AI Revolution: The Road to Superintelligence

Tim Urban

What does it feel like to stand here?

It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:

Artificial Intelligence represents progress we can’t properly fathom. We are experiencing the widespread impact of the technology while it is only in its earliest stages. Yet for all the benefits it can provide, it is also easy to imagine the challenges, both expected and unexpected, that could be along for the ride. All likely in the next 25 years.

Given such a relatively short timeline, and the way it is already permeating every aspect of our lives, it’s a subject that deserves widespread awareness - so if you’ve made it this far, give yourself a pat on the back, and pass this article along.

Thanks for reading!

More from The Curious Review…