Right now you probably hold one of three positions on artificial intelligence.
- Artificial Intelligence will help us solve some of our biggest problems and relieve us of our most boring tasks.
- I don’t trust it. It will take over the world as soon as it gets the chance.
- I have no idea what artificial intelligence even is, let alone what it will do.
In some respects the first two are true, but we can’t address them while most of us identify with option three.
What is Artificial Intelligence?
For the purposes of this article, we will treat AI, machine learning, deep learning and neural networks as means of using computer processing power and a variety of algorithms to process massive datasets, enabling a computer to make probability assessments.
The output may help us get from one point to another, determine the best result for a search query, or identify a cancer cell from millions of healthy ones.
In each case it can perform the task much faster than we ever could, often with much better results, and without ever needing to take a rest.
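To make “probability assessment” a little more concrete, here is a deliberately tiny, invented sketch in Python: a toy model that estimates the probability a short message is about the weather, based on nothing but word counts from a handful of labelled examples. Real systems learn from vastly larger datasets with far more sophisticated models; every name and data point below is hypothetical.

```python
from collections import Counter

# Invented training set: (text, label) pairs.
TRAINING = [
    ("rain tomorrow bring umbrella", "weather"),
    ("sunny skies all week", "weather"),
    ("meeting moved to friday", "other"),
    ("lunch at noon tomorrow", "other"),
]

def word_counts(label):
    """Count how often each word appears in messages with this label."""
    words = Counter()
    for text, lbl in TRAINING:
        if lbl == label:
            words.update(text.split())
    return words

def probability_weather(text):
    """Crude probability estimate: the fraction of the message's words
    seen more often in 'weather' examples than in 'other' ones."""
    weather, other = word_counts("weather"), word_counts("other")
    tokens = text.split()
    votes = sum(1 for w in tokens if weather[w] > other[w])
    return votes / len(tokens) if tokens else 0.5

print(probability_weather("sunny skies"))          # close to 1: looks like weather
print(probability_weather("meeting at noon"))      # close to 0: probably not
```

The output is only a number between 0 and 1; the program has no notion of what weather actually is, which is exactly the limitation the next section describes.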
The potential for improving almost every aspect of our lives is real and the ability to give humans an escape from the many dull tasks that still require our attention will be welcome.
What AI is not
Just because a computer can identify you from a grainy video frame does not mean it knows who you are. We sometimes assume that because AI can solve mathematical problems and answer questions posed in English, it understands what it is doing, but it does not.
When you ask Siri what the weather will be tomorrow, Siri determines that you are looking for a weather forecast for your location 24 hours from now. But Siri has no idea what “time”, “weather” or even “here” means. It is simply processing the words you used and matching them against potential answers that include references to your location for a weather report covering the next 24 hours.
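A caricature of that matching process might look like the Python sketch below. The intent names and keyword lists are invented for illustration, and real assistants use far richer language models; the point is only that nothing here “understands” weather — the program just scores word overlap against canned intents.

```python
# Invented, drastically simplified intent matcher: each intent is a bag of
# trigger words, and the intent with the most overlapping words "wins".
INTENTS = {
    "weather_forecast": {"weather", "forecast", "rain", "sunny", "tomorrow"},
    "set_alarm": {"alarm", "wake", "remind"},
}

def match_intent(utterance):
    words = set(utterance.lower().split())
    # Score each intent by how many of its trigger words the utterance contains.
    scores = {name: len(words & triggers) for name, triggers in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(match_intent("What will the weather be tomorrow"))  # weather_forecast
print(match_intent("hello there"))                        # None: nothing matched
```

Swap the trigger words and the same code would “answer” questions about any topic equally well — or equally blindly.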
For Siri to understand that you want the forecast to plan a potential outdoor event, or to decide what outfit to prepare, would require general intelligence. That is what humans have; in a computer it would be Artificial General Intelligence. The advances here are being made by systems built on deep neural networks, which could theoretically teach themselves something new simply by being supplied with datasets.
It was a system like this that beat esports players at Dota 2, as mentioned in an earlier article, and it is at the heart of the systems that will be tasked with driving our vehicles.
They are improving, but they are still not like us.
Here is where it becomes important to consider the implications because work in this field holds the greatest promise and peril.
On 4 September 2017, Elon Musk tweeted his abiding fear about AI: not that machines wish to become our overlords, but that the company or nation that manages to gain a sufficient advantage would be able to dominate its area of speciality and, in time, the world.
“China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.” — Elon Musk (@elonmusk), 4 September 2017
The threat of such an advantage would spur nations to put measures in place to either compete or, if that does not seem feasible, to attack the company or nation in possession of it.
So a new worldwide body would be needed to monitor progress and, ideally, require advances to be placed in the public domain. Such openness would lower the incentive to invest and experiment, but it may be the way to avoid a global conflict.
A previous article considered the challenge of managing companies that are more powerful than countries and the need for such an organisation to oversee them.
There is another reason for having a controlling body to peer review the research and results of AI experiments. AI trained on limited or biased datasets will itself be biased, and many of the applications that would benefit from an AI solution are prone to human bias. Recruiting is an example: a human recruiter can be quizzed and asked to justify their recommendations, but beyond the result itself, AI generally can’t be. It is like a box of magic whose power is determined by the quality of the information it uses to learn. If the information is bad, so are the decisions.
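The “bad information in, bad decisions out” point can be demonstrated with a deliberately biased toy dataset. Everything below is invented for illustration: a naive model that simply learns historical hire rates per group will faithfully reproduce whatever prejudice the records contain.

```python
from collections import defaultdict

# Invented historical records: (group, hired) pairs reflecting a biased past.
HISTORY = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", False), ("group_b", True),
]

def learn_hire_rates(records):
    """'Train' by computing the historical hire rate per group — nothing more."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def recommend(group, rates, threshold=0.5):
    """Recommend a candidate whenever their group's historical rate clears the bar."""
    return rates.get(group, 0.0) >= threshold

rates = learn_hire_rates(HISTORY)
print(rates)  # the bias in the records becomes the bias of the model
print(recommend("group_a", rates), recommend("group_b", rates))
```

No one wrote a prejudiced rule here; the skew comes entirely from the data, which is why knowing what a system was trained on matters as much as knowing its accuracy.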
You can imagine overstretched judicial systems might welcome AI for assessments and even judgements, but we need to know what data such systems will be trained on, or else face entrenched prejudice on a grand scale.
This bias may already be in place as banks and insurers run automated risk assessments of new clients, with little opportunity to ask or challenge why you were turned down or penalised on your rating. Watch the TEDtalk below for more on this.
We may be preparing students in the wrong way, in effect making them more likely to be replaced by machines. In the TEDtalk below you will find out about an AI bot that does better than 80% of applicants to Japan's most prestigious university.
Hopefully this short overview allows you to appreciate that there is no stopping progress and that humanity will pursue AI.
The question is how we will do it.
With a few willing to take risks that could negatively affect us all, or with many engaging in the varied and complex challenges that require the best of our intelligence to ensure we get the best from our ever smarter machines.