Over the past five years, we’ve seen huge advances in artificial intelligence and its applications that signal a shift in how we’ll work and live over the next few decades. These advances have come from cheaper, more powerful computing and from new tools that have made AI methods much easier to use, effectively turning them into commodities.
While some might imagine runaway AI causing a catastrophe, one of the true perils of AI is the gap in abilities it might create between those who adopt it and those who choose to live without it, or who lack the access or means to use it. The result might not be a slight advantage for those who use it but an orders-of-magnitude increase in ability. We might need to ask ourselves whether access to AI is a basic human right, and if so, what minimum capabilities each person would be entitled to. We might also need to ask whether we need AI to watch over us and guard against the influence of AIs with superhuman capabilities.
Those with access to the latest AI might hold such a large advantage over those without it that fair dealing between the two becomes impossible. In law, such a power discrepancy is referred to as unconscionability. Large corporations with deep pockets and teams of lawyers can’t easily strong-arm individuals because of this concept: judges have been known to throw out agreements where unconscionability played a role in reaching them.
AI is not magic. Boiled down to something simple, it is computer-based pattern recognition, or statistical analysis. The applications of this pattern recognition are what can make for a magical experience. AI methods typically ingest a lot of data and produce an effort-saving filter that identifies a pattern, usually referred to as a classifier. Speech recognition, computer vision, natural language understanding, text-to-speech – all of these are applications of pattern recognition.
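To make the “classifier” idea concrete, here is a toy sketch in Python: a word-count model learned from a handful of labeled examples. The data, labels, and function names are all invented for illustration; real classifiers are far more sophisticated, but the shape is the same – ingest labeled data, produce a filter.

```python
from collections import Counter

def train(examples):
    """Learn per-label word frequencies from (text, label) pairs."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(model, text):
    """Score each label by the frequency of its words in the text."""
    words = text.lower().split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

# Invented toy data: four labeled examples.
data = [
    ("free prize click now", "spam"),
    ("win money free offer", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
model = train(data)
print(classify(model, "click to win a free prize"))  # prints "spam"
```

Everything interesting – the “magic” – lives in what you do with the pattern once the classifier has found it.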
The more abstracted the AI service, the higher its potential impact and value. For example, speech recognition combined with natural language understanding gives a service like Alexa or Google Assistant the ability to interpret a voice command so that it can execute it. Even more magical is when the service can act on a recognized pattern and do something interesting, like play music.
Talking to our homes and surroundings is just the beginning of this seemingly magical experience. AI-based services are starting to reach human-level error rates. Word Error Rates (WER) for speech transcription (in a quiet environment, in English) reached human levels about a year ago. Once human parity is reached in an ideal setting, it starts to spread to more difficult scenarios. We’ll likely see similar performance in noisy environments and other languages over the next few years, along with better-than-human capabilities such as picking out individual speakers in a crowd or hearing and transcribing a whisper from across the room.
We’re also seeing more of these capabilities become easily available to any developer through APIs. For example, facial recognition from Microsoft can pick out multiple faces in an image within milliseconds, speculate on the emotion of each face, and guess its age (today, the error range is plus or minus a decade). Acoustic analysis can pick up sounds like glass breaking or a baby crying. However, some of the spookier gains come from what AI-based services are able to do at scale.
One of the powers of AI is that it works at scale. Looking for one face in a photo is one thing, but scanning potentially thousands or millions of Instagram posts for ones where your ex appears is another. Or what if a video feed of a stadium could be used to record the face, age, mood, gender, and so on of everyone who walks in, to gauge the sentiment of the crowd or market particular products to it? Such technologies are already being used to change display ads based on demographics in retail stores.
With scale, an AI can analyze multiple facets of an individual against huge amounts of data to find correlations or predict next actions. This is especially true in writing. In English, there are only so many ways of saying something, and partway through writing a sentence, a model can predict with growing confidence how the sentence will end.
An Infinite Number of Monkeys
Last May, Google introduced sentence completion (Smart Compose) as a feature in its mail service. As you type, it predicts the end of the sentence, and with a single keystroke the suggested text appears. It can save time when responding to emails. But can this be tuned further?
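A crude sketch of the idea behind sentence completion: given enough text, count which word tends to follow which, then greedily extend whatever the user has typed. The corpus below is an invented three-email toy; real systems use neural language models trained on vastly more data, but the principle of predicting the likely ending is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across a corpus of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def complete(follows, prefix, max_words=5):
    """Greedily extend a prefix with the most likely next word."""
    words = prefix.lower().split()
    while len(words) < max_words and words[-1] in follows:
        next_word = follows[words[-1]].most_common(1)[0][0]
        words.append(next_word)
    return " ".join(words)

# Invented toy corpus of past emails.
corpus = [
    "thanks for the update",
    "thanks for the quick reply",
    "see the update below",
]
bigram_model = train_bigrams(corpus)
print(complete(bigram_model, "thanks for"))  # prints "thanks for the update below"
```

Even this trivial model shows why scale matters: the more text it has counted, the more confidently it can finish your sentence.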
Services like Boomerang already predict how likely it is that the recipient of an email you’ve written will respond. The service compares your draft against millions of other emails and their response outcomes, and makes a prediction based on that.
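A toy sketch of how such a response predictor might work: average the historical reply rates of the words in a draft. The four-email history below is invented, and this is not Boomerang’s actual method – real services model far richer features – but it shows how response outcomes from past email can score a new draft.

```python
from collections import defaultdict

def train_rates(history):
    """Tally, per word, how many past emails containing it got a reply."""
    seen = defaultdict(lambda: [0, 0])  # word -> [replied, total]
    for text, replied in history:
        for w in set(text.lower().split()):
            seen[w][1] += 1
            if replied:
                seen[w][0] += 1
    return seen

def predict(rates, draft):
    """Average the reply rates of the draft's known words."""
    words = [w for w in set(draft.lower().split()) if w in rates]
    if not words:
        return 0.5  # no signal: assume even odds
    return sum(rates[w][0] / rates[w][1] for w in words) / len(words)

# Invented history: (email text, did the recipient reply?).
history = [
    ("quick question about the invoice", True),
    ("question about pricing", True),
    ("newsletter for this month", False),
    ("monthly newsletter update", False),
]
rates = train_rates(history)
print(round(predict(rates, "a question about billing"), 2))  # prints 1.0
```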
What if these services were coupled, or honed further? Eventually, email responses could be pre-written and tuned to increase sales, drive clicks, or elicit a response. How much of an advantage does this give the person or company that uses these tools over those who don’t? Today, the advantage might be slight, but what if we increase the scale of learning from millions of emails to billions? Do the machines become much better at achieving our goals than we are?
We’ve Already Had This Conversation
Who’s better at negotiating the sale of a car: the car buyer or the car dealer? It’s the car dealer. We might buy a handful of cars in a lifetime, but the car dealer negotiates car sales multiple times per day, every day. An experienced dealer will have seen it all.
That experience has taught them how to price counteroffers, how to read facial tics, when to make an offer, and what kinds of statements nurture the buyer toward the sale. AI can work the same way, but at a far more massive scale.
Call centers and chat bots might be handling thousands of conversations at any given time around the same topics. By analyzing different approaches, an AI might determine the optimal type of response for negotiating with a given individual. Such a system might also have conducted discussions and negotiations millions of times over in simulation, using techniques such as Generative Adversarial Networks (GANs) to come up with new strategies for overcoming objections.
Could such a system come up with a pitch so effective that it is nearly irresistible? Could we lose our agency against such a system? Could it come up with the world’s funniest joke, one so dangerous it could be weaponized?
A concern about the potential effectiveness of such systems stems from how easily they can be deployed at scale. Already, copyright enforcement services scan and recognize images en masse, generate adaptive emails that threaten legal action, and automate the entire process so that only a few people are needed to run the operation. The letters sound scary enough that many fall prey to the demands, even though the threats are usually baseless and fall flat when tested by actual legal action.
This is the tip of the iceberg.
As these systems become better at automation, they might extend their capabilities to filing materials with courts. The result could be legal systems inundated with bot-generated legal actions.
We might then start asking ourselves whether such AI-driven services are ethical, and if not, what constraints we should put on them.
Using Powers for Good
If we assume that AI-driven systems will have great power to influence us, then perhaps we can use that power to make ourselves better. In the UK, economists with the Behavioural Insights Team, often referred to as the Nudge Unit, try to influence people to make better decisions for themselves.
What if AI-based behavioural analysis were applied to the many bad decisions we make on a day-to-day basis? Smoking, poor food choices, staying up too late, texting while driving – these create so much unnecessary misery that even small gains could have a dramatic positive impact.
In that case, maybe we’ll tolerate the extreme examples of AI creating a runaway advantage for some if we benefit greatly from it as a whole.