AI and Machine Learning are gaining ground in a wide range of industries. Is this something to be embraced or to be feared?
Artificial Intelligence, Machine Learning and Deep Learning
Artificial Intelligence is the expression of intelligence by machines rather than humans or animals. The concept of artificial intelligence has existed in both academic and spiritual writings since ancient times. Working automata have existed for as much as 2,000 years, with the earliest examples developed in Greece, China and the Middle East.
Ever since the first computers were developed, artificial intelligence has been a core focus of their use and programming: finding results, making decisions and competing in games. Traditional AI essentially consists of a sequence of 'if this then that' instructions developed by a human programmer. While it is of course far more nuanced and complicated than this, it usually involves programmers writing code for all of the rules of how the AI is to behave.
A subset of AI, machine learning focuses on the ability of computers to learn from information using statistical analysis. This is done through processing and analysis of large data sets, allowing classifications and predictions to be made.
A simple example in this complex field: suppose a computer is trained on a set of emails written by Alice and another set written by Bob. When a new email comes in, the system can predict the author based on the patterns it has identified in the data it was trained on. The interesting aspect is that no human programmer gives the computer the rules for how to identify the author. A data set is provided and the computer is trained.
The training set usually needs to provide the answers: while being trained, the system needs to know who wrote each email, but once trained it can identify and classify future emails on its own.
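The Alice-and-Bob idea can be sketched in a few lines of code. This is a toy, hand-rolled word-frequency classifier in the style of Naive Bayes; the email text and author names are made up for illustration, and a real system would train on thousands of emails using a proper machine learning library.

```python
from collections import Counter

# Hypothetical training sets: a few labelled emails from each author.
alice_emails = [
    "meeting notes attached please review before friday",
    "please review the budget and send notes",
]
bob_emails = [
    "fancy grabbing pizza after the game tonight",
    "great game tonight pizza on me",
]

def word_counts(emails):
    """Count how often each word appears across an author's emails."""
    counts = Counter()
    for email in emails:
        counts.update(email.split())
    return counts

alice_counts = word_counts(alice_emails)
bob_counts = word_counts(bob_emails)
vocab_size = len(set(alice_counts) | set(bob_counts))

def score(email, counts):
    """Multiply per-word frequencies, with add-one smoothing so a
    word never seen in training doesn't zero out the whole score."""
    total = sum(counts.values())
    s = 1.0
    for word in email.split():
        s *= (counts[word] + 1) / (total + vocab_size)
    return s

def predict(email):
    """Guess the author whose training emails make this one more likely."""
    a = score(email, alice_counts)
    b = score(email, bob_counts)
    return "Alice" if a > b else "Bob"

print(predict("please review the attached notes"))  # → Alice
print(predict("pizza after the game"))              # → Bob
```

Note that no rule here says "Alice writes about meetings": the preference for one author over the other emerges entirely from the word frequencies in the training data, which is the point the paragraph above makes.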
Deep Learning takes this a step further: using sophisticated tools and techniques, it can identify these patterns in large datasets without being told what to be trained on. It uses techniques like artificial neural networks and deeply nested layers of learning to produce complex and surprising results.
The key to deep learning is processing and classification of very large amounts of data and automatically identifying patterns and making predictions from this data.
The rise of big data has made enormous data sets available for analysis and processing. Coupled with advances in processing power and the use of GPUs, which allow massively parallel processing, this has led to a rapid increase in the development of machine learning systems.
An artificial intelligence called GPT-2 was trained on the text data of 8 million web pages, building a model which allows it to write new articles that are surprisingly believable. Tay was trained to tweet like a 19-year-old American girl by providing it with an appropriate dataset from Twitter. GauGAN is an AI from NVIDIA trained on artworks that can produce landscape paintings in different styles from rough sketches provided by human users. All of these systems, and many more, are only possible with access to large digital datasets and a lot of processing power to train the AI.
Knowns and Unknowns
The challenge with these systems is that no one 'knows' how they work under the hood. During training, parameters can be tweaked to ensure that the results match expectations, but once training is completed the system becomes a black box: information is fed in and results come out, without the traditional clear step-by-step code that a human programmer would write.
This unknown aspect can, understandably, be a cause for concern. AI is used in a wide range of industries such as healthcare, criminal justice, the military and driverless cars. The algorithms used are developed by training on datasets and 'learning' behaviours, not by being directly programmed to follow the rules of these industries. The advantage is that a system that can identify patterns, classify information autonomously and make decisions based on a large amount of data can be adaptable and work in a range of situations where the inputs are not discrete or clear. On the other hand, without humans fully knowing the reasoning and algorithms behind the decisions that are made, what oversight do we have?
Tay.ai was taken down after 16 hours online. She was designed to learn from her interactions online and rapidly became racist and sexually charged as trolls fed her politically incorrect phrases. Tay highlights the lack of any judgement in machine learning: it processes data and produces outcomes that are directed by that data.
GPT-2 has only been released to the public in a limited form, amid fears that it is too good at producing convincing articles and could fuel the fake news and online spam epidemics.
As machines are taught from existing datasets they can embed and reinforce existing biases, further discriminating against minority or marginalised groups. This is a concern when using AI within the criminal justice system, for example, where historical biases are then embedded into the decision-making process.
Machines do not currently have the capability to conduct ethical reasoning, which is a significant concern if they are used in applications that have life-threatening consequences.
Machine learning allows for discovery of patterns and classification of data in a manner beyond that of human ability. This can lead to breakthroughs in scientific discovery, including in healthcare and medicine. Self-driving cars, when perfected, have the potential to eliminate the human error aspect of the road toll - a factor in 94-96% of all motor vehicle crashes.
The more machines become capable of doing complex or dangerous jobs, the less risk there is to human life in conducting those jobs.
AI can be used to analyse data on how an organisation is running, or the impact that processes are having, and help identify more efficient ways of doing business which can have significant benefits to staff, customers and the environment. AI can also be used to monitor processes and provide warning of potential problems before they eventuate - anything from cashflow to nuclear meltdown.
Something to be feared?
Like most new technologies, the adoption of increasingly sophisticated AI should be done with consideration of the potential negative effects. We shouldn't avoid this technology simply because it is powerful and life-changing, but we also shouldn't blindly embrace it or pursue AI applications simply for the sake of doing so.
AI and machine learning provide a wide range of beneficial applications that can be applied in a practical way, far beyond academic curiosity, and this will only increase as more data becomes available and more processing power can be applied to learning from that data.
If we do this in a careful and considered way, there are opportunities that we haven't even dreamed of yet.
Play at home
Talk to Transformer gives you access to GPT-2 and will write articles based on any starting text that you give it.
GauGAN can turn your sketches into landscape drawings.
Quick Draw is an AI that can guess what you are drawing, no matter how bad you are at drawing it.