The 5 Biggest Misconceptions about Machine Learning
Machine learning and artificial intelligence – collectively known as ML/AI – will be a part of our daily lives within the next few years. These technologies are already embedded in digital assistants such as Siri and Alexa, video games such as "Grand Theft Auto," search engines such as Google and Baidu, and the cameras used to identify faces on Facebook. It is now clear that ML/AI has immense potential for all sectors of technology, including health care, finance, manufacturing and transportation. As these fields improve their processes with machine learning and AI, it's critical for business leaders to understand what they're getting into – lest they fall prey to the many misconceptions that abound.
1. “Machine learning and artificial intelligence are the same thing”
Fact: Machine learning is a subset of AI, but not all AI is machine learning
The term "machine learning" was coined by Arthur Samuel in 1959 to describe computer programs that can change for the better based on their own experience. Artificial intelligence is the broader field of building systems that perform tasks we associate with intelligence; a hand-coded, rule-based expert system is AI but not machine learning, because it never improves from data. Machine learning is the subset that does. For instance, consider how Google's search algorithms track your click history in order to deliver personalized results. Because you've clicked certain links more often than others, the algorithm has learned which links you are most likely to click next – and suggests those results first when you conduct a new search. Similarly, Amazon's AI-powered product recommendations take note of your purchasing records in order to make suggestions based on what you've bought in the past.
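The click-history idea above can be sketched in a few lines of code. This is a toy illustration only – the class name and behavior here are invented for the example and bear no relation to Google's or Amazon's actual systems – but it shows the essence of "improving from experience": every click updates the model, and rankings change accordingly.

```python
from collections import Counter

class ClickRecommender:
    """Toy recommender: rank links by how often this user
    has clicked them before. (Illustrative names and logic.)"""

    def __init__(self):
        self.clicks = Counter()  # link -> number of past clicks

    def record_click(self, link):
        # "Experience": each click updates the model's state.
        self.clicks[link] += 1

    def rank(self, candidates):
        # Most-clicked links are suggested first.
        return sorted(candidates, key=lambda link: -self.clicks[link])

recommender = ClickRecommender()
for link in ["news", "sports", "news", "weather", "news", "sports"]:
    recommender.record_click(link)

print(recommender.rank(["weather", "sports", "news"]))
# -> ['news', 'sports', 'weather']
```

No rules about news or sports were programmed in; the ordering emerged entirely from the recorded clicks, which is the behavior Samuel's definition describes.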
2. “Machine learning will render human workers obsolete”
Fact: Machine learning and artificial intelligence are tools to aid, not replace humans
AI and machine learning have been controversial since their inception, with some warning that they could lead to mass unemployment and others claiming that this is patently false. In fact, both sides of this debate hold valid points. Tim O’Reilly, founder and CEO of O’Reilly Media, believes that AI will empower the workforce rather than displace it because while “machines can do a lot of jobs we thought only people could do,” they will also create new types of work requiring uniquely human capabilities such as creativity and judgment.
Some have suggested that the role of humans in the workplace will change from performing repetitive tasks to being overseers of and collaborators with machine intelligence. In this scenario, workers will provide oversight for intelligent systems while also becoming more efficient by working alongside these machines. ML/AI-related advancements will also create entirely new roles, such as data scientist – a position already in high demand.
3. “It is all about self-improving algorithms”
Fact: Machine learning encompasses a range of techniques used by computer programs to improve based on experience rather than explicit programming
When people think of machine learning, they often picture self-improving algorithms. But machine learning encompasses several families of techniques. Supervised machine learning algorithms improve by training on labeled examples – data annotated with the correct answers. Unsupervised machine learning algorithms, by contrast, work without labels: humans still supply the data, but the algorithms scour large datasets in search of relationships and structure on their own, without explicit instructions about what the right answers are.
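The supervised/unsupervised contrast can be made concrete with a minimal sketch. The data and function names below are made up for illustration: a one-nearest-neighbor classifier stands in for supervised learning (labels guide predictions), and a bare-bones two-cluster k-means stands in for unsupervised learning (structure is found in the same numbers with no labels at all).

```python
def nearest_neighbor_predict(labeled, x):
    """Supervised: learn from (value, label) examples and
    predict the label of the closest training example."""
    value, label = min(labeled, key=lambda pair: abs(pair[0] - x))
    return label

def two_means_cluster(values, iterations=10):
    """Unsupervised: split unlabeled values into two groups
    with a bare-bones k-means (k=2). No labels involved."""
    a, b = min(values), max(values)  # initial centroids
    for _ in range(iterations):
        group_a = [v for v in values if abs(v - a) <= abs(v - b)]
        group_b = [v for v in values if abs(v - a) > abs(v - b)]
        a = sum(group_a) / len(group_a)  # move centroids to group means
        b = sum(group_b) / len(group_b)
    return sorted(group_a), sorted(group_b)

# Supervised: human-provided labels ("cold"/"hot") guide learning.
labeled = [(2, "cold"), (4, "cold"), (30, "hot"), (33, "hot")]
print(nearest_neighbor_predict(labeled, 5))        # -> cold

# Unsupervised: similar numbers, no labels - two groups emerge.
print(two_means_cluster([2, 4, 30, 33, 3, 31]))
# -> ([2, 3, 4], [30, 31, 33])
```

The key difference to notice: the classifier could never work without the "cold"/"hot" annotations, while the clustering function discovers the same low/high split from raw numbers alone.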
4. “It is advanced enough that we should just let it run free”
Fact: Machine learning requires constant monitoring and oversight by a human
All machines need instruction from a human at some point – even if it's just a simple on/off switch – so the claim that machine learning systems can be left alone doesn't hold water. When these technologies are first applied, there will always be bugs to fix and accuracy issues to resolve. But that's just the tip of the iceberg.
Machine learning systems will need to be monitored because they’re built on algorithms that may reflect, and even amplify, human biases without people making intentional decisions about what is learned and how it should be applied.
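The point about amplification is worth making concrete. Below is a deliberately crude sketch with made-up data: a "model" that simply learns to predict the most common outcome in its training history. A 70/30 skew in the data does not stay a 70/30 skew in the output – it becomes 100/0, which is exactly why such systems need human monitoring.

```python
from collections import Counter

def train_majority_model(history):
    """Learn a trivially simple model: always predict the most
    common outcome seen in training. (Illustrative only.)"""
    most_common = Counter(history).most_common(1)[0][0]
    return lambda: most_common

# Hypothetical history: 70% of past outcomes favored group A.
history = ["A"] * 70 + ["B"] * 30

model = train_majority_model(history)
predictions = [model() for _ in range(100)]

# The 70/30 skew in the data becomes a 100/0 skew in the output.
print(Counter(predictions))   # -> Counter({'A': 100})
```

No one made an intentional decision to exclude group B; the bias in the output is purely a by-product of the training data and the objective, which is what monitoring is meant to catch.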
5. “You can always tell when a machine has made an error”
Fact: AI isn’t infallible – and neither are humans for that matter!
The problem with fallibility is that we often don't know we've made an error until after we've committed it – sometimes much later, or not at all. Why? When a computer makes a mistake during a calculation, the error is caught only if some other step of the system was designed to detect it; in complex systems such as deep learning models, a wrong output can look perfectly plausible and slip through unnoticed. Humans fare no better: we usually learn we've made an error only after someone else points it out – and sometimes not even then.
There is a lot of misinformation out there, but don’t be afraid!
As you can see from this list, the same talking point can sometimes be simultaneously true and untrue. It's important to remember that these claims are over-generalizations about the present and future state of AI, which means they're subject to change as advancing technologies force us to adjust how we plan for their impact on society. And while some points about machine learning sound scary, they may not be as bad as you think.