Machine Learning – Why it is the future

By Chinthaka Chandrasekara

Machine learning is a term you have most probably heard at least once in recent times, provided you work in, or at least keep up with, the technology industry. That is because it is the future. To elaborate on this statement, we need to take a closer look at what machine learning actually is, how it works, and how it applies to the modern world.

What is machine learning?

As the term implies, it is, in essence, the ability of machines, or specifically computers, to learn by themselves without human beings (engineers) ever having to teach (program) them. This kind of learning can happen in different ways and through different systematic methods. Although initially categorized under and closely tied to the field of computer science, machine learning has evolved into a much broader study area that encompasses and closely relates to many other fields and industries.

Giving computers the ability to learn, without having to explicitly program them, is no small feat. So how is it done? Machine learning uses different approaches to perform the actual learning, and these fall into two main categories: supervised and unsupervised learning. In a nutshell, supervised learning is the type of learning where a set of target values is provided along with the input data set, so that the validity of the learning algorithm can be evaluated against known answers. Conversely, unsupervised learning is the type of learning where no such target values are provided, and the algorithm has to perform the learning using only the input data set. There is also a third method, reinforcement learning, which is not used as widely as the other two but has been gaining popularity in machine learning applications in recent times. In reinforcement learning, the program learns through trial and error, repeatedly taking actions and adjusting its behavior based on the feedback it receives.

At its core, machine learning deals with learning through experience. As defined by Tom Mitchell, the famed American computer scientist and professor at Carnegie Mellon University, a machine learning computer program would “learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” What this means is that the objective of the learning process is to gain experience at performing a task so that, as measured by the chosen performance metric, the program can perform that task more accurately and effectively in the future.
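To make the distinction concrete, the following is a minimal sketch contrasting supervised and unsupervised learning. It assumes Python with the scikit-learn library and uses that library’s bundled iris dataset purely for illustration; neither is prescribed by the article itself.

# Supervised vs. unsupervised learning: an illustrative sketch.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: target values (y) are provided with the inputs, so the
# algorithm's predictions can be checked against known answers.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("Supervised accuracy:", classifier.score(X_test, y_test))

# Unsupervised: no targets are given; the algorithm must find structure
# (here, three clusters) using only the input data.
clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster assignments (first ten samples):", clusterer.labels_[:10])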

Next, let us look at some of the most popular machine learning algorithms that are currently used and their underlying theories and origins.

Most of the modern algorithms used in machine learning are actually derived from simple real-world concepts. Artificial neural networks, one of the hottest topics in machine learning, simulate the behavior of neurons in the human brain. Genetic algorithms derive from the field of genetics, and use the concepts of crossover and mutation operators, as well as that of natural selection. Decision trees use the simple tree data structure to perform their learning tasks. These, along with many others such as support vector machines, Bayesian networks, rule engines, and clustering methods, provide the highly efficient calculations and computations that lead the way to solving some of the most complex problems in the modern world.
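As a flavor of how one of these ideas translates into code, here is a toy genetic algorithm, offered as a sketch rather than a production implementation. The problem (evolving a bit string of all 1s, often called OneMax) and all parameters are illustrative assumptions, not drawn from the article; what matters is that it exercises the crossover, mutation, and selection operators mentioned above.

# Toy genetic algorithm for the OneMax problem (illustrative assumptions only).
import random

LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(individual):
    return sum(individual)  # number of 1-bits: higher is fitter

def crossover(parent_a, parent_b):
    point = random.randrange(1, LENGTH)  # single-point crossover
    return parent_a[:point] + parent_b[point:]

def mutate(individual):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in individual]

# Start from a random population of bit strings.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Natural selection: only the fitter half of the population reproduces.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("Best evolved individual:", max(population, key=fitness))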

Real world applications

Many of the world’s superpowers, including governments as well as high-end technology companies, have realized the true value and potential of machine learning, and have started acting upon that knowledge. Foremost of these is Google, which has made some amazing breakthroughs and invested heavily in its machine learning technologies during recent years.

For example, Google’s own open-source machine learning and dataflow programming library, TensorFlow, has become one of the most widely used software libraries in modern machine learning applications. Eric Schmidt, the chairman of Alphabet (the parent company of Google), has said, “Google’s self-driving cars and robots get a lot of press, but the company’s real future is in machine learning, the technology that enables computers to get smarter and more personal.” Another area in which Google has made many breakthroughs is deep learning, a form of machine learning that focuses on learning data representations or features, as opposed to task-specific algorithms. The AlphaGo program, which plays the ancient Chinese board game Go and managed to beat world Go champion Lee Sedol in 2016, was a product of the artificial intelligence company DeepMind, which was acquired by Google and is currently managed by it.
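To give a taste of the style TensorFlow encourages, here is a minimal sketch of a small neural network built with its Keras API. The architecture, the randomly generated data, and the training settings are placeholder assumptions chosen purely for illustration.

# A tiny TensorFlow/Keras network trained on synthetic data (illustrative only).
import numpy as np
import tensorflow as tf

# Toy data: 100 samples with 4 features, labeled 1 when the features sum above 2.
features = np.random.rand(100, 4).astype("float32")
labels = (features.sum(axis=1) > 2).astype("float32")

# A small feed-forward network: one hidden layer, one sigmoid output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(features, labels, epochs=10, verbose=0)
loss, accuracy = model.evaluate(features, labels, verbose=0)
print("Training accuracy:", accuracy)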

However, groundbreaking achievements in machine learning at top technology companies were happening long before Google’s time. IBM developed a chess-playing computer called Deep Blue back in the 1990s, which beat the then world champion Garry Kasparov in 1997. So although people have only recently started talking widely about machine learning, the buzz has been there for a very long time and has grown steadily louder. Ever since Charles Babbage originated the concept of a digital programmable ‘computer’, Alan Turing introduced the Turing machine and formalized the concepts of algorithms and computation, and John von Neumann made major advances in computer architecture, the field of computer science has maintained a continuous momentum, with its various areas moving forward and developing steadily, and machine learning is no exception.

Furthermore, machine learning is set to play a big part in other industries as well. For example, in the world of finance and marketing, it helps eliminate marketing waste, enables better predictions and forecasts, allows real-time processing of data, helps structure content, and reduces costs. In the field of accounting, it assists with many tasks, including auditing expense submissions, clearing invoice payments, risk assessment, analytics calculations, automated invoice categorization, and bank reconciliation. In short, the possibilities are endless. We are beginning to see applications of machine learning in more and more real-world settings, including academic institutions, health-care facilities, banking establishments and even the military. It is easy to see how important this technology will be in the future workings of humankind.
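To ground one of these accounting use cases, automated invoice categorization, here is a hedged sketch of how it might look with a simple text classifier. The scikit-learn pipeline and the handful of made-up invoice descriptions are assumptions for illustration only, not a description of any product mentioned above.

# Automated invoice categorization: a hedged sketch with made-up data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A few invented invoice descriptions with their expense categories.
invoices = [
    "monthly office rent payment",
    "laptop purchase for engineering team",
    "flight tickets and hotel for client visit",
    "cloud hosting subscription renewal",
]
categories = ["rent", "equipment", "travel", "software"]

# Turn text into TF-IDF features and train a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(invoices, categories)

# Categorize a new, unseen invoice description.
print(model.predict(["annual server hosting invoice"]))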

But in essence, why is it so? What makes machine learning such a pivotal factor in development? The answer is fairly simple. Human beings are always trying to improve, and to find easier and more effective ways of performing their tasks. Teaching a machine to do something eliminates the need for humans to involve themselves. It is also a question of accuracy, speed and effectiveness, and for a great many tasks machines will triumph over man in all three aspects, as long as they have been provided with the right initial data. If a machine needs only that initial push, and takes care of things from then onwards, that saves an incredible amount of time, energy and labour cost. But even though this sounds very much like a fairytale with a happy ending, it has its own vices.

Man vs machine

There has been an age-old debate about whether the machine will someday equal or surpass the human, and with recent advancements in machine learning, this topic is once again being widely discussed in the scientific community. The possibility that a man-made machine might someday surpass human beings seems frightening to some people, and that fear is amplified by various depictions in science-fiction media. Anyone who has watched the Terminator movies, or the more recent TV show Westworld, would attest to that fact. A machine that looks and behaves like a human in every way would, in earlier days, certainly have been thought impossible, even tantamount to magic; in recent times, very much less so. Arthur C. Clarke’s famous remark that “any sufficiently advanced technology is indistinguishable from magic” seems very relevant here. What would have appeared to be magic a few decades, or even years, ago is now simply considered an advancement in technology. Whether this is a good or a bad thing is certainly debatable. Another Clarke quote also applies: “The only way to discover the limits of the possible is to go beyond them into the impossible.” How can we ever hope to test the abilities of mankind if we are too afraid to take the necessary steps?

So this leaves us with a few questions. With respect to machine learning, will machines someday equal, or even surpass, human beings? And at what cost? And if machines do someday become as sentient as humans, would that really be a bad thing? These questions are yet to be answered, and in this ever-changing, ever-adapting world, perhaps they never will be, and, some might argue, never need to be. A final quote comes from Edsger W. Dijkstra, who developed the famous shortest-path algorithm that bears his name: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
