Machine Learning

  “A breakthrough in Machine Learning would be worth ten Microsofts.” (Bill Gates)

What is Machine Learning? We keep hearing this term everywhere these days. A popular application of Machine Learning is in the iPhone X, where the front-facing camera and sensors use algorithms to recognize the patterns and expressions of a person’s face and unlock the phone with Face ID. Earlier, we read about Big Data and Hadoop. Now, let us see what Machine Learning is all about.

Machine Learning is an application of Artificial Intelligence (AI) and is creating a revolution in every industry. Essentially, it is a set of algorithms that learns patterns in Big Data (http://techilia.com/2017/12/28/big-data-hadoop/) and then predicts similar patterns in new data that we feed it. In layman’s terms, it’s the theory that machines should be able to learn and adapt through experience to produce reliable, repeatable decisions and results.
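To make that concrete, here is a minimal sketch of the “learn patterns, then predict on new data” loop. It uses Python with scikit-learn, and the toy dataset (study hours and attendance predicting a pass/fail outcome) is invented purely for illustration:

```python
# A minimal sketch of "learn patterns, then predict on new data".
# The library choice (scikit-learn) and the toy data are assumptions
# for illustration, not part of the article.
from sklearn.tree import DecisionTreeClassifier

# Historical data: [hours_studied, classes_attended] -> passed (1) or failed (0)
X_train = [[2, 3], [8, 9], [1, 2], [9, 8], [4, 6], [7, 7]]
y_train = [0, 1, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)             # learn patterns in the data

# New, unseen data: the model predicts based on the patterns it learned
print(model.predict([[3, 2], [8, 8]]))  # e.g. [0 1]
```

Note that we never wrote a rule like “more than five hours means pass”; the model inferred its own rule from the examples.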

Machine Learning was born from pattern recognition and the theory that computers can learn without being explicitly programmed to perform specific tasks. The iterative aspect of Machine Learning is important: as models are exposed to new data, they are able to adapt independently, learning from previous computations to produce reliable, repeatable results.
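This iterative adaptation can also be sketched in code. Below, a model is trained on an initial batch and then updated when new data arrives, using scikit-learn’s SGDClassifier and its partial_fit method; the classifier choice and the numbers are assumptions for illustration:

```python
# A sketch of the iterative aspect: a model that keeps adapting as new
# data arrives, without retraining from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])

# Initial batch of data
X1 = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y1 = np.array([0, 1, 0, 1])
model.partial_fit(X1, y1, classes=classes)  # first round of learning

# Later, new data arrives; the model builds on its previous computations
X2 = np.array([[0.15, 0.25], [0.85, 0.75]])
y2 = np.array([0, 1])
model.partial_fit(X2, y2)                   # adapts to the new examples
```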

It is a subset of Artificial Intelligence and is fundamentally different from much of what we think of as programming. When we think of a computer program, we generally think of a human engineer giving a set of instructions to a computer, telling it how to handle certain inputs that will generate certain outputs. The state maintained by the program changes over time (a web browser keeps track of the pages it’s displaying and responds to user input in a determinate, predictable fashion), but the logic of the program is essentially described by the code written by the human. Machine Learning turns this on its head: the behavior of the program is derived from data rather than spelled out by an engineer. These machine-generated programs (neural networks, Bayesian belief networks, evolutionary algorithms) are nothing like human-generated algorithms. Instead of being programmed, they are “trained” by their designers through an iterative process of providing positive and negative feedback on the results they give. They are difficult to understand, tricky to debug, and harder to control. Yet it is precisely for these reasons that they offer the potential for far more “intelligent” behavior than traditional approaches to algorithms and AI.
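A toy example may help show what “training by feedback” means. The perceptron below is not written with rules for the logical AND function; instead, its weights are repeatedly nudged by positive and negative feedback until its answers are correct (the perceptron, the data, and the learning rate are illustrative choices; real neural networks are far larger but follow the same feedback-driven idea):

```python
# Training data for logical AND: inputs -> expected output
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, initially knowing nothing
b = 0.0          # bias
lr = 0.1         # learning rate: how strongly feedback adjusts the weights

for epoch in range(20):                      # iterate repeatedly over the data
    for (x1, x2), target in data:
        output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        error = target - output              # feedback: positive, negative, or zero
        w[0] += lr * error * x1              # nudge the weights in the direction
        w[1] += lr * error * x2              # that reduces the error
        b += lr * error

print(w, b)  # the learned "program": numbers, not human-written rules
```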

Machine learning algorithms are often categorized as being supervised or unsupervised. Supervised algorithms require humans to provide both input and desired output, in addition to furnishing feedback about the accuracy of predictions during training. Once training is complete, the algorithm will apply what it learned to new data. Unsupervised algorithms do not need to be trained with desired outcome data. Instead, they are given unlabeled data and left to find structure in it on their own, for example by clustering similar records together. Unsupervised learning algorithms are typically used for more complex processing tasks than supervised learning systems.
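The contrast can be sketched in a few lines. In the supervised case we hand the algorithm both inputs and desired outputs; in the unsupervised case we hand it only inputs and let it discover the grouping itself (scikit-learn, the toy points, and the specific algorithms are assumptions for illustration):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Four points: two near (1, 1) and two near (8, 8)
X = [[1.0, 1.1], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]]

# Supervised: we provide the desired outputs (labels) during training
y = [0, 0, 1, 1]
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
print(clf.predict([[1.1, 1.0]]))   # applies what it learned to new data

# Unsupervised: no labels; the algorithm finds the two groups on its own
km = KMeans(n_clusters=2, n_init=10)
km.fit(X)
print(km.labels_)                  # cluster assignments it discovered itself
```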

[Figure: the Machine Learning process]

In the past decade, Machine Learning has given us self-driving cars, speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine Learning is so ubiquitous today that you probably use it dozens of times a day without even knowing it.
