Artificial Intelligence, Machine Learning, Deep Learning.
“Artificial Intelligence”, “Machine Learning”, and “Deep Learning” are some of the hottest buzzwords in the technological wave sweeping across the world today, and they are often used interchangeably. For example, when AlphaGo, the narrow AI program from Alphabet Inc.’s Google DeepMind, defeated the South Korean master Lee Se-dol at the board game Go in March 2016, all three buzzwords were used to describe how AlphaGo won. Each is accurate; however, the three are not quite the same thing.
The difference is easiest to understand as concentric circles: Artificial Intelligence is the outermost circle, Machine Learning sits inside it, and Deep Learning, a subset of Machine Learning, is the innermost.
Artificial Intelligence makes our machines smarter. We are training our computers to understand our needs, to make suggestions, and to assist us in making the right decisions. In other words, we are acquiring the power of gods! No, not through magic potions or wands, but through computers built by the human mind. There is a fear that computers will overtake humans within the next few years. Many feel that humans, characterized by slow biological evolution, will soon be superseded by Artificial Intelligence. Stephen Hawking warned us that “The development of full Artificial Intelligence could spell the end of the human race.” But here’s the thing: is full Artificial Intelligence really just around the corner?
Far from it, actually! In fact, Deep Neural Networks (DNNs) take a lot of time and a ton of memory to train. So why consider DNNs as a candidate in the first place? There are several reasons they are so popular now. The four major ones are:
1) Real data sets - Availability of large, realistic data sets such as ImageNet, which pairs a hierarchy of word categories with related images.
2) No need for feature extraction - Traditional ML algorithms require the cumbersome process of hand-engineering features with non-ML methods (HOG, SIFT); a deep network is instead fed the training data directly, with only a little pre-processing.
3) Back propagation - A beautiful algorithm that makes much of the ‘learning’ in a (supervised) NN possible: it propagates the error backwards through the network and updates the weights to minimize it (a minimal sketch follows this list).
4) GPU - Hardware that breathes parallelism is the reason DNNs with huge training sets can be trained in a feasible amount of time (see the short snippet after the sketch below).
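To make point 3 concrete, here is a minimal sketch of back propagation for a tiny two-layer network trained on raw inputs (which also illustrates point 2: no hand-engineered features). The network size, the XOR toy data, and the learning rate are all hypothetical choices for illustration, not a recipe.

```python
import numpy as np

# Toy data: four raw input vectors and binary targets (XOR); no feature extraction.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate (an arbitrary choice for this sketch)
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1)      # hidden activations
    out = sigmoid(h @ W2)    # network output

    # Backward pass: propagate the error back through the layers.
    err = out - y                          # derivative of squared error w.r.t. output
    d_out = err * out * (1 - out)          # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden pre-activation

    # Update the weights to reduce the error.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

# After training, the outputs should approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

The forward pass computes predictions; the backward pass applies the chain rule layer by layer, which is exactly the “propagate the error backwards and update the weights” idea described above.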
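And for point 4, a brief sketch of why GPUs matter: DNN training is dominated by large matrix multiplications like the ones above, which a GPU executes in parallel across thousands of cores. Assuming PyTorch is installed, moving the computation to a GPU is one line; the matrix sizes here are arbitrary.

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A large matrix multiply, the core operation of DNN training.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # runs in parallel across the GPU's cores when one is available
print(device, c.shape)
```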