What is the difference between AI/machine learning/deep learning?

AI (Artificial Intelligence) is the future; it is science fiction; and it is already part of our daily lives. All of those statements are true; it just depends on which kind of AI you are talking about.

For example, when Google DeepMind's AlphaGo program defeated the Korean professional Go master Lee Se-dol, the media used the terms AI, machine learning, and deep learning interchangeably to describe DeepMind's victory. All three technologies contributed greatly to AlphaGo's win, but they are not the same thing.

The most intuitive way to picture their relationship is as concentric circles. AI, the idea that came first, is the largest circle; machine learning, which blossomed later, sits inside it; and deep learning, which is driving today's AI explosion, fits inside both.


From bust to boom

In 1956, at the Dartmouth Conference, computer scientists first proposed the term "AI", and the field was born. In the years that followed, AI became the "fantasy object" of research laboratories, and people's views of it kept shifting. Sometimes AI was hailed as the key to the future of human civilization; sometimes it was dismissed as technological junk, a rash and over-ambitious concept destined to fail. Frankly, until 2012 AI still carried a bit of both reputations.

Over the past few years, AI has exploded, developing especially rapidly since 2015. This rapid progress is mainly due to the wide availability of GPUs, which make parallel processing faster, cheaper, and more powerful. Another driver is the combination of practically unlimited storage and data being generated at massive scale: images, text, transactions, mapping data, and more.

AI: making machines exhibit human intelligence

Back in that summer of 1956, the dream of the AI pioneers at the conference was to build complex machines, enabled by the computers that had only just appeared, that exhibited the characteristics of human intelligence.

This concept is what we call "General AI": a marvelous machine that has all of human perception, perhaps even more than human perception, and can think just like we do. We see such machines endlessly in movies, such as C-3PO and the Terminator.

The other concept is "Narrow AI". Simply put, weak artificial intelligence can perform specific tasks as well as humans, and possibly better. For example, Pinterest uses AI to classify images, and Facebook uses AI to recognize faces; both are instances of weak artificial intelligence.

The examples above show weak artificial intelligence in actual use, and they already exhibit some facets of human intelligence. How is this achieved? Where does the intelligence come from? Digging into that question brings us to the next circle: machine learning.

Machine learning: a path to the AI goal

At its most basic, machine learning uses algorithms to parse data, learn from it continuously, and then make judgments and predictions about the world. Rather than hand-writing software with a special instruction set to carry out a particular task, researchers "train" the machine with large amounts of data and algorithms, letting it learn for itself how to perform the task.
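This "train with data instead of hand-coding rules" idea can be sketched with a tiny nearest-neighbor classifier. The feature vectors and labels below are invented purely for illustration; the point is that no classification rules are written by hand, and the "knowledge" lives entirely in the examples:

```python
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    features, label = min(train, key=lambda fl: math.dist(fl[0], query))
    return label

# Toy training data: (feature vector, label). Purely illustrative.
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.5, 4.8), "dog"),
]

print(nearest_neighbor(train, (1.1, 1.0)))  # a cat-like query
print(nearest_neighbor(train, (5.2, 5.1)))  # a dog-like query
```

Adding more labeled examples changes the classifier's behavior without changing a single line of code, which is the essential contrast with hand-written rules.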


The concept of machine learning was proposed by early AI researchers, and over the years many algorithmic approaches have emerged, including decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks. As we know, none of them achieved the ultimate goal of "strong artificial intelligence", and with early machine learning methods we could not even reach the goal of "weak artificial intelligence".

For many years, the best application area for machine learning has been computer vision, though it still required a great deal of hand-written code to get the job done. Researchers would hand-write classifiers, such as edge detection filters so the program could determine where an object starts and ends, shape detection to determine whether the object has eight sides, and a classifier to recognize the characters "S-T-O-P". From all of those hand-written classifiers, researchers could develop algorithms to make sense of the image and learn to judge whether or not it was a stop sign.

This approach worked, but not well. On a foggy day, when the sign is less visible, or when a tree obscures part of it, recognition accuracy drops. Until quite recently, computer vision and image detection technology lagged far behind human capability because it was too error-prone.

Deep learning: a technique for realizing machine learning

"Artificial Neural Networks" is another algorithmic method that was proposed by early machine learning experts and has existed for decades. The idea behind Neural Networks stems from our understanding of the human brain—the connections between neurons. There are also differences between the two. The neurons of the human brain are connected by a specific physical distance. The artificial neural network has independent layers, connections, and data propagation directions.

For example, you might take an image, cut it into many tiles, and feed them into the first layer of the neural network. The individual neurons of the first layer pass the data on to the second layer; the second layer's neurons carry out their own task, and so on, until the last layer produces the final output.
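The layer-by-layer flow described above can be sketched in a few lines of Python. The layer sizes, weights, and inputs below are arbitrary illustrative values, not trained ones:

```python
def dense_layer(inputs, weights, biases):
    """One fully connected layer: each output neuron computes a weighted
    sum of all inputs plus a bias, passed through a ReLU activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        s = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, s))  # ReLU: negative sums become 0
    return outputs

def forward(x, layers):
    """Pass the input through each layer in turn; the output of one
    layer becomes the input of the next."""
    for weights, biases in layers:
        x = dense_layer(x, weights, biases)
    return x

# Two tiny layers: 3 inputs -> 2 hidden neurons -> 1 output neuron.
layers = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
print(forward([1.0, 2.0, 3.0], layers))
```

The key structural point from the text is visible in `forward`: data moves in one direction through discrete layers, with each layer's output feeding the next.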

Each neuron assigns a weight to its input: a measure of how correct or incorrect it is relative to the task being performed. The final output is then determined by the totality of those weights. Take the stop sign as an example: the image of the sign is cut up, and the neurons examine its attributes, such as its octagonal shape, its red color, its distinctive lettering, its traffic-sign size, and whether or not it is moving.

The neural network's task is to reach a conclusion: is this a stop sign or not? It produces a "probability vector", an educated guess based on the weights. In this example, the system might be 86% confident the image is a stop sign, 7% confident it is a speed limit sign, 5% confident it is a kite stuck in a tree, and so on. The network architecture then tells the neural network whether its judgment was correct.
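The "probability vector" idea can be illustrated with the softmax function, a standard way to turn a network's raw output scores into confidences that sum to 1. The class names and scores here are made up to mirror the stop sign example:

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["stop sign", "speed limit sign", "kite in a tree", "other"]
scores = [4.0, 1.5, 1.2, 0.5]  # hypothetical raw outputs of the final layer

for name, p in zip(classes, softmax(scores)):
    print(f"{name}: {p:.0%}")
```

The highest-scoring class gets the largest probability, but the network never outputs a bare yes/no; it always reports graded confidence across all the classes, just as in the 86%/7%/5% example above.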

Even something this simple is quite advanced; until not long ago, the AI research community shunned neural networks. Neural networks existed in the earliest days of AI but had produced very little "intelligence". The problem was that even a basic neural network carries a heavy computational load, so it could not be a practical method. Even so, a handful of research teams pressed ahead, such as the one led by Geoffrey Hinton at the University of Toronto, parallelizing the algorithms on supercomputers to validate the concept, until GPUs were widely adopted and the promise became real.

Returning to the stop sign example: if we train the network, feeding it lots and lots of wrong answers and adjusting the network each time, the results get better. What researchers need to do is train it. They gather tens of thousands, even millions, of images, until the weights of the artificial neurons are tuned so precisely that the judgment is right virtually every time: fog or no fog, sun or rain, it makes no difference. At that point, the neural network has "taught" itself what a stop sign looks like. The same approach lets it recognize faces in Facebook images, or recognize cats, which is exactly what Andrew Ng did at Google in 2012: he had a neural network learn to recognize cats.
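The adjust-the-weights-when-wrong idea can be sketched with a perceptron, the simplest trainable neuron. The two features here (has eight sides, is red) are invented stand-ins for stop-sign attributes, not anything from a real vision system:

```python
def predict(weights, bias, x):
    """1 = "stop sign", 0 = "not a stop sign"."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(data, epochs=20, lr=0.1):
    """Perceptron training: show each example; when the prediction is
    wrong, nudge the weights toward the correct answer."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            error = label - predict(weights, bias, x)  # 0 when correct
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Features: (has eight sides, is red) -> is it a stop sign?
data = [
    ((1, 1), 1),  # octagonal and red: stop sign
    ((0, 1), 0),  # red but not octagonal
    ((1, 0), 0),  # octagonal but not red
    ((0, 0), 0),
]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])
```

Real deep networks use millions of weights and gradient-based updates rather than this single-neuron rule, but the loop is the same in spirit: predict, compare with the right answer, adjust, repeat.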

Ng's breakthrough was to make the neural networks enormous, increasing the number of layers and neurons, and then run massive amounts of data through the system to train it. His project pulled images from 10 million YouTube videos; he truly put the "deep" in deep learning.

Today, in some scenarios, machines trained with deep learning are better than humans at recognizing images: identifying cats, identifying indicators of cancer cells in blood, identifying tumors in MRI scans. Google's AlphaGo learned Go by playing against itself again and again and learning from the games.

Thanks to deep learning, AI has a bright future

Deep learning has given machine learning many practical applications and has extended the overall reach of AI. Deep learning breaks tasks down in ways that make all kinds of machine assistance possible. Driverless cars, better preventive healthcare, and better movie recommendations have all either arrived or are on the horizon. AI is the present as well as the future. With deep learning's help, AI may one day even reach the level that science fiction describes, the level we have long been waiting for. You will have your own C-3PO, and your own Terminator.
