A question on many people's minds these days is: when did deep learning take off? It is an interesting question, because even though neural networks have moved to the forefront of artificial intelligence, many people still do not understand what they actually do.
But the question also matters, because the technology is much more than a way to make computers understand human language. Consider, for instance, that a car able to recognize a person holding a stop sign was out of reach not long ago.
Part of the answer lies in one of deep learning's most famous publications. In June 2012, Google Brain published the results of a "cat experiment" that struck a comical chord and went viral. The network learned to respond to images of cats without ever being told what a cat was, and the approach was soon applied far beyond cat pictures, showing that deep learning could do a great deal more. It has since become one of the most popular ways to train machines.
Geoffrey Hinton helped popularize the term "deep learning" and described new algorithms for tasks such as computer vision. In 1997, IBM's chess computer Deep Blue beat the world champion, though it relied on brute-force search rather than machine learning. More recently, Facebook's DeepFace algorithm has been able to recognize individual faces in photographs. These advances made the technology viable in a wide range of fields. But even today, these systems have a long way to go.
While deep learning is still in its infancy, it is already beginning to transform society, and that transformation will continue in the coming decades. Self-driving cars and digital assistants are being tested worldwide, and these AIs can recognize people and identify their faces. Forecasting models can help predict a hurricane's path and advise evacuation in advance. Deep learning applications in medicine promise to save lives by helping doctors identify early signs of cancer, and elsewhere the technology is being used to forecast traffic congestion.
Google uses deep learning to deliver solutions to its users. AlphaGo, a program from its DeepMind subsidiary, beat reigning champions in the game of Go. Google also uses deep learning to create synthesized speech that sounds more natural than earlier systems, and applies it in products and research projects such as Google Translate and PlaNet. Its deep learning framework, TensorFlow, has been used to build a wide variety of artificial intelligence applications.
While computer scientists have been experimenting with neural networks since the 1950s, it was not until much later that deep learning took off. Two developments made it possible to build networks with many layers and improve on traditional approaches: greater computing power and the availability of large datasets. With the rise of AI applications, fields such as computer vision and bioinformatics can now perform tasks that once seemed impossible. Systems once confined to science-fiction films may come true before your very eyes.
The first examples of machine learning software were created decades ago. They relied on handcrafted features extracted from data such as graphs and images, often fed into a simple model such as linear regression. As a result, they could not learn as well as today's more complex systems. Some of these models were based on low-cost artificial neural networks, and despite researchers' limited understanding of how the brain is actually wired, they were a popular choice at the time.
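The early recipe described above can be sketched in a few lines: handcrafted features computed from raw input, fed into an ordinary least-squares linear model. The feature choices and data here are illustrative, not from any particular historical system.

```python
import numpy as np

def extract_features(x):
    """Handcrafted features: a bias term, the raw value, and its square."""
    return np.stack([np.ones_like(x), x, x ** 2], axis=1)

# Toy data generated from a known rule: y = 1 + 2x + 3x^2.
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x ** 2

# The "simple computer model" of the era: linear regression by least squares.
X = extract_features(x)
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(weights, 2))  # → [1. 2. 3.]
```

The model works only because a human chose the right features in advance; if `x ** 2` were missing, no amount of data would let it fit the curve. That limitation is exactly what later layered networks, which learn their own features, were designed to overcome.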
In 2012, researchers at Google Brain published results from the cat experiment, which involved a neural network spread across more than a thousand computers. In the experiment, researchers took ten million unlabeled images from YouTube videos and left the training software to run unsupervised. In the end, one neuron in the highest layer responded strongly to images of cats. While the network was not as deep as today's models, this was a milestone in the history of neural networks.
The technology that drives deep learning is similar to the way a toddler learns to identify a dog: with enough repetition, the toddler comes to associate pictures of dogs with the word "dog." Deep learning follows the same process. A model begins with a first layer of simple features, and successive layers build progressively more complex feature sets on top of them. On narrow tasks of this kind, a trained network can sometimes match or even exceed human accuracy.
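The layered idea above can be shown with a minimal sketch: a tiny two-layer network trained on XOR, a pattern that no single linear layer can capture. The architecture, initialization, and learning rate are illustrative choices, not from the original systems the article discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)        # layer 1: simple learned features
    return h, sigmoid(h @ W2 + b2)  # layer 2: combinations of those features

# Random starting weights: 2 inputs -> 4 hidden features -> 1 output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

_, p0 = forward(W1, b1, W2, b2)
initial_loss = np.mean((p0 - y) ** 2)

lr = 0.5
for _ in range(5000):
    h, p = forward(W1, b1, W2, b2)
    d2 = (p - y) * p * (1 - p)      # output-layer error signal
    d1 = (d2 @ W2.T) * h * (1 - h)  # error propagated back to layer 1
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

_, p = forward(W1, b1, W2, b2)
final_loss = np.mean((p - y) ** 2)
print(final_loss < initial_loss)  # training reduced the error
```

The repetition loop mirrors the toddler analogy: the same four examples are shown over and over, and each pass nudges the weights so the hidden layer's features become more useful to the layer above it.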