Is Gal Braun the Future of Deep Learning?


Tech • Information Technology Society • Education

Eps 2: Is Gal Braun the Future of Deep Learning?


The Future of Deep Learning Is Unsupervised, AI Pioneers Say
But for AI to reach new heights, the technology must figure out how to learn on its own, according to three AI pioneers who spoke Sunday at a technology conference.

Seed data: Link 1, Link 2, Link 3
Host image: StyleGAN neural net
Content creation: GPT-2, transformers, CTRL


Dianne Douglas


Podcast Content
Deep learning is a class of machine-learning algorithms inspired by biological nervous systems. In this episode I will give a short introduction to deep neural networks (DNNs) that motivates and introduces the SIDL framework.
In 2012, Google demonstrated that deep-learning networks could pick out cat faces and other objects from a vast pile of unlabeled images. In recent years, fields such as computer vision, machine learning, artificial intelligence, and artificial neural networks have found their way into the mainstream, reflecting intensive research and development in many areas directly related to artificial-intelligence (AI) applications. Yet DNNs were long frowned upon by a certain segment of AI scientists.
The technique is impressive and has produced extremely interesting and useful results, but so far unsupervised learning has achieved only modest success.
The gap between the two techniques comes down to a major problem that has proved confounding. To meet this challenge, researchers have combined deep-learning techniques with a number of other methods.
A suitable example is an image-classification scenario in which a saliency map (SM) is used to find the parts of an image that matter most to a particular classifier (e.g., one that recognizes human faces). The SM highlights the regions of the input that the classifier looks at to make its prediction.
The idea behind SM is to probe what a neural network locally depends on, i.e., to identify which parts of the input the model uses to generate its output. The result can expose a model that memorizes training data instead of learning general concepts from it.
As a result, there is a risk that the network focuses on incidental characteristics (such as rounded shapes) that describe the training data accurately but are not the details needed to distinguish new examples.
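The saliency idea can be sketched in a few lines. This is a minimal illustration, not any particular library's API: it assumes a toy logistic-regression "classifier," and the saliency of each input feature is simply the magnitude of the gradient of the classifier's score with respect to that feature. The weights and input values below are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, b, x):
    # classifier score: sigmoid(w . x + b); its gradient with respect
    # to the input shows which features the model actually relies on
    z = float(w @ x + b)
    s = sigmoid(z)
    return np.abs(s * (1.0 - s) * w)   # |d score / d x_i|

# toy "image" with 6 features; the model's weight is concentrated on feature 2
w = np.array([0.1, 0.0, 2.0, 0.3, -0.2, 0.05])
x = np.ones(6)
sal = saliency(w, 0.0, x)
print(int(np.argmax(sal)))   # the feature driving the decision
```

If the most salient feature turned out to be something incidental (the "rounded shapes" above), that would be evidence of memorization rather than generalization.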
For example, to train a neural network to identify images of apples and oranges, it must be fed images labeled as such. The idea is to prepare the machine to discover what the images labeled "apples" and "oranges" have in common, so that it can eventually use the recognized patterns to predict more accurately what it sees in new images. Fine-grained details may describe the training images of apples very well, yet prove inconsequential or even misleading when the network tries to detect new, unseen apples at test time.
In practice, practice makes almost perfect: the accuracy of the predictions is refined as the network sees more labeled images. The more data available, the better it can refine its predictions, even in the face of new, unseen images.
The approach is similar to how we identify objects in photographs, videos, graphics, and handwriting, but with different tools.
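The apples-and-oranges story is ordinary supervised learning. Here is a minimal sketch under stated assumptions: two made-up feature clusters stand in for the two fruit classes, and gradient descent on a logistic-regression model stands in for training a network; none of the names or numbers come from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
# labeled toy data: "apples" cluster near (1, 1), "oranges" near (-1, -1)
X = np.vstack([rng.normal( 1.0, 0.3, (50, 2)),
               rng.normal(-1.0, 0.3, (50, 2))])
y = np.array([1] * 50 + [0] * 50)

w, b = np.zeros(2), 0.0
for _ in range(200):                       # gradient descent on the log loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
acc = (pred == y).mean()
print(acc)
```

The accuracy improves with each pass over the labeled examples, which is exactly the "more labeled data, better predictions" dynamic described above.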
We've talked about seven research directions that could displace today's mainstream approaches. Future ML technologies, including DL, will need to demonstrate learning from limited training data and to place that learning in a context of continuous learning and adaptation in order to remain useful. The ubiquity of DL in NLP and computer-vision applications shows that deep learning is no longer just the next big thing in machine learning; it is already mainstream.
The hope is that unsupervised learning can catch up with the accuracy and effectiveness of supervised learning over time. As I argued in my article in Information Age, "Over time and through research opportunities, a method of unsupervised learning could provide a model that accurately mimics human behavior."
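As a small taste of what learning without labels looks like, here is a sketch of a linear autoencoder (equivalently, PCA via the SVD) that discovers the dominant direction in unlabeled data on its own; the data, dimensions, and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# unlabeled data lying near a single direction in 3-D space
t = rng.normal(size=(200, 1))
X = t @ np.array([[2.0, -1.0, 0.5]]) + rng.normal(0.0, 0.05, (200, 3))

# no labels anywhere: the structure must be discovered from the data alone
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
direction = Vt[0]                  # dominant direction, found unsupervised
codes = Xc @ direction             # 1-D compressed representation
recon = np.outer(codes, direction) + X.mean(axis=0)
mse = float(np.mean((X - recon) ** 2))
print(mse)
```

The reconstruction error is small because the model has captured the data's underlying structure, with no "apple"/"orange" labels in sight.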
Even where a person makes the final decision, AI tools should provide detailed reasons, so that people can recognize when the AI is wrong, reverse its decision, and justify doing so. While some DL models can be described and simplified in words, pages of DL variables are unacceptable to judges and users alike, and most DL algorithms admit no such summary at all.
One of the most limiting features of DL-based solutions is that the learning algorithm still cannot provide a clear explanation for its decisions, which can lead users either to accept the AI tool's answer blindly or to concoct false explanations for rejecting it. No one yet knows how to modify DL so that it gives an ordinary person an understandable explanation, so the two goals cannot currently be reconciled.
With the advancing automation of deep-learning tools, there is a danger that the technology becomes something the average developer knows nothing about. Predictions for the future of deep learning therefore emphasize democratizing DL: reusable DL components, built into standard DL libraries, would carry over the trained features of earlier models to speed up learning on new tasks.
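The "reusable components" idea is essentially transfer learning: reuse the features of a previously trained model and train only a small new head on top. A minimal sketch, with a made-up frozen feature extractor (a fixed random ReLU projection standing in for a real pretrained network) and an invented toy task:

```python
import numpy as np

rng = np.random.default_rng(2)

# "pretrained" feature extractor: its weights are reused as-is, never retrained
W_pre = rng.normal(size=(2, 8))
def features(X):
    return np.maximum(X @ W_pre, 0.0)   # frozen ReLU features

# new task with only 80 labeled examples: train just a small head on top
X = np.vstack([rng.normal( 1.0, 0.3, (40, 2)),
               rng.normal(-1.0, 0.3, (40, 2))])
y = np.array([1] * 40 + [0] * 40)
F = features(X)

w, b = np.zeros(8), 0.0
for _ in range(300):                    # only w and b are updated, never W_pre
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= 0.5 * (F.T @ g) / len(y)
    b -= 0.5 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(int) == y).mean()
print(acc)
```

Because the feature extractor is inherited rather than relearned, the new task needs far less data and compute, which is precisely the speed-up the reusable-component vision promises.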