A new computational method for faster development of AI systems


Artificial intelligence (AI) systems, which rely on functions inspired by the brain, are currently the focus of intense interest. They are able to “learn” by ingesting large amounts of data. To some scientists, these new kinds of “students” are not yet quick enough: a team of researchers therefore recently conducted experiments aimed at reducing the training time needed to develop an AI.

The researchers have published an initial version of their paper on arXiv, pending peer review: “We would be delighted to see these results confirmed and studied further by the research community,” they say. In this paper, they report having found a method that can significantly reduce the training time of AI.

As a reminder, artificial intelligence is, according to the Council of Europe, “a young discipline of about sixty years, which brings together sciences, theories, and techniques (notably mathematical logic, statistics, probability, computational neuroscience, and computer science) whose goal is to have a machine imitate the cognitive abilities of a human being.”

Human-level artificial intelligence, a staple of science fiction, fuels many fantasies. Reality, however, is still far from producing machines like those we see in the movies. For this reason, specialists often prefer the exact name of the technology actually at work over the umbrella term: in many current cases, that means “machine learning”.

In this case, “artificial intelligence” refers to a system that is “fed” large amounts of data in order to “learn” and extract logical connections toward a particular goal. This can be, for example, learning to recognize faces or text, or even generating realistic landscapes from words. These learning methods were originally inspired by the workings of biological neurons, though they have since moved closer to statistical methods; hence the term “artificial neural network”.


The transmitted data thus circulates through a network of artificial “neurons”, which is generally virtual: nodes linked to one another by computer code. This network receives the input and training data and transmits the output.
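
To make this concrete, here is a minimal sketch of such a network in Python, using the JAX library; the layer sizes, weights, and input are purely illustrative and are not taken from the study.

```python
import jax
import jax.numpy as jnp

# A minimal two-layer artificial neural network: the "neurons" are just
# numbers, and the links between them are the weight matrices W1 and W2.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
W1 = jax.random.normal(k1, (4, 8))  # links from 4 input nodes to 8 hidden nodes
W2 = jax.random.normal(k2, (8, 2))  # links from 8 hidden nodes to 2 output nodes

def forward(x):
    # Incoming information flows through the network layer by layer.
    h = jnp.tanh(x @ W1)  # hidden-layer activations
    return h @ W2         # outgoing information

x = jnp.ones(4)     # one example of incoming data
print(forward(x))   # the network's output for that input
```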

Halving the computation time

The scientists in this study focused on the next step. In current methods, the data first passes through the network; a second step then evaluates the quality of the output in order to compute the gradient, which indicates how the weights in the calculations should be adjusted to improve the AI. But computing this gradient requires going back “in the opposite direction”, traversing the entire chain of neurons in reverse after the first pass of the data: this process is called “backpropagation”. Repeated over and over, the whole procedure is very time-consuming; it can take several months to end up with a “fully trained” AI.
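
As a rough illustration of this two-step loop (the loss function, data, and learning rate below are hypothetical, not the study's): in JAX, reverse-mode differentiation via jax.grad performs exactly this backward traversal of the chain of operations.

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    W1, W2 = params
    h = jnp.tanh(x @ W1)              # step 1: forward pass through the network
    pred = h @ W2
    return jnp.mean((pred - y) ** 2)  # evaluate the quality of the output

def train_step(params, x, y, lr=0.01):
    # Step 2: jax.grad walks the whole chain of operations in reverse
    # (backpropagation) to obtain the gradient of the loss.
    grads = jax.grad(loss)(params, x, y)
    # Adjust the weights in the direction indicated by the gradient.
    return [p - lr * g for p, g in zip(params, grads)]

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = [jax.random.normal(k1, (4, 8)), jax.random.normal(k2, (8, 2))]
x, y = jnp.ones((16, 4)), jnp.zeros((16, 2))
params = train_step(params, x, y)  # one forward-plus-backward iteration
```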

To reduce this training time, the team sought to combine these two steps into one, using what they call a “forward gradient”. The idea is to pass the data through the neural network only once and compute an approximation of the gradient directly from that pass. Since the network no longer has to be traversed in reverse, this naturally reduces computation time.
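
The forward gradient can be sketched as follows (a reconstruction based on the method as described, with a toy function standing in for a real network): sample a random direction v, compute the directional derivative of the function along v during the forward pass itself (a Jacobian-vector product), and scale v by it. The result is an unbiased estimate of the true gradient, obtained without any backward traversal.

```python
import jax
import jax.numpy as jnp

def forward_gradient(f, theta, key):
    # Sample a random perturbation direction v, the same shape as theta.
    v = jax.random.normal(key, theta.shape)
    # jax.jvp evaluates f(theta) AND the directional derivative
    # (grad f . v) in a single forward pass -- no backward pass needed.
    value, directional = jax.jvp(f, (theta,), (v,))
    # (grad f . v) * v is an unbiased estimate of grad f(theta).
    return value, directional * v

# Toy check on a function whose gradient is known: f(x) = sum(x^2),
# so grad f(x) = 2x. Averaging the estimates over many random
# directions converges to that true gradient.
f = lambda x: jnp.sum(x ** 2)
x = jnp.array([1.0, 2.0, 3.0])
value, g = forward_gradient(f, x, jax.random.PRNGKey(42))
print(value, g)
```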

Their first calculations in this direction strike them as rather encouraging: “From the point of view of automatic differentiation applied to machine learning, the ‘holy grail’ is whether the practical benefits of gradient descent can be achieved using only the forward gradient, thus eliminating the need for backpropagation. This could change the computational complexity of typical machine learning training pipelines, reduce training time and energy costs, influence the design of hardware for machine learning, and even have implications for the biological plausibility of backpropagation.”
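
Combined with the earlier sketch, a hypothetical training step that substitutes this forward-gradient estimate for the backpropagated gradient could look like this (again illustrative, not the authors' code):

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    W1, W2 = params
    h = jnp.tanh(x @ W1)
    return jnp.mean((h @ W2 - y) ** 2)

def forward_train_step(params, x, y, key, lr=0.01):
    f = lambda p: loss(p, x, y)
    # One random tangent direction per weight matrix.
    keys = jax.random.split(key, len(params))
    v = [jax.random.normal(k, p.shape) for k, p in zip(keys, params)]
    # A single forward pass yields the loss and its directional derivative.
    _, directional = jax.jvp(f, (params,), (v,))
    # Descend along v, scaled by the directional derivative.
    return [p - lr * directional * vi for p, vi in zip(params, v)]
```

Because each step now costs a single forward pass instead of a forward-plus-backward pair, this is where the hoped-for reduction in computation time would come from.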


At best, the scientists say, and depending on the application, computing time could be cut in half, though many more tests are still needed to confirm this. The scientists also hope that their research will contribute to a better understanding of how the human brain works in certain specific respects: “In the long term, we want to see whether the forward gradient algorithm can contribute to the mathematical understanding of biological learning mechanisms in the brain. Indeed, backpropagation has historically been considered biologically implausible, since it requires precise reverse connectivity.”

Source: arXiv
