
Source: Siècle Digital as of 28-04-2020

 

A new model demonstrates that artificial intelligence is capable of reasoning about its environment.

Artificial intelligence researchers at DeepMind (a company owned by Alphabet) have just published a new learning framework for AI on arXiv.org. The model, called Approximate Best Response Information State Monte Carlo Tree Search (ABR IS-MCTS), is able to adapt to a player’s strategy.

ABR IS-MCTS: the new AI framework capable of reasoning

A number of games such as chess, Go, and Texas Hold’em have served as practice grounds for ABR IS-MCTS. According to DeepMind CEO Demis Hassabis: “Games are an extremely practical testing ground for developing algorithms that can be transposed into the real world to work on difficult problems.” After DeepMind’s work on a self-taught AI, this new framework shows that artificial intelligence is capable of reasoning about its environment.

It’s a real achievement, and something of a Holy Grail for artificial intelligence researchers, who want to show that while AI can perform automatic tasks such as data entry, smarter forms can also reason and adapt. What DeepMind has just accomplished is also the goal pursued by OpenAI, another pioneer of artificial intelligence, which is notably developing an environment called Neural MMO to train agents in an RPG context.

In November 2019, Go champion Lee Se-dol retired. At the time, he said he could no longer compete with DeepMind’s artificial intelligence, which had become far too strong: “With the progression of artificial intelligence in the game of Go, I realized that I can no longer compete. Even if I continue to progress and become the best player in the world, artificial intelligence is far too strong; it can no longer be beaten.”

AI can calculate the best response

With ABR IS-MCTS, DeepMind adopts a different technique, one that avoids having to work through every decision point. To get around that problem, the researchers chose an approach that analyzes a player in order to adapt to their strategy. By using reinforcement learning (a training technique that encourages agents to reach goals through a reward system), the AI can calculate the best response to play.
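To give a rough sense of that idea, here is a minimal sketch, not DeepMind’s actual ABR IS-MCTS: a toy agent that uses only a reward signal to learn an approximate best response to a fixed opponent strategy. The game (rock-paper-scissors), the opponent’s probabilities, and all parameter values are illustrative assumptions.

```python
# Toy sketch (assumption: NOT DeepMind's ABR IS-MCTS) of learning an
# approximate best response to a fixed opponent strategy from rewards alone.
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

# Hypothetical opponent: a fixed, rock-heavy mixed strategy.
OPPONENT_POLICY = {"rock": 0.5, "paper": 0.3, "scissors": 0.2}

def opponent_move():
    return random.choices(ACTIONS, weights=[OPPONENT_POLICY[a] for a in ACTIONS])[0]

def reward(my_action, their_action):
    if BEATS[my_action] == their_action:
        return 1.0   # win
    if BEATS[their_action] == my_action:
        return -1.0  # loss
    return 0.0       # draw

values = {a: 0.0 for a in ACTIONS}   # running estimate of each action's payoff
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                        # exploration rate

for _ in range(20_000):
    # Epsilon-greedy: mostly exploit the current best guess, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    r = reward(action, opponent_move())
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running average

print(values)
print("approximate best response:", max(values, key=values.get))
```

Against this rock-heavy opponent, the estimated payoffs converge towards roughly +0.3 for paper, -0.1 for rock and -0.2 for scissors, so the agent settles on paper as its best response.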

The play of an artificial intelligence is therefore getting closer and closer to that of human players. By analyzing the actions of its opponent, ABR IS-MCTS is able to assimilate its strategy and thus find a way to counter it. The AI simulates what would happen if a human trained for years to learn an opponent’s game. Its success rate is above 50% for all of the games tested, and above 70% for Go.
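As a loose illustration of this opponent-modelling idea, here is a small sketch, again an assumption rather than the paper’s method: it estimates the opponent’s strategy from the frequencies of observed actions, then picks the action with the highest expected payoff against that estimate. The history and helper names are hypothetical.

```python
# Toy opponent modelling (illustrative assumption, not ABR IS-MCTS itself):
# infer the opponent's mixed strategy from past moves, then counter it.
from collections import Counter

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    if BEATS[mine] == theirs:
        return 1.0
    if BEATS[theirs] == mine:
        return -1.0
    return 0.0

def counter_strategy(observed_moves):
    """Build a frequency model of the opponent and return the best counter action."""
    freq = Counter(observed_moves)
    total = sum(freq.values())
    model = {a: freq.get(a, 0) / total for a in ACTIONS}
    expected = {a: sum(model[o] * payoff(a, o) for o in ACTIONS) for a in ACTIONS}
    return max(expected, key=expected.get)

history = ["rock", "rock", "scissors", "rock", "paper", "rock"]  # hypothetical observations
print(counter_strategy(history))  # -> "paper", countering the rock-heavy opponent
```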
