
Source: ZDNet, 18 May 2020

Technology: The Covid-19 pandemic has derailed many AI algorithms, with consequences that raise questions about the autonomy of intelligent machines.

When artificial intelligence and machine learning work, the results can be spectacular. But when they stop working, they can fail just as dramatically. That is one of the lessons of the current health crisis, as reported by the MIT Technology Review. The author, Will Douglas Heaven, explains that unusual consumer shopping behaviour has caused “problems for algorithms involved in inventory management, fraud detection, marketing and more. Machine learning models, trained on ‘normal’ human behaviour, now find that normality has changed, and some are no longer functioning as they should.”

Machine learning-based models “are designed to respond to changes,” he says. “But most are also fragile; they malfunction when the input data differs too much from the data on which they were trained. It is a mistake to think that you can set up an AI system and then leave it to work on its own.”
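To illustrate the kind of fragility Heaven describes, here is a minimal Python sketch of how a team might monitor for input drift, comparing live data against the training baseline and escalating to a human when the distributions diverge. The feature values, threshold and use of a Kolmogorov-Smirnov test are illustrative assumptions, not details from the article.

```python
# Minimal sketch of input-drift monitoring: compare the distribution of a live
# feature against the training baseline and flag the model for human review
# when they diverge. Values and thresholds are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, live_values, p_threshold=0.01):
    """Return (drifted, statistic): drifted is True when the live data
    differs significantly from the training data."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold, statistic

# Hypothetical example: daily order volumes a demand-forecasting model saw in training
rng = np.random.default_rng(0)
train = rng.normal(loc=100, scale=10, size=1000)   # "normal" pre-pandemic behaviour
live = rng.normal(loc=160, scale=35, size=200)     # shifted pandemic-era behaviour

drifted, score = drift_alert(train, live)
if drifted:
    print(f"Input drift detected (KS statistic {score:.2f}): escalate to a human reviewer")
```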

It then becomes clear that we are still a long way from fully self-managed solutions, if such solutions are even possible. If the current situation tells us anything, it is that human knowledge will always be an essential part of any equation that includes AI and machine learning.

Artificial intelligence can also be wrong

In recent months, I have explored the potential scope of AI and machine learning with industry leaders, and the role that humans need to play. Much of what I heard foreshadowed the upheaval of the pandemic. “There is always the risk that the AI system will make bad assumptions, reducing performance or data availability,” says Jason Phippen, global product and solutions marketing manager at SUSE. “It is also possible that the data is combined badly, or learned from badly, leading to poor business or operational decisions. The worst-case scenario would probably be one where a system operating on its own moves data to cold storage, resulting in loss of life or limb.”

AI and machine learning cannot simply be bolted onto an existing infrastructure or set of processes. Chris Bergh, CEO of DataKitchen, warns that existing systems need to be adapted and adjusted. “In a traditional architecture, an AI and machine learning system consumes data from data environments to meet its data needs,” he says. “We need to change this architecture slightly by letting the AI manage the data environment. This transition must be smooth, to avoid catastrophic failures in existing systems and to put robust systems in place.”

AI and machine learning systems “developed to manage data environments must be considered critical systems, and development must be done with great care,” Bergh says. “Because data is the driving force behind current business decisions, data environments will be at the heart of the business. Therefore, even a slight failure in data management will result in a significant cost to the company due to the loss of operational time, other resources and user confidence.”

Another problem highlighted by Chris Bergh is that data experts often have gaps in their knowledge of AI and machine learning, while, conversely, AI and machine learning specialists have little knowledge of data management.

Self-managed data systems … a distant fantasy

The bottom line is that qualified people will always be needed to manage the flow, and ensure the quality, of the data injected into AI and machine learning systems. The mechanics of data management can be made autonomous, but the context of the data requires human involvement.
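As a concrete illustration of what ensuring the quality of injected data can look like in practice, here is a minimal Python sketch of a data-quality gate placed in front of a model. The column names, rules and pandas-based batch format are illustrative assumptions, not anything described in the article; the point is that checks are codified, but a failed check routes the batch to a person rather than to the model.

```python
# Minimal sketch of a data-quality gate in a pipeline feeding an ML system.
# Column names and rules are hypothetical; a failed check escalates the batch
# to a human reviewer instead of scoring it automatically.
import pandas as pd

RULES = {
    "order_value": lambda s: s.between(0, 10_000).all(),   # values in a plausible range
    "customer_id": lambda s: s.notna().all(),               # no missing keys
}

def quality_gate(batch: pd.DataFrame) -> list[str]:
    """Return the list of rules the batch violates (empty means safe to score)."""
    return [name for name, check in RULES.items()
            if name not in batch.columns or not check(batch[name])]

batch = pd.DataFrame({"order_value": [120.0, -5.0], "customer_id": ["a1", None]})
violations = quality_gate(batch)
if violations:
    print("Hold batch for human review:", violations)   # escalate instead of scoring
else:
    print("Batch passed checks; forward to the model")
```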

“We can look at examples such as self-driving cars, or the energy optimization of data centers using Google’s DeepMind, and have some confidence that similar opportunities exist in database management,” says Erik Brown, a chief technology officer at West Monroe Partners. “However, fully autonomous databases seem to belong to a more distant future, and human involvement should become more strategic, focusing on the areas where humans are better equipped.”

Fully autonomous data environments “will probably take many years to achieve,” agrees Jeremy Wortz, chief architect in West Monroe’s technology design office. “Machine learning is far from solving vast and complex problems. However, an approach that develops narrow, deep use cases will make the difference over time and begin the journey towards a self-managed system. Most organizations can take this approach, but they will need to make sure they have a way to identify those narrow use cases, along with the right technology and talent to carry them out.”

A human workforce is needed, but not necessarily an expert one

The more organizations depend on AI, the more humans will have to get involved, supervising the data that moves through these systems as well as the information they produce. 80% or more of AI and machine learning efforts “are often related to data search, translation, validation and preparation for complex models,” says Brown. “As these models inform more critical commercial use cases – fraud detection, patient life cycle management – there will be ever greater demands on the custodians of this data.”

Few data environments, apart from those of the Googles and Amazons of the world, are really ready, says Erik Brown. “This is a huge growth opportunity in most industries. The data is there, but the collaborative, cross-functional organizational structures and flexible data pipelines are not ready to exploit it effectively.”

You don’t have to be a trained data expert to manage AI systems: what you need is an interest in learning and using new techniques. “Artificial intelligence technology is driving the trend towards citizen data use, which is a game-changer,” says Alan Porter, director of product marketing at Nuxeo. “In the past, these roles required in-depth technical knowledge and coding skills. But with advances in technology, many tools and systems do the bulk of the technical work for you. It is not as important for people in these positions to have technical knowledge; instead, organizations are looking for more analytical people with specific business expertise.”

Analyze machines before giving them more autonomy

While people with technical and coding skills will continue to play an essential role in organizations, Porter continues, “a big piece of the puzzle now is having analysts with specific business knowledge, so they can interpret the information collected and understand how it fits into the bigger picture. Analysts also need to be able to communicate their findings to stakeholders outside the analysis team in order to make changes.”

In his MIT Technology Review article, Heaven concludes that “because everything is linked, the impact of the pandemic has been felt very broadly, affecting mechanisms that, in more typical times, remain hidden. If we are looking for a positive, now is the time to take stock of these newly exposed systems and ask how they could be designed better and made more resilient. If we want to trust machines, we have to watch over them.” Indeed.

Source: ZDNet.com
