
Source: Journalism.co.uk as of 02-03-2020

From correcting algorithms that discriminate against certain groups of people to fighting filter bubbles that endanger democracy, we need to remind ourselves who is in charge of machine learning

Outsourcing human decisions to computers used to bear a glimmer of hope for a more equal society: a robot cannot tell the difference between genders, geographies or skin colours and it will treat everyone equally, right?

Wrong. And the wake-up call was quite harsh as we realised that an algorithm is only as good as the data we use to create it. The truth is, data mirrors human behaviour with all its prejudices, errors and biases.

Ethics of artificial intelligence – a concern about how humans design and use machine learning – has been around for some time now. Most recently, the Finnish AI company Utopia Analytics published its Ethical AI Manifesto. Its aim is not only to outline company principles but also to start a conversation about what is – and is not – acceptable in the world of newsroom algorithms.

"AI is a tool, and humans are the master," said Utopia's CEO Mari-Sanna Paukkeri. "People should always define what is right and what is wrong."

She explained that technology can help journalists automate many routine and labour-intensive tasks, such as comment moderation, sifting through large datasets or spotting audience trends. However, a quality algorithm is not everything; journalists also need practical tools to use it.

The manifesto draws on the United Nations Universal Declaration of Human Rights and, according to Paukkeri, the company is ready to walk away from a contract if a client seeks to breach these principles. She shared the story of a publisher who asked them to build a model that would favour women's comments over men's, based on the assumption that male users are more likely to misbehave online.

Besides this, a well-functioning AI model must also stay up to date with the latest policies and apply them accordingly. It must decide, for example, whether a comment or a piece of user-generated content is publishable or whether it needs to be flagged to humans. Failure to do this could result in costly decisions.
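The publish-or-flag decision described above can be sketched as a simple triage rule. This is an illustrative example only: the function, the score, and the thresholds are hypothetical stand-ins for a trained moderation model and policies that editors would set themselves.

```python
def moderate(comment: str, toxicity_score: float) -> str:
    """Route a comment based on a (hypothetical) model's toxicity score in [0.0, 1.0].

    Confident cases are handled automatically; uncertain ones go to a human,
    which is the supervision step the article stresses.
    """
    PUBLISH_BELOW = 0.2   # model is confident the comment is acceptable
    REJECT_ABOVE = 0.9    # model is confident the comment breaks policy

    if toxicity_score < PUBLISH_BELOW:
        return "publish"
    if toxicity_score > REJECT_ABOVE:
        return "reject"
    return "flag_for_human"  # grey zone: let a moderator decide
```

Because the thresholds live outside the model, editors can tighten or loosen them as policies change, without retraining anything.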

Despite the pitfalls, it pays off to get your AI strategy right and to get there early. For example, since the Finnish publisher Iltalehti adopted an AI-powered comment moderation tool, it has tripled its volume of real-time online comments, which are an important driver of its audience engagement. "When you build an AI model you need to know how it behaves and why," said Paukkeri.

If the model is biased against a certain group of people based on, say, demographics, opinions or gender, you may quickly find yourself in trouble. The quality of a machine learning tool depends on the quality of the data it is built with. The problem is, once people get burnt, they are more reluctant to use AI again. "The media and editors bear a lot of responsibility and the stakes are high," said Paukkeri. "So we need to build models together with editors to mimic their decision-making." She stressed that humans need to supervise machine learning every step of the way, from creating the algorithm to selecting quality data and deciding how the tools are used.
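One concrete way to "build models together with editors", as a hedged sketch: only keep training examples where a clear majority of editors agree on the label, so the model learns consistent editorial decisions rather than individual quirks. The data structures and the two-thirds threshold here are illustrative assumptions, not Utopia's actual method.

```python
from collections import Counter

def consensus_labels(editor_votes: dict) -> dict:
    """Keep only comments where at least 2/3 of editors gave the same label.

    editor_votes maps a comment id to a list of labels ("ok" / "bad")
    from different editors; disagreements are dropped rather than guessed.
    """
    dataset = {}
    for comment_id, votes in editor_votes.items():
        label, count = Counter(votes).most_common(1)[0]
        if count / len(votes) >= 2 / 3:
            dataset[comment_id] = label  # consistent decision: usable for training
    return dataset
```

Filtering out disagreements this way trades dataset size for label quality, which matters because, as the article notes, the tool is only as good as the data it is built with.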

The reality is that editors are already busy enough before spending hours pondering these questions. So how do you choose what is right for your newsroom? Where do you even start?

“It is a bit like when you recruit a new employee,” said Paukkeri. “How do you know they are good?” Most of us will rely on reputation, check the references and then give them the chance to prove themselves, while closely monitoring the quality of their performance. After all, robots may soon become your best colleagues.


