
A voice of reason? The rising challenge of MDH in politics

BY WILLIAM HALL


Those of you on TikTok in February this year may remember seeing videos of Barack Obama, Donald Trump, and Joe Biden playing Minecraft or other online games together. After a brief period of incredulity, you probably made the safe assumption that these three powerful men were not in fact bonding over their dirt starter-house but were simply disembodied, AI-generated voices. But what if you could not make that safe assumption so easily? What if that trust in your news was not there?


This is currently the case in many of the world’s most chaotic and conflict-riven regions. For instance, avid followers of the fringe radio show "The Voice of Sudan" will have recognised the voice of the infamous former Sudanese president and dictator Omar Al-Bashir last week, railing against the state of the country and its military leader, General Burhan. Al-Bashir's sudden re-emergence on the political scene, after a year-long hiatus under heavy guard and following an indictment for war crimes, is the perfect catalyst to further destabilise the already catastrophic scene in Sudan, which has seen 5.4 million people displaced since fighting broke out in April. Memories of his 30-year rule, and of the three distinct genocides he is accused of, would further inflame the regional tensions driving the conflict. That is, if he actually were back. Investigations by the BBC show that his ‘return’ to Sudanese politics was in fact a strikingly realistic AI-generated voiceover based on another Sudanese commentator, Al Insirafi (who was not involved in the fake). Whilst it is good news that this case of MDH (misinformation, disinformation and hate speech) was discovered, what is more worrying is the relative ease with which audio fakes can be created today – a simple Google search turns up hundreds of websites offering realistic voiceovers.


In addition to AI voiceovers, AI-generated images and videos are becoming an increasingly common tactic of MDH distributors. Those of you following the war in Ukraine last year will probably have seen a grainy and shaky video of President Zelensky issuing Ukraine’s formal surrender to the Russian forces. Alternatively, those keeping up with Russian domestic news this year will have seen Putin declaring that Russia was being invaded and that martial law was in effect. Except, of course, these events never actually happened. Both cases appear to have been state-sponsored disinformation operations designed to spread panic and fear among the opposing populace. As with the voice cloning of Al-Bashir, both deepfake videos were investigated and denounced as fake, yet a similar threat arises from how quickly these videos can be made and distributed – and how much better they are getting.


These examples, and the ease with which they were made, pose an immediate threat of inciting chaos in conflict zones, hindering governmental or peacekeeping forces from doing their jobs. But AI also poses a more existential risk of further eroding trust in news and information, particularly as these fakes are posted on harder-to-verify social media sites. In the US alone, 50% of Americans get some of their daily news from social media, and 44% of US social media users report seeing false or misleading information. This overreliance on often inaccurate social media posts has in part led to low public faith in the news, which lies at only 32% and 33% in the US and UK respectively. Such a lack of trust can promote social fragmentation and conflict in all societies, not just warzones.


The situation might seem dire, and for good reason, yet all hope is not lost for public faith in the news. Whilst there are clear problems with MDH on social media, broadcast news and newspapers enjoy higher trust levels – particularly in Europe – with the BBC and Der Spiegel seen as neutral by 68% and 57% of respondents in the UK and Germany respectively. These higher numbers are in part due to the vetting practices these institutions apply to possible fakes, with BBC Reality Check rigorously scrutinising huge quantities of footage and recordings before publishing stories. More trust in traditional media would go a long way in combating MDH.


The example set by these institutions, and the headway that states around the world are making in both anti-deepfake legislation and deepfake recognition, are glimmers of hope for trust in the news. Yet more needs to be done by both national and supranational governments to make these glimmers tangible.


Image: Defense Visual Information
