The UK government will make “foreign interference” such as disinformation a priority offence under the proposed Online Safety Bill, forcing tech companies to remove infringing content shared by foreign state actors.
The move follows the recent announcement of new UK legislation designed to deter foreign state actors seeking to “undermine the interests of the United Kingdom”, including harsher penalties for attempted foreign interference in elections. The proposed legislation arrives shortly after MI5 warned that a Chinese agent with links to the Chinese Communist Party (CCP) had infiltrated Parliament, while the UK has lately intensified its efforts to counter Russian disinformation and “troll factories” seeking to spread misinformation about the war in Ukraine. And then there was the prank call to Ben Wallace, UK Secretary of State for Defence, from Russian pranksters pretending to be Ukrainian Prime Minister Denys Shmyhal.
It is also worth noting that the UK is no stranger to disinformation controversy, perhaps most notably Russia’s alleged interference in the 2016 Brexit referendum that saw the UK vote to leave the European Union. A subsequent report found that the British government and intelligence agencies had made no real assessment of Russia’s attempts to interfere in the referendum, despite the available evidence.
Russia and ‘hostile online warfare’
While today’s announcement applies to disinformation from all foreign actors, British Digital Minister Nadine Dorries specifically referred to the recent “hostile online warfare” emanating from Russia.
“The invasion of Ukraine demonstrated once again how easily Russia can weaponize social media to spread misinformation and lies about its atrocities, often targeting the very victims of its aggression,” Dorries said in a statement published by the Department for Digital, Culture, Media and Sport. “We cannot allow foreign states or their proxies to use the internet to wage hostile online warfare unimpeded. That is why we are strengthening our new internet safety measures to ensure that social media companies identify and remove state-backed disinformation.”
This essentially sees the UK drawing closer ties between two new bills currently making their way through Parliament: the National Security Bill, which was introduced in the Queen’s Speech in May as a replacement for existing espionage laws, and the Online Safety Bill, which includes new rules on how online platforms must handle dubious content. Under the latter bill, which is expected to take effect later this year, online platforms such as Facebook or Twitter will have to take proactive action against illegal or “harmful” content, and could face fines of up to £18 million ($22 million) or 10% of their global annual turnover, whichever is higher. Furthermore, the regulator Ofcom will gain new powers to block access to specific websites.
Priority offence
As a “priority offence,” disinformation joins a host of offences already covered by the Online Safety Bill, including terrorism, harassment, stalking, hate crime, human trafficking, extreme pornography, and more.
With this latest amendment, social media companies, search engines, and other digital platforms hosting user-generated content will have a “legal duty to take proactive, preventative action” to reduce exposure to state-sponsored disinformation seeking to interfere in the UK.
Part of this will involve identifying fake accounts set up by groups or individuals representing foreign states with the express purpose of influencing democratic or legal processes. It will also cover the spread of “hacked information intended to undermine democratic institutions”, which (while not entirely clear) could include factually accurate content surreptitiously obtained from the UK government or political parties. So this could mean that Facebook and others will be forced to remove content if it contains embarrassing revelations about prominent British politicians.
But if we’ve learned anything over the past decade about moderating user-generated content online, it’s that doing so at scale is very difficult, and even then it’s often hard to tell whether a user is a legitimate actor or a bad actor working on behalf of a foreign government. Faced with the prospect of massive fines, it’s a challenge that could see a lot of legitimate online content or accounts caught in the crossfire as internet companies scramble to comply with the legislation.