Digital technologies can be used to empower – or oppress

By Ellen Judson

COMMUNICATION STRENGTHENS DEMOCRACY

Transparency, communication and accessibility are cornerstones of any healthy democratic system. Digital technologies vastly expand the possibilities for sharing knowledge and expertise, building relationships, engaging in collective analysis and campaigning, and scrutinising and participating in democratic processes, at unprecedented speed and scale.
The emancipatory potential of new technologies is clear – people are able to connect and to organise, whether that is to publicly call out oppression and injustice, or to communicate privately and securely in contexts where open (or even private, but surveilled) conversation would put movements or people at risk (i).

BUT COMMUNICATION CHANNELS ARE USED TO SPREAD DIVISIVE MESSAGES

However, this potential is not fully actualised. Just as these technologies can be used for positive action, they can also be used by malicious actors. These actors exploit the speed, security and connectivity that digital technologies offer to spread, amplify and exacerbate division, hate and violence on an unprecedented scale – all of which increasingly threaten democracies in Europe (ii) and beyond.
Demos research into Russian information operations on Twitter found that a common theme of these operations was content designed to foster opposition to migrants, through the selective amplification of stories about migrant numbers and integration, as well as widespread Islamophobic content targeting migrants (iii).

PEOPLE CAN USE DIGITAL PLATFORMS TO ENGENDER VIOLENCE AGAINST OTHERS, BOTH ONLINE AND OFFLINE

People, both knowingly and unknowingly, can use digital technologies to perpetrate or bring about violence. Sometimes they enact violence through speech, such as threats, harassment or abuse online – particularly targeting women – which can have a significant impact on those affected and on their engagement in online spaces (iv). In other cases, online actions have been associated with physical violence. In Europe, we have seen individuals radicalised online who go on to commit violent extremist acts (v). False reports, often shared on WhatsApp, have also led to acts of violence against people from certain groups: in France, false reports about child abductions led to attacks on members of the Roma community (vi), as has also been seen in India, where rumours on WhatsApp have led to mob violence (vii).
And in the extreme, we see digital platforms implicated in atrocity crimes – systematic campaigns leading to the persecution of and crimes committed against minority groups. In Myanmar, military officials have used social media platforms, in particular Facebook, to spread disinformation and incite hatred and violence against Rohingya Muslims, contributing to what UN investigators have described as a possible genocide (viii).
Threats to people’s fundamental rights and freedoms, inside or outside of Europe, undermine the ability and claim of democracies in Europe to champion them – and what may seem like isolated incidents of violence need to be addressed before they become systematised, not after (ix).
A true democracy is not viable in conditions of division and violence – in conditions where certain people or groups are systematically being shut out of areas of discourse and information exchange, by government or by other individuals, and where those information spaces are used to coordinate or spontaneously bring about acts of violence against them (x).

EUROPEAN GOVERNMENTS ARE TAKING ACTION: BUT NATIONAL GOVERNMENTS MUST RECOGNISE THEIR REGIONAL AND GLOBAL DUTIES TO DEAL WITH THE HARMS THAT CAN BE BROUGHT ABOUT THROUGH THE USE AND CONTROL OF DIGITAL TECHNOLOGIES

There is increasing discussion in Europe about which forms of harm online are sufficient to justify restrictions, whether by governments, by platforms, or both. There is by no means a consensus, but a growing number of voices argue that platforms, and the governments that regulate them, hold some kind of ‘duty of care’ (xi) to users to protect them and others from certain harms, whether exploitation, abuse, harassment, bullying, disinformation or another form of harm (xii).

However, considering duties towards individual users within individual national jurisdictions is not sufficient in a context where the platforms in question operate transnationally and facilitate communication – and so facilitate the resulting harms discussed above – internationally. Coordinated, multilateral action is needed, as the UK’s Online Harms White Paper gestures towards, rather than a piecemeal regulatory structure (xiii).
As the UN Special Rapporteur on freedom of expression has argued (xiv), global platforms operate across democracies and non-democracies, across countries which are more or less liberal. Hence there is a need for a globally applicable standard – human rights – to be at the forefront of determining governmental and company responses to these problems.

This is especially salient given that all governments also hold the global responsibility to protect their citizens and prevent genocide, war crimes, ethnic cleansing and crimes against humanity. This responsibility of a government extends to the protection of citizens of other countries when their own government fails to do so (xv). Our conversations on online harms cannot afford to exclude considerations of how measures may increase or decrease the risk of these crimes occurring.

What this responsibility entails will differ from situation to situation – it does not mean that introducing the tightest of restrictions, or conversely loosening all regulations, can be the answer to every challenge. In Myanmar, fake news spread by malicious actors on Facebook contributed to violence against Rohingya Muslims, with Facebook’s attempts to ban individuals or monitor hate speech proving ineffective (xvi) or easily circumvented (xvii). In China, the government’s campaign of human rights violations against Uighur Muslims is being facilitated and compounded by its use of digital technologies for mass surveillance (xviii). In Sudan, as violence against protestors by government security forces escalated, internet access was completely blocked (xix). These are all instances of governments and tech capabilities interacting in ways that put civilians at risk – but each in a different way, and each requiring a different response. What they have in common, however, is the need for a response strategy that has human rights at its core (xx).

European governments hold responsibilities towards those in countries outside of Europe who may be at risk of these crimes – and they hold responsibilities to tackle the worrying trends within Europe itself. The UN Special Adviser on the Prevention of Genocide, Adama Dieng, has warned of ‘systematic dehumanisation’ occurring in Europe, given increasing levels of ultranationalism and violence against immigrants. ‘While extremists spread inflammatory language in mainstream political discourse under the guise of ‘populism’, hate crimes and hate speech continue to rise. Hate crimes constitute one of the clearest early-warning signs for atrocity crimes. Therefore, they must not remain unchallenged’, he writes (xxi). And the UN Secretary-General this year mandated the Special Adviser on the Prevention of Genocide to form a global action plan on hate speech and online extremism (xxii).

Where people’s fundamental human rights are not protected (xxiii) and divisions are stoked, democratic norms become strained even further – as we are seeing across Europe as extremist and ultranationalist views and rhetoric grow. Digital technologies have the potential to unite, to emancipate and to empower – to shape how we interact and the narratives we live by. But we need a robust, evidence-based and value-led framework to assess, when designing new technologies, rethinking old ones, or establishing regulatory frameworks to limit the harms that arise: will this empower, or oppress?

REFERENCES

i) https://www.opendemocracy.net/en/fear-of-surveillance-is-forcing-activists-to-hide-from-public-life-in-belarus/; https://privacyinternational.org/news-analysis/2816/communities-risk-how-encroaching-surveillance-putting-squeeze-activists; https://twitter.com/N_Waters89/status/1136956229983621120; https://www.politico.com/story/2017/02/federal-workers-signal-app-234510
ii) https://www.ft.com/content/86f2645a-c7a2-11e8-ba8f-ee390057b8c9; https://nebula.wsimg.com/1b7ae50d3bad3b657e5f35fe5141524c?AccessKeyId=9136D1A332A73825C5C6&disposition=0&alloworigin=1
iii) https://demos.co.uk/project/warring-songs-information-operations-in-the-digital-age/
iv) https://decoders.amnesty.org/projects/troll-patrol; https://www.pewinternet.org/2014/10/22/part-4-the-aftermath-of-online-harassment/; https://www.thetimes.co.uk/article/1a6f281e-412c-11e9-aa0a-30b9d78dd63b
v) https://www.ft.com/content/6abda90e-70bf-11e9-bf5c-6eeb837566c5; https://www.ft.com/content/86f2645a-c7a2-11e8-ba8f-ee390057b8c9
vi) https://www.bbc.co.uk/news/world-europe-47719257
vii) https://edition.cnn.com/2018/07/16/asia/india-whatsapp-lynching-intl/index.html
viii) https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html; https://www.theguardian.com/technology/2018/mar/13/myanmar-un-blames-facebook-for-spreading-hatred-of-rohingya; https://www.mic.com/articles/191899/myanmar-military-members-fake-news-facebook#.ggl3PU9Of; https://news.un.org/en/story/2018/08/1017802
ix) https://www.protectionapproaches.org/our-approach.html
x) https://demos.co.uk/wp-content/uploads/2019/05/Warring-Songs-final-1.pdf
xi) https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/04/08091652/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf
xii) https://www.gov.uk/government/consultations/online-harms-white-paper; https://www.euronews.com/2018/11/22/france-passes-controversial-fake-news-law; https://www.bbc.com/news/technology-42510868
xiii) https://www.gov.uk/government/consultations/online-harms-white-paper
xiv) https://slate.com/technology/2019/06/social-media-companies-online-speech-america-europe-world.html
xv) https://www.una.org.uk/r2p-detail
xvi) https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/
xvii) https://www.mic.com/articles/191899/myanmar-military-members-fake-news-facebook#.ggl3PU9Of
xviii) https://www.theguardian.com/news/2019/apr/11/china-hi-tech-war-on-muslim-minority-xinjiang-uighurs-surveillance-face-recognition
xix) https://www.ft.com/content/b1848126-8c0f-11e9-a1c1-51bf8f989972
xx) https://nebula.wsimg.com/b98809100cd1ee6563e884c6cdb2ec06?AccessKeyId=9136D1A332A73825C5C6&disposition=0&alloworigin=1
xxi) https://www.un.org/en/genocideprevention/documents/Adama%20Dieng-Systematic%20Dehumanization%20in%20Europe.pdf
xxii) https://www.japantimes.co.jp/news/2019/05/14/asia-pacific/u-n-chief-meets-survivors-new-zealand-mosque-shootings-decries-online-hate/#.XP-qyIhKg2w; https://news.un.org/en/story/2019/04/1037531
xxiii) https://www.ohchr.org/EN/Issues/RuleOfLaw/Pages/Democracy.aspx