JULY 31 — It is humbly submitted that to suggest that social media platforms should not be regulated — or regulated to a minimal extent — because of concerns that regulation may impact the right to freedom of expression, is to fundamentally mischaracterise the right and its interaction with other fundamental human rights.

The right to free speech ought to be understood in relation to other fundamental rights, such as the right to freedom of thought and conscience, right to information and right to life, liberty and security of person, among others.

It must be appreciated that online harm has the potential to impact upon the enjoyment of the above rights.

What is online harm?

The World Economic Forum’s Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms report is frequently cited as helping to define the general nature of online harms.

The report acknowledges how much the Internet has heightened various social harms, such as bullying and harassment, hate speech, disinformation and radicalisation, and how the amplification of these harms has far-reaching consequences, affecting individuals, communities and societies.


Developed by a working group of the Global Coalition for Digital Safety, comprising representatives from industry, governments, civil society and academia, the Typology of Online Harms is intended to serve as a “foundation for facilitating multi stakeholder discussions and cross-jurisdictional dialogues to find a common terminology and shared understanding of online safety”.

It is useful in providing a framework with which to identify and categorise online harms.

The Typology of Online Harms recognises the complex and interconnected nature of online safety, encompassing content, contact and conduct risks.

Content harms include harms sustained in content production, distribution and consumption. Contact harms are those harms that can occur as a result of online interactions with others, whereas conduct harms are harms incurred through an individual user’s behaviour which is facilitated by technology and digital platforms.

Online harms are not only categorised but given examples. Examples of online harms that threaten personal and community safety include child sexual exploitation material, pro-terror material and extremist content, among others.

For harms to health and wellbeing, examples include online content that promotes suicide, self-harm and disordered eating.

Harms that constitute violation of dignity include bullying and harassment, doxxing and image-based abuse. Deception and manipulation are exemplified by impersonation (posing as an existing person, group or organisation in a confusing or deceptive manner), scams (dishonest schemes that seek to manipulate and take advantage of people to gain benefits such as money or access to personal details), phishing (sending of fraudulent messages, pretending to be from organisations or people the receiver trusts, to try and steal details such as online banking logins, credit card details and passwords from the receiver) and catfishing (use of social media to create a false identity, usually to defraud or scam someone).

Consequently, one can easily identify and appreciate that online harms are now rampant, with victims experiencing a range of significant and lasting impacts. In acknowledging the role that users play in the production, distribution and consumption of content, the typology is also cognisant of the ways in which technology facilitates behaviour that is conducive to harm.

It is imperative, therefore, that responsibility for content, contact and conduct risks and harms must extend to the social media platforms themselves.

It is a misguided interpretation of the right to free speech to argue that social media platforms should neither be regulated nor held accountable for the harms caused by abuses of that right.

No government should be dissuaded from pursuing a comprehensive regulatory regime. The World Economic Forum has already called for urgent action which is needed “to minimise the potential harm to all people, with an emphasis on society’s most vulnerable groups, including children”.

* This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.