There has been an upsurge in cyber crimes against women amidst the COVID-19 crisis. One such case recently came to light when teenagers from Delhi were found sharing objectionable images of minor girls in an Instagram group named ‘bois locker room’ (henceforth referred to as the ‘Boys Locker Room incident’).

The Delhi Police Cyber Crime Cell and the Delhi Commission for Women have taken suo motu cognisance of the Boys Locker Room incident. It is common knowledge that this is not an isolated incident; such groups and pages are easy to find on social media platforms. However, what catches one’s attention on even a bare perusal of the screenshots is the members’ discussion about changing their social media handles, bios, and profile pictures once the screenshots started drawing attention. The screenshots also show messages from Instagram accounts that have since been deleted.

This leads us to a larger question:

What is the role of social media platforms such as Instagram, Facebook, and Snapchat in restricting such criminal acts and assisting in their investigation?

Role of Social Media Platforms and the Safe Harbour Clause

Social media platforms that facilitate the publication of user-generated content, as well as Internet Service Providers (ISPs), are known as intermediaries, as defined under India’s Information Technology Act, 2000 (“IT Act”) [1]. An intermediary is not held accountable for unlawful user-generated content if it exercises no editorial control over that content [2]. This is known as the ‘safe harbour’ provision.

The safe harbour provision gives intermediaries a measure of protection against liability for user-generated content. Understandably, social media platforms cannot be held liable for everything users publish on their platforms, and cannot be expected to respond to every request and complaint from millions of users. They are only required to inform users, through their terms and conditions or user agreement, that publication of content which is “obscene, pornographic, libellous, invasive of another’s privacy” [3] or which may “harm minors in any way” [4] is prohibited.

However, an intermediary’s responsibility is limited to including such clauses in its user agreement; it is not legally obligated to act against the publication of such content unless the content is brought to its ‘actual knowledge’. Further, intermediaries are required to preserve such information and associated records for at least ninety days for investigation purposes [5]. These records must be disclosed to government agencies upon receipt of an order issued in furtherance of a legal obligation, even if the data includes sensitive personal information [6].

Notably, this rule also empowers administrative and quasi-judicial bodies to obtain such records: it does not distinguish between orders issued by these bodies and those issued by judicial bodies, so long as the order is backed by law.

However, end-to-end encrypted messaging services such as WhatsApp are known to pose difficulties for security agencies seeking to intercept and decrypt information. WhatsApp claims that it does not retain any message logs; it does, however, produce records of identities, locations, telephone numbers, and contact details when such information is subpoenaed by a court, pursuant to its legal obligation to assist law enforcement agencies [7].

Judicial Intrusion vs Fundamental Rights

In light of the fundamental right to freedom of speech and expression and the reasonable restrictions thereto, the Supreme Court of India set the benchmark at the restrictions enumerated under Article 19(2) of the Indian Constitution, and observed that an intermediary would be considered to have ‘actual knowledge’ only upon receipt of a notice from an appropriate government agency, or a court order [8]. Therefore, even if content on a platform is reported by users, the intermediary is not bound to take it down unless a relevant government agency or court orders it to do so.

When users report content on the various grounds available, the intermediary’s decision to act is subject to its own policies, user agreement, and community standards. In the absence of a legal mandate to act on user-reported content, the need to approach the authorities, the attendant judicial procedure, and the fear of societal repercussions often result in cases going unreported. None of this helps prevent social media platforms from turning into virtual ‘boys locker rooms’.

In 2018, the Ministry of Electronics and Information Technology proposed certain amendments to the Information Technology (Intermediary Guidelines) Rules, 2011, which could help prevent similar incidents. The proposed Rule 3(9) required social media platforms to deploy “automated tools … to proactively identify, remove or disable public access to unlawful information and content” [9].

The amendment has been heavily criticised by various organisations, primarily on three grounds: first, its practical implementation may abrogate the freedom of speech and expression; second, the terms used in the proposed amendment are vague and have a very wide scope; and third, current technology is not sophisticated enough to interpret meaning the way a human reviewer can.

However, it has been held that intermediaries need to be more diligent and should use measures such as content filters to restrict the publication of obscene or pornographic material on their platforms [10].

Further, another proposed amendment prescribes that the “intermediary shall enable tracing out of such originator of information on its platform as may be required” [11]. Such a provision may require even end-to-end encrypted messaging services such as WhatsApp to record and store data. Understandably, it has been criticised on, inter alia, two grounds: one, possible encroachment on the fundamental right to privacy; and two, the use of vague terms with a very wide scope.

Recommendations

Drawing on these proposals and on the lessons of incidents such as the Boys Locker Room incident, it is recommended that a notification containing guidelines incorporating the proposed amendments be issued. The guidelines could mandate the use of automated tools based on artificial intelligence and machine learning to red-flag content involving rape threats, sexual harassment, obscene or pornographic material, and material that may harm minors in any form.

Further, mandatory supervision by a human agency over red-flagged content is recommended; the same mechanism can also be used to monitor user-reported content (a minimal sketch of such a pipeline follows below). The guidelines may be applied to platforms such as Facebook, Instagram, WhatsApp, Snapchat, LinkedIn, and Tinder, or to any intermediary, based on its number of monthly active users.
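To make the recommendation concrete, the following is a minimal, illustrative sketch of such a red-flag-and-review pipeline in Python. The keyword-based scorer stands in for a real machine-learning classifier, and the category names and threshold are hypothetical assumptions, not part of any proposed rule.

```python
# Illustrative sketch of the recommended pipeline: an automated tool
# screens content and queues anything it red-flags for mandatory human
# review, rather than removing it automatically.
# The keyword scorer is a trivial stand-in for an ML classifier; the
# categories and threshold below are hypothetical.
from dataclasses import dataclass, field
from typing import List

FLAG_CATEGORIES = {
    "sexual_harassment": ["rape threat", "leak her photos"],
    "harm_to_minors": ["minor girls", "underage"],
}
REVIEW_THRESHOLD = 0.5  # hypothetical confidence cut-off

@dataclass
class Post:
    post_id: str
    text: str
    flags: List[str] = field(default_factory=list)

def score(post: Post) -> float:
    """Stand-in for an ML classifier: fraction of categories triggered."""
    hits = 0
    for category, keywords in FLAG_CATEGORIES.items():
        if any(k in post.text.lower() for k in keywords):
            post.flags.append(category)
            hits += 1
    return hits / len(FLAG_CATEGORIES)

def human_review_queue(posts: List[Post]) -> List[Post]:
    """Posts at or above the threshold go to a human moderator, not auto-removal."""
    return [p for p in posts if score(p) >= REVIEW_THRESHOLD]
```

In practice the scorer would be a trained classifier rather than a keyword list, but the structural point stands: the automated tool only triages, and a human agency makes the final call, addressing the criticism that machines cannot interpret meaning the way humans can.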

Further, a separate set of guidelines governing end-to-end encrypted messaging platforms must also be introduced, requiring messaging services to record and maintain communication metadata (see the sketch below). A decryption key must also be made available so that messages can be traced, which would help substantially where evidence has been destroyed on personal devices. However, since such decryption may encroach on users’ privacy, it is critical that this option be available only under strict judicial scrutiny and supervision, in cases involving dangerous, defamatory, or lewd content.
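As an illustration of what retaining metadata without content could look like, here is a minimal sketch; the field names are hypothetical, and only the ninety-day retention period is drawn from the existing rules [5]. The judicial-order gate reflects the strict supervision argued for above.

```python
# Illustrative sketch: an end-to-end encrypted service retains only
# communication metadata, never message bodies, and discloses a record
# only when the request carries a judicial order.
# Field names are hypothetical; the retention period mirrors Rule 4 [5].
import datetime
from dataclasses import dataclass
from typing import Optional

RETENTION_DAYS = 90  # minimum preservation period under Rule 4 [5]

@dataclass(frozen=True)
class MessageMetadata:
    sender_id: str
    recipient_id: str
    sent_at: datetime.datetime
    # Deliberately no 'body' field: content stays encrypted end to end.

def disclose(record: MessageMetadata, judicial_order_id: Optional[str]) -> MessageMetadata:
    """Release a metadata record only under a court-issued order."""
    if judicial_order_id is None:
        raise PermissionError("Disclosure requires judicial scrutiny and an order.")
    return record
```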

Needless to say, the terms used in drafting such guidelines must be neither vague nor overly broad. There is a very thin line between the necessity of preventing incidents like the Boys Locker Room case and encroaching upon users’ freedom of speech and expression and right to privacy.

Conclusion

In modern-day cyberspace, sexual harassment, morphed images, revenge porn, and rape threats are common, and victims often suffer in silence. It is time to employ measures that prevent such crimes, or at least minimise the suffering of the victims.

Although criminal complaints can be filed against perpetrators, victims often avoid doing so. With appropriate policy measures and technological advances, however, not only can the victims’ suffering be mitigated, but social media can also become a free and safe space for the exchange of opinions and ideas.

It is important to tread carefully along the thin line between encroaching upon civil liberties and protecting the interests of victims, and to act before all social media platforms turn into boys locker rooms.

This article was originally published by the Law School Policy Review and has been republished here under a collaboration.


Endnotes

[1] Section 2(1)(w), Information Technology Act, 2000.

[2] Section 79, Information Technology Act, 2000.

[3] Rule 3(2)(b), Information Technology (Intermediary Guidelines) Rules, 2011.

[4] Rule 3(2)(c), Information Technology (Intermediary Guidelines) Rules, 2011.

[5] Rule 4, Information Technology (Intermediary Guidelines) Rules, 2011.

[6] Rule 6, Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

[7] Rule 3, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[8] Shreya Singhal v. Union of India, (2015) 5 SCC 1.

[9] Draft Rule 3(9), Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018.

[10] Avnish Bajaj v. State, 150 (2008) DLT 769.

[11] Draft Rule 3(5), Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018.


Shubham Jain is a student at National Law University, Jodhpur

Featured image (representational) by Nahel Abdul Hadi on Unsplash.