[1/3] Making sense of the noise:

Sukriti
4 min read · Mar 1, 2021

Notification of the Intermediary Guidelines, IT Rules, 2021: what does it really mean for you?

The ‘new’ (or should I say ‘old’, since they have been in discussion since 2018) intermediary guidelines have finally been notified by MeitY: first leaked, then released officially, and since scrutinized in detail. The rules encompass social media, OTT and digital news platforms. Here I distinguish the operational and strategic implications the rules pose for social media players, for the government, and for us as users who generate, consume and exchange content across multiple channels.

If you are a ‘significant’ social media intermediary (the government is yet to define this threshold, but it is safe to assume Facebook and Twitter fall within it), the rules now require additional operational bandwidth and speed to support proactive and reactive content filtering and removal. On the proactive front, the rules require platforms to deploy automated filtering of unlawful content related to rape and child sexual abuse, besides voluntary takedown of other violative* content. On the reactive front, the rules place time limits on servicing content takedown requests: 24 hours for user-reported complaints, 36 hours to act on an order of a court or law enforcement agency, and 72 hours to address information requests from the government in aid of crime prevention and investigation. To support this proactive and reactive content moderation, social media platforms will also be required to designate a Chief Compliance Officer, a 24x7 Nodal Officer, and a Resident Grievance Officer to oversee dispute resolution requests that arise when a user contests any action taken. The platforms must additionally publish monthly transparency reports chronicling the requests raised by users and the government, and those serviced by the intermediary, providing visibility to all stakeholders involved.

While the rules place an additional compliance burden and cost on platforms, it would be a mistake to treat this as the greater challenge; the greater challenge is strategic in nature. Indeed, the measures already in place at the most widely used platforms, as evident in their community trust and safety guidelines and published transparency reports, are broadly aligned with the spirit of removing and regulating violative content as proposed in the newly notified rules. It remains to be seen whether the volume, nature and outcome of user complaints and government or court orders to remove content will change by virtue of the amendments to the earlier 2011 rules. Moreover, it is critical to distinguish the operational hurdle this change might pose from the more fundamental strategic one.

The more strategic question, then, is how these rules would affect user engagement, safety and privacy. How might mandates that require platforms, in exceptional circumstances, to break end-to-end encryption of personal messages in aid of law enforcement requests play out? Could these new amendments prove restrictive to creative self-expression by users, or in fact make these platforms safer places where questionable content is removed rapidly? Who would increasingly determine what qualifies as ‘questionable’ content: the government and lawmakers, the user base, or the platform itself? Social media discourse currently prides itself on being a democratic force: one that equalizes by safeguarding the voices of religious and socioeconomic minorities and of dissent, and one that decentralizes information flows, thereby homogenizing access for a diverse and heterogeneous user base. Platforms, particularly those that feature user-generated content, are barometers of culture. Would increasing regulation and responsiveness to government and users challenge this premise altogether, compelling platforms to become engineers and regulators of culture instead?

Through the new amendments, the government aspires to drive accountability within social media, particularly to avoid the circuitous loops it has historically faced while requesting content removal to mitigate the spread of fake news, or while seeking information in the interest of national security. Are contracted response timelines and more focal points of contact necessary or sufficient to address this goal, or could ecosystem-wide preparedness in fact better prevent the spread of misinformation? For the government as a regulatory authority, tapping into the potential of digital economies and technological innovation is a key priority. The moot question is therefore again strategic in nature: it is less about over- or under-regulating, or about placing operational mechanisms to drive accountability, and more about effective and efficient regulation that sets the right precedent. Efficient regulation will allow the government to balance its key strategic priorities of promoting healthy public discourse and platform economies on the one hand, and national security and societal stability on the other.

As users, we want to create, share and view content freely, while feeling safe in expressing our personal narratives, opinions and creativity. To the extent that the new rules introduce processes for us to request urgent responses and content takedown where we find content violative or repulsive, and thereby improve the quality of the user experience on platforms, we have reason to rejoice in the agency we will enjoy. It could equally be the case, however, that our individual agency is in fact reduced in the new, more regulated social discourse. For example, with the rules carrying a traceability mandate that, under select circumstances, requires breaking end-to-end encryption, we might have to be prepared for the government gaining glimpses of the content of our private messages. Here again, it is critical to separate the operational from the strategic implications of the amendments: would we as users concede agency to a platform, or to the government, in regulating and ensuring a safer social media experience, or would we prioritize retaining more agency for ourselves?

  • Disclaimer — these views are strictly personal, please feel free to bring to notice what may have escaped fact, reason and judgment here.
  • *Violative — defamatory, obscene, pornographic, paedophilic, invasive of privacy, related to money laundering, or indicative of harassment

Sukriti

recently product policy @Tiktok, previously public policy @LinkedIn