Privacy in a ‘fishbowl society’

In the age of Artificial Intelligence (AI), technology is a double-edged sword, with users grappling with trade-offs between convenience and privacy. While India has a normative privacy framework, comprising the Puttaswamy judgment (2017), the Information Technology Act, 2000 and its Intermediary Guidelines, and the Digital Personal Data Protection Act, 2023 and its Rules, the reality of privacy remains opaque.

We now live in a fishbowl society, yet we gauge ‘harm’ through the myopic lens of privacy and dignity rather than obscurity. As Meredith Broussard notes in her book Artificial Unintelligence, society’s over-reliance on technology is leaving us ill-prepared to cope with the very systems we have built. This not only exposes individuals to the risks of data breaches but also strips them of obscurity, especially in cases of Non-Consensual Intimate Imagery (NCII) abuse, where algorithms generate deepfake pornographic images without one’s knowledge or control. Regulating such assault is an urgent legal and policy imperative, and the conventional frameworks for addressing it are inadequate. Traditional approaches often describe the risk of such surveillance as a loss of privacy, when in reality the harm is much broader: anxiety, chronic fear of being watched, victim blaming and shaming, societal stigma, career stagnation, and the permanent loss of autonomy and bodily integrity.

Laws are not enough

Surprisingly, despite cybercrimes being on the rise, there is no contemporary data on NCII. The National Crime Records Bureau (NCRB) lumps all cybercrimes into one category, without any granular classification of specific offences. We filed a Right to Information (RTI) application on October 3, 2025, seeking information on the number of cases registered in the previous year relating specifically to cyberbullying and cybervoyeurism, along with the gender-wise distribution of victims. After more than a month, the Ministry responded that “law and order” and “police” fall under the State List, and that the most appropriate authorities to furnish such information would therefore be the respective State governments.

This shows that legal provisions alone are not sufficient to address the realities of online abuse. Accessibility, awareness, and social acceptance of these laws play an equally critical role in determining their effectiveness. A significant share of young women is unaware of what offences such as voyeurism or deepfake pornography legally entail. This lack of digital literacy is compounded by deep-rooted social stigma, shame, and fear of blame, which often deter victims from reporting. In extreme cases, this has driven survivors to self-harm.

Going beyond an SOP

On November 11, 2025, the Ministry of Electronics and Information Technology issued Standard Operating Procedures (SOPs) to curb the circulation of NCII. The guidelines mandate that such content be taken down within 24 hours of reporting, and seek to safeguard the “digital dignity” and privacy of women by offering multiple platforms for complaints. This is a welcome and long-awaited step. However, an SOP is only a starting point: its effectiveness depends on being backed by strong capacity-building programmes, stakeholder consultations, and the strengthening of enforcement agencies.

A key limitation lies in the absence of a gender-neutral framework. Studies show that transgender persons, particularly transwomen, are disproportionately targeted through deepfake-based harassment. Yet the SOP is silent on transgender victims, overlooking the Supreme Court’s recognition of transgender persons as the “third gender” entitled to equal rights. Further, it does not establish clear accountability mechanisms, define the quantum of punishment, or articulate specific regulations for deepfake generation, dissemination, and tracing. A dedicated law on NCII is therefore the need of the hour: one that goes beyond the traditional focus on actus reus and mens rea, and imposes explicit duties on platforms, AI developers, and intermediaries that are more specific and comprehensive than the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025.

With the proliferation of AI-generated deepfakes, used mainly to harass, shame, and silence victims (mostly women), privacy is increasingly shaped and threatened by technological capabilities rather than safeguarded by legal protections. The lack of procedural safeguards, traceability norms, and independent oversight mechanisms has allowed such crimes to go unreported and unpunished for years, even as their frequency and severity escalate. These challenges raise an important question: is an SOP enough?

A lack of awareness of rights, or even of what “voyeurism” or “revenge porn” legally constitute, inadequate sensitisation of police officials, victim-blaming, and deficient cyber-investigative capacity further dilute the impact of existing laws. As NGOs and research studies highlight, thousands of cases are filed daily across India, yet convictions remain disproportionately low. In this context, while the SOP is a crucial first step, a meaningful response to NCII and deepfake harms requires gender-neutral reforms, police training, capacity building, platform accountability, AI-specific safeguards, and stronger victim-centric legal mechanisms.

Aastha Tiwari, Assistant Professor (Law) and PhD scholar, Maharashtra National Law University Mumbai; Shweta Bhuyan, Research Assistant (Law) and PhD scholar, Maharashtra National Law University Mumbai

Published – December 03, 2025 02:03 am IST




