In the past month, top influencers Bhuvan Bam, Payal Dhare (aka Payal Gaming), and Slayy Point’s Gautami Kawale and Abhyudaya Mohan have sought legal remedies against artificial intelligence (AI)-generated images and videos of them circulating on social media without their consent. Some of the content was obscene, while other material exploited their identities commercially, threatening the reputations these creators have built over years.
While Slayy Point and Bam secured takedowns through the Delhi High Court, and Maharashtra’s cyber police arrested the maker of Dhare’s AI-generated intimate video and made his identity public, none of these creators secured the permanent personality-rights protection that podcaster Raj Shamani obtained in November 2025. This gap leaves influencers vulnerable in India’s booming ₹4,500 crore creator economy.
What is the growing threat AI poses?
At a time when tech giants including Meta, Google, and X are racing to improve their AI tools and chatbots, the ability of these tools to generate sexually explicit deepfakes is raising concerns. In January, the Indian government directed microblogging platform X to crack down on the misuse of its AI tool Grok to generate and share “sexualized and obscene” images of women.
According to a November 2025 report by cybersecurity firm McAfee, 90% of Indians have encountered fake or AI-generated celebrity endorsements, with victims losing an average of ₹34,500 to such scams. The report added that 60% of Indians have come across AI-generated or deepfake content featuring influencers and online personalities, not just mainstream celebrities.
What are the legal provisions?
India is tackling the growing threat of AI deepfakes with existing laws that do not single out AI but cover such harms broadly. Under the Information Technology Act, 2000, creating deepfakes to impersonate someone, steal identities, invade privacy, or share obscene content is punishable. The 2021 IT Rules require social media platforms to remove misleading deepfakes, hate speech, or privacy-violating posts within hours of a complaint, label questionable AI tools, and let users appeal to government panels if their complaints go unaddressed.
Newer laws add teeth: the Digital Personal Data Protection Act, 2023 allows fines on AI firms that use personal data without consent, and the Bharatiya Nyaya Sanhita provides jail terms for those spreading deepfake rumours that cause public panic. The ministry of electronics & IT (MeitY) also introduced AI governance guidelines in November to further regulate high-risk AI systems, including deepfake generators, and to mandate the declaration of AI-generated content across platforms.
How do AI fakes affect creators?
Influencers and online personalities have become valuable digital brands: their name, image, voice, and likeness drive commercial value through endorsements, sponsorships, and brand deals worth millions. Just as celebrities like Amitabh Bachchan or Anil Kapoor have long protected their “personality rights” in court against unauthorized misuse, influencers now seek similar safeguards against deepfakes, AI clones, and fake ads.
In a landmark November 2025 ruling, the Delhi High Court made podcaster Raj Shamani the first Indian influencer to secure comprehensive personality-rights protection, restraining platforms from hosting AI-generated videos, chatbots, or morphed content exploiting his persona without consent. The ruling affirmed that a creator’s goodwill is protectable intellectual property amid rising digital impersonation threats.
Is there a bright side to AI?
In contrast, content creators are themselves using AI tools to build their own digital avatars, slashing content production time and costs while boosting creativity. Apps like ElevenLabs enable realistic voice cloning, allowing creators to generate natural-sounding narrations or podcasts in seconds without studio sessions, while OpenAI’s Sora crafts hyper-realistic video clips from text prompts—turning a simple script into polished visuals that once required days of filming and editing.
Are there any solutions?
One technical answer to the rising AI threat is content labelling and watermarking: embedding a visible or invisible identifier, such as a logo or unique code, into digital content like images, videos, or documents to assert ownership, deter unauthorized use, and enable tracking.
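To make the idea concrete, here is a minimal, illustrative Python sketch of one simple form of invisible watermarking: hiding a short identifier in the least-significant bits of an image’s red channel. The file names and the identifier string are hypothetical, and real provenance systems (such as C2PA-style content credentials or robust perceptual watermarks) are considerably more sophisticated and tamper-resistant.

```python
# Illustrative sketch only: least-significant-bit (LSB) watermarking with Pillow.
# Not a production provenance scheme; it does not survive re-encoding or cropping.
from PIL import Image


def embed_watermark(src_path: str, dst_path: str, mark: str) -> None:
    """Hide an ASCII identifier in the lowest bit of each pixel's red channel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    payload = mark.encode("ascii")
    # 16-bit length header followed by the payload bits, most significant bit first.
    bits = [(len(payload) >> i) & 1 for i in range(15, -1, -1)]
    for byte in payload:
        bits += [(byte >> i) & 1 for i in range(7, -1, -1)]
    if len(bits) > img.width * img.height:
        raise ValueError("image too small for this watermark")
    for idx, bit in enumerate(bits):
        x, y = idx % img.width, idx // img.width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | bit, g, b)  # overwrite only the lowest red bit
    img.save(dst_path, "PNG")  # lossless format, so the hidden bits survive


def extract_watermark(path: str) -> str:
    """Recover the identifier embedded by embed_watermark."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()

    def bit_at(idx: int) -> int:
        x, y = idx % img.width, idx // img.width
        return pixels[x, y][0] & 1

    length = 0
    for i in range(16):
        length = (length << 1) | bit_at(i)
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | bit_at(16 + b * 8 + i)
        data.append(byte)
    return data.decode("ascii")


# Hypothetical usage:
#   embed_watermark("creator.png", "creator_marked.png", "ID:creator-2025")
#   print(extract_watermark("creator_marked.png"))  # -> "ID:creator-2025"
```

The marked image looks identical to the eye, but the identifier can be read back programmatically, which is the basic mechanism behind tracking ownership of digital content.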
In 2024, prime minister Narendra Modi advocated this solution in a freewheeling chat with Microsoft co-founder Bill Gates. The IT ministry is expected to issue guidelines on labelling AI-generated content soon.