
Deepfakes, Identity, Persona and the Indian Legal Frontier

  • Writer: Gopal Trivedi
  • Nov 18, 2025
  • 11 min read

Updated: Nov 20, 2025

Algorithms can now create cloned faces and identical voices, and can even manipulate human perceptions and values. Owing to intelligent machines, or Artificial Intelligence (AI), we are in an era where little seems impossible. The bigger question, therefore, is not what AI can do, but what it should be allowed to do. Courts in India are quickly rewriting the rules of identity and authenticity; whether that will be sufficient, however, remains an open question.

 

Rethinking Identity


The rise of intelligent machines is not just changing industries, but also how we perceive human identity and trust. "Seen with one's own eyes" or "heard by one's own ears" were once the ultimate test of truth and authenticity. However, with the growth of AI-generated audio and video, that basic trust and belief have been shaken. A clip or a voice that may look or sound perfect may have been created by an algorithm (synthetic), and the scene may be entirely a machine-created hallucination. As AI learns to create, decide, and mimic human behavior, we must rethink basic questions about choice, memory, identity, and moral responsibility. Trust, once based on firsthand experience, must now go through additional layers of verification, consent, and legal protections. The challenge is not just to control or train AI, but to understand how it is quietly changing our views on authenticity and human dignity.

 

When Persona Becomes Property: The Indian Perspective


In India, the courts have recognized, and continue to recognize, that one's persona, including name, image, likeness, voice, signature gestures, and even style, is an asset with value that should be protected. It can be owned, licensed, infringed upon, and defended. Persona or personality rights, once the concern only of celebrities, are now playing a key role in the larger conversation about human authenticity in an AI-influenced synthetic world.


Recent times have seen a rise in court orders involving personality rights: cases involving deepfakes, AI scams, and digital impersonation. Courts are quick to issue interim injunctions, order the rapid removal of harmful content, and grant John Doe orders requiring platforms to reveal the identities of offenders. These actions are becoming a consistent national response to AI-enabled impersonation and deepfakes.


This change is not confined to the entertainment industry or a single area of law. Whether the misuse involves fake endorsements, obscene deepfakes, misinformation spread through AI-generated audio and video, or scams using cloned voices, Indian courts are focusing on the same core principles: dignity, autonomy, consent, and speed.


The judicial principles emerging from Indian courts can be summarized into four key ideas:


  1. Treating identity as an asset: One's name, image, voice, and likeness have dignity and commercial value and ought to be protected like any form of property.

  2. Misuse must be addressed quickly: Courts are acting promptly to curb digital impersonation, deepfakes, and AI misuse through urgent injunctions (including John Doe orders) and quick takedown orders.

  3. Cooperation by the platforms: Intermediaries are expected to assist in content removal, disclosure, and prevention, rather than merely acting as passive channels.

  4. Accessibility to relief: Victims deserve timely relief, even if the offender remains unknown or unidentified.

 

Indian Law on Persona and Identity (Personality Rights)


In India, personality rights are governed by a combined legal framework that relies on various statutory provisions, constitutional rights, and evolving court decisions. This framework offers both civil and criminal remedies and reflects India's proactive judicial approach in tackling challenges related to AI.

 

The statutes that together protect personality rights in India include:


Article 21 of the Constitution: The fundamental right to life and personal liberty has been broadly interpreted to include the right to dignity, privacy, and informational autonomy. This constitutional basis supports personality rights protection, allowing for quick blocking orders, complainant-focused dignity standards, and even John Doe relief against deepfakes that harm one’s reputation and dignity.


The Trade Marks Act, 1999: Trademark law and the principles against passing off and false endorsement limit unauthorized commercial use of a person's identity that conveys association, sponsorship, or approval. This enables civil action against misleading representations and false endorsements, allowing affected individuals to prevent unauthorized exploitation of their persona.


The Information Technology Act, 2000 – Sections like 66C (identity theft), 66D (cheating by impersonation), and 67/67A (obscene or sexually explicit content) make cyber fraud, impersonation, and the publication of inappropriate or altered content illegal. These provisions work alongside the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to allow coordinated takedown notices and criminal prosecution for AI-generated fraud, deepfake scams, and damaging content.


The Copyright Act, 1957 – Chapter VIII protects the rights of broadcasters and performers in creative works, recorded voices, and performances, including performers' moral rights. These provisions guard against unauthorized reproduction, adaptation, or digital manipulation of protected works and performances.


The Digital Personal Data Protection Act, 2023 – The Act is mainly focused on data processing; however, it also reinforces the importance of consent and data autonomy. Though it does not specifically address voice or likeness protection, it establishes a framework for consent-based processing that supports personality rights claims.

 

This multi-statute framework is effective when supported by strong contracts, evidence preservation, and coordinated legal action. Recent court decisions have effectively elevated the persona into a protectable and enforceable property interest. Put simply: privacy guards the person; personality rights guard the persona. Together, they defend both dignity and economic interests.

 

Judicial Momentum in the AI Era: Recent Cases


In the past couple of years, and with an increased number of cases in 2025, courts across India have shown a strong commitment to tackling AI-driven impersonation, fraud, and the misuse of deepfakes. The protection of personality rights has shifted from a theoretical concept to an emerging legal issue.

The following cases highlight how judicial decisions regarding personality rights in the AI era are quickly evolving:

 

1. Chiranjeevi vs. Various Platforms – Digital Copycats Including AI[1]: The Hyderabad City Civil Court, vide its order dated September 26, 2025, restrained over thirty platforms from using actor Chiranjeevi's name, image, and voice in AI-generated content and from any other unauthorized digital use of his persona. Following the order, the actor also lodged a formal police complaint against the misuse of his persona.

 

2. Aishwarya Rai Bachchan vs. Aishwaryaworld.com & Ors.[2]: The Delhi High Court, vide its order dated September 9, 2025, granted an interim injunction to Aishwarya Rai Bachchan restraining violations of her personality rights and moral rights. The court also restrained any use suggesting that she endorses the counterparties' goods or services, and prevented the misuse of her name, image, and likeness, including through AI-generated deepfake pornographic content, stressing that such violations damage both dignity and economic interests.

 

3. Abhishek Bachchan vs. The Bollywood Tee Shop & Ors.[3]: Delhi High Court, vide its order dated September 10, 2025, granted injunctive relief to protect Abhishek Bachchan from unauthorized AI/deepfake imagery, voice cloning, and false endorsements using his persona.

 

4. Asha Bhosle vs. Mayk Inc. & Ors.[4]: The Bombay High Court, vide its order dated September 29, 2025, ruled that AI-based voice cloning without consent violates personality rights, preventing platforms from cloning the legendary singer's voice or exploiting her image and likeness commercially.

 

5. Suniel V Shetty vs. John Doe S Ashok Kumar.[5]: The Bombay High Court, vide its order dated October 10, 2025, granted ex-parte temporary relief to Suniel Shetty against AI-powered deepfakes, impersonation, and false endorsements. The court described these acts as a dangerous mix of malicious intent and technology misuse.

 

6. Akshay Hari Om Bhatia vs. John Doe.[6]: The Bombay High Court, vide order dated October 15, 2025, ordered urgent removal of deepfake videos featuring Akshay Kumar, noting that the manipulation is so sophisticated and misleading that it is nearly impossible to distinguish real content from fake, while recognizing risks to reputation, safety, and public order.

 

7. Hrithik Roshan vs. Ashok Kumar / John Doe & Ors.[7]: The Delhi High Court, vide order dated October 15, 2025, granted an interim injunction protecting Hrithik Roshan’s digital likeness and name from AI-driven impersonation, unauthorized merchandise, and altered content, while refusing to remove non-commercial fan pages.

 

8. Global Health Ltd. Vs. John Doe & Ors.[8]: The Delhi High Court, vide its order dated January 8, 2025, granted injunction and takedown orders for deepfake videos misusing Dr. Trehan's persona to spread false medical advice, recognizing the seriousness of professional impersonation.

 

9. Jaikishan Kakubhai Saraf (alias Jackie Shroff) v. The Peppy Store & Ors. [9]: The Delhi High Court, vide its order dated May 15, 2024, granted an ex-parte temporary injunction preventing any use of Jackie Shroff's name, image, voice, likeness, signature phrase "Bhidu," or AI depictions without his consent, marking the first case where an Indian court specifically restricted AI chatbot misuse of a celebrity's persona.

 

10. Anil Kapoor v. Simply Life India & Ors.[10]: The Delhi High Court, vide its order dated September 20, 2023, protected Anil Kapoor's name, voice, likeness, and catchphrases, including "Jhakaas," by issuing injunctions against unauthorized commercial use and AI platform misuse.

 

11. Amitabh Bachchan v. Rajat Nagi and Ors.[11]: The Delhi High Court, vide its order dated November 25, 2022, granted broad injunctive relief to Amitabh Bachchan against unauthorized use of his persona in fraudulent KBC lottery schemes, on merchandise, and through misappropriated domain names. The court recognized the right of publicity as part of personality rights and set an important precedent for protecting celebrity rights in India.

 

These developments show a rapidly evolving judicial landscape that treats personality rights as enforceable rights rooted in a person's privacy, dignity, and economic autonomy. Courts are ensuring that everyone, be it a celebrity, an entrepreneur, or an ordinary citizen, maintains control over how their likeness, voice, and persona are used and presented in this digital age.

 

Beyond Law: The Human, Psychological, and Social Cost


The imitation of identity through algorithms, from face filters to voice clones, has led to what psychologists call a crisis of authenticity. When every video could be fake and every voice can be copied, people live with an ongoing anxiety that can harden into epistemic fatigue: exhaustion resulting from uncertainty, polarization, misinformation, and an overwhelming volume of complex information; a continuous doubt about what to trust and a fear of being misrepresented. Employees may worry about being replaced by their own synthetic versions. Creators juggle numerous AI-influenced personas. This erosion is not just legal or economic; it is also psychological, civic, and existential.

 

Broadly, this leads to three interconnected streams of concern:


  1. Reputational abuse: Explicit or degrading fakes that undermine dignity, safety, and public trust.

  2. Commercial deception: Fake endorsements and misattribution that mislead consumers and erode trust.

  3. Financial crime: Scams that exploit trust in a familiar persona (e.g., "investment tips" using a cloned voice), leading to quick and wide-ranging victimization.

 

Thus, guardrails are not just theoretical requirements; they are essential for mental health and social stability.

 

Self-Restraint: The First Governance Layer


Before legal frameworks, there must be a sense of ethics. Individuals, brands, agencies, and innovators need to practice self-restraint. The potential of AI must harmonize with human responsibility. The same tools that foster innovation can erode trust when used without permission.

 

Self-restraint rests on five straightforward ethical principles:

 

  1. Never use someone’s face or voice without permission: Before incorporating anyone's image, voice, or likeness in a work, seek explicit consent. Just as one wouldn’t use someone’s copyrighted song without permission, their identity must not be used without asking.

  2. Don’t train AI on people’s data without their knowledge: While developing AI systems, don’t input photos, videos, or voice recordings of real individuals unless there is a clear and unambiguous legal approval. Someone’s personal data isn’t free material for any algorithm.

  3. If it resembles someone real, pause and check: Even if the content is not intended to copy a specific person, if the AI-generated output accidentally resembles someone familiar, stop. Confirm whether consent is necessary. Saying "it was unintentional" isn't a valid excuse when someone's reputation is at risk.

  4. Always label what's artificial: Be clear when content is AI-generated or synthetic. A simple note like "This image/voice was created using AI" helps people differentiate what is real from what is artificial. Transparency builds trust; deception destroys it.

  5. Treat people’s identities with respect: A person's face, voice, and identity aren’t mere pixels and data; they represent dignity, livelihood, and trust built over time. Don’t treat human identity as disposable material for experimentation. Handle it with the same care one would want for their own.

 

In summary, acting responsibly now, while technology is still in a developing phase, will help in avoiding heavy-handed laws later. Self-regulation grounded in ethics is preferable to forced rules that arise from misuse.

 

Building Safety Into the Technology (Not Just Relying on Courts)


Genuine protection comes from embedding safeguards directly into technology from the start, rather than waiting for damage to occur before taking legal action.

 

Keeping track of consent and data sources: Companies should maintain clear records of where their AI training data comes from and if they had permission to use it. If someone’s face or voice is part of the system, there should be traceable records showing their agreement.
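To make the idea concrete, here is a minimal, hypothetical sketch of such a consent ledger. Everything in it (the `ConsentRecord` and `ConsentLedger` names, the fields, the release reference) is invented for illustration; a real system would sit on a database and a legal review workflow, but the principle is the same: no traceable consent record, no ingestion.

```python
import hashlib
from dataclasses import dataclass
from datetime import date

# Hypothetical consent ledger: each record ties a file's fingerprint to
# the data subject's documented permission. All names are illustrative.
@dataclass
class ConsentRecord:
    content_sha256: str     # fingerprint of the photo/audio file
    subject: str            # whose face or voice it contains
    consent_reference: str  # e.g. an ID for a signed release or licence
    granted_on: date

class ConsentLedger:
    def __init__(self):
        self._records = {}

    def register(self, record: ConsentRecord) -> None:
        self._records[record.content_sha256] = record

    def may_ingest(self, raw_bytes: bytes) -> bool:
        """Allow training on this exact file only if consent is on record."""
        digest = hashlib.sha256(raw_bytes).hexdigest()
        return digest in self._records

ledger = ConsentLedger()
photo = b"...binary image data..."
ledger.register(ConsentRecord(
    content_sha256=hashlib.sha256(photo).hexdigest(),
    subject="Jane Example",                # hypothetical data subject
    consent_reference="RELEASE-2025-001",  # hypothetical signed release
    granted_on=date(2025, 1, 15),
))
assert ledger.may_ingest(photo)      # consent on file: ingestion allowed
assert not ledger.may_ingest(b"other file")  # no record: blocked
```

The design choice worth noting is that the check keys on the content itself (its hash), not on a filename or folder, so a file copied into the pipeline by another route still fails the check unless consent was actually recorded.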

 

Adding "fingerprints" to content: Just as currency has watermarks to prevent counterfeiting, AI-generated content should include markers that identify when and how it was created. This technology is already available (like C2PA standards) and assists in verifying whether a video or image is real or synthetic.
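The workflow behind such markers can be sketched in a few lines. This is a drastically simplified stand-in, not the real C2PA mechanism (C2PA uses certificate-based signatures and a standardized manifest format embedded in the media file); the stdlib-only sketch below just illustrates the two halves of the idea: the generator signs a manifest describing how the content was made, and anyone holding the content can later verify both the signature and that the content still matches the manifest.

```python
import hashlib
import hmac
import json

# Illustrative only: real provenance systems use PKI, not a shared key.
SIGNING_KEY = b"generator-secret-key"

def attach_credentials(content: bytes, tool: str) -> dict:
    """Generator side: produce a signed manifest for a piece of content."""
    manifest = {
        "generator": tool,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_credentials(content: bytes, credentials: dict) -> bool:
    """Verifier side: check the manifest is untampered AND still matches
    the content in hand (detects both forged manifests and edited media)."""
    payload = json.dumps(credentials["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, credentials["signature"])
    matches = (credentials["manifest"]["content_sha256"]
               == hashlib.sha256(content).hexdigest())
    return untampered and matches

video = b"...synthetic video bytes..."
creds = attach_credentials(video, tool="ExampleVoiceCloner 1.0")
assert verify_credentials(video, creds)          # authentic pairing
assert not verify_credentials(b"edited", creds)  # content was altered
```

Note that verification fails in two distinct situations: when the content has been edited after signing, and when the manifest itself has been tampered with, which is exactly the property that lets a platform trust a "created with AI" label.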

 

Making reporting easy and fast: Social media platforms and websites should have clear "Report Deepfake" or "Report Impersonation" buttons that work. When someone reports identity misuse, the platform should respond within a short span like 24-48 hours. Controversial content like videos of public figures or explicit material should undergo automatic reviews before going viral.

 

Giving people control over their own identity: A "Do Not Clone" list, like "Do Not Call" registries for telemarketers, should exist. People should be able to enter their names in databases informing AI companies: "Don’t use my face, voice, or likeness." If something slips through, there should be a simple method for removal.

 

Making it part of business practice: Companies should include AI-use policies in employment contracts, marketing guidelines, and vendor agreements. This means clearly stating: "We don’t create deepfakes of real people," "We verify consent before using anyone’s likeness," and having an action plan ready if something goes wrong.

 

When safety measures are built into the technology itself rather than added later through legal disputes, issues can be prevented before they arise. It is much better to install locks on doors than to pursue thieves after a burglary.

 

The Ethical Core: Authenticity as Infrastructure


The stakes extend beyond celebrity control. The law now safeguards a fundamental social need: the ability to be authentically oneself in public without being digitally manipulated. In an ever-growing content- and attention-driven economy, consent becomes valuable, and authenticity becomes the gold standard.

 

Courts are fulfilling their role, recognizing dignity, demanding swift action, and requiring platform cooperation. The rest is up to us, to craft better clauses, create safer systems, and use powerful tools with discipline.

 

Conclusion: From Code to Conscience


The age of AI requires more than regulation; it demands self-reflection. Technology will keep evolving to replicate us; the key question is whether we will remember how to be ourselves. Courts are leading the way, but lasting safety will come from a culture of self-restraint, transparent design, and legal foresight. The right to be real, to own your likeness, voice, and digital reflection, is the new human right. In the algorithmic marketplace of identity, authenticity is the only currency that retains its value.

 



Gopal Trivedi

Partner






[1] I.A. No.6275/2025 in O.S.No.44/2025

[2] 2025 SCC OnLine Del 5943

[3] 2025 SCC OnLine Del 5944

[4] 2025 SCC OnLine Bom 3485

[5] 2025 SCC OnLine Bom 3918

[6] 2025 SCC OnLine Bom 4044

[7] CS(COMM) 1107/2025 & I.A. 25665-25667/2025

[8] CS(COMM) 6/2025

[9] 2024 SCC OnLine Del 3664

[10] 2023 SCC OnLine Del 6914

[11] 2022 SCC OnLine Del 4110
