Bold takeaway: The next AI challenge for courts is not merely detecting fakes but deciding what it means to be a person in law when synthetic outputs can generate biometric “truths” that rival or surpass human testimony. India needs synthetic-aware evidence rules, constitutional protection for ontological autonomy under Article 21, and equality safeguards against algorithmic arbitrariness under Article 14—especially as AI-assisted judicial systems expand their footprint.
The Case That Changed the Question: From Authenticity to Ontology
An Instagram user tried the “vintage saree” trend; an AI system added a mole to her arm—a feature hidden in the original photo but true in real life. This was not X-ray vision. It was statistical inference: AI filled in missing details based on patterns learned from millions of images. This is an AI hallucination, not a deepfake—an accidental output, not a deliberate deception. Yet because the hallucination overlapped with reality, it exposed a faultline in law: when AI invents a detail that happens to be correct, does evidence begin to constitute identity, not merely depict it?
That single incident reframes the legal agenda. Courts must now decide when probabilistic reconstructions may define people, fix legal consequences, or outweigh human self-representation. The divide between evidentiary object and legal subject starts to collapse.
Hallucination vs. Deepfake: Why the Difference Matters in Court
· AI hallucination: an unintentional invention induced by training patterns—e.g., adding a mole or scar the model “expects” to see.
· Deepfake: an intentional manipulation meant to deceive.
Both can be indistinguishable to the eye. But their legal treatment should diverge. Hallucinations are probabilistic guesses; deepfakes are intentional artifacts. Evidentiary rules must surface that distinction up front, not bury it inside expert skirmishes that occur too late to prevent prejudice.
Global Guardrails: Watermarks, Risk Classes, and Criminalization
· EU: GDPR treats biometric data as a special category; the EU AI Act classifies biometric uses by risk and imposes duties according to context and purpose.
· China: mandates visible watermarks and persistent metadata labels to trace synthetic media across transformations.
· UK: criminalizes sexually explicit deepfakes without consent, with robust penalties.
· US states: restrict political deepfakes and enforce labeling around elections.
These responses share a theme: provenance first, context-sensitive risk controls, and targeted criminalization for specific harms.
India’s Legal Posture: Modernized Evidence, Pre-AI Ontology
The Bharatiya Sakshya Adhiniyam, 2023, eases admission of “computer outputs” and electronic records—rightly modernizing procedure—but is not synthetically aware. The law does not distinguish authentic capture (e.g., CCTV) from generated content that mimics identity cues (e.g., a hallucinated mole). In the deepfake era, this neutrality can allow fabricated visuals to masquerade as documentary truth absent provenance and model-disclosure requirements. Data protection, biometric collection, and impersonation provisions exist in other statutes, but none address synthetic biometrics or algorithmic identity constructions head-on.
Meanwhile, India’s judiciary is scaling AI for transcription, translation, defect-flagging, research, and case management through deployments like SUPACE and e‑Courts, while explicitly maintaining that adjudication remains human-led. This expands the volume of AI-processed material entering records, raising the stakes for synthetic-aware evidentiary rules and fair process.
The Unexplored Frontier: Ontological Autonomy and Algorithmic Subjects
The novelty is not just that AI can fake. It is that AI can generate identity claims that sometimes align with reality more convincingly than witnesses can. That possibility births the “algorithmic subject”: a digital reconstruction that courts may be tempted to treat as more reliable than a human’s own account, especially when low-light, partial views, or noisy data are “enhanced.” This creates three existential collisions for Indian law: what counts as proof of identity under evidence law, whether Article 21 protects a person’s self-definition against involuntary algorithmic redefinition, and whether Article 14 can check arbitrary or discriminatory algorithmic inferences.
Evidence Law for the Synthetic Age: A Practical Design
Move from device-output presumptions to provenance-first authentication, especially for identity.
· Provenance thresholds: cryptographic capture, hardware-bound signatures, and end-to-end chain of custody for audio/video/images used for identification; visible labels plus persistent metadata for synthetic content. Detectors are not enough; authentication must be embedded at capture and preserved through processing (a minimal sketch follows this list).
· Model disclosure and reliability: for AI-generated or AI-enhanced identity evidence, require disclosure of model class and version, validation domain, uncertainty bounds, and known failure modes; treat such submissions as analogous to scientific evidence with rigorous reliability screening.
· Corroboration burden: where AI-generated enhancements introduce or “reveal” biometric features not seen in the raw capture, admit only with independent corroboration (e.g., witness, medical record, multiple modality confirmation). Uncorroborated synthetic traits should not prove identity.
· Contestation rights: guarantee adversarial access to inspect the pipeline—source media, processing steps, prompts, model parameters to the extent feasible, logs, and error analyses—especially in criminal and high-stakes civil contexts.
· Temporal limits: bar predictive identity (aging/appearance forecasts) as proof of present identity absent strong, non-algorithmic corroboration.
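To make the provenance-thresholds item concrete, here is a minimal sketch in Python of a registry-style workflow: hash the media at capture, append every processing step (including any AI enhancement) to a custody log, and re-verify the preserved original before tender. Everything here is illustrative and assumed: the `CustodyRecord` structure, field names, and file paths are hypothetical and are not drawn from any statute, standard, or existing system.

```python
# Illustrative sketch only: names, fields, and file paths are hypothetical.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


def sha256_of_file(path: str) -> str:
    """Compute a SHA-256 digest of a media file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


@dataclass
class CustodyRecord:
    """Hypothetical chain-of-custody log for one piece of identity-linked media."""
    capture_hash: str                     # digest recorded at the moment of capture
    events: list = field(default_factory=list)

    def log_step(self, actor: str, action: str, output_hash: str) -> None:
        """Append a processing step (e.g., an AI enhancement) with its output digest."""
        self.events.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "output_hash": output_hash,
        })

    def original_is_intact(self, path: str) -> bool:
        """Re-verify that the preserved original still matches the capture-time digest."""
        return sha256_of_file(path) == self.capture_hash


# Illustrative use (placeholder file names): hash at capture, log an enhancement,
# then re-verify the original before the exhibit is tendered.
record = CustodyRecord(capture_hash=sha256_of_file("cctv_clip.mp4"))
record.log_step("forensic_lab", "low-light AI enhancement (model class/version disclosed)",
                sha256_of_file("cctv_clip_enhanced.mp4"))
print(json.dumps(asdict(record), indent=2))
print("Original intact:", record.original_is_intact("cctv_clip.mp4"))
```

The point of the sketch is the design choice: integrity is established at capture and carried forward through each processing step, rather than inferred after the fact by a detector.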
Judicial Practice Directions to Bridge the Gap
· Registry rules: require affidavits disclosing any AI assistance or enhancement in producing tendered media; preserve originals; attach cryptographic hashes and chain-of-custody records (a sketch of such a disclosure record follows this list).
· Expert standards: certify AI-forensics experts on authentication, not just detection; prefer methods that verify capture integrity and provenance over post hoc “deepfake detectors”.
· Bench books and training: integrate modules on hallucinations vs. deepfakes, error modes, dataset shift, and context-specific risks into NJA/SJA programs aligned with e‑Courts rollouts.
· High Court pilots: set up “AI evidence cells” to pre-screen identity-linked media for compliance and reliability in select jurisdictions, developing uniform best practices.
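As a companion, here is a minimal sketch of the registry-rule affidavit as a structured disclosure record, assuming hypothetical field names rather than any prescribed form: whether AI touched the exhibit, the model class and version, its validation domain and known failure modes, and hashes tying the tendered file back to the preserved original.

```python
# Illustrative sketch only: this is not a prescribed affidavit form; all fields are assumed.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AIAssistanceDisclosure:
    """Hypothetical affidavit record disclosing AI involvement in a tendered exhibit."""
    exhibit_id: str
    original_hash: str                        # SHA-256 of the preserved original
    tendered_hash: str                        # SHA-256 of the file actually tendered
    ai_used: bool
    model_class: Optional[str] = None         # e.g., "diffusion-based enhancement"
    model_version: Optional[str] = None
    validation_domain: Optional[str] = None   # conditions the model was validated for
    known_failure_modes: List[str] = field(default_factory=list)
    uncertainty_note: Optional[str] = None

    def needs_corroboration(self) -> bool:
        """Flag exhibits where AI was used and the tendered file differs from the original."""
        return self.ai_used and self.original_hash != self.tendered_hash


# Illustrative use: an "AI evidence cell" could auto-flag such exhibits for corroboration review.
disclosure = AIAssistanceDisclosure(
    exhibit_id="EX-12",
    original_hash="ab12...",   # placeholder digests
    tendered_hash="cd34...",
    ai_used=True,
    model_class="image enhancement model",
    model_version="v1.2",
    known_failure_modes=["invents skin marks under heavy noise"],
)
print("Requires corroboration:", disclosure.needs_corroboration())
```

The `needs_corroboration` flag mirrors the corroboration burden proposed earlier: where AI was used and the tendered file no longer matches the original, the risk of uncertainty stays with the proponent.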
Litigation Playbook for Indian Advocates
· Start with provenance: demand device-level capture proofs and immutable chain-of-custody; resist late-stage enhancements as substitutes for original clarity.
· Press Article 14: challenge opaque identity inferences as constitutional arbitrariness and indirect discrimination; seek disclosure, audits, and proportionality review in state-reliant uses.
· Invoke Article 21: frame ontological autonomy and dignity to resist involuntary algorithmic redefinition—particularly where intimate or biometric traits are at issue.
· Police temporal creep: object to predictive identity and future-looking reconstructions as proof of present facts.
· Narrow specialized uses: in criminal matters, insist that identity proof cannot rest on uncorroborated synthetic enhancements; shift the risk of uncertainty to the proponent.
Policy Priorities: Amendments and Standards
· Amend the Bharatiya Sakshya Adhiniyam to define “synthetic media” and “AI‑enhanced evidence,” require provenance for identity-linked submissions, codify contestation rights, and set reliability thresholds.
· Update allied frameworks to address malicious identity synthesis, remedies, and safe harbors for compliant provenance. Align BPR&D and DFSL standards toward authentication-first forensics.
· Synchronize with e‑Courts and SUPACE: mandate AI-use disclosures within case management systems so synthetic-aware rules are woven into everyday practice, not bolted on.
The Bottom Line: Keep Decisions Human, Make Evidence Human-Centric
India is right to adopt AI for administration while keeping adjudication human-led. The threat is not “robo-judges,” but “robo-identities” created or enhanced by AI being smuggled into the evidentiary bloodstream without provenance, explanation, or rebuttal rights. Synthetic-aware rules, equality safeguards against algorithmic arbitrariness, and a constitutional doctrine of ontological autonomy will preserve fairness while enabling the benefits of AI in court administration.
The law must now decide: when algorithms dream convincingly, who gets to be the person in law—the human who lives the life, or the synthetic portrait that looks more certain? The answer should remain human-centric. Build the guardrails now. The next “mole case” will not be a curiosity—it will be a conviction, a welfare exclusion, or a civil liability that turns on a machine’s probabilistic guess about who someone is.