Introduction:
In Gummadi Usha Rani & Anr. v. Sure Mallikarjuna Rao & Anr., the Supreme Court of India took serious exception to a trial court’s reliance on what were found to be non-existent, allegedly AI-generated judgments, holding that such conduct strikes at the very root of judicial integrity and may amount to misconduct rather than a mere error of law. The matter arose from a special leave petition challenging a civil revision decided by the High Court of Andhra Pradesh. The petitioners, who were defendants in a suit for injunction instituted by the respondents, questioned the propriety of the trial court’s order dated August 19, 2025, whereby their objections to an Advocate Commissioner’s report were dismissed on the basis of four purported decisions of the Supreme Court. These decisions—Subramani v. M. Natarajan (2013) 14 SCC 95, Chidambaram Pillai v. SAL Ramasamy (1971) 2 SCC 68, Lakshmi Devi v. K. Prabha (2006) 5 SCC 551, and Gajanan v. Ramdas (2015) 6 SCC 223—were later alleged to be non-existent and synthetic citations generated through artificial intelligence tools. A Bench comprising Justice Pamidighantam Sri Narasimha and Justice Alok Aradhe observed that the issue before it was not merely about the merits of the underlying property dispute but about the “process of adjudication and determination” itself. Declaring at the outset that a decision based on fake and non-existent judgments is not a mere error in decision-making but could constitute misconduct attracting legal consequences, the Court issued notice to the Attorney General for India, the Solicitor General of India, and the Bar Council of India, and appointed senior advocate Shyam Divan as amicus curiae to assist in examining the institutional ramifications of deploying AI-generated content in judicial orders.
Arguments of the Petitioners:
The petitioners contended that the trial court’s order dismissing their objections to the Advocate Commissioner’s report was fundamentally flawed, as it relied upon four judgments of the Supreme Court that were “non-existent and fake.” According to them, these citations could not be traced in official law reports, online databases, or authoritative sources. They argued that such reliance demonstrated not merely a mistaken understanding of precedent, but a grave procedural irregularity that undermined the legitimacy of the adjudicatory process. It was submitted that judicial reasoning must be anchored in authentic and verifiable sources of law, and any order founded upon fabricated or synthetic authorities stands vitiated in law. The petitioners emphasized that courts derive their legitimacy from fidelity to binding precedent and statutory interpretation; when a court relies upon authorities that do not exist, it creates an illusion of legal reasoning while in reality resting its conclusions on hollow foundations. They further submitted that this was not a case where an incorrect precedent had been cited or misapplied—rather, the so-called precedents themselves were fictional. Such conduct, they argued, shakes public confidence in the justice delivery system and cannot be brushed aside as a technical lapse. Before the High Court, they had specifically raised the plea that the cited judgments were artificial intelligence-generated and therefore non-existent. Although the High Court acknowledged this contention and recorded a word of caution regarding the use of AI tools, it nevertheless proceeded to decide the matter on merits and affirmed the trial court’s order. 
Aggrieved by what they perceived as a failure to adequately address the systemic implications of the issue, the petitioners approached the Supreme Court, urging it to lay down clear standards and accountability mechanisms to prevent the infiltration of synthetic legal authorities into judicial decision-making.
Arguments of the Respondents:
The respondents, on the other hand, sought to defend the outcome of the proceedings. It was contended that even if the trial court had inadvertently referred to incorrect or non-existent citations, the ultimate conclusion reached in dismissing the objections to the Commissioner’s report was legally sustainable. They relied upon the reasoning adopted by the High Court, which, while noting the problematic nature of the citations, proceeded to independently assess the merits of the civil revision petition and found no ground to interfere with the trial court’s order. According to the respondents, procedural lapses in citation do not automatically render an order void if the substantive reasoning aligns with established legal principles. They argued that courts must be cautious in distinguishing between clerical or research errors and deliberate misconduct. In an era where digital research tools are widely used, inadvertent reliance on incorrect database entries or AI-generated summaries could occur without malicious intent. The respondents thus submitted that the focus should remain on whether the order is sustainable in law, rather than on the source from which the reasoning was derived. They also emphasized that the dispute at hand concerned a suit for injunction and the limited scope of interference with orders relating to Advocate Commissioner reports. In their view, the High Court had rightly exercised restraint and declined to interfere, and the Supreme Court need not convert the matter into a broader inquiry unless clear evidence of deliberate fabrication or misconduct was established.
Supreme Court’s Judgment:
The Supreme Court, however, viewed the matter through a wider institutional lens. The Bench categorically observed that the case raised "considerable institutional concern," not because of the merits of the property dispute, but because of the integrity of the adjudicatory process. The Court made a crucial distinction between an error in applying the law and a decision founded upon non-existent authorities. It declared in unequivocal terms that reliance on fake, synthetic, or AI-generated judgments cannot be treated as a mere error of law. Instead, such conduct would amount to misconduct, and legal consequences would follow. This pronouncement underscores a fundamental principle: judicial decisions must be rooted in authentic sources of law, and any deviation from this requirement threatens the credibility of the justice system. The Bench noted that the deployment of artificial intelligence tools in legal research, while not inherently impermissible, carries serious risks if outputs are not independently verified. The Court signaled that the issue was not limited to this particular trial court order but had broader ramifications for the judiciary and the legal profession. Recognizing the seriousness of the matter, the Court issued notice to the Attorney General for India and the Solicitor General of India, thereby inviting the Union's assistance on the institutional response required. It also issued notice to the Bar Council of India, indicating that professional standards and ethical guidelines for advocates may need reconsideration in light of emerging technologies. The appointment of senior advocate Shyam Divan as amicus curiae further reflected the Court's intention to undertake a comprehensive examination of the issue. By doing so, the Supreme Court positioned the case as a potential watershed moment in defining the contours of responsible AI usage within the Indian legal system.
The Bench’s observations also resonate with earlier concerns expressed by members of the Court regarding lawyers citing AI-generated fake cases and the risks inherent in overreliance on unverified digital tools. Ultimately, while the final outcome on merits remains to be determined, the Court’s interim observations send a clear message: the integrity of adjudication cannot be compromised by the uncritical adoption of synthetic authorities. Judicial accountability and public confidence demand rigorous verification, especially in an age where artificial intelligence can convincingly fabricate plausible yet entirely fictional legal citations.