Understanding AI in the Philippine Judiciary: A Simple Guide to the Supreme Court’s 2026 Framework
- Yasser Aureada

Artificial intelligence is no longer a futuristic concept—it is already transforming how legal professionals work. From drafting pleadings to conducting legal research, AI tools are becoming part of everyday legal practice.
Recognizing both the opportunities and risks, the Philippine Supreme Court introduced a landmark policy in 2026: the Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary (A.M. No. 25-11-28-SC).
This framework sets the rules for how AI can be used in courts and legal practice. More importantly, it defines how lawyers, judges, and institutions must balance innovation with responsibility.
In this article, we break down what the framework means in clear, practical terms—especially for lawyers and CPA lawyers navigating modern legal practice.
What Is the AI Governance Framework?
The AI Governance Framework is a comprehensive policy that regulates the use of artificial intelligence in the Philippine judiciary. Its purpose is not to restrict technology but to ensure that AI is used in a way that supports justice rather than undermines it.
At its core, the framework promotes “human-centered augmented intelligence.” This means AI should assist human decision-making, not replace it. Courts and legal professionals are encouraged to use AI to improve efficiency, reduce backlog, and enhance access to justice—but always with human control and accountability.
This is a significant step because it formally integrates AI into the legal system while setting clear ethical and operational boundaries.
Why AI Regulation in Law Matters
AI tools are powerful, but they are not perfect. They can generate incorrect legal citations, reflect hidden biases, or mishandle sensitive data. In a legal setting, even small errors can have serious consequences.
The Supreme Court recognized that without proper governance, AI could compromise fairness, due process, and public trust in the justice system. This is why the framework emphasizes regulation, transparency, and accountability.
By setting clear rules, the Court ensures that AI becomes a tool for improving justice—not a source of risk.
The Core Principle: Human Control Over AI
One of the most important ideas in the framework is that humans must always remain in control of AI systems.
AI is treated as a support tool that provides recommendations or assistance. Final decisions, however, must always rest on human judgment: judges, lawyers, and court personnel must not rely blindly on AI outputs.
The framework describes different levels of human involvement, but the message is consistent across all of them: AI must never operate without meaningful human oversight. Legal reasoning, ethical judgment, and accountability cannot be delegated to machines.
This principle reinforces the idea that justice is fundamentally a human responsibility.
Transparency and Accountability in AI Use
A key requirement under the framework is transparency. Legal professionals must disclose when and how they use AI in their work, especially in court submissions.
For example, if a lawyer uses AI to assist in drafting pleadings or conducting research, this must be clearly stated. The disclosure should include the AI tool used, the extent of its involvement, and confirmation that the output was reviewed.
This requirement is rooted in accountability. Even if AI is used, the lawyer remains fully responsible for the final output. Courts will not accept “the AI made a mistake” as a valid excuse.
This shifts the focus back to professional responsibility. AI may assist in legal work, but it does not reduce the duty of competence, diligence, and integrity expected from lawyers.
Data Privacy and Confidentiality Concerns
Another major focus of the framework is data protection. The judiciary handles highly sensitive information, and so do lawyers—especially CPA lawyers dealing with financial records, tax data, and confidential client information.
The framework makes it clear that AI tools must not be used in ways that compromise confidentiality. Uploading privileged or sensitive information into unsecured AI platforms can lead to serious ethical and legal violations.
This is particularly important in today’s environment where many AI tools are cloud-based. Lawyers must understand how these tools process data, where the data is stored, and who has access to it.
In practice, this means exercising caution and choosing AI tools that comply with data privacy laws and professional standards.
Addressing Bias and Fairness in AI
AI systems learn from data, and that data can contain biases. If not properly managed, these biases can affect outcomes and lead to unfair or discriminatory results.
The framework highlights this risk and requires continuous monitoring and evaluation of AI systems. Legal professionals must be aware that some AI tools are trained on foreign datasets that may not reflect Philippine realities, culture, or legal context.
Ensuring fairness means questioning AI outputs, validating results, and being mindful of how technology may influence decisions. The goal is to prevent AI from reinforcing inequality or distorting justice.
AI Adoption and Risk Management in the Judiciary
The framework introduces a structured process for adopting AI tools within the judiciary. Before any AI system can be used, it must be approved by the Supreme Court. This ensures that all tools meet ethical, legal, and technical standards.
In addition, AI systems are classified based on their level of risk. Some applications are considered high-risk because they directly affect rights and legal outcomes, while others are low-risk and used for administrative tasks.
This risk-based approach allows the judiciary to apply stricter controls where necessary while still allowing innovation in safer areas.
The framework also emphasizes continuous monitoring, testing, and auditing of AI systems. This ensures that tools remain reliable and aligned with ethical standards over time.
What This Means for Lawyers and CPA Lawyers
For lawyers, the framework changes how legal practice is conducted in the digital age. AI is no longer optional; it is becoming part of the profession, and its use comes with clear responsibilities.
Lawyers must now be transparent about AI use, ensure the accuracy of AI-assisted work, and remain fully accountable for their submissions. This raises the standard of diligence and reinforces the importance of professional judgment.
For CPA lawyers, the implications are even more significant. Since financial and tax data are highly sensitive, the risks associated with AI use are greater. Careless use of AI tools could lead to data breaches, compliance violations, or ethical issues.
At the same time, AI presents opportunities. It can streamline financial analysis, improve efficiency in compliance work, and enhance legal research. The key is to use it responsibly and within the boundaries set by the framework.
Aligning with Global AI Governance Standards
The Philippine Supreme Court’s framework does not exist in isolation. It aligns with international developments in AI regulation, including the European Union AI Act, UNESCO guidelines on AI ethics, and ASEAN governance frameworks.
This alignment ensures that the Philippine legal system remains globally competitive and consistent with international best practices. It also prepares legal professionals for cross-border issues involving AI and technology.
Final Thoughts: The Future of Law in the Age of AI
The introduction of the AI Governance Framework marks a turning point in the legal profession. It acknowledges that AI is here to stay while reinforcing that justice must remain grounded in human values.
For lawyers and CPA lawyers, this is both a challenge and an opportunity. It requires adapting to new tools, learning new skills, and maintaining high ethical standards in a rapidly evolving environment.
Ultimately, the message of the Supreme Court is clear:
Artificial intelligence can enhance the justice system—but it cannot replace human responsibility, judgment, and accountability.
As the legal profession moves forward, those who understand how to balance technology with ethics will be best positioned to succeed.