Ethics · Feb 14, 2026 · 7 min read

Understanding State Bar AI Ethics Guidelines

A summary of current bar association guidance on AI use in legal practice across major jurisdictions.


As AI tools become prevalent in legal practice, state bar associations are issuing guidance on ethical use. Understanding these guidelines is essential for any firm adopting legal AI tools. Here's a practical summary of the current landscape.

The ABA's Foundational Position

The American Bar Association has addressed AI use through several formal opinions and resolutions. The core principles are:

Duty of Competence (Rule 1.1): Attorneys must understand the AI tools they use sufficiently to evaluate whether the output is reliable and appropriate. You don't need to understand the underlying machine learning models, but you do need to understand what the tool does, its limitations, and how to verify its output.

Duty of Supervision (Rule 5.3): AI is treated analogously to non-lawyer assistance. Attorneys must supervise AI output with the same diligence they would apply to work from a paralegal or junior associate.

Duty of Confidentiality (Rule 1.6): Client information entered into AI tools must be protected. Attorneys must ensure that AI vendors have appropriate data security measures and that client data isn't used to train general-purpose models.

Duty of Communication (Rule 1.4): Some jurisdictions require disclosure to clients when AI is used in their representation. Even where not required, transparency is a best practice.

State-by-State Highlights

California

The California State Bar issued Practical Guidance on the use of generative AI, emphasizing that attorneys remain responsible for all work product regardless of AI involvement. Attorneys must verify AI output, disclose AI use when material, and protect client confidentiality.

New York

New York courts have implemented rules requiring attorneys to certify that AI-generated content has been verified for accuracy, particularly after high-profile incidents of AI-hallucinated case citations. The certification requirement applies to all court filings.

Florida

The Florida Bar issued Ethics Opinion 24-1, confirming that attorneys may use AI tools but must supervise the output, maintain competence in the tools' use, and protect client confidentiality. The opinion specifically notes that AI-generated legal research must be independently verified.

Texas

The Texas Center for Legal Ethics has provided guidance emphasizing the duty of competence in understanding AI limitations and the importance of maintaining client confidentiality when using cloud-based AI services.

Other Jurisdictions

Similar guidance has been issued by bar associations in New Jersey, Colorado, Illinois, and many others. The consistent themes are supervision, verification, confidentiality, and competence.

Practical Compliance Checklist

For firms using AI tools like CounselAI, here's a compliance checklist based on current guidance:

  • **Verify all AI output** before filing, sending, or relying on it
  • **Check all legal citations** independently — never rely solely on AI research
  • **Review your vendor's data practices** — ensure client data is protected and not used for model training
  • **Maintain an audit trail** of AI use across matters
  • **Consider client disclosure** — at minimum when AI plays a material role in work product
  • **Train your team** on proper AI use and supervision requirements
  • **Review local rules** — check your jurisdiction's specific requirements for AI use disclosures in court filings
  • **Document your AI policies** — have a written firm policy on acceptable AI use

How CounselAI Supports Compliance

CounselAI was designed with these ethical requirements in mind:

  • **Human-in-the-loop by default**: Every agent output requires attorney review
  • **Source attribution**: All research citations are linked to their sources for easy verification
  • **Complete audit logging**: Every AI interaction is logged for compliance and review
  • **Firm-scoped data isolation**: Each firm's data is isolated and never used for model training
  • **Approval workflows**: High-risk outputs require explicit attorney approval before action
  • **Disclaimers embedded**: Every output includes a clear disclaimer that it's AI-generated and requires human review

The Evolving Landscape

AI ethics guidance is evolving rapidly. We recommend that firms:

  • Subscribe to their state bar's ethics opinion alerts
  • Designate an attorney to monitor AI ethics developments
  • Review and update firm AI policies at least quarterly

The firms that thrive with legal AI will be those that embrace the technology while maintaining the human judgment and professional responsibility that define the practice of law.


This article provides general information about AI ethics guidance. It is not legal advice. Consult your jurisdiction's specific rules and ethics opinions for authoritative guidance.