Enforcing AI-Use Obligations in Professional Services Contracts

What Actually Works Under Thai Law and in Cross-Border Engagements

AI Clauses Are No Longer About Permission

In sophisticated professional services engagements, particularly legal services, AI-related provisions have evolved quietly but decisively. They no longer ask whether AI may be used. Instead, they increasingly assume AI will be used and impose affirmative obligations to ensure efficiency, security, accountability, and human judgment, a shift we discussed in our article on AI Data Privacy.

Once AI use becomes part of the service standard, the issue is no longer an innovation policy matter. It becomes a matter of performance enforcement: how a counterparty detects, proves, and responds to non-compliance—especially where work is opaque, cross-border, and heavily dependent on internal workflows. Most commentary stops at drafting advice; enforcement lives elsewhere.

The Structural Reality: AI Obligations Are Operationally Invisible

AI use in professional firms is largely invisible to clients and counterparties. Unlike missed deadlines or incorrect filings, AI-related failures often surface indirectly and too late.

First, non-use manifests economically: when a provider commits to technology-enabled efficiency yet continues billing as if no such tools exist, the breach appears as cost and delay rather than a technical violation. Second, misuse surfaces late, often in regulatory or judicial settings, when human oversight fails. Third, data-related breaches are asymmetrical, usually discovered only after third-party incidents or policy changes, by which point remediation options are already limited. We cover these issues, together with the related PDPA considerations, in more detail in our article on Artificial Intelligence, Machine Learning, and Big Data in Thailand: Legal and Regulatory Developments 2025.

Enforcement Is Not Litigation-First—It Is Leverage-First

In practice, sophisticated counterparties rarely enforce AI obligations through litigation as a first resort. Litigation is slow, and the evidentiary burden is high. By the time courts are involved, commercial trust has often collapsed.

Instead, enforcement occurs through governance controls, economic pressure, and institutional consequences. These mechanisms are quieter, faster, and far more effective at shaping provider behavior.

Transparency as a Performance Obligation

The most effective enforcement tool is not technical auditing but forced visibility. When AI use is embedded in service delivery, the provider must be able to explain, at any time, which tools are used, for which categories of work, under what data-handling assumptions, and with what human review controls.

In practice, transparency is enforced through periodic AI-use attestations, mandatory disclosure of material tooling or policy changes, and escalation duties when third-party AI terms change. The enforcement principle is simple: inability to explain equals non-compliance.

Human-in-the-Loop Is Enforced Through Accountability, Not Process

Human review requirements are not enforced by checking whether a review occurred. Instead, they are enforced by assigning responsibility for outcomes.

Assigning responsibility for AI-assisted outputs, imposing fee consequences for AI-related errors, and treating repeated failures as systemic breaches transform human oversight from an aspiration into an operational requirement.

Economic Enforcement Outperforms Legal Remedies

Procurement professionals recognize this intuitively: pricing changes behavior faster than judicial proceedings. Commitments to AI efficiency are upheld by refusing to pay for avoidable inefficiency, by fixed or capped fee arrangements that internalize efficiency risk, and by retrospective fee challenges.

None of this requires proving how AI was actually used. It requires only showing that the economic result is inconsistent with the agreed operating model.

Data Deletion and Cessation Duties Are About Preparedness, Not Guarantees

Obligations to cease AI use or delete data are often misunderstood. They are not promises of instant or perfect erasure but tests of architectural preparedness and governance maturity.

In controlled environments, deletion is a tractable engineering task. In contrast, many third-party, multi-tenant AI systems impose structural limits on what deletion can reliably achieve or demonstrate. Where data has influenced model tuning, deletion may be technically irreversible without retraining. The compliance question, therefore, becomes one of capability and controllability, not metaphysical certainty.

Application of the Computer Crime Act to AI Prompts

Thai legislation does not yet expressly address whether an AI prompt qualifies as actionable “computer data” under the Computer Crime Act. Nonetheless, comparative analysis from other jurisdictions indicates that prompts deliberately introducing false data, illegal content, or deceptive instructions into automated systems may attract regulatory scrutiny, a topic we cover in more detail in our article on the Computer Crime Act in Thailand.

As a matter of prudent risk management, prompts should be regarded as regulated conduct in which intent, content, and foreseeable consequences intersect, particularly in professional service environments where automated outputs may affect third parties.

The Thailand Enforcement Lens: CCC, CPC, and PDPA

Thai contract law strictly enforces performance obligations, particularly those tied to service quality and good faith. Specific performance exists but is secondary in practice; enforcement typically occurs through termination, fee leverage, and reputational escalation.

Evidence remains the primary obstacle. Even under the enhanced electronic evidence rules, claimants depend largely on contractual records established in advance. The PDPA strengthens enforcement further by converting data-handling failures into regulatory liability, creating a hybrid environment of contractual and administrative risk.

Cross-Border Reality: Proof Beats Jurisdiction

In cross-border engagements, procedural conflicts tend to favor pre-established contractual evidence over strategic choice of jurisdiction. Thai law’s emphasis on documented obligations aligns with this reality.

Well-designed AI obligations succeed not because of jurisdictional reach but because non-compliance becomes provable without discovery.

Final Observation: Enforcement Is an Operating Model

AI-use obligations are enforced predominantly through visibility, incentives, and institutional pressure points rather than through judicial proceedings.

The parties that succeed treat enforcement as an operating framework, putting governance before disputes, economics before injunctions, and evidence before accusations. As artificial intelligence becomes an embedded rather than exceptional element of service delivery, this disciplined approach will define how trust is maintained in professional services relationships.

The comments herein are for informational purposes only, are not guaranteed to be up to date, and do not constitute legal advice.

For further inquiries, please contact Formichella & Sritawat at [email protected] or using the form below.


About the Authors

Naytiwut Jamallsawat serves as a partner at Formichella & Sritawat, where he chairs the firm’s Corporate and Regulatory Practice. He has over 10 years of experience providing counsel to international and regional media organizations, broadcasters, and digital service providers, with specialized expertise in OTT-related issues. He consistently advises clients on NBTC licensing strategies, content compliance, and the regulatory interface between broadcasting law and emerging online platforms.

John Formichella is a founding partner of the law firm Formichella & Sritawat and serves as the head of the firm’s Technology, Media, and Telecommunications (TMT) practice. With over 27 years of professional experience, including tenure as general counsel for a telecommunications company listed on NASDAQ, Mr. Formichella has provided counsel on telecommunications projects throughout Southeast Asia. He is recognized for his expertise in assisting clients with major infrastructure initiatives, international market entry strategies, and spectrum and licensing matters in Thailand. Earlier in his career, he offered guidance on the telecommunications provisions of the proposed United States-Thailand Free Trade Agreement. He continues to be a trusted advisor to investors and operators in the telecommunications, media, and technology sectors seeking to enter or expand within Thailand’s regulated TMT industry.

Onnicha Khongthon is a Senior Associate at Formichella & Sritawat. She possesses extensive expertise in telecommunications and media regulation, advising prominent broadcasters, production companies, and OTT service providers on compliance with Thai broadcasting laws and NBTC procedures. Her responsibilities include managing licensing and regulatory approvals, and she has directed international clients through some of the most complex issues at the nexus of traditional broadcasting and emerging digital distribution models.

Supitchaya Akeyati is an Associate at Formichella & Sritawat. She specializes in data privacy, corporate law, and digital services regulation. She advises global and regional OTT and technology companies on compliance with Thailand’s Personal Data Protection Act, cross-border data transfer rules, and regulatory expectations for digital platforms. Her work connects privacy law with the operational needs of media and online service providers.