
Exit and Step-In Rights in Artificial Intelligence-as-a-Service

Believe us when we say this – exit clauses are among the most under-negotiated provisions in technology outsourcing contracts. They usually sit near the end of the agreement, attract little attention during contract review and negotiation, and tend to be drafted quickly from a template that has not been rethought in years. As AI-as-a-Service becomes the norm in technology outsourcing, these outdated, traditional exit clauses are becoming a serious risk. The reason is simple – these clauses, like much of our legislation, were not crafted with the influence and impact of AI in mind.

Traditional exit assistance was built for a world in which the customer could, with some effort, take back its property, people, processes, data and tools. The service provider’s role was to facilitate and ensure the completion of the transition. Where parts of the services depend on the service provider’s proprietary AI, however, this assumption breaks down at the most fundamental level. The customer may find itself locked in not by commercial pressure or switching costs, but by the architecture of the service itself, owing to how AI-as-a-Service is typically structured.

This article examines why AI-enabled outsourcing makes a customer’s exit and step-in rights harder to exercise, what kinds of lock-in and exit complications customers should guard against, and what contracts in the AI-enabled world should do differently to mitigate these risks.

 

The Traditional Exit Framework

A mature, comprehensive and well thought out technology outsourcing contract will certainly contain a set of exit assistance provisions. These typically include knowledge transfer to a successor service provider or to the customer itself; return of customer data and properties in usable form; transitional licences to be granted by the service provider to the customer for a defined period, solely to facilitate transition of services; parallel running of outgoing and incoming services and reverse transition support from the service provider to ensure a smooth transition; and step-in rights enabling the customer to take over operations in defined circumstances, especially for mission-critical systems or operations.

These exit provisions work well for services that are, in principle, replicable – the work is done by people, using tools the customer can licence, substitute, or build in-house. The operational know-how required to continue the outsourced services is well documented and capable of being transferred. In essence, the outsourced services or functions can be taken over by a successor without breaking operational continuity, provided the transition is properly carried out.

 

Why AI Breaks the Assumption

Entering the age of AI, where parts of the services may be delivered through the service provider’s proprietary AI, fine-tuned over time on the customer’s data, several assumptions of the traditional exit framework crumble. The capability the customer needs to maintain the same level of service is not really in the data to be returned, nor is it in the people. It sits in the AI model – in the trained weights, the prompt libraries, the workflow definitions and the embedded judgements that have accumulated over the life of the outsourcing contract.

Simply returning the customer’s data does not return the outsourced capability; documenting the process does not capture the AI model’s tacit knowledge; and a transitional licence to the service provider’s tools or intellectual property, without the trained AI model itself, would not support continued operation at the same level. The customer’s step-in rights likewise become a mere paper right when the operation depends on an AI model the customer does not own and cannot run.

 

Lock-In Risks Unique to AI-Enabled Technology Outsourcing

Consider a bank that has outsourced its Know-Your-Customer (KYC) review to a service provider whose AI has been trained over several years on the bank’s historical decisions, edge cases and risk appetite. Switching providers in this case is not just a matter of onboarding a new vendor. It usually means starting from square one with a different AI model that has no understanding of how this particular bank treats borderline cases, erasing years of accumulated effort.

In another example involving an outsourced AI customer service chatbot, the customer’s data may be returned in raw form, but the value created by using that data to train the service provider’s AI model often cannot be. Once the outsourcing arrangement is terminated, the customer ends up holding a copy of its inputs with no practical way to recreate the outputs, unless it retrains another AI model on that data. Retraining inevitably takes time, effort and resources, and there is no guarantee that the newly trained model will perform at the same level as the outgoing one, simply because of how differently AI models are developed, configured and tuned.

Where a customer has integrated a service provider’s AI deeply into its operations over a long period, it often finds that it has rebuilt its processes around the AI’s functionality. A finance team whose month-end close has been redesigned around AI-driven reconciliation cannot simply unplug it. A customer service operation built around AI-routed tickets and AI-drafted responses cannot revert to a human-led model overnight. In these scenarios, the operational lock-in is real. Switching service providers takes considerable unwinding, and customers may well question whether it is even worth the effort.

The last risk is perhaps the one that should worry organisations most – knowledge lock-in born of over-reliance on AI. Where AI takes over a meaningful share of the work, the human capability to do that work diminishes over time. The customer’s employees lose practical fluency, and by the time the customer wants to exit the AI-enabled outsourcing, there may be very few people anywhere who actually know how to deliver the service without the AI – much as many of us can no longer read paper maps now that GPS is the norm.

 

Contractual Mitigations to AI Lock-In Risks

At this point, it would not be surprising if the thought of avoiding AI-enabled outsourcing altogether has crossed your mind. But that would not be the right response, especially when your competitors may be doubling down on AI to increase their efficiency. The right response is to update the drafting of exit provisions to reflect the new nuances of AI-enabled technology outsourcing. Below are some pointers worth considering.

(i) Rights over fine-tuned models

The most straightforward mitigation in concept, though often the most contested in negotiation, is for the customer to negotiate rights to a usable snapshot of the service provider’s AI model as customised on the customer’s data, ideally a version that can be deployed on a successor system. The viability of this option, however, depends on the technology deployed by the service provider, and service providers may charge additional fees for it, given that the fine-tuned model forms part of their core IP.

(ii) Successor assistance obligations

We often see “reasonable assistance” as the default language for transition or exit assistance. In the era of AI, what is “reasonable” may be dangerously open to interpretation. Exit provisions should be expanded to expressly cover replication of AI-driven workflows, transfer of relevant configurations and, where feasible, active support in onboarding the successor’s model so that the successor can perform the outsourced services at substantially the same level.

(iii) Realistic transition periods

Similarly, vague language on the transition period, such as a “reasonable amount of time”, should be avoided. With parallel running and the time potentially required to retrain the successor’s AI model, the outsourcing contract should reflect the time realistically needed to complete a transition or migration exercise. Otherwise, the transition period may be challenged by the outgoing service provider, or the customer may be exposed to additional fees for transition assistance.

(iv) AI model escrow

We spoke about the possibility of AI source code or model escrow a few months ago. This is a risk mitigation measure customers should consider – an arrangement whereby AI model weights, configurations or full deployment packages are held by a specialised escrow agent and released on predefined trigger events. This would make a customer’s step-in rights smoother and more readily executable when AI-enabled outsourcing is involved.

That being said, we must highlight that it would be difficult for a provider of AI-enabled outsourcing services to accept all of these contractual suggestions. The capability that customers would want to extract on exit is, in many cases, the result of substantial investment and is genuinely the service provider’s IP. There is a real commercial negotiation to be had about what is and is not portable, and about the price the customer should pay for greater flexibility on exit. The point is not that providers should give everything away; it is that the right questions need to be asked openly at the start of the relationship rather than at the end. Exit terms in AI-enabled outsourcing should not be an afterthought; they are an architectural question that belongs on the table from the RFP stage, so that customers and service providers can jointly explore workarounds and mitigation measures.

 

The Technology Practice Group at Halim Hong & Quek frequently advises and assists clients in their technology outsourcing endeavours, whether involving on-premises solutions or cloud-based offerings. If you have any questions or would like to enquire about our services, please feel free to reach out to the partners and co-heads of the Technology Practice Group, Lo Khai Yi and Ong Johnson, for more information.

Our Technology Practice Group continues to be recognised by leading legal directories and industry benchmarks. Recent accolades include FinTech Law Firm of the Year at the ALB Malaysia Law Awards (2024, 2025 and 2026), Law Firm of the Year for Technology, Media and Telecommunications by the In-House Community, FinTech Law Firm of the Year by the Asia Business Law Journal, a Band 2 ranking by Chambers and Partners and a Tier 3 ranking by Legal 500 on FinTech, as well as a Tier 4 ranking by Legal 500 on Technology, Media and Telecommunications.


About the authors

Lo Khai Yi
Partner
Co-Head of Technology Practice Group
Technology, Media & Telecommunications (“TMT”), Technology
Acquisition and Outsourcing, Telecommunication Licensing and
Acquisition, Cybersecurity
ky.lo@hhq.com.my

Ong Johnson
Partner
Head of Technology Practice Group

Fintech, Data Protection,
Technology, Media & Telecommunications (“TMT”),
IP and Competition Law
johnson.ong@hhq.com.my



© 2026 Halim Hong & Quek