
Bank Negara Malaysia’s Discussion Paper – Artificial Intelligence in The Malaysian Financial Sector

Earlier in August, Bank Negara Malaysia (“BNM”) released a discussion paper on artificial intelligence (“AI”) in the Malaysian financial sector, setting out its views on the adoption of AI in the sector, including the risks and benefits of AI adoption, as well as regulatory expectations. While non-binding, the paper reflects BNM’s forward-looking posture on the use of AI in the financial sector, and sets out questions on which BNM is inviting feedback from industry players to help shape future policy directions.

Having gone through the discussion paper, we find that it can essentially be broken down into four key sections, each reflecting an area on which BNM is inviting public feedback:

(i) Scope of future AI regulation;
(ii) Risks in AI adoption in the financial sector;
(iii) Adequacy of existing regulatory framework for the financial sector; and
(iv) AI development approach.

In this article, we summarise BNM’s views on each of the items set out above, along with the questions that BNM is posing for public feedback.

1. Scope of Future AI Regulation

BNM acknowledges AI as a transformative force that will reshape the delivery of financial services. With the increased and widespread adoption of AI technology in the financial sector, coupled with the absence of a national AI regulation for the time being, we cannot discount the possibility of a sectoral regulation by BNM.

From the discussion paper, it would seem that BNM is considering the possibility of a sectoral regulation on the adoption of AI in the financial sector. That being said, as with any AI regulation, the first challenge is no doubt to ascertain the definition of AI for the purpose of the regulation. The definition of what constitutes “AI” will be crucial, as it determines the scope of the regulation. If the definition is too wide, it may capture low-risk AI applications, rendering the regulation overly restrictive and wasting resources; if it is too narrow, AI applications with niche use cases may fall through the cracks of regulation.

For the purpose of the discussion paper, BNM has adopted the following definition of AI:

“The use of advanced computer systems capable of processing and analysing large volumes of data and performing tasks that traditionally require human intelligence, including generating content or making predictions to aid in decision making processes”

Ultimately, the central bank aims to facilitate responsible AI adoption that improves consumer outcomes, supports financial stability, and sustains confidence in the financial system. While the financial sector is given the opportunity and “free hand” to experiment and explore AI adoption, BNM emphasises the need for accountability – those operating in the financial sector will have to ascertain associated risks and manage such risks accordingly.

While BNM toys with the idea of a possible sectoral AI regulation, it is inviting feedback from the public on the appropriate definition of AI that supports greater clarity, consistency and regulatory compliance.

2. Risks in AI Adoption in the Financial Sector

AI adoption in the Malaysian financial sector is accelerating. According to BNM’s 2024 industry survey, the majority of financial service providers in Malaysia are exploring AI use cases within their organisations. Perhaps because most modern AI applications are still in their infancy, most current use cases focus on facilitating internal business operations rather than incorporating AI into revenue-generating financial offerings, even as financial service providers actively explore how to implement and deploy AI in their businesses. BNM’s survey identifies the key areas of AI deployment by financial service providers as follows:

(i) Customer analytics and marketing;
(ii) Internal operational improvements;
(iii) Technology and cyber risk;
(iv) Fraud and anti-money laundering (AML); and
(v) Customer service and engagement.

As AI adoption in the financial sector increases, so do the associated risks. BNM’s discussion paper highlights several risks associated with AI adoption in the financial sector:

(a) Fairness and bias: In areas like credit underwriting, the use of AI could unintentionally exclude or price out vulnerable groups if the underlying AI models are trained on flawed or unrepresentative data. This could, in fact, run contrary to the very intention behind innovations such as digital banks, which are designed to improve financial inclusion by serving the underserved. Ironically, the underserved communities who stand to benefit the most from such innovations are also often those with weaker or inconsistent credit histories. If not carefully managed, AI-driven credit underwriting may inadvertently reinforce existing inequalities, leaving the very group it seeks to empower even further behind.

(b) Model risks: Some AI models carry inherent model risks, depending on how they are trained. These may in turn translate into poor decision-making on the part of users, especially where the data used to train the model is of low quality, or where a generative AI model is highly prone to hallucination.

(c) Systemic and sector-wide risks: Currently, there are limited options for financial service providers when it comes to reliable AI offerings. Assuming that most industry players adopt the same third-party AI models or applications, it may lead to synchronised reactions during stress periods.

(d) Cybersecurity threats: Depending on the architecture of the AI deployment, it may also increase cybersecurity risks for financial service providers, as new attack surfaces and attack vectors are introduced into their IT environments.

BNM notes that while AI adoption among financial service providers is growing, risk identification, management and mitigation remain critical. Financial service providers will have to strike a balance between innovation and risk management to ensure that public confidence in the Malaysian financial sector is protected. In the discussion paper, BNM seeks public feedback on AI adoption trends, as well as appropriate risk mitigation strategies for AI deployment.

3. Adequacy of Existing Regulatory Framework for the Financial Sector

As mentioned earlier in this article, there is currently no binding AI regulation in the country, whether at a national or sectoral level. Given the widespread adoption of AI in the financial sector, it is imperative that BNM assesses the adequacy of its existing regulatory framework in dealing with AI-associated risks.

Specifically, BNM is inviting public feedback on the adequacy of the existing regulatory framework in the financial sector in addressing risks associated with AI, and on whether there should be industry-led guidelines to complement regulatory expectations for the responsible and ethical use of AI.

When it comes to technology-related regulation in the financial sector, BNM’s Policy Document on Risk Management in Technology (RMiT) will surely come to mind for legal practitioners who are well-versed in technology. Given that the RMiT is technology-neutral and outcome-focused, BNM has expressed in the discussion paper its view that the existing regulatory framework in the financial sector is adequate, for the time being, to deal with risks associated with AI adoption. As part of their compliance efforts under the RMiT, most financial service providers would already have in place robust risk governance frameworks that take into consideration the risks associated with onboarding AI technologies and solutions, thereby insulating them to some extent from the risks of AI adoption.

That being said, the RMiT mainly requires financial institutions to adopt appropriate risk management strategies in the use of technology in their day-to-day operations. It does not provide specific guidance on the types of AI that financial institutions can use in their business offerings. This is perhaps why, in the discussion paper, BNM expects financial service providers to strengthen oversight across the AI lifecycle by embedding responsible AI principles – fairness, ethics, accountability, transparency, explainability, reliability, and security – into institutional processes. These responsible AI principles are not currently incorporated into the existing regulatory framework of the financial sector. Based on the tone of the discussion paper, BNM may be exploring the introduction of responsible AI principles into its regulatory framework. This may potentially take the form of an amendment to the RMiT to include divisions or chapters on the use of AI by financial institutions, similar to the amendments introduced a few years ago to step up oversight requirements on the onboarding of cloud technology by financial institutions.

4. AI Development Approach

It is clear that BNM recognises the benefits of AI in the financial sector. It stresses the importance of responsible innovation in AI adoption, while ensuring alignment with broader policy objectives.

Looking ahead, BNM highlights three (3) focal areas in ensuring responsible AI development:

(i) Win-Win-Win Use Cases

While AI adoption is key, financial institutions should not blindly onboard AI solutions that do not have clear benefits to the financial market. In this regard, the central bank prioritises AI applications that benefit consumers, enhance business outcomes, and align with regulatory objectives for financial stability, development and inclusion. Examples include AI-driven fraud detection tools and personal financial management solutions that use consumer-permissioned data.

(ii) Regulatory Sandbox as an Enabler

At the same time, BNM recognises that existing regulatory frameworks may not fully cater to every AI use case, particularly more innovative or novel ones. While the discussion paper did not elaborate further on this point, we set out a few examples for readers’ understanding. For one, if an AI model needs to be trained on documents or information relating to the affairs or account of any customer of a financial institution in order to achieve its desired performance and functionalities, this may breach the secrecy provisions under the Financial Services Act 2013. Another example is the use of AI in core banking systems – critical solutions of a bank are subject to high resumption and recovery service level requirements. Many AI systems rely on third-party providers, and given the nascency of the industry and the technology, the stringent service level requirements imposed by BNM may simply not be feasible for the technology at this juncture. To deploy these technologies in the financial sector, major regulatory impediments will have to be overcome. In this regard, BNM highlights the importance of the BNM Regulatory Sandbox as an avenue for industry players to test their AI solutions where regulatory impediments exist.

(iii) Addressing Adoption Challenges through Collaboration

In ensuring that the financial sector moves ahead as a whole in the right direction, BNM stresses the importance of industry-wide collaboration to address common barriers in AI adoption, such as talent shortages, data quality issues and integration with legacy systems. Knowledge sharing on responsible AI adoption, developing industry guidelines and best practices for AI risk management and governance, and enhancing consumer awareness and understanding of AI in financial services, are some of the key areas of industry collaboration that BNM has highlighted in the discussion paper.

Conclusion

BNM’s discussion paper strikes a careful balance between enabling innovation and managing emerging risks. It sends a clear signal that AI adoption in the financial sector is encouraged, but must be underpinned by responsible practices, robust governance, and collective industry commitment. While not expressly mentioned, we believe that the discussion paper hints at the idea of a possible sectoral regulation on the adoption of AI by BNM. Such regulation may come in a form similar to BNM’s regulation of cloud technology – focussing on risk identification and risk management, as opposed to imposing controls on the kind of AI technology that financial institutions can adopt.

The central bank’s call for feedback offers financial institutions, technology providers, and other stakeholders a timely opportunity to shape Malaysia’s AI trajectory – one that advances digitalisation while safeguarding trust, stability, and inclusion. For those interested in submitting feedback to BNM, submissions close on 17 October 2025.

The Technology Practice Group at Halim Hong & Quek frequently advises and represents companies in relation to RMiT, technology or FinTech matters. If you need any legal assistance, please do not hesitate to reach out to the Technology Practice Group.

Halim Hong & Quek has been awarded Fintech Law Firm of the Year in 2024 and 2025, is ranked Tier 3 in Fintech and Financial Services Regulatory by Legal 500, and is ranked Band 2 in Fintech by Chambers and Partners.


About the authors

Lo Khai Yi
Partner
Co-Head of Technology Practice Group
Technology, Media & Telecommunications (“TMT”), Technology
Acquisition and Outsourcing, Telecommunication Licensing and
Acquisition, Cybersecurity
ky.lo@hhq.com.my

Ong Johnson
Partner
Head of Technology Practice Group

Fintech, Data Protection,
Technology, Media & Telecommunications (“TMT”),
IP and Competition Law
johnson.ong@hhq.com.my



© 2025 Halim Hong & Quek