
As mentioned in our previous article, Malaysia’s personal data protection landscape has undergone a significant sea change since last year, following the full implementation of the amended Personal Data Protection Act 2010 (“PDPA”). With the introduction of new compliance obligations, including the appointment of data protection officers (“DPOs”) and the mandatory personal data breach notification obligation, we anticipated that this regulatory momentum would continue to build this year.
True enough, the three most highly anticipated guidelines have now been officially released by the Department of Personal Data Protection on 30 April 2026, namely: (i) the Data Protection Impact Assessment Guideline (“DPIA Guideline”); (ii) the Automated Decision-Making and Profiling Guideline (“ADMP Guideline”); and (iii) the Data Protection by Design Guideline (“DPbD Guideline”).
Gone are the days when practitioners complained that the PDPA landscape in Malaysia was outdated and slow-moving; the pendulum has now swung in the other direction. When it rains, it pours, and organisations are quite literally trying to catch up with the wave of regulatory changes introduced within a relatively short period of time. There is certainly much to digest and absorb, especially for in-house legal counsel, who must not only understand these regulatory changes but also undertake the more important and difficult task of translating these requirements into actual internal implementation.
Against this backdrop, this article does not attempt to overwhelm readers with every detail contained in the new guidelines. Instead, it serves as a high-level preview to provide a broad understanding of what each guideline is intended to achieve, its general scope, how it may operate in practice, and how organisations, legal teams and data teams should begin preparing for implementation.
In subsequent articles, we will take a deeper dive into each guideline and discuss the practical steps required for operational implementation. For now, we set out below the top 7 key takeaways for each of the three guidelines, with the aim of giving organisations the necessary clarity before moving into the more detailed implementation phase.
- A. Data Protection Impact Assessment Guideline: Top 7 Key Takeaways
Key Takeaway 1: Understanding What a DPIA Is
At this stage, many organisations would understandably have heard of the term “DPIA”. However, many may still not fully appreciate what a DPIA actually means or entails, beyond the general understanding that it is an important feature of a mature personal data protection framework.
At a high level, a DPIA can be understood as an assessment of the impact of a processing operation specifically on personal data protection. To keep things simple: every company will have different processing operations, and a DPIA is a legal assessment of the impact of a specific processing operation on personal data protection.
For example, in the hotel sector, a hotel may introduce a new customer relationship management (“CRM”) tool to manage room reservations, guest preferences, loyalty programme data, customer service history, personalised marketing campaigns and post-stay engagement. In that context, the DPIA would be a legal assessment of the impact of such CRM tool on personal data protection. Similarly, a university may be using a new AI-enabled admissions screening tool, such as an automated document review or applicant screening system, to scan certificates, academic transcripts, and examination results as part of the initial filtering process for student admission purposes. In that context, the DPIA would be a legal assessment of the impact of such AI screening tool on personal data protection.
Key Takeaway 2: Understanding the Purpose of Carrying Out a DPIA
The purpose of conducting a DPIA is to understand the specific risks associated with a specific processing operation on personal data protection. To a certain extent, each processing operation that involves personal data may carry different kinds of risks of varying levels. Therefore, through a DPIA exercise, the organisation will be able to better understand the personal data protection risks that may arise from that processing operation.
Typically, a law firm would assist in producing a complete DPIA report, which allows the organisation to have a clear and proper understanding of the impact and risks of the processing operation on personal data. The DPIA report would also highlight the relevant gaps, red flags, and potential non-compliance issues associated with the processing operation, including recommendations on how the company may take the necessary steps to mitigate and reduce those risks while implementing the processing operation.
In this sense, the DPIA is both a legal assessment and a practical risk-management tool. It enables organisations to proceed with innovation and operational improvements, but with a clearer understanding of the personal data protection risks involved and the safeguards required.
Key Takeaway 3: Understanding Who Is Responsible for Carrying Out a DPIA
The obligation to carry out a DPIA is on the data controller, instead of the data processor. This is natural because the data processor processes personal data solely on behalf of the data controller, and not for its own purposes. As such, it is still ultimately the decision of the data controller whether to proceed with a specific processing operation.
Therefore, the DPIA exercise should be conducted by the data controller to fully understand all relevant risks involved and the steps required to mitigate and reduce such risks. Of course, the data processor should provide the necessary support and assistance to the data controller in conducting the DPIA exercise. In many cases, the data processor may be the technology vendor, cloud provider, SaaS provider, system integrator, outsourced service provider or AI tool provider that has practical knowledge of how the system or processing activity actually operates. The data processor should therefore provide the necessary information, support and assistance to the data controller in the DPIA process, where this may include information on data flows, data storage locations, access controls, security measures, sub-processors, retention settings, audit logs, deletion functionality and incident response procedures.
Key Takeaway 4: Understanding When an Organisation Should Carry Out a DPIA
The next important point is to understand when an organisation is required to carry out a DPIA.
The DPIA Guideline provides that a data controller should adopt a two-tier assessment process to determine whether a DPIA is required for a specific processing operation. This involves: first, a quantitative assessment; and second, where the quantitative threshold is not met, a qualitative assessment.
Under the first tier, the data controller should assess whether the processing operation involves:
- a) the processing of personal data of more than 20,000 data subjects; or
- b) the processing of sensitive personal data, including financial information, of more than 10,000 data subjects.
If either of the above thresholds is satisfied, the requirement to carry out a DPIA will be triggered for that specific processing operation.
However, if the processing operation does not meet the quantitative threshold, this does not necessarily mean that a DPIA is not required. The organisation should then proceed to the second tier, which is the qualitative assessment.
The qualitative assessment is less straightforward than the quantitative assessment. The quantitative assessment is relatively clear-cut because the DPIA requirement is automatically triggered once the relevant numerical threshold is met. By contrast, the qualitative assessment requires the organisation to consider whether the processing operation may carry a high risk to the protection of personal data, such that a DPIA should still be conducted even though the quantitative threshold is not reached.
In assessing whether a processing operation may carry such high risk, the DPIA Guideline provides several non-exhaustive factors for consideration. These include whether the processing operation:
- a) may produce legal effects or similarly significant effects on the data subject, such as a noticeable impact on the data subject’s legal status or rights, financial status, health, reputation, access to services, or other economic or social opportunities;
- b) involves systematic monitoring of data subjects;
- c) uses innovative technologies, including technologies involving a new or significantly improved product, good or service, a new process, a new marketing method, a new organisational method in business practices, or a new workplace organisation, external relations or business arrangement;
- d) involves the denial or restriction of the rights of data subjects;
- e) involves the tracking of the location or behaviour of data subjects;
- f) targets children or vulnerable individuals; or
- g) involves automated decision-making and profiling that may pose a high risk to data subjects.
If, after conducting the qualitative assessment, the organisation concludes that the processing operation does carry high risk to the protection of personal data, the organisation should then conduct a DPIA for that specific processing operation.
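For data and engineering teams building an internal DPIA triage workflow, the two-tier assessment above can be expressed as simple screening logic. The sketch below is purely illustrative: the function name, field names and the mechanical treatment of the qualitative factors are our own assumptions, not taken from the DPIA Guideline.

```python
# Minimal sketch of the two-tier DPIA triage (illustrative names only).

QUANT_THRESHOLD_GENERAL = 20_000    # data subjects (any personal data)
QUANT_THRESHOLD_SENSITIVE = 10_000  # data subjects (sensitive personal data)

# Non-exhaustive qualitative factors listed in the DPIA Guideline
QUALITATIVE_FACTORS = {
    "legal_or_similarly_significant_effects",
    "systematic_monitoring",
    "innovative_technologies",
    "denial_or_restriction_of_rights",
    "location_or_behaviour_tracking",
    "targets_children_or_vulnerable_individuals",
    "high_risk_admp",
}

def dpia_required(num_subjects: int,
                  involves_sensitive_data: bool,
                  high_risk_factors: set[str]) -> bool:
    """Return True if a DPIA appears to be triggered for the operation."""
    # Tier 1: quantitative assessment
    if num_subjects > QUANT_THRESHOLD_GENERAL:
        return True
    if involves_sensitive_data and num_subjects > QUANT_THRESHOLD_SENSITIVE:
        return True
    # Tier 2: qualitative assessment (any listed high-risk factor present)
    return any(f in QUALITATIVE_FACTORS for f in high_risk_factors)

# Example: 5,000 subjects, non-sensitive, but with systematic monitoring
print(dpia_required(5_000, False, {"systematic_monitoring"}))  # True
```

In practice, the qualitative tier calls for considered judgement rather than a checkbox test; the sketch simply flags processing operations that warrant a documented high-risk evaluation.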
Key Takeaway 5: Understanding How to Carry Out a DPIA
The DPIA Guideline provides five steps to carry out a DPIA. This follows the approach abbreviated as “DEICA”, where the five steps are as follows:
The first step is Describe (D), which is to describe the processing operations, including the personal data involved, the data flow, and the purposes of the processing for the processing operation.
The second step is Evaluate (E), which is to evaluate the compliance, necessity, and proportionality of the processing operation in relation to its purposes.
The third step is Identify (I), which is to identify and analyse the specific risks to the protection of personal data of the data subject.
The fourth step is Consider (C), which is to consider the measures to be taken to address the specific risks identified in order to safeguard the personal data.
The last step is Assess (A), which is to assess the overall residual risk of the processing operation.
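The five DEICA steps above can also be captured as an ordered internal checklist, for instance when building a DPIA workflow template. This is a minimal sketch with wording paraphrased from the steps above; the data structure itself is our own illustration, not something prescribed by the DPIA Guideline.

```python
# Illustrative checklist of the five DEICA steps (paraphrased).
DEICA_STEPS = [
    ("D", "Describe", "Describe the processing operation: the personal data "
                      "involved, the data flow and the purposes of processing."),
    ("E", "Evaluate", "Evaluate the compliance, necessity and proportionality "
                      "of the operation in relation to its purposes."),
    ("I", "Identify", "Identify and analyse the specific risks to the "
                      "protection of the data subject's personal data."),
    ("C", "Consider", "Consider measures to address the identified risks."),
    ("A", "Assess",   "Assess the overall residual risk of the operation."),
]

acronym = "".join(letter for letter, _, _ in DEICA_STEPS)
print(acronym)  # DEICA
```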
Key Takeaway 6: Understanding What to Do with the DPIA Report
As mentioned above, upon completion of the DPIA, the expectation is then to produce a DPIA report, which is to be presented to the management of the organisation for consideration.
The DPIA report should be more than simply highlighting all the risks, red flags, gaps, and potential non-compliance with the PDPA concerning the processing operation. More crucially, the DPIA report should also make recommendations on the mitigation and prevention measures to address and reduce those potential risks. While these measures may not necessarily fully eliminate the risks, they should at least reduce the probability or possibility of those risks materialising, as well as the potential impact of those risks.
The company should then consider the findings and recommendations in the DPIA report, implement the necessary steps, and allocate appropriate resources to address these issues.
Key Takeaway 7: Understanding the Validity of the DPIA
A completed DPIA report is valid for a period of two years from the date of completion. The DPIA Guideline makes it clear that upon expiry of that period, a refreshed DPIA should be carried out.
We can understand and appreciate the two-year validity period because a DPIA should be treated as an ongoing audit process that is carried out periodically. Given the nature of technological development and evolution, different kinds of risks may naturally be introduced as new features or updates are made to the processing operation. Hence, a refreshed DPIA is expected to be carried out periodically, with a two-year validity period.
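Teams tracking DPIA refresh cycles may find a simple due-date calculation useful. The helper below is hypothetical: it assumes the two-year validity period runs from the date of completion, and it ignores leap-day edge cases for simplicity.

```python
# Hypothetical helper for tracking the two-year DPIA validity period.
from datetime import date

DPIA_VALIDITY_YEARS = 2

def dpia_refresh_due(completed_on: date) -> date:
    """Date by which a refreshed DPIA should be completed.

    Note: date.replace raises ValueError for 29 February in a
    non-leap target year; leap-day handling is omitted here.
    """
    return completed_on.replace(year=completed_on.year + DPIA_VALIDITY_YEARS)

print(dpia_refresh_due(date(2026, 4, 30)))  # 2028-04-30
```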
- B. Automated Decision-Making and Profiling Guideline: Top 7 Key Takeaways
Key Takeaway 1: Understanding What ADMP Is
ADMP is essentially the combination of two concepts: automated decision-making and profiling.
In simple terms, automated decision-making refers to a processing operation where a decision is made automatically, without meaningful human involvement. Of course, when we say “without human involvement”, we are not necessarily speaking in absolute terms, where everything is fully conducted by agentic AI with no human touchpoint at all. It may also cover automated processing operations that involve minimal human involvement, where the automated system effectively carries out the decision-making.
“Profiling”, on the other hand, refers to automated processing that uses personal data to profile data subjects through: (i) predictive elements, where personal data is used to predict or generate insights relating to the data subject; or (ii) inference elements, where generalised inferences are made relating to the data subject.
It is referred to as ADMP because, in such processing operations, these two elements typically go hand in hand. The automated system will rely on the personal data provided to make predictions or inferences concerning the data subject by profiling the data subject, before making a further automated decision concerning that data subject. Put simply, the automated system does not merely receive data – it reads the data, draws a conclusion from it, and acts on that conclusion.
For example, a fintech company that provides micro-lending to consumers may deploy a fully automated system to evaluate loan applications by profiling an applicant through personal data such as income level, employment record, age, residential address, race, repayment history, spending patterns, banking transactions, device information, credit history, and other behavioural or financial indicators to determine the applicant’s creditworthiness. The system may then automatically decide whether the loan should be approved or rejected.
Another example would be an HR department using an automated recruitment system to assess candidates by profiling them through their CV, work history, education background, residential address, parents’ education, race, salary history, employment gaps, professional qualifications, online assessment scores, and other relevant indicators to determine the suitability and eligibility of the candidate. The system may then automatically shortlist, rank, reject, or recommend the candidate for hiring.
Key Takeaway 2: Understanding That the ADMP Guideline Is Only Applicable Where AI Is Used
The second key takeaway is that, while not all automated decision-making or profiling activities necessarily involve AI, the ADMP Guideline is directed at ADMP activities where AI is used for the processing of personal data.
This distinction is important, as there are many examples of automated decision-making and profiling that may not involve AI in the modern sense. For example, a bank may use a simple rule-based system that automatically rejects a loan application if the applicant’s income falls below a fixed threshold. Another example would be an online insurance platform that automatically declines an application if the applicant answers “yes” to certain predefined medical or risk questions. In both examples, the decision may be automated, but it is driven by fixed rules, rather than AI-enabled prediction, inference, or learning.
However, the ADMP Guideline makes it clear that its scope is focused on situations where AI is used in the processing of personal data for ADMP activities. This is understandable because AI-enabled ADMP may carry a different and, in many cases, more complex risk profile compared to traditional rule-based automation. This, in fact, is also broadly consistent with the regulatory direction seen in other jurisdictions, including the European Union’s approach under the EU AI Act, where it adopts a risk-based approach to AI governance, with particular attention given to AI systems that may affect important areas such as employment, education, access to essential services, creditworthiness, law enforcement and other areas where individuals may be significantly affected.
We can certainly appreciate the underlying reason because it is simply undeniable that where AI is used to process personal data, the risks may go beyond ordinary automation. AI systems may rely on large datasets, detect patterns that are not obvious to human reviewers, generate inferences that are difficult to explain, produce outcomes that may be biased or inaccurate, and affect individuals at scale. This is why ADMP involving AI deserves particular regulatory attention.
Key Takeaway 3: Understanding How ADMP Fits Within the DPIA Framework
As you may recall from the DPIA section above, one of the qualitative assessment factors in determining whether a specific processing operation deployed by a company would require a DPIA is whether the processing operation involves ADMP that poses a high risk to the data subject.
Therefore, where it is assessed that the company carries out a processing operation involving ADMP that poses high risk to the data subject, the company would then need to proceed with a DPIA.
Key Takeaway 4: Understanding When ADMP May Be Triggered as High Risk and Require a DPIA
The fourth key takeaway is to understand when an ADMP process may be considered sufficiently high risk such that the organisation should proceed to conduct a DPIA. The DPIA Guideline states that the ADMP threshold would be met if the outcome of the ADMP process would: (i) result in legal effects concerning the data subject; or (ii) significantly affect the data subject.
In this context, “legal effects” means that the process will produce a decision that may affect the data subject’s legal status or legal rights. For example, this may include an automated decision to terminate an employment contract, reject an insurance claim, deny access to a financial product, or reject an application for admission.
The ADMP outcome may also be regarded as significantly affecting the data subject if it:
- i) significantly affects the circumstances, behaviour, or choices of the data subject;
- ii) has a prolonged or permanent impact on the data subject; or
- iii) at its most extreme, leads to the exclusion or discrimination of the data subject.
By way of illustration, an AI-powered ADMP system may significantly affect a data subject where it automatically rejects a consumer’s loan application and effectively restricts that person’s access to credit. Similarly, an AI-enabled insurance underwriting system may significantly affect a data subject where it automatically classifies a person as high risk and materially increases the premium payable, or denies coverage altogether.
Put simply, the concern is not merely that a machine has made a decision. The deeper concern is that the decision may materially alter the opportunities, rights, or lived reality of the individual.
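The high-risk trigger described in this takeaway reduces to a simple disjunction: the outcome is high risk if it produces legal effects, or if any of the three “significant effect” limbs applies. The sketch below is illustrative only, and all names in it are our own assumptions.

```python
# Illustrative check of when an ADMP outcome is "high risk":
# legal effects, or a significant effect on the data subject.

def admp_high_risk(produces_legal_effects: bool,
                   affects_circumstances_or_choices: bool,
                   prolonged_or_permanent_impact: bool,
                   leads_to_exclusion_or_discrimination: bool) -> bool:
    significant_effect = (affects_circumstances_or_choices
                          or prolonged_or_permanent_impact
                          or leads_to_exclusion_or_discrimination)
    return produces_legal_effects or significant_effect

# Automated loan rejection restricting a person's access to credit
print(admp_high_risk(False, True, False, False))  # True
```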
Key Takeaway 5: Informing Data Subjects Where the Processing Involves ADMP
If the processing involves ADMP, the data controller should inform the data subject accordingly.
The ADMP Guideline states that the data controller should, through written notice provided to the data subject, inform and explain the types of decisions that will be made through ADMP, the reasons for such decisions, and the potential consequences of those ADMP decisions.
The information provided by the data controller concerning the use of ADMP for the processing of personal data should be as extensive as reasonably possible, so that the data subject may properly understand what is truly involved. This is important because meaningful transparency is not achieved merely by telling the data subject that “AI” or “automation” is being used. The more important point is to ensure that the data subject understands, in practical terms, how the ADMP process may fundamentally affect them.
Key Takeaway 6: Data Subjects May Withdraw Consent for Processing Involving ADMP
A data subject may exercise their data subject right to withdraw consent for their personal data to be processed where such processing may involve ADMP.
Data subject rights are one of the key fundamentals of the personal data protection framework in Malaysia. Upon reading and understanding the written notice provided by the data controller concerning ADMP, the data subject should have the right to withdraw their consent for their personal data to be processed in a processing operation that may involve ADMP.
Of course, upon receipt of such data subject request, the data controller should respect the decision and cease the processing of the personal data that involves ADMP, unless there is another lawful basis or legal requirement that permits or requires such processing to continue under the PDPA.
Key Takeaway 7: Understanding When ADMP May Be Used by Companies
Ultimately, the use of ADMP is still a processing of personal data. Therefore, before such ADMP processing can be applied, the data controller must first ensure that there is a valid basis under the PDPA for the personal data to be processed.
The ADMP Guideline further states that ADMP may be undertaken by the company in the following three circumstances: (i) where the processing is necessary for entering into, or performance of, a contract between the data subject and the data controller; (ii) where the processing is necessary for compliance with laws; or (iii) where the data subject has given prior consent.
These grounds are important because they show that ADMP is not prohibited outright. Similarly, this is a useful reminder that ADMP should not be treated as a free-standing technology deployment exercise either. Rather, ADMP may be used where there is a proper legal basis, appropriate transparency and sufficient safeguards.
For example, a fintech company may rely on ADMP as part of a digital loan application process where automated assessment is necessary to process and evaluate the applicant’s request for financing. Similarly, an employer may use certain automated tools to assist with candidate screening, provided that the process is transparent, proportionate and subject to appropriate safeguards.
The question is therefore not simply whether the company has the technical capability to automate the decision and successfully deploy ADMP, but whether it has the proper legal basis, transparency, and safeguards to do so responsibly under the PDPA.
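The three permitted circumstances set out in Key Takeaway 7 similarly reduce to a simple disjunction, sketched below with hypothetical names. A real assessment would of course also document the transparency and safeguard measures alongside the legal basis.

```python
# Illustrative sketch of the three circumstances in which the
# ADMP Guideline permits ADMP. All names are our own assumptions.

def admp_permitted(necessary_for_contract: bool,
                   required_by_law: bool,
                   prior_consent_given: bool) -> bool:
    """True if at least one recognised ground for ADMP applies."""
    return necessary_for_contract or required_by_law or prior_consent_given

# A digital loan application where automated assessment is necessary
# to perform the financing contract with the applicant
print(admp_permitted(True, False, False))  # True
```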
- C. Data Protection by Design Guideline: Top 7 Key Takeaways
Key Takeaway 1: Understanding What DPbD Is
DPbD can be understood as an approach where the company incorporates and implements the fundamentals and principles of personal data protection into the design, technicalities, and development of its systems, processes, projects, or programmes from the outset, and throughout the entire lifecycle of the personal data processing activities.
This means that, instead of designing a system, launching it, implementing it, and only then considering whether such processing operation complies with, or takes into account, personal data protection principles and fundamentals, the company should take active control by designing and incorporating personal data protection principles into the whole processing operation from the outset.
For example, before a financial institution implements a new digital onboarding platform, it should already have thought through the entire personal data processing lifecycle: what personal data is collected, why it is collected, whether all such data is necessary, how consent is obtained, how the data is stored, who may access it, how long it is retained, how it is secured, how the data subject may exercise their rights, and how the personal data will eventually be deleted or decommissioned.
A similar example is a healthcare provider implementing a new patient management system. Before deployment, the healthcare provider determines the categories of patient data to be processed, limits access to authorised medical and administrative personnel, implements role-based access controls, prepares audit trails, designs retention periods based on clinical and regulatory needs, ensures secure transfer of personal data between departments, and builds in processes for correction, access requests and secure disposal. This is DPbD in practice because the organisation is thinking through the entire personal data lifecycle before the system goes live.
Key Takeaway 2: Understanding the Key Elements of DPbD
The DPbD Guideline outlines four key elements, which help explain the thinking behind the concept of DPbD. These elements are: (i) proactiveness, (ii) end-to-end protection, (iii) transparency, and (iv) user-centricity.
First, proactiveness means that the organisation should approach personal data protection actively and deliberately. Personal data protection should not be treated as an afterthought, or something to be considered only when a problem arises. Instead, the organisation should establish and design personal data protection principles, controls and safeguards into the processing operation from the beginning.
Second, end-to-end protection means that personal data protection should cover the entire personal data lifecycle. This includes collection, use, disclosure, storage, access, transfer, retention and disposal. In other words, it is not enough for an organisation to focus only on the point of collection. The organisation should also consider what happens to the personal data after it is collected, who can access it, where it is stored, how long it is retained, whether it is shared, and how it is ultimately deleted or anonymised.
Third, transparency means being open and clear with data subjects about the personal data processing activities that take place throughout the relevant lifecycle. Data subjects should be able to understand, in clear and practical terms, what personal data is collected, why it is collected, how it is used, who it may be shared with, how long it may be retained, and what rights are available to them.
Fourth, user-centricity means recognising that personal data ultimately relates to the data subject. Accordingly, the processing operation should be designed around the rights, interests and reasonable expectations of data subjects. This requires organisations to think not only from the perspective of business efficiency, but also from the perspective of fairness, accountability and trust.
Key Takeaway 3: Understanding DPbD for the General Principle
The third key takeaway is to understand how DPbD applies to the General Principle under the PDPA.
At this point, we trust that readers are already familiar with the General Principle under the PDPA, which essentially requires a data controller to have a valid legal basis for the processing of personal data, to process personal data only for lawful purposes and for purposes that are necessary for or directly related to those purposes, and to process only personal data that is adequate but not excessive in relation to those purposes.
DPbD requires the data controller to comply with the General Principle by embedding these requirements into the design of the data processing operation from the very outset and throughout the end-to-end process. This means that before any system, process, project or programme involving personal data is implemented, the organisation should already be asking the right questions. What is the purpose of processing? What is the legal basis? Is the purpose specific enough? Is each data field necessary? Is the organisation collecting more personal data than it needs? Is consent required? Will the processing remain necessary over time?
To assist data controllers, the DPbD Guideline provides a non-exhaustive General Principle checklist. This includes the following:
- i) Predetermination — establishing the purposes and legal basis for processing before any personal data processing takes place;
- ii) Specificity — defining the purposes for processing as narrowly and specifically as possible;
- iii) Data Minimisation — minimising the collection and processing of personal data to what is strictly necessary for the identified purposes;
- iv) Consent — where consent is the legal basis, ensuring that it is obtained through an opt-in mechanism, is capable of being easily withdrawn, and is not based on misleading or vague language;
- v) Assessment — conducting a DPIA before the processing begins to identify personal data risks and implement appropriate mitigation measures; and
- vi) Review — conducting regular reviews throughout the lifecycle of the personal data to verify whether the processing remains necessary for the purposes for which the personal data was collected, and whether the relevant legal bases continue to apply.
Key Takeaway 4: Understanding DPbD for the Notice and Choice Principle
Embedding the Notice and Choice Principle into the processing operation through DPbD is intended to ensure that the data controller is clear and open with the data subject about how it will collect, use, and share the data subject’s personal data.
One particularly important point under the DPbD Guideline is that data controllers should refrain from using deceptive design patterns. This refers to design choices that may mislead, pressure or steer data subjects into making unintended or potentially harmful choices that benefit the data controller, rather than protecting the data subject’s best interests. For example, an organisation should avoid designing consent buttons, privacy settings or withdrawal mechanisms in a way that makes it much easier for users to agree than to refuse, or much easier to give consent than to withdraw it. From a DPbD perspective, the user experience itself should be fair, transparent and respectful of the data subject’s rights.
To assist data controllers in embedding the Notice and Choice Principle into DPbD, the DPbD Guideline provides a non-exhaustive checklist of measures, including the following:
- i) User-Centred Design — designing systems that respect the data subject’s interests through robust default privacy settings, easily accessible personal data protection notices and appropriate user-friendly privacy management tools;
- ii) Consent — where consent is the legal basis, ensuring that consent is obtained through an opt-in mechanism, can be easily withdrawn, and is not obtained using misleading or vague language;
- iii) Notice — providing a personal data protection notice in both the National Language and English, using clear and plain language, and ensuring that the notice is easily accessible and, where applicable, communicated through multiple channels or media; and
- iv) User Control — ensuring that mechanisms enabling data subjects to exercise their rights are provided in clear and plain language, are easily accessible, are contextually appropriate, and where applicable, are communicated through multiple channels or media.
Key Takeaway 5: Understanding DPbD for the Disclosure Principle
The Disclosure Principle requires the data controller to obtain the data subject’s consent, or otherwise have a valid legal basis, for the disclosure of personal data. It also requires the data controller to only disclose personal data for the purposes for which the personal data was to be disclosed at the time of collection, and only to the class of third parties specified in the personal data protection notice.
This is important because many personal data protection risks arise not at the point of collection, but at the point of disclosure. In practice, personal data may be shared with vendors, affiliates, business partners, outsourced service providers, cloud providers, analytics providers, payment processors, logistics providers, professional advisers or other third parties. Each disclosure creates a potential risk point if it is not properly assessed, justified, documented and controlled. Therefore, DPbD requires organisations to embed disclosure controls into the design of their processing operations. Before personal data is shared with any third party, the organisation should already have considered the purpose of disclosure, legal basis, class of recipient, data fields to be disclosed, security measures, contractual safeguards and whether the disclosure is truly necessary.
To assist data controllers in embedding the DPbD into the design of data disclosure processes, the DPbD Guideline provides a non-exhaustive checklist, including the following:
- i) Predetermination — establishing the purposes and legal basis of disclosure before the disclosure of personal data takes place;
- ii) Abstraction — where the purpose of processing, such as compiling statistics, does not require the final dataset to refer to an identified data subject, the organisation should anonymise or delete the personal data as soon as identification is no longer necessary;
- iii) Security — implementing technical security measures, such as hashing and encryption, and organisational measures, such as policies and contractual obligations, to ensure that personal data is securely handled and disclosed;
- iv) Consent — where consent is the legal basis, ensuring that it is obtained through an opt-in mechanism, can be easily withdrawn, and is not obtained using misleading or vague language;
- v) Review — conducting regular reviews throughout the lifecycle of the personal data to verify whether the processing remains necessary for the purposes for which the personal data was collected, and whether the relevant legal bases continue to apply; and
- vi) Third-party Management — ensuring that third parties have adequate personal data protection measures in place, whether through contractual agreements or other appropriate safeguards, before personal data is transferred to them.
Key Takeaway 6: Understanding DPbD for the Retention Principle
The Retention Principle essentially requires the data controller not to keep personal data for longer than is necessary for the fulfilment of the purposes for which it was processed.
To guide the implementation of DPbD in complying with the Retention Principle, the DPbD Guideline provides a non-exhaustive checklist, including the following:
- i) Data Minimisation — minimising the collection and processing of personal data to only what is strictly necessary for the identified purposes;
- ii) Abstraction — where the purpose of processing, such as compiling statistics, does not require the final dataset to refer to an identified data subject, the organisation should anonymise or delete the personal data as soon as identification is no longer necessary;
- iii) Access Limitation — implementing access controls to ensure that access to personal data is granted only to authorised parties with a legitimate need; and
- iv) Security — implementing security measures to protect personal data throughout its entire lifecycle, so that all personal data is collected, processed, transferred, stored and destroyed in a secure manner.
Key Takeaway 7: Understanding DPbD Governance
Ultimately, DPbD is about establishing the right organisational culture, one that enables the company to proactively design personal data protection principles into its processing operations from the outset.
This requires strong DPbD governance within the company, supported by a clear commitment from senior management. Where senior management sets the right standard for personal data protection, the organisation is far more likely to build a culture in which legal, compliance, IT, cybersecurity, product, procurement, HR, marketing and business teams share the same commitment towards responsible personal data processing.
The DPbD Guideline sets out several best practices for implementing DPbD governance, including the following:
- i) ensuring senior leadership commitment and active participation in establishing a robust and proactive personal data protection framework;
- ii) conducting periodic audits of personal data protection policies to verify their practical effectiveness and operational compliance;
- iii) developing systematic methods, including DPIA processes, to identify and assess risks so that negative impacts may be mitigated before they occur; and
- iv) fostering a culture and environment where all stakeholders, including users, are encouraged to suggest improvements to data protection practices, and ensuring that such suggestions are systematically reviewed and adopted where appropriate.
This is an important reminder that DPbD cannot succeed if it is owned only by the legal department. Legal may provide the framework, but implementation requires the involvement of the whole organisation, particularly senior management.
Closing Thoughts
It is understandable that all of this may feel overwhelming at first glance. But perhaps that is precisely the point. The release of these three guidelines sends the clearest message yet that personal data protection in Malaysia is moving from the margins to the centre of corporate governance. It will no longer sit quietly as a back-end compliance document, reviewed only when a data breach occurs or a policy needs refreshing. Moving forward, personal data protection will increasingly take the front seat, deeply embedded into the processing operations of the organisation.
For in-house counsel, this is certainly not just another regulatory update to be noted. It is a broader signal of where the personal data protection landscape is heading next, and the importance of preparing accordingly, not only from a legal compliance perspective, but also from an operational, governance, and strategic implementation perspective.
If you have any questions on personal data protection, please feel free to reach out to the partners in our Technology Practice Group, Ong Johnson and Lo Khai Yi, for a consultation. We have extensive experience in assisting organisations with personal data breaches and data security incidents, and have advised on and responded to breaches at both the international and regional levels.
The Technology Practice Group of Halim Hong & Quek continues to be recognised by leading legal directories and industry benchmarks. Recent accolades include FinTech Law Firm of the Year at the ALB Malaysia Law Awards (2024, 2025 and 2026), Law Firm of the Year for Technology, Media and Telecommunications by the In-House Community, FinTech Law Firm of the Year by the Asia Business Law Journal, a Band 2 ranking for FinTech by Chambers and Partners, and a Tier 3 ranking by Legal 500.
About the authors
Ong Johnson
Partner
Head of Technology Practice Group
Fintech, Data Protection,
Technology, Media & Telecommunications (“TMT”),
IP and Competition Law
johnson.ong@hhq.com.my
Lo Khai Yi
Partner
Co-Head of Technology Practice Group
Technology, Media & Telecommunications (“TMT”), Technology
Acquisition and Outsourcing, Telecommunication Licensing and
Acquisition, Cybersecurity
ky.lo@hhq.com.my