Kazakhstan: New AI Rules Mandate Audits for High-Risk Systems

Alexander Bazilevich is a CRM expert and Top Salesforce Partner with over 17 years of sales experience in the IT industry. He specializes in transforming corporate goals into profits through cross-functional collaboration and innovative business solutions, with deep expertise in business systems and IT products.

Kazakhstan mandates audits for high-risk AI, creating a "trusted systems" list for public procurement and partnerships.
Kazakhstan's new AI rules mandate audits for high-risk systems, ushering in a new compliance era for the nation's artificial intelligence market. Under regulations that took effect in 2026, AI products must pass a third-party audit before they can be included on the government's "trusted systems" lists. Listing itself is formally voluntary, but only listed systems qualify for public procurement or partnerships involving sensitive data, which in practice compels companies to comply.
What are the new compliance rules for high-risk AI in Kazakhstan?
Kazakhstan's new AI law requires high-risk AI products to undergo a third-party audit before they can be added to the government's "trusted systems" registers. Inclusion on these lists is a prerequisite for public procurement and for partnerships involving sensitive data, effectively making the audit essential for market access.
The requirement stems from the Law on Artificial Intelligence, signed on 17 November 2025 and in force since 18 January 2026. The statute classifies AI into minimal, medium, and high-risk tiers and leaves the initial assessment to the system's owner. For high-risk AI, the law mandates technical documentation, risk-management measures, and clear labeling of synthetic content for users. A separate article prohibits manipulative subliminal techniques, social scoring, non-consensual real-time emotion recognition, and biometric systems that create discriminatory profiles.
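The law itself does not prescribe any machine-readable format for the self-assessment, but a minimal sketch of how an owner might record the tier and the high-risk obligations as structured metadata is shown below. The tier labels follow the law's wording; the class names, field names, and obligation keys are illustrative assumptions, not anything the statute defines.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Risk tiers named in the law; the initial assessment is left to the owner."""
    MINIMAL = "minimal"
    MEDIUM = "medium"
    HIGH = "high"


# Obligations the law attaches to high-risk systems (see the paragraph above).
HIGH_RISK_OBLIGATIONS = {
    "technical_documentation",
    "risk_management_measures",
    "synthetic_content_labeling",
}


@dataclass
class RiskSelfAssessment:
    """Hypothetical structured record of an owner's self-assessment."""
    system_name: str
    intended_use: str
    tier: RiskTier
    evidence: set[str] = field(default_factory=set)

    def missing_obligations(self) -> set[str]:
        """Return the high-risk obligations that are not yet documented."""
        if self.tier is not RiskTier.HIGH:
            return set()
        return HIGH_RISK_OBLIGATIONS - self.evidence


# Example: a hypothetical credit-scoring engine assessed as high-risk.
assessment = RiskSelfAssessment(
    system_name="LoanScore",
    intended_use="Credit scoring for retail lending",
    tier=RiskTier.HIGH,
    evidence={"technical_documentation"},
)
print(assessment.missing_obligations())
# missing: risk_management_measures, synthetic_content_labeling
```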
"High-risk" captures any AI whose output can affect life, health, fundamental rights or critical infrastructure. Typical examples named in guidance include medical-imaging software, credit-scoring engines, recruitment tools and algorithms that manage power grids or urban traffic.
From paperwork to public register - how the listing works
| Step | Deadline | Document package | Review body |
|---|---|---|---|
| 2. Compliance check | 10 working days | Full dossier plus auditor report | Same regulator |
| 3. Publication | 5 working days after approval | System name, version, intended use, limitations | Agency website (PDF, searchable) |
| 4. Re-submission | 5 working days | Corrected documents if gaps found | Same regulator |
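Assuming a standard Monday-to-Friday working week and ignoring Kazakh public holidays, the deadlines in the table translate into a simple timeline calculation. The sketch below is illustrative only; the submission date is hypothetical and real deadlines would follow the regulator's own calendar rules.

```python
from datetime import date, timedelta

# Working-day deadlines taken from the table above.
COMPLIANCE_CHECK_DAYS = 10  # step 2: compliance check
PUBLICATION_DAYS = 5        # step 3: publication after approval


def add_working_days(start: date, days: int) -> date:
    """Advance a date by a number of Monday-to-Friday working days."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday = 0 .. Friday = 4
            remaining -= 1
    return current


submitted = date(2026, 3, 2)  # hypothetical submission date
decision_due = add_working_days(submitted, COMPLIANCE_CHECK_DAYS)
publication_due = add_working_days(decision_due, PUBLICATION_DAYS)
print(decision_due, publication_due)
```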
Audits must be conducted by private firms accredited under Order No. 263 of 13 June 2018, the same framework used for traditional information system security evaluations. The auditor's review assesses the AI's purpose, functionality, data sources, algorithmic logic, potential harms, and mitigation measures. Regulators do not review code but verify the completeness of audit evidence and the disclosure of residual risks.
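As a rough illustration of what "completeness of audit evidence" could mean in practice, the sketch below checks a dossier against the review topics listed above. The section keys and dossier structure are assumptions for illustration, not an official template.

```python
# Review topics mirroring the audit scope described above, plus the
# residual-risk disclosure the regulator expects to see.
AUDIT_SECTIONS = (
    "purpose",
    "functionality",
    "data_sources",
    "algorithmic_logic",
    "potential_harms",
    "mitigation_measures",
    "residual_risks",
)


def completeness_gaps(dossier: dict[str, str]) -> list[str]:
    """Return the audit sections that are missing or left empty in a dossier."""
    return [s for s in AUDIT_SECTIONS if not dossier.get(s, "").strip()]


dossier = {
    "purpose": "Triage support for chest X-rays",
    "data_sources": "Anonymised hospital archive, 2019-2024",
    "residual_risks": "",
}
print(completeness_gaps(dossier))
# ['functionality', 'algorithmic_logic', 'potential_harms',
#  'mitigation_measures', 'residual_risks']
```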
Why companies are lining up anyway
Strong market demand is driving developers toward this voluntary audit. Banks, hospitals, and retailers are integrating AI for critical functions like loan origination, clinical decision support, and dynamic pricing. These organizations prefer officially listed products to streamline procurement and shift liability to the vendor. Reflecting this trend, most new proposals for the 62 AI projects currently underway, worth 9.7 billion tenge (about US$17 million), now require suppliers to commit to the trust audit.
The new compliance requirements have fueled a boom for advisory firms assisting with the extensive technical documentation. Preparing the technical file costs US$25,000 on average, and complex platforms can reach US$80,000 once stress-testing and fairness metrics are included. Non-compliance, such as failing to label AI-generated content or to manage risks, can draw fines of 15 to 200 monthly calculation indices (MCI) under new articles 641-1 and 692-3 of the Code of Administrative Offences, with repeat offenses leading to suspension.
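Because the MCI is revised each year, the tenge exposure depends on the index value in force when the offence occurs. The helper below only does the arithmetic; the index value passed in is a placeholder, not the official figure for any particular year.

```python
def fine_range_tenge(mci_value: int, low_mci: int = 15, high_mci: int = 200) -> tuple[int, int]:
    """Convert a fine range expressed in MCI into tenge for a given index value."""
    return low_mci * mci_value, high_mci * mci_value


# Placeholder index value for illustration only.
print(fine_range_tenge(mci_value=4_000))  # (60000, 800000)
```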
For global vendors, the Kazakh framework is becoming a sandbox: aligning an AI module with Astana's rules now can ease later entry into the EU market, where similar conformity assessments begin in 2027 under the EU AI Act.
Practical challenges on the ground
- Auditor Shortage: A limited pool of only twelve accredited inspection bodies has created significant backlogs and month-long waiting times for audits.
- Intellectual Property Concerns: Some founders are hesitant to submit source code to government portals due to fears of trade secret exposure, despite provisions allowing for redacted submissions.
- Cross-Border Data Compliance: AI systems trained abroad must prove that the personal data of Kazakh citizens remained within the country during fine-tuning, a requirement that echoes the older Personal Data Law No. 94-V.
- Lack of Sector Coordination: The Ministries of Digital Development, Health, and Energy are still developing harmonized checklists to prevent a single fintech algorithm from requiring multiple, redundant assessments.
In response, international system integrators are offering "audit-ready" accelerators. These packages include pre-documented architectures, logging pipelines, and bias tests to streamline compliance. One regional Salesforce platinum partner that has already localized CRM analytics for Tele2 Kazakhstan and L'Oréal is marketing a similar toolkit for Einstein GPT implementations, arguing that model cards and data-lineage diagrams can be auto-generated from the same metadata used for CRM integration.
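The claim that model cards can be produced from metadata an integration project already tracks can be shown with a minimal sketch. The field names and Markdown layout below are invented for illustration; they do not represent any vendor's actual toolkit or Salesforce's Einstein GPT tooling.

```python
def render_model_card(meta: dict[str, str]) -> str:
    """Render integration metadata as a Markdown model-card stub."""
    lines = [f"# Model card: {meta.get('model_name', 'unnamed model')}", ""]
    for key in ("intended_use", "training_data", "data_lineage",
                "known_limitations", "residual_risks"):
        lines.append(f"## {key.replace('_', ' ').title()}")
        lines.append(meta.get(key, "_not documented_"))
        lines.append("")
    return "\n".join(lines)


card = render_model_card({
    "model_name": "Lead-scoring assistant",
    "intended_use": "Rank inbound CRM leads for follow-up",
    "data_lineage": "CRM opportunity records, 2022-2025, stored in-country",
})
print(card)
```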
What happens next
The first public lists of trusted AI systems are expected in late Q3 2026, according to Astana Hub, a major technology park. Concurrently, the Ministry of Artificial Intelligence and Digital Development is preparing sandbox legislation to allow companies a 180-day trial period for unrated systems before committing to a full audit. Proposed amendments also aim to extend the audit requirement to medium-risk AI, such as chatbots and marketing engines, which could double the market for compliance services.
Developers have a limited opportunity to influence the final compliance standards. Regulators are accepting public comments on the proposed technical metrics until 15 September 2026, allowing early movers to help shape the criteria for inclusion in Kazakhstan's first official trusted AI catalog.