Kazakhstan's New AI Law Defines Strict Rules, Bans, and User Rights

Kazakhstan pioneers AI law in Central Asia, setting global standards for risk, transparency, and user rights. Learn the key compliance requirements below.
Kazakhstan's new AI Law (No. 230-VIII) makes it the first Central Asian nation to regulate artificial intelligence, with enforcement beginning January 18, 2026. The legislation turns voluntary OECD/UN ethics guidelines into nationally binding rules, creating a comprehensive legal framework for technology companies.
The law introduces stringent requirements, including mandatory labeling for all AI-generated content, robust data protection measures, and prohibitions on high-risk AI applications like social scoring. It empowers individuals with rights to refuse AI services and demand explanations for automated decisions, aiming to make AI development in Kazakhstan safe, transparent, and rights-respecting.
What are the key requirements of Kazakhstan's new Law on Artificial Intelligence?
Kazakhstan's new AI law mandates a risk-based approach, requiring transparency labels for all synthetic media and strict data residency for biometric information. It grants users rights to refusal and explanation while banning social scoring and emotion recognition, with significant penalties for non-compliance and mandatory human oversight.
What the law calls "AI"
Under Article 1, the law defines artificial intelligence broadly as any system with the ability "to imitate human cognitive functions, yielding results comparable to or surpassing human intellectual activity." This inclusive definition encompasses everything from legacy expert systems to modern generative AI, signaling that regulatory focus will be on a system's actual performance rather than its marketing.
Risk table at a glance
| Risk level | Typical systems | Core duties before go-live |
|---|---|---|
| Minimal | Recommender widgets, spell checkers | Self-declaration and a basic user notice. |
| Medium | Chatbots for retail, CV screeners | Internal audit and a 10-year documentation archive. |
| High | Medical imaging, credit scoring, autonomous logistics | Third-party conformity assessment, a human oversight protocol, and data deletion within 15 days of a user's objection. |
The Ministry of AI and Digital Development holds final classification authority and may reclassify any AI system within 20 working days.
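The duty table above maps naturally onto a small lookup structure that a compliance tool could query before go-live. The sketch below is illustrative only: the tier names and duty strings paraphrase the table, not the statute's official wording.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTier:
    name: str
    duties: tuple

# Hypothetical encoding of the duty table; strings are paraphrases,
# not the law's official text.
TIERS = {
    "minimal": RiskTier("minimal", ("self-declaration", "basic user notice")),
    "medium": RiskTier("medium", ("internal audit", "10-year documentation archive")),
    "high": RiskTier("high", (
        "third-party conformity assessment",
        "human oversight protocol",
        "data deletion within 15 days of objection",
    )),
}

def duties_before_launch(tier: str) -> tuple:
    """Return the pre-launch duties for a given risk tier."""
    try:
        return TIERS[tier].duties
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

for duty in duties_before_launch("high"):
    print(duty)
```

Because the ministry can reclassify a system within 20 working days, any such mapping should be treated as provisional until the regulator confirms the tier.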
Prohibited practices - a near-copy of Brussels, but with local teeth
Article 17 bans:
- Using subliminal techniques to distort human behavior.
- Implementing social scoring systems that result in "unjustified or disproportionate" treatment.
- Deploying emotion recognition technologies in public spaces without obtaining explicit consent.
- Using biometric categorization to infer sensitive personal attributes like ethnicity or political opinions.
While specific fines and criminal liability are still under development, the market is already treating these prohibitions as non-negotiable. A senior banker told the Astana Times that the ban on certain scoring types compelled his institution to completely rebuild its credit underwriting engine.
Transparency rules that marketing teams cannot ignore
The law mandates that all synthetic media, including images, video, audio, and text, must feature a clear, machine-readable label. This responsibility falls on the owner/holder of the AI system, not the distribution platform, making brand managers directly accountable. Labels must remain "persistent and legible" through compression or format changes, favoring embedded watermarks over simple metadata.
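A machine-readable label is, at minimum, a structured record bound to the asset it describes. The sketch below shows one plausible payload, assuming a JSON schema of our own invention; the law requires that labels exist and persist, not this particular format, and a metadata-only label like this would still need an embedded watermark to survive compression.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_ai_label(generator: str, content: bytes) -> str:
    """Build a machine-readable provenance label for a synthetic asset.
    Field names are illustrative, not regulator-mandated."""
    return json.dumps({
        "ai_generated": True,
        "generator": generator,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # The hash ties the label to the exact bytes it describes, so a
        # verifier can detect a label copied onto different content.
        "sha256": hashlib.sha256(content).hexdigest(),
    }, sort_keys=True)

label = make_ai_label("acme-imagegen-v2", b"synthetic image bytes")
print(label)
```

Standards such as C2PA take a similar record-plus-binding approach; the "persistent and legible" requirement is what pushes implementers from sidecar metadata toward watermarks embedded in the pixels or audio itself.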
User rights - the three-business-day clock starts now
Consumers gain four new statutory rights:
1. The right to refuse interaction with an AI system, unless automation is required by law.
2. The right to demand an explanation for any AI-driven decision that "affects their legal position."
3. The right to receive a copy of their personal data processed by the AI within 3 business days.
4. The right to have data processing stopped within 15 business days of withdrawing consent.
Failure to comply with the 15-day data processing cessation deadline will result in administrative penalties of up to 2,000 monthly-calculation indices (approximately USD 13,000).
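Both deadlines above run in business days, so a compliance calendar has to skip weekends when computing them. A minimal sketch, ignoring Kazakhstan's public holidays for simplicity:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days (Mon-Fri) from `start`.
    Public holidays are omitted here; a real compliance calendar
    would also skip Kazakhstan's official holidays."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # weekday() 0-4 = Monday-Friday
            remaining -= 1
    return current

request_received = date(2026, 1, 19)  # a Monday
print(add_business_days(request_received, 3))   # data-copy deadline
print(add_business_days(request_received, 15))  # cessation deadline
```

For a request received on Monday 19 January 2026, the three-business-day data-copy window closes on Thursday 22 January, and the 15-business-day cessation window closes on Monday 9 February.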
National AI Platform - sandbox plus cloud credits
A national AI platform, operated by National Information Technologies JSC, will provide a sandbox environment with GPU clusters, open datasets, and annotated Kazakh-Russian language corpora. Startups in the Astana Hub can access free compute credits (valued at ~USD 5,000 per quarter) but must open-source any models trained on the platform's public data. This policy aims to foster a domestic LLM ecosystem.
Data residency - global cloud, local vault
While the AI law does not directly address cross-border data transfers, concurrent amendments to the Personal Data Act strengthen the existing mandate: citizens' biometrics and health records must be stored on servers physically located within Kazakhstan. Major cloud providers are responding with "Kazakhstan-only" data zones, though one integrator notes this can increase total ownership costs by 8-12% for multinational clients.
Copyright twist - no human, no protection
In a significant move on intellectual property, the law aligns with the U.S. Copyright Office's stance: works generated "solely by an AI system without creative human contribution" are not eligible for copyright protection. This provision directly impacts media and advertising firms, requiring them to ensure human creative involvement to secure exclusivity over AI-assisted content.
Compliance checklist for 2026
- Establish an internal AI registry mapping each model to its risk classification, data sources, and training history.
- Prepare a concise "AI factsheet" for each system, ready for submission to regulators upon request.
- Integrate mandatory labeling logic for synthetic content into your CI/CD pipeline and test for persistence across formats.
- Update your privacy policy to reflect the new user rights, including the 15-day data processing cessation timeline.
- Allocate budget for third-party audits if your AI systems are classified as high-risk (e.g., finance, health, critical infrastructure).
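The first two checklist items can be combined: each registry entry doubles as the raw material for an "AI factsheet". The sketch below assumes a field layout of our own choosing; the law does not prescribe a factsheet schema.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class RegistryEntry:
    """One row of an internal AI registry. Field names follow the
    checklist above and are illustrative, not regulator-mandated."""
    model_name: str
    risk_class: str                 # "minimal" | "medium" | "high"
    data_sources: list = field(default_factory=list)
    training_history: str = ""
    human_overseer: str = ""        # accountable person for high-risk systems

def to_factsheet(entry: RegistryEntry) -> str:
    """Serialize a registry entry as the factsheet kept ready for regulators."""
    return json.dumps(asdict(entry), indent=2, sort_keys=True)

entry = RegistryEntry(
    model_name="credit-scoring-v4",
    risk_class="high",
    data_sources=["loan-applications-2020-2025"],
    training_history="retrained 2025-11; external audit scheduled",
    human_overseer="risk-office@example.kz",
)
print(to_factsheet(entry))
```

Keeping the registry as structured data rather than prose makes the 10-year documentation archive for medium-risk systems, and the on-request submission for regulators, a serialization step instead of a drafting exercise.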
Early market signals
- At the Almaty FinTech Days in December 2025, venture capital investors reported that roughly 40% of their due diligence questions now concern AI risk assessment.
- Two international telemedicine startups have delayed their market entry by six months to re-engineer diagnostic algorithms, removing the now-prohibited emotion recognition features.
- To mitigate liability risks from "opaque decisions," Kazakhstan's second-largest airline is replacing its complex dynamic pricing models with simpler, rule-based alternatives.
"We see the law as export standard diplomacy - Kazakh firms that comply here will find doors open in Brussels or Singapore," Deputy Minister Zhaslan Madiyev told Caspian Post.
As the law takes effect, every AI deployment, code update, and model card within Kazakhstan will require a documented risk assessment, a transparency label, and clear human accountability. This marks a fundamental shift in the country's technology governance.