DCO Policy Tracker
Welcome to the DCO Policy Tracker (The Tracker), your go-to resource for the latest developments in the digital policy space. The Tracker reflects the DCO’s commitment to supporting inclusive and sustainable digital transformation by equipping digital economy stakeholders, particularly policymakers, with timely insights into emerging digital policy trends.
Designed to deliver updates, analysis, and visibility on digital policy developments from across the globe, the Tracker translates a complex digital governance landscape into practical policy insights.
Pakistan has unveiled the “Islamabad AI Declaration,” setting out a national direction for sovereign and responsible AI. The declaration focuses on building AI systems that create public value, ensure trusted data stewardship, and establish accountable governance mechanisms.
The declaration signals political intent and provides a framing document for future policy and implementation measures; its practical impact will depend on follow-through via strategy, institutions, funding, and standards.
Bangladesh has published draft v2.0 of its National AI Policy 2026-2030. The draft adopts a risk-based regulatory framework, establishing four tiers from prohibited to minimal-risk. It explicitly prohibits mass surveillance and social scoring, and mandates algorithmic impact assessments for high-risk systems. It also includes a commitment to ratify the Council of Europe’s Framework Convention on AI.
This draft represents a significant step toward establishing AI governance infrastructure, including risk classification and safeguards aimed at reducing rights and trust-related risks. Its success, however, hinges on building effective institutions for enforcement, such as the proposed National Data Governance and Innovation Agency (NDGIA).
The U.S. National Institute of Standards and Technology (NIST), through its Center for AI Standards and Innovation (CAISI), announced the AI Agent Standards Initiative. It aims to foster industry-led technical standards and open-source protocols for AI agents. The initiative focuses on three pillars: facilitating standards development, fostering open-source protocols, and advancing research on AI agent security and identity. It includes public input mechanisms like Requests for Information (RFIs) and listening sessions.
This initiative proactively addresses fragmentation and trust barriers to the adoption of autonomous AI agents. By promoting interoperability and security standards, it aims to build public trust and cement U.S. leadership in this critical, emerging area of technology.
On February 18, the Spanish Data Protection Agency (AEPD) published new guidelines on the use of agentic AI systems in processing personal data. The document is aimed at data controllers and processors. It explains how agentic AI works, analyzes its specific vulnerabilities and associated risks to data protection, and lists possible measures to ensure compliance with regulations like the GDPR. The guidance emphasizes the need to understand the technology to make informed decisions and apply data protection by design.
This guidance provides a clear, authoritative interpretation of how existing data protection law applies to the emerging technology of agentic AI within a major European economy. It forces organizations to proactively assess risks and implement safeguards before deploying such systems, directly influencing how this technology is adopted in Spain.
At the AI Impact Summit in New Delhi (February 2026), the UAE announced that it will co-chair the 2027 Summit alongside the Swiss Confederation in Geneva and then host the Artificial Intelligence Summit in 2028. These leadership roles reflect the UAE’s commitment to strengthening international cooperation in advanced technology, building strategic partnerships, and leveraging AI solutions for sustainable development. AI Minister Omar Sultan Al Olama emphasized the UAE’s focus on creating platforms for collaboration, expertise exchange, and joint policy development to ensure responsible AI adoption, balancing rapid innovation with ethical and regulatory frameworks.
These consecutive leadership positions underscore growing international trust in the UAE’s balanced approach to AI governance – combining rapid innovation with robust ethical and regulatory safeguards. By hosting and co-chairing these global summits, the UAE is positioning itself as a central convener for shaping international AI standards, promoting safe and inclusive AI use, and ensuring alignment with human values. The initiatives aim to expand knowledge transfer, build capacities in developing nations, and establish international frameworks to address global challenges.
From February 16-20, India hosted the AI Impact Summit in New Delhi, bringing together 20 heads of state (including France’s Macron and Brazil’s Lula) and top tech CEOs (from Google, OpenAI, Microsoft). The summit aimed to position India as a bridge between developed economies and the Global South on AI. Discussions focused on inclusive growth, sustainable development, and AI’s role in strengthening public infrastructure. The event concluded with a non-binding ‘New Delhi declaration’ on AI development goals.
This high-profile summit elevates India’s role as a key voice in global AI governance, particularly representing the perspectives and needs of the Global South. It reinforces the shift of AI dialogues from pure safety towards broader themes of economic development, infrastructure, and inclusive growth.
On February 19, participants at the India-hosted AI Impact Summit adopted the New Delhi Declaration. Structured around seven pillars (“Chakras”), the declaration outlines voluntary and non-binding principles for international AI cooperation. Key initiatives launched include a Charter for the Democratic Diffusion of AI, a Global AI Impact Commons for sharing use cases, a Trusted AI Commons for security resources, and an International Network of AI for Science Institutions. It also includes guiding principles on reskilling, workforce development, and energy-efficient, resilient AI infrastructure.
The declaration establishes a significant Global South-led framework for AI governance, complementing existing initiatives. It shifts the focus towards democratizing access, practical applications for social good, and capacity building, potentially influencing how emerging economies approach AI development and international partnerships.
The Cyber Security Agency of Singapore (CSA), with the Ministry of Home Affairs (MHA), launched a pilot National Simulated Scams Exercise (NSSE) from March to August 2026. The opt-in exercise, part of the broader Exercise SG Ready 2026 (a Total Defence Exercise held to increase Singaporeans’ readiness for crises and disruptions), sends participants simulated robocalls mimicking Government Official Impersonation Scams (GOIS). The goal is to educate the public on scam tactics and teach protective steps in a safe environment.
This large-scale, proactive public education effort directly addresses a rapidly growing scam type. By simulating real threats, it aims to build societal resilience and “digital defence” by making individuals more adept at recognising and resisting scams, thereby reducing the success rate of real criminal operations.
Thailand hosted the Cybersec Asia x Thailand International Cyber Week 2026 from 4-5 February 2026 in Bangkok, co-located for the first time with the AI ASIA event. Organized by VNU Asia Pacific and supported by the National Cyber Security Agency (NCSA) and other bodies, the event brought together over 4,000 professionals, 140 exhibitors from 15 countries, and international pavilions. It featured the “Cyber All Star Pavilion” on collaborative defense and focused on policy dialogue, investment opportunities, and regional cooperation at the intersection of cybersecurity and AI.
This high-profile event positioned Bangkok as a regional hub for digital security and AI dialogue. It facilitated cross-border partnerships between governments, industry, and innovators, aiming to strengthen collective cyber resilience and explore secure AI adoption to support Southeast Asia’s accelerating digital economy.
The Philippines’ Department of Information and Communications Technology (DICT) launched three cybersecurity initiatives to shift from a reactive to a proactive security posture. The initiatives are: 1) a Bug Bounty Program inviting and incentivizing ethical hackers to find and report vulnerabilities in government systems; 2) the DICT Trusted Assessment Provider (D-TAP) framework, an accreditation program for trusted firms to conduct security audits on government platforms; and 3) the Cybersecurity Posture Assessment Laboratory (CPAL) to provide structured assessments for government offices to identify and address weaknesses early.
These programs aim to systematically strengthen the security of government digital services that millions of citizens rely on. By institutionalizing vulnerability disclosure, third-party auditing, and proactive assessments, the initiatives build a more resilient public sector digital ecosystem and embed security by design.
On February 11, the Central Bank of the UAE (CBUAE) issued a Guidance Note on Consumer Protection and the Responsible Adoption and Use of Artificial Intelligence. The document provides a framework for licensed financial institutions to govern their use of AI, ensuring it aligns with consumer protection principles. The guidance addresses key requirements such as ensuring fairness and non-discrimination in AI-driven decisions, maintaining transparency and explainability to consumers, establishing accountability for AI outcomes, and safeguarding consumer data privacy when using AI systems.
This guidance sets clear expectations for financial institutions in the UAE on how to integrate AI responsibly while prioritizing consumer protection. It helps mitigate risks such as algorithmic bias, opaque decision-making, and potential consumer harm, thereby fostering trust in AI-enabled financial services. By providing a structured framework, it encourages innovation within a safe and ethical boundary.
Bank Indonesia (BI) has joined the Bank for International Settlements (BIS)-led Nexus project as a full member. It will work with the central banks of Malaysia, the Philippines, Singapore, Thailand, and India to link its BI-FAST instant payment system with the Nexus platform. The goal is to enable faster, cheaper, and more interoperable cross-border payments, particularly benefiting remittances, trade, and tourism. BI will upgrade BI-FAST to ensure interoperability while maintaining domestic clearing and settlement within Indonesia.
This integration positions Indonesia within a growing regional and global network for instant payments. It promises to significantly reduce transaction costs and processing times for cross-border commerce and remittances, boosting financial inclusion and deepening regional economic ties. The project aligns with ASEAN and G20 payment connectivity goals.
Following the Bithumb internal control incident (where a South Korean cryptocurrency exchange accidentally gave away more than $40bn worth of bitcoin to customers during a promotional event), financial authorities agreed to regulate crypto exchanges at the level of financial firms. During a National Assembly inquiry on 11 February, the Financial Services Commission and the Financial Supervisory Service confirmed plans to incorporate binding internal control standards, mandatory periodic checks by external institutions, and strict liability for damages from system accidents into the upcoming Digital Asset Basic Act (the so-called “second-stage” virtual asset law).
This development increases momentum for more comprehensive crypto regulation, moving beyond user protection to cover exchange governance, risk management, and operational resilience. It imposes institutional-level obligations on exchanges, including real-time system reconciliation (matching wallet holdings with internal databases) to prevent similar incidents.
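The reconciliation obligation described above (continuously matching on-chain wallet holdings against the exchange's internal ledger) can be sketched as a simple balance comparison. This is an illustrative simplification, not drawn from the forthcoming Korean rules; all function and field names below are hypothetical.

```python
from decimal import Decimal

def reconcile(wallet_balances: dict[str, Decimal],
              ledger_balances: dict[str, Decimal],
              tolerance: Decimal = Decimal("0")) -> list[tuple[str, Decimal]]:
    """Return (asset, discrepancy) pairs where on-chain wallet holdings
    and the exchange's internal ledger disagree beyond the tolerance."""
    mismatches = []
    # Check every asset that appears on either side, so missing entries surface too.
    for asset in sorted(set(wallet_balances) | set(ledger_balances)):
        diff = (wallet_balances.get(asset, Decimal("0"))
                - ledger_balances.get(asset, Decimal("0")))
        if abs(diff) > tolerance:
            mismatches.append((asset, diff))
    return mismatches

# Example: the internal ledger credits customers with more BTC than the
# wallets actually hold, and records ETH the wallets do not contain at all.
alerts = reconcile({"BTC": Decimal("102.5")},
                   {"BTC": Decimal("150.0"), "ETH": Decimal("10")})
```

In a production setting such a check would run continuously against live wallet snapshots, with any non-empty result triggering an immediate halt on withdrawals; `Decimal` is used rather than floats to avoid rounding artifacts in monetary comparisons.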
During its latest plenary on February 12, the European Data Protection Board (EDPB) adopted its work program for 2026-2027. Built on four pillars, it aims to: 1) enhance harmonization and promote compliance (e.g., new guidelines on anonymization, children’s data, “Consent or Pay”); 2) reinforce a common enforcement culture and cooperation among Data Protection Authorities (DPAs); 3) safeguard data protection in the digital landscape (e.g., interplay between the AI Act and GDPR); and 4) contribute to the global dialogue on data protection.
This program sets the strategic direction for data protection enforcement and guidance across Europe for two years. It directly impacts how the GDPR is interpreted and applied, influencing compliance burdens for businesses and the rights of individuals.
Singapore announced it will establish a National AI Council, chaired by the Prime Minister and comprising ministers from key portfolios. The Council will coordinate AI adoption and delivery across government and drive the implementation of Singapore’s National AI Strategy. Its roles include providing overarching strategic direction, overseeing the execution of “AI missions” in priority sectors (finance, healthcare, advanced manufacturing, and connectivity), and unlocking regulatory enablers and resources to accelerate the development, testing, deployment, and scaling of AI solutions.
This high-level institutional mechanism signals a whole-of-government commitment to AI leadership. By centralizing strategic direction under the Prime Minister and focusing on specific sectoral missions, the Council aims to streamline cross-ministerial coordination, remove regulatory bottlenecks, and ensure targeted resources are deployed to transform key economic sectors through AI.
The Cyberspace Administration of China (CAC) launched a one-month “Qinglang” (“clear and bright”) campaign ahead of the 2026 Spring Festival. The campaign specifically targets the misuse of AI to generate harmful and misleading content, including mass-produced low-quality articles, vulgar parodies of classic media using AI, and fabricated “motivational” or “expert” content designed to mislead public understanding. Regulators reported removing over 543,000 pieces of non-compliant content and taking action against more than 13,400 accounts. The campaign also addresses online conflicts in “fan circles,” wealth displays, and content promoting social antagonism.
This targeted enforcement action signals intensified government oversight of AI-generated content and its potential societal impact. It puts platforms on notice regarding their responsibility to label AI content and prevent the spread of algorithmically produced misinformation. The specific focus on AI misuse reflects growing regulatory concern about how generative AI can amplify low-quality or deceptive content at scale, potentially shaping public discourse and social values.
On February 13, China’s State Administration for Market Regulation (SAMR) released guidelines on antitrust compliance specifically for internet platforms. The guidelines translate key provisions of the Anti-Monopoly Law into clear behavioral boundaries, identifying new types of monopoly risks across eight scenarios relevant to the platform economy. These include algorithmic collusion between platforms, organizing or assisting on-platform operators to reach monopoly agreements, unfairly high pricing, below-cost selling, blocking and restricting access, and discriminatory treatment. The guidelines also provide platform operators with practical recommendations for strengthening their internal antitrust compliance management.
This guidance provides long-sought clarity for internet platforms on the specific boundaries of monopolistic conduct, resolving past compliance uncertainties (e.g., whether algorithm-based coordinated pricing constitutes a monopoly). By delineating clear “red lines,” it encourages platforms to shift focus from short-term profit-driven monopolistic practices toward technological innovation and service optimization. It is expected that platforms will proactively review and rectify their practices to improve compliance management systems.
Qatar’s Ministry of Commerce and Industry (MoCI), in partnership with Microsoft, launched an “AI Agent Factory” – a digital platform designed to rapidly develop and deploy intelligent AI agents across government services. The platform enables automation of tasks such as processing applications and answering queries, integrating with existing government systems to allow faster rollout of AI-powered services across multiple departments. The initiative aims to modernize bureaucracy, enhance operational efficiency, and create a unified user experience across government touchpoints.
This initiative represents a significant step in embedding AI into Qatar’s public sector. It promises to streamline government operations, improve service quality, and support decision-making by enabling faster deployment of AI solutions without lengthy traditional IT development cycles. It is part of a broader national push that includes AI governance frameworks, civil service upskilling, and pilot projects in transport and health.
The Saudi Council of Ministers approved a new Copyright Law on January 27, marking a major update to the IP framework. Key themes include: clearer regulation of “neighboring rights” for performers and broadcasters; targeted exceptions to enable the development of AI products and algorithms; enhanced enforcement mechanisms; and stricter penalties. The law aims to modernize protection for the digital environment and align with international conventions.
This new law significantly strengthens the legal foundation for protecting digital and creative assets in Saudi Arabia. By introducing AI-specific exceptions, it balances the rights of creators with the need to foster technological innovation. The enhanced enforcement and penalties provide stronger deterrence against IP violations, boosting confidence for investors and businesses in the digital and creative economies.
Singapore’s Ministry of Digital Development and Information (MDDI), with community partners, expanded digital parenting programs to support families in developing healthy digital habits. Activities included an engagement session for over 300 parents and youths (ages 12-17) with panel discussions and interactive booths on setting digital boundaries. The Infocomm Media Development Authority (IMDA) also introduced a new Cyber Wellness lesson package for schools and launched resources on the Digital for Life portal, focusing on four key actions: setting boundaries, thinking before acting, reporting inappropriate content, and engaging/supporting children online.
This multi-pronged initiative strengthens national efforts to foster a safer digital environment for children. By directly engaging families in communities and schools, it aims to build practical digital literacy skills, encourage open parent-child communication about online risks, and establish shared family rules for technology use, thereby enhancing overall societal digital resilience.
The Philippine Senate Committee held a hearing on five bills proposing to regulate or ban minors’ social media access, with minimum age limits ranging from 12 to 18. Experts and government agencies (National Privacy Commission, Department of Education) advocated for a “calibrated, age-appropriate framework” rather than an outright ban, recognizing children’s evolving capacities. Key issues discussed included the need for stronger data protection in age verification, the role of parental mediation, and addressing “addictive” platform design features like infinite scroll and algorithmic recommendations.
This hearing signals a significant move toward regulating online platforms to protect children in the Philippines. The debate is shaping a potential legal framework that could mandate age-appropriate design, impose data minimization standards for age verification, and require platforms to limit features that promote compulsive use among minors.
On February 18, the Indian government released a draft Digital Trade Facilitation Bill, 2026, and invited public comments on its provisions. The bill aims to establish a comprehensive legal framework to facilitate and regulate digital trade. The proposed legislation seeks to provide statutory recognition to electronic trade documents, enable trusted digital verification mechanisms, and facilitate secure cross-border exchange of trade records.
This consultation marks a key step toward creating a dedicated law for digital trade in India. The eventual act is expected to impact cross-border data flows, e-commerce operations, and the ease of doing digital business.
On January 30, the Cyberspace Administration of China (CAC) released draft guidelines governing the overseas transmission of data collected by smart vehicles. The proposed framework introduces a risk-based classification system: data categorized as “general” may be exported with self-assessments, while “important” or “sensitive” data (e.g., involving sensitive infrastructure or individuals) will face additional scrutiny and potential security assessments. The guidelines provide mechanisms for companies to apply for review, including for data essential to improving autonomous driving systems and software performance.
This draft signals potential regulatory flexibility from China’s previously stringent data localization rules. For global automakers like Tesla, which rely on real-world driving data to train features like Full Self-Driving (FSD), the guidelines could clarify compliance pathways and enable technology rollout. The framework aims to balance national security and privacy with industry competitiveness, potentially accelerating software-driven vehicle development in the world’s largest automotive market.
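The tiered routing at the heart of the draft can be illustrated with a toy classifier: records are bucketed by category, and only the general tier proceeds on a self-assessment. The tags, markers, and category boundaries below are invented for illustration and do not come from the CAC text.

```python
from enum import Enum

class DataClass(Enum):
    GENERAL = "general"       # exportable after a self-assessment
    IMPORTANT = "important"   # subject to additional scrutiny
    SENSITIVE = "sensitive"   # subject to a regulator security assessment

# Hypothetical markers for illustration only; the draft defines its own categories.
SENSITIVE_MARKERS = {"biometric", "infrastructure_geo", "cabin_audio"}

def classify(record_tags: set[str]) -> DataClass:
    """Bucket a vehicle-data record into a risk tier from its tags."""
    if record_tags & SENSITIVE_MARKERS:
        return DataClass.SENSITIVE
    if "precise_location" in record_tags:
        return DataClass.IMPORTANT
    return DataClass.GENERAL

def export_path(record_tags: set[str]) -> str:
    """Route a record to the compliance pathway its tier implies."""
    if classify(record_tags) is DataClass.GENERAL:
        return "export_after_self_assessment"
    return "submit_for_security_assessment"
```

The practical consequence for automakers is that the classification step, not the export itself, becomes the compliance-critical code path: mislabeling a sensitive record as general is the failure mode regulators would scrutinize.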
The Japan Fair Trade Commission (JFTC) published the first set of compliance reports under the new Act on Promoting Competition for Specific Smartphone Software. The reports, submitted by Google and Apple, detail how they are opening their ecosystems to comply with the law’s mandates regarding third-party app stores and alternative payment systems. Google’s report highlights the expansion of its User Choice Billing (UCB) program for game developers in Japan. Apple’s report acknowledges fundamental changes to its app distribution methods while expressing concerns about potential privacy and security risks from opening its platform.
This marks a major milestone in enforcing Japan’s new digital competition regime. The public disclosure of these reports provides unprecedented transparency into how major platform providers are adapting their business models. The process shifts the burden of proof to gatekeepers to demonstrate compliance and serves as a diagnostic tool for the JFTC to identify whether corporate policies align with the law’s spirit, potentially setting the tone for future enforcement actions.
Singapore’s Personal Data Protection Commission (PDPC) announced that private organizations have until 31 December 2026 to phase out the use of full or partial National Registration Identity Card (NRIC) numbers for authentication. From 1 January 2027, PDPC will step up enforcement action under the Personal Data Protection Act (PDPA) against this practice, which is considered a failure to make reasonable security arrangements.
This requirement compels private organizations to redesign authentication flows and reduce reliance on NRIC numbers as a credential, lowering risks of unauthorized access and identity-based fraud. It builds on a 2025 joint advisory and aligns private sector practice with existing government policy, significantly enhancing national data security.
Greece plans to complete the nationwide rollout of its digital work card system by the end of 2026, extending it to the entire private and public sectors. The system requires employees to digitally register their working hours, including overtime, to ensure compliance with labor laws.
The expansion aims to significantly increase transparency in the labor market, curb undeclared work, and ensure accurate recording and compensation for overtime. Fines for violations will range from USD 2,000 to 12,000 per employee.
The U.S. Department of Labor (DOL) released an AI Literacy Framework defining foundational competencies for using and evaluating AI responsibly in the workplace. The framework outlines five areas: Understand AI Principles, Explore AI Uses, Direct AI Effectively, Evaluate AI Outputs, and Use AI Responsibly. It also includes delivery principles like enabling experiential learning and building complementary human skills (e.g., critical thinking).
This framework provides a national reference point for integrating AI literacy into workforce development and education. It aims to prepare workers across industries for an AI-enabled economy by defining essential skills and ethical considerations.
As part of Budget 2026, Singapore launched targeted measures to accelerate AI adoption by small and medium enterprises (SMEs). A new “Champions of AI” program will support firms using AI for enterprise transformation and workforce training. The Enterprise Innovation Scheme is expanded to include AI expenditures as a qualifying activity, granting a 400% tax deduction for the Years of Assessment 2027 and 2028, capped at USD 39,000 per year. Additionally, the Productivity Solutions Grant (PSG) will be strengthened to support companies in adopting pre-approved AI solutions and equipment. These measures are complemented by a workforce upskilling framework for professionals outside the tech sector, starting with accountancy and law, to transform workflows with AI. Singaporeans in selected AI courses will receive six months of free access to premium AI tools.
These fiscal incentives directly lower the financial barrier for SMEs to invest in AI technologies and related training. The Champions of AI program provides tailored support for business transformation, while the enhanced PSG facilitates the adoption of practical, pre-vetted AI tools. This multi-pronged approach is designed to boost SME productivity, innovation, and competitiveness in the digital economy.
The Philippines, through the Intellectual Property Office of the Philippines (IPOPHL), is spearheading key initiatives under the newly effective ASEAN Intellectual Property Rights Action Plan for 2026-2030 (AIPRAP 2030). As part of its ASEAN chairship, the Philippines is championing projects including: developing a harmonized ASEAN IP Valuation framework, promoting intra-regional training for IP offices, establishing IP commercialization hubs (like Technology and Innovation Support Centres), and contributing to curriculum development for the proposed ASEAN IP Academy. These efforts support AIPRAP 2030’s goals of strengthening national IP regimes, harmonizing frameworks, facilitating IP commercialization, and fostering a culture of respect for IP to drive an integrated, innovation-driven ASEAN community.
This regional action plan, with the Philippines driving key components, aims to create a more cohesive and effective IP ecosystem across Southeast Asia. A harmonized IP valuation framework and commercialization hubs can help startups and SMEs unlock the economic value of their innovations. Enhanced regional cooperation and capacity building can streamline cross-border IP protection and enforcement, fostering greater digital trade and investment in intangible assets.
South Korea’s Ministry of Trade, Industry and Resources (MOTI) announced partial amendments to the Enforcement Decree of the Industrial Cluster Development and Factory Establishment Act and related rules. Key changes include: 1) allowing electrical, information and communications (ICT), fire safety, and construction businesses to enter industrial complexes when supporting on-site manufacturing; 2) expanding eligible knowledge and ICT industries from 78 to 95 business types; 3) increasing advanced industry categories from 85 to 92 (e.g., basic pharmaceuticals, secondary batteries, electric trucks, aircraft engines), permitting new or expanded factories in the greater Seoul area and on greenfield sites; and 4) allowing cultural, sports, and renewable energy facilities, as well as cafes and convenience stores, within complexes for worker and community use.
These reforms significantly reduce regulatory burdens for corporations, eliminating the need for separate off-site offices for support services. By expanding eligible business types and permitting factory construction in previously restricted areas, the changes aim to stimulate investment, attract new industries, reduce vacancies in Knowledge Industry Complexes, and transform industrial zones into mixed-use spaces for innovation and community life.
The Technology Innovation Institute (TII) in Abu Dhabi unveiled a new cloud-based service granting direct access to its in-house Quantum Processing Units (QPUs). Initially available to TII partners, the platform allows users to run quantum workloads on physical quantum hardware via the cloud. The service is powered by TII’s Quantum Computing Hardware Lab, which operates multiple QPU systems ranging from 5 to 25 qubits, using chips fabricated in-house. It utilizes TII’s open-source quantum software framework, Qibo, to enable users to design quantum circuits and hybrid quantum-classical workflows through a unified interface.
This launch marks a significant milestone in positioning Abu Dhabi as an emerging hub for applied quantum computing. By providing cloud access to domestically developed quantum hardware, TII enables researchers and partners to accelerate experimentation and hybrid quantum-classical development. The initiative aims to foster a local quantum ecosystem, drive innovation, and attract global research collaboration.
South Korea’s Framework Act on the Development of AI and Creation of a Trust Foundation (“AI Basic Act”) entered into force on 22 January 2026, establishing a risk-based framework for responsible AI. The Act defines high-impact AI (systems that may affect life, physical safety, or fundamental rights), sets core trust principles (e.g., safety, reliability, accessibility), and creates governance structures, including a National AI Committee and an AI Safety Institute.
It also introduces operational obligations for developers/deployers, including requirements around human oversight, user notification and labeling of AI-generated content, and documentation/assessment expectations for higher-risk uses, backed by penalties for non-compliance.
This marks one of the world’s most extensive AI regulatory frameworks, influencing obligations for developers of high‑risk AI systems. It expands government oversight, introduces principles-based requirements, and sets the foundation for future certification, risk management, and safety practices across sectors including energy, food, medical devices, nuclear materials management, and biometrics.
Singapore’s IMDA launched the “Model AI Governance Framework for Agentic AI” on 22 January 2026 at the World Economic Forum. The framework provides guidance for responsible deployment of agentic AI, covering risk identification, bounding agent autonomy, ensuring meaningful human accountability, and outlining technical and non‑technical safeguards for safe agent operation.
This sets an international benchmark for regulating AI “agents” capable of autonomous action, influencing global expectations for safety, traceability, and oversight. It may shape cross‑border interoperability and governance norms for high‑autonomy AI.
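One common technique for “bounding agent autonomy” is an allow-list gate: low-risk actions run autonomously, anything outside the list requires explicit human sign-off, and every attempt is logged for traceability. The sketch below assumes nothing from IMDA’s framework beyond that general idea; the action names, log shape, and approval callback are all hypothetical.

```python
from typing import Callable

# Traceability: every attempted action is recorded, approved or not.
audit_log: list[tuple[str, dict]] = []

# Hypothetical allow-list of actions the agent may take without sign-off.
LOW_RISK_ACTIONS = {"search", "summarize", "draft_reply"}

def execute(action: str, payload: dict,
            human_approve: Callable[[str, dict], bool]) -> str:
    """Gate an agent action: autonomous within the allow-list,
    human-approved otherwise, blocked if approval is refused."""
    audit_log.append((action, payload))
    if action in LOW_RISK_ACTIONS:
        return f"ran {action} autonomously"
    if human_approve(action, payload):
        return f"ran {action} with human approval"
    return f"blocked {action}"

# A summary runs on its own; a payment is outside the allow-list,
# so it is blocked unless a human approves it.
execute("summarize", {"doc": "q3.pdf"}, lambda a, p: False)
execute("send_payment", {"amount": 500}, lambda a, p: False)
```

The design choice worth noting is that the log entry happens before the permission check, so even blocked attempts leave an audit trail, which is the kind of oversight and traceability property governance frameworks of this type tend to emphasize.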
France’s Competition Authority announced a sector-wide inquiry into the competitive functioning of generative AI conversational agents (e.g., ChatGPT, Gemini). The inquiry, prompted by rapid market growth and high concentration, will investigate business models, monetization strategies, potential self-referencing by dominant firms, and the rise of “agentic commerce” where AI tools autonomously guide transactions. A public consultation will be part of the process, with an opinion expected in 2026.
This inquiry represents one of the first major competition probes specifically targeting the nascent conversational AI sector. It could lead to recommendations or enforcement actions that shape market structure, business practices, and the competitive landscape for AI agents in France and potentially influence EU-wide approaches.
The Philippines’ Department of ICT lifted an access restriction on xAI’s Grok, imposed on 16 January 2026 over concerns about its image-generation capabilities creating non-consensual explicit content. An inter-agency technical assessment identified risks to digital safety, especially for women and children. The restriction was lifted after the investigation confirmed xAI had implemented sufficient safeguards and content moderation measures, with authorities pledging continued monitoring.
This action demonstrates a responsive regulatory approach where a temporary restriction is used as leverage to secure platform compliance. It resolves an immediate concern while establishing a framework for ongoing oversight, balancing innovation with user safety.
Brazil’s National Data Protection Agency, Federal Prosecutor’s Office, and consumer watchdog issued joint recommendations to X. They demand action within 30 days to prevent its AI tool Grok from generating non-consensual sexual synthetic content (deepfakes), particularly of women and minors. Measures include removing existing content, suspending involved accounts, submitting monthly compliance reports, and conducting a specific Data Protection Impact Assessment.
This action requires a major tech platform to implement technical and procedural safeguards against AI-facilitated abuse, setting a precedent for holding companies accountable for the misuse of their generative AI tools. It aims to protect individual dignity, especially of vulnerable groups, and reinforces the application of data protection and consumer law to AI outputs.
Sweden’s Cybersecurity Act (2025:1506) entered into force on 15 January 2026, transposing the EU’s NIS2 (Network & Information Security) Directive into national law. The Act imposes binding cybersecurity obligations on a wide range of public and private sector operators, including mandatory registration, implementation of risk management measures, management training, and incident reporting. It establishes a supervisory framework with powers to conduct audits and impose administrative fines, and it repeals the previous 2018 law on information security for socially important and digital services.
The Act significantly elevates Sweden’s national cybersecurity posture by expanding the scope of regulated entities and harmonizing its rules with the broader EU framework. It creates a stricter, more enforceable regime for managing cyber risks across essential economic and digital services.
TikTok formally established TikTok USDS Joint Venture LLC to comply with the September 2025 Executive Order on national security safeguards. The joint venture is majority American‑owned, with ByteDance retaining a 19.9% stake. It assumes responsibility for securing U.S. user data, safeguarding algorithms, and independently overseeing trust & safety policies. U.S. data and recommendation‑algorithm operations will run entirely within Oracle’s U.S. cloud, supported by independently audited cybersecurity programs aligned with NIST, ISO 27001, and CISA (Cybersecurity & Infrastructure Security Agency) requirements.
This represents one of the most significant restructurings of a global platform to meet U.S. national security mandates. It establishes a precedent for mandated operational separation, local data residency, independent governance, and algorithmic control, potentially shaping future cross‑border tech regulation.
The UK Treasury Committee published its final report from an inquiry into AI in financial services. It found over 75% of UK finance firms use AI, creating opportunities for faster services but also risks like opaque decision-making, financial exclusion, fraud, and unregulated AI advice. The report notes a lack of AI-specific financial laws and recommends that regulators publish guidance on consumer protection, conduct AI-specific stress tests, and designate major AI providers as “critical third parties” for oversight.
The report provides a comprehensive analysis of AI’s risks and opportunities in the UK’s financial services sector. Its recommendations, if adopted, would shape the UK’s approach to regulating AI in finance, enhancing oversight of AI providers and pushing for clearer rules on accountability and consumer protection in automated finance.
China’s Cyberspace Administration (CAC) announced the 21st batch of registered blockchain information services. The list includes 64 services from various provinces, covering areas like digital assets, cultural property trading, supply chains, and data traceability. Registered entities include government bodies like the Civil Aviation Administration’s Information Center, academic institutions, and commercial exchanges, bringing them under the existing regulatory framework.
This routine update to the official registry continues China’s established policy of bringing blockchain-based services under a formal registration and oversight system. It signals ongoing state supervision over the development and application of blockchain technology across both public and private sectors.
Ofcom opened a formal investigation into whether Meta failed to comply with legally binding information request notices issued under Section 135 of the Communications Act. These notices concerned Meta’s provision of information related to WhatsApp Business for market monitoring and a wholesale SMS termination market review. Ofcom suggests some data submitted by Meta may have been incomplete or inaccurate.
This investigation signals stricter enforcement of statutory information requirements in the UK’s digital communications regulatory environment. It increases compliance obligations for major digital service providers and raises the risk of enforcement actions, including penalties, if firms fail to meet procedural accuracy standards. It may also affect how platforms respond to regulators globally when providing data related to digital communications markets.
The European Commission formally designated WhatsApp as a Very Large Online Platform (VLOP) under the DSA based on the user reach of its Channels feature (treated as an online platform), which surpassed the 45M‑user threshold in the EU. WhatsApp Channels qualifies as an online platform, whereas private messaging remains excluded. Meta now has four months, until mid‑May 2026, to meet additional DSA obligations, including risk assessments related to illegal content, fundamental rights, privacy, electoral integrity, and systemic harms.
The designation places WhatsApp under the EU’s strictest obligations for platforms, significantly raising compliance burdens. Meta must implement robust risk‑mitigation systems and faces potential penalties if obligations are unmet. This shifts WhatsApp closer to the governance model used for public‑facing social platforms despite its hybrid nature.
On 20 January 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted a Joint Opinion on the European Commission’s Digital Omnibus proposal, which aims to streamline aspects of the AI Act’s implementation. The opinion cautions that simplification must not undermine rights or data protection. Key recommendations include maintaining strict necessity for processing sensitive data for bias correction, keeping public registration for all AI system providers, and involving data authorities in AI regulatory sandboxes.
This opinion significantly influences the final shape of the EU’s AI Act implementation. It advocates for maintaining high standards of transparency, accountability, and fundamental rights protection, potentially preventing a dilution of the landmark regulation as it moves toward enforcement.
The President of Poland vetoed the Bill intended to implement the EU’s DSA into national law and designate competent authorities. This veto halted the legislative process, preventing the establishment of Poland’s national statutory framework for DSA enforcement unless the parliament overrides it.
The veto creates significant legal uncertainty and delays Poland’s formal compliance with its EU obligation to implement the DSA. It leaves a gap in the national enforcement apparatus for the landmark content moderation and platform accountability rules, potentially affecting user rights and digital service operations in Poland.
The UAE Media Council reminded residents and visiting influencers that January 31, 2026 is the deadline to obtain the mandatory Advertiser Permit for social media promotional content. The permit, free for the first three years, applies to anyone posting promotional content (paid or unpaid) and aims to boost transparency and reduce misleading advertising across platforms. Visitor permits are also required for foreign creators producing promotional content in the UAE.
The permit requirement raises compliance obligations for the creator economy, formalizing influencer advertising standards and increasing accountability and consumer protection. Platforms and agencies must verify creator compliance or risk penalties.
Indonesia’s Regulation No. 2 of 2024 on game classification officially entered into force on 24 January 2026, replacing the 2016 framework. The regulation establishes a mandatory national system for classifying games by age (3+, 7+, 13+, 15+, 18+), covering content categories such as violence, explicit language, harmful substances, gambling simulations, pornography, horror, and online interaction. Publishers must register as Private Scope Electronic System Organisers (PSE), independently classify their games, and display classification results on game descriptions, packaging, and advertisements. Annual reclassification and suitability tests may be required, and administrative sanctions include warnings, suspension, or access termination.
The regulation significantly increases compliance obligations for both domestic and foreign game publishers operating in Indonesia, introducing stricter registration, disclosure, and classification requirements. It also establishes stronger state oversight of content and age suitability, raising operational and compliance costs for global platforms. Non‑compliance risks include suspension and removal of game access, affecting market continuity and distribution.
Egypt’s House of Representatives announced plans to draft legislation regulating children’s use of social media to address associated psychological and behavioural risks. The proposed law seeks to curb digital addiction and protect minors from online environments that may negatively impact their development. The House will move forward within its constitutional framework, initiating formal study and legislative procedures. Its specialized committees will lead consultations, gathering input from key state entities, including the Ministers responsible for Parliamentary, Legal and Political Communication Affairs, and for Communications and Information Technology, as well as the National Telecommunications Regulatory Authority. The National Council for Motherhood and Childhood will also participate in shaping the proposal.
This plan signals a potential tightening of child safety obligations on platforms operating in Egypt (e.g., age‑verification, parental controls, content/feature restrictions), which could increase compliance and localization needs.
China’s Cyberspace Administration (CAC) and seven ministries issued new classification measures on 23 January 2026 defining, categorizing, and regulating lawful online information that may negatively influence minors. The framework identifies potentially harmful content (e.g., cyberbullying, unhealthy lifestyles, extreme emotional stimulation, unsafe behavior, discrimination) and mandates warnings, restricted visibility, and exclusion from recommendation systems and traffic‑boosting placements. Algorithmic recommender systems and generative AI must prevent such content from being pushed to minors.
The measures significantly tighten control over content accessible to minors, imposing operational, ranking, and algorithmic obligations on platforms. They elevate compliance expectations for content governance, transparency, and minor protection mechanisms in China.
The Governor of New Jersey adopted Executive Order No. 6 on 23 January 2026, directing all state executive agencies to prioritize children’s mental health outcomes in relation to technology and social media. It mandates interagency coordination to review policies, prevent harms like cyberbullying and exploitation, support caregivers and educators, and identify service gaps. The order establishes an Office of Youth Online Mental Health Safety and Awareness to collect data and develop recommendations.
This order initiates a whole-of-government approach to mitigate digital harms to youth at the state level. It aims to create a coordinated framework for policy, data collection, and service provision focused on children’s digital wellbeing, setting a potential model for subnational action.
The Personal Data Protection Authority (KVKK) issued a public announcement following complaints and assessments concerning push notifications sent via mobile applications, signalling scrutiny of whether controllers’ notification practices comply with Turkish data protection principles (including consent/choice and transparency expectations).
This investigation signals increased regulatory scrutiny over a common yet often opaque data practice. It may lead to new compliance requirements for app developers and platforms regarding user consent and transparency for push notifications, potentially changing standard engagement models.
A committee of the European Parliament (Legal Affairs) advanced a series of proposals on copyright and generative AI, calling for stronger transparency about copyrighted works used in training and for mechanisms that support fair remuneration for rightsholders. This is part of an evolving copyright policy debate.
If enacted, this would establish a significant new copyright and financial obligation for AI developers in the EU, potentially altering the economics of AI training. It seeks to redistribute value from AI companies to content creators and could influence global standards on the use of copyrighted data for AI.
The existing parties to the Digital Economy Partnership Agreement (DEPA) (Chile, New Zealand, Singapore, and the Republic of Korea) jointly announced the substantive conclusion of negotiations for Peru’s accession. This marks a key step before Peru’s formal entry as the fifth member of this modern digital trade agreement.
Peru’s accession expands the geographic and economic footprint of DEPA, a pioneering agreement focused on digital trade issues like digital identities, data flows, and AI. It strengthens the bloc’s influence in shaping regional and global digital trade norms and offers Peruvian businesses enhanced access to digital trade frameworks.
Indian and Japanese Foreign Ministers held the 18th Strategic Dialogue, reaffirming their Special Strategic and Global Partnership. A key outcome was the launch of a new bilateral “AI Dialogue” to advance cooperation in AI, building on the Japan-India AI Initiative. They also agreed to convene a Joint Working Group on Critical Minerals to advance collaboration on rare earth elements vital for technology and green energy sectors.
This high-level engagement solidifies a major Asian partnership focused on future-oriented technologies and strategic supply chains. The institutionalization of an AI Dialogue and critical minerals working group creates formal channels for cooperation, research, and policy alignment, strengthening both nations’ positions in the global digital and green economy.
ASEAN Digital Ministers concluded their 5th meeting in Hanoi, Vietnam, adopting the “Hanoi Declaration on Digital Cooperation.” The declaration aims to boost regional digital integration by enhancing cooperation on key areas, including facilitating trusted cross-border data flows, strengthening cybersecurity, promoting digital trade, accelerating digital transformation for SMEs, and developing a regional digital talent pool.
The Hanoi Declaration sets a strategic, multi-year agenda for ASEAN’s digital community. By committing to collaborative frameworks on data, security, and trade, it aims to create a more seamless, secure, and competitive regional digital economy, reducing barriers and fostering inclusive growth across member states.
The Qatar Financial Centre (QFC) announced reciprocal data protection adequacy recognition with Abu Dhabi Global Market (ADGM) and the Dubai International Financial Centre (DIFC). The decision follows assessments of each jurisdiction’s data protection standards and creates a trusted regulatory basis for streamlined personal data transfers among the three financial centres. Recognition is granted only to jurisdictions demonstrating robust data protection and enforcement frameworks.
The reciprocal recognition significantly reduces compliance burdens for financial companies operating across Qatar, Abu Dhabi, and Dubai, enables frictionless data flows, and strengthens the Gulf region as a competitive hub for digital trade and financial services.
The “Special Act on the Promotion of Artificial Intelligence Data Centres” (Bill no. 2215928) was introduced in the National Assembly. The bill creates a centralized “one-stop” licensing window under the Ministry of Science and ICT to batch-process permits for building and operating AI data centres. It includes a “timeout” clause where permits are automatically granted if authorities do not respond within a set period. The bill also allows data centres located outside metropolitan areas to engage in direct electricity trading with large power sources to address energy constraints.
This legislation aims to accelerate the national build-out of critical AI infrastructure by cutting bureaucratic red tape and solving major institutional hurdles related to permitting and power supply. If passed, it would significantly lower barriers for AI data centre investment and operation, enhancing South Korea’s competitiveness in the AI sector.
India’s Ministry of Power released the Draft National Electricity Policy (2026) for public consultation. The policy aims to ensure a financially viable and sustainable power sector. Key proposals include mandating cost-reflective electricity tariffs, establishing robust data-sharing frameworks for better planning, and implementing enhanced cybersecurity norms for the national grid and distribution systems.
The draft policy seeks to modernize India’s power sector governance by linking financial health to regulatory reform and digital resilience. By integrating cybersecurity and data governance into core energy planning, it aims to protect critical infrastructure and enable data-driven investment decisions for the massive capital required through 2047.
The state of Rajasthan launched its Artificial Intelligence and Machine Learning (AI/ML) Policy 2026 at the Regional AI Impact Conference. The policy aims to accelerate the state’s digital transformation by establishing an AI Centre of Excellence, expanding AI education, and launching an AI Literacy Programme. The Chief Minister simultaneously announced that a forthcoming state Employment Policy will also integrate AI to support job creation.
This policy provides a comprehensive, state-level framework to build AI capacity, attract investment, and prepare the workforce for technological change. By linking AI development directly to employment goals and launching foundational literacy programs, it seeks to foster inclusive economic growth and position Rajasthan as a competitive regional tech hub.
On 14 January 2026, the Australian Cyber Security Centre (ACSC) published guidance titled “Artificial intelligence for small business.” The guide provides practical steps for small and medium-sized enterprises (SMEs) to securely adopt and use AI tools. It covers foundational concepts, key security risks, and actionable advice for risk assessment, secure implementation, and staff training.
This resource directly addresses a critical gap by empowering SMEs (a sector often lacking in-house expertise) to navigate AI adoption safely. It aims to boost overall national cyber resilience by preventing common security pitfalls that could lead to data breaches or system compromises when using AI.
Oman launched its Eleventh Five-Year Development Plan (2026-2030) as the second executive phase of Vision 2040, setting macroeconomic growth targets while positioning the digital economy as a core diversification driver. The plan emphasises accelerating digitisation and strengthening enabling foundations (including infrastructure and cybersecurity), aligned with ongoing national digital transformation programs led by the Ministry of Transport, Communications and Information Technology.
This plan serves as Oman’s central economic roadmap, formally prioritizing the digital sector as a key driver of economic diversification and growth for the next five years. It commits state resources and policy focus to building digital infrastructure, stimulating the tech startup ecosystem, and improving cyber resilience.
On 11 December 2025, the US President issued an Executive Order outlining a national approach to AI governance, directing federal agencies to coordinate standards on safety, transparency, and national security while discouraging fragmented state-level regulation. The Order prioritises innovation, global competitiveness, and voluntary risk-management frameworks, while tasking agencies with developing guidance rather than binding AI-specific legislation.
The Executive Order signals a shift toward softer, federal-led AI coordination in the US, potentially reducing regulatory fragmentation but increasing uncertainty for firms navigating overlapping guidance. It reinforces voluntary standards and industry self-regulation, which may accelerate deployment but limit enforceability. Internationally, the US approach contrasts with more prescriptive regimes and may influence global debates on AI governance models.
On 17 December 2025, the EU Commission published the First Draft Code of Practice on Transparency of AI-Generated Content, addressing key considerations for providers and deployers of AI systems generating content falling within the scope of Article 50(2) and (4). The Code aims to operationalise disclosure obligations through practical guidance on watermarking, provenance indicators, and user-facing transparency measures. It is expected to be finalised in June 2026.
The Code is likely to become a global reference for AI content labelling, shaping platform design and moderation practices beyond the EU. While voluntary, it may create de facto compliance expectations, particularly for firms operating internationally or exporting services to EU markets.
On 2 December 2025, the Australian Government unveiled its National AI Plan to guide AI development, adoption, safety and inclusion through 2030. Anchored on capturing economic opportunity, spreading benefits broadly, and ensuring safety and trust, the Plan highlights infrastructure investment, skills capacity building, industry adoption support, and establishment of an AI Safety Institute for ongoing risk monitoring and governance.
The Plan signals Australia’s transition to an ecosystem-level AI governance approach that prioritises innovation and adoption while grounding safety and ethics within existing law. It is expected to catalyse investment, workforce readiness, and public-sector use, reduce fragmentation in regulation, and support cross-sector engagement on AI risks. However, reliance on existing legal frameworks rather than new mandatory AI-specific regulation may generate mixed compliance expectations.
On 25 November 2025, the Australian Government announced the establishment of the Australian Artificial Intelligence Safety Institute (AISI) to strengthen national capacity on AI risks and harms, including safety, misuse, and emerging threats. The AISI will coordinate research, provide independent advice to government, and work across sectors to enhance AI safety practices and evidence-based policy design.
The creation of AISI marks a strategic investment in institutional governance capacity for AI risk assessment and safety frameworks. It will enhance national preparedness to identify, mitigate, and communicate risks associated with advanced AI technologies, with the potential to shape sectoral regulatory guidance and public sector procurement practices. AISI’s insights may inform risk frameworks internationally.
On 10 December 2025, Vietnam’s National Assembly adopted the Law on Artificial Intelligence (AI) — the country’s first comprehensive legal framework for AI governance. Spanning 35 articles, the law establishes core principles, prohibited acts, risk management obligations, and development incentives. It balances regulatory safeguards for high-risk systems with mechanisms to foster innovation, including sandbox testing and support for startups, drawing on comparative models.
The law positions Vietnam among early adopters of comprehensive AI governance frameworks in Asia. It creates legal clarity for AI development, data use, and accountability while embedding innovation-oriented tools like controlled sandboxes and development funds. Scheduled to take effect in early 2026, it may accelerate domestic technology growth but also requires significant regulatory capacity for effective enforcement.
On 5 December 2025, a Member of Parliament in India introduced the Artificial Intelligence (Ethics and Accountability) Bill, 2025 in the Lok Sabha. The Bill seeks to establish an ethical governance framework for AI, including the creation of a centrally funded AI Ethics Committee to develop guidelines, monitor compliance, audit algorithmic bias, and oversee transparency obligations for AI developers and deployers.
Although it is a Private Member’s Bill with low likelihood of passage, the proposal foregrounds ethics, fairness, transparency, and accountability in AI systems, calling for regular audits, bias mitigation, and grievance mechanisms. If taken up for debate, it could influence broader legislative agendas on AI governance in India, elevate public discourse on algorithmic harms, and signal emerging normative expectations for responsible AI deployment.
On 23 December 2025, the Japanese Cabinet adopted the country’s first Basic Plan on Artificial Intelligence, committing to make Japan a leading environment for AI development and utilisation. The Plan emphasises balancing technological innovation with risk management, leveraging high-quality data and communications infrastructure, accelerating AI adoption across the economy, and enhancing reliability, strategic capabilities, and societal benefit through public-private cooperation.
The Basic Plan sets a comprehensive national agenda to boost AI research, development, deployment, and trustworthiness while addressing existing competitiveness gaps relative to peer nations. It prioritises strategic investments, infrastructure readiness, workforce and skills development, and reliability standards to support sectoral innovation. Importantly, it ties AI adoption to socioeconomic goals such as labour productivity, public service transformation and long-term competitiveness in critical technologies.
On 10 December 2025, Vietnam’s National Assembly passed a comprehensive Cybersecurity Law that will take effect on 1 July 2026. The law consolidates previous cybersecurity statutes, grants the Ministry of Public Security broad oversight of digital identity verification and IP address identification, and empowers authorities to issue cybersecurity warnings, mandate content removal by ISPs/platforms, and combat cybercrime while strengthening state control over online information.
The law significantly expands state authority over cyberspace by institutionalising stricter digital ID verification, content removal requirements, and oversight of network operators and online platforms. It creates a legal basis for coordinated national cybersecurity strategy and data integrity protection while elevating enforcement powers. The framework may increase compliance costs for service providers and raises concerns about freedom of expression, content control, and the balance between security and digital rights.
On 3 December 2025, the Government of India announced it would withdraw the previously mandated requirement for all smartphone manufacturers to pre‑install the state‑developed Sanchar Saathi cybersecurity app on devices sold in India. The decision followed widespread public feedback and industry concerns about privacy, consent, and compatibility with global platform policies. The app remains available for voluntary download by users.
The policy reversal eases compliance pressure on device makers and aligns India’s approach with prevailing privacy expectations, particularly regarding user consent and platform autonomy. It removes a potential regulatory conflict with global handset standards while preserving the app’s role as a voluntary tool for reporting fraud and enhancing user cyber safety.
On 22 December 2025, Ghana’s Parliament passed the Virtual Asset Service Providers Bill, 2025, formally legalising cryptocurrency trading and virtual assets. The law establishes a licensing and supervisory framework for crypto exchanges and service providers, bringing previously widespread and unregulated digital asset activity under formal oversight. It grants authorities licensing powers and reinforces financial supervision to address fraud and market integrity concerns.
This legislative move clarifies the legal status of cryptocurrency in Ghana, reducing regulatory uncertainty and enabling transparent, licensed operations. It is expected to spur fintech innovation, consumer participation, and financial inclusion while requiring new compliance and oversight mechanisms for crypto platforms. The framework may also attract responsible investment into the digital asset ecosystem.
South Korea’s Financial Services Commission (FSC) advanced plans for a Digital Financial Security Act, a comprehensive legislative initiative responding to high‑profile hacks and systemic risks in the digital asset market. The Act aims to regulate virtual asset service providers (VASPs), financial firms, and electronic financial services under harmonised cybersecurity and resiliency standards, mandating robust protections and cross‑border AML/KYC enhancements.
If enacted, the Act would significantly elevate the security and resilience of South Korea’s digital asset ecosystem by requiring strengthened cybersecurity controls, incident reporting, and preventive protocols. This could raise compliance burdens for smaller exchanges but also enhance investor confidence, reduce hack vulnerabilities, align digital asset governance with traditional financial standards, and contribute to systemic stability in a rapidly expanding market.
In December 2025, the Bank of Mexico (Banxico) issued a financial stability report warning that the rapid growth of stablecoins — marked by concentration among a few issuers and heavy reliance on short-term U.S. Treasuries — poses systemic risk and contagion threats absent harmonised global regulation. Banxico emphasised that divergent international frameworks could incentivise regulatory arbitrage and amplify stress.
Banxico’s warning underscores potential contagion risks if stablecoin markets continue to expand without coordinated safeguards. Heavy concentration among a few issuers, reliance on fragile backing assets, and inconsistent reserve/redemption requirements across jurisdictions could propagate shocks into traditional financial markets, undermining liquidity and confidence. The cautionary stance may slow institutional integration of stablecoins and reinforce central banks’ push for stronger, internationally aligned digital asset regulation.
On 4 December 2025, the UK Department for Transport published a wide-ranging call for evidence to inform the development of the regulatory framework under the Automated Vehicles Act 2024. The document seeks stakeholder input on key elements, e.g., type approval, authorisation, user-in-charge transition, insurance, in-use regulation, incident investigation, cybersecurity, accessibility and environmental factors, to shape proportionate secondary legislation and guidance.
This consultation will influence future binding and non-binding rules for automated vehicles on UK roads, shaping how safety, liability, insurance, data, and cyber risks are regulated. It could accelerate autonomous vehicle deployment by clarifying expectations for developers and operators, while embedding safeguards for vulnerable road users and accessibility. The evidence collected will directly inform policy instruments likely adopted in 2026 and beyond.
Indonesia reported accelerated technical reviews for its OECD accession, prioritising environment, trade, and the digital economy, and established the INA OECD digital platform to coordinate digital policy reforms ahead of full OECD membership.
Progress toward OECD membership signals Indonesia’s alignment with OECD norms on digital governance, trade, and economic policy, unlocking new frameworks for data governance, innovation policy, and regulatory reform. It may increase foreign investor confidence, while also requiring domestic adaptation of policies to meet OECD evaluation criteria.
On 18 December 2025, Japan’s Mobile Software Competition Act (MSCA) came into force as an ex‑ante regulatory framework targeting dominant mobile platform operators. The law requires greater transparency and competitive conduct from Apple and Google across app distribution, payment systems and browser choice, permitting alternative app marketplaces, external payment options, and enhanced default choices for users to reduce platform gatekeeping.
The MSCA reconfigures Japan’s mobile ecosystem by curbing anti‑competitive practices and creating more competitive pathways for developers and payment providers. It mandates platform compliance with new distribution and billing rules, which may open opportunities for smaller developers and increase consumer choice. However, platforms must enhance compliance reporting and operational mechanisms, shifting responsibilities for security and interoperability to broader stakeholder groups.
On 2 December 2025, Saudi Arabia’s Cabinet approved the 2026 state budget, setting total expenditure at SAR 1.313 trillion (~US$350 billion) and revenues at an estimated SAR 1.15 trillion, with a projected deficit of SAR 165 billion (~3.3% of GDP). The budget continues support for Vision 2030 economic diversification, infrastructure, technology and non‑oil growth priorities, reinforcing fiscal stability and targeted strategic investments across sectors.
The approved budget reinforces the Kingdom’s shift towards economic diversification, delivering stable backing for infrastructure, digital transformation and strategic sectors. By sustaining funding for technology ecosystems, logistics and industrial development under Vision 2030, the budget strengthens economic resilience and investor confidence. Continued expansionary spending underscores long‑term commitments despite projected shortfalls, while promoting public services and economic competitiveness.
On 3 December 2025, Accessibility Standards Canada published the CAN-ASC-6.2 Accessible and Equitable Artificial Intelligence Systems standard, the first of its kind in Canada. It provides detailed guidance on designing AI systems that are accessible to persons with disabilities and equitable in treatment, embedding inclusion throughout the AI lifecycle from development to deployment. The standard was approved by the Standards Council of Canada following public and expert consultation.
This standard sets a new practical benchmark for accessible and equitable AI design, encouraging organisations to integrate accessibility principles into core development processes. It is expected to influence private and public sector AI procurement, raise awareness of inclusive design practices, and inspire similar frameworks internationally. As a standards-based approach, it complements regulatory frameworks without imposing legal penalties.
On 10 December 2025, Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024 came into effect, requiring major social media platforms to prevent Australians under 16 from holding accounts and to implement “reasonable steps” such as age verification to comply. The eSafety Commissioner will monitor and enforce compliance, and non‑compliance could attract civil penalties up to AUD 49.5 million.
This represents a pioneering regulatory intervention prioritising child safety and age‑appropriate digital engagement. By shifting responsibility onto platforms and mandating proactive age verification, Australia is introducing enforceable standards with financial penalties. The rollout is likely to influence global online safety policy debates and could prompt similar measures in other jurisdictions concerned about youth exposure to harmful content.
Indonesia’s Ministry of Communication and Digital Affairs confirmed that new age‑based social media restrictions aimed at protecting children will begin being enforced in March 2026, requiring parental consent and limiting account access for users aged 13‑16 depending on platform risk profiles. These rules stem from the PP Tunas regulation adopted earlier in 2025.
Once in effect, platforms will need to implement age verification, parental consent systems, and privacy-by-default designs for minors, increasing operational complexity and compliance costs. This regulation could set a precedent in Southeast Asia, influence regional digital safety norms, drive technical innovation in verification methods, and potentially create cross-border challenges for global platforms needing to align diverse markets while prioritising user protection and ethical digital design.
The UAE’s Telecommunications and Digital Government Regulatory Authority (TDRA) introduced an advanced model for assessing and improving the digital customer journey across government platforms. The model, presented at the “Zero Government Bureaucracy” meeting, emphasises user experience, streamlined service flows, reduced bureaucracy, and more intuitive, accessible digital government services to maximise satisfaction and efficiency for citizens and businesses.
The model’s adoption is expected to enhance the quality and coherence of the UAE’s online public services by prioritising user needs and reducing friction points. It may drive increased uptake of digital services, improve public trust in e-government, and reinforce the UAE’s Zero Bureaucracy Strategy. It also situates digital customer experience as a measurable regulatory priority that could influence other countries seeking people-centred governance frameworks.
On 19 December 2025, the European Commission adopted an Implementing Decision extending the validity of the EU-UK data adequacy decision under Regulation (EU) 2016/679 (GDPR) until 27 December 2025. This extension maintains the EU’s assessment that the UK continues to ensure an adequate level of personal data protection under the UK’s revised data protection framework. Entities operating in both markets can continue under the existing adequacy regime.
The extension ensures legal certainty for cross-border data flows between the EU and UK late into 2025, preserving uninterrupted transfers for businesses and public sector entities. It mitigates abrupt compliance challenges that could arise from a lapse in adequacy, stabilising digital trade, cloud services, and integrated supply chains dependent on personal data transfers.
In December 2025, the Philippine government revised proposed data‑localization language in a draft executive order after pushback from U.S. and industry groups concerned about overly restrictive localization requirements. The updated draft reduces burdens while maintaining broad intent to safeguard data.
The softened language could make data-governance requirements more flexible for cloud, AI, and cross-border operations, reducing compliance costs and operational risk. It may also accelerate digital investment, encourage innovation, and improve alignment with international standards, while retaining core protective measures for personal data.
The Kuwaiti Cabinet approved a draft Digital Commerce Law designed to modernise the regulatory environment for e‑commerce and digital trade. The law introduces unified definitions for digital merchants, streamlined licensing procedures, legal recognition of electronic contracts and signatures, enhanced consumer safeguards, and clear dispute‑resolution mechanisms alongside regulatory oversight provisions to support trust in online transactions.
This law marks a major shift toward a coherent legal framework for digital economic activity in Kuwait, strengthening transparency, regulatory certainty, and trust for businesses and consumers alike. By clarifying rights and obligations for online operators, including advertising and influencer partnerships, it supports entrepreneurship and investment, aligns national practice with global e‑commerce standards, and may increase foreign participation in Kuwait’s digital market.
On 4 December 2025, Malaysia’s Dewan Rakyat passed the Immigration (Amendment) Bill 2025 and the Passports (Amendment) Bill 2025, introducing provisions for biometric data collection, an Advance Passenger Screening System (APSS), and automated screening at entry and exit points. The measures require travellers to provide personal identifiers and submit biometric data before entry, and enable enhanced pre‑arrival screening up to 72 hours before arrival.
The amendments modernise Malaysia’s border control infrastructure, aligning it with global security practices by integrating biometric ID, automation, and advance risk profiling. They are intended to improve immigration efficiency, integrity, and threat detection but also elevate data processing demands and privacy management. Strong data protection frameworks will be crucial for implementation.
On 1 December 2025, the EU and Singapore held the second Digital Partnership Council meeting, aligning on cooperation in AI safety, online safety, cybersecurity resilience, trusted data flows, digital identities, semiconductors and quantum technology, and other emerging tech areas. This builds on the existing Digital Partnership framework to deepen policy and technical collaboration.
The discussions signal stronger cross‑border coordination on digital standards and trust frameworks, potentially influencing regulatory norms and interoperability of digital markets in the Indo‑Pacific. This enhanced cooperation may accelerate adoption of harmonized standards, promote joint regulatory experimentation, and encourage shared AI safety and cybersecurity protocols, creating a reference point for other bilateral or multilateral digital partnerships and shaping regional governance in critical technology sectors.
South Korea announced a new digital identity verification requirement that will mandate real‑time facial recognition when individuals register a new mobile phone number. The Ministry of Science and ICT launched a pilot in late December ahead of full implementation on 23 March 2026, aiming to prevent identity theft and phone‑based scams by requiring carriers and virtual operators to integrate biometric authentication.
The policy is expected to significantly reduce fraudulent registrations and voice phishing scams by tying mobile services more tightly to verified individual identities. Although privacy advocates have raised concerns about biometric data use, the government stresses that facial comparison will not retain images long‑term and will only be used for identity verification. Operational costs for carriers and MVNOs will rise as systems and training are updated.
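At a technical level, the government's claim that facial comparison need not retain images corresponds to a one-shot match pattern: the carrier compares a live capture against the ID photo and keeps only the boolean outcome. A minimal sketch, assuming face embeddings have already been produced by a face-recognition model upstream; the function names and threshold are illustrative, not taken from the Ministry's specification:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def verify_and_discard(live_embedding: list[float],
                       id_embedding: list[float],
                       threshold: float = 0.8) -> bool:
    """Compare the two embeddings once and return only the match outcome.

    Nothing biometric is stored: the caller drops both vectors after this
    returns, so only a pass/fail flag enters the registration record.
    """
    return cosine_similarity(live_embedding, id_embedding) >= threshold
```

The design choice of returning only a boolean, rather than the similarity score or either vector, is what makes the "no long-term retention" assurance auditable in code.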
On 1 December 2025, Saudi Arabia’s National Cybersecurity Authority (NCA) opened a public consultation for the updated Saudi Cybersecurity Workforce Framework (SCyWF), a reference model defining cybersecurity work categories, job roles, tasks, knowledge, skills, and abilities for professionals across sectors. The SCyWF aims to standardise talent development, recruitment, and workforce planning, supporting national cybersecurity resilience and aligning organisational roles with defined competency requirements. The consultation ends on 16 December 2025.
Strengthening the cybersecurity workforce through a structured framework can improve national readiness, reduce talent gaps, and enhance consistency in risk management practices across government and private sectors. By clarifying roles and competence expectations, the SCyWF enables more effective workforce planning, targeted education and training pathways, and clearer career progression frameworks. It may also attract investment in training programs and support institutional capacity to defend digital infrastructure.
On 11 December 2025, Vietnam’s National Assembly overwhelmingly passed the Law on Digital Transformation, a framework law comprising 8 chapters and 48 articles to unify national digital transformation efforts. It codifies principles and policies for digital government, digital economy, digital society, national coordination, and digital infrastructure, and emphasizes connected, shared, secure digital systems while guiding integration with sector‑specific tech laws.
The law establishes a comprehensive legal foundation for nationwide digital transformation, shifting from fragmented technology policies to a unified framework. It introduces macro‑governance tools such as a national digital architecture, data governance and competency frameworks, and measurement indicators.
The United Arab Emirates announced the launch of a $1 billion “AI for Development” initiative to expand artificial intelligence infrastructure and AI‑enabled services across African countries. Unveiled at a G20 meeting in Johannesburg, the initiative aims to support national development priorities such as education, healthcare, climate adaptation, and public service delivery through AI systems, capacity building and digital infrastructure investments.
The UAE’s initiative could catalyse AI adoption across African economies by building foundational digital infrastructure, enhancing access to AI tools, and fostering local applications in priority sectors. By coupling financing with strategic partnerships, it may attract complementary investment and stimulate regional tech ecosystems. However, success depends on domestic absorption capacity, governance safeguards, and skills development as nations deploy AI systems in public and private domains.
On 15 December 2025, the Government of Djibouti launched the initial G2B integrated digital platform aimed at streamlining business creation, expanding access to government digital services, and improving electronic payment integration for micro, small, and medium enterprises under Vision 2035. The platform consolidates a startups portal, government services “one-stop shop,” and digital counter functions to facilitate access and reduce bureaucratic barriers.
The platform is poised to transform the business environment by reducing formalities, speeding up company registration, and consolidating digital public services, thereby lowering entry barriers for entrepreneurs and strengthening economic competitiveness. Enhanced service integration and EU technical support signal increased prospects for investment, digital adoption, and regional competitiveness in the Horn of Africa.
On 19 November 2025, the Commission proposed a “Digital Omnibus” package and launched a Digital Fitness Check to streamline a wide set of digital laws, including the AI Act, GDPR, Data Act and DSA. The package would delay some high-risk AI provisions to 2027 and relax certain data-use and consent rules in the name of competitiveness and reduced red tape. (European Commission)
If adopted, the Omnibus would lower short-term compliance pressure on AI providers and large digital firms, potentially saving billions in administrative costs and making the EU a more attractive base for AI investment, but at the cost of weaker privacy and fundamental-rights safeguards. It may also widen the gap between large platforms that can exploit relaxed data-use rules and smaller firms or civil society, while increasing legal uncertainty during the transition.
On 5 November 2025, the European AI Office convened experts to begin drafting a Code of Practice on transparency for AI-generated content, implementing Article 50 of the AI Act. The Code will spell out practical obligations on labelling, watermarking and disclosure for providers and deployers of generative AI. (Digital Strategy)
The Code is likely to become a de facto global reference for content-labelling practices, affecting how platforms, media companies and model providers design user interfaces and technical markers. It will increase compliance expectations around provenance and deepfake labelling, particularly for firms operating in or exporting to the EU, and may influence cross-border moderation norms even where the AI Act does not apply directly.
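One way such labelling obligations can be met in practice is a machine-readable disclosure record bound to the content by a cryptographic hash, so downstream platforms can detect both the AI-generated flag and any tampering. A minimal sketch under that assumption; the field names are illustrative and not the draft Code's schema:

```python
import hashlib
from datetime import datetime, timezone


def make_disclosure(content: bytes, model_id: str) -> dict:
    """Build a machine-readable disclosure record for a piece of AI output.

    The SHA-256 digest binds the record to the exact bytes it labels.
    """
    return {
        "ai_generated": True,
        "generator": model_id,
        "sha256": hashlib.sha256(content).hexdigest(),
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }


def verify_disclosure(content: bytes, record: dict) -> bool:
    """Check that the record flags the content as AI-generated and that
    the content has not been altered since it was labelled."""
    return (
        record.get("ai_generated") is True
        and record.get("sha256") == hashlib.sha256(content).hexdigest()
    )
```

A platform receiving labelled media could run `verify_disclosure` at upload time and surface the `generator` field to users, which is the kind of provenance workflow the Code is expected to standardise.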
Around 24 November 2025, the Commission launched an online whistleblower tool allowing individuals and organisations to confidentially report suspected breaches of the AI Act. The platform is part of a broader enforcement build-out around the AI Office and sectoral regulators. (Digital Strategy)
This mechanism increases legal and reputational risk for firms deploying high-risk or prohibited AI, especially in sensitive areas like biometrics, credit and employment, and could lead to earlier investigations and corrective measures. For cross-border providers, it raises the stakes for having robust internal controls, grievance mechanisms and documentation that can withstand scrutiny triggered by whistleblowers.
In mid-November 2025, the UK government announced a package reportedly worth £20–24 billion, including new AI Growth Zones, tax incentives, and a £500 million Sovereign AI Unit to expand domestic compute and support AI startups and researchers. The plans form part of a wider “AI to power national renewal” agenda. (GOV.UK)
The package is expected to draw AI firms and data-centre investment into designated zones, potentially tilting domestic investment and inward FDI flows towards AI infrastructure and services while increasing pressure on energy grids and local labour markets. It may also intensify competition with EU and Gulf jurisdictions that are similarly positioning themselves as AI hubs.
In November 2025, Taiwan announced plans to invest more than NT$100 billion (≈US$3.2 billion) to transform itself into an “AI Island”, targeting top-five global status in computing power through next-generation hardware, silicon photonics, quantum and AI robotics. The strategy includes a national AI data centre in Tainan and private-sector mega-projects such as a Nvidia–Foxconn hub in Kaohsiung. (Tom’s Hardware)
The programme will significantly expand regional AI compute capacity and deepen Taiwan’s role in semiconductor and data-centre supply chains, with implications for global firms seeking resilient, non-mainland-China infrastructure. However, energy constraints and grid upgrades will shape the pace and cost of these investments, influencing electricity prices and the carbon profile of AI workloads.
On 12 November 2025, the UK government introduced the Cyber Security and Resilience (Network and Information Systems) Bill, expanding the scope of the NIS Regulations 2018 and granting stronger powers to regulators and ministers over essential and important entities. The Bill aims to harden critical services such as energy, water and healthcare against sophisticated cyber attacks. (GOV.UK)
Operators across multiple sectors will face broader in-scope coverage, more intrusive supervision and potentially higher penalties, raising compliance costs but improving overall resilience of UK supply chains and services relied on by households and firms. For cross-border providers, including cloud and managed-service firms, the Bill will require tighter incident reporting and may become a reference point for similar regimes in other jurisdictions.
In late November 2025, the US Federal Communications Commission voted 2–1 to scrap cybersecurity mandates that had been introduced in January 2025 in response to the Salt Typhoon espionage campaign against major telecom operators, arguing that the rules were overly rigid and legally flawed. The Commission plans to replace them with narrower risk-management obligations, including for submarine cable operators. (TechRadar)
The repeal reduces mandatory security baselines for large carriers, potentially lowering near-term compliance costs but increasing systemic risk for US communications networks and customers, including international partners relying on US backbone infrastructure. It may also deepen regulatory divergence with EU-style NIS frameworks, complicating cross-border assurance for global telecom and cloud providers.
On 10 November 2025, the Bank of England released a detailed consultation on a regulatory regime for sterling-denominated systemic stablecoins, including prudential standards, backing assets, holding limits and the interplay with a potential digital pound. The paper revises earlier proposals and interacts with FCA rules for non-systemic tokens. (Bank of England)
The framework would allow certain stablecoins to be used as settlement assets in core markets while imposing strict governance, reserve and risk-management requirements, reshaping business models for issuers and wallets and influencing the relative attractiveness of UK-based stablecoin ecosystems compared to EU and US regimes. Caps on holdings and a preference for short-term gilts as backing could also affect government debt markets and competition with bank deposits.
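Two of the proposed controls, full reserve backing and per-holder caps, reduce to arithmetic checks that an issuer or wallet provider could run continuously against its books. A hedged sketch of that monitoring logic; the cap figure used in the test is illustrative, not the Bank of England's proposed limit:

```python
def coverage_ratio(reserve_assets: float, tokens_outstanding: float) -> float:
    """Reserve value per unit of stablecoin in circulation.

    A ratio of 1.0 means every token is fully backed; regulators would
    typically expect at least 1.0 at all times.
    """
    if tokens_outstanding <= 0:
        raise ValueError("no tokens outstanding")
    return reserve_assets / tokens_outstanding


def within_holding_limit(balance: float, requested: float, cap: float) -> bool:
    """Check whether a proposed top-up would keep a holder under the cap.

    The cap value itself would come from the final regime, not this sketch.
    """
    return balance + requested <= cap
```

In practice the issuer would recompute `coverage_ratio` on each mint or redemption and reject any transfer for which `within_holding_limit` returns `False`, which is how caps on holdings would propagate into wallet behaviour.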
On 26 November 2025, the FCA’s Payments and Digital Finance lead delivered a speech setting out the regulator’s philosophy on cryptoassets and stablecoins, coinciding with a new “stablecoins cohort” in its Regulatory Sandbox. The FCA emphasised consumer protection and financial-stability objectives while signalling openness to “responsible” innovation. (FCA)
The combination of a tougher prudential line with sandbox support will shape how UK-based fintechs design crypto and payments products, potentially drawing early-stage experimentation into the UK while raising the bar for firms seeking long-term authorisation. Cross-border providers may use the sandbox as a stepping stone into the UK market, influencing where digital-finance talent and capital cluster.
On 6 November 2025, the Central Bank of Ireland announced enforcement action against Coinbase Europe Limited for anti-money-laundering breaches, underscoring supervisory expectations for crypto exchanges operating under EU rules. The case focuses on deficiencies in customer due diligence and transaction monitoring. (Central Bank of Ireland)
The enforcement action raises the compliance bar for cryptoasset service providers across the EU, signalling that regulators are willing to use their full enforcement toolkit even as MiCA transitions into force, and it may prompt exchanges to reassess risk systems or reconsider where they locate EU operations. Users and institutional clients can expect tighter onboarding and monitoring, with potential frictions for cross-border flows but improved risk management.
On 18 November 2025, a regional summit in Cotonou produced a declaration committing West and Central African governments, regional economic communities and the African Union to harmonise digital policies, standards and regulations, with a strong focus on interoperable digital public infrastructure, cybersecurity and data protection. The statement ties digital integration to jobs, entrepreneurship and cross-border e-commerce. (World Bank)
If translated into concrete regulations, this could create a more predictable regional market for digital services, lowering barriers for firms operating across borders and encouraging investment in shared platforms for payments, identity and trade facilitation. It may also spur convergence of data-protection and cybersecurity laws, affecting how international providers structure their African operations.
On 18 November 2025, the International Chamber of Commerce issued a call for action urging WTO members ahead of MC14 to preserve the moratorium on customs duties on electronic transmissions and to launch a time-bound WTO reform round. The statement frames the moratorium as essential to stability and confidence in global digital trade. (ICC – International Chamber of Commerce)
On 24–25 November 2025, Ofcom published its final guidance “A safer life online for women and girls”, setting detailed expectations for services in scope of the Online Safety Act, including risk assessments, design changes and moderation practices targeting gender-based abuse and harms. The regulator also launched a public campaign and enforcement stance signalling that it expects platforms to “up their game” on these harms. (www.ofcom.org.uk)
Platforms serving UK users will need to adapt governance, algorithms and reporting tools, increasing compliance and product-design costs but potentially reducing harmful content and legal exposure; global firms may adopt similar standards across markets to avoid maintaining separate UK-only systems. The guidance also sets expectations for smaller services, which may face proportionate but still non-trivial implementation burdens.
On 17 November 2025, the ITU published “Facts and Figures 2025”, estimating that about 6 billion people (three-quarters of the world’s population) now use the internet, up from 5.8 billion in 2024, but stressing that gaps in quality, affordability and skills remain stark, especially in least-developed countries and rural areas. (UN DESA)
The data confirm strong growth in demand for connectivity and digital services, supporting investment cases for operators and governments, but also show that without targeted policy interventions many communities will remain under-served or confined to low-quality access, limiting the impact of digitalisation on jobs and public services. Investors and donors may use the report to prioritise markets, while regulators can benchmark national progress.
On 21 November 2025, ITU and the UK Foreign, Commonwealth and Development Office published a joint article summarising nearly five years of collaboration to connect unserved communities and strengthen digital skills, including support for infrastructure, policy reform and local capacity-building. (ITU)
The partnership showcases how targeted financing and policy support can expand inclusive connectivity, potentially crowding in private investment and helping partner countries design more effective universal-service and skills programmes. It also signals donor priorities, which may influence where tech firms and NGOs concentrate pilots and corporate-social-responsibility projects.
On 10 November 2025, UNHCR and ITU announced an expanded initiative in Chad and other refugee-hosting countries to connect millions of forcibly displaced people and local communities by 2030, combining infrastructure, devices and skills interventions. (ITU)
The programme will create new demand for affordable connectivity solutions, solar-powered infrastructure and digital public services in fragile environments, with knock-on effects for local operators and NGOs; if successful, it may become a template for integrating connectivity into humanitarian response and development planning.
Under the Online Safety Amendment (Social Media Minimum Age) Act, from 10 December 2025, social media companies must take “reasonable steps” to ensure under-16s cannot hold accounts and must deactivate existing ones, or face fines of up to A$49.5m. The eSafety Commissioner has named major platforms such as Facebook, Instagram, TikTok, Snapchat, YouTube, X, Reddit, Twitch and Kick as in scope. (The Guardian)
Platforms will need to invest heavily in age-assurance systems, account reviews and user-appeal mechanisms, increasing compliance costs and potentially reshaping product design for Australian users; millions of teen accounts are expected to be frozen or removed, which will affect engagement metrics, advertising reach and youth online behaviour. The requirement to show ongoing “reasonable steps” also raises legal risk and may push companies towards more intrusive data collection or conservative enforcement to satisfy the regulator. (The Guardian)
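In implementation terms, the core obligation, deactivating accounts whose verified age is under 16, is a simple gate over verified birth dates; the hard part is producing the age-assurance signal itself. A minimal sketch assuming verified birth dates are already available; the names, structure and constant are illustrative, not drawn from the Act or eSafety guidance:

```python
from dataclasses import dataclass
from datetime import date

MIN_AGE = 16  # minimum age under the Act; enforcement details are the platform's


@dataclass
class Account:
    user_id: str
    birth_date: date  # assumed to come from an age-assurance process
    active: bool = True


def age_on(birth_date: date, today: date) -> int:
    """Whole years of age on a given day."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years


def enforce_minimum_age(accounts: list[Account], today: date) -> list[str]:
    """Deactivate accounts below MIN_AGE and return their ids.

    Real platforms would add notice, appeal, and re-verification flows
    around this core check.
    """
    deactivated = []
    for acct in accounts:
        if acct.active and age_on(acct.birth_date, today) < MIN_AGE:
            acct.active = False
            deactivated.append(acct.user_id)
    return deactivated
```

The returned id list is what would feed the user-appeal and reporting mechanisms the regulator expects platforms to maintain.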
In mid-November 2025, China’s Cyberspace Administration issued further clarifications and FAQs on cross-border data transfer obligations, stressing that exemptions should be narrowly interpreted and detailing when non-critical information infrastructure operators must undergo security assessments or certification. (Arnold & Porter)
The guidance tightens expectations for foreign and domestic firms moving personal data out of China, raising compliance costs and potentially prompting some to localise data or restructure data-flows, which will affect cloud-architecture choices and cross-border analytics. It may also make China-based operations less attractive for data-intensive services that rely on global datasets, unless firms invest heavily in compliance.
As noted above, the ICC’s 18 November 2025 statement explicitly urges WTO members to maintain the moratorium on customs duties on electronic transmissions, warning that its expiry would amount to a tax on digitalisation and disrupt trade in services and digital goods. (ICC – International Chamber of Commerce)
Continuation of the moratorium would avoid new tariffs on cross-border software, streaming, cloud and other digital services, stabilising expectations for firms and consumers, while its end could encourage copycat digital tariffs that fragment markets and raise prices, particularly harming SMEs and exporters from small economies.
From 18 November 2025, all UK company directors and persons with significant control must verify their identity through GOV.UK One Login to interact with Companies House, marking a major step towards a unified digital identity layer across government services. (Companies House Blog)
The requirement will improve the integrity of the companies register and reduce fraud, but imposes new onboarding processes on domestic and foreign directors and intermediaries, potentially changing how corporate service providers operate and how quickly entities can be incorporated or updated. It also strengthens GOV.UK One Login as a core piece of UK DPI, with network effects for other services that may plug into the same identity infrastructure.
On 3 November 2025, the UK House of Commons Library released a briefing on digital identity, outlining centralised and decentralised models and cautioning against a state monopoly that could stifle innovation, while highlighting privacy and competition considerations. (UK Parliament Research Briefings)
The briefing will influence parliamentary scrutiny of forthcoming digital ID schemes and private-sector frameworks, affecting how identity markets evolve and how much room is left for competitive providers; it also informs investors and startups about potential regulatory directions.
From 4–6 November 2025, the World Bank and partners hosted the Global DPI Summit 2025 in Cape Town, convening policymakers, technologists and civil society to discuss interoperable digital IDs, payments and data-sharing platforms, with outcomes intended to “shape the global DPI agenda” and inform pilot programmes across Africa. (World Bank)
The summit will channel financing and technical assistance towards DPI projects, influencing which standards and architectures gain traction, and encouraging governments to think in terms of shared, interoperable building blocks rather than siloed e-government systems. This could accelerate deployment of foundational IDs, instant payments and data exchanges, with significant effects on how citizens and businesses access services.
In October/November 2025, UCL’s Institute for Innovation and Public Purpose published the “2025 State of Digital Public Infrastructure Report”, proposing ways to measure DPI prevalence and interoperability as systems move from pilots to scale, and analysing governance models across several countries. (University College London)
By offering metrics and case studies, the report will influence how donors, governments and civil-society groups assess DPI maturity, potentially shaping investment decisions and reform priorities; it also highlights risks around exclusion, vendor lock-in and weak governance.
In late November 2025, India’s power ministry convened its Energy Stack taskforce to fast-track design of DPI for the energy sector, while at the G20 Johannesburg summit on 23 November, the Prime Minister proposed an alliance for digital infrastructure collaboration among India, Brazil and South Africa. (The Economic Times)
Energy-sector DPI could improve grid management, billing and market transparency, creating new markets for data and digital services and attracting investment into smart-grid technologies; the proposed alliance signals an ambition to export DPI know-how and coordinate standards among major Global South economies.
On 5 November 2025, Microsoft and Abu Dhabi-based G42 announced a 200MW expansion of data-centre capacity in the UAE, framed as supporting AI and cloud demand under national digital and AI strategies. The move adds to a wave of digital-infrastructure investments positioning the Gulf as a regional data-centre hub. (Reuters)
The expansion strengthens the UAE’s attractiveness for AI startups and SaaS providers seeking low-latency infrastructure in MENA, potentially drawing FDI and skilled workers while increasing energy and water demand linked to cooling. It also underscores the strategic importance of regulatory frameworks for data localisation, security and sustainability in attracting hyperscale investments.
On 26 November 2025, Reliance Industries and partners Brookfield and Digital Realty announced plans to invest US$11 billion over five years in an AI-native data-centre campus in Visakhapatnam, targeting 1GW of AI compute capacity and complementing Google’s own AI hub in the city. (Reuters)
The project will significantly reshape India’s data-centre landscape, attracting AI startups, cloud workloads and potentially foreign AI labs, while creating jobs in construction, operations and associated services; it will also increase pressure on local energy infrastructure and could influence regional power prices and carbon emissions.
On 20 November 2025, India’s NABARD and the Internet and Mobile Association of India launched the Earth Summit 2025–26 in Hyderabad, a three-part conference series aimed at “empowering rural innovation for global change”, with strong emphasis on digital public infrastructure, agritech, climate-tech and startup–investor matchmaking. (The Times of India)
The summit will channel attention and capital towards rural-focused startups and DPI-enabled services (e.g. digital credit, agronomic advice, climate risk tools), potentially improving rural livelihoods while expanding new markets for fintechs and platform providers; it also offers a stage for policy announcements and pilots that could later scale nationally.
On 4 November 2025, Skills England released the “AI skills for the UK workforce” report (first published 29 October) and a related initiative to help businesses address an estimated £400bn AI skills gap, providing tools for workforce planning and training across ten growth sectors. (GOV.UK)
The report and tools will shape how UK employers design training programmes and hiring strategies, increasing demand for AI-literate workers and training providers and potentially widening wage gaps between those with and without advanced digital skills; they may also influence immigration and education policy.
In late November 2025, the National Foundation for Educational Research reported that between one and three million UK jobs could disappear by 2035 due to AI and automation, with administrative, customer-service and machine-operation roles particularly exposed, and called for accelerated reskilling efforts. (NFER)
These findings will fuel debates on social protection, education and industrial policy, and may push employers and unions to negotiate new training and redeployment schemes; they also highlight potential regional inequalities where at-risk jobs are concentrated. Investors and firms may anticipate sectoral shifts in labour costs and availability, influencing location decisions.
Jobs and Skills Australia’s 2025 Jobs and Skills Report, highlighted in November communications, finds that generative AI is largely augmenting work rather than replacing it, increasing demand for digital literacy and “human” skills, and underscores the need to modernise skills pathways and unlock inclusive participation. (Jobs and Skills Australia)
The report supports a shift in policy focus towards lifelong learning and blended digital–human roles, which may reshape vocational education, employer training investments and migration settings; it also reassures some sectors that AI can support productivity without mass lay-offs if managed well.
Canada’s new Minister for AI kicked off a 30-day sprint (1-31 Oct 2025) to define a refreshed national AI strategy. The consultation covers safety, compute, skills, commercialization, public sector adoption, and worker transition.
The sprint is expected to shape funding, compute policy, and procurement preferences. Canada is signaling that “safe AI made in Canada,” supported by domestic talent and infrastructure, will be favored in government and regulated sectors starting in 2026.
NIST is consulting on standardized AI documentation (“nutrition labels”) for datasets and models. The draft covers training data origin, intended use, safety testing, evaluation, maintenance, and governance roles, aiming for consistent, auditable disclosure.
If this standard lands, US-origin AI systems may ship with comparable documentation. Buyers get easier due diligence and comparability; providers get higher liability if they misstate safety or bias controls. Over time this could become a global baseline.
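The draft's coverage areas (training data origin, intended use, safety testing, evaluation, maintenance, governance roles) lend themselves to machine-readable disclosure. A minimal sketch of what such a label could look like, using hypothetical field names rather than NIST's actual draft schema:

```python
from dataclasses import dataclass, asdict

# Illustrative "nutrition label" for an AI model. Field names are
# hypothetical shorthand for the draft's coverage areas, not NIST's
# actual draft schema.
@dataclass
class ModelLabel:
    model_name: str
    training_data_origin: str   # provenance of training datasets
    intended_use: str           # applications the model is meant for
    safety_testing: list        # safety evaluations performed
    evaluation_metrics: dict    # benchmark/evaluation results
    maintenance_policy: str     # update and deprecation commitments
    governance_roles: dict      # who is accountable for what

def required_fields_present(label: ModelLabel) -> bool:
    """Completeness check of the kind a buyer's due-diligence
    tooling might run over a disclosed label."""
    return all(bool(value) for value in asdict(label).values())
```

A buyer could run a check like this over every vendor disclosure during due diligence; as the summary notes, misstated fields would then carry clearer liability.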
The UK has announced plans for an “AI Growth Lab” sandbox and a concierge service aimed at high-growth AI and fintech firms. The proposal, currently at the call for evidence stage, envisions a framework where companies could test advanced AI applications in finance under regulatory supervision while receiving tailored support with authorisation, relocation, and capital-raising. A concierge service in this context refers to a dedicated support mechanism designed to guide companies through regulatory processes and connect them with relevant public and private sector resources, essentially offering a single point of contact.
This is industrial policy plus risk management. The UK is trying to pull AI/fintech firms, capital, and highly skilled jobs into the UK, while giving supervisors early visibility into high-impact AI in payments, lending, and trading so systemic risk is managed before it spills into markets.
CISA ordered all US civilian federal agencies to inventory, lock down, and patch vulnerable F5 BIG-IP devices after F5 confirmed a long-term nation-state intrusion and theft of source code and unpublished vulnerability data. F5 BIG-IP devices are network appliances used for traffic management, load balancing, and security. Agencies have to take management interfaces offline, decommission unsupported kit, and apply fixes on an accelerated timeline.
CISA is treating a vendor compromise as a systemic national security risk. Agencies (and anyone using those devices) now face hard deadlines and board-level scrutiny for asset inventory and patch discipline. This sets a precedent for bold and centralized response to supply chain breaches.
In a ministerial letter on cyber security, UK ministers and the National Cyber Security Centre warned major UK companies that hostile cyber activity is a direct economic security threat. Boards were told to rehearse “worst case” business continuity, sign up to Early Warning, and require Cyber Essentials across their supply chains.
Cyber moves from “IT’s problem” to a director-level obligation. The UK is signaling future legal duties for boards and suppliers, and tying cyber resilience to national economic security, not just compliance checklists.
The Reserve Bank of India (RBI) said it is widening its retail “digital rupee” pilot into more banks and more states, using a sandbox to test usability, offline capability, and fraud risk before any national rollout.
India stays in the first wave of large economies piloting a live, retail CBDC. Banks, payment firms, and merchants in pilot regions gain first-mover experience with state-backed programmable money, but also take on fraud-monitoring and compliance costs.
The Financial Stability Board gave G20 ministers an updated roadmap for cross-border payments. The plan leans on ISO 20022 adoption and instant payment interoperability to cut cost and settlement time for remittances and SME trade finance by 2027.
Banks, fintechs, and payment processors are now under pressure to invest in real-time rails, compliance tooling, and shared messaging standards. Cheaper, faster, more transparent cross-border payments would directly benefit SMEs and migrant workers.
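The practical argument for ISO 20022 is that structured payment data is far cheaper to screen and reconcile than free text. A simplified illustration of the contrast; the keys below are shorthand for the kinds of fields involved, not actual ISO 20022 element names:

```python
# Simplified illustration of the richer, structured data ISO 20022-style
# messages can carry versus a legacy free-text payment instruction.
# Keys are shorthand, not actual ISO 20022 element names.
legacy_payment = {
    "amount": "1500.00 USD",
    "narrative": "INV 4471 ACME LTD RENT OCT",  # unstructured free text
}

structured_payment = {
    "amount": {"value": "1500.00", "currency": "USD"},
    "debtor": {"name": "ACME Ltd", "country": "GB"},
    "creditor": {"name": "Propco Inc", "country": "US"},
    "purpose_code": "RENT",                # standardised purpose code
    "remittance": {"invoice": "4471", "period": "2025-10"},
}

def screening_fields(msg: dict) -> list:
    """With structured data, compliance tooling can pick out exact
    fields (countries, purpose) instead of parsing free text."""
    return [msg["debtor"]["country"],
            msg["creditor"]["country"],
            msg["purpose_code"]]
```

Sanctions screening, fraud scoring, and automated reconciliation all become lookups rather than text parsing, which is where the projected cost and settlement-time savings come from.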
The European Supervisory Authorities (ESAs – European Banking Authority, European Insurance and Occupational Pensions Authority, and the European Securities and Markets Authority) issued a warning to consumers, reminding that crypto-assets can be risky and that legal protection, if any, may be limited depending on which crypto-assets they invest in. This warning is accompanied by a factsheet explaining what the new EU regulation on Markets in Crypto-Assets (MiCA) means for consumers. The ESAs recommend concrete steps consumers can take to make informed decisions before investing in crypto-assets, such as checking if the provider is authorized in the EU.
This could raise consumer awareness. If supervision converges at the EU level under MiCA, crypto firms will face more uniform oversight and less room to play national regulators off each other.
During the “Joint Spectrum Management and Knowledge Exchange” workshop in Kigali, Eastern African regulators and ICT ministries agreed to fast-track coordinated spectrum management and policy convergence. The goal is affordable, high-quality broadband and a “Single Digital Market for Eastern Africa,” backed by the World Bank’s Eastern Africa Regional Digital Integration Project.
Harmonizing spectrum and licensing reduces interference and regulatory uncertainty, which lowers costs for operators and encourages 4G/5G rollout and cheaper roaming. For consumers and SMEs this should mean cheaper data and better coverage; for investors it signals a more predictable regional market instead of fragmented national rules.
G20-related messaging reframed African digital policy as core to attracting private capital and moving into higher-value manufacturing and services. Regulatory convergence on data, digital trade, and industrial digitalization is pitched as a precondition for serious investment.
This links regulatory predictability to FDI. Donors and development banks are likely to tie infrastructure money to commitments on shared digital rules, spectrum policy, and cross-border data flows.
In Kampala, the EAC and the governments of Kenya and Uganda pitched a 193 km cross-border expressway plus upgraded one-stop border posts. The project is intended to cut travel time, ease congestion, and strengthen cross-border trade along the Northern Corridor between Kenya and Uganda.
Modernized border posts plus harmonized customs and digital processes are meant to cut transit time for goods and make it easier for SMEs to sell regionally without navigating multiple paperwork regimes. That reduces logistics costs and supports regional value chains.
The European Parliament’s Internal Market and Consumer Protection Committee (IMCO) backed a report urging a default EU-wide minimum age of 16 for social media, video platforms, and AI chat “companions,” unless parents authorize it. It also calls for bans on “addictive design” like infinite scroll and loot boxes (gambling-like mechanisms in games) for minors, and tougher DSA enforcement.
Platforms may need full UX redesigns for EU teens (age gates, ad targeting limits, no endless scroll). Senior management could face personal liability for repeat failures. This raises compliance cost, and could reshape youth engagement and ad revenue models.
The European Commission preliminarily found that Instagram and Facebook breached EU law by failing to provide users with simple ways to complain or flag illegal content, including child sexual abuse material and terrorist content.
The Commission is signaling it will actually use DSA fining and ban powers. Platforms will likely roll out “safety-by-default for teens” EU-wide, and may replicate it globally because running two codebases (safe EU/unsafe rest of world) is risky.
Australia launched a national campaign to prepare parents and platforms for a law taking effect 10 Dec 2025 that will require platforms to take “reasonable steps” to block under-16s. The eSafety Commissioner will judge whether age assurance is good enough.
Liability for underage accounts shifts to platforms, not parents. Platforms now have a two-month countdown to prove effective age assurance at scale. This accelerates deployment of age checks globally, and normalizes the idea that age gating is part of basic platform compliance.
The Global System for Mobile Communications Association (GSMA) and major African mobile operators such as MTN and Orange pushed coordinated tax/levy changes and financing models to get sub-$30 4G smartphones into low-income hands, arguing affordability is now the main barrier to getting people online.
Cheaper smartphones plus lighter consumer taxes could rapidly expand the addressable market for mobile broadband, e-government, e-commerce, and mobile money among lower-income users. That boosts demand for spectrum, towers, and local digital services.
China has finalized draft rules that streamline approval for exporting “ordinary” data, shifting from blanket localization toward more predictable, risk-based reviews for multinationals. Sensitive data will still face strict security review.
Easing routine data export is meant to calm foreign investors who warned strict localization was choking normal operations. Multinationals could cut duplication of infrastructure, but still face security review for sensitive classes of data.
An invitation-only, hands-on testing event brought together national implementers and vendors to align on common guidelines and accelerate cross-border rollout of the EU Digital Identity (EUDI) Wallet ahead of the end-2026 availability deadline for all Member States.
The Wallet creates a common, high-trust ID layer that can plug into payments, e-government, and online safety without over-sharing data. This sets expectations for private-sector onboarding and KYC across the EU.
Ofcom confirmed the results of the UK’s latest spectrum auction. The regulator framed the new assignments as foundational for 5G capacity, national resilience, and the reliability of digital ID, payments, online safety tech, and smart logistics.
More 5G capacity supports stable, nationwide digital ID checks, payments, safety tech, and industrial IoT. It also improves rural and industrial coverage, which matters for supply chain resilience.
Dubai announced a government-backed AI platform, an accelerator for frontier AI startups (“Unicorn 30 program”), a dedicated venture fund, and promises of fast-track licensing and regulatory support so founders can build and deploy high-impact generative AI quickly.
Dubai is bundling capital, regulatory access, and first-customer demand (government and finance) in one package. This is attractive for founders who want fast deployment, but it puts pressure on the city to manage AI safety and reputational risk in real time.
UK finance stakeholders (including the Financial Services Skills Commission and HM Treasury) opened a call for evidence on how AI will reshape roles, required skills, and productivity in financial services over the next 5-10 years. The review looks at reskilling needs, which tasks AI should automate, and how to stay globally competitive without mass displacement.
This folds workforce planning and productivity into AI policy for a key export sector. Banks and fintechs may be pushed to fund reskilling and talent transition, not just cost-cut with automation, which matters for wages and long-term competitiveness.
In Nairobi, IGAD labor and interior ministers issued a communiqué linking fair recruitment, skills recognition, and predictable labor migration frameworks to shared prosperity in the region. The goal is to formalize labor mobility across borders.
The communiqué reframes labor mobility as an economic growth tool, not just a social file. Predictable cross-border movement of skilled workers (including digital professionals) is treated as essential for regional integration and investment.
China has issued new Measures on National Cybersecurity Incident Reporting that force operators of “key infrastructure” to grade breaches and notify authorities within 60 minutes, and within 30 minutes for cases deemed a threat to national security or social stability, with the rules taking effect on 1 November 2025.
Operators in telecoms, cloud, finance, and platforms must build 24/7 monitoring and legal response teams or face harsh penalties, which raises compliance cost but gives the state near real-time visibility into cyber risk and data leaks.
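The reported thresholds translate into a simple deadline rule. A minimal sketch assuming only the 30/60-minute figures from the summary above, not the Measures' actual grading scheme:

```python
from datetime import datetime, timedelta

# Illustrative deadline rule from the reported thresholds: 30 minutes
# where an incident is deemed a threat to national security or social
# stability, 60 minutes otherwise. A sketch, not the Measures' actual
# grading scheme.
def notification_deadline(detected_at: datetime,
                          national_security_risk: bool) -> datetime:
    minutes = 30 if national_security_risk else 60
    return detected_at + timedelta(minutes=minutes)
```

An incident detected at 09:00 would need to be reported by 09:30 or 10:00 depending on its grading, which is why the summary stresses 24/7 monitoring and standing legal response teams.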
The UN Secretary-General opened a global call for experts to sit on a new 40-member Independent International Scientific Panel on AI, mandated by a UN General Assembly resolution to provide evidence-based assessments of AI risks and opportunities across regions.
The Panel aims to become a neutral scientific reference point for AI safety, equity, and development, which could shape export controls, safety testing baselines, and funding priorities for AI infrastructure, especially for countries that lack their own large safety institutes.
The UK government has announced plans to begin rolling out a smartphone-based digital ID, starting with a digital Veteran Card for 1.8m ex-service personnel, and says a free national digital ID will become mandatory for Right to Work checks by the end of this Parliament, with plans to extend digital credentials (licences, passports) by 2027.
Ministers pitch this as convenience, fraud control, and faster access to services, but civil liberties groups warn that centralising identity in one app creates a major hacking target and could enable mission creep or exclusion of vulnerable groups without smartphones.
The European Commission has begun drafting guidance and a Code of Practice to help providers and deployers label AI-generated or manipulated content in accordance with the AI Act (Art. 50 AI Act).
Expect new implementation guidance affecting how media, advertising, and platform services label AI-generated output.
Over 200 scientists, political leaders, and cultural figures signed a global appeal to set boundaries on AI use under the Global Call for AI Red Lines initiative, which seeks an international agreement on applications that should never be pursued. Signatories include Nobel laureates, former heads of state, and AI researchers, alongside OpenAI co-founder Wojciech Zaremba and authors Yuval Noah Harari and Stephen Fry. The appeal highlights risks to society and human rights and proposes prohibiting AI applications that could threaten democracy, security, or public safety.
Influences national stances on prohibited AI practices; could shape compliance scoping.
Italy became the first EU country to adopt a national AI law aligned with the EU AI Act; it assigns oversight to the Agency for Digital Italy (AgID) and the National Cybersecurity Agency and introduces penalties, including for harmful deepfakes.
Sets strong compliance baseline for AI deployers in Italy; raises governance expectations for workplaces/health/education; potential spillovers across EU market.
Singapore’s Ministry of Law released a draft guide covering confidentiality, data handling, and safe GenAI use by legal professionals.
Shapes sector-specific AI governance; firms and lawtech providers must align tooling and policies for client data and privilege.
A World Bank note (5 September) sets out practices to bolster the resilience of national ID ecosystems.
Promotes stronger controls, reducing outage/fraud risks in DPI; potential procurement and standards impacts.
The US Department of Defense (DoD) finalised its rule; from November 2025, new contracts will require compliance with the Cybersecurity Maturity Model Certification (CMMC), including third-party assessments for many suppliers.
Tightens minimum cyber standards across a vast global supply chain; exporters to DoD must upgrade controls and evidence.
The Hong Kong Monetary Authority (HKMA) expanded its anti-fraud priorities to include Authenticate-in-App, retiring unused functions, cancelling suspicious payments, and enhancing deepfake detection.
Banks and payment service providers must adjust controls and customer journeys; expect a drop in authorised push payment (APP) fraud and tighter real-time risk scoring.
Major UK banks (e.g. HSBC, Lloyds) are advancing plans to issue tokenised deposit products in 2026, encouraged by Bank of England guidance on stablecoins.
This could transform payment rails, reduce transaction costs, and enable novel financial services (e.g. programmable money).
The EU is preparing to exclude major U.S. tech firms (Meta, Apple, Google, Amazon) from its upcoming Financial Data Access (FiDA) regime, limiting who can tap into consumer banking data.
This shields banks and fintechs from disintermediation, shaping competition dynamics in digital finance.
The European Commission launched a digital package with a call for evidence on 16 September 2025. The digital omnibus forms part of a broader simplification agenda covering the EU’s “data acquis”, rules on cookies and other tracking technologies, cybersecurity-related incident reporting, targeted adjustments to the Artificial Intelligence Act, and aspects of electronic identification and trust services.
The Commission intends to adopt the proposal in late 2025 and aims to reduce administrative burdens for companies and SMEs.
Thailand’s Office of the Trade Competition Commission (OTCC) opened a consultation (closed 18 September) on guidelines addressing monopolistic conduct and restrictions in multi-sided e-commerce platforms.
Signals forthcoming platform conduct rules (e.g., parity clauses, self-preferencing analogues); marketplaces and large sellers may face new constraints.
The bill defines “adultisation” (exposure of minors to sexualised content/behaviour) and prohibits monetisation of such content; applies to platforms, social networks and content providers.
Would impose stricter moderation and monetisation controls; advertising and creator platforms face elevated liability/oversight if passed.
Albanian Prime Minister Edi Rama presented “Diella”, an AI system designated as a cabinet-level “minister” to handle public procurement and promote transparency.
The move could rewire procurement workflows and vendor bidding processes; raises governance, oversight and accountability questions for AI in public administration.
New ministerial leadership signals continuity/acceleration of Rwanda’s digital strategy and innovation agenda.
Policy continuity supports investor confidence and programme delivery (skills, startups, infrastructure).
The US Administration announced a restructuring for TikTok’s U.S. business with ByteDance holding <20%, a U.S.-majority 6:1 board, and Oracle-led security oversight (hosting/audits). Enforcement of the divest-or-ban law was extended to Dec 16, 2025 to finalize the deal.
Establishes a divestiture-plus-assurance model combining ownership limits, local governance, and continuous auditability; this is likely to raise compliance costs for high-risk, foreign-owned platforms and to shape future U.S. and international approaches to foreign-investment and technology-security reviews.
The European Commission fined Google €2.95 bn (~$3.5 bn) for abusing dominance by favoring its ad-tech stack (AdX). Google must stop the conduct and propose fixes within 60 days; structural remedies remain on the table.
Accelerates unbundling and behavioral changes across ad-buying/selling tools; increases reporting and transparency duties; creates potential divestment pressure if remedies fall short, with implications for publishers, advertisers, and smaller ad-tech rivals.
The European Data Protection Board (EDPB) has published guidelines clarifying overlaps (e.g., profiling, special categories of data) between the DSA and the GDPR (the EU Digital Services Act and the General Data Protection Regulation).
Clarifies compliance for Very Large Online Platforms (VLOPs)/hosts, reduces legal ambiguity; may change consent/processing practices; public consultation to follow.
Guidance published on 16 September sets expectations ahead of the under-16 ban taking effect 10 December 2025; it warns against blanket age verification for all users.
Platforms must implement proportionate age-assurance; compliance costs and product changes; significant for youth online safety.
The Office of Communications (Ofcom) issued an industry bulletin and opened investigations into porn sites’ child-safety duties.
Raises compliance risk for non-conforming services; accelerates age-assurance adoption in UK market.
Report details investment gaps and policy actions to accelerate inclusive broadband across the continent.
Informs national broadband plans and donor/PPP pipelines; guides spectrum/planning priorities; benefits consumers/SMEs.
A bill to amend the Network Act (ICT Network Use & Information Protection) was tabled, proposing procedural requirements for requests to remove privacy-infringing or defamatory content.
Sets clearer due-process standards for takedowns; platforms face new notice/appeal workflows and record-keeping; compliance design changes needed if enacted.
Procon-SP issued a guide explaining bettors’ rights/obligations and safe practices under Federal Law 14,790/2023, including complaint pathways.
Improves user safeguards and complaint handling in fast-growing betting markets; operators must update disclosures and redress mechanisms.
Decree 12,622/2025 makes Brazil’s ANPD an autonomous administrative authority for online child protection and assigns competencies including executing court blocking orders tied to violations.
Concentrates enforcement and guidance in the ANPD; raises compliance stakes for platforms and adtech; clearer authority for court-ordered blocks.
The draft measures would require platforms to assess their impact on minors and submit reports within 20 working days; criteria include user scale, usage patterns, spend, and sector influence.
Raises duty-of-care and reporting obligations on large platforms; could trigger product and policy changes for youth safety.
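The 20-working-day reporting window is a concrete date calculation. A minimal illustration that counts Monday to Friday only and ignores public holidays, which the actual measures would presumably account for:

```python
from datetime import date, timedelta

# Illustrative 20-working-day window calculator. Counts Monday-Friday
# only; real compliance timelines would also account for public holidays.
def add_working_days(start: date, n: int) -> date:
    current = start
    added = 0
    while added < n:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 ... Fri=4
            added += 1
    return current
```

Under this simplification, a request received on Monday 3 November 2025 would give a 20-working-day deadline of Monday 1 December 2025.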
Following a six-month inquiry into social media harms (incl. TikTok), MPs recommended banning social media for under-15s and imposing night-time curfews for ages 15-18, plus stronger enforcement and parental responsibility measures; government response pending.
If legislated, platforms would need robust age-assurance, time-of-day enforcement, and expanded parental controls; could influence EU-level debates on minors’ protections and cross-border service obligations.
The EU Data Act, part of the EU’s data strategy, started to apply from 12 September 2025. The law gives consumers and business users of connected devices (such as smart cars, industrial machinery and smart TVs) the right to access and share the data generated by those devices.
Manufacturers and service providers in the EEA must open up device-generated data to end-users and, on request, to third-party providers. The availability of such data will significantly impact the economy. For example, data generated by connected products and related services can be used to boost aftermarket and ancillary services as well as to create entirely new services, benefiting both businesses and consumers.
Indonesia said the nine-year Comprehensive Economic Partnership Agreement (CEPA) talks with the EU have concluded, with signing expected 23 September; digital trade provisions are part of the package.
Improves market access and services trade, likely easing e-signatures/data rules for EU–Indonesia flows; firms anticipate reduced barriers in 2026–27 post-ratification.
The European Commission launched steps toward a data-adequacy decision for Brazil.
If finalised, lowers transfer friction for EU–Brazil data flows; boosts market access for cloud/AI/AdTech; timelines depend on assessment.
European Data Protection Supervisor (EDPS) called for comprehensive safeguards in the EU’s mandate for a security-screening data-exchange deal with the US.
Could shape conditions for sensitive cross-border data sharing; firms handling screening/ID data face heightened safeguards and oversight.
The Konektadong Pinoy/Open Access Act took effect, mandating competitive, open segments across data transmission networks and registration with the National Telecommunications Commission.
Reduces entry barriers and boosts network investment/competition; improves connectivity for consumers/SMEs; clearer market access for foreign investors via wholesale openness.
The Nigeria Data Protection Commission (NDPC)’s General Application & Implementation Directive (GAID) took effect, detailing conditions for international data transfers (adequacy, approved instruments, or other lawful bases).
Tightens transfer governance and certainty for firms moving data in/out of Nigeria; vendors must align contracts, transfer impact assessments (TIAs) and safeguards immediately.
Following adoption of the revised eIDAS Regulation, several EU Member States began piloting the European Digital Identity (EUDI) wallet in September 2025. The pilot allows citizens to store digital credentials (IDs, driving licences, professional qualifications) and use them across borders for public and private services.
By standardising identity and trust services, the EUDI wallet aims to facilitate cross‑border e‑commerce, financial services and public‑sector interactions. Businesses will need to integrate wallet‑based authentication and could reduce fraud and onboarding costs.
The South African government outlined plans for a unified digital ID to streamline access to public and private services.
Improves KYC and e-government onboarding; creates integration demand for identity-verification (IDV) vendors and interoperability standards across sectors.
Royal Oman Police confirmed certified e-ID and e-licence are legally recognised nationwide.
Accelerates digital public/private transactions; reduces reliance on physical docs; impacts onboarding/remote verification markets.
New OECD report maps AI use across 11 core government functions and scaling challenges.
Informs procurement, risk management and capability building for gov AI deployments; opens vendor opportunities.
The Spanish government proposes a delegated regulation to establish a common rating scheme for data centre sustainability and mandate energy-performance criteria for new and existing facilities.
This raises the compliance burden for operators, may increase the cost of green data hosting, and steers investment towards efficient infrastructure.
The European Commission held hybrid workshops to gather stakeholder input on the Green Deal Data Space, combining environmental data with digital services to accelerate decarbonisation and promote a green data economy.
This accelerates cross-sector data integrations, lowers barriers for green innovation, and may enable new services (e.g. supply chain carbon tracking).
The EU Energy Commissioner announced new measures to boost research, funding, and partnerships for energy-efficient technologies. A dedicated data centre energy efficiency package will be introduced alongside the Strategy Roadmap on Digitalisation and AI in Q1 2026.
Helps SMEs digitalise energy management; reduces operating costs and emissions; accelerates uptake of green digital tools.
The OECD convened stakeholders on the impacts of agentic AI for SMEs and possible policy support measures.
Shapes upcoming recommendations and funding priorities for SME AI uptake.
The Indian government expanded its Startup India programme by enhancing access to public procurement opportunities and increasing credit guarantee support for innovation firms.
Boosts financing and market access for early-stage firms, raises competition, and encourages innovation.
Nineteen data protection authorities issued a joint statement supporting trustworthy data governance for AI.
Signals convergence on AI governance expectations (lawfulness, transparency, accountability) impacting cross-border AI deployments.
Nigeria launched a service-wide plan to upskill civil servants for 21st-century workflows.
Boosts public-sector efficiency; creates demand for training vendors; may streamline e-government delivery.
The US announced new commitments backing AI education efforts.
Expands training content and funding; creates potential demand growth for EdTech; generates indirect global spillovers.
The Ministry of Interior added a digital check to verify "Visa 20" status before hiring, reducing duplicate applications and processing delays.
Streamlines employer onboarding and compliance; lowers administrative burden and fraud risk; improves service predictability.
The government published a dedicated inclusion policy (with related licences and open-data materials), signalling near-term implementation steps.
Helps target underserved groups; informs regulator and ministerial programmes and donor coordination; boosts uptake of e-services.
“The DCO Policy Tracker is provided by the Digital Cooperation Organization (DCO) for general informational purposes only and compiles publicly available news and descriptive content sourced from third-party entities.
While DCO endeavors to reference sources it considers reputable and to provide timely information, it makes no representations or warranties as to the accuracy, completeness, reliability, or currency of the information and does not independently verify, endorse, or assume responsibility for such content.
DCO shall not be responsible for any errors or omissions and assumes no liability for any reliance placed on the information provided; users are solely responsible for consulting original sources and independently assessing credibility and suitability.
The content presented does not reflect the official position, views, or decisions of the Organization and shall not be construed as legal, policy, technical, financial, or other professional advice, recommendation, endorsement, or analysis.”