DCO AI Ethics Evaluator (The Tool):

A comprehensive digital policy tool designed to help individuals and organizations systematically assess and address ethical considerations in their AI systems, with a particular focus on human rights risks. Grounded in the DCO Principles for Ethical AI, the tool categorizes risks into six core areas and provides a structured framework for risk assessment.

The tool’s primary feature is its risk assessment mechanism: users, identified as either developers or deployers, answer tailored questions that assess the severity and likelihood of potential risks across the six categories. Based on their responses, the tool generates a visual risk profile, including radar charts that highlight high-priority risk areas. It then delivers targeted recommendations for appropriate safeguards and controls, aligned with international human rights standards and scaled to the identified risk levels. Finally, a downloadable report is provided to support integration into existing workflows.
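The severity-and-likelihood scoring flow described above can be illustrated with a minimal sketch. The category names, the 1–5 rating scales, the severity × likelihood scoring rule, and the tier thresholds below are illustrative assumptions, not the tool's actual questionnaire or methodology; the sketch only shows how per-category answers could be aggregated into radar-chart-ready data with tiers that drive scaled recommendations.

```python
from dataclasses import dataclass

# Hypothetical placeholders: the DCO tool defines six core risk areas,
# but their exact names are not given here, so these labels are assumed.
CATEGORIES = [
    "privacy", "fairness", "transparency",
    "accountability", "safety", "human_oversight",
]


@dataclass
class CategoryAssessment:
    severity: int    # assumed scale: 1 (negligible) .. 5 (severe)
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)

    @property
    def score(self) -> int:
        # Assumed scoring rule: risk score = severity x likelihood (max 25).
        return self.severity * self.likelihood


def risk_tier(score: int) -> str:
    """Map a numeric score to an illustrative low/medium/high tier."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


def build_risk_profile(answers: dict[str, CategoryAssessment]) -> dict[str, dict]:
    """Turn per-category answers into radar-chart-ready scores plus a tier
    that could be used to scale the recommended safeguards."""
    return {
        category: {"score": a.score, "tier": risk_tier(a.score)}
        for category, a in answers.items()
    }


if __name__ == "__main__":
    # Example responses: moderate answers everywhere, one elevated area.
    answers = {c: CategoryAssessment(severity=3, likelihood=2) for c in CATEGORIES}
    answers["privacy"] = CategoryAssessment(severity=5, likelihood=4)

    for category, result in build_risk_profile(answers).items():
        print(f"{category:16s} score={result['score']:2d} tier={result['tier']}")
```

In this sketch the per-category scores would feed the radar chart axes, while the tier labels determine which set of recommendations is surfaced in the downloadable report.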

The DCO AI Ethics Evaluator stands out for its dedicated focus on systematically embedding human rights protections into AI development and deployment. The Tool uniquely integrates a human rights-centered perspective across a wide range of AI use cases, ensuring that ethical AI governance is accessible, practical, and globally interoperable for DCO Member States and beyond.

Access the DCO AI Ethics Evaluator

DCO AI Ethics Evaluator (Guidance Document): This document serves as a user guide for the DCO AI Ethics Evaluator tool, outlining the rationale behind its development and offering a detailed overview of its components to support effective use.

