
The Digital Rights Foundations Database

Published on 19 November 2025

Overview

The Digital Rights Foundations Database is a component of a broader dataset designed to review the state of digital rights in countries around the world. It provides a structured overview of legal and institutional protections related to four key rights in the digital context: the right to access, the right to privacy, the right to freedom of expression and assembly, and the right to non-discrimination, along with certain foundational cross-cutting factors relevant to all digital rights. Focusing on treaties, constitutions, legislation, binding legal decisions, and oversight institutions, the database highlights whether foundational legal safeguards exist, offering a publicly accessible starting point for understanding how digital rights are protected, or left vulnerable, through both international commitments and domestic law.

Methodology

The Digital Rights Foundations Database was compiled using large language models (ChatGPT, Gemini, and DeepSeek) with expert human verification. Given the evolving nature of legal and policy environments, some information may become outdated. Readers who identify inaccuracies or have additional sources to contribute are encouraged to contact the project team using the details at the bottom of the page.

Step 1

AI-Assisted Data Compilation 

In the initial phase, relevant legal and institutional data were extracted using large language models — specifically ChatGPT 4.5, Gemini 2.5, and DeepSeek. Countries were grouped in batches of ten and processed with a standardized prompt designed to generate structured, table-formatted responses. These responses addressed legal protections related to four core digital rights: access, privacy, freedom of expression and assembly, and non-discrimination.
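
To illustrate the batching step described above, the following Python sketch shows how a country list might be split into groups of ten and sent to each model with a single standardized prompt. The prompt wording, the query_model helper, and the data structures are illustrative placeholders rather than the project's actual materials.

    from textwrap import dedent

    BATCH_SIZE = 10  # countries were processed in batches of ten

    PROMPT_TEMPLATE = dedent("""\
        For each of the following countries, answer the predefined questions on the
        rights to access, privacy, freedom of expression and assembly, and
        non-discrimination. Cite specific laws, legal decisions, and constitutional
        provisions, and format the answers as a table.

        Countries: {countries}
        """)

    def batch(items, size=BATCH_SIZE):
        """Yield successive fixed-size batches from a list of country names."""
        for start in range(0, len(items), size):
            yield items[start:start + size]

    def compile_batches(countries, models, query_model):
        """Send the same standardized prompt to every model for each batch."""
        responses = {}
        for group in batch(countries):
            prompt = PROMPT_TEMPLATE.format(countries=", ".join(group))
            for model in models:
                # query_model stands in for the relevant model's API call
                responses[(tuple(group), model)] = query_model(model, prompt)
        return responses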

General queries: For each batch, a single prompt was issued containing the country list and a series of predefined questions drawn from the project’s database. The prompt explicitly requested that responses cite specific laws, legal decisions, and constitutional provisions, formatted as a table. If any cells were left unfilled, a follow-up prompt was sent to the same model, asking why the information was missing and whether a relevant law or provision existed but had not initially been surfaced. This allowed for a basic check of logical consistency in the outputs. For instance, Gemini initially returned an incomplete table for The Gambia; a targeted follow-up clarified that while the Information and Communications Act governs online speech, outdated sedition laws still apply, a significant legal caveat that would otherwise have been overlooked.
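
As a minimal sketch of the follow-up check, the function below re-queries the model about every empty cell in a parsed table, assuming the table has been read into a dictionary keyed by country and question; ask_followup and that structure are assumptions, not the project's actual tooling.

    def flag_missing_cells(table, ask_followup):
        """Re-query the same model about every empty cell and keep its explanation."""
        clarifications = {}
        for (country, question), answer in table.items():
            if not answer or not answer.strip():
                followup = (
                    f"For {country}, the cell for '{question}' was left empty. "
                    "Does a relevant law or provision exist that was not surfaced, "
                    "or is there genuinely no such provision? Explain briefly."
                )
                clarifications[(country, question)] = ask_followup(followup)
        return clarifications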

For the column concerning National Human Rights Institutions (NHRIs), the UN accreditation document was uploaded alongside the prompt. The LLM was instructed to extract both the NHRI’s name and its classification from this document. If the document did not contain relevant data, the model was prompted to search online and flag the result as not UN-verified. This two-step process helped improve the reliability of institutional data while maintaining transparency around source limitations.
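
The two-step NHRI lookup can be sketched as follows; extract_from_accreditation_doc and search_online are hypothetical helpers standing in for the document-based extraction and the fallback web search.

    def lookup_nhri(country, extract_from_accreditation_doc, search_online):
        """Prefer the uploaded UN accreditation document; fall back to a web search."""
        record = extract_from_accreditation_doc(country)  # NHRI name and classification
        if record is not None:
            return {**record, "un_verified": True}
        # No entry in the accreditation document: search online instead and flag
        # the result as not UN-verified to keep the source limitation visible.
        record = search_online(country)
        return {**record, "un_verified": False} if record else None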

In parallel, ratification data for relevant international human rights treaties and ILO conventions were collected from the UN Human Rights Treaty Body Database and the NORMLEX Information System on International Labour Standards. Country names were standardized, and an automated process was used to determine ratification status and dates, recorded as “Yes” (with date) or “No” in the corresponding columns.
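
The automated ratification-status step can be summarized with the sketch below. The alias entries and the input structure (a mapping from country and treaty to ratification date) are invented examples and do not reflect the actual UN or NORMLEX export formats.

    # Illustrative alias table for standardizing country names across sources.
    NAME_ALIASES = {
        "Viet Nam": "Vietnam",   # illustrative alias only
        "Gambia": "The Gambia",  # illustrative alias only
    }

    def standardize(name):
        """Map a source-specific country name onto the database's canonical name."""
        return NAME_ALIASES.get(name, name)

    def ratification_status(ratification_dates, country, treaty):
        """Record 'Yes (<date>)' when a ratification date is found, else 'No'."""
        date = ratification_dates.get((standardize(country), treaty))
        return f"Yes ({date})" if date else "No"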

Outputs were cross-compared across the three models (ChatGPT, Gemini, and DeepSeek) for consistency and compiled into a unified preliminary dataset. Because the same questions were asked of each model and the responses were formatted in structured tables, it was possible to directly assess overlaps and discrepancies. When two or more models identified the same legal provision (for example, a freedom of expression law in The Gambia), the response was provisionally accepted using a majority-vote approach. In such cases, the referenced law was reviewed to confirm its relevance. When all models returned that no specific law existed for a given right, no further verification was conducted at this stage, with the expectation that gaps would be flagged during later human validation.
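
A minimal sketch of the majority-vote rule is shown below; the trivial string normalization is an illustrative simplification of how overlapping references to the same legal provision were actually matched.

    from collections import Counter

    def majority_answer(answers_by_model):
        """Provisionally accept an answer when two or more models agree on it."""
        normalized = [a.strip().lower() for a in answers_by_model.values() if a and a.strip()]
        if not normalized:
            return None  # no model answered; the gap is left for human validation
        answer, votes = Counter(normalized).most_common(1)[0]
        # Even when accepted, the referenced law is reviewed to confirm relevance.
        return answer if votes >= 2 else None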

Step 2

Human Verification

Following the AI-assisted compilation, the preliminary dataset was reviewed by designated Human Rights Focal Points. These experts were responsible for verifying the accuracy of each entry using domestic legal texts, official government sources, and international legal databases. The verification process involved manually reviewing each response generated by the large language models, consulting the referenced source documents, conducting supplementary searches using standard search engines such as Google, and confirming information with local experts. Any discrepancies or uncertainties identified in the AI-generated content were corrected or clarified based on authoritative documentation. This rigorous verification process ensured that the final dataset accurately reflects up-to-date information on the foundational legal protections for digital rights in each country.

Explore the Database

Disclaimer

Given the dynamic and evolving nature of legal and policy environments, the information contained in this database may become outdated or may not fully capture recent developments. Users who identify inaccuracies, outdated information, or have additional sources to contribute are encouraged to contact the project team via the provided contact details. Input and feedback are welcome and appreciated to help ensure the accuracy and relevance of this resource.

Contact

digital.support@undp.org