The Trust Catch-22: Why AI Needs Public Confidence Before It Can Earn It
- Hannah So
AI systems require public data and cooperation to function responsibly, yet people won’t share their data unless they already trust the systems collecting and utilizing it. Solving this paradox must be a policy priority for developers and the Canadian government.

Canada prides itself on its strong record as a global leader in AI safety. Since becoming the first country to adopt a national AI strategy in 2017, Canada has brought industry leaders together around a voluntary code of conduct for the responsible development and management of generative AI tools and has published multiple strategies and guides on the use of AI in government. For example, the Guide on the Use of Generative AI introduces the "FASTER" principles – Fair, Accountable, Secure, Transparent, Educated, and Relevant – for implementing AI in federal institutions and outlines potential issues and employee best practices, such as avoiding feeding sensitive or personal information into AI tools not managed by the Government of Canada in order to mitigate information security risks. Additionally, the Guiding Principles for the Use of AI in Government reaffirms Canada's commitment to effective and ethical AI use through a list of principles that echoes those published by the Digital Nations – a collaborative forum of the world's leading digital governments. These include promoting openness about AI use, publishing legal or ethical impact assessments, and ensuring that decisions made by AI are explained and subject to review by human agents.
Despite these efforts to implement regulatory frameworks, Canadian concerns about AI safety and reliability remain high. In a 2025 Pew Research Center study of international attitudes towards AI, nearly all surveyed Canadian adults expressed some concern about the increased use of AI in daily life: 45% were more concerned than excited, and another 45% were equally concerned and excited. In the United States, public sentiment is similarly pessimistic: according to the same study, 50% of U.S. adults surveyed were more concerned than excited about the increased use of AI in daily life – up from 37% in 2021. Another Pew study comparing the attitudes of the American public and AI experts found that only 17% of the surveyed public believed AI will have a positive impact on the U.S. over the next 20 years.
Even in the face of widespread skepticism toward AI, Canadian AI policy has historically failed to give appropriate attention to the ongoing crisis of public trust in disruptive technologies. Trust has largely been treated as something to be simply maintained – whether through alignment with the "FASTER" principles or, as stated in the Guiding Principles for the Use of AI in Government, through public engagement initiatives on specific AI policies and projects. Meanwhile, the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems contains no mention of trust at all. In the more recently published 2025-2027 AI Strategy for the Federal Public Service, public trust receives more of the spotlight: recognizing the high levels of mistrust in AI, the strategy identifies building trust as a key priority, encouraging transparent AI use, public engagement, and clearer communication of the public value of AI initiatives. Across these frameworks, the absence of public trust is understood to threaten responsible adoption, and there is an implicit assumption that public trust will follow from effective policies and responsible design. Framing trust in this way, however, overlooks a crucial dimension: trust is not just a result of responsible AI development; it drives it. More attention needs to be given to increasing public trust in AI at the early stages of innovation, because trust is, in fact, one of the conditions that makes responsible AI development possible.
This is because public trust is essential for good, responsibly sourced data, which in turn is the foundation of responsible AI. Good data is at the core of any functionally good AI model – one that is accurate and produces replicable outputs – a relationship aptly summarized by the saying "garbage in, garbage out." A good training dataset must, among other things, be very large, diverse, and high-quality. Good datasets that are also responsibly sourced – secure and obtained with explicit informed consent – directly enable key ethical principles in AI development, such as mitigating harmful biases and protecting consumer data privacy. Nonetheless, according to researchers with Cohere.ai, sourcing and curating datasets that actually meet these criteria remains a significant hurdle, especially for AI trained on data commons, due to persistent and growing resistance from data creators to giving up their data and systemically insufficient infrastructure for facilitating consent.
Good, responsibly sourced datasets – and, correspondingly, trustworthy AI – require the consensual participation of large and diverse segments of the public. But such participation is itself contingent on trust: trust, say, that data will not be used in ways that conflict with one's values and interests. Here we see the catch-22: trustworthy AI requires public data, and by extension public trust, yet public trust requires that AI already be trustworthy. To tackle this, I suggest a two-pronged approach. First, part of the solution, as already acknowledged, is for AI developers to take real steps towards building AI systems and implementing data practices that integrate the widely accepted principles of responsible AI development and deployment. These models should be secure, unbiased, transparent, and explainable, with appropriate safeguards and channels of accountability to protect the public from harm. Data, in turn, should be traceable, securely collected, and used with informed consent, and, where appropriate, its contributors should be fairly compensated.
Second, as with any disruptive technology, efforts must also be directed towards public outreach and engagement initiatives explicitly aimed at fostering balanced, rather than polarized, perspectives on the risks and benefits of the AI revolution.

Though public concern regarding AI is well-founded, negative sociocultural narratives and media representations of AI can be amplified and essentialized through media coverage and (often virtual) echo chambers, allowing skepticism to persist even as governance frameworks and technical safeguards evolve. It is therefore important that public communication efforts go beyond the "metrics and performance indicators" emphasized in Canada's 2025-2027 AI strategy and instead aim for emotional resonance, addressing public values while remaining grounded in concrete measures taken by policymakers and developers. Companies and governing bodies must help people feel that the benefits of AI outweigh its risks, and that both AI systems and the developers behind them are trustworthy, so as to foster new shared cultural understandings of what AI means. An initial step may be to focus on assuaging the public's top concerns about AI, such as job security and vulnerability to misinformation and impersonation. While the Canadian government must work towards regulatory frameworks and social safety-net policies that protect people from the fallout of an AI-driven world, developers must also continue to publicly acknowledge the potential risks to the public with empathy and transparency and take meaningful steps towards addressing them.
In conclusion, effective and ethical AI development requires developers and policymakers to take public attitudes more seriously. Public trust in AI, although presently low in Canada and around the world, can be increased through two interrelated processes: developing AI systems that are actually worthy of trust through adherence to responsible development, design, and data principles, and sustaining messaging efforts that meaningfully engage with public values and concerns. By treating trust as a foundational condition for responsible AI development, developers and policymakers can help ensure a future in which AI actually serves the public interest and benefits society as a whole.
Hannah So is a Research Associate in Bioethics at Genomics4S, where she supports research projects on the ethical use of AI in clinical research and contributes to public outreach initiatives.
Recommended citation: So, H. (2025). The Trust Catch-22: Why AI Needs Public Confidence Before It Can Earn It. Zenodo. Canadian Institute for Genomics and Society (Blog). December 17, 2025. https://doi.org/10.5281/zenodo.17991465