Why Is Vulnerability the Key to Trustworthy Artificial Intelligence?

Artificial intelligence occupies a central place in debates about technological innovation and the protection of citizens’ rights. Yet the notion of trustworthy AI, often presented as an ideal compromise, remains vague and widely criticized; some even see it as a diversion meant to stave off stricter regulation. A different approach gives the idea concrete meaning: placing human vulnerability at the heart of the discussion.

Vulnerability refers to our exposure to the risk of being harmed or affected in what defines our existence and well-being. It is not merely a weakness to be avoided but a reality that drives us to seek collective solutions. We trust others to overcome our limitations, whether by sharing tasks or by relying on institutions. This trust, in turn, creates a new form of vulnerability, because it makes us dependent on those in whom we place it.

In the field of AI, this dynamic is often overlooked. Current guidelines focus on principles such as transparency or security, but they neglect a fundamental question: which vulnerabilities is AI supposed to mitigate, and what new vulnerabilities might it create? Trustworthy AI should not only avoid worsening existing inequalities but also recognize and protect people from risks associated with its own operation.

To achieve this, AI governance must be rethought. Those who develop and deploy these technologies must be able to recognize users’ vulnerabilities and respond to them appropriately. This means not settling for abstract principles but integrating concrete mechanisms to identify and mitigate risks. For example, digital platforms that manipulate user choices to maximize engagement create new dependencies and fragilities. Trustworthy AI, on the other hand, should be designed to strengthen individual autonomy and security.

The participation of citizens and affected groups is a key element. By including vulnerable or marginalized people in the design and regulation of AI systems, we can better understand their needs and avoid unintended consequences. This participatory approach transforms vulnerability into collective strength, where technology serves to protect rather than exploit.

Ultimately, trustworthy AI is measured not solely by its capacity to innovate but by the commitment of those who build and govern it to recognize and address human vulnerabilities. Only by placing this concern at the center can we hope to build technological systems that truly deserve society’s trust.


Documentation and Sources

Reference Document

DOI: https://doi.org/10.1007/s00146-026-02892-3

Title: The value of vulnerability for trustworthy AI

Journal: AI & SOCIETY

Publisher: Springer Science and Business Media LLC

Author: Giacomo Figà-Talamanca
