Are Artificial Intelligences Shaping the Way We Think and Know?

Large language models do more than reproduce information. They actively participate in defining what counts as true, reasonable, or legitimate. These systems are not merely imperfect technical tools, but devices that convert social and historical hierarchies into epistemic norms. Their operation relies on the statistical analysis of massive text corpora, often marked by Eurocentric, gendered, and colonial biases. By favoring dominant linguistic patterns, they make certain ways of speaking and reasoning more probable than others, and thereby marginalize minority knowledge and forms of expression.
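To make the statistical point concrete, here is a toy sketch (the corpus and both phrasings are invented for illustration, not drawn from the source): a model fit by maximum likelihood simply reproduces the frequency hierarchy of its training data, so over-represented registers become the "likely" ones.

```python
from collections import Counter

# Toy training data: a dominant, institutional phrasing appears far more
# often than a minority, oral-tradition phrasing, standing in for a
# skewed corpus.
sentences = ["research indicates that"] * 90 + ["our grandmothers say that"] * 10

# Maximum-likelihood estimate: P(phrase) = count / total. A language
# model trained to maximize likelihood inherits this frequency ranking.
counts = Counter(sentences)
total = sum(counts.values())

for phrase, count in counts.most_common():
    print(f"P({phrase!r}) = {count / total:.2f}")
# P('research indicates that') = 0.90
# P('our grandmothers say that') = 0.10
```

Nothing in the objective distinguishes accuracy from prevalence; the training signal rewards whatever the corpus contains most of.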

Reinforcement learning from human feedback (RLHF) illustrates this phenomenon. Subjective judgments about what is “useful” or “appropriate” are converted into algorithmic rules, and norms that begin as contextual judgments are scaled into global standards. The result is not objective truth, but a form of discursive conformity. Models favor moderate, consensus-driven, and institutionally aligned responses, while sidelining divergent or critical viewpoints. Power is thus exercised less through censorship than through optimization: some ideas become statistically advantageous, while others disappear.
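As a minimal sketch of this conversion (the preference labels and training loop below are invented for illustration; the source does not specify an implementation), reward models in RLHF pipelines are commonly fit to pairwise human preferences under a Bradley-Terry objective, collapsing annotators' subjective rankings into a single scalar score:

```python
import math

# Hypothetical annotator judgments: each pair records which of two
# candidate responses a human rated more "useful" or "appropriate".
pairs = [
    ("consensus_answer", "critical_answer"),
    ("consensus_answer", "critical_answer"),
    ("critical_answer", "consensus_answer"),
]

# The reward model assigns each response a scalar score.
rewards = {"consensus_answer": 0.0, "critical_answer": 0.0}

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# One pass of gradient ascent on the Bradley-Terry log-likelihood,
# log sigmoid(r(preferred) - r(rejected)), for each preference pair.
lr = 0.5
for preferred, rejected in pairs:
    grad = 1.0 - sigmoid(rewards[preferred] - rewards[rejected])
    rewards[preferred] += lr * grad
    rewards[rejected] -= lr * grad

print(rewards)
# The response annotators favored more often ends up with the higher
# reward, and the language model is then optimized against that score:
# locally subjective judgments become a global optimization target.
```

The design choice to collapse annotator disagreement into one scalar is precisely the point at which contextual norms harden into large-scale standards.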

These systems do not merely reflect existing inequalities; they embed them into their very structure. For example, studies show that travel recommendations generated by these tools systematically prioritize Western destinations and cultures. Similarly, non-Anglophone linguistic styles and minority cultural expressions are often relegated to the background. The models reproduce and amplify historical categories of race, gender, and risk, presenting them as neutral facts rather than social constructs.

The issue goes beyond simply correcting technical biases. It is about understanding how these technologies redefine the conditions for knowledge production. They determine which knowledge is visible, which voices are heard, and which subjects are deemed credible. Their authority rests on an illusion of objectivity, yet they depend on data and design choices shaped by power dynamics.

Large language models also act as instruments of social normalization. By generating texts, advice, or summaries, they impose interpretive frameworks. A response rephrased to be more “professional” or “neutral” can erase cultural nuances or alternative modes of expression. Users are thus encouraged to adopt ways of speaking and thinking that conform to dominant norms, often without realizing it.

Their development is embedded in a political economy concentrated in the hands of a few major players, primarily located in North America and Europe. These actors define which knowledge is valued and which is ignored. Technical infrastructures, training data, and commercial objectives shape what is considered valid knowledge. Promotional discourses around artificial intelligence, emphasizing innovation and efficiency, obscure these dynamics and naturalize specific political and economic priorities.

Given this reality, a critical approach is necessary. It is not just about diversifying data or development teams, but about redistributing epistemic authority. This involves making the normative choices embedded in these systems visible, enabling affected communities to question these choices, and supporting more pluralistic data practices. The goal is not an impossible neutrality, but transparency about the limits and biases of these technologies. Only such an approach can ensure that artificial intelligence serves collective and democratic futures, rather than reinforcing existing hierarchies.


Documentation and Sources

Reference Document

DOI: https://doi.org/10.1007/s00146-026-02994-y

Title: From ‘objectivity’ to obedience: LLMs as discourse, discipline, and power

Journal: AI & SOCIETY

Publisher: Springer Science and Business Media LLC

Author: Theodoros Kouros
