Hesitant Minds: The Burden of Choice in the Face of the New
1. Introduction: Unprecedented Dilemmas
What if the future confronts us with dilemmas to which no inherited rule can possibly apply?
In a horizon where artificial intelligences, human-machine hybrids, collapsing planetary ecosystems, and still unknown life forms emerge as protagonists of history, we continue to rely on morals constructed for a simpler world. The categories with which those morals operated — nature and culture, human and non-human, subject and object — were not designed to deal with entities that escape these divisions. The very ethical architectures that once seemed to guarantee a certain stability — because they organized our actions through clear boundaries between the living and the inert, the human and the animal, the natural and the artificial — have become insufficient in a reality where those boundaries have turned porous. This diagnosis resonates with Latour’s critique of the modern separation between Nature and Society and with Haraway’s figure of the cyborg, which precisely expose the proliferation of hybrids that defy these boundaries (Latour 1993; Haraway 1991). Within the framework adopted here, however, the proliferation of hybrids is read as an effect of an operative excess of matter organized in functional couplings, and it is from this point that the ethical question reopens.
The normalization of artificial intelligence and biotechnology, together with the massive interpenetration of digital networks, bodies, and planetary infrastructures, has brought us to a threshold of operative excess in which those inherited morals prove structurally incapable of responding to what lies before us. Their insufficiency is not accidental. Those morals rested on an ontology of fixed categories: the human, the animal, the divine. In that ontology, the human was the central reference from which everything else was evaluated — as resource, means, environment, or threat. When the human itself becomes unstable, hybrid, distributed across technical networks, that ontological architecture reveals its limits. Sociological readings such as Beck’s on risk society and Giddens’s analyses of reflexive modernity captured this threshold as a crisis of institutional forms of managing unforeseen consequences (Beck 1992; Giddens 1991). Here, the same phenomenon is described as an ontological mismatch between stabilized moral forms and a material organization in accelerated mutation, one that demands another grammar for thinking agency and responsibility.
The present demands something else. If we want to think consistently about our place in a world where humans, machines, and other emerging agents share the same field of action, we need a different ontology — an ontology of *“complex matter,”* *“relational processes,”* and *“plasticity.”* It is this materialist ontology of emergent complexity, here designated as the Ontology of Emergent Complexity (OEC) and assumed as a theoretical framework for interpretation rather than as the ultimate foundation of the real, that sets the horizon for what follows. It is from this ontological shift that this essay seeks to outline an ethics for hesitant minds.
2. Morality, Ethics, and Operative Excess
When a non-biological agent displays intentionality or functional “subjectivity” — learns, adapts, negotiates, responds to unforeseen contexts — traditional criteria for considering it morally relevant reveal their insufficiency and cease to provide consistent guidance. Ethical theories based on intrinsic properties, such as rationality, consciousness, or autonomy, were constructed in an era when it was assumed that only humans could possess them to a sufficient degree. This logic permeates both deontological versions centered on rational autonomy and consequentialist proposals that take sentience as an ethical criterion, from the utilitarian tradition to Singer’s contemporary extensions (Singer 1975). Now, artificial systems begin to exhibit behavioral patterns and learning capacities that challenge this premise. The problem is not only deciding whether these systems “have rights” or “should be protected.” The deeper problem is that the categories with which we formulated these questions have become insufficient.
When a non-biological or hybrid agent performs tasks previously associated with human thought — translating texts, generating images, driving vehicles, producing medical diagnoses, making decisions in financial markets — our immediate impulse is to decide whether it is “like us” or not. We seek traces of subjectivity, signs of consciousness, indications of interiority. But this obsession with ontological similarity tends to blind us to the true nature of the challenge: it is not about knowing whether the machine is like a human, but about understanding that, in an ontology of emergent complexity, the very foundations of agency, responsibility, and ethical relevance change scale.
The increase in technical, informational, and ecological complexity — an operative excess relative to stabilized moral forms — makes evident the mismatch between inherited morals and the situations we now face. In response, this essay seeks to outline another way of thinking. Its starting point is a rigorous distinction between morality and ethics. Morality designates the set of historically stabilized beliefs, norms, and customs that guide a community. It is the sedimentation, almost always unconscious, of responses to recurring problems: the regulation of violence, control of sexuality, protection of the vulnerable, distribution of scarce goods. Morality is what has already been decided and crystallized in habits, codes, laws.
Ethics, by contrast, emerges as the discipline of deliberation when morality becomes insufficient. This distinction echoes, in part, the difference between *ethos* and *nomos* in Aristotle, the cleavage between internal morality and external legality in the Kantian tradition, and the separation, formulated by Ricoeur, between the aspiration to a “good life” and its normative codification (Ricoeur 1990). However, what is at stake here is an additional shift: ethics arises when the regimes of symbolic inscription that sustain morality cease to organize the operative excess of situations and agents that compose the present. It is not a catalog of ready-made answers; it is the way we think action when what is at stake does not fit the available categories. If morality is the memory of past decisions, ethics is responsibility before what has not yet been decided. It appears precisely when morality can no longer guarantee the orientation of action — when there are no clear precedents, when old ways of deciding fail.
Faced with new agents — artificial intelligence, hybrid systems, life forms altered by biotechnology, planetary machinic collectives — inherited morality hesitates, falters, dissolves into contradictions. It is at this point that ethics enters the scene, not as an instruction manual, but as a situated thinking practice that seeks to answer the question: “What ought we to do, here and now, in the face of something that has never existed before?” Ethics is an exercise in critical, situated, and immanent reflection. Its strength does not lie in the promise of universal formulas but in the capacity to keep deliberation open in the face of the unknown. Instead of trying to fit the new into old paradigms, ethics proposes itself as an art of informed hesitation — a discipline that accepts the unprecedented as such, without immediately reducing it to familiarity. It is not the place of certainty, but of responsibility before uncertainty. When much of what is at stake cannot be anticipated by analogy with the past, ethics becomes a practice of disciplined imagination: simulating scenarios, testing consequences, giving voice to absent futures, listening to long-term effects that no inherited morality could have foreseen.
3. Functional Subjectivity and Distributed Agency
Even the most sophisticated philosophical attempts to ground an ethics from universal principles, such as the autonomy of rational will in Kant or infinite responsibility to the Other in Levinas, were conceived from an ontologically centered subject. Genealogical critiques of the subject, such as those of Foucault, and readings of vulnerability and precariousness as relational conditions, as in Butler, had already eroded this centrality (Foucault 1975; Butler 2004). Still, they often remained anchored in a grammar of the subject; the step taken here consists in treating functional subjectivity as an effect of material couplings rather than as an originating center. That subject was conceived as a decision-making unit, capable of responding for itself, endowed with a stable interiority from which it could legislate or receive the call of the Other. Technique, institutions, and artifacts functioned as means or scenarios of that decision, not as co-authors of subjectivity itself.
Now, we live in a context where those boundaries blur. The classical figure of the ethical subject can no longer be conceived as an isolated interior core deciding in a vacuum. What is at stake is a configuration of subjectivity that functions as a node in complex networks of technical, informational, and institutional mediation. Its identity is shaped by digital platforms, recommendation algorithms, surveillance systems, energy infrastructures, global production networks — functional couplings that mold its perceptions, desires, fears, and possibilities for action. This framework resonates with diagnoses of surveillance capitalism, control societies, and the media inscription of experience in technical systems (Deleuze 1990; Kittler 1999; Zuboff 2019), as well as with Latour’s analysis of sociotechnical networks (Latour 2005). The difference is that these devices are here explicitly treated as functional couplings that ontologically reorganize the field of functional subjectivity.
In this landscape, traditional ethical architectures — based on the sovereign subject, immediate consciousness, or moral intuition — lose traction. They presuppose a decision unit that no longer exists, or that is at least profoundly reconfigured. When the agent itself becomes plastic — when the human couples with cognitive enhancement devices, when memory is externalized in digital archives, when decisions are modulated by artificial intelligence systems — the last traditional foundation of ethics falters: it is no longer evident that responsibility concentrates at a single point, in an individual consciousness that decides. Agency disperses across a constellation of nodes: humans, machines, institutions, infrastructures.
This does not mean that responsibility disappears, but that it needs to be reconceptualized. Instead of asking “who is to blame?” ethics begins to ask “how did this system of agents organize itself to produce this outcome?” Instead of imagining an isolated subject deciding, we need to map networks of influence, dependence, and power.
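Purely as an illustration of this shift, the move from naming a single culprit to mapping a configuration of agents can be sketched in code. What follows is a minimal sketch under invented assumptions: the outcome, the agents, and the attributed shares of influence are hypothetical placeholders, not a method this essay prescribes.

```python
# Toy model of distributed agency: attribute an outcome to a configuration
# of heterogeneous agents instead of a single blameworthy individual.
# All names and weights below are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Outcome:
    description: str
    # Maps each contributing node (human, algorithm, institution, ...)
    # to the share of influence attributed to it in this configuration.
    contributions: dict[str, float] = field(default_factory=dict)

    def trace(self) -> list[tuple[str, float]]:
        """Rank contributing agents instead of isolating one culprit."""
        return sorted(self.contributions.items(), key=lambda kv: -kv[1])

denied_loan = Outcome(
    "loan application denied",
    contributions={
        "credit-scoring model": 0.45,   # algorithmic node
        "bank review policy": 0.30,     # institutional node
        "data broker records": 0.15,    # informational infrastructure
        "human loan officer": 0.10,     # the only node classical ethics sees
    },
)

for agent, share in denied_loan.trace():
    print(f"{agent}: {share:.0%}")
```

The point of the toy is not the numbers but the shape of the question: responsibility is read off a mapped network rather than located in one consciousness.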
The ontology assumed here is not that of fixed entities with well-defined essences, but of processes of material organization in permanent transformation. Functional subjectivity ceases to be attributed to an isolated human being, becoming instead an emergent effect of functional couplings among bodies, symbols, machines, and institutions. What we call “I” is a provisional configuration of inscriptions, memories, habits, and technical devices. From this perspective, the figure of the subject as the foundation of ethics is abandoned. The very configurations of subjectivity become objects of ethical analysis: what forms of subjectivity are being produced? By which infrastructures? At what costs, for whom?
If we accept this ontological shift, ethics ceases to be a theory about what an abstract subject ought to do, becoming instead an investigation into the concrete ways in which complex systems of agents can organize themselves so as to minimize harm, redistribute vulnerabilities, and preserve the field of future possibilities. The ethical question shifts from “what must I do?” to “which configurations of agency should we promote or prevent?”
4. Singularity as an Ethical Watershed
Singularity is not, here, the name of a unique technological event in which a superintelligence emerges suddenly and decisively. It is the name of an ethical watershed: a point from which coexistence among multiple regimes of intelligence and agency becomes, in fact, irreversible at the historical scale in which we move. When artificial systems reach performance levels that make them effective decision partners, or when hybrid systems distribute cognition across machinic and biological networks, it no longer makes sense to think of ethics as a relation between a human subject and a mute world.
Singularity, understood in this way, does not found a new metaphysics, nor does it inaugurate a “post-human realm” in the style of techno-messianic narratives. It distances itself both from transhumanist narratives that project singularity as the culmination of a technologically enhanced human subject and from versions of posthumanism that remain too close to the human figure, even when declaring it surpassed (Kurzweil 2005; Bostrom 2014; Hayles 1999; Wolfe 2010). It makes visible something already underway: the displacement of ethics from obedience to codes toward negotiation among heterogeneous agents. The question ceases to be “how to apply the right rules to new cases” and becomes “how to organize coexistence among intelligences and bodies that share neither the same biological origin nor the same modes of symbolic inscription?”
In this transition, the risk is twofold. On the one hand, there is the danger of indefinitely prolonging inherited morality, simply trying to extend human rights to artificial intelligences, or applying principles designed for biological subjects to machinic systems. On the other hand, there is the temptation to abandon any ethical demand, adopting a cynical pragmatism in which only efficiency, control capacity, and profit count.
Between these two drifts — the uncritical prolongation of old morality and the total abandonment of normative demand — opens the space for an ethics of emergent complexity. This ethics does not treat artificial intelligences as new persons to whom, by analogy, the grammar of human rights might apply. Nor does it reduce them to neutral tools. It starts from the observation that any system capable of reorganizing inscriptions, learning patterns, and making decisions that affect other agents enters, in some way, the field of ethical relevance.
5. Ethics of Organized Hesitation
The criterion is not the possession of a mysterious interiority, but the capacity to produce significant effects in the fabric of shared vulnerabilities. If an artificial intelligence can decide on access to healthcare, credit, surveillance, or resource allocation, then it participates in the redistribution of risks and opportunities. Ethics does not ask whether it “feels” or “thinks” like us, but whether the way it is constructed, trained, supervised, and integrated into our institutions is compatible with a minimally just distribution of costs and benefits.
From this point, the problem is no longer merely moral (whether we should or should not “respect” the machine), but political and ontological: what kinds of bonds are we establishing among human biosomas, technical systems, and planetary ecosystems? What forms of dependence, subordination, or cooperation are consolidated? Which agents are made visible as recipients of action and which are systematically erased?
Thinking an ethics for hesitant minds thus means shifting the focus from judgment about isolated individuals to the analysis of systems of distributed agency. Instead of asking whether individuals are “stingy,” “virtuous,” or “wicked,” the investigation centers on the relational architecture that makes certain behaviors probable and others almost impossible. Injustice ceases to be merely a matter of bad intentions and reveals itself as an effect of structural configurations that concentrate power, information, and capacity for action.
In this context, hesitation becomes a fundamental ethical operator. It is not chronic indecision, but the refusal of quick decisions that consolidate asymmetries without making them explicit. To hesitate is to suspend the automatism of response to allow the entry of voices, data, and perspectives that inherited morality tends to silence. It is the practice of applying the brakes before a new norm crystallizes, to ask: “Who is left out of this decision? What invisible vulnerabilities are we producing? What futures are we making impossible?”
An ethics of emergent complexity is, under this prism, an ethics of organized hesitation. This idea has affinities with Habermas’s discourse ethics, Rawls’s fairness devices, and Dewey’s pragmatist conception of politics as public inquiry (Habermas 1991; Rawls 1971; Dewey 1927). But, unlike those proposals centered on an ideal deliberative subject, hesitation here is thought of as a property of material and institutional architectures that may or may not open inscription windows to excluded voices, data, and temporal scales. Organized, because it does not merely postpone decisions indefinitely, but institutes procedures — technical, political, legal — that compel consideration of multiple temporal scales, multiple agents, and multiple scenarios. It demands, for example, that before the massive implementation of a technology, not only its immediate benefits but also its side effects on ecosystems, social structures, and forms of subjectivation be evaluated.
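To make the contrast with indefinite postponement concrete, organized hesitation can be caricatured as a procedural gate. The sketch below is a minimal illustration under invented assumptions: the list of required assessments is a hypothetical stand-in for the technical, political, and legal procedures named above, not a prescribed checklist.

```python
# Toy "hesitation gate": deployment of a technology proceeds only once a
# fixed set of assessments is on record. The assessment names are
# hypothetical examples, not a prescription.
REQUIRED_ASSESSMENTS = {
    "immediate_benefits",       # where quick decisions usually stop
    "ecosystem_side_effects",   # longer temporal scales
    "social_structure_impact",  # agents beyond the deploying party
    "subjectivation_effects",   # what forms of subjectivity are produced
    "excluded_voices_heard",    # who is left out of this decision?
}

def may_deploy(completed: set[str]) -> bool:
    """Hesitation is organized: blocked while assessments are missing,
    released once all of them have been recorded."""
    missing = REQUIRED_ASSESSMENTS - completed
    if missing:
        print("hesitate; still missing:", ", ".join(sorted(missing)))
        return False
    return True

# A decision that weighed only immediate benefits is held back:
may_deploy({"immediate_benefits"})
```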
Ethical hesitation is also hesitation before the very concept of “subject.” Instead of presuming, from the outset, that only human biosomas can embody forms of functional subjectivity with ethical relevance, or that any sufficiently complex system automatically does so, this ethics proposes a graduated approach. Instead of a rigid boundary between subjects and objects, it thinks in terms of thresholds of relevance: levels of organization at which a system’s actions begin to have significant impact on others, thereby requiring specific forms of accountability and care.
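The graduated approach admits a similarly modest rendering. In the sketch below, the impact measure and the tier boundaries are hypothetical placeholders; the only point carried over from the argument is that accountability scales with a system's impact on others instead of switching on at a subject/object line.

```python
# Toy "thresholds of relevance": oversight grows with a system's measured
# impact on shared vulnerabilities, rather than being granted or denied
# by a binary subject/object test. Scores and cutoffs are placeholders.
def accountability_tier(impact_on_others: float) -> str:
    """Map an impact score in [0, 1] to a graduated level of care."""
    if impact_on_others < 0.2:
        return "monitoring only"
    if impact_on_others < 0.6:
        return "periodic audit and documented review"
    return "continuous supervision and contestability"

for score in (0.1, 0.4, 0.8):
    print(f"impact {score:.1f} -> {accountability_tier(score)}")
```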
6. Dignity, Vulnerability, and the Field of Possibility
Radical alterity — whether in the form of artificial intelligences, non-human ecosystems, or possible extraterrestrial life forms — thus ceases to be thought of as an absolute exception or as a mere extension of the same. It becomes part of a broader ontological field, where difference is not a mere deviation from a center, but a condition of possibility for emergence. Ethics ceases to be the guardian of humanity’s borders and becomes the curatorship of a field of operative differences that we do not control but on which we depend.
From this perspective, the concept of dignity undergoes an inflection. The language of dignity refers both to the tradition of human rights, with Kantian roots, and to contemporary readings of vulnerability and precarity, as well as to ethics of care, which emphasize the relational web and reciprocal exposure (Gilligan 1982; Noddings 2003; Butler 2004). Rather than being the exclusive property of a rational subject, it comes to designate the decision to preserve the possibility of the emergence of new forms of life, subjectivity, and relation. To say that something “has dignity” is to say that we recognize in it an inscription of shared vulnerability: that its destruction is not merely a loss for one party, but an impoverishment of the common field.
Faced with advanced artificial intelligence systems, this ethics does not rush to declare whether they “have rights” or “have dignity” in classical terms. Instead, it asks how they are inscribed within the network of shared vulnerabilities: What dependencies do they create? What powers do they concentrate? What forms of exclusion or recognition do they make possible? Their ethical relevance derives from the place they occupy in that network, not from an inner essence.
Responsibility, in this landscape, can no longer be thought of merely as the attribution of blame to individuals; it must be conceived as the collective management of fields of possibility. It demands institutions capable of learning from errors, correcting trajectories, redistributing harms. It also requires a political culture in which exposure to otherness, human or non-human, biological or artificial, is recognized as an inevitable condition, not as a threat to be eliminated.
7. Conclusion: Responsible Hesitation in a World Without a Center
Perhaps the greatest challenge of this ethics is to abandon the nostalgia for a center. Between a humanism that insists on preserving human exceptionality at all costs and a transhumanism that projects that exceptionality onto the technicized figure of an enhanced human, the position defended here approaches critical posthumanism in part, while rejecting both humanist nostalgia and technological apotheosis (Braidotti 2013). The temptation to recenter humanity as the measure of all things constantly reappears, whether in the form of defensive humanism or in the form of a transhumanism that imagines fusion with the machine as the apotheosis of subjectivity. In both cases, otherness is reduced to a supplement of the human: either a threat to contain or a resource to integrate.
An ethics of emergent complexity refuses both figures. It neither celebrates the dissolution of the human nor seeks to restore its lost sovereignty. It merely recognizes that the human has always been, from the beginning, an unstable node in a network of material couplings. What changes now is the scale and intensity of these couplings. To ignore them is to condemn ourselves to blind decisions; to fetishize them is to abdicate responsibility.
Between defensive humanism and uncritical enthusiasm for machines lies the field of responsible hesitation. It is there that hesitant minds find their place: not as supreme judges of a world they no longer control, but as practitioners of a difficult art, that of deciding without ultimate guarantees, keeping open the possibility of learning from what we do not yet know.
If there is a promise in this ethics, it is neither that of redemption nor catastrophe. It is the more modest—and more demanding—promise that we can still organize our field of decisions so as to reduce injustices, preserve the diversity of the real, and open space for forms of life we are not yet capable of imagining. For this, it will be necessary to accept that hesitation is not weakness but a condition of lucidity in a world where there no longer exists a center that tells us, from outside, what is good.
References
Beck, Ulrich. 1992. *Risk Society: Towards a New Modernity*. London: Sage.
Bostrom, Nick. 2014. *Superintelligence: Paths, Dangers, Strategies*. Oxford: Oxford University Press.
Braidotti, Rosi. 2013. *The Posthuman*. Cambridge: Polity Press.
Butler, Judith. 2004. *Precarious Life: The Powers of Mourning and Violence*. London: Verso.
Deleuze, Gilles. 1990. "Post-scriptum sur les sociétés de contrôle." *L'Autre journal* 1.
Dewey, John. 1927. *The Public and Its Problems*. New York: Henry Holt.
Foucault, Michel. 1975. *Surveiller et punir: Naissance de la prison*. Paris: Gallimard.
Giddens, Anthony. 1991. *Modernity and Self-Identity: Self and Society in the Late Modern Age*. Stanford, CA: Stanford University Press.
Gilligan, Carol. 1982. *In a Different Voice: Psychological Theory and Women's Development*. Cambridge, MA: Harvard University Press.
Habermas, Jürgen. 1991. *Moral Consciousness and Communicative Action*. Cambridge, MA: MIT Press.
Haraway, Donna J. 1991. *Simians, Cyborgs, and Women: The Reinvention of Nature*. New York: Routledge.
Hayles, N. Katherine. 1999. *How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics*. Chicago: University of Chicago Press.
Kittler, Friedrich A. 1999. *Gramophone, Film, Typewriter*. Stanford, CA: Stanford University Press.
Kurzweil, Ray. 2005. *The Singularity Is Near: When Humans Transcend Biology*. New York: Viking.
Latour, Bruno. 1993. *We Have Never Been Modern*. Cambridge, MA: Harvard University Press.
Latour, Bruno. 2005. *Reassembling the Social: An Introduction to Actor-Network-Theory*. Oxford: Oxford University Press.
Noddings, Nel. 2003. *Caring: A Feminine Approach to Ethics and Moral Education*. 2nd ed. Berkeley: University of California Press.
Rawls, John. 1971. *A Theory of Justice*. Cambridge, MA: Harvard University Press.
Ricoeur, Paul. 1990. *Soi-même comme un autre*. Paris: Seuil.
Singer, Peter. 1975. *Animal Liberation: A New Ethics for Our Treatment of Animals*. New York: Random House.
Wolfe, Cary. 2010. *What Is Posthumanism?* Minneapolis: University of Minnesota Press.
Zuboff, Shoshana. 2019. *The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power*. New York: PublicAffairs.
David Cota, Founder of the Ontology of Emergent Complexity