The Operative Dissolution of Truth
This essay applies, in an ontotechnical key, results established in Ontology of the Difference Between Truth and Fiction (the foundational ontological article).
Abstract
This essay analyzes how social platforms operate as ontopolitical infrastructures whose ontotechnical design reallocates the conditions and costs of proof. By indexing value to attention capture — clicks, shares, time spent — they prioritize the performative effectiveness of adherence over epistemic validity, making functional falsehood superior in operative terms. In this environment, filters of appearance and retentional effectiveness displace the confrontation with proof, destabilizing classical theories of truth (correspondence, coherence, pragmatism, truth as event). Empirically, statistical learning without comprehension amplifies emotionally charged content and produces falsehood without an author by design, not by intention. The essay proposes an operative program: friction algorithms and sustained verification time; binding multistakeholder governance (ex ante/ex post audits, transparency of the objective function, version/model registers, appeal mechanisms); digital public goods (open provenance protocols, trust graphs, evidence repositories); and critical literacies with counter-algorithmic practices that redistribute the cost of proof. Reframing critique as intervention at the infrastructure level, it concludes that public reason must be materially designed so that truth recovers a right that was algorithmically denied to it: the right to spend time.
Ontotechnics of Functional Falsehood
Social networks are not just spaces for the circulation of statements. They are, more profoundly, ontopolitical infrastructures — technical-material devices that reconfigure the conditions of possibility for saying, hearing, and believing. Following Foucault and Deleuze, these platforms do not represent the real, but produce regimes of visibility and enunciation, operating as machines of affective governmentality. The selection of what emerges as enunciable or legitimizable obeys a material logic of retention, whose criterion is not truth, but the performative efficiency of adherence. This logic is inscribed in quantitative metrics — clicks, shares, time spent — that establish a regime of value indexed to attention capture, redesigning the hierarchies of symbolic space and replacing argumentative value with retentional effectiveness.
Consider the specific case of vaccine misinformation on Facebook between 2019 and 2021. A study by Avaaz documented that the ten largest anti-vaccine pages generated 7.7 million monthly interactions, while the ten main public health institution pages generated 5.5 million. The false claim "vaccines contain microchips for tracking" circulated four times more than the WHO article "how mRNA vaccines work." The difference was not in epistemic quality: it lay in emotional architecture. The false claim simultaneously activated fear, indignation, and group belonging. The WHO article required concentration and scientific literacy, and offered no immediate emotional gratification. The algorithm did not distinguish true from false. It identified patterns: an average time of twelve seconds on the WHO publication, forty-eight seconds on the conspiratorial publication; a share rate of 0.3% versus 4.7%; a density of emotional comments eight times higher. The platform learned a correlation: content type A generates retention, content type B generates abandonment. It promoted type A. Falsehood became functional not by intention, but by systemic design.
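To make the mechanism concrete, here is a minimal sketch, in Python, of a ranker that scores content purely by retention signals. The item features echo the figures above, while the weights and function names are invented for illustration; the structural point is that veracity never appears in the objective, so the false item wins by construction.

```python
# Hypothetical engagement-only ranker. Features and weights are assumed
# values, not any platform's real API; truth is never a feature.

ITEMS = [
    # (label, dwell_seconds, share_rate, emotional_comment_density)
    ("conspiratorial post", 48.0, 0.047, 8.0),
    ("public-health article", 12.0, 0.003, 1.0),
]

# Weights a platform might learn from retention data (assumed).
W_DWELL, W_SHARE, W_EMOTION = 0.02, 30.0, 0.05

def engagement_score(dwell, share_rate, emotion_density):
    """Score content purely by observed retention signals; the truth or
    falsity of the content never enters the calculation."""
    return W_DWELL * dwell + W_SHARE * share_rate + W_EMOTION * emotion_density

ranked = sorted(ITEMS, key=lambda it: engagement_score(*it[1:]), reverse=True)
for label, *features in ranked:
    print(label, round(engagement_score(*features), 3))
# The conspiratorial post outranks the article on every run: falsehood
# becomes functional by design, not by intention.
```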
What becomes visible or viral is not what best resists rational scrutiny, but what most effectively adheres to affects already in circulation. Language dissociates itself from its mediating function between subject and world, becoming a matrix of emotional synchronization. This displacement reconfigures the symbolic function: recognition shifts from the criterion of proof to that of affective resonance — that is, to the capacity for adherence. By ontotechnics I mean precisely this — the technical production of the conditions of emergence and recognition of what counts as existing in the public space. This is not neutral mediation, but active configuration of what can appear and in what form. Truth, as a category of resistance to the immediate, enters this circuit at a structural disadvantage. See also Ontology of the Difference Between Truth and Fiction, section “Temporality of Truth,” for the ontological formulation of proof time and friction as operators of truth. To make the scope of this displacement explicit, it is necessary to clarify the main theories of truth involved and how ontotechnics destabilizes them.
The correspondence theory holds that a statement is true when it mirrors a state of affairs; ontotechnics does not deny the world, but destabilizes correspondence by shifting the filters of emergence and the costs of proof towards retention metrics: what appears and persists no longer depends on confrontation with proof but on its circulatory compatibility. The coherence theory understands truth as intra-systemic consistency; algorithmic ecology tends to convert coherence into intra-systemic closure — a ritualized self-consistency — self-sustaining belief systems whose informational closure simulates validation. The pragmatic theory reads truth as what results from effective practices in a community of inquiry; ontotechnics perverts the criterion of result, replacing epistemic effectiveness (what resists refutation) with retentional effectiveness (what maximizes session time). Finally, “truth as event” — understood here in a materialist key as the irruption of a new symbolic consistency that reconfigures the field — requires temporality, hesitation, and friction; the current regime rarefies the event by neutralizing precisely the technical conditions that would make it possible. In this sense, ontotechnics not only subverts correspondence, coherence, and pragmatism but also impoverishes the regime of the event, converting it into a very rare exception in the digital public space. A correlated ontological synthesis appears in Ontology of the Difference Between Truth and Fiction, section “Material criterion of truth.” Let us return to the thread: in this informational ecosystem, "validation" primarily designates aptitude for circulation — not probative confrontation.
In this technical environment, language is reconfigured according to the principles of circulatory effectiveness. Falsehood ceases to be a moral rupture or intentional violation of the classical epistemic contract, becoming a structural function of the informational ecology — an effect of the very logic of algorithmic distribution of visibility. It is here that the classical distinction between operative falsehood, which emerges spontaneously as a way to maintain interactive fluency without prior calculation, and strategic falsehood, which mobilizes falsity in an instrumental and planned way, dissolves. Both converge in the same functional regime: the algorithmic optimization of circulation. Both are evaluated not by correspondence with the world, but by the aptitude to activate available affects and access privileged zones of visibility.
This forces us to reopen the question of intentionality. If falsehood is, to a large extent, a systemic effect (operative falsehood) and not just a deliberate act (strategic falsehood), then the philosophical definition of falsehood cannot be limited to the emitter's malice. Intentionality remains relevant — it distinguishes calculated deception from emergent deception — but it does not exhaust responsibility. In an ecosystem where falsity is produced by structural compatibility with retention metrics, the ethics of communication must operate on three coupled levels: (1) design responsibility (whoever conceives metrics, interfaces, and promotion criteria is responsible for “ontotechnical defects” that generate falsehood without an author); (2) institutional responsibility (media, schools, platforms, regulators, who define proof protocols, uncertainty labeling, and circulation rhythms); (3) individual responsibility (minimum duties of hesitation, elementary verification, and non-amplification when there are signs of low traceability). Intentionality thus becomes a gradient in an economy of risks: from explicit malice to negligence reinforced by architecture. Where there is no malice, there may be fault due to uncritical adherence; where the architecture induces predictable error, there is product responsibility. In summary: recognizing systemic falsehood does not absolve the agent; it expands the scope of imputation, shifting it from the psychologism of the emitter to the engineering of the conditions of enunciation and to distributed practices of communicational care. Cf. Ontology of the Difference Between Truth and Fiction, section “Validation community and proof time.”
Consider the viralization of the Tide Pod Challenge in January 2018. Videos of young people biting detergent capsules proliferated on YouTube and Instagram. Procter & Gamble issued medical statements warning of serious poisoning. Poison control centers documented two hundred and twenty cases in two weeks. The institutional response was factually correct: "detergent capsules are toxic and can cause burns to the esophagus." But this true sentence generated twelve thousand shares, while a parody video titled "I ate a Tide Pod and became a superhero" generated 2.3 million views in forty-eight hours. This is a pure example of the convergence defined above, confirming the functional prevalence already evidenced by systemic design (cf. Ontology of the Difference Between Truth and Fiction, section “Speed vs. proof”).
The discursive scene thus transforms into a matrix of sensory feedback. What appears no longer does so by epistemic merit, but by adherence to the user's affective-cognitive profile. The public space ceases to be a place of discursive interaction to become a surface of continuous emotional modulation.
This continuous modulation also reconfigures the subjective experience of truth. What presents itself as "evident" now coincides with what has affective salience, producing a seems-true without the work of proof that would sustain an is-true. When validation is discouraged, subjective certainty derives from the intensity of the affect and not the resistance of the statement, establishing a regime of instantaneous certainty. Ethically, this shifts responsibility from the mere "do not lie" to the management of one's own rhythms of attention: cultivating hesitation, tolerating delay, and suspending amplification when traceability is low. As anticipated by Jonathan Crary and Byung-Chul Han, this transformation implies the collapse of critical negativity. Falsehood ceases to be an exception — it becomes an operative rule, a systemic requirement.
The symbolic function of language undergoes a decisive reconfiguration here. Understood not as mere representational codification, but as the capacity for material reorganization of difference, this function is appropriated by a logic of heuristic simplification typical of contemporary algorithmic infrastructures. This logic privileges what confirms and resonates, penalizing what destabilizes or complexifies. The enunciative value — that is, the material potency of a statement to access visibility and generate effects in the public sphere — shifts from the internal complexity of the argument to its capacity for circulation.
Formulas with low cognitive density, linear causalities, and fragments of simulated proof — decontextualized screenshots, isolated statistics — become the new effective speech acts. The performativity of the statement is assessed by its compatibility with the algorithmic amplification system.
This symbolic reorganization rests on a decisive material foundation: the technical-informational architecture of digital platforms. Every user action — click, abandonment, share — is recorded as operative data, not interpretive data. This data is not semantically understood, but statistically correlated, as Matteo Pasquinelli demonstrates in his critique of the "artificial intelligence of capital." As an operator of regularities, the algorithm learns by statistical recurrence: it adjusts and reinforces without comprehending. Investigations by the Mozilla Foundation and Harvard University documented that users who watched videos about vegetarian diets were progressively recommended content on extreme veganism, then radical anti-speciesism, then food-industry conspiracy theories. The algorithm did not understand the ideological positions. It identified a pattern: users who watched A stayed longer on B, and even longer on C. YouTube explained that the algorithm optimized session time. It had empirically discovered that progressively more radical content retained attention. The platform promoted radicalization not out of ideological conviction, but out of economic optimization: emotional intensification generated permanence, and permanence generated advertising revenue. The statistical correlation — radicality generates retention — was sufficient. Semantic understanding of what radicality means and what its social effects are was irrelevant.
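The drift described here can be reproduced with a toy model. In the sketch below, the catalog, intensity values, and retention model are all assumptions made for illustration: a greedy rule that always recommends whatever is predicted to retain the current viewer longest walks, step by step, toward the most extreme item, with no semantic representation of what any item says.

```python
# Hypothetical sketch of session-time optimization producing drift.
# Catalog and dwell model are invented; only the correlation matters.

CATALOG = {
    "vegetarian recipes": 0.2,
    "extreme veganism": 0.5,
    "radical anti-speciesism": 0.7,
    "food-industry conspiracy": 0.9,
}  # name -> emotional intensity (assumed)

def predicted_dwell(current_intensity, candidate_intensity):
    """Toy retention model: content slightly more intense than what the
    user just watched retains best (a learned regularity, not meaning)."""
    return 1.0 - abs(candidate_intensity - (current_intensity + 0.2))

def next_item(current, seen):
    candidates = [c for c in CATALOG if c not in seen]
    return max(candidates, key=lambda c: predicted_dwell(CATALOG[current], CATALOG[c]))

session = ["vegetarian recipes"]
while len(session) < len(CATALOG):
    session.append(next_item(session[-1], set(session)))
print(" -> ".join(session))
# vegetarian recipes -> extreme veganism -> radical anti-speciesism
# -> food-industry conspiracy
```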
Without resorting to essentialisms of technique, it is also important to open the reading through the philosophy of technology. In Bernard Stiegler, technique appears as pharmakon — simultaneously poison and remedy — because it externalizes memory and retains the symbolic, reconfiguring our circuits of attention and credit in the common; in this key, digital ontotechnics institutes a pharmacology of retention, where the cure can only come through a redesign of the prostheses of attention and the regime of retentions. Don Ihde shows that all technique mediates perception and action across multiple scales (it amplifies, reduces, translates), which allows us to read algorithms not as passive filters, but as structures of co-intentionality that redistribute agency among users, metrics, and interfaces. Albert Borgmann distinguishes devices that conceal effort and maximize convenience from the focal paradigm that calls for attentive practice; current platforms function as cognitive convenience devices that dissolve the friction of proof — hence, a politics of truth must refocus the public space through rhythms, formats, and rituals that restore attention as a shared practice.
It is important to emphasize that this algorithmic configuration is neither neutral nor inevitable. The choice to optimize session time at the expense of veracity constitutes a conscious business decision, determined by the advertising revenue structure of the platforms. Internal documents revealed in recent years — notably the Facebook Files disclosed by Frances Haugen in 2021 — demonstrate that these companies have detailed data on the polarizing, addictive, and disinformative effects of their recommendation systems. Meta knew, since at least 2018, that its algorithm prioritized content that generated "significant anger" because it maximized engagement. YouTube knew, according to internal investigations in 2019, that its auto-play system progressively led users to more extreme content. The maintenance of this design does not result from technical ignorance or operational impossibility, but from economic calculation: functional falsehood[n.1] is more profitable than truth. Every additional second of retention translates into measurable advertising exposure, and every engagement metric sustains the market valuation of these corporations. Technically, we have seen statistical blindness; politically, agency is human. The ontotechnics of functional falsehood is, therefore, inseparable from a political economy of attention where cognitive capture constitutes the business model. What presents itself as systemic necessity is, in fact, strategic choice — reversible, modifiable, but deliberately maintained as long as value extraction depends on it.
In this new regime, the public discursive space ceases to function as an arena for shared justification, operating as a laboratory for affective adherence. Validation becomes a structurally discouraged act — not through explicit censorship, but because the infrastructure itself shifts the cognitive effort of verification to immediate gratification. The cost of proof is externalized: it is no longer distributed between emitters and receivers in a communicational contract, but absorbed and dissolved by the technical system of visibility itself. Proof encounters the cost differential already indicated (proof time vs. fluidity), being penalized for interrupting the flow.
Language that seeks to maintain a link to truth is thus forced to incorporate the codes of viral fiction: narratives of rapid digestion, emotional condensation, immediate visual appeal. This adaptation, however, undermines the critical potency of truthful language. When truth is compelled to simulate the modes of expression of falsehood to compete for attention, it compromises hesitation, abdicates openness to refutation, and nullifies the time necessary for verification. What could be a survival strategy becomes, by accumulated effect, the operative dissolution of the critical function of discourse.
It is important to evoke, by way of counterpoint, the tradition that thinks of language as a space of traversal and resistance. From the Socratic gesture of public interrogation to the Derridean proposal of deconstruction, language has been thought of as a place where truth is not given, but sought — constructed in the tension between the statement and the unsaid, between the instituted and what exceeds it. Such a conception requires temporality, hesitation, openness to dissonance: everything that the current technical environment systematically neutralizes. The speed of the feed is not just a rhythm — it is an organizing principle that prevents the emergence of truth as an event. In light of phenomenology, truth can be thought of as a modality of appearance: not just propositional correctness, but situated disclosure (aletheia) that requires temporal thickness and the body's prolonged attention. In Merleau-Ponty, embodied perception founds horizons of meaning that are not neutral: the lived body functions as a selection matrix for what can figure as evident. In an ontotechnical environment of low cognitive density, this grammar of appearing is compressed: the phenomenological window necessary for the true to reveal itself as such is reduced. Hermeneutics helps to name what is lost: the hermeneutic circle (Gadamer) — between pre-understanding and confrontation with the text/world — requires rhythms of back-and-forth that retention metrics shorten. In Ricoeur, the passage through narrative and action establishes interpretive distances that allow for the revision of judgment; the feed suppresses this distance, gluing evidence to the immediate. Phenomenology and hermeneutics thus converge on an operative point: without a rite of attention and distancing, the subjective experience of the true is disempowered by a design that favors rapid affective synthesis.
It is at this point that a material redesign of the conditions of enunciation becomes unavoidable.
This diagnosis requires a theoretical shift: from the moral critique of falsehood to the ontotechnical analysis of the conditions of enunciation. The issue is not denouncing false content, but understanding the material devices that make it functionally superior. It is not enough to try to make truth competitive through the instruments of its own effacement. From this follows the demand for an explicit redistribution of the cost of verification through infrastructural measures. Recognizing that platforms do not self-regulate — given that functional falsehood sustains their business model — jurisdictions such as the European Union and Australia are beginning to impose regulatory frameworks that require algorithmic transparency, accountability for amplified content, and mechanisms for mitigating misinformation. The European Digital Services Act (DSA), approved in 2022, represents a first step by requiring large platforms to undergo external audits of their recommendation systems and risk assessments of systemic effects. Although insufficient — since they remain oriented towards the moderation of individual content and not the transformation of the amplification logic — these instruments demonstrate that regulatory intervention on technical architecture is politically viable. This diagnosis yields an operative criterion for institutions: verification time as a public good, formulated in Ontology of the Difference Between Truth and Fiction, section “Institutions of duration.”
Information ethics and social epistemology. In Floridi, the infosphere is not just a repository, but a common ontological environment: acting technically is intervening in the informational fabric. From this derive positive duties — to preserve, enrich, and not degrade informational value — which, in our context, translate into design obligations (metrics, interfaces, promotion policies) and governance obligations (audits, traceability, uncertainty labeling). On the side of social epistemology, knowledge is a good co-produced by practices of testimony, expertise, and public trust; validity is not just an attribute of the statement, but a relational property of networks and institutions. This requires trust infrastructures (verifiable procedures, source registries, legible evidence chains) and combating credibility asymmetries that amplify the false (echo chambers, authority biases) and silence the true (epistemic injustice). In summary: information ethics offers the normative standard (do not degrade the infosphere; raise its value) and social epistemology specifies the collective mechanisms of validation (organization of testimony, distribution of trust, design responsibility); both converge on our criterion: without networks of proof and trust incorporated into the architecture, truth loses world. Such a requirement implies regulation that is not limited to penalizing false content a posteriori, but that imposes a redesign of algorithmic success metrics, prioritizes reflection time over reaction speed, and redistributes the burden of proof from those who consume to those who publish and amplify.
Without repeating what has already been laid out, we can make explicit the principles of an ethics of infrastructure: (i) objective-oriented transparency, not just about moderation, but about the objective function of recommendation systems itself (declared weights for veracity, diversity, and well-being, with aggregated public reports); (ii) continuous independent auditability, enabled by audit APIs and sandboxes with synthetic data for third-party testing, with correction obligations when predictable disinformative effects are detected; (iii) normative prioritization of veracity over engagement, materialized in multi-criteria optimization with explicit penalization of patterns associated with low traceability and rupture of the evidence chain; (iv) promotion of diversity of perspectives, through diversity injectors and limits to graph homophily (minimum exposure to verified independent sources within defined time windows); (v) responsibility for social impact, with public risk assessments, provenance traces, and chains of custody for amplified content. On the operative level, this translates into friction algorithms: deliberate latencies in viral shares, "read before sharing" prompts, dynamic limits on re-amplification, deceleration of low-verifiability trends, pause quotas that protect proof time, and uncertainty labels that call for public hesitation. All of this must converge towards a new metric of success: sustained verification time, and not just passive permanence. A minimal sketch of such a pipeline follows.
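As a rough illustration of how such friction could be wired into a ranking and sharing pipeline, the following sketch combines a declared multi-criteria objective with two of the gates listed above. All weights, thresholds, and field names are assumptions, not any existing platform's API.

```python
# Hypothetical "friction by design" pipeline: multi-criteria scoring plus
# share gates. Weights and thresholds are invented for illustration.

import time

WEIGHTS = {"engagement": 0.3, "veracity": 0.5, "diversity": 0.2}  # declared, auditable

def promotion_score(post):
    """Multi-criteria objective: engagement alone no longer decides."""
    base = (WEIGHTS["engagement"] * post["engagement"]
            + WEIGHTS["veracity"] * post["traceability"]
            + WEIGHTS["diversity"] * post["source_novelty"])
    if post["traceability"] < 0.3:   # explicit penalty for broken evidence chains
        base -= 0.25
    return base

def share(post, shares_last_hour, read_seconds):
    """Friction gates: 'read before sharing' and deceleration of viral spikes."""
    if read_seconds < 10:
        return "prompt: open and read the article before sharing"
    if shares_last_hour > 1000 and post["traceability"] < 0.5:
        time.sleep(2)                # deliberate latency on low-verifiability virality
        return "share queued with uncertainty label"
    return "share published"

post = {"engagement": 0.9, "traceability": 0.2, "source_novelty": 0.4}
print(round(promotion_score(post), 3), "|", share(post, 5000, 3))
```

The design choice worth noting is that the weights are a public, contestable artifact rather than a trade secret, which is precisely what principle (i) demands.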
Algorithmic governance models. Beyond the DSA, a binding multistakeholder model is required that involves public regulators, civil society, academia, and companies in the design, testing, and review of platforms. In addition to the audit APIs already mentioned, this requires: (i) mandatory ethical audits ex ante (before launching/changing recommendation systems) and ex post (based on real data), with publication of summary reports; (ii) independent authorities with a technical mandate to impose design corrections when predictable effects of misinformation, polarization, or discrimination are detected (power of injunction and fine); (iii) public registers of model versions, objective function parameters, and catalogs of training/evaluation data (with privacy protection), allowing decisions to be traced; (iv) appeal mechanisms for those harmed by algorithmic decisions, with deadlines and response obligations; (v) regulatory sandboxes for supervised experimentation with veracity/diversity metrics.
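Item (iii) can be made tangible with a hypothetical register entry. The schema below is invented for illustration; an actual register would be specified by the competent authority. The point is that the objective function and model version become inspectable public artifacts whose integrity anyone can check.

```python
# Hypothetical public model-register entry (schema is an assumption).

import json, hashlib
from dataclasses import dataclass, asdict

@dataclass
class ModelRegisterEntry:
    model_version: str
    deployed_on: str                  # ISO date
    objective_weights: dict           # declared objective-function parameters
    training_data_catalog: str        # reference to dataset catalog (privacy-preserving)
    ex_ante_audit_report: str         # identifier of the pre-deployment audit

entry = ModelRegisterEntry(
    model_version="recsys-2025.03",
    deployed_on="2025-03-01",
    objective_weights={"engagement": 0.3, "veracity": 0.5, "diversity": 0.2},
    training_data_catalog="catalog://interactions-2024-q4",
    ex_ante_audit_report="audit://2025-017",
)

record = json.dumps(asdict(entry), sort_keys=True)
print(hashlib.sha256(record.encode()).hexdigest()[:16], record)
# The hash lets auditors and appellants verify that the deployed system
# matches the registered one.
```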
Role of the individual and education for resistance. Resistance is not exhausted by regulation or institutional design: it requires critical media and digital literacy that combines provenance reading skills (tracing sources, recognizing legible evidence chains and gaps), functional understanding of algorithms (knowing in operative terms what the objective function, training signal, retention metrics, and their selection effects are) and ethics of attention (training in delay, suspension of amplification, tolerance of dissonance). In pedagogical terms, this implies curricula that teach how to reconstruct content paths (from capture to ranking), simulate recommendations to show how small variations in interaction produce informational drift, and practice verification protocols with explicit proof times. On the civic level, it implies public rites of proof — distributed verification laboratories, slow reading circles, community uncertainty labeling devices — that restore shared time to truth. Thus, “public reason” ceases to be a normative abstraction and becomes a trainable competence: organizing deep attention under conditions of noise, sustaining hesitation where the infrastructure accelerates, and maintaining operative coherence in the face of pressure from mere circulatory aptitude.
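The recommendation-simulation exercise proposed above can be run in a few lines. In this assumed feedback model, the system amplifies the user's current taste and the taste adapts to what is shown; two users whose initial interactions differ only slightly end up measurably far apart, which is the informational drift the curriculum should make visible.

```python
# Classroom-sized drift simulation; all parameters are invented.

def run_session(p0, steps=8, gain=1.4):
    """Feedback loop: the system recommends slightly more intense content
    than the current preference, and the preference adapts toward it."""
    p = p0
    trajectory = [p]
    for _ in range(steps):
        rec = min(1.0, gain * p)   # recommendation amplifies current taste
        p = 0.5 * (p + rec)        # preference drifts toward what is shown
        trajectory.append(p)
    return trajectory

a = run_session(0.05)   # user A barely lingers on mild content
b = run_session(0.08)   # user B lingers a fraction longer
print("A ends at", round(a[-1], 3), "| B ends at", round(b[-1], 3))
print("initial gap 0.03 -> final gap", round(b[-1] - a[-1], 3))
```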
Resistance and counter-algorithmic practice. “Socratic resistance” only becomes effective in the digital realm when public interrogation translates into material procedures: auditable lists of premises (algorithmic decision records), the right to ask incorporated into interfaces (clear access to the objective function, data origin, operative explanations), and adversarial devices (prioritized replies with an evidence chain). “Derridean deconstruction” shifts here to design: exposing and undoing the binary oppositions and hierarchies embedded in the metrics (e.g., engagement > veracity; salience > proof), introducing temporal difference (pauses, delays, re-amplification limits) and structural difference (forced diversification of sources) as techniques of denaturalization. Resistance ceases to be just content critique and becomes a practice of engineering and technological activism: browser extensions that restore visible provenance, plugins for low traceability alerts, alternative “slow” feeds with explicit weights for veracity/diversity, and civic hackathons for bias testing and public stress tests of platforms. This counter-algorithmic practice does not aim to paralyze circulation, but to redistribute the cost of proof and reopen the time where truth can resist — prolonging, in a materialist mode, Socratic interrogation and Derridean deconstruction on the very plane of infrastructures.
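By way of illustration, the low-traceability alert such a browser extension might compute could look like the following sketch. The signals, weights, and threshold are invented; in practice they would be derived from provenance metadata and open verification sources.

```python
# Hypothetical pre-share traceability check (heuristic is an assumption).

def traceability_score(post):
    """Crude additive heuristic over legible-evidence signals."""
    score = 0.0
    if post.get("source_url"):             score += 0.4  # names a source at all
    if post.get("publication_date"):       score += 0.2  # dateable claim
    if post.get("author_identified"):      score += 0.2  # attributable
    if post.get("cites_primary_evidence"): score += 0.2  # links data or documents
    return score

def pre_share_check(post, threshold=0.6):
    s = traceability_score(post)
    if s < threshold:
        return f"alert: low traceability ({s:.1f}) - hesitate before amplifying"
    return f"ok to share (traceability {s:.1f})"

viral_screenshot = {"source_url": None, "author_identified": False}
print(pre_share_check(viral_screenshot))
```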
As an alternative to the current business model, digital public goods or common digital infrastructures should be instituted: open protocols for provenance and chain of custody of content; public indices of informational quality and auditable trust graphs; interoperable evidence repositories for journalism and science; public APIs for access to civic metrics (verification time, source diversity) and interoperability obligations between platforms. These arrangements reorient the ecosystem towards public interest, shifting the dominant rationality from attention extraction to the shared production of conditions of proof.
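A minimal sketch of such an open provenance protocol, assuming invented record fields: each custody step is hash-linked to the previous one, so tampering anywhere in the chain becomes detectable by any reader. Standards such as C2PA pursue this idea at production scale.

```python
# Hash-linked chain of custody for content (fields are an assumption).

import hashlib, json

def append_record(chain, actor, action, payload):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"actor": actor, "action": action, "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every link; a single tampered record breaks the chain."""
    prev = "genesis"
    for rec in chain:
        body = {k: rec[k] for k in ("actor", "action", "payload", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, "photographer", "capture", "img-0001")
append_record(chain, "newsroom", "publish", "article-774")
print(verify(chain))                # True
chain[0]["payload"] = "img-9999"    # tamper with the origin record
print(verify(chain))                # False
```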
Examples of counter-algorithmic institutions. Some existing models rehearse, with varying degrees of maturity, the principles defended here. Federated networks like Mastodon/ActivityPub avoid a single center of decision and reduce dependence on engagement metrics (chronological/localized timelines by default, public moderation rules per instance). Success: greater transparency and possibility of choosing norms; challenges: fragmentation, content discovery between instances, distributed moderation, and economic sustainability. Knowledge platforms like Wikipedia practice legible validation (verifiability policies, edit history, discussion pages, auditable reversions). Success: public proof and reversibility; challenges: systemic bias, targeted harassment, and dependence on volunteering. “Slow” journalism (e.g., Tortoise, Zetland, De Correspondent) and civic investigation (e.g., ProPublica, The Markup) prioritize proof time over speed, with membership models and publication of evidence bases. Success: deepening and traceability; challenges: scale, attraction in ecosystems accustomed to rapid gratification. Verification ecosystems (e.g., fact-checking networks like IFCN and national initiatives) introduce evidence chains and uncertainty labels. Success: public corrections and minimum standards; challenges: latency, limited reach without platform integration. Open annotation layers (e.g., Hypothes.is) add visible contradiction to content, but face adoption inertia. Together, these experiences show that slow infrastructure and legible validation formats are possible: they gain in proof and accountability what they lose in speed and scale. The next step is to institutionalize these mechanisms — interoperable, auditable, and economically sustainable — so that they no longer depend on heroic exceptions.
What is at stake is not just the circulation of content, but the very possibility of a public reason as a material space for the reorganization of meaning. Functional falsehood is neither a moral accident nor an epistemic pathology: it is a necessary effect of an algorithmic ecology designed to reward what adheres, not what verifies. Critique cannot be limited to normative diagnoses: it must become intervention in the same ontotechnical sense already specified, demanding an ethics of infrastructure — an ethics that does not merely judge the statement, but redesigns the material conditions of its emergence. Only then will it be possible to restore to language its most radical power: that of resisting what merely functions.
Truth needs counter-algorithmic institutions guided by this design criterion.
The justice of public space begins by restoring to truth the right to spend time.
[n.1] “Functional falsehood”: in the ontological article, falsehood as a structural possibility under low temporal friction.
—— David Cota — Founder of the Ontology of Emergent Complexity ——