Presumed Innocent Online

Private online platforms (X, Meta, TikTok) moderate billions of content items daily. Their terms of service often include clauses allowing suspension or removal "at our sole discretion." In practice, automated systems flag content based on statistical risk scores. A user is not presumed innocent; rather, a post is presumed violative if it matches a pattern (e.g., certain keywords, account age, report frequency).
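The inversion described above can be made concrete with a toy sketch. This is a hypothetical illustration, not any platform's actual system: the keyword list, weights, and threshold are all invented for exposition. The point is structural: once crude statistical signals cross a threshold, the post is treated as violative by default, and the burden of rebuttal shifts to the user.

```python
# Hypothetical illustration of pattern-based moderation (not a real system).
# All signals and weights below are invented assumptions for exposition.

FLAGGED_KEYWORDS = {"scam", "giveaway"}  # assumed example terms


def risk_score(text: str, account_age_days: int, report_count: int) -> float:
    """Combine crude statistical signals into one score; weights are invented."""
    score = 0.0
    score += 0.4 * sum(word in text.lower() for word in FLAGGED_KEYWORDS)
    score += 0.3 if account_age_days < 30 else 0.0  # new accounts scored as riskier
    score += 0.1 * min(report_count, 5)             # each user report adds risk
    return score


def moderate(text: str, account_age_days: int, report_count: int,
             threshold: float = 0.5) -> str:
    # A post is presumed violative once the score crosses the threshold;
    # no individualized review occurs before removal.
    if risk_score(text, account_age_days, report_count) >= threshold:
        return "removed"
    return "kept"


print(moderate("Free giveaway, click now", account_age_days=5, report_count=3))   # removed
print(moderate("Lunch photos", account_age_days=400, report_count=0))             # kept
```

Note that nothing in this pipeline evaluates whether the post actually violates a rule; the score is a proxy, which is precisely why the presumption runs against the user rather than for them.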

Moreover, forensic tools (e.g., cell-site simulators, hacking warrants) operate opaquely. The presumption of innocence requires that the accused can challenge the integrity of evidence. But when the evidence is an algorithm’s output or a proprietary tool’s analysis, meaningful challenge is often impossible. This creates a de facto reversal: the accused must prove the technology erred, rather than the state proving its reliability.

This paper investigates the following question: To what extent does the presumption of innocence apply in online environments, and what normative framework should govern its application? The analysis proceeds in three parts. First, a conceptual overview of the presumption in traditional jurisprudence. Second, a diagnosis of three zones of inversion: platform moderation, digital evidence, and networked vigilantism. Third, a proposal for procedural reforms grounded in "digital due process."

Finally, legal norms must be culturally embedded. Platforms should design friction into accusatory features (e.g., requiring a verified identity for public accusations, adding a mandatory "presumption reminder" before sharing an accusation). Digital literacy curricula should teach the distinction between suspicion and conviction.