What Is the DHARMA Framework: How NoPsyops Analyzes Influence Operations

Every day, content circulates online that is designed not to inform but to shape what its audience is capable of thinking. Not partisan spin or aggressive advocacy — those are old and mostly legible. What's harder to see is coordinated, covert production: operations engineered to make it more difficult to distinguish a real signal from manufactured noise.
The challenge isn't just detection. It's precision. Most tools that claim to identify "disinformation" or "propaganda" apply those labels so broadly they become meaningless — flagging emotional language as manipulation, treating ideological alignment as coordination, calling something an influence operation because it's uncomfortable. That sloppiness isn't neutral. It degrades the very thing it claims to protect.
NoPsyops is built on a different premise: that rigorous analysis requires making finer distinctions, not coarser ones. The methodology behind our work is called DHARMA — a structured, modular framework designed to separate what is genuinely covert and coordinated from what is simply loud, one-sided, or inconvenient. Here's how it works.
The First Question: Who Is Actually Speaking?
Before analyzing what a piece of content does, DHARMA asks who is producing it and from what position. This is Module 0 — Enunciative Position Analysis — and it's the gate that everything else passes through.
The most common error in this field is treating a report about manipulation as evidence of manipulation. A journalist documenting a foreign influence operation is not running one. A researcher cataloguing fear-based rhetoric is not deploying it. DHARMA makes this distinction formal and mandatory.
Every piece of content is classified into one of four modes: Descriptive (the author is reporting on manipulation from the outside), Executive (the content itself is the operation), Dual (both simultaneously — which does happen), or Indeterminate (insufficient evidence to classify). That mode determination conditions how every subsequent module runs.
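The mode gate can be sketched as a small data model. This is an illustrative sketch, not DHARMA's actual schema; the enum values and the helper function are assumptions about how the gating could be expressed:

```python
from enum import Enum

class EnunciativeMode(Enum):
    """Module 0 verdicts: who is speaking, and from what position."""
    DESCRIPTIVE = "author reports on manipulation from the outside"
    EXECUTIVE = "the content itself is the operation"
    DUAL = "both simultaneously"
    INDETERMINATE = "insufficient evidence to classify"

def runs_full_manipulation_analysis(mode: EnunciativeMode) -> bool:
    """Illustrative simplification: Descriptive content is analyzed as a
    report *about* an operation, not as one; only Executive and Dual
    content proceeds to the full downstream analysis."""
    return mode in (EnunciativeMode.EXECUTIVE, EnunciativeMode.DUAL)
```

The point of making the mode a hard gate, rather than a soft signal, is that it forces the classifier to answer the "who is speaking" question before any manipulation scoring runs at all.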
The Four Conditions That Define a Formal Influence Operation
Module 1 is where DHARMA draws the line between what qualifies as a genuine influence operation and what is merely obnoxious, biased, or adversarial. Content must satisfy four necessary conditions — all four, not two out of four.
Organizational Coordination. The content shows evidence of centralized or networked production: synchronized messaging across unconnected sources, template replication, coordinated publication timing relative to external events. Convergence alone doesn't qualify. Multiple outlets covering the same story with similar framing because it's a significant story is journalism, not coordination. The bar is evidence of organizational structure, not just similarity of output.
Active Attribution Concealment. This is the most critical condition. The real identity of the content's producer is being actively disguised — not merely unstated, but hidden. A propaganda outlet that openly identifies itself as a propaganda outlet is not running a covert operation. A partisan journalist who signs their name is not an IO asset, regardless of how manipulative their content is. Anonymity is not concealment. The question is whether the emitter is actively working to prevent the audience from identifying who is sending the message.
Predefined Strategic Objective. The content serves an identifiable goal that precedes and directs its production — it is one node in a larger architecture. Having an agenda doesn't meet this condition. Every editorial outlet has an agenda. The IO criterion requires that content is produced in service of a strategic objective that exists prior to and independent of its ostensible subject matter.
Epistemic Degradation. This is what separates influence operations from legitimate persuasion, however aggressive. Persuasion makes claims that can be evaluated, accepted, or rejected. Epistemic degradation bypasses evaluation entirely — through channel saturation (flooding the environment with volume to make signal/noise discrimination impossible), noise injection (fabricating sources or data to erode trust in any source), or decoder attacks (exploiting cognitive shortcuts so audiences arrive at confident beliefs they never actually examined). Emotional content and one-sided argument do not meet this condition on their own.
If all four conditions are present, DHARMA classifies the content as a formal influence operation. If some are absent, it names the correct category instead: partisan journalism, institutional PR, organic polarization, engagement-optimized content. Those phenomena are real and worth understanding — they're just categorically different, and calling them something they're not produces errors that compound downstream.
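The four-condition gate is a strict conjunction, which is easy to mis-implement as a score. A minimal sketch of the logic, with illustrative (not official) field names and a deliberately simplified fallback routing:

```python
from dataclasses import dataclass

@dataclass
class ConditionEvidence:
    """The four necessary conditions from Module 1."""
    organizational_coordination: bool
    active_attribution_concealment: bool
    predefined_strategic_objective: bool
    epistemic_degradation: bool

def classify(ev: ConditionEvidence) -> str:
    # All four conditions are necessary: any single miss routes the
    # content to a different (non-IO) category instead.
    if all([ev.organizational_coordination,
            ev.active_attribution_concealment,
            ev.predefined_strategic_objective,
            ev.epistemic_degradation]):
        return "formal influence operation"
    # Fallback routing is simplified here for illustration: coordinated
    # but openly attributed output is overt, not covert.
    if ev.organizational_coordination and not ev.active_attribution_concealment:
        return "overt coordinated messaging (e.g. institutional PR, open propaganda)"
    return "not an influence operation (e.g. partisan journalism, organic polarization)"
```

Note that the negative branches return named categories, not just "no": the misclassification errors described above come precisely from collapsing those categories into a single "disinformation" bucket.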
From Classification to Assessment
Once content clears the gate, Modules 2 and 3 map the specific mechanisms at work across five operational layers — among them how the content is structured, how it moves, what cognitive routes it exploits, and whether it exhibits what the framework calls reflexive control: engineering not just what someone believes, but what options they consider available to them.
Module 4 synthesizes all of this into a final output: a quadrant classification, an ordinal severity assessment (Low / Medium / High / Critical), and an attribution inference — a probabilistic reading of what type of operator is most consistent with the behavioral markers in the content. Attribution is always declared as inference, never as established fact. The framework is explicit about what evidence would change it.
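The shape of that final output matters: severity is ordinal rather than a continuous score, and attribution is carried as an explicitly labeled inference. A sketch of that structure, with field names that are assumptions rather than DHARMA's published schema:

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Severity(IntEnum):
    """Ordinal severity: comparable levels, not a slider score."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Assessment:
    quadrant: str                      # quadrant classification
    severity: Severity                 # ordinal assessment
    attribution_inference: str         # always an inference, never declared fact
    evidence_that_would_change_it: list[str] = field(default_factory=list)
```

Keeping the would-change-it evidence inside the output object, rather than in an analyst's notes, is what makes each assessment revisable on the record.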
Why the Distinctions Matter
DHARMA's precision is not an academic preference. It has practical consequences.
Mislabeling open propaganda as a covert operation is a mistake with political effects. It degrades the credibility of legitimate analysis. It can be instrumentalized to suppress adversarial speech that is entirely above board. And it makes it harder, not easier, to identify the operations that genuinely warrant concern.
The framework is also designed to be falsifiable. Every output documents what evidence would change the classification. Every confidence level is derived from the weakest link in the evidentiary chain, not averaged across the stronger ones. When the evidence doesn't support a strong conclusion, the output says so.
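The weakest-link rule is worth making concrete, because it is the opposite of the averaging most scoring systems do. A minimal sketch, assuming per-link confidences on a 0–1 scale (the link names are illustrative):

```python
def chain_confidence(links: dict[str, float]) -> float:
    """Weakest-link rule: overall confidence is the minimum across the
    evidentiary chain, never the average of the stronger links."""
    return min(links.values())

links = {
    "coordination": 0.90,
    "concealment": 0.40,   # one thin link...
    "objective": 0.80,
    "degradation": 0.85,
}
# ...caps the whole chain at 0.40, where a naive mean (~0.74)
# would overstate the certainty of the conclusion.
```

The practical effect is that a single weakly evidenced condition drags the stated confidence down with it, so the output cannot sound more certain than its worst evidence.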
That's what rigorous media analysis looks like. Not a confidence score on a slider. Not a label applied because something feels suspicious. A structured chain of reasoning that can be examined, challenged, and revised.
That's what NoPsyops is built to provide.