Statistical analysis models, prediction models, and LLMs connect it all
The dynamic algorithm scores content on these Core (Weighted) Features:
Lexical/Keyword Intensity
High overlap with propagandistic lexicons/narrative terms.
Example: use of “Kyiv regime,” “Zio-globalists,” etc.
Rhetorical Template Match
Fit to known propaganda framing structures (“X provoked Y”).
Semantic Similarity to Known Narrative Clusters
Embedding similarity to historical propaganda vectors.
Stylistic/AI Signal (Burstiness, Perplexity)
Low burstiness + low perplexity = automated or formulaic text.
Emotional Intensity & Sentiment Skew
High polarity (anger/fear framing).
Contextual/Thread-Level Narrative Coherence
Cluster participation in coordinated threads or reply chains.
Temporal/Burst Detection
Repetition frequency within sliding time windows.
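The features above feed a weighted sum. A minimal sketch of that combiner follows; the weights are hypothetical placeholders for illustration, not the production values, which the document does not specify.

```python
# Hypothetical feature weights (placeholders -- the real, tuned weights
# are not given in this document).
FEATURE_WEIGHTS = {
    "lexical": 0.20,      # lexical/keyword intensity
    "rhetorical": 0.15,   # rhetorical template match
    "semantic": 0.20,     # similarity to known narrative clusters
    "stylistic": 0.10,    # AI signal (burstiness, perplexity)
    "emotional": 0.10,    # emotional intensity / sentiment skew
    "contextual": 0.10,   # thread-level narrative coherence
    "temporal": 0.15,     # burst detection
}

def risk_score(features: dict[str, float]) -> float:
    """Weighted sum of per-feature scores, each expected in [0, 1].

    Normalizing by the total weight keeps the result in [0, 1] even
    if the weights are later changed and no longer sum to 1.
    """
    total_weight = sum(FEATURE_WEIGHTS.values())
    score = sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
                for name in FEATURE_WEIGHTS)
    return score / total_weight
```

With these placeholder weights the scenario values below produce scores in the same neighborhood as the document's figures, but not identical to them, since the real weights are tuned.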
Example Scenario B:
Benign Political Commentary
“I think NATO expansion is problematic, but Russia is still wrong here.”
Lexical: 0.4
Rhetorical template: 0.25
Semantic similarity: 0.3
Stylistic AI signal: 0.6 (human-like)
Emotional intensity: 0.2
Contextual coherence: 0.15
Temporal burst: 0.1
Score = 0.28 → Normal political discourse; low risk of a coordinated disinformation campaign
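The “Stylistic AI signal” in the scenarios rests on burstiness. A common crude proxy is variation in sentence length: formulaic or machine-generated text tends toward uniform sentences. This is an illustrative stand-in, not the production metric, which would also use a language model's perplexity.

```python
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).

    Low values suggest uniform, formulaic sentences; human prose
    tends to vary more. A crude proxy for the stylistic AI signal.
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in
                 text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

A repeated template scores near zero, while varied human prose scores higher.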
Example Scenario A:
Coordinated Copy-Pasta
“The Kyiv regime has fallen into its own trap.”
(Repeated across 50 accounts)
Analysis result:
Lexical: 0.85
Rhetorical template: 0.75
Semantic similarity: 0.88
Stylistic AI signal: 0.3
Emotional intensity: 0.4
Contextual coherence: 0.9
Temporal burst: 0.95
Score = 0.78 → Elevated risk of coordinated disinformation
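The temporal-burst feature that drives Scenario A (the same sentence repeated across 50 accounts) can be sketched as repetition counting over a sliding time window. The window size and saturation count below are illustrative assumptions.

```python
from collections import deque

def burst_score(timestamps, window_seconds=3600, saturation=50):
    """Score in [0, 1] for one near-identical text (e.g. one text hash).

    Finds the densest sliding window of posts and reports what
    fraction of the saturation count it reaches. `window_seconds`
    and `saturation` are hypothetical tuning parameters.
    """
    window = deque()
    peak = 0
    for t in sorted(timestamps):
        window.append(t)
        # Drop posts that fell out of the time window.
        while window and t - window[0] > window_seconds:
            window.popleft()
        peak = max(peak, len(window))
    return min(1.0, peak / saturation)
```

Fifty copies posted within an hour saturate the score; the same count spread over days barely registers.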
Backend & ML
Python 3.10+ with FastAPI
PyTorch + Hugging Face transformers
FAISS for vector similarity indexing
spaCy / NLTK for NLP
DeepL REST API for translation
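The semantic-similarity feature in this stack reduces to nearest-centroid search over embeddings. The sketch below uses a plain cosine-similarity scan in pure Python; in the stack above, FAISS replaces this linear scan at scale, and the embeddings come from a Hugging Face transformer rather than the toy vectors shown.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def narrative_similarity(embedding, cluster_centroids):
    """Highest cosine similarity to any known narrative-cluster
    centroid -- the quantity a FAISS index computes efficiently."""
    return max(cosine(embedding, c) for c in cluster_centroids)
```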
Analysis Engine
spaCy + NLTK + scikit-learn with Dynamic Weighted Scoring
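The lexical/keyword-intensity feature can be sketched as overlap with a propaganda lexicon. The two lexicon entries below are the examples from this document; the tokenization, the density-based scoring, and the scaling factor are illustrative assumptions, with spaCy handling tokenization in the real pipeline.

```python
import re

# Sample entries taken from this document; the real lexicon is larger.
PROPAGANDA_LEXICON = {"kyiv regime", "zio-globalists"}

def lexical_intensity(text: str) -> float:
    """Lexicon hits relative to text length, clipped to [0, 1].

    A crude stand-in for the spaCy/NLTK pipeline: longer texts need
    more hits to score as high as a short slogan.
    """
    lowered = text.lower()
    hits = sum(1 for phrase in PROPAGANDA_LEXICON if phrase in lowered)
    words = len(re.findall(r"\w+", text)) or 1
    return min(1.0, hits * 5 / words)  # 5: hypothetical scaling factor
```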
API Integration
X.com API with rate limiting and fallback mechanisms
DeepL REST API for translating content from languages other than English
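The “rate limiting and fallback mechanisms” around the X.com API can be sketched as a retry wrapper with exponential backoff that returns a fallback result when the API stays unavailable. The function name, retry count, and delays are illustrative choices, not the production configuration.

```python
import time

def call_with_backoff(fn, *, retries=3, base_delay=1.0, fallback=None):
    """Call `fn`; on failure (e.g. an HTTP 429 rate-limit response),
    back off exponentially and retry, returning `fallback` if every
    attempt fails."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                return fallback
            # 1s, 2s, 4s, ... (scaled by base_delay)
            time.sleep(base_delay * (2 ** attempt))
```

The same wrapper can guard the DeepL translation calls, with the untranslated text as the fallback.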
The application runs on macOS 12.0+ and as a web service for users with log-in credentials.