Towards SOTA Forecasting LLMs
We trained an 8B-parameter model that surpasses all published baselines on open-ended forecasting, including models 10-15x its size. Here's how context building, calibrated reward design, and a modified GRPO got us there.
Eternis · March 24, 2026
