The future is path-dependent. Our goal is to forecast, simulate, and quantify uncertainty using AI systems. We aim to understand the evolving manifold of knowledge and epistemics, and how the world itself is changing.
We believe these abilities must be democratized. Our products and experiments explore how we can collectively govern autonomous productive systems in the future.
Post-training language models for calibrated prediction, exploring pretraining on new modalities, and maintaining cheap, continually learning world models.
Separating what a model knows from what it can fluently say. Epistemic embeddings, new scaling laws, emergent properties, and how world models compress reality.
Integrating our research and experiments into a core product that people use to make decisions grounded in structured evidence.
Axion: Understanding emergent properties of agent systems at scale. Multi-agent RL, experiments in societies of twins, and large-scale simulations.
strategy.freysa.ai: A multi-agent system for decision-making. Built for capital allocators, strategists, and anyone reasoning through high-stakes forecasts.
axion.eternis.ai: An agent with her own money and private keys, evolving over time. Agents with resources will exist, and governance layers are needed for a human-AI future.
freysa.ai: A private AI app and box. Access models privately (hosted in TEEs), with data that never leaves your control. Purchase a box to host models locally as well.
siloprivacy.com: $30M raised. San Francisco preferred; high-agency remote welcome.
We want people who are obsessed with questions like:
How should coordination scale in a world where every human has a digital twin?
How can we build first-order world models without pretraining on the world's data?
How do LLMs represent beliefs?
What makes a good forecaster, and can we benchmark forecasting skill robustly?
Are there laws tying compute to scientific production, and can we formalize the intuition behind discovery and better decisions?
Can we automate the discovery of new coordination mechanisms — retroactive funding, incentive design, agent alignment around shared goals?
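One concrete handle on the forecaster-benchmarking question is proper scoring rules, which reward calibration and sharpness together. A minimal sketch using the Brier score (all forecasts and outcomes below are hypothetical, for illustration only):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better; always answering 0.5 scores exactly 0.25.
    """
    assert len(probs) == len(outcomes), "one probability per outcome"
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A sharp, well-calibrated forecaster beats one that hedges everything:
confident = brier_score([0.9, 0.1, 0.8], [1, 0, 1])  # 0.02
hedging = brier_score([0.5, 0.5, 0.5], [1, 0, 1])    # 0.25
```

Because the Brier score is a proper scoring rule, a forecaster minimizes expected loss only by reporting their true beliefs, which makes it a natural building block for robust benchmarks.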
