Mimir
A foundation model for dynamical systems. Mimir learns the language of how systems behave — and how to recover their underlying mathematical structure from observed data.
Why
We have foundation models for language, images, and code. We don't have one for the dynamical systems that describe the physical world — control systems, biological networks, signal processors, physical simulations.
These are all compositions of interacting subsystems, and discovering their structure from observed behavior is one of the oldest problems in science. Existing approaches either fit black-box approximations (neural ODEs, system identification) or search symbolic spaces with hand-crafted heuristics (symbolic regression). Neither gives you what you actually want: a model that understands how dynamical systems compose, and can reason about structure the way a mathematician does.
Mimir is made possible by Gimle — a novel mathematical formalism for composing dynamical systems from primitive operators. Gimle provides the structured representation that makes the space learnable: a small set of operators (sequential composition, parallel combination, feedback, scaling) that compose into arbitrarily complex systems, paired with a JAX-based simulator that can evaluate any expression exactly. This is the substrate Mimir learns over.
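Gimle's actual formalism and JAX simulator are not reproduced here, but the compositional idea can be sketched in plain Python. In this sketch (all names, `series`, `parallel`, `scale`, `feedback`, are invented for illustration), a "system" is a function from an input sequence to an output sequence, and each operator builds a new system from existing ones:

```python
# Illustrative sketch only: Gimle's real operator set and simulator are
# not shown. A "system" is modeled as a function mapping an input
# sequence to an output sequence; the operator names are invented.

def series(f, g):
    """Sequential composition: g applied to the output of f."""
    return lambda xs: g(f(xs))

def parallel(f, g):
    """Parallel combination: run both systems and sum their outputs."""
    return lambda xs: [a + b for a, b in zip(f(xs), g(xs))]

def scale(f, k):
    """Scale a system's output by a constant gain k."""
    return lambda xs: [k * y for y in f(xs)]

def feedback(f, gain):
    """Negative feedback: each step sees the current input minus
    gain times the previous output."""
    def run(xs):
        ys, prev = [], 0.0
        for x in xs:
            y = f([x - gain * prev])[0]
            ys.append(y)
            prev = y
        return ys
    return run

identity = lambda xs: list(xs)

# A few primitives compose into a more complex system; here, y = 3x.
tripler = parallel(scale(identity, 2.0), identity)
print(tripler([1.0, 2.0, 3.0]))  # → [3.0, 6.0, 9.0]
```

The point of the closed operator set is that any composition is again a system, so expressions of arbitrary depth can be built, simulated, and read back as structure.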
Given behavioral data — streams of inputs and outputs over time — Mimir synthesizes an interpretable, symbolic program in Gimle's language. Not a black-box fit, but the actual structure: something you can read, verify, prove properties about, and compose with other systems.
From Imitation to Reasoning
Mimir doesn't just pattern-match. It progresses through four stages of learning, each building a deeper capability — an approach inspired by how AlphaZero learned to play Go not by memorizing games, but by learning to reason about strategy.
Imitation
Next-token prediction on synthetic data. The model learns the language of compositional systems — what well-formed expressions look like and how structure maps to behavior.
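The imitation stage's training signal can be pictured with an ordinary language-model setup. The token vocabulary and expression serialization below are invented for illustration; Mimir's real tokenization is not shown:

```python
# Hypothetical serialization of a compositional expression into tokens.
# Imitation training is standard next-token prediction: predict each
# token from the tokens before it.

expr = ["feedback", "(", "series", "(", "scale", "(", "2", ")",
        ",", "integrate", ")", ")"]

# Inputs are the sequence so far; targets are the same sequence
# shifted left by one position.
inputs, targets = expr[:-1], expr[1:]
pairs = list(zip(inputs, targets))
print(pairs[0])  # → ('feedback', '(')
```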
Behavioral
A simulation-based loss replaces token-level supervision. The model learns to match what systems actually do — optimizing directly for functional correctness, not syntactic similarity.
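A minimal sketch of what "matching what systems actually do" means, with an illustrative mean-squared-error scoring rule (the real loss and simulator are not shown): two candidate programs that are written differently but behave identically receive the same score, which token-level supervision cannot provide.

```python
# Illustrative behavioral loss: simulate a candidate program on the
# observed inputs and score it against the observed outputs. The
# scoring rule (MSE) is an assumption for the sketch.

def behavioral_loss(candidate, inputs, observed):
    """Mean squared error between simulated and observed outputs."""
    simulated = candidate(inputs)
    errs = [(s - o) ** 2 for s, o in zip(simulated, observed)]
    return sum(errs) / len(errs)

# Syntactically different, behaviorally identical programs tie exactly.
double_a = lambda xs: [2 * x for x in xs]
double_b = lambda xs: [x + x for x in xs]
data_in, data_out = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(behavioral_loss(double_a, data_in, data_out))  # → 0.0
print(behavioral_loss(double_b, data_in, data_out))  # → 0.0
```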
Reinforcement
RL fine-tuning lets the model explore beyond the training distribution. It learns to search for better solutions through trial and reward, discovering compositions that supervised training never showed it.
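The trial-and-reward loop can be sketched at toy scale as a softmax policy over a small candidate set, updated with a simplified REINFORCE-style rule (everything here, the candidates, the reward, the update, is an illustrative assumption, not Mimir's training procedure):

```python
import math
import random

# Toy reward-driven search: a softmax policy over three candidate
# programs, rewarded by negative simulation error against data from
# a gain-of-2 system. Simplified REINFORCE update, for illustration.

random.seed(0)
candidates = [lambda xs: [x for x in xs],       # identity
              lambda xs: [2 * x for x in xs],   # gain of 2 (the truth)
              lambda xs: [x * x for x in xs]]   # square
inputs, target = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]

def reward(prog):
    """Negative squared simulation error: 0 is a perfect match."""
    return -sum((y - t) ** 2 for y, t in zip(prog(inputs), target))

logits = [0.0] * len(candidates)
for _ in range(200):
    probs = [math.exp(l) for l in logits]
    z = sum(probs)
    probs = [p / z for p in probs]
    i = random.choices(range(len(candidates)), probs)[0]
    # Simplified REINFORCE: nudge only the sampled logit, scaled by
    # its reward (wrong programs earn negative reward and sink).
    logits[i] += 0.1 * reward(candidates[i]) * (1 - probs[i])

best = max(range(len(candidates)), key=lambda i: logits[i])
print(best)  # → 1 (the gain-of-2 program)
```

The mechanism scales badly as written (it enumerates candidates), but it shows the shape of the stage: the reward depends only on behavior, so the policy can discover good programs it was never directly supervised on.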
Planning
Learned search over composition strategies. Rather than committing to a single prediction, the model evaluates candidate structures before selecting — learning how to think about composing systems, not just what to predict.
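The evaluate-before-selecting idea can be sketched as search-then-select: enumerate candidate compositions, simulate each against the data, and keep the best. The candidates, scoring rule, and beam width below are illustrative assumptions, not Mimir's actual planner:

```python
# Illustrative planning sketch: rather than committing to one predicted
# program, score every candidate by simulation and select the best.

def simulate_score(prog, inputs, observed):
    """Lower is better: mean squared error of the simulated output."""
    sim = prog(inputs)
    return sum((s - o) ** 2 for s, o in zip(sim, observed)) / len(observed)

def plan(candidates, inputs, observed, k=2):
    """Rank all candidates; a width-k beam could seed deeper search,
    but here we simply return the single best structure."""
    ranked = sorted(candidates,
                    key=lambda p: simulate_score(p, inputs, observed))
    beam = ranked[:k]
    return beam[0]

gain3   = lambda xs: [3 * x for x in xs]
offset1 = lambda xs: [x + 1 for x in xs]
square  = lambda xs: [x * x for x in xs]
inputs, observed = [1.0, 2.0], [3.0, 6.0]   # data from a gain-of-3 system
best = plan([offset1, square, gain3], inputs, observed)
print(best(inputs))  # → [3.0, 6.0]
```

A learned planner replaces the exhaustive scoring loop with a model that proposes and prunes candidate structures, but the select-after-evaluation shape is the same.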
Fields of Application
Anywhere a system takes inputs and produces outputs over time — and you want to understand why, not just predict what — Mimir applies.
Control Engineering
Recover controller structure from closed-loop behavior. Identify PID configurations, feedback topologies, and stability characteristics from observed input-output data.
Signal Processing
Decompose signal chains into their constituent filters, transforms, and feedback paths. Understand not just what a system does to a signal, but how it's built.
Biological Networks
Infer regulatory and metabolic network structure from experimental measurements. Gene regulation, neural circuits, pharmacokinetics — systems where interpretability matters as much as accuracy.
Finance
Model the compositional dynamics underlying market behavior. Discover feedback structures, mean-reversion mechanisms, and regime-switching patterns as interpretable, verifiable programs.
Physical Systems
Recover governing dynamics from sensor data or simulation output. Mechanical systems, fluid dynamics, thermodynamics — anywhere the laws are compositional but the structure is unknown.
Robotics
Identify the dynamical structure of sensors, actuators, and plant models from operational data. Build interpretable system models that can be formally verified before deployment.