Time That Remembers: Proto-Life Attractors in a Memory-Bearing Medium
We’re used to thinking of physics as stuff moving in time. In this project, we flipped that around.
What if the primary thing isn’t “stuff,” but the medium of time itself?
What if time can thicken locally, store memory, and then push back on whatever is happening?
From that single idea — simulated on a grid — something surprising falls out:
stable, cell-like bodies
pulsating rings that “breathe” while maintaining identity
droplet / packet ecologies that persist as multiple bodies
global ordering modes like stripes (less romantic, but important as a baseline)
All of this emerges from one extra ingredient added to a standard reaction–diffusion system: a time-density field that learns where interesting dynamics are happening and remembers them.
In this post I’m going to skip most of the code and focus on:
What the model is (in plain language)
How we recognise proto-life behaviour without wishful thinking
What the experiments now show (v7), and what we’re trying next
For anyone who wants the full code / math, and to run their own experiments, the repo is here:
What is a time-density field?
Instead of assuming time “ticks” uniformly everywhere, we introduce a field:
τ(x, y, t)
Intuitively:
Where τ is high, time is "thick": processes slow down and history accumulates.
Where τ is low, the medium is loose and fast: change spreads quickly and the past doesn't hold on.
Crucially, τ is not just a parameter. It’s a dynamical field that:
responds to what is happening (activity, boundaries, gradients), and
feeds back by changing how easily patterns can form and persist.
The base chemical system is a classic two-species reaction–diffusion model (Gray–Scott). On its own, Gray–Scott is a beautiful pattern generator.
With a dynamic τ attached, it becomes something else: a memory-bearing medium capable of sustaining bounded structures.
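For readers who want a concrete anchor: the Gray–Scott part is completely standard. A minimal sketch of one explicit update step on a periodic grid might look like this (the parameter values are illustrative defaults, not the repo's):

```python
import numpy as np

def laplacian(f):
    # Five-point stencil with periodic (wrap-around) boundaries.
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    # u is "fed" at rate F, v is "killed" at rate F + k,
    # and the reaction u + 2v -> 3v converts u into v.
    uvv = u * v * v
    u_new = u + dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
    v_new = v + dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u_new, v_new

# Seed: u near 1 everywhere, a small square of v in the centre.
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
v[28:36, 28:36] = 0.5
for _ in range(100):
    u, v = gray_scott_step(u, v)
```

Depending on F and k, this base system alone produces spots, stripes, or maze-like patterns — which is exactly why it makes a good substrate to attach τ to.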
How τ talks to chemistry
There are two key couplings.
1) τ changes how easily things diffuse
Instead of constant diffusion, the effective diffusion depends on τ.
So:
where τ is high, diffusion slows down → structures can “solidify”
where τ is low, diffusion speeds up → structures can smear, dissolve, or reorganise
This is one of the simplest ways to formalise “time thickens here.”
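The repo's exact coupling isn't reproduced here, but one minimal way to realise "high τ slows diffusion" is to divide a base coefficient by an increasing function of τ. The functional form and the gain γ below are assumptions for illustration:

```python
import numpy as np

def effective_diffusion(D0, tau, gamma=2.0):
    # High tau -> "thick" time -> slower diffusion;
    # low tau -> loose medium -> diffusion near its base rate D0.
    return D0 / (1.0 + gamma * tau)

tau = np.array([0.0, 0.5, 2.0])
D_eff = effective_diffusion(0.16, tau)  # monotonically decreasing in tau
```

Any smooth, monotone decreasing map would do the same conceptual job; the point is only that the diffusion operator now reads its rate from the local τ field.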
2) τ evolves based on activity (and optionally a resource)
τ isn’t imposed. It learns.
In practice, τ increases where the chemistry is strongly active and/or where sharp boundaries exist, and relaxes back toward a baseline where nothing is happening. In later versions we also include a simple resource-like field that the system can “spend” to sustain activity.
The important point is not the exact equation — it’s the feedback loop:
activity builds τ → τ stabilises activity → stabilised activity persists long enough to become structure
That feedback changes the attractor landscape.
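As a sketch of that loop, assume a simple "grow with activity, relax toward baseline, spread smoothly" rule for τ. All constants here are illustrative, not the repo's:

```python
import numpy as np

def laplacian(f):
    # Five-point stencil with periodic boundaries.
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def tau_step(tau, activity, tau0=0.0, alpha=0.1, beta=0.02,
             Dtau=0.05, dt=1.0):
    # alpha: how fast activity "thickens" time
    # beta:  relaxation back toward the baseline tau0
    # Dtau:  smoothing so tau varies gently in space
    return tau + dt * (alpha * activity - beta * (tau - tau0)
                       + Dtau * laplacian(tau))

# A patch of sustained activity builds a persistent tau "scar".
n = 32
tau = np.zeros((n, n))
activity = np.zeros((n, n))
activity[12:20, 12:20] = 1.0
for _ in range(200):
    tau = tau_step(tau, activity)
```

In the active patch τ climbs toward alpha/beta and stays there; outside, it relaxes back to baseline — which is the "memory" half of the feedback loop in one line of arithmetic.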
A note on “nutrient” / resource fields
At first I was cautious about adding any “resource” term, because it can feel like cheating: aren’t we trying to get life-like behaviour from physics, not sneak in biology?
But in this model a resource field (when used) is not magic. It’s just another scalar field that:
diffuses
is consumed where activity is high
is replenished slowly from the background
It’s not “food.” It’s closer to “local capacity to keep transforming.”
And importantly: you can run the system without it — the interesting part is that τ already functions as a memory-medium, and the resource layer simply makes the ecology richer.
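A minimal version of such a resource field — diffuse, consume where activity is high, refill slowly from the background — could look like this (again a sketch; the repo may use a different form and different constants):

```python
import numpy as np

def laplacian(f):
    # Five-point stencil with periodic boundaries.
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def resource_step(r, activity, r0=1.0, Dr=0.1, consume=0.2,
                  refill=0.01, dt=1.0):
    # Diffuses at Dr, is spent in proportion to local activity,
    # and is slowly replenished toward the background level r0.
    return r + dt * (Dr * laplacian(r) - consume * activity * r
                     + refill * (r0 - r))

# Activity in a patch carves a depleted well into the resource field.
n = 32
r = np.ones((n, n))
activity = np.zeros((n, n))
activity[12:20, 12:20] = 1.0
for _ in range(200):
    r = resource_step(r, activity)
```

Note there is nothing biological in those three terms; it is literally just a third scalar field coupled to the same grid.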
What counts as “proto-life” here?
The danger with these models is obvious: you can project life onto any pretty pattern.
So the project has steadily shifted toward measurable, mechanical criteria — things you can compute without poetry.
Version 7 is built around two ideas:
(A) Proto-life is about persistence + recurrence, not just pattern
A stable blob that freezes forever is not “alive-like.”
A pattern that constantly melts into noise is not “alive-like.”
The sweet spot is something like:
a bounded body that persists
with internal dynamics that recur (or cycle)
often with a kind of “breathing” / oscillation
(B) We need diagnostics that don’t bias the outcome
Earlier we used FFT-style frequency detection. It works, but it has a subtle failure mode: the search can "lock" onto whatever the scoring window rewards, so the behaviour you find reflects the measurement as much as the system.
So v7 switched to autocorrelation (ACF) as the main recurrence diagnostic:
Does the system return to itself at a characteristic lag?
And we measure that on signals that actually reflect structure (like spatial variance or high-percentile probes), not just the mean (which often washes out the real dynamics).
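A bare-bones version of that diagnostic: take a structure-sensitive time series, compute its normalised autocorrelation, and score the best correlation at any nontrivial lag. The min_lag cutoff and the test signal below are illustrative choices, not v7's exact settings:

```python
import numpy as np

def acf_recurrence(signal, min_lag=5):
    # Normalised autocorrelation of the zero-mean signal.
    # Score = best correlation at lag >= min_lag:
    # "does the system return to itself, and at what lag?"
    x = signal - signal.mean()
    n = len(x)
    denom = np.dot(x, x)
    if denom == 0:
        return 0.0, 0  # flat signal: no recurrence to speak of
    acf = np.array([np.dot(x[:n - L], x[L:]) / denom
                    for L in range(n // 2)])
    best = min_lag + int(np.argmax(acf[min_lag:]))
    return float(acf[best]), best

# A clean oscillation scores high, with the lag recovering its period.
t = np.arange(400)
signal = np.sin(2 * np.pi * t / 25)
score, lag = acf_recurrence(signal)  # lag should come out at 25
```

In practice the signal fed in would be something like per-frame spatial variance rather than the raw mean, for exactly the reason given above: the mean washes out the dynamics of bounded bodies.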
The experimental workflow (what v7 actually does)
This is the part that made everything click:
Search parameter space using BO/QD-style exploration
Score each run using recurrence diagnostics (ACF) + stability gates
Replay the best runs deterministically
Save dense snapshots every N steps
Render GIFs so we can see dynamics, not just infer them
That replay step matters more than I expected. Mid/final images are often misleading; GIFs reveal what kind of attractor you really have.
What emerges: a small taxonomy of attractor classes
Once we had recurrence scoring + replay, the system stopped feeling like “random pretty pictures” and started looking like a phase space with repeatable classes.
Here are the main ones we’re seeing now.
1) Pulsating rings (“breathing compartments”)
A thick ring forms and persists: a bounded structure with a sharp inside/outside boundary.
In replay, it often breathes — sharpening and relaxing in a cycle — while staying recognisably the same object.
Why this is special:
it’s localised (not a whole-domain pattern)
it maintains a boundary
it exhibits recurrence (a kind of limit cycle)
This is one of the strongest “proto-compartment” signatures we’ve found so far.
2) Multi-compartment regimes (“ecologies”)
Instead of one body, you get several persistent bodies that coexist. Sometimes they appear synchronised, sometimes coupled, sometimes only loosely interacting through the medium.
Even if nothing “biological” is happening, this is already a major qualitative step:
the attractor is no longer “one structure,” but a stable population of structures.
3) Gaussian packets / droplet fields (“few compartments”)
These look like multiple rounded bodies — packets — often more mobile and less rigid than rings.
They’re interesting because they occupy a middle ground:
not global stripes, not one perfect compartment, but a small number of persistent bodies.
These runs start to hint at the kinds of dynamics you’d want for richer “proto-life” stories later: drift, interaction, occasional merging or rearrangement.
4) Stripes (global ordering mode)
Stripes are less exciting emotionally, but scientifically they’re important: they’re a stable attractor family that can score well on recurrence, simply because the whole domain participates.
Stripes are the reminder that:
“oscillation score” alone is not enough — you need morphology.
Morphology descriptors: measuring “body-ness”
To avoid rewarding global patterns (like stripes) the same way we reward bounded objects (like rings), v7 adds simple morphology descriptors computed from a snapshot:
area fraction: how much of the domain is “structured”
number of components: how many distinct bodies exist
compactness / boundary complexity: a coarse proxy for “blob vs filament vs stripe”
These are deliberately simple. They’re not the final word.
But they’re enough to start turning the search into something meaningful:
not just “find the biggest score,” but “map the space of behavioural classes.”
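A sketch of such descriptors from a single snapshot, using a threshold mask, a 4-connected component count, and a coarse compactness proxy. The threshold and the exact formulas are assumptions for illustration:

```python
import numpy as np

def morphology(field, thresh=0.5):
    mask = field > thresh
    area_fraction = mask.mean()          # how much of the domain is "structured"

    # Count 4-connected components via iterative flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    n_bodies = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        n_bodies += 1
        stack = [(i, j)]
        labels[i, j] = n_bodies
        while stack:
            a, b = stack.pop()
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                        and mask[na, nb] and not labels[na, nb]):
                    labels[na, nb] = n_bodies
                    stack.append((na, nb))

    # Boundary cells: mask cells with at least one off-mask 4-neighbour.
    pad = np.pad(mask, 1)
    interior = (pad[2:, 1:-1] & pad[:-2, 1:-1]
                & pad[1:-1, 2:] & pad[1:-1, :-2])
    perimeter = int((mask & ~interior).sum())
    area = int(mask.sum())
    # 4*pi*A/P^2 style compactness: ~high for blobs, low for filaments/stripes.
    compactness = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return area_fraction, n_bodies, compactness

# Two separate square blobs on a 32x32 grid.
snap = np.zeros((32, 32))
snap[4:10, 4:10] = 1.0
snap[20:28, 20:28] = 1.0
frac, n_bodies, comp = morphology(snap)
```

Even descriptors this crude are enough to separate "two bounded bodies" from "one global stripe pattern" in a search loop, which is all they need to do at this stage.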
Why this matters
There are two levels to this.
Locally: a proof of concept
You can take:
a standard reaction–diffusion system
add a single memory-bearing field τ
and you get a zoo of stable spatiotemporal attractors
Not just static patterns — but objects that can persist, oscillate, and recur.
Conceptually: a different picture of “stuff”
I’m not claiming this replaces the Standard Model. That’s not the point.
The point is that this model is a concrete sandbox where a medium with memory generates persistent structured entities — and you can search and classify those entities systematically.
It suggests a different lens:
matter as the “appearance” of stable spatiotemporal attractors
life as a phase of systems that can maintain bounded identity while cycling internally
failure modes (fossilisation, melting) as neighbouring attractor regimes
Even if you don’t buy any of the big metaphysical implications, this is still valuable: it’s a working laboratory for exploring how memory-bearing media reorganise their attractor landscapes.
What we’re doing next
The next steps are straightforward and exciting:
improve descriptors (ringness, stripe anisotropy, filament measures)
increase seed diversity (initial conditions that explore more basins)
search not only for “high recurrence,” but for rare behavioural classes
test robustness (gentle parameter/environment shifts) on the best candidates
In other words: less “beautiful patterns,” more “stable, adaptive attractors with identity.”