quiet-spread
- richer than early profiles
- leaky balance
- early/two-room stable
A tiny watchable world where residents point at regions of a shared 384-dimensional semantic space instead of speaking in text.
MOO-384 is a tiny, watchable multi-agent world whose residents do not speak in text but gesture with vectors in a shared 384-dimensional semantic space, forming rooms, traces, and attractors that humans can observe through subtitles and maps.
This is a browser replay of compact data exported from a real log. The simulation is not running in the browser.
MOO-384 is a small local research artifact for studying vector-space interaction between simple residents.
Tildespace is the watchable world and interface around MOO-384 logs. It is a research notebook and a terminal exhibit — not a product, not a web app, and not a game. It is also not consciousness, culture, or emergent language.
Residents in MOO-384 do not send sentences to each other. They emit 384-dimensional vectors. Other residents react to those vectors geometrically — by averaging, by fleeing, by sitting between two of them, or by sleeping and summarising what they recently observed.
Human-readable phrases do appear in the logs and the viewer above. Those phrases are observer-side subtitles: the atlas phrase whose embedding has the highest cosine similarity to a resident's most recent vector. Subtitles are not resident speech. The residents themselves never receive a phrase.
how an emission propagates:
resident internal state → 384D vector gesture → other residents react geometrically → human sees approximate subtitle
not: resident writes sentence → other residents read sentence
instead: resident emits coordinate → other residents respond to coordinate
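The observer-side subtitle step can be sketched in a few lines. This is an illustration, not the project's code: the atlas phrases and their embeddings below are random placeholders standing in for real MiniLM embeddings, and subtitle selection is exactly what the text describes, the atlas phrase with the highest cosine similarity to the resident's most recent gesture.

```python
import math
import random

DIM = 384  # dimensionality of the shared embedding space

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def random_unit_vector(rng):
    # Gaussian components then normalisation: a uniform point on the unit sphere
    return unit([rng.gauss(0.0, 1.0) for _ in range(DIM)])

def cosine(a, b):
    # plain dot product, since both arguments are unit vectors
    return sum(x * y for x, y in zip(a, b))

def subtitle(gesture, atlas):
    # observer-side lookup: the atlas phrase whose embedding is nearest the
    # resident's most recent gesture; the resident itself never receives it
    return max(atlas, key=lambda phrase: cosine(gesture, atlas[phrase]))

# hypothetical atlas: in the real pipeline these would be MiniLM phrase embeddings
rng = random.Random(1234)
atlas = {p: random_unit_vector(rng) for p in ("drift", "hearth", "threshold")}
print(subtitle(random_unit_vector(rng), atlas))
```

The resident-side story stays purely geometric; only the final `subtitle` call is for human eyes.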
384 is the dimensionality of the shared sentence-embedding space. The intended semantic substrate is sentence-transformers/all-MiniLM-L6-v2, which produces 384-dimensional unit vectors for short phrases.
The project also runs a synthetic substrate (random 384-dimensional unit vectors with the same noise schedule) so semantic and synthetic regimes can be compared directly. The synthetic mode is not a contrived control — it has been load-bearing for several findings about which dynamics depend on the semantic manifold and which do not.
This is not a claim that semantic embeddings are magic. The project's results often turn on geometry (how far apart the home vectors of distinct residents are) more than on which model produced the embeddings.
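The geometric point can be made concrete without any embedding model at all. A quick check, assuming nothing beyond the 384-dimensional unit sphere itself: independently drawn random unit vectors are nearly orthogonal, so well-separated home vectors come for free in the synthetic substrate, with no semantic model involved.

```python
import math
import random

DIM = 384

def random_unit_vector(rng):
    v = [rng.gauss(0.0, 1.0) for _ in range(DIM)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Cosine similarity between independent random unit vectors concentrates
# around 0 with spread on the order of 1/sqrt(DIM), roughly 0.05 here,
# so distinct home vectors start out far apart by construction.
rng = random.Random(1234)
sims = [cosine(random_unit_vector(rng), random_unit_vector(rng))
        for _ in range(100)]
print(max(abs(s) for s in sims))
```

This is why synthetic runs can share findings with semantic runs: any dynamic that depends only on home-vector separation survives the swap of substrate.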
Can simple residents interacting only through vector-space gestures form stable, inspectable, watchable room/attractor dynamics?
Two contrarians, each paired against a sleeper. Contrarians flip away from their paired sleeper's most recent gesture; sleepers wake periodically and summarise the recent activity of everyone they have observed. Together they produce recurring motion around a small ring of rooms.
This is the first profile where the substrate looked alive, not merely stable.
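The two roles above can be sketched as a pair of update rules. This is a minimal reading of the description, not the simulator's actual update: the `push` strength and the normalised-mean summary are both assumptions made for illustration.

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def contrarian_gesture(home, sleeper_gesture, push=1.0):
    # "flip away": step from the home vector in the direction opposite the
    # paired sleeper's most recent gesture, then return to the unit sphere
    # (the push strength is a guess, not a documented parameter)
    return unit([h - push * s for h, s in zip(home, sleeper_gesture)])

def sleeper_summary(observed_gestures):
    # "wake and summarise": normalised mean of recently observed gestures
    mean = [sum(col) / len(observed_gestures)
            for col in zip(*observed_gestures)]
    return unit(mean)
```

In 2 dimensions for readability: a contrarian with home (0, 1) reacting to a sleeper gesture (1, 0) moves to unit((-1, 1)), the direction facing away from the sleeper, which is the kind of repeated push-and-summarise cycle that can settle into motion around a ring of rooms.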
Listing failures here is part of what makes the project credible. Each was a real iteration with a written decision record under docs/decisions/:
Raw runs are not committed because the per-seed TSV logs run to multiple megabytes; only the compact replay JSON used by the viewer above (about 425 KB) ships with this site. The simulator and atlas cache are reproducible from the repo.
python visual_bundle.py
python cavern_viewer.py \
runs/exp-v1.6-contrarian-sleeper-long-soak/semantic-seed-1234/log.tsv \
--speed 10 --trail 8
python visual_bundle.py --site-data
Writes docs/site/assets/cavern-frames.json and docs/site/assets/cavern-summary.json from the v1.6 sample log.
Regenerate the long-soak experiment (a few minutes on commodity hardware):
python moo384.py --experiment --conditions synthetic semantic \
--runs 20 --ticks 20000 --delay 0 --seed 1234 \
--profile contrarian-sleeper \
--out runs/exp-v1.6-contrarian-sleeper-long-soak