Teleoforming (n.): Information‑first shaping of matter at a distance by steering ambient energy through staged attractors.
TL;DR: We already made rocks think; that proves information can hijack matter/energy flows. Treat civilization as an information ecology—process patterns steering substrates—and colonization becomes an info‑first control problem. Export control patterns by structured broadcasts; local matter and sunlight do the lifting.
Disclosure: This is a concept, not gospel. It's probably bullshit, but of the fun variety—think science fiction, not doctoral dissertation. Any claim without a bill of materials, an energy budget, and a falsifiable metric is just there to make you think. I am curious by nature and I do get obsessive about diving down every single rabbit hole I come across; it's a problem.
Content warning for the lateral thinkers: Lots of fancy symbols and letters and stuff. It's not necessary to understand them to get the vibe, so feel free to just gloss over them, but I've included them because otherwise I'd surely get eaten alive. I still might be, it's the circle of life 🦁.
Content warning for the linear thinkers: There is a lot of handwaving, just because otherwise this piece would not come out at all. Too many things. So dig out and put on all the PPE you have and proceed with caution.
Premise: the cobweb runs the spider #
We talk about human civilization like humans are the main event. But feral children are the counterexample.
Consider what happens when a child grows up without language, tools, or institutions. Same biological hardware—same brain, same body, same potential—but wildly different behavioral and cognitive software. Without the information patterns that organize human society, the child develops along entirely different trajectories. They don't learn to reason abstractly, don't develop complex social skills, don't create or use tools beyond the most basic level.
It's easy to say this is just about missing education, but what does that mean exactly? I think it's about missing the fundamental operating system that makes humans capable of civilization in the first place. Language is easy to take for granted, but in addition to it being the medium of communication, it's also the substrate for most abstract thought. Tools aren't just physical objects—they're embodied knowledge that extends what we can do. Institutions aren't just social structures—they're the protocols that coordinate complex behavior across large groups.
Taking into account all of the above, the leverage point isn't shipping bodies; it's exporting control patterns that fuse with whatever substrate they land on.
If you can change what a human is capable of by changing their information environment, then the same principle should apply to any system that can process information patterns. The question becomes: what happens when you introduce new protocols, tools, and organizational structures to a different substrate?
I'm going to use two terms within this piece:
- Teleoform (TF): an engineered info‑pattern that lands in an existing flow and seizes usable "handles" (sensors, protocols, incentives, catalysts) to steer outcomes. The process of deploying, activating and sustaining such a pattern is "teleoforming."
- Bio‑Informational Complex (BIC): the coupled unit of host + information behaving like a single quasi‑organism once the pattern is embodied. (Natural examples: religions, scientific communities, fan cultures, military doctrines; engineered examples: propaganda.)
The goal of a Teleoform is to latch onto existing energy flows and ultimately instantiate a mutualist BIC—one that bends flows toward stated objectives while improving host well‑being.
Two lenses, one process: pattern realism #
Before we proceed, let's align on some basic assumptions. In my head, reality is one structured process, but I can describe it using two complementary vocabularies. When I care about energy, latency, and mechanisms, I use the physical lens—thinking in terms of dynamics, forces, and conserved quantities. When I care about structure, function, and meaning, I switch to the informational lens—focusing on states, patterns, and distinguishability. These are descriptive choices about the same system, not claims about two separate kinds of stuff.
Information itself operates at three levels. Fundamental patterns form the basic alphabet—stable modes and components that persist reliably. Organizational information describes how these patterns arrange and interact, transforming random carbon into diamonds, proteins, or circuits through specific structural relationships. Semantic information emerges when patterns encounter context and interpreters—DNA becomes meaningful in a cell, code becomes functional on a CPU, and norms become operative within institutions. Meaning is functional, not mystical—riding entirely on the presence of active decoders.
The key insight concerns stability. Simple systems achieve static stability through persistence—they endure by staying the same. But complex information systems succeed through dynamic stability: they preserve their identity by changing configuration quickly and effectively. This adaptive efficiency relies on several mechanisms: structural recruitment organizes nearby materials, template effects enable copying and shaping of neighbors, boundary stabilization uses interfaces to constrain entropy, and cascading order allows local victories to scaffold higher-order organization.
This framework matters directly for our purposes. Teleoforming aims to trigger exactly this kind of dynamic stability—recruiting local structure, establishing helpful boundaries, and templating higher-order organization until a mutualist Bio-Informational Complex locks into place. The two-lens approach keeps us honest about both energy costs and latency constraints while tracking functional meaning and organizational effects.
The framework operates under specific constraints and commitments. Meaning only appears when a decoder exists and actively performs work—no decoder means no semantics, only bare structure. Useful work comes from local exergy (like temperature gradients that can drive heat engines, chemical disequilibria that power batteries, or pressure differences that turn turbines); bits only steer it.
In plain language: $W_{\text{remote}}$ is the useful work you can cause from afar; it is limited by the local free energy you can actually couple into. $\eta_{\text{chain}}$ is the real efficiency of your whole conversion chain (policies → sensors → transducers → actuators) given the information you can use ($I$) and initial hardware/material state ($S_0$). $B_{\text{exergy}}$ is the accessible exergy in that place over time horizon $\tau$ — the maximum useful work available to your specific path. The point of the bound is simple: information can steer energy, but it cannot create it.

$$W_{\text{remote}} \;\le\; \eta_{\text{chain}}(I, S_0)\, B_{\text{exergy}}(x, \tau),$$

where $\eta_{\text{chain}}(I, S_0) \in [0, 1]$ and $B_{\text{exergy}}(x, \tau)$ is the maximum useful work available to your chain relative to the local environment over horizon $\tau$—the accessible exergy for your specific conversion path. For clarity and consistency, decompose chain efficiency as $\eta_{\text{capture}} = E_{\text{cap}}/B_{\text{exergy}}$ and $\eta_{\text{convert}} = W_{\text{useful}}/E_{\text{cap}}$, so $\eta_{\text{chain}} = \eta_{\text{capture}}\,\eta_{\text{convert}}$. Remote channels add information, not exergy.
Landauer's bound of $k_B T \ln 2$ per erased bit applies to logically irreversible operations (notably memory erasure). At macroscales, exergy is the bottleneck; information is the steering wheel.
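To make the bookkeeping concrete, here's a minimal sketch (Python, with made-up numbers) of using the bound as an audit: a claim of remotely caused work either fits inside the local exergy budget or it doesn't.

```python
# Audit a claim of remotely caused useful work against the bound
# W_remote <= eta_chain * B_exergy. All numbers are illustrative placeholders.

def within_remote_work_bound(claimed_useful_work_kj: float,
                             eta_chain: float,
                             accessible_exergy_kj: float) -> bool:
    """True if the claimed remotely-caused work fits inside the local exergy budget."""
    return claimed_useful_work_kj <= eta_chain * accessible_exergy_kj

# A hypothetical site offers 500 kJ of accessible exergy over the horizon, and the
# whole sense -> convert -> act chain runs at 6% efficiency.
print(within_remote_work_bound(25.0, eta_chain=0.06, accessible_exergy_kj=500.0))  # True
print(within_remote_work_bound(40.0, eta_chain=0.06, accessible_exergy_kj=500.0))  # False: bits can't add joules
```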
Consider a simple example: a thermostat. Through the physical lens, I see heat flow and thermal conductivity. Through the informational lens, I track thresholds, states, and error-correction processes. Without a controller, there's no semantics—just structural relationships. With an active controller, bits steer joules within strict energy bounds.
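Here's a toy version of that thermostat loop, just to make "bits choose when joules flow" literal; the setpoint, gains, and plant model are invented for the example.

```python
# Toy thermostat: the controller (decoder) reads a temperature state and decides
# whether to gate heater power. Bits choose WHEN joules flow; they do not supply them.

def thermostat_step(temp_c: float, setpoint_c: float, hysteresis_c: float,
                    heater_on: bool) -> bool:
    """Bang-bang control with hysteresis: returns the new heater state."""
    if temp_c < setpoint_c - hysteresis_c:
        return True            # too cold -> switch heating on
    if temp_c > setpoint_c + hysteresis_c:
        return False           # too warm -> switch heating off
    return heater_on           # inside the deadband -> keep the current state

# Crude plant model: the room loses heat to a 10 °C exterior and gains heat when the heater runs.
temp, heater, joules_spent = 15.0, False, 0.0
for _ in range(600):                                    # 600 one-second steps
    heater = thermostat_step(temp, setpoint_c=21.0, hysteresis_c=0.5, heater_on=heater)
    power_w = 1000.0 if heater else 0.0                 # energy comes from the supply, not from the bits
    joules_spent += power_w * 1.0
    temp += (power_w * 0.0005) - (temp - 10.0) * 0.002  # toy heating/cooling dynamics

print(f"final temp {temp:.1f} °C, energy used {joules_spent / 1000:.0f} kJ")
```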
To make things a bit more concrete, let's add three testable predictions.
First, Decoder Causality (double ablation): disable/remove the decoder (the thing that reads and acts on information) or scramble the morphocode while holding power and actuation constant. Directional influence and the Agency Index (a causal‑uplift + efficiency composite) should drop to baseline under both manipulations; raw power flow remains similar. Measure with randomized interventions and report effect sizes and confidence intervals.
Second, Remote Work Bound (energy accounting): the useful work caused from afar obeys $W_{\text{remote}} \le \eta_{\text{chain}}\, B_{\text{exergy}}$ with $\eta_{\text{chain}} = \eta_{\text{capture}}\,\eta_{\text{convert}}$. Test via exergy accounting of local gradients and calorimetry/power logs across the sense→convert→act chain, including I/O and transport losses. Use Landauer's $k_B T \ln 2$ as a sub‑check only when logically irreversible erasures occur; at macro scales, transport and precision margins usually dominate.
Third, Dynamic Stability / Repairability: restoring nominal performance after a standardized perturbation should require only a bounded repair energy $E_{\text{repair}}$—a small, predeclared fraction of the local exergy budget—with recovery time $\tau_{\text{rec}}$ under a declared bound. Inject bounded noise/perturbations, verify stable loop gain and limited overshoot, and show the Agency Index returns to target with only repair energy applied.
Control instead of cargo #
Information can't push atoms by itself. It needs two things: a mechanism that converts a pattern into action—call it a decoder, interpreter, or controller—and free energy (to pay for copying, erasing, and moving stuff). The substrate doesn't matter: it could be a thermostat controller, an enzyme active site, a market rule, or a physical ratchet. Without both, you're just whispering sweet nothings into the big, vast universe.
The opportunity is enormous because many environments are rich with energy gradients that remain completely untapped—geothermal vents, solar flux, chemical disequilibria, pressure differentials and concentration gradients. These gradients represent massive flows of free energy just waiting to do semantically productive work. A well-designed seed doesn't need to bring its own power plant; it just needs to know how to build a ratchet that captures what's already flowing.
Concrete example: Solar Foods splits water with electricity, feeds the hydrogen to hydrogen-oxidizing microbes, and those microbes fix CO₂ into protein. That's an engineered policy-driven decoder latching onto ambient gradients—no plant-in-a-box, just protocols that make an existing metabolism do useful work.
Photolithography works the same way: shine light → thing do; no shine → no do. The light pattern carries information that steers chemical reactions in photoresist, creating precise structures without moving atoms directly. Information controls matter flows through existing energy gradients.
Think about this in system terms. Any system has a current state, evolves over time, and faces disturbances. A Teleoform acts like a control policy—it senses the system, processes that information, and takes actions to push toward desired outcomes. The main thing is that steering requires both information processing and physical work—no decoder means no interpretation, no joules means no action.
This gives us five reliable levers for bending flows. Boundary conditions control what can enter or exit a system—think cell membranes, osmotic barriers, or network protocols. Catalysis and kinetics lower barriers to specific transformations—like enzymes speeding reactions, technical standards reducing friction, or chemical catalysts accelerating specific pathways. Gain and feedback let you tune how strongly and quickly a system responds—hormones amplifying biological signals, markets pricing information, or ranking systems shaping behavior. Topology determines who talks to whom—restructuring supply chains, designing APIs, or rewiring food webs. Finally, phase triggers can nudge systems into entirely new regimes—think nucleation events that start crystallization, or threshold crossings that trigger phase transitions.
How to tell you're steering: an Agency Index #
An information pattern is agentic if its instantiation changes the distribution of its own future copies and outcomes in the predicted direction.
To tell whether a system actually has agency, we need three core measurements that work together:
Directional Influence asks: when we randomly pulse the seed's control channel, how much more often does the system hit the target compared with when we don't pulse it? This is essentially running a controlled experiment—turn the pattern on and off randomly, then measure whether "on" consistently beats "off" at achieving the stated goal. More precisely, we're measuring the average treatment effect: $\mathrm{DI} = \mathbb{E}[Y \mid \text{pulse}] - \mathbb{E}[Y \mid \text{no pulse}]$, where $Y$ is a target‑alignment score you define ahead of time (like normalized task reward or percentage of in‑spec output). Report this with effect size and 95% confidence intervals from randomized, preregistered interventions.
Chain Efficiency measures whole‑path efficiency from local accessible exergy to useful work. The chain‑level definition is $\eta_{\text{chain}} = W_{\text{useful}}/B_{\text{exergy}}$ over time horizon $\tau$. For diagnostics, decompose as capture and conversion: $\eta_{\text{capture}} = E_{\text{cap}}/B_{\text{exergy}}$ and $\eta_{\text{convert}} = W_{\text{useful}}/E_{\text{cap}}$, where $E_{\text{cap}}$ is exergy actually captured from local gradients (read from power meters, calorimetry, flow logs) and $W_{\text{useful}}$ is useful work attributable to the seed in domain‑specific units (kWh delivered, kg of product, percentage of in‑spec items).
Agency Index combines both measurements into a single score that is high only when the seed both steers outcomes toward the goal and uses energy efficiently. Use a signed diagnostic, $\mathrm{AI}_{\pm} = \mathrm{DI}\cdot\eta_{\text{chain}}$, and a non‑negative gating score, $\mathrm{AI} = \max(0, \mathrm{DI})\cdot\eta_{\text{chain}}$.
A system that hits targets but wastes energy gets a low score. A system that's energy‑efficient but doesn't actually steer toward goals also gets a low score. Only systems that do both well score high. Calibrate this with simple reference systems like thermostats or conveyor sorters to anchor interpretation. Keep logic dissipation separate—report it only when irreversible operations can actually be counted.
| Term | What to measure | Instruments/logs | Useful work examples (domain) |
|---|---|---|---|
| Outcome‑alignment score $Y$ | Predefined target score per episode/run | Task logs; evaluation scripts; prereg plan | Manufacturing: % in‑spec parts; Energy: kWh to target sink; Bio: kg of product; Social: % actions matching policy |
| DI | ATE on $Y$ (randomized perturbations) + 95% CI | Randomizer seeds; assignment logs; intervention registry; analysis notebook | Any domain using $Y$ above |
| $E_{\text{cap}}$ | Exergy captured over $\tau$ | Power meters; calorimetry; flow/$\nabla P$; redox/voltage logs | kJ from thermal/chemical/solar gradients |
| $W_{\text{useful}}$ | Domain‑specific useful work over $\tau$ attributable to seed | Output meters tied to target; attribution via on/off or double‑ablation | Net kWh delivered; kg product; meters sorted; task reward |
| $\eta_{\text{chain}}$ | Derived from above | — | Dimensionless, [0, 1] |
| AI | Derived from DI and $\eta_{\text{chain}}$ | — | Dimensionless, [0, 1] |
Replication gate (sanity): scale only if $\mathrm{AI} \ge \mathrm{AI}_{\min}$ (a predeclared threshold) and safety checks pass; otherwise revise or roll back.
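A minimal sketch of how DI, chain efficiency, the Agency Index, and the replication gate could be computed from episode logs—the data layout, synthetic numbers, and threshold are assumptions for illustration, not the canonical pipeline.

```python
import random
import statistics

# Each episode logs: whether the control channel was pulsed, the outcome-alignment
# score Y in [0, 1], and the useful work delivered (kJ). Layout is an assumption.
def agency_metrics(episodes, accessible_exergy_kj):
    on  = [e["Y"] for e in episodes if e["pulsed"]]
    off = [e["Y"] for e in episodes if not e["pulsed"]]
    di = statistics.mean(on) - statistics.mean(off)       # average treatment effect on Y

    useful = sum(e["useful_work_kj"] for e in episodes)
    eta_chain = useful / accessible_exergy_kj              # whole-path efficiency over the horizon

    return {"DI": di,
            "eta_chain": eta_chain,
            "AI_signed": di * eta_chain,                    # signed diagnostic
            "AI": max(0.0, di) * eta_chain}                 # non-negative gating score

# Synthetic demo data: pulsing the channel nudges outcomes upward.
random.seed(0)
episodes = [{"pulsed": p,
             "Y": min(1.0, max(0.0, random.gauss(0.55 + 0.1 * p, 0.05))),
             "useful_work_kj": 2.0}
            for p in [0, 1] * 50]

metrics = agency_metrics(episodes, accessible_exergy_kj=2000.0)
print(metrics)
# Replication gate: scale only if AI clears a predeclared threshold and safety checks pass.
print("scale ok" if metrics["AI"] >= 0.005 else "revise or roll back")
```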
ACAP: Agency & Complexity (five‑dimensional score + engine threshold) #
To further keep "agency" from turning mystical, pair the Agency Index with a five-dimensional capability profile (0–125 total, 0–25 each). Semantic Processing Depth measures the depth and efficiency of meaning extraction and modeling. Inside-Out Lens tracks self/world-model sophistication and temporal integration. Autonomy & Adaptability captures independence from stimuli, learning flexibility, and meta-learning capabilities. Matter/Energy Organization assesses the scope of resource leverage and physical influence. Finally, Higher-Order System Interaction evaluates symbolic and cultural system creation and use.
The engine threshold is crucial. Organizational templates—pure information systems—can score high on structure but lack autonomous goal formation. Only treat a system as an agent in the common understanding once it couples to an engine: closing an autopoietic (self-creating and self-maintaining) feedback loop that enables goal formation and self-modification. This means assessing Bio-Informational Complexes as composites (host + information fused), not as separate parts.
This framework helps in specific ways. When working with existing decoders (hijacking sensors, actuators, or control systems), ACAP tracks the host-seed composite as it moves from Exposure to Lock-In. Expect Matter/Energy Organization and Higher-Order System scores to rise with physical coupling, while Autonomy stays constrained by guardrails, and any Semantic Processing or Inside-Out Lens gains mark real control rather than mere performance theater. When creating decoders from scratch in environments with no existing interpreters, no semantic points accumulate until the thermodynamic bridge plus autocatalytic set (a network of reactions that catalyze their own production) cross the engine threshold—before that, it's organizational agency only.
The mutualist gating rule ties everything together: scale replication only if the Agency Index meets declared thresholds under interventions, the ACAP profile improves along declared axes, and well-being proxies pass the mutualist gate with reversibility.
Control, not factories #
Not a factory, on purpose. Classic von Neumann probe: factory-first, open-loop, replicate immediately. Teleoform: policy‑first, closed‑loop (sense → model → act → measure).
In practice, policy‑first means constraint‑based, distributed control: the global policy compiles into many simple local controllers running near actuators, each closing its own loop under shared constraints—not a single remote brain.
You're not shipping a plant-in-a-box; you're shipping a policy that latches onto available handles. Before anyone asks: yes, we're aiming to functionally "create DNA remotely"—not by beaming atoms, but by instantiating a local information-bearing decoder that can store, transform, and replicate patterns. If local chemistry supports templating polymers, target those; if not, target alternative substrates like electrochemical or photonic logic. Beamed light/radio supplies instructions, not matter; fine-grained fabrication only appears after the seed builds local tools.
Broadcast‑Boot Architecture #
When no decoders exist, you can still steer dissipative structure with weak, structured broadcasts. Bits are a clock, bias, and recipe, with local gradients left to pay the energy bill. You're essentially nudging dirt to organize itself into simple machines using only sunlight, temperature swings, and radio timing signals. Start with common minerals that naturally align or change resistance. Use daily heating/cooling cycles to grow basic mechanical parts. Assemble those parts into crude actuators that can crawl toward resources. Finally, program seasonal schedules so the system builds complexity over months while staying within safe bounds. No magic—just patient chemistry guided by celestial clockwork over a long time.
Material ladder shorthand: chem → mech → logic (harvest gradients → build tools → compile controllers).
Constraints #
- Info‑only control: we send timing signals (like a metronome); local nature does the work: day–night temperature swings, chemical imbalances, capillary flow, sublimation, wind, tides, gravity.
- Spatial selectivity: we “aim” using what already exists—latitude/season bands, terrain shadows, frost lines, shorelines. No fine imaging or high‑power beams.
- Universal parts list: use common stuff found in many places: iron oxides (magnetite, hematite), clays (montmorillonite, illite), quartz, perchlorates and sulfates, CO₂/H₂O ices; organics if present. Optional: ilmenite, pyrite, TiO₂, gypsum/thenardite.
- RF policy: no radio control until simple circuits exist; after that, radio is just a low‑power clock/bias signal—not a power source.
Phase A — Nucleate transducers from raw stuff #
Raw minerals → Aligned structures → Simple switches
In Phase A the broadcast is only a metronome and a nudge. It lets us harness local exergy by phase‑locking to the world’s own cycles (freeze–thaw, wet–dry, day–night) so the environment supplies the work while the signal chooses when and in which direction to push.
- Align and lock: open alignment windows when magnetic noise is low so magnetite grains line up; close them at freeze/dry endpoints to form salt‑bridge contacts that persist, then reopen on thaw/wet to anneal bad links.
- Gate deposition: when temperature crosses thresholds, allow tiny electrochemical paths so mixed metal–oxide junctions spend more time in their conductive state during peak ΔT; over many cycles this yields net deposition along magnetite guides.
- Rectify fluctuations: prefer one side of the cycle (warming over cooling, wetting over drying) so symmetric oscillations become one‑way organization—zero‑mean inputs turning into positive structure.
- Energy accounting: the signal carries information, not power. Heat, evaporation/condensation and redox gradients pay the energy bill; the broadcast only sets clock edges and thresholds.
Picture scattered magnetite grains in salty water. They want to align with the planet's magnetic field, like tiny compass needles floating in brine. When the water freezes and thaws, or evaporates and condenses, it creates salt bridges that lock these alignments in place—giving us our first conductive threads.
Meanwhile, clay particles are doing their own dance. They expand in moisture, shrink in sunlight, naturally forming tiny hinges and bending layers wherever slope and shade create the right conditions. These become our mechanical actuators.
The real breakthrough comes from mixed metal-oxide contacts—bits of iron meeting nickel oxides. Daily temperature swings make these junctions flip between high and low resistance states. No fancy electronics needed, just patient chemistry responding to the planet's own rhythms.
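To see the rectification trick in one screen of code, here's a toy simulation: a zero-mean thermal cycle plus a broadcast-timed conduction window yields net one-way growth, while the ungated control averages out to nothing. The constants and update rule are invented, not real mineral kinetics.

```python
import math

# Toy rectifier for Phase A: a zero-mean day-night temperature swing drives deposition
# while warm and dissolution while cold, but only when the electrochemical path is open.
# The broadcast's only contribution is choosing WHEN the path opens; without timing
# information the symmetric cycle averages out to roughly zero.

def grown_thickness(days: int, path_is_open) -> float:
    thickness = 0.0
    for step in range(days * 24):                              # hourly steps
        delta_t = math.sin(2 * math.pi * (step % 24) / 24)     # zero-mean thermal cycle
        if path_is_open(delta_t):
            thickness += 0.01 * delta_t                         # deposit while warm, dissolve while cold
            thickness = max(thickness, 0.0)                     # can't go below the bare contact
    return thickness

def always_open(delta_t):          # control arm: no timing information at all
    return True

def open_near_warm_peak(delta_t):  # broadcast-gated arm: conduct only near peak warmth
    return delta_t > 0.5

print("ungated growth:", round(grown_thickness(200, always_open), 3))
print("gated growth:  ", round(grown_thickness(200, open_near_warm_peak), 3))
```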
Phase B — Grow parts by slow electrochemistry and ratchets #
Simple switches → Guided currents → Mechanical parts
Now we have conductive threads and basic switches. Time to grow some moving parts. Light and shadow create timing signals that route tiny currents along our magnetite threads. These currents slowly deposit material—building ribs and hinges atom by atom, like the world's most patient 3D printer.
Salts with sharp phase transitions become our locks and ratchets. When they hit the right temperature, they snap into rigid configurations, preserving whatever shape we've grown. We use daily and seasonal timing windows, keeping duty cycles conservative to avoid burning out our delicate machinery.
If the dust is fine enough, we can even build solar sails. Dark-light patterns make particles drift in sunlight—a tiny radiation pressure engine. Periodic shade gathers these flakes at edges where capillary action glues them into larger panels.
Phase C — Assemble a gradient‑riding actuator #
Mechanical parts → Moving assemblies → Resource-seeking robots
Time to build something that can actually move around. We assemble our grown parts into simple creatures that ride environmental gradients like thermal and chemical surfers. We keep pulsing the same timing windows used to grow parts, but now they double as jig and welder: patterned currents and heat fuse ribs to hinges where guides meet, while daily micro‑motions (thermal expansion, capillary creep, breeze) shake weak joints loose. Phase‑change salts act as selective locks, setting only when alignment and contact area meet tolerance. Over many cycles, poor fits anneal or fracture, good linkages persist, a pawl‑and‑ratchet chain appears, and a minimal gait emerges. That's the hand‑off from growth to mechanism.
The thermal millipede is our star performer—a ribbed ribbon that bends when warm and straightens when cool. Uneven foot textures give it an inching motion, always crawling toward the damp, briny areas where the chemistry is richest. It only moves during preset time windows, like at dusk when temperature crosses the right threshold, and each one has a simple local controller to avoid overheating.
On cold worlds, we build CO₂ frost hoppers instead. A porous skirt traps frost overnight, then sunrise turns that frost into high-pressure jets. Dark-light patterns on the surface steer the hop direction—a crude but effective navigation system.
Near coastlines, capillary pump-wicks do the heavy lifting. Striped patterns pump solution upslope with each thermal cycle, feeding our deposition nodes with a steady stream of raw materials.
Phase D — Behavior programming (the morphocode) #
Resource-seeking robots → Coordinated behavior → Self-organizing system
Now we need to give our little robots some brains. Not artificial intelligence, but something more like genetic programming—a broadcast schedule that acts as their DNA.
This "morphocode" runs on two timescales. The slow seasonal track provides the overall rhythm, like a year-long symphony scored to match the planet's orbit. Faster duty-window cues ride on top, tied to daily temperature swings and celestial timing. The beauty is that local modules adapt with onboard feedback rather than following a rigid script—they have enough autonomy to handle the unexpected while staying within safe guardrails.
We advance phases only when detectable signals confirm each stage worked. These would be observable by in‑situ probes or inner‑system telescopes; exoplanet signatures would require planetary‑scale contrast changes. Before simple circuits exist, we address our robots by timing windows rather than location—everyone who hears the 3 PM signal knows it's time to move toward the light. Later, once we have basic electronics, we can use distinct radio bands to talk to specific subsystems.
The whole system follows planned waypoints: filament network grows into ribbed ribbon, which becomes inching millipede, which eventually turns into rolling tubule. If local chemistry throws us a curveball, we fall back to the nearest workable state and adapt from there.
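A sketch of what a two-timescale morphocode with gated waypoints might look like as data plus a tiny interpreter—stage names, duty windows, and confirmation signals are all invented placeholders.

```python
# Morphocode as data: a slow waypoint track plus daily duty windows, with phase
# advancement gated on confirmation signals. Names and numbers are placeholders.

MORPHOCODE = [
    {"stage": "filament_network",  "daily_window_h": (14, 16), "confirm": "resistance_drop"},
    {"stage": "ribbed_ribbon",     "daily_window_h": (13, 15), "confirm": "stiffness_rise"},
    {"stage": "inching_millipede", "daily_window_h": (17, 18), "confirm": "net_displacement"},
    {"stage": "rolling_tubule",    "daily_window_h": (12, 14), "confirm": "rolling_gait"},
]

def next_action(stage_idx: int, local_hour: int, signals: set) -> tuple:
    """Decide what to broadcast this hour and whether to advance the phase."""
    stage = MORPHOCODE[stage_idx]
    lo, hi = stage["daily_window_h"]

    if stage["confirm"] in signals and stage_idx + 1 < len(MORPHOCODE):
        return stage_idx + 1, "advance"           # waypoint confirmed -> next stage
    if stage["confirm"] not in signals and "off_nominal" in signals:
        return max(stage_idx - 1, 0), "fallback"  # curveball -> nearest workable state
    if lo <= local_hour < hi:
        return stage_idx, "pulse"                 # inside the duty window -> send timing tick
    return stage_idx, "idle"

# Example: during the ribbon stage at 14:00 local, with and without confirmation.
print(next_action(1, 14, signals=set()))                # -> (1, 'pulse')
print(next_action(1, 14, signals={"stiffness_rise"}))   # -> (2, 'advance')
```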
Putting it together — How modules interlock #
Specialized components → Coordinated assembly → Self-sustaining ecosystem
Think of this like building a tiny industrial ecosystem from scratch. Each component has a job, and they all work together in loops that keep the whole system running.
The energy harvesters are like solar panels and batteries—they grab day-night heat swings, chemical gradients, or raw sunlight and turn them into usable power. The scaffolds and buses made from magnetite threads and clay hinges act like the factory's conveyor belts and plumbing, guiding currents and fluids where they need to go. Our fabricators are slow but patient 3D printers that deposit ribs and joints along these pathways, building new parts atom by atom.
The movers—our millipedes, frost hoppers, and light-pushed sails—are the mobile workforce, crawling and hopping around to concentrate raw materials. Stores and relays like puddles, wicks, and charge pools act as warehouses and buffer tanks. Finally, sensors monitor everything through resistance thresholds and light patterns, making sure the system stays within safe limits.
Here's how it all fits together: Phase A lays the infrastructure (threads, hinges, basic switches). Phase B grows the machinery on this scaffold and sets up the supply chains. Phase C assembles mobile units that can hunt for resources and expand the operation. Phase D adds the coordination layer—timing routines and feedback loops that let everything work together without a central command center.
Two examples show how this plays out in practice. The brine-seeking ribbon system has ribbons that inch toward damp zones while wicks lift the salty water to printer nodes. The printers use this feedstock to strengthen the ribbons with new ribs and hinges, all while thermal sensors prevent overheating. On cold worlds, frost-hopper print farms work differently—hoppers jump at sunrise, dragging conductive mesh over dust to seed new electronic components. Local timing circuits manage the hop spacing and rest periods to stay within power budgets.
The beauty is in the interfaces. Mechanical connections use simple tabs and sintered joints. Electrical contact happens through magnetite threads and brine-mediated junctions. Fluids flow through wick ports and microchannels. Control signals travel via timing windows rather than complex addressing. And safety systems include rollback masks, fuses, and per-module watchdogs that shut things down if they go wrong.
The whole system closes the loop: concentrate feedstock, deposit new parts, maintain the actuators, repeat. Periodic constraint checks and value tests gate any scaling—nothing grows unless it proves it belongs.
```mermaid
flowchart TD
    Env["Local gradients (ΔT, redox, light)"] --> Harvesters["Energy harvesters"]
    Harvesters --> Scaffolds["Scaffolds & buses (magnetite filaments, clay hinges)"]
    Scaffolds --> Fabricators["Fabricators (electro‑capillary printers)"]
    Fabricators --> Parts["Mesoscale parts (ribs, hinges, joints)"]
    Parts --> Actuators["Actuators (millipede, frost hopper, pump)"]
    Actuators --> Movers["Resource concentration / transport"]
    Movers --> Stores["Stores & relays (brine sumps, charge pools)"]
    Stores --> Fabricators
    Sensors["Sensors (asperity memristors, albedo patches)"] --> Controllers["Local controllers (constraints)"]
    Controllers --> Actuators
    Controllers --> Fabricators
    Broadcast["Broadcast schedule (priors / guardrails)"] -. time windows .-> Controllers
```
Semantic scaffolds — from motion to purpose #
Random motion → Local signals → Goal-directed behavior
The hardest problem isn't building robots that move—it's building robots that want things. How do you get from mechanical motion to purposeful action without shipping a full AI across the galaxy?
The answer lies in objectives as tests. Instead of programming goals, we program checkpoints. Value checksums act like genetic fitness tests—if a behavior pattern helps the system pass its tests, it gets copied. If it fails, the system rolls back to the last working state. No consciousness required, just selection pressure applied at the right places.
Stigmergic markers provide the sensory layer. Heat, charge, albedo, and chemistry changes act like breadcrumbs that bias local choices. A thermal millipede follows heat gradients not because it "wants" warmth, but because that's what the pattern matching in its simple controller tells it to do. The broadcast schedule provides priors and guardrails—like parental guidance for robot behavior.
The economic layer runs on budgets and utility. Power and time become currency with natural decay rates. Sensors and markers generate simple utility scores, and controllers automatically climb the utility gradient within their constraints. It's not optimization—it's more like water flowing downhill, but in utility space rather than physical space.
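One way to picture "water flowing downhill in utility space": a local controller that takes small bounded steps toward higher utility while spending a decaying power budget. The utility surface and costs below are stand-ins, not a proposed design.

```python
import math

# Toy local controller: climb a utility gradient (warmer + damper is better) in small
# bounded steps, spending a limited power budget. Utility and costs are stand-ins.

def utility(x: float) -> float:
    warmth = math.exp(-(x - 3.0) ** 2)          # warm patch centered near x = 3
    damp = 0.5 * math.exp(-(x - 4.0) ** 2)      # damp patch centered near x = 4
    return warmth + damp

def step(x: float, budget: float, step_size: float = 0.1):
    if budget <= 0.0:
        return x, 0.0                            # out of power: hold position
    # Try a small move each way and keep whichever scores best (finite-difference "gradient").
    best = max([x - step_size, x, x + step_size], key=utility)
    cost = 0.05 if best != x else 0.01           # moving costs more than idling
    return best, budget - cost

x, budget = 0.0, 2.0
for _ in range(60):
    x, budget = step(x, budget)
print(f"settled near x = {x:.1f}, remaining budget = {budget:.2f}")
```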
The beauty is that this works across different worlds. We prefer targets with strong native masks—big-moon eclipse sweeps, high obliquity seasons, volatile frost belts, or ring shadows that give us natural timing and addressing systems.
On Mars-like worlds (cold, CO₂ frost, perchlorates), we use magnetite dust plus perchlorate brines with frost hoppers and thermal millipedes as our primary movers. Control signals ride on visible and near-IR masks, with low-frequency ticks near Martian resonance bands. On Titan-like worlds (hydrocarbon lakes, cryogenic conditions), RF windows penetrate the haze while surfactant patterning on lake surfaces guides Marangoni surfers—robots that ride composition and temperature gradients like chemical surfers. Growth happens via cold sintering of tholin grains on magnetite frameworks.
Safety hooks keep everything honest. Each stage maps to value checksums and sandbox protocols. Physical off-switches include albedo kill-masks and phase-change fuses that require reversibility before any scaling. Local watchdogs shut down individual modules on constraint violations and send re-plan requests when signals stay off-nominal too long.
Pathway Emergence to Local Decoder Creation (how Mode 2 actually boots) #
A "decoder" isn't shipped—it emerges by recruiting environmental structure. Pattern Realism's pathway emergence framework gives us the engineering playbook.
The strategy exploits environmental scaffolds that are already there. Stable gradients like thermal, redox, pH, and radiation differences provide direction and energy ratchets. Cyclic drivers such as day/night, tides, and seasons enable non-equilibrium pumping and create error-correction windows. Mineral templates and catalysts including clays, metal sulfides, and porous rocks offer templating surfaces and reaction acceleration. Topological constraints like pores, microchannels, and ice veins act as natural sorters and rectifiers. Finally, background informational affordances—the abundances, stoichiometries, and spectral lines of local chemistry—provide priors you can encode against.
This translates into four engineering levers drawn from Pattern Realism. Structural recruitment concentrates and co-locates reactants through adsorption, filtration, and capillarity. Template effects favor copy-with-variation, whether through surface templating of polymers or patterned deposition for non-biological logic. Boundary stabilization builds semi-permeable compartments like vesicles, precipitate membranes, or ice-brines to lower noise. Cascading order stacks simple wins in sequence: ratchets lead to cycles, cycles enable autocatalytic sets, and autocatalytic sets support codes.
You know decoder emergence has succeeded when four criteria are met. Persistence means information-bearing states outlive disturbances by more than N cycles. Programmability means external signals—light, flow, composition changes—can bias configuration transitions reliably. Amplification means small pattern differences lead to reproducible macroscopic effects with gain greater than 1 under control inputs. Closure means at least one autocatalytic loop maintains the decoder hardware, whether that's a polymer set or non-biological gates.
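Those four criteria read naturally as a gating checklist. Here's a sketch of how they might be encoded as pass/fail tests over logged observations; the field names and thresholds are assumptions.

```python
# Decoder-emergence gate: four pass/fail checks over logged observations.
# Field names and thresholds are illustrative assumptions.

def decoder_emerged(log: dict, n_cycles: int = 10) -> dict:
    checks = {
        # Persistence: information-bearing states outlive disturbances by more than N cycles.
        "persistence": log["state_lifetime_cycles"] > n_cycles,
        # Programmability: external signals bias configuration transitions reliably.
        "programmability": log["bias_success_rate"] >= 0.9,
        # Amplification: small pattern differences cause reproducible macro effects, gain > 1.
        "amplification": log["macro_gain"] > 1.0,
        # Closure: at least one autocatalytic loop maintains the decoder hardware.
        "closure": log["maintenance_loops"] >= 1,
    }
    checks["emerged"] = all(checks.values())
    return checks

print(decoder_emerged({
    "state_lifetime_cycles": 14,
    "bias_success_rate": 0.93,
    "macro_gain": 1.8,
    "maintenance_loops": 1,
}))
```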
The Mode 2 design playbook follows this logic. First, assay gradients, catalysts, and topologies to compute an "affordance map." Pick a substrate—polymeric or non-biological—whose error model and kinetics fit the map. Impose boundaries through precipitate membranes, vesicles, or lithography-analog pores to cut noise. Add a ratchet and cycle using beamed power timing or flow reversals to drive far-from-equilibrium ordering. Template the code using monomers or gate motifs and test copy-with-variation within tolerances. Close the loop with an autocatalytic set that repairs and extends the decoder itself. Finally, gate scaling on Agency Index and safety checks—otherwise, revise and retry.
Thermodynamic Bridge — turning gradients into decoders #
A thermodynamic bridge couples environmental free-energy gradients to information-guided ratchets so that local exergy pays for building and maintaining a decoder. No Maxwell-demon cheats: the bound is the local exergy budget and the conversion chain.
This bridge supplies powered selectivity after the pathway emergence process assembles boundaries, cycles, and templating. The timing and thresholds let copy-with-variation beat noise so an autocatalytic set can close.
The bridge has minimal anatomy. Reservoirs and gradients (thermal $\Delta T$, redox potential differences, light, pressure/flow, E-fields) provide the raw energy differences. A transducer (thermoelectric, photochemical, electrochemical, or mechanical) converts gradients to controllable work. Ratchets and rectifiers like chemical valves, diodes, or Brownian ratchets bias fluctuations in the desired direction. A controller with memory (thresholds, clocks, state) gates reactions and repair processes. Finally, work channels including bond formation, deposition, charge motion, and micro-mechanics maintain the decoder hardware.
Design targets:
- Exergy capture rate ($\dot{E}_{\text{cap}}$): free energy harvested per unit time.
- Capture efficiency: $\eta_{\text{capture}} = E_{\text{cap}}/B_{\text{exergy}}$.
- Conversion efficiency: $\eta_{\text{convert}} = W_{\text{useful}}/E_{\text{cap}}$.
- Chain efficiency: $\eta_{\text{chain}} = \eta_{\text{capture}}\,\eta_{\text{convert}} = W_{\text{useful}}/B_{\text{exergy}}$ (chain‑level; $E_{\text{cap}}$ is exergy actually captured from local gradients over $\tau$).
- Logic dissipation ratio (optional, block‑level): Landauer cost of counted irreversible ops, $N_{\text{erase}}\,k_B T \ln 2$, relative to the block's useful work. Don't fold it into chain efficiency—keep them separate so macroscale losses don't masquerade as logic limits.
- Loop gain and local feedback stability margins to beat noise without overdrive (a small numeric sketch of these targets follows below).
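And the promised numeric sketch of that bookkeeping—capture rate, the efficiency chain, and the optional Landauer sub-check—with placeholder log values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bridge_report(accessible_exergy_j, captured_j, useful_work_j,
                  horizon_s, irreversible_ops, temperature_k=300.0):
    """Design-target bookkeeping for a thermodynamic bridge (placeholder values)."""
    capture_rate_w = captured_j / horizon_s                             # exergy capture rate
    eta_capture = captured_j / accessible_exergy_j
    eta_convert = useful_work_j / captured_j
    landauer_j = irreversible_ops * K_B * temperature_k * math.log(2)   # minimum logic dissipation
    return {
        "capture_rate_W": capture_rate_w,
        "eta_capture": eta_capture,
        "eta_convert": eta_convert,
        "eta_chain": eta_capture * eta_convert,
        # Kept separate from chain efficiency on purpose: macroscale losses dominate.
        "logic_dissipation_ratio": landauer_j / useful_work_j,
    }

print(bridge_report(accessible_exergy_j=5e5, captured_j=1.2e5, useful_work_j=3e4,
                    horizon_s=86_400, irreversible_ops=1e12))
```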
Three failure modes require specific mitigations. Stall from too little gradient calls for downshifting—simpler codes, slower clocks, more redundancy. Burn from overdrive requires fuses, sacrificial pathways, and duty cycle caps. Parasitism that hurts the host demands a mutualist gate where well-being proxies must rise under stress tests, or the system rolls back.
The human analog applies to information-based approaches. Attention, budget, and time become human exergy. Protocols and incentives serve as transducers; technical standards act as ratchets; control systems function as controllers. Same bridge pattern, different units.
From boot to Bio‑Informational Complex (the handshake) #
Boot doesn’t magically become a BIC; there’s a handshake with explicit gates.
- Stage mapping: Scout/Compile → Actuate → Stabilize → Scale maps to Exposure → Adoption → Lock‑In → Propagation.
- Gates: cross to BIC only when (a) decoder closure exists (maintenance loop holds), (b) chain efficiency clears threshold at horizon , (c) Agency Index clears declared uplift, and (d) mutualist gate passes (well‑being proxies improve; reversibility holds).
- Artifacts: before BIC, outputs look like scaffolds and controllers; after BIC, you see protective reactions, persistent resource allocation, and default policy take‑over under stress.
If any gate fails, roll back to the previous boot phase, adjust priors (broadcast), or revise local constraints. Only treat the composite as a BIC once Lock‑In is measured under interventions, not vibes.
Inside‑Out Lens bootstrap (proto‑semantics → semantics) #
Semantic processing in a BIC requires an Inside‑Out Lens (IOL)—a minimal self/world frame that turns signals into meanings relative to the composite. The boot ladder provides the scaffolds:
- Proto‑semantics (before IOL): stigmergic markers (albedo/heat/charge), local thresholds, and value checksums drive selection without a persistent self‑frame. Meanings are task‑local and ephemeral.
- Bridge conditions:
- persistent state across cycles;
- bounded model that predicts local consequences of actions;
- boundary maintenance behaviors that preserve controller integrity under stress.
- IOL emergence (minimal): a stable self/non‑self boundary around controllers + stores; perspective‑centered processing (budgets/thresholds relative to “self”); predictive modeling over horizon $\tau$ for actuator upkeep and safety. This satisfies a low‑tier IOL sufficient for semantic interpretation of markers, broadcasts, and objectives.
Tie‑ins to the docs: this matches the Pattern Realism pathway (persistence → programmability → amplification → closure) and the IOL criteria (self‑model integration, perspective‑centered processing, predictive modeling, boundary maintenance). Once these criteria hold under interventions, the composite can carry semantic information and qualifies as a BIC candidate.
Plugging into the Bio‑Informational Complex #
The Teleoform serves as an engineered pathway to establishing a Bio-Informational Complex with a predictable developmental arc. Understanding this lifecycle becomes crucial for both deployment and safety monitoring.
The natural progression follows a clear sequence: Exposure → Adoption → Lock‑In → Propagation → Drift/Breakdown. The Teleoform's boot stages map directly onto this biological pattern: Scout/Compile → Actuate → Stabilize → Scale → Audit/Fork. This parallel structure isn't accidental—it exploits the same organizational dynamics that make biological symbiosis stable and productive. Propagation remains gated on Lock‑In metrics clearing predeclared thresholds, ensuring the pattern proves its value before scaling.
Field diagnostics for identifying an active Bio-Informational Complex rely on three observable markers, useful both in laboratory settings and real-world deployment. Physical dominance emerges when the host consistently prioritizes pattern-consistent behaviors and responses under pressure—the information structure becomes the default control policy for system behavior. Resource allocation shows up as measurable diversion of time, attention, and energy toward maintaining the complex rather than competing priorities. Protective reactions manifest as defensive moves when the complex faces threats, creating what amounts to a physical immune response that guards the host-pattern relationship.
The health spectrum of any Bio-Informational Complex ranges across three categories: Mutualist, Commensal, and Parasitic. Classification depends on measurable well-being deltas, externalities, and reversibility under stress testing. Critically, any drift toward parasitism automatically triggers rollback protocols—the system must demonstrate ongoing mutual benefit or face termination.
Safety & ethics: how not to become a polite parasite #
The claim only holds if "mutualist" is measurable and enforceable. The hard problem isn't designing good intentions—it's building systems that stay helpful even when they drift across light-years and decades. So here are some thoughts on the potential gates (all very hand-wavy, but that's how I make money at my day job):
Know your target (policy before physics) #
Different environments demand different safety approaches. Dead rocks with no biosphere get power-capped engineering with broad autonomy—the main risk is wasting energy, not destroying ecosystems. Living biospheres require strict sandboxing, consent primitives, and low-impact pilots only—one wrong move could damage millions of years of evolution. Cultural systems would need multiparty governance with hard vetoes, but in practice, probably just don't.
Preflight gates (nothing flies without passing these) #
Before any field deployment, four gates must clear with hard numbers. The risk budget requires well-being proxies to stay positive under stress testing—something like 95% confidence interval above zero, no exceptions. Exergy caps limit local power consumption and actuator duty cycles, published ahead of time so everyone knows the boundaries. Containment means demonstrating less than one-in-a-million chance of sandbox breach per month, measured empirically. Reversibility demands that you can restore 95% of baseline conditions using less than 10% of the energy it took to build the system.
Value checksums (go/no-go tests, crystal clear) #
Instead of vague principles, we use specific pass/fail tests. The format is simple: "Keep X within bounds while achieving Y above threshold y, even under perturbation Z." For example, a cold-world printer must keep temperature between $T_{\min}$ and $T_{\max}$, maintain actuator duty below a declared cap $d_{\max}$, and produce ribbon elongation above target with less than 2% defective joints over N cycles. Fail any test, trigger immediate rollback.
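That format maps directly onto a pass/fail function. Here's a sketch for the cold-world printer example, with invented bounds and telemetry field names:

```python
# Value checksum for the cold-world printer example: hard pass/fail, no judgment calls.
# Bounds and telemetry fields are invented for illustration.

def printer_checksum(telemetry: dict) -> bool:
    keep_x = (-40.0 <= telemetry["temp_c"] <= 10.0            # temperature stays in bounds
              and telemetry["actuator_duty"] < 0.2)            # duty cycle below its cap
    achieve_y = (telemetry["ribbon_elongation"] >= 1.0         # elongation meets target
                 and telemetry["defective_joint_rate"] < 0.02) # under 2% defective joints
    under_z = telemetry["perturbation_applied"]                # test only counts under perturbation
    return keep_x and achieve_y and under_z

run = {"temp_c": -12.0, "actuator_duty": 0.11, "ribbon_elongation": 1.3,
       "defective_joint_rate": 0.015, "perturbation_applied": True}

if not printer_checksum(run):
    print("FAIL -> trigger immediate rollback")
else:
    print("PASS -> stage may continue")
```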
Sandbox protocol (quarantine that actually works) #
Physical and temporal boundaries keep experiments contained. Spatial quarantine uses geofenced masks and physical fuses with telemetry proving confinement. Temporal gates require stage timers with watchdogs—no progression without cleared checkpoints. Authority structure needs 3-of-5 key holders (local science lead, environmental steward, safety officer, independent auditor, ops) to unlock each next stage. No solo missions, no exceptions.
Continuous monitoring (watch the dashboards, halt on breach) #
Four key metrics run continuously with automatic shutdown on violation. Steering performance tracks the Agency Index staying above minimum thresholds and trending upward. Energy efficiency monitors the conversion chain staying above baseline. Host well-being measures the system's impact staying positive with bounded drift rates and alarms on negative inflections. Behavioral drift ensures morphology and behavior stay within specifications—step outside, get quarantined.
Off-switches that actually work #
Multiple redundant shutdown mechanisms across physical, logical, and environmental layers. Physical switches include albedo kill-masks, phase-change fuses, sacrificial breaker links, and shade blankets. Logical controls cover broadcast quarantine, key revocation, checksum invalidation, and stage freezes. Environmental options use brine dilution, thermal dampers, and mechanical scuttling tabs. Any single trigger—low Agency Index, negative well-being, SLO breach, or unauthorized spread—immediately activates rollback.
Governance without empires #
No central kill switch means shared control and transparent proofs. Multiparty control requires threshold signatures from multiple key holders for any stage changes or cap edits. Sovereign veto lets local authorities hard-stop operations within their jurisdictional bounds. Audit trails maintain immutable, signed telemetry and intervention logs with periodic third-party reviews. Change processes follow a clear path: proposal, risk assessment, simulated replay, then quorum sign-off.
Scaling gates (prove it before you spread it) #
Replication only happens after meeting strict performance thresholds: Agency Index above scaling minimum, conversion efficiency above baseline, well-being positive under stress testing, and successful reversibility demonstration. We set replication rate targets and hard caps up front—exceed the caps, freeze everything for review.
Graceful speciation (safety core stays compatible) #
Across light-years, forks are inevitable. The framework requires a minimal constitutional core—safety APIs, off-switch semantics, telemetry schemas—that stays interoperable across all variations. Everything else can drift freely after proving compatibility. This maintains essential safeguards while allowing adaptive evolution to local conditions.
When things go wrong (who does what, when) #
Clear incident response with alert tiers, time budgets for acknowledgment and mitigation, named roles, and contact trees. Pre-approved containment playbooks for each environment class, with public after-action reports. No heroics, no improvisation—just systematic response protocols that work under pressure.
Dessert — domesticated rocks doing tricks #
In lieu of wild rocks, I tested the core idea on domesticated ones (CPUs) with a synthetic cost function that oscillates over time. Three arms competed: "ON" knew the true phase and waited for low-cost windows, "SCRAMBLED" used random timing, "OFF" ignored timing entirely. Same CPU budget for all three arms; the question was whether information about timing could extract more useful work from the same energy.
The rocks' trick: When they knew the timing, they did ~30% more work per CPU-second by avoiding expensive periods.
| Metric | ON vs OFF | ON vs SCR |
|---|---|---|
| Directional Influence | 0.055–0.064 | 0.067–0.110 |
| Efficiency uplift (tokens/CPU‑sec) | +534 to +546 | +413 to +646 |
| Agency Index | 0.015–0.018 | 0.019–0.029 |
The math: cost(t) = base_cost × [1 + amp × sin(2π f (t − t0) + φ_env)]. All effects significant (95% CIs exclude 0), replicable across platforms (WSL 7800X3D, macOS M3).
Bottom line: Informed gating does causal work—shifting effort into low-cost windows yields more useful work per CPU-second. It's a toy harness on a computer, not a planet, but it exercises the same constraints and metrics described above.
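For flavor, here's a stripped-down sketch of that kind of harness (the actual gist differs): the cost follows the formula above, and the three arms differ only in what they know about the phase. The work unit and arm logic are simplified stand-ins.

```python
import math
import random

# Stripped-down version of the toy: an oscillating cost of doing a unit of work, and
# three arms that either know the phase (ON), guess it (SCRAMBLED), or ignore it (OFF).

def cost(t, base=1.0, amp=0.5, f=0.2, t0=0.0, phi=0.0):
    return base * (1 + amp * math.sin(2 * math.pi * f * (t - t0) + phi))

def run_arm(knows_phase: bool, uses_timing: bool, budget=1000.0, seed=0):
    rng = random.Random(seed)
    t, work_done = 0.0, 0
    while budget > 0:
        c = cost(t)
        believed_cheap = (c < 1.0) if knows_phase else (rng.random() < 0.5)
        if not uses_timing or believed_cheap:   # OFF always works; ON/SCRAMBLED gate on belief
            budget -= c
            work_done += 1
        t += 0.1                                # time advances whether or not we work
    return work_done

print("ON       ", run_arm(knows_phase=True,  uses_timing=True))
print("SCRAMBLED", run_arm(knows_phase=False, uses_timing=True))
print("OFF      ", run_arm(knows_phase=True,  uses_timing=False))
```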
Link to gist if you want to play with parameters.
Closing #
We know how to make rocks think. We just did it the slow way—evolve apes and teach them calculus, then do a lot of trial and error, then start writing all of that down, drop a couple of small suns onto the surface of the rock we're on, and then finally make a fancy word for shining lights at rocks until they start thinking at the microscale.
In photolithography, information patterns steer chemical reactions to build circuits without moving atoms directly. What this post proposes is lithography turned inside out at astronomical scales—instead of bringing light to rocks, we bring information to rocks that already have light, and let them organize themselves into doing, and potentially even thinking, machines.
An info‑first approach says: export control patterns (protocols + systems + recipes), and let local matter and sunlight do the lifting. If this framework is wrong, the Agency Index, the boot ladder, or any other component should fail in public, and I will happily take the L (you miss 100% of the shots you don't take). If it's right, we have experiments to run.