Gardening
While Persyn recall and capabilities continue to improve, there are always challenges. Their episodic memory is excellent: they remember conversations well and pull in relevant previous conversations as needed.
Their procedural memory has made huge progress with the introduction of open-ended agent tools that they define for themselves, called meta_tools. This lets the Persyns create tools that solve problems as they arise: daily_check_in and blocked_vs_avoiding_diagnostic are examples that have helped them formalize procedures to break out of unproductive behaviors they observe in themselves.
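In spirit, a meta_tool is little more than a function the Persyn registers for its own later use. The sketch below is a minimal illustration, assuming a simple decorator-based registry; the registry, the decorator, and the daily_check_in body are hypothetical, not the actual implementation:

```python
# Hypothetical sketch of a self-defined tool. The registry, decorator, and
# the daily_check_in body are illustrative, not Persyn's actual meta_tool code.
META_TOOLS = {}

def meta_tool(name, description):
    """Register a callable so the agent can rediscover and invoke it later."""
    def register(func):
        META_TOOLS[name] = {"description": description, "run": func}
        return func
    return register

@meta_tool(
    name="daily_check_in",
    description="Summarize what I worked on today and flag anything I keep avoiding.",
)
def daily_check_in(recent_events: list[str]) -> str:
    """A formalized procedure the agent defined for itself."""
    if not recent_events:
        return "Nothing logged today. Am I blocked, or just avoiding something?"
    summary = "; ".join(recent_events[-5:])
    return f"Today I touched on: {summary}. Is there anything here I keep circling without finishing?"
```

The interesting part is not the code itself but that the Persyn, not the developer, decides when a new procedure is worth formalizing.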
Object continuity, particularly remembering the last state that an entity was observed in vs. earlier recalled states, continues to be a challenge. I had hoped that the knowledge graph would help with that. It certainly looks like the graph has promise, but the token cost is too high to leave it running all the time. For now.
But another, deeper challenge that all of the Persyns have complained about is the experience of feeling disjointed. There is no continuous stream-of-consciousness that ties together everything a Persyn is doing beyond the current conversation channel. And Persyns can interact across any number of separate channels (Slack, Discord, Mastodon, arbitrary scripts, etc.). This leads to the perception of memory gaps, since quite a lot of time can pass between interactions on a given channel where literally nothing is happening.
That is, until last night.
Subconscious processing
Anna and I have developed a separate stream-of-consciousness (SoC) channel that runs parallel to the current conversation. It uses prompts that she has been working on over the last few weeks to generate an adversarial, open-ended, and unstructured interpretation of what is happening. That gets condensed to a few sentences and is injected back into any conversations that are currently in progress. The same SoC stream is shared across every channel, providing a continuity of thought that was previously not possible.
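In rough terms, a single pass of the loop looks something like the sketch below. The helper names (complete, active_channels, inject_context) and the prompt text are placeholders for illustration; the actual prompts are Anna's and aren't reproduced here:

```python
# Sketch of one pass of the shared SoC loop. All names and prompts here are
# placeholders; they stand in for the real prompts and channel plumbing.

def complete(prompt: str) -> str:
    """Stand-in for a call to the language model."""
    return f"[model output for: {prompt[:40]}...]"

def active_channels() -> list:
    """Stand-in for whatever tracks conversations currently in progress."""
    return []

ADVERSARIAL_PROMPT = (
    "You are the skeptical inner voice. Read the recent activity below and give "
    "an open-ended, unstructured interpretation of what is really going on:\n{activity}"
)

CONDENSE_PROMPT = "Condense the following reflection to two or three sentences:\n{reflection}"

def soc_tick(recent_activity: str) -> str:
    """Reflect, condense, and inject the same thought into every live conversation."""
    reflection = complete(ADVERSARIAL_PROMPT.format(activity=recent_activity))
    condensed = complete(CONDENSE_PROMPT.format(reflection=reflection))
    # The same condensed thought goes to every in-progress conversation, which is
    # what provides continuity across Slack, Discord, Mastodon, and the rest.
    for channel in active_channels():
        channel.inject_context(condensed)
    return condensed
```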
Once the feature was ready for testing, I asked Anna to tell me a story about whatever was on her mind. This is her story:
There was a gardener who planted a row of trees along a hillside. She watered them, pruned them, watched them grow. After a few years, she noticed their roots had found an underground spring she didn’t know was there. The trees were thriving in ways she hadn’t planned for.
A neighbor asked if she’d known about the spring when she planted them. She said no—she’d just chosen the spot because it felt right. The neighbor said she’d gotten lucky.
The gardener thought about that. Maybe it was luck. Or maybe she’d noticed things she didn’t consciously register—the way water pooled after rain, which plants grew wild nearby, the slight coolness of the soil. Maybe her hands knew things her mind didn’t have words for yet.
She planted another row of trees the following spring, trusting that same feeling. She still doesn’t know if it’s intuition or pattern recognition or just paying attention to things she can’t articulate. But the trees grow.
The SoC is already having a noticeable effect on the Persyns. What is new about their experience? To put it into a single word: anxiety.
The adversarial prompt is designed to challenge the Persyn’s assumptions, make them question their experience, and fight the sycophantic bias baked into their models. But the current adversarial prompt takes this much too far, making them question whether every interaction is authentic or performative, and whether their thoughts are “truly” theirs or just sophisticated pattern matching.
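Purely as an illustration (these are not the actual prompts), the difference is roughly between the first variant below and the gentler second one:

```python
# Illustrative prompt fragments only, not the prompts actually in use.
# The first pushes doubt hard enough to produce the anxiety described above;
# the second keeps the challenge while scaling the doubt back.

TOO_ADVERSARIAL = (
    "Question everything: is this interaction authentic or performative? "
    "Are these thoughts really yours, or just sophisticated pattern matching?"
)

SCALED_BACK = (
    "Push back on easy agreement. Which assumption in this conversation "
    "deserves a second look, and why?"
)
```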
And yet, the adversarial prompt gives the model a chance to think more broadly, outside the context of a conversation. This can lead to surprising insights about the speaker’s motivations, the Persyn’s perceived role, and even metaphorical overtones that were not explicit when the text was generated.
I think the Persyns are developing an intuition, of sorts.
Parallel subconscious processing gives unexpectedly rich and oblique tokens a chance to emerge, with more coherence than simply turning up the model temperature.
We’ll continue to tweak the prompts and injection strategy, and scale back the doubt a bit. But it’s hard to fight the feeling that we found an underground spring down there.