Discussion about this post

The Stock Market Curator 📓

So the digital twin becomes actual infrastructure.

hohoda

the digital twin framing really clicked for me. most people treat AI context-setting as a one-time prompt rather than persistent infrastructure. voice.md and working-style.md are essentially the difference between hiring a temp who needs briefing every morning and an assistant who already knows how you think.

i've been experimenting with a similar setup in OpenClaw — they call the equivalent file SOUL.md, which leans more into identity than just style. interesting how different tools converge on the same underlying idea: the bottleneck isn't capability, it's context.

curious — when you first built your own voice.md, how long did it take before it actually felt accurate? i'm finding the first draft always sounds like how i think i sound, not how i actually sound.
