6 Comments
The Stock Market Curator 📓

So the digital twin becomes actual infrastructure.

hohoda

the digital twin framing really clicked for me. most people treat AI context-setting as a one-time prompt, not as persistent infrastructure. voice.md and working-style.md are essentially the difference between hiring a temp who needs briefing every morning and an assistant who already knows how you think.

i've been experimenting with a similar setup in OpenClaw — they call the equivalent file SOUL.md, which leans more into identity than just style. interesting how different tools converge on the same underlying idea: the bottleneck isn't capability, it's context.

curious — when you first built your own voice.md, how long did it take before it actually felt accurate? i'm finding the first draft always sounds like how i think i sound, not how i actually sound.

Asli Öztürk

OpenClaw indeed has a soul.md file; I believe it was created when I first ran it.

Claude Skills give the output more personality, so it sounds like “me”. Which is great of course, but not fully satisfying. In my experience, it takes a lot of iteration to make it really sound like a person, and it's somehow still not “perfect” in the end. I am still figuring out how to work with voice.md well. And I always go through the output myself.

DeSebba

I spend half an hour every time teaching my new AI conversations what I’m working on and what I want from them. This article is going to save me so much time! Thank you so much!🙏

Asli Öztürk

I am happy to hear! <3

Nikola Blagojevic

This was such a useful guide. Thanks for sharing it🍀