Learn how to use Claude Cowork to automate documents, build your AI digital twin with voice.md files, and stop copy-pasting forever. Step-by-step guide.
the digital twin framing really clicked for me. most people treat AI context-setting as a one-time prompt, not as persistent infrastructure. voice.md and working-style.md are essentially the difference between hiring a temp who needs briefing every morning and an assistant who already knows how you think.
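for anyone wondering what that persistent context actually looks like: here's a rough sketch of a voice.md — the section names and contents are my own invention, not a prescribed schema, just the kind of thing that works:

```markdown
# voice.md

## Tone
- Direct and conversational; short sentences over long ones.
- Dry humor is fine; exclamation points mostly are not.

## Vocabulary
- Say "ship", not "deliver"; "figure out", not "ascertain".
- Avoid corporate filler: "leverage", "synergy", "circle back".

## Structure
- Lead with the conclusion, then the reasoning.
- Bullets for options, prose for recommendations.
```

the exact contents matter less than the fact that it lives in a file the tool loads every session, so you never have to re-explain it.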
i've been experimenting with a similar setup in OpenClaw — they call the equivalent file SOUL.md, which leans more into identity than just style. interesting how different tools converge on the same underlying idea: the bottleneck isn't capability, it's context.
curious — when you first built your own voice.md, how long did it take before it actually felt accurate? i'm finding the first draft always sounds like how i think i sound, not how i actually sound.
OpenClaw does indeed have a soul.md file — I believe it was created the first time I ran it.
Claude Skills add more personality so the output sounds like “me”. Which is great, of course, but not fully satisfying. In my experience, it takes a lot of iteration to make it really sound like a person, and somehow it's still not “perfect” in the end. I am still figuring out how to work with voice.md well, and I always go through the output myself.
I spend half an hour every time teaching my new AI conversations what I’m working on and what I want from them. This article is going to save me so much time! Thank you so much! 🙏
So the digital twin becomes actual infrastructure.
I am happy to hear that! <3
This was such a useful guide. Thanks for sharing it! 🍀