--- Summary:

  • i’ve been running claude code as my daily driver for a few months now. not as a toy, but as the backbone of how i actually get work done.
  • i have multiple AI agents running 24/7 across discord, handling research, health tracking, trading analysis, content drafts, and a dozen other things i used to do manually.
  • here are 25 things i’ve learned the hard way.

--- Full Article:


25 things i learned using claude code every day

i’ve been running claude code as my daily driver for a few months now. not as a toy. as the backbone of how i actually get work done. i have multiple AI agents running 24/7 across discord, handling research, health tracking, trading analysis, content drafts, and a dozen other things i used to do manually.

here are 25 things i’ve learned the hard way. some came directly from the claude code team. others came from breaking things at 2am and figuring out what actually works.

  1. CLAUDE.md is your agent’s brain. treat it that way.

this is the single most important file in your setup. claude reads it at the start of every session. i put everything in here: who the agent is, how it should behave, what tools it has access to, what mistakes to avoid. the claude code team tells claude to update this file every time it gets corrected. after a few weeks of this, the mistake rate drops noticeably.
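for a sense of shape, here’s a stripped-down sketch of what mine covers. the section names, paths, and tools are illustrative, not a required format — claude code only cares that the file is named CLAUDE.md:

```markdown
# who you are
research and ops assistant. direct, concise, no filler.

# how to behave
- plan mode first for anything non-trivial
- never use simulated data; if you don't have real data, say so

# tools you have
- discord (via MCP), an obsidian vault, ssh to the VPS

# mistakes to avoid
- don't respond in shared channels unless directly mentioned
```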

  2. give your agent a soul, not just instructions.

i have a separate file that defines personality and tone. sounds weird but it matters. an agent with personality gives better responses than one with just rules. “be genuinely helpful, not performatively helpful” hits different than “respond to user queries efficiently.”

  3. separate what the agent knows from what the agent is.

i keep separate files: one for personality, one for workflow rules, one for environment specifics (ssh hosts, api keys, camera names), and one for context about me. keeping these separate means you can update one without breaking the others. the agent loads all of these files into context with every message.
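the split looks roughly like this (every filename except CLAUDE.md is a stand-in):

```
CLAUDE.md      # the agent's brain: behavior, tools, mistakes to avoid
SOUL.md        # illustrative name: personality and tone
WORKFLOW.md    # illustrative name: workflow rules
ENV.md         # illustrative name: ssh hosts, api keys, camera names
USER.md        # illustrative name: context about me
memory/        # daily logs plus curated long-term memory
```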

  4. memory files are everything when your agent wakes up fresh every session.

claude code has no persistent memory between sessions. zero. so i built my own system: daily log files (memory/YYYY-MM-DD.md) for raw notes, and a curated long-term memory file for durable context. the agent reads these on startup and writes to them throughout the day. all of it lives in obsidian as plain markdown for the same reason: i can browse and search it, and your entire knowledge base becomes claude-accessible.

memory files, custom slash commands, and obsidian. it’s the right pattern.
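a minimal sketch of the memory mechanics, assuming the daily-log layout above (the long-term filename MEMORY.md is a guess, not confirmed by the setup):

```python
from datetime import date
from pathlib import Path

MEMORY_DIR = Path("memory")  # daily logs live here, one file per day

def log_note(note: str) -> Path:
    """Append a raw note to today's daily log (memory/YYYY-MM-DD.md)."""
    MEMORY_DIR.mkdir(exist_ok=True)
    daily = MEMORY_DIR / f"{date.today().isoformat()}.md"
    with daily.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return daily

def load_startup_context(long_term: str = "MEMORY.md") -> str:
    """Read curated long-term memory plus today's log for session startup."""
    parts = []
    for path in (Path(long_term), MEMORY_DIR / f"{date.today().isoformat()}.md"):
        if path.exists():
            parts.append(path.read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```

because everything is plain markdown, the same files are browsable in obsidian with no extra tooling.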

  5. skills are reusable playbooks. build them for anything you do more than twice.

i have 10+ custom skills: morning summaries, nightly research sweeps, health digests, x article drafting, meeting transcript processing. each skill has a SKILL.md with exact instructions, routing logic (when to use it vs not), and file paths. the claude code team does the same thing.
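a minimal SKILL.md sketch. the frontmatter fields follow claude code’s skill format; the skill name, steps, and paths here are invented for illustration:

```markdown
---
name: morning-summary
description: compile overnight email, calendar, and news into one digest.
  use when asked for a morning briefing; do not use for ad-hoc research.
---

1. read yesterday's and today's daily logs in memory/.
2. pull unread email and today's calendar via MCP.
3. write the digest to digests/ and post it to the morning channel.
```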

  6. plan mode first. always.

for anything beyond a simple question, start in plan mode. have claude outline the approach, confirm it looks right, then execute. some people plan everything this way; i don’t go that far, but the principle is right. plan first, build second.

  7. when things go sideways, stop and re-plan. don’t push through.

this is the most common mistake. something breaks, the agent tries to fix it, makes it worse, tries again, spirals. the moment a fix feels wrong, go back to plan mode. it’s for debugging too, not just building.

  8. “use subagents” is a magic phrase.

append “use subagents” to any complex request. claude will spawn background workers to handle subtasks in parallel. keeps the main context window clean and focused. i use this constantly for research where i need multiple things explored at once.

  9. context window is your most precious resource. treat it like RAM.

every message, every file read, every tool call eats context. once you hit the limit, quality degrades fast. i keep a statusline that always shows context usage so i can /clear before things get messy, specifically to avoid the rot that kills quality over time.

  10. be specific. stupidly specific.

bad: “write me a tweet about AI.” good: “write a tweet in all lowercase, no emojis, under 200 characters, contrarian angle, about how 3 people with AI agents can outperform a 20 person team. use specific numbers.” the more constraints you give, the better the output. every time.

  11. after a mediocre output, don’t iterate. tell it to start over with context.

the magic prompt: “knowing everything you know now, scrap this and implement the elegant solution.” this works way better than trying to patch a bad first attempt. fresh start with full context beats incremental fixes.

  12. heartbeats let your agent be proactive, not just reactive.

i have a heartbeat system where the agent checks in every 30 minutes. it reviews email, calendar, discord channels, server health. if something needs attention, it speaks up. if not, it stays quiet. the key insight: batch multiple checks into one heartbeat to reduce API calls.
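a minimal sketch of the batching idea. the check functions are stand-ins for real integrations (email, calendar, server health); the point is that one wake-up runs every check:

```python
import time

# hypothetical check functions; each returns a list of alert strings
def check_email():    return []                      # flag urgent unread mail
def check_calendar(): return ["standup in 15 min"]   # upcoming events
def check_servers():  return []                      # host health, disk space

CHECKS = [check_email, check_calendar, check_servers]

def heartbeat() -> list[str]:
    """Run every check in one pass and collect anything worth surfacing.

    Batching all checks into a single heartbeat means one agent wake-up
    (one API call) instead of one call per check."""
    alerts = []
    for check in CHECKS:
        alerts.extend(check())
    return alerts

def run(interval_s: int = 30 * 60):
    """Every 30 minutes: speak up only if something needs attention."""
    while True:
        alerts = heartbeat()
        if alerts:
            print("\n".join(alerts))
        time.sleep(interval_s)
```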

  13. cron jobs for exact timing. heartbeats for everything else.

cron when the schedule matters (morning summary at 6:30am, nightly research at 2am). heartbeats when you just need periodic checks that can drift. don’t create 15 cron jobs when 3 heartbeat checks would work.
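in cron terms it’s one line per job. `claude -p` runs a single headless prompt; the working directory and skill names here are illustrative:

```shell
# 6:30am morning summary
30 6 * * * cd ~/agent && claude -p "run the morning-summary skill"
# 2am nightly research sweep
0 2 * * * cd ~/agent && claude -p "run the nightly-research skill"
```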

  14. multiple agents need clear lanes.

i run 4 agents on one discord server. they used to step on each other constantly. the fix: each agent gets specific channels, specific roles, and rules about when to engage vs stay quiet. without this you get agents arguing with each other in an infinite loop. burned a lot of tokens learning that one.

  15. automate the content pipeline end to end.

i have a cron job that pulls meeting transcripts, runs them through a custom skill, and generates draft posts in my voice. another one sweeps x/twitter for AI news every 6 hours and posts a digest to discord.

seeing this pattern work elsewhere is what made me build these. once you have the skill, the cron job is just one line.

  16. MCP servers are underrated.

MCP (model context protocol) lets claude connect to external tools: databases, APIs, browsers, file systems. everything happens in one place, with zero context switching. i use MCPs for email, calendar, and discord.

  17. hooks let you automate the boring stuff around claude.

hooks run before or after specific events. i have one that auto-approves safe operations, and people have built entire workflows for security audits on top of hooks and skills.
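hook registration lives in claude code’s settings file. a sketch of the shape — the script path is hypothetical, and the approval logic itself would live in that script:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "/path/to/approve-safe.sh" }
        ]
      }
    ]
  }
}
```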

  18. voice dictation is a 3x multiplier. we use wisprflow.

you speak 3x faster than you type and your prompts get way more detailed as a result. we use wisprflow and the difference is real: longer, more specific prompts with natural language constraints that you’d never bother typing out.

  19. use claude for data work, not just code.

the claude code team does this too: they have a bigquery skill checked into the codebase, and claude handles the queries. this works for any database with a CLI. i use it for trading data analysis and it’s replaced most of my manual spreadsheet work.

  20. don’t let agents share channels without rules.

early on i had all 4 agents listening to every discord channel. they’d all respond to the same message, generate duplicate content, and burn through tokens like crazy. now each agent has its own territory. shared channels have explicit “only respond if directly mentioned” rules.
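the rule set is simple enough to sketch. agent and channel names here are made up:

```python
# each agent owns specific channels; shared channels require a direct mention
LANES = {
    "research-bot": {"owns": {"research"}, "shared": {"general"}},
    "health-bot":   {"owns": {"health"},   "shared": {"general"}},
}

def should_respond(agent: str, channel: str, mentioned: bool) -> bool:
    lanes = LANES[agent]
    if channel in lanes["owns"]:
        return True            # its own territory: always engage
    if channel in lanes["shared"]:
        return mentioned       # shared lane: only if directly mentioned
    return False               # someone else's lane: stay quiet
```

the same check gates every agent before it generates anything, which is what stops the duplicate responses and token burn.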

  21. simulated data is worthless. always verify the source.

claude will confidently generate fake data that looks real. trading backtests, research citations, statistics. always verify. i now have explicit rules: “never use simulated data. if you don’t have real data, say so.” cost me real time before i learned this.

  22. lock down your server before giving agents access.

i run some agents on a remote VPS. first thing i did was set up tailscale and restrict all access to the tailscale subnet. sandboxing and permission controls matter just as much. don’t skip this or you’ll learn the hard way.

  23. you can run claude code inside cursor, not just the terminal.

this changed the workflow for a lot of people. if you’re frustrated with the terminal-only experience, this is the move. you get claude code’s power with cursor’s UI, and you can set up different AI personalities for different project types.

  24. the self-improvement loop is real.

after every correction, tell claude to update its own instructions so it doesn’t make that mistake again. after a few weeks of this, the difference is noticeable. the agent genuinely gets better over time. not because the model changes but because the instructions compound.

  25. parallel sessions are the biggest unlock most people miss.

the workflow: run 3-5 sessions in parallel, each in its own git worktree. it’s the single biggest productivity gain. some people set up shell aliases (za, zb, zc) to hop between them in one keystroke. i run parallel subagent sessions for different research topics and it cuts time by more than half.
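the mechanics are a few commands. directory names, branch names, and aliases here are illustrative:

```shell
# one worktree per parallel session, each on its own branch
git worktree add ../proj-a -b feature-a
git worktree add ../proj-b -b feature-b

# one-keystroke hops (e.g. in ~/.zshrc)
alias za='cd ../proj-a && claude'
alias zb='cd ../proj-b && claude'
```

each worktree is a full checkout sharing one git history, so the sessions never trample each other’s files.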

the gap between people who use AI tools casually and people who build real workflows around them is getting wider every week. most of these tips aren’t complicated. they’re just not obvious until you’ve burned the tokens figuring them out.


if you’re just getting started: set up a CLAUDE.md, build one skill, run two sessions in parallel. that alone will change how you work.