Do Claude Code or Codex support a way to pass in a very long message from a file? For example, I want to provide the instructions to start building the application. It's several paragraphs, so I want to write it with vim, save it, then tell Claude or Codex to read the file and follow the instructions.
Does this work? Can you find anything online? Ideally try to cover both tools.
Approaches that seem reasonable are:
- Writing the file, say, instructions.md or instructions.txt in the directory and telling the agent to read it and build what is described.
- Pasting the contents of the file into the interactive prompt.
I'm open to other options. Please try to provide commentary for both tools, as we use both of them at my firm. Thanks for your help, you're the best.
ChatGPT 5.2 Pro Thinking (thought for 9m 59s):
Yes. Both Claude Code and OpenAI Codex (especially the Codex CLI) support workflows where the “long instructions” live in a file you edit with vim, and you either (a) reference/attach that file, (b) have the tool automatically load it as project instructions, or (c) pipe it in via stdin.
Claude Code supports @ file references that include the full content of the file in the conversation. Example:
Build the application described in @instructions.md.
Start by summarizing requirements and proposing a plan, then implement.
This pattern is explicitly documented (“This includes the full content of the file in the conversation.”). (Claude Code)
Claude Code has a built-in “memory” mechanism: CLAUDE.md (and related locations) are automatically loaded into context when Claude Code launches. (Claude Code)
Useful extras:
- `CLAUDE.md` can import additional files using `@path/to/import` syntax, so you can keep a large spec in `instructions.md` and keep `CLAUDE.md` short, e.g. “See @instructions.md”. (Claude Code)
- For large projects, you can break guidance into `.claude/rules/*.md`, which are automatically loaded as project memory. (Claude Code)
This is the closest Claude Code equivalent to “check in a spec file and have the agent consistently follow it.”
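As a minimal sketch of that layout (assuming the `@`-import syntax cited above; `instructions.md` is the filename from the question):

```bash
# Keep CLAUDE.md short and import the long spec via @-import
cat > CLAUDE.md <<'EOF'
# Project memory
The application to build is specified in @instructions.md.
Read it in full before making changes.
EOF
```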
Claude Code’s CLI supports piping file contents into a -p run:
cat instructions.md | claude -p "Follow the above instructions and implement the app."

This is documented as “Process piped content.” (Claude Code)
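If you would rather have the file be the entire prompt, instead of piped context plus an inline instruction, a hedged alternative using the same documented `-p` flag is command substitution:

```bash
# Pass the whole file as the one-shot prompt
claude -p "$(cat instructions.md)"
```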
In interactive Claude Code sessions, Ctrl+G opens your prompt in the default text editor (“Edit your prompt or custom response”). If your environment is set to use vim, this is essentially “press a key, write paragraphs in vim, save, send.” (Claude Code)
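If vim is not already your default editor, the usual shell convention (an assumption about your environment, not a Claude Code-specific setting) is to export it before launching:

```bash
# Assumes Ctrl+G honors the standard EDITOR/VISUAL environment variables
export EDITOR=vim
claude   # then press Ctrl+G at the prompt to compose in vim
```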
If what you want is closer to “rules the agent should always follow for this run,” Claude Code also supports:
- `--system-prompt-file` (replace the system prompt) and `--append-system-prompt-file` (append to the default system prompt), both documented as print-mode-only options. (Claude Code)
This is useful for “policy / style / constraints” that you want treated as higher-priority than a normal user message.
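A sketch of that, using only the print-mode flags named above (`firm-constraints.md` is a hypothetical file holding your firm’s policy text):

```bash
# One-off print-mode run with firm policy appended to the system prompt
claude -p "Build the application described in instructions.md" \
  --append-system-prompt-file firm-constraints.md
```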
Codex has first-class support for project instruction files via AGENTS.md:
- “Codex reads `AGENTS.md` files before doing any work.” (OpenAI Developers)
- It builds an instruction chain with clear precedence (global → project path → merge order), and it has a default combined-size cap (`project_doc_max_bytes`, 32 KiB by default). (OpenAI Developers)
Practically, this means you can:
- Commit `AGENTS.md` at the repo root with your “start building the application” instructions.
- Optionally add `~/.codex/AGENTS.md` for firm-wide defaults and layer repo specifics on top; a sketch follows below. (OpenAI Developers)
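A minimal sketch of that layering (`firm-defaults.md` is a hypothetical source file; only the `AGENTS.md` destinations come from the docs):

```bash
# Firm-wide defaults, read for every repo
mkdir -p ~/.codex
cp firm-defaults.md ~/.codex/AGENTS.md

# Repo-specific "build this application" spec, read before any work in this repo
cp instructions.md AGENTS.md
```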
If you already have instructions.md and don’t want to rename it, Codex can be configured to treat alternate filenames as instruction sources via project_doc_fallback_filenames. (OpenAI Developers)
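A hedged example of that configuration, assuming Codex’s `~/.codex/config.toml` config file and assuming the key takes a list of filenames:

```bash
# Let Codex pick up instructions.md as a project doc without renaming it
cat >> ~/.codex/config.toml <<'EOF'
project_doc_fallback_filenames = ["instructions.md"]
EOF
```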
Codex CLI supports reading the initial prompt from stdin by using - for the prompt:
The `PROMPT` argument can be `string | -` (read stdin), and the docs say “Use `-` to pipe the prompt from stdin.” (OpenAI Developers)
Example:
codex exec - < instructions.md

(or equivalently `cat instructions.md | codex exec -`)
This is the closest match to “write in vim, save, run once with that file as the prompt.”
Codex CLI includes a built-in prompt editor hook:
- In the prompt input, press Ctrl+G to open the editor defined by `VISUAL` (or `EDITOR`). (OpenAI Developers)
So your workflow can be:
`codex` → Ctrl+G → vim opens → write several paragraphs → save/exit → prompt is sent.
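Assuming Codex honors `VISUAL`/`EDITOR` as the docs cited above describe, the setup mirrors the Claude Code one:

```bash
export VISUAL=vim
codex   # press Ctrl+G in the prompt input to draft the long message in vim
```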
Codex’s docs describe using @ path autocomplete and also attaching files with /mention in the CLI. (OpenAI Developers)
In the Codex CLI, @ brings up a fuzzy file search and inserts the selected path into your message. (OpenAI Developers)
So you can do something like:
Read @instructions.md and implement what it describes. Start with a plan.
And/or use /mention to attach the file per the workflow guidance. (OpenAI Developers)
If part of your firm’s workflow is GitHub Actions, the Codex GitHub Action supports providing inline text via prompt or pointing to a repo file via prompt-file. (OpenAI Developers)
If you want “instructions to start building the application” that should reliably apply across a repo/team:
- Claude Code: put the durable guidance in `CLAUDE.md` (and optionally import `@instructions.md` or split into `.claude/rules/`). (Claude Code)
- Codex: put the durable guidance in `AGENTS.md` (or configure fallback filenames so your existing `instructions.md` is picked up automatically). (OpenAI Developers)
If you want “one-off long prompt for a single run/session,” both tools support:
- Ctrl+G → open `$EDITOR` (vim) → write/save → send. (Claude Code)
- piping from stdin (Codex explicitly via `-`; Claude via piping content into `claude -p`). (OpenAI Developers)