
@rma92
Created February 5, 2026 18:59
Claude Code and Codex using a long instructions file.

user

Do Claude Code or Codex support a way to pass in a very long message from a file? For example, I want to provide the instructions to start building the application. It's several paragraphs, so I want to write it with vim, save it, then tell Claude or Codex to read the file and follow the instructions.

Does this work? Can you find anything online? Ideally try to cover both tools.

Approaches that seem reasonable are:

  • Writing the file, say, instructions.md or instructions.txt in the directory and telling the agent to read it and build what is described.
  • Pasting the contents of the file into the interactive prompt.

I'm open to other options. Please try to provide commentary for both tools, as we use both of them at my firm. Thanks for your help, you're the best.

agent

Thought for 9m 59s, ChatGPT 5.2 Pro Thinking

Yes. Both Claude Code and OpenAI Codex (especially the Codex CLI) support workflows where the “long instructions” live in a file you edit with vim, and you either (a) reference/attach that file, (b) have the tool automatically load it as project instructions, or (c) pipe it in via stdin.

Claude Code

1) Put instructions.md in the repo and reference it from your prompt

Claude Code supports @ file references that include the full content of the file in the conversation. Example:

Build the application described in @instructions.md.
Start by summarizing requirements and proposing a plan, then implement.

This pattern is explicitly documented (“This includes the full content of the file in the conversation.”). (Claude Code)

2) Make it persistent with CLAUDE.md (recommended for “project boot instructions”)

Claude Code has a built-in “memory” mechanism: CLAUDE.md (and related locations) are automatically loaded into context when Claude Code launches. (Claude Code)

Useful extras:

  • CLAUDE.md can import additional files using @path/to/import syntax, so the large spec can live in instructions.md while CLAUDE.md stays short, e.g. “See @instructions.md”. (Claude Code)
  • For large projects, you can break guidance into .claude/rules/*.md, which are automatically loaded as project memory. (Claude Code)

This is the closest Claude Code equivalent to “check in a spec file and have the agent consistently follow it.”
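
A minimal CLAUDE.md in this style can be just a couple of lines; the @ import pulls the full spec into context when Claude Code launches:

Project spec: @instructions.md
Build the application described there; propose a plan before implementing.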

3) Pipe the file in non-interactively

Claude Code’s CLI supports piping file contents into a -p run:

cat instructions.md | claude -p "Follow the above instructions and implement the app."

This is documented as “Process piped content.” (Claude Code)

4) Draft the long prompt in vim without copy/paste

In interactive Claude Code sessions, Ctrl+G opens your prompt in the default text editor (“Edit your prompt or custom response”). If your environment is set to use vim, this is essentially “press a key, write paragraphs in vim, save, send.” (Claude Code)
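
If you want that editor to be vim, set your shell's editor variable before launching; the docs only say "default text editor," so treating EDITOR/VISUAL as the variables it honors is an assumption:

export EDITOR=vim    # assumption: Claude Code resolves the Ctrl+G editor from EDITOR/VISUAL
claude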

5) (Optional) Load instructions from a file as system-prompt text in print mode

If what you want is closer to “rules the agent should always follow for this run,” Claude Code also supports:

  • --system-prompt-file (replace the system prompt) and
  • --append-system-prompt-file (append to the default system prompt), both documented as print-mode-only options. (Claude Code)

This is useful for “policy / style / constraints” that you want treated as higher-priority than a normal user message.
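
A sketch of what that looks like in a print-mode run; coding-policies.md is a hypothetical file holding the standing rules:

# append firm-wide rules to the default system prompt (coding-policies.md is illustrative)
claude -p "Implement the app described in instructions.md" --append-system-prompt-file coding-policies.md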


Codex (OpenAI Codex CLI / IDE extension)

1) Put instructions in AGENTS.md (recommended)

Codex has first-class support for project instruction files via AGENTS.md:

  • “Codex reads AGENTS.md files before doing any work.” (OpenAI Developers)
  • It builds an instruction chain with clear precedence: a global ~/.codex/AGENTS.md first, then AGENTS.md files along the project path, merged in that order. The combined size is capped by default (project_doc_max_bytes, 32 KiB). (OpenAI Developers)

Practically, this means you can:

  • Commit AGENTS.md at the repo root with your “start building the application” instructions.
  • Optionally add ~/.codex/AGENTS.md for firm-wide defaults and layer repo specifics on top. (OpenAI Developers)

If you already have instructions.md and don’t want to rename it, Codex can be configured to treat alternate filenames as instruction sources via project_doc_fallback_filenames. (OpenAI Developers)
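
A sketch of that configuration, assuming the usual ~/.codex/config.toml location and a simple array value (check the Codex config reference for the exact syntax):

cat >> ~/.codex/config.toml <<'EOF'
# assumed syntax: also treat instructions.md as a project instruction file
project_doc_fallback_filenames = ["instructions.md"]
EOF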

2) Pipe the long prompt from a file (non-interactive)

Codex CLI supports reading the initial prompt from stdin by using - for the prompt:

  • The PROMPT argument accepts either a string or - (read from stdin); the help text says “Use - to pipe the prompt from stdin.” (OpenAI Developers)

Example:

codex exec - < instructions.md

(or equivalently cat instructions.md | codex exec -)

This is the closest match to “write in vim, save, run once with that file as the prompt.”

3) Draft/edit long prompts in vim inside the Codex TUI

Codex CLI includes a built-in prompt editor hook:

  • In the prompt input, press Ctrl+G to open the editor defined by VISUAL (or EDITOR). (OpenAI Developers)

So your workflow can be:

  1. codex
  2. Ctrl+G → vim opens → write several paragraphs → save/exit → prompt is sent.
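
To make step 2 open vim specifically, set the variable Codex checks before launching:

export VISUAL=vim    # Codex falls back to EDITOR if VISUAL is unset
codex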

4) Attach/read repo files in interactive sessions

Codex’s docs describe referencing repo files with @ path autocomplete and attaching them with /mention in the CLI: typing @ brings up a fuzzy file search and inserts the selected path into your message. (OpenAI Developers)

So you can do something like:

Read @instructions.md and implement what it describes. Start with a plan.

And/or use /mention to attach the file per the workflow guidance. (OpenAI Developers)

5) (If you run Codex in CI) prompt-file

If part of your firm’s workflow is GitHub Actions, the Codex GitHub Action supports providing inline text via prompt or pointing to a repo file via prompt-file. (OpenAI Developers)


Practical recommendation

If you want “instructions to start building the application” that should reliably apply across a repo/team:

  • Claude Code: put the durable guidance in CLAUDE.md (and optionally import @instructions.md or split into .claude/rules/). (Claude Code)
  • Codex: put the durable guidance in AGENTS.md (or configure fallback filenames so your existing instructions.md is picked up automatically). (OpenAI Developers)

If you want “one-off long prompt for a single run/session,” both tools support:

  • Ctrl+G → open $EDITOR (vim) → write/save → send. (Claude Code)
  • piping from stdin (Codex explicitly via -; Claude via piping content into claude -p). (OpenAI Developers)