
@brandonbryant12
Created December 15, 2025 16:09
Here is a detailed prompt you can use with an AI coding assistant (like Cursor, GitHub Copilot, or ChatGPT) to generate your full project scaffolding.
It is designed to force the AI to separate your business logic (the pipeline) from your API logic (FastAPI), which is critical for testing.
The Prompt to Copy & Paste
> I am refactoring a Python data pipeline script into a production-ready FastAPI service. I have already installed fastapi, pydantic, instructor, and openai.
> The Goal:
> Create a strictly typed, async API service that processes messages. The pipeline flow is: Input (SQS-style payload) -> Pre-processing -> Azure OpenAI Analysis (via Instructor) -> Database Write (mocked for now).
> Please generate the code for the following project structure:
> * app/core/config.py:
> * Use pydantic-settings to manage environment variables (Azure Endpoint, API Key, Deployment Name).
> * app/models/schemas.py:
> * Create a PipelineInput model (simulating an SQS message body).
> * Create an AnalysisResult model (the structured output we want from the LLM) with fields like summary, sentiment, and tags.
> * app/services/llm_client.py:
> * Initialize the AsyncAzureOpenAI client (the async variant, so the LLM call can be awaited).
> * Wrap it using instructor.from_openai().
> * Create an async function analyze_text(text: str) -> AnalysisResult that uses the patched client to return the specific Pydantic model.
> * app/services/pipeline.py:
> * Create a function process_message(input_data: PipelineInput).
> * It should coordinate the steps: validate input -> call analyze_text -> print "Saving to DB" (stub).
> * app/api/routes.py:
> * Create a standard APIRouter.
> * Add a POST /trigger endpoint that accepts PipelineInput.
> * Use BackgroundTasks to run the pipeline so the API doesn't block.
> * app/main.py:
> * Initialize the FastAPI app and include the router.
> Constraints:
> * Use Python type hints everywhere.
> * Make the LLM call async.
> * Ensure the Azure client uses os.getenv or the config settings.
>
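To make the target concrete, here is a sketch of what app/models/schemas.py might look like once generated. The prompt only names the summary, sentiment, and tags fields; everything else (PipelineInput's field names, the exact types) is an assumption for illustration:

```python
# app/models/schemas.py -- illustrative sketch; field names on PipelineInput
# (message_id, body) are assumptions, not part of the original prompt.
from pydantic import BaseModel, Field


class PipelineInput(BaseModel):
    """Simulates an SQS message body."""
    message_id: str
    body: str


class AnalysisResult(BaseModel):
    """The structured output we want back from the LLM."""
    summary: str
    sentiment: str = Field(description="e.g. positive / negative / neutral")
    tags: list[str] = []
```

Because Instructor passes AnalysisResult as the response_model, the same class validates both the LLM output and any API responses that return it.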
What this prompt gives you
When you run this prompt, you will get a structure that solves your specific pain points:
* app/core/config.py: This removes hardcoded credentials. It allows you to switch between Dev/Prod Azure instances easily.
* app/services/llm_client.py: This isolates the "Instructor" logic. If you ever switch from Azure to standard OpenAI or Anthropic, you only change this one file, not your API routes.
* app/models/schemas.py: This is your "Single Source of Truth." Both your API validation and your LLM validation will use these same class definitions.
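The coordination layer in app/services/pipeline.py can be sketched as below. To keep this example self-contained and runnable, the Pydantic models are stood in by dataclasses and analyze_text is a stub (in the real project it would be the Instructor-backed call in llm_client.py):

```python
import asyncio
from dataclasses import dataclass, field


# Stand-ins for the Pydantic models in app/models/schemas.py, kept as
# dataclasses so this sketch runs without external dependencies.
@dataclass
class PipelineInput:
    message_id: str
    body: str


@dataclass
class AnalysisResult:
    summary: str
    sentiment: str
    tags: list[str] = field(default_factory=list)


async def analyze_text(text: str) -> AnalysisResult:
    # Stub for the Instructor-patched LLM call in llm_client.py.
    return AnalysisResult(summary=text[:50], sentiment="neutral")


async def process_message(input_data: PipelineInput) -> AnalysisResult:
    # validate input -> call analyze_text -> stubbed DB write
    if not input_data.body.strip():
        raise ValueError("empty message body")
    result = await analyze_text(input_data.body)
    print(f"Saving to DB: {input_data.message_id}")  # DB write stub
    return result
```

Because process_message takes plain typed inputs and has no FastAPI imports, it can be unit-tested with asyncio.run() and a stubbed analyze_text, without starting a server.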
One missing piece (Dependency)
The prompt asks for pydantic-settings (the industry standard for config), which is separate from the main Pydantic library. You should add it now:
uv add pydantic-settings
Next Step:
Once you generate this code, would you like me to show you how to set up the .env file so the Azure client actually connects?
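For reference, a minimal .env sketch might look like this (the variable names are assumptions; they must match whatever field names your generated config.py uses, and the file should be listed in .gitignore):

```
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_API_KEY=your-key-here
AZURE_OPENAI_DEPLOYMENT=your-deployment-name
```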