Using LLMs to Analyze a Large C++ Codebase for Defects
Challenges in Large-Scale Code Scanning
Scanning a decade-old C++ codebase for subtle defects (like an upstream variable assignment in an exception branch causing downstream failures) is non-trivial. Large Language Models (LLMs) can understand code logic well when given the right context, but providing that context across a huge codebase is challenging (levelup.gitconnected.com). A naive approach (e.g. dumping all code into a prompt) is impossible due to context length limits, and pure semantic search may miss critical flows. The key is to break down the analysis using structured techniques so the LLM sees the relevant call chains and data flows without hallucinating or overlooking dependencies. Recent studies and experiences indicate that combining static code structure with LLM reasoning is essential for accurate results (dev.to, siquick.com).
Graph-Based Approaches (Call Graphs & Knowledge Graphs)
One effective strategy is to build a knowledge graph of the codebase – essentially a call graph and dependency graph that maps out functions, classes, and their relationships. This structured context gives the LLM a “map” of how data and calls flow through the system. In fact, many have found that representing code as a graph of function calls and imports is a game-changer for repository-level understanding (dev.to). Tools like Nuanced (open-source) generate precise call graphs to ground LLM analysis in reality (siquick.com). By modeling actual control flow and function relationships, a call graph prevents the LLM from guessing links between functions – it knows which functions call which, so it won’t hallucinate nonexistent dependencies (siquick.com). This yields more accurate code insights and fewer missed edge cases. Several frameworks implement this approach:
Graph-Code (code-graph-rag) – an open-source project by Vitali Avagyan – uses Tree-sitter to parse multi-language code (including full C++ support) into an interconnected graph stored in a graph database (github.com). It then lets you query the codebase with natural language by translating queries into graph queries. For example, you can ask which functions call a particular function, trace the call chain of a module, or find where a variable is modified and used. Because it’s backed by a real call graph, the answers are precise (no fuzzy matching). Graph-Code also supports semantic code search (via embeddings) to complement the structural graph (github.com), and even interactive code refactoring with LLM guidance. This kind of Graph-RAG (Retrieval-Augmented Generation with a graph) approach explicitly provides the call relationships an LLM needs to understand cross-function effects.
Cntxt – another open-source tool – takes a similar “code knowledge graph” approach. It distills a codebase into a concise graph of relationships and dependencies, acting like “cliff notes” for the LLM (reddit.com). By focusing on function-call relationships instead of raw code text, Cntxt can reduce the token footprint by ~75% while preserving the architectural context (reddit.com). In practice, this means the LLM sees only the essential skeleton of the program: which components talk to each other, and high-level signatures. This eliminates noise and boosts precision in analysis (reddit.com) – the LLM can trace workflows and logic faster, and provide insights on architecture or potential problem areas given the structured map of the code.
Nuanced – a tool built by former GitHub code-intelligence engineers – specifically builds call graphs to enrich LLM code analysis (siquick.com). They found that when an LLM is grounded with the real function call structure, it dramatically improves code reviews, test generation, and refactoring suggestions (siquick.com). The LLM no longer has to assume how functions connect – Nuanced’s call graph provides the exact execution paths, so the AI’s answers become “grounded in reality instead of guesswork.” Nuanced can be invoked via CLI (to index a codebase and fetch a function’s context) or even used as a backend service that tools like Cursor or VS Code call into. While Nuanced’s initial focus was on languages like Python/TypeScript, the core idea applies to C++ as well: you could integrate a C++ call graph generator (for example, using Clang or a Tree-sitter C++ parser) to feed the relationships into an LLM.
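To illustrate that last point, here is a minimal sketch of extracting per-function call edges from a C++ file with the py-tree-sitter and tree-sitter-cpp packages. The binding API differs slightly between versions, and real C++ call resolution (templates, overloads, virtual dispatch) needs Clang-level analysis, so treat this as a rough starting point rather than a precise call graph:

```python
# Sketch: per-function call edges from a C++ source file using Tree-sitter.
# Assumes `pip install tree-sitter tree-sitter-cpp`; constructor details vary by version.
import tree_sitter_cpp
from tree_sitter import Language, Parser

parser = Parser(Language(tree_sitter_cpp.language()))

def extract_call_edges(source: bytes) -> dict[str, list[str]]:
    """Map each function definition to the callee names that appear inside it."""
    tree = parser.parse(source)
    edges: dict[str, list[str]] = {}

    def walk(node, current=None):
        if node.type == "function_definition":
            decl = node.child_by_field_name("declarator")
            # Crude name extraction: declarator text up to the first '('.
            current = decl.text.decode().split("(")[0].strip() if decl else "<anon>"
            edges.setdefault(current, [])
        elif node.type == "call_expression" and current is not None:
            callee = node.child_by_field_name("function")
            if callee is not None:
                edges[current].append(callee.text.decode())
        for child in node.children:
            walk(child, current)

    walk(tree.root_node)
    return edges

# Example: edges = extract_call_edges(open("module.cpp", "rb").read())
```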
Why graphs? Code has lots of explicit structure (function calls, class hierarchies, exceptions) that a vanilla embedding search might not capture. A Retrieval-Augmented Generation (RAG) pipeline based purely on semantic similarity can fail to retrieve the truly relevant code pieces if they don’t share obvious keywords with the query (medium.com). (For instance, a bug caused by a mis-set flag in an error handler might not be discovered by keyword search, since the cause and effect live in different places with different terminology.) A call graph approach ensures that if function A ultimately triggers function B (perhaps only in an error path), the LLM can be made aware of that chain. Chinmay Bharti (CodeAnt AI) put it well: for code analysis, we need exact dependency info – if A calls B, we must know that with 100% certainty, rather than relying on fuzzy matches (levelup.gitconnected.com). By traversing the call graph, you can collect the source of A, the source of B, and any intermediate functions, then present that combined context to the LLM for analysis of the whole flow. This is crucial for detecting the kind of cross-module bugs you described.
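To make the “traverse the graph and collect the sources” step concrete, here is a minimal sketch. It assumes you have loaded call edges (for example, from the Tree-sitter sketch above) into a networkx DiGraph and have a hypothetical source_of lookup from function name to source text:

```python
# Sketch: gather the source of every function on the call paths between a
# suspect upstream function and a downstream one, so the whole flow can be
# handed to an LLM in a single prompt. `source_of` is a hypothetical lookup.
import networkx as nx

def build_flow_context(call_graph: nx.DiGraph, source_of: dict[str, str],
                       upstream: str, downstream: str, max_paths: int = 5) -> str:
    functions: set[str] = set()
    for i, path in enumerate(nx.all_simple_paths(call_graph, upstream, downstream)):
        if i >= max_paths:          # keep the context bounded
            break
        functions.update(path)
    return "\n\n".join(f"// --- {name} ---\n{source_of[name]}"
                       for name in sorted(functions) if name in source_of)

# prompt = ("Does any value set in an exception path of foo() cause a failure "
#           "in bar()?\n\n" + build_flow_context(graph, sources, "foo", "bar"))
```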
Embedding-Based Approaches (RAG for Code)
Another avenue is using embeddings and vector search to help the LLM retrieve relevant snippets from the codebase. This is the classic Retrieval-Augmented Generation approach: you index the code by breaking it into chunks (e.g. functions or file segments), compute vector embeddings for each chunk (using a code-aware embedding model), and store them in a vector database (FAISS, Qdrant, DeepLake, etc.). Then, given a query or a suspected issue, you embed the query and pull the nearest code chunks into the LLM prompt (python.langchain.com.cn). This method is good for semantic search over code. For example, you could ask in natural language, “Where in our codebase do we handle error X for variable Y?” and retrieve code sections that mention “error X” or variable “Y”. It’s language-agnostic and straightforward to implement with frameworks like LangChain, which provides components for text splitting, embedding models, and retrieval chains (python.langchain.com.cn). In fact, LangChain has a tutorial demonstrating analysis of its own codebase using GPT + DeepLake: they ingest all source files, chunk them, embed them, and then enable Q&A chat over the code repository (python.langchain.com.cn). You could follow a similar recipe for your C++ codebase using an open-source embedding model (to stay on-prem). There are open embeddings like CodeBERT, UniXcoder, or Instructor-XL, or you could use your in-house LLM provider if it offers an embedding API.

However, pure embedding RAG has known limitations for complex cross-file issues. Retrieval is driven by lexical/semantic similarity to the query, so it might miss relevant context that isn’t obviously similar to the prompt wording (medium.com). For instance, if a downstream exception occurs because an upstream function sets a flag isReady=false in an obscure error case, the code that uses isReady might not textually resemble anything about the error. A simple similarity search might retrieve many occurrences of isReady assignments without knowing which one is relevant, or it might miss some because the link is indirect. To mitigate this, some advanced pipelines use multi-step retrieval: first do a broad search, then have the LLM analyze which additional pieces of information are needed, and do a second round of targeted search. One example is “Contextually-Guided RAG” (CGRAG), which runs an initial LLM pass to identify key concepts or missing pieces, then retrieves again using those clues (medium.com). In practice, CGRAG significantly improved accuracy on large projects by letting the LLM say “I likely also need to see the definition of class FooBar or the CSS that controls calendar highlighting,” then pulling those into context (medium.com). This is essentially an LLM agent helping the retrieval step.

For your use case (finding subtle bug flows), a hybrid approach could work: use embeddings to narrow down regions of the code (e.g. find all references to that variable or error code across files), then use a call graph or program analysis to connect the dots among those references. The vector search gives you candidate snippets, and the call graph gives you the actual path. You can automate this with an agent or scripting: for example, retrieve the top-N relevant chunks with embeddings, feed them into an LLM and ask “do these connect? If not, what function connects them?”, and if it identifies an intermediate function, do another retrieval on that function’s name, and so on. This would leverage your FAISS/embedding infrastructure on-prem and still remain automated.
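A minimal sketch of the on-prem indexing and retrieval step, using sentence-transformers and FAISS. The model name and the toy chunks are placeholders; in practice you would chunk by function and prefer a code-tuned embedding model:

```python
# Sketch: embed code chunks and retrieve the nearest ones for an LLM prompt.
# `pip install sentence-transformers faiss-cpu`; model choice is a placeholder.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # swap in a code-aware model

chunks = [                                         # toy stand-ins for real functions
    "void resetState() { try { load(); } catch (...) { isReady = false; } }",
    "void process() { if (!isReady) throw std::runtime_error(\"not ready\"); }",
]

emb = model.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])            # inner product == cosine (normalized)
index.add(np.asarray(emb, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [chunks[i] for i in ids[0]]

# retrieve("where is isReady set to false in an error handler?")
```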
LLM-Based Agents for Code Analysis
You asked about using a “coding agent” – indeed, some setups let an LLM act as an autonomous agent that navigates the codebase and tools. Using a library like LangChain (or Haystack, etc.), you can equip an LLM with abilities such as reading files, searching for text, calling a compiler or test runner, and so on. The agent can then iteratively decide: “I need to open file X,” “Now search for where function Y is called,” “Now examine that snippet for issues,” and so forth. This approach tries to mimic how a human engineer investigates a bug: following clues through the code. In practice, fully autonomous code agents are experimental but promising. For example, the open-source PR-Agent (from Qodo, formerly CodiumAI) uses chain-of-thought LLM reasoning to review pull requests – it fetches diffs and related code, analyzes possible bugs, and writes review comments automatically (reddit.com). That demonstrates an LLM agent identifying issues that static checks might miss, all without human input on each PR. One could envision a similar agent that, instead of a PR diff, is given a directive like “find logic errors across these modules” and then systematically goes through the repository. There are also VS Code extensions and prototypes where an AI agent documents or refactors code by reading multiple files (one Reddit user built an extension to document Python code using an AI agent traversing the codebase) (reddit.com). These usually rely on the agent querying some index or using an API like a Language Server (LSP) to gather information.

The challenge here is scale and reliability. An agent can easily get lost if the codebase is huge and it has no guiding structure – it might consume a lot of tokens reading irrelevant files or circling around the problem. To mitigate that, you often combine agents with the aforementioned tools. For instance, the agent could query a knowledge graph (“ask the graph for all functions that touch variable X”) and then only read those function definitions. This is actually the approach taken by the Graph-Code project’s CLI assistant: an orchestrator LLM takes your query, converts it to a graph query (with another model), gets precise results, and then possibly uses an LLM to summarize or act on those results (github.com). In effect, the agent’s “memory” is enhanced by structured queries and it doesn’t have to brute-force search everything.

If you attempt an agent from scratch with LangChain, you might start by giving it tools: a filesystem tool (to read code), a search tool (grep-like), maybe a compile/test tool (to run code or static analyzers), and of course the LLM itself. Then prompt it with an instruction like: “Find any potential bug where an exception is caused by a value set in a different function. You can search and read files as needed. Think step by step.” The agent would then plan steps, like searching for throw or std::exception usage, reading those functions, seeing what they depend on, etc. However, a caution: running an autonomous agent for a “long time without human intervention” (as you mentioned) can be tricky. Agents may get stuck or take non-optimal paths, and ensuring determinism and completeness is hard – you might need to impose some structure or checks (e.g. have it output intermediate findings for review, or limit loops). In summary, agentic approaches are powerful, but currently they work best when combined with structured knowledge (graphs/ASTs) or narrower tasks, rather than letting a GPT-4-class model roam free in a million lines of code.
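To make the tool-loop idea concrete without tying it to a specific framework, here is a bare-bones hand-rolled sketch. The call_llm parameter stands in for your in-house model API, and the JSON tool-call protocol is an assumption of this sketch, not any particular library’s interface; a LangChain agent wraps the same pattern with more robustness:

```python
# Sketch: a minimal agent loop with two tools (read a file, grep the tree).
# `call_llm` is a hypothetical wrapper around your in-house LLM endpoint.
import json
import pathlib
import re

def read_file(path: str) -> str:
    return pathlib.Path(path).read_text(errors="ignore")[:8000]   # cap context size

def grep(pattern: str, root: str = "src") -> str:
    hits = []
    for p in pathlib.Path(root).rglob("*.cpp"):
        for n, line in enumerate(p.read_text(errors="ignore").splitlines(), 1):
            if re.search(pattern, line):
                hits.append(f"{p}:{n}: {line.strip()}")
    return "\n".join(hits[:50])

TOOLS = {"read_file": read_file, "grep": grep}

def run_agent(task: str, call_llm, max_steps: int = 10) -> str:
    history = [f"Task: {task}\nReply with JSON: "
               '{"tool": "<name>", "arg": "<value>"} or {"answer": "<text>"}']
    for _ in range(max_steps):
        step = json.loads(call_llm("\n".join(history)))
        if "answer" in step:
            return step["answer"]
        observation = TOOLS[step["tool"]](step["arg"])
        history.append(f'Observation from {step["tool"]}: {observation}')
    return "Step budget exhausted without a conclusion."

# run_agent("Find a bug where an exception is caused by a value set in a "
#           "different function.", call_llm=my_inhouse_model)
```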
Static Analysis + LLM Hybrid
A very practical approach in 2025 is to augment traditional static analysis tools with LLM reasoning. Static analyzers (linters, code scanners) are great at systematically finding certain classes of issues (e.g. null pointer dereferences, unchecked errors, dangerous API usage) by analyzing the code paths. Their weakness is often flexibility and contextual understanding – they might report too many false positives, or not suggest a fix. LLMs, on the other hand, are good at understanding context and even generating code fixes, but (as pure AI) they might overlook an issue unless it’s explicitly in their prompt. Combining the two can give you the best of both worlds.

AutoFix is a good example of this hybrid. It’s an open-source tool that marries a static scanner with an LLM to not only detect but also auto-remediate vulnerabilities (lambdasec.github.io). Under the hood, AutoFix uses Semgrep (a rule-based static analyzer) to scan code for known problematic patterns, then for each finding it prompts a code-focused LLM (like StarCoder or Code Llama) to suggest a fix (lambdasec.github.io). Crucially, it feeds the LLM a targeted prompt containing the problematic code and a description of the issue (from the static analysis), so the model isn’t guessing – it has a specific defect to address (lambdasec.github.io). AutoFix then applies the fix and can re-run the analyzer to verify that the issue is resolved, iterating if necessary (lambdasec.github.io). The workflow, in short: static rules flag a vulnerability, a precise prompt asks the LLM to generate a patch, and a re-scan verifies the fix (iterating if needed). Such a workflow demonstrates how an LLM can assist in remediation once a bug is pinpointed by analysis.

While AutoFix is oriented toward security bugs (e.g. insecure function usage), the same pattern could be used for logical defects in C++ call chains. For instance, you could write a Semgrep rule (or use Clang’s static analyzer or CodeQL) to find “suspicious assignments in exception handlers” or any pattern you suspect. That static analysis would give you the candidate locations and relevant context (e.g. “in function foo(), in the catch block for exception E, variable X is assigned a value”). Then you feed that, along with the functions that use X later, into an LLM prompt: “The static analysis found that X is set inside a catch in foo(). Later, X is used in bar() without rechecking. Could this cause an error? Explain and suggest a fix.” The LLM can then reason about that flow and even draft a patch (perhaps ensuring X is reset or checked). This way, the heavy lifting of searching the codebase is done by the static tool, while the LLM handles the reasoning across the call chain and the creative part of proposing a solution.

Academia is also exploring this “neuro-symbolic” combination. A recent approach called IRIS proposes systematically combining static analysis with LLMs for whole-repository security reasoning (openreview.net). The idea is to use static analysis to extract a wealth of facts (like “function A can reach B with input X”), then let the LLM reason over those facts to find complex vulnerability paths that span multiple functions. In your scenario, a static tool might construct the program’s call graph and data-flow graph, and an LLM could analyze those to spot an anomaly (e.g., “along this particular path the variable isn’t initialized properly”). Early results show that LLMs alone struggle to build call graphs or do deep static analysis reliably (arxiv.org), but when given a structured representation of the code, they can reason about it quite effectively. This reinforces the point that LLMs shine when you feed them distilled, relevant information – whether that’s an AST, a graph, or a set of static warnings.
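A minimal sketch of that handoff, assuming the semgrep CLI is installed and you have written a rules file for the pattern you care about. The ask_llm parameter is again a hypothetical wrapper for your in-house model, and the field names follow Semgrep’s JSON output format:

```python
# Sketch: run Semgrep, then hand each finding to an LLM with a targeted prompt.
# Assumes `semgrep` is on PATH and `rules.yaml` contains your custom C++ rules.
import json
import subprocess

def semgrep_findings(rules: str, target_dir: str) -> list[dict]:
    out = subprocess.run(
        ["semgrep", "--config", rules, "--json", target_dir],
        capture_output=True, text=True)
    return json.loads(out.stdout)["results"]

def review_finding(finding: dict, ask_llm) -> str:
    location = f"{finding['path']}:{finding['start']['line']}"
    snippet = finding["extra"].get("lines", "")
    prompt = (f"Static analysis flagged this C++ code at {location}:\n\n{snippet}\n\n"
              "A value is assigned inside a catch block and may be used later in "
              "another function without rechecking. Could this cause a downstream "
              "failure? Explain and suggest a fix.")
    return ask_llm(prompt)

# for f in semgrep_findings("rules.yaml", "src/"):
#     print(review_finding(f, ask_llm=my_inhouse_model))
```

This mirrors, in a few lines, the scan-prompt-verify loop that AutoFix automates.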
Notable Tools & Frameworks to Consider
To summarize, here are some existing tools/frameworks (open-source unless noted) that you can explore for an on-premises solution:
Code-Graph-RAG (Graph-Code) – Multi-language code analysis toolkit that uses Tree-sitter to parse code into a graph database and enables natural-language queries about the code structure (github.com). Supports C++ out of the box, and provides an interactive CLI/agent to ask questions and even perform automated refactors with your approval. This could be a strong foundation for mapping your entire C++ codebase and querying call chains. Requirements: Python 3.12+, Docker (for the Memgraph graph DB), and optionally local LLMs or OpenAI/Google APIs for the query-agent component. It’s open source (MIT license).
AutoFix (LambdaSec) – Python-based tool that integrates Semgrep static analysis with code LLMs (StarCoder, etc.) to automatically find and fix issues (lambdasec.github.io). While its default rules target security flaws (and the example is Java), Semgrep does support C++ rules. You can write custom rules for patterns relevant to your code (or use community C++ rules) and let AutoFix suggest patches. It’s a great example of an LLM coding agent focused on remediation, which you could adapt for your needs. On-prem viability: yes, you can run it locally and even swap in your in-house model or OpenAI (it’s configurable to use different model APIs).
Semgrep + GPT (custom) – Even outside AutoFix, you can script a workflow where you run Semgrep (or CodeQL or any analyzer) to get a list of potential issues, then feed those to an LLM for deeper analysis. This wouldn’t be a single turnkey tool, but it’s an approach that can be automated with a bit of Python. For example, find all functions where a variable is set in a catch block and later used in another function; then prompt the LLM with those two code snippets together asking if there’s a risk. This could run periodically over your codebase as a batch job.
Cntxt – Lightweight knowledge-graph generator for code structure (by Brandon Docusen). It outputs the relationships (calls, imports, etc.) in a simple format, drastically reducing the tokens needed for context (reddit.com). While Cntxt has clients for Python/Java/JS/C#, you might need to extend it for C++ or use a similar approach. Essentially, it exploits the fact that an LLM can infer a lot from just function signatures and call links. Even without this exact tool, you could serialize your call graph (from a tool like GNU cflow or a Clang/Tree-sitter-based parser) into a JSON or text outline and have the LLM analyze that to spot odd call flows or dependencies (a small serialization sketch follows this list).
Nuanced – Call graph indexing tool (recently open-sourced) that can integrate with coding assistants. It’s particularly aimed at reducing hallucinations in code answers by providing “precise call graph context” (siquick.com). If Nuanced supports C++ or can be extended to it, it could be directly useful. Otherwise, its existence underscores the value of call graphs. (Nuanced is a standalone program with a CLI; it can output context for a given function or be called via API. It was used with models like GPT-4 in tools like the Cursor editor.)
LangChain or LlamaIndex – These are framework libraries rather than solutions, but they enable you to build a custom pipeline. For instance, using LangChain you can set up a Conversational Retrieval QA chain over your vector index of code (as shown in the DeepLake example) (python.langchain.com.cn). Or you can implement an Agent with Tools as discussed. LangChain supports tools for browsing files, running Python functions, etc., which could be harnessed in an agent loop that inspects your code. This requires more engineering on your part but offers flexibility to experiment (e.g., try an agent that first retrieves by embedding, then verifies via call graph queries).
Traditional Static Analyzers – While not LLM-based, it’s worth noting: tools like clang-tidy, the Clang Static Analyzer, Coverity, CodeQL, etc., can be run on your codebase to flag many issues (memory errors, misuse of APIs, etc.). If you haven’t already, you might use them to produce an initial list of warnings, then feed those warnings to an LLM to prioritize or explain them. This is a semi-automated approach, but it can quickly surface the obvious bugs (saving the LLM for the trickier logic threads). CodeQL, in particular, can do deep query-based analysis across C++ call chains: you write a query to detect a pattern (like “value assigned in a catch and used elsewhere”) and it finds all instances. It’s the engine behind GitHub’s code scanning; its CLI is free for open-source projects, while scanning private/commercial code generally requires a GitHub Advanced Security license. The LLM could supplement CodeQL by interpreting results or handling cases that are hard to formalize in a query.
Commercial AI-assisted Reviewers (FYI): There are proprietary services like Amazon CodeGuru Reviewer (which uses ML to find issues in code reviews, though primarily Java/Python), Microsoft’s Security Copilot (still emerging, combines GPT-4 with security tooling), and GitHub Copilot Chat (which can answer questions about your code in VSCode). These can sometimes find issues or at least help explain code, but they send code to the cloud which you indicated is sensitive. Unless your leadership allows it, on-prem open-source solutions are safer. If down the line budget appears, one could consider something like OpenAI GPT-4 32k or Anthropic Claude 100k to analyze large chunks of code at once – but even they would require chunking the code and careful prompt management; they’re not magic one-click solutions for a whole repo. Given the uncertainties, focusing on an in-house pipeline with open-source LLMs (which you already have access to) is a more controllable path.
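As referenced in the Cntxt entry above, here is a minimal sketch of serializing a call graph into a compact outline an LLM can read. It assumes a call-edge mapping like the one from the Tree-sitter sketch earlier, plus a hypothetical signatures lookup:

```python
# Sketch: turn call edges and signatures into compact "cliff notes" JSON,
# so an LLM sees structure instead of full source text.
import json

def call_graph_outline(edges: dict[str, list[str]],
                       signatures: dict[str, str]) -> str:
    return json.dumps(
        [{"function": name,
          "signature": signatures.get(name, ""),
          "calls": sorted(set(callees))}
         for name, callees in sorted(edges.items())],
        indent=2)

# outline = call_graph_outline(edges, {"foo": "void foo()", "bar": "int bar(int)"})
# prompt = "Given this call-graph outline, which flows look fragile?\n" + outline
```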
Implementation Strategy and Next Steps
Given the above, a practical way forward could be:
Index and parse the codebase: Start by generating a structural index of your C++ code. You could use a Tree-sitter C++ parser or Clang’s libTooling to extract a list of functions, their definitions, call references, and perhaps global variables. Tools like code-graph-rag automate a lot of this (parsing into a graph DB) – you might try it out on a subset of your code to see the output. Even running a simpler tool like GNU cflow or Doxygen to produce a call graph can be informative. The goal is to have a machine-readable map of function calls and maybe an AST for each function. This will let you identify candidate problematic call chains systematically.
Set up an LLM environment: Since you have an in-house LLM provider with open models, decide on which model to use for analysis. For code understanding tasks, something like Code Llama or StarCoder (15B) or their instruct-tuned variants would be suitable if GPT-4 is not available. Ensure you can query it programmatically (LangChain can interface with HuggingFace local models or via your provider’s API). You’ll likely need to craft prompts that give the model enough context (maybe a few functions at a time) and ask the right question (e.g., “Analyze if any issue arises in this call chain”).
Define target issue patterns: It helps to narrow down what kinds of defects you’re hunting for first. For example, “unhandled error propagation”, “use of uninitialized variables across functions”, “exception misuse”, etc. You could encode some of these as static queries (like grep or AST patterns) to flag potential sections. This list of candidates is input to the LLM. Essentially, treat the LLM like a smart reviewer you invite after you’ve done an initial scan. This focuses its attention where it’s most needed, rather than blindly reading everything.
Build a pipeline (small scale): Try a small prototype on one module or a few files. For instance, pick a known tricky part of the code and run through a pipeline: extract its call graph, have the LLM analyze it. Use LangChain or simple scripts to automate the steps. For example:
Use Tree-sitter to get function A’s source and the source of all functions A calls (directly or 1-2 levels deep).
Feed all that into a prompt asking “Could any value or state set in these functions cause an exception or error in a downstream function? If so, where and why?” (a sketch of this step follows the numbered list).
See if the LLM catches the kind of issue you expect. This will tell you whether the model and prompt are effective, or whether you need to provide more guidance (like intermediate questions).
Iterate and expand: Once you have a working method for a subset, you can scale it. If using a graph DB (like Memgraph with code-graph-rag), you can ingest the whole repo and then query it systematically. If using embedding search, make sure to chunk the entire repo and have a robust retrieval function (with filtering by file, etc., to avoid irrelevant hits). You might also orchestrate multiple “agents”: e.g. one agent goes broad (identifies candidate problem spots using static logic), another agent goes deep (pulls the relevant code and reasons about it with the LLM). This divide-and-conquer can keep each LLM invocation focused.
Automate in CI or offline: Ultimately, you’d want this to run regularly without human intervention, as a sort of AI code auditor. That’s feasible – for example, code-graph-rag has a watch mode to update the graph as code changes (github.com), and you could script daily runs of an analysis query. Ensure you have logging and perhaps rate limits to prevent runaway API calls (if using external models). On a cloud instance or a beefy M-series Mac, a local 13B–34B model can run the analysis agent, though speed may be a consideration (you might batch analyses overnight if using a smaller machine).
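As referenced in step 4, a minimal sketch of the prototype’s analysis step, assuming the call-edge mapping and source_of lookup from the earlier sketches; ask_llm is again a hypothetical wrapper around your chosen model:

```python
# Sketch: collect a function plus its callees (1-2 levels deep) and ask the LLM
# whether anything set upstream can break something downstream.
def analyze_chain(root: str, edges: dict[str, list[str]],
                  source_of: dict[str, str], ask_llm, depth: int = 2) -> str:
    seen, frontier = {root}, [root]
    for _ in range(depth):
        frontier = [c for f in frontier for c in edges.get(f, []) if c not in seen]
        seen.update(frontier)
    context = "\n\n".join(f"// --- {name} ---\n{source_of[name]}"
                          for name in sorted(seen) if name in source_of)
    return ask_llm("Could any value or state set in these functions cause an "
                   "exception or error in a downstream function? If so, where "
                   "and why?\n\n" + context)

# report = analyze_chain("HandleRequest", edges, source_of, ask_llm=my_inhouse_model)
```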
By focusing on open-source and on-prem tools, you keep everything in-house, which suits your sensitive code. Projects like Graph-Code, AutoFix, etc., are free and well-regarded in the developer community – showing that even without a commercial product, you can assemble a powerful toolkit. In fact, much of this cutting-edge capability is coming from open-source innovation; the community is rapidly creating tools for exactly this kind of code-understanding problem.

In summary, yes, it’s a large task, but you can break it down. Use static methods (parsers, graphs) to constrain the problem space, use embeddings for flexible search, and use LLMs in a targeted way (either through guided prompts or as an agent with tools) to do the reasoning and pattern recognition that static tools can’t. Given that C++ is your focus, lean on its strong typing and structured nature – a call graph in C++ can be quite precise if you handle templates and virtual calls appropriately. With an approach that combines these elements, you stand a good chance of uncovering those deep, cross-call bugs that have been lurking in the codebase. Good luck, and happy building!

Sources:
Bharti, C. “How Call Graphs Gave Our LLM Superhuman Code Review Context.” Level Up Coding, Jul 2025. (Discusses using call graphs to provide LLMs with complete function-dependency context, instead of relying solely on semantic retrieval.) (levelup.gitconnected.com)
Lambdasec (Mustafa Bykologlu). “AutoFix: Automated Vulnerability Remediation using Static Analysis and LLMs.” Lambda Security Blog, 2023. (Introduces AutoFix, which combines Semgrep static analysis with code LLMs like StarCoder to detect and automatically fix vulnerabilities.) (lambdasec.github.io)
Avagyan, V. code-graph-rag (Graph-Code). GitHub, 2023–2025. (Open-source project that parses codebases with Tree-sitter into a knowledge graph stored in Memgraph, enabling precise queries and AI-driven codebase editing; supports C++ among other languages.) (github.com)
Quick, S. “AI Tools I use: Nuanced.” siquick.com blog, Nov 2025. (Describes Nuanced, an open-source local tool that builds call graphs to give both engineers and LLMs a true understanding of code behavior, reducing hallucinations by grounding analysis in actual control flow.) (siquick.com)
reddit/r/LLMDevs. “Cntxt – codebase transformed into a knowledge graph for LLM insights.” (Announcement post by the author of Cntxt, highlighting how mapping code relationships boosts analysis precision and cuts context size by focusing the LLM on key workflows.) (reddit.com)
Adams, C. “How to generate accurate LLM responses on large code repositories (Introducing CGRAG in dir-assistant).” Medium, May 2024. (Explores limitations of basic RAG on code and proposes a two-pass, LLM-guided retrieval approach; provides examples where single-step embedding search misses relevant code, and how an LLM can identify missing pieces to fetch.) (medium.com)
Kovacik, M. “Can AI Help with Repository Code Understanding?” DEV.to, Jun 2024. (Discusses the use of knowledge graphs, RAG, and agents for code comprehension at scale; notes that knowledge graphs of code relationships were a “game-changer” in their project, combined with ASTs and RAG, while cautioning that there’s no magic one-click solution.) (dev.to)
Qodo AI. PR-Agent – “AI-Powered Tool for Automated Pull Request Analysis.” GitHub, 2023. (Open-source agent that reviews PRs using chain-of-thought LLM reasoning; an example of an autonomous coding agent catching bugs in code changes.) (reddit.com)