A comprehensive guide based on a conversation with Johannes Doss, VP of Code Security at Sonar
Security responsibility in software development has undergone a fundamental transformation over the past two decades. Johannes Doss, who has spent 20 years in cybersecurity—from his early days playing capture-the-flag competitions to professional penetration testing and now leading code security at Sonar—has witnessed this evolution firsthand.
His journey into security began personally: his computer got infected with the Sasser worm, sparking both frustration and intrigue about how someone could gain access to his machine. This led him down a path of security exploration, eventually studying IT security in Bochum, Germany, and competing in hacking competitions where university teams would try to hack each other in isolated environments.
Twenty years ago, security was clearly owned by dedicated security teams. The software development lifecycle was much slower—quarterly releases were the norm. Before each release, security teams would conduct a final audit, give their blessing, and only then would the software ship to production. This compliance-driven model worked because the pace allowed for it.
Today's reality is drastically different. Modern development teams release multiple times per day, sometimes per hour. AI coding assistants accelerate development even further. You simply cannot have a disconnected security review process in this environment. The old model breaks down completely when applied to contemporary development speeds.
The argument for developer ownership is straightforward and logical:
- Every software vulnerability manifests in code - Security issues aren't abstract concepts; they're concrete problems in your codebase
- Developers are the only ones writing and changing code - No one else in the organization touches the actual implementation
- Developers are the only ones who can fix security issues - Even if a security team identifies a problem, they need developers to resolve it
- Great education and tools are now available - Developers have access to resources that didn't exist before
This doesn't make security teams obsolete. Rather, it redefines their role. Security teams should focus on the broader application security landscape: compliance requirements, organization-wide security initiatives, penetration testing, handling vulnerability reports from external researchers, and providing specialized expertise in areas like cryptography or authentication logic.
Think of it like the relationship between platform teams and feature teams. Platform teams build the infrastructure and provide expertise, but feature teams own their implementations. Security teams should be the specialized experts that developers can consult when needed, while developers handle the day-to-day security hygiene of their code.
Code security is a subset of the broader application security field. Code security focuses specifically on making sure your code is free of vulnerabilities that attackers can exploit to access data or compromise your systems. Application security encompasses code security plus many other concerns: network security, cloud security, data security, compliance, penetration testing, and more.
The complexity lies in defining what "free of security issues" actually means. It's not just about traditional vulnerabilities like SQL injection. Consider these examples:
- A null pointer exception that crashes your application puts it in an unintended state that attackers might exploit
- Memory corruption in C/C++ enables buffer overflow attacks where attackers can execute arbitrary code on your server
- A file upload feature for profile pictures that doesn't restrict file types could let attackers upload and execute a web shell
- Slow or insecure regular expressions can enable denial-of-service attacks
In essence, code security overlaps heavily with code quality. Security issues are bugs—things you forgot about in your code or misspecified requirements. They're technical debt that needs to be addressed just like any other bug in your backlog.
This might sound obvious, but it's how security experts find vulnerabilities in your code. They look for corner cases and edge cases that you may have overlooked. In the era of AI-accelerated development, using extensive libraries and copying code snippets, it's increasingly easy to lose track of exactly what your code does and how it interacts with the rest of your codebase.
Best practice: When working on security-sensitive features, consciously look through the eyes of an attacker. Ask yourself: What could an attacker do here? How could they modify inputs or behavior to break assumptions?
The industry has discussed input validation for decades, yet it remains one of the most critical security principles. Never trust any input, from any source.
Input isn't just obvious form fields. It includes:
- GET and POST parameters
- Cookies
- HTTP headers
- File uploads
- External API responses
- Database query results (when that data originated from users)
- Even indirect inputs like YouTube video titles if you're parsing them
Think carefully about where all external input is used in your application:
- File operations (could allow reading arbitrary files)
- SQL queries (SQL injection)
- HTML responses (cross-site scripting)
- System commands (command injection)
- LDAP queries (LDAP injection)
These vulnerabilities have existed for decades, and according to Sonar's analysis of 8 billion lines of code from 1 million developers across 40,000 organizations, they're still the most common security issues today.
Secret leaks are involved in many high-profile data breaches. Developers often hardcode credentials "temporarily" for testing purposes:
- API tokens
- Database passwords
- Cryptographic keys
- OAuth client secrets
- Private keys
Why this is dangerous:
- Attackers actively crawl public GitHub repositories looking for secrets
- Even if you delete the code, secrets persist in git history
- Automated tools can find and test these secrets within minutes of exposure
The rule is absolute: All secrets belong in environment variables or secret management systems (like AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault), never in code.
According to Sonar's State of Code Security report analyzing billions of lines of code, the most prevalent security issues are:
- SQL Injection - Unsanitized user input in database queries
- Cross-Site Scripting (XSS) - User input rendered in HTML without proper encoding
- Hardcoded Passwords - Credentials in source code
- Path Traversal - File operations that allow accessing arbitrary files
- Regular Expression Denial of Service (ReDoS) - Inefficient regex patterns that attackers can exploit
For advanced security features like cryptography, authentication, password reset flows, or access control, use established, well-vetted frameworks and libraries trusted by the open-source community. These have been battle-tested and reviewed by security experts.
This is where security teams can provide tremendous value—helping you select the right libraries and frameworks for your security-sensitive implementations.
Code quality and code security are deeply intertwined, and this relationship is significantly underrated in the industry today.
Some quality issues directly create security vulnerabilities:
- Null pointer exceptions that crash your application
- Slow regular expressions enabling DoS attacks
- Memory leaks that make systems unstable
- Race conditions that create exploitable windows
Poor code quality impacts security in subtler but equally important ways:
Spaghetti code makes security issues invisible - When code is hard to read and poorly maintained, code reviewers are more likely to miss security problems. Complex, tangled logic obscures the data flow that security analysis requires.
Unmaintainable code delays security fixes - When a vulnerability is discovered, if your code is a mess, fixing it becomes much harder. The longer a vulnerability remains unfixed, the longer the "attacker window" stays open. In this sense, poor quality literally becomes a security issue because it prevents timely remediation.
Verbose code increases attack surface - More lines of code mean more places for things to go wrong. Based on the statistic of roughly one security issue per 1,000 lines of code, verbose implementations directly correlate with more potential vulnerabilities.
This becomes particularly relevant with AI-generated code, which tends to be verbose and lower quality—even when it contains fewer immediate security vulnerabilities.
Every developer uses an IDE, and most include basic linting that catches issues as you type. This is valuable for immediate feedback but has limitations:
- Typically focuses on syntactic and semantic checks
- Usually analyzes only the current file (for performance reasons)
- Lacks deep security-specific coverage
Some IDE extensions add security analysis, but they remain constrained by the need to run in milliseconds without slowing you down.
SAST tools provide much deeper analysis than IDE linting. They work by:
- Transforming your entire codebase into an abstract graph model - Every file, function, if-else statement, and function call becomes a node in this graph
- Simulating runtime behavior without executing code - Using techniques like symbolic execution and taint analysis
- Tracing data flow paths - Following how user input flows through your application
- Identifying security-sensitive sinks - Finding where that input reaches dangerous operations like database queries or file system access
This can discover incredibly complex vulnerabilities where user input flows through multiple files and dozens of function calls before reaching a security-sensitive operation.
Modern SAST tools can complete this analysis in minutes for what would take security experts days to find manually. This is a hard computer science problem—efficiently analyzing all possible code paths and data flows across a large codebase—but current tools have become impressively fast.
Critical distinction: Choose SAST tools built for developers, not security teams. Security-focused tools are designed to find every possible issue (high sensitivity) because security auditors want to "turn over every stone." For developers, this creates unbearable noise—you'll be interrupted constantly with low-probability findings. Developer-focused tools balance thoroughness with signal-to-noise ratio, flagging issues you can actually fix without derailing your productivity.
SCA tools address the dependency vulnerability problem. They:
- Scan your dependency manifest files (package.json, requirements.txt, pom.xml, etc.)
- Check dependencies against vulnerability databases - Including the CVE (Common Vulnerabilities and Exposures) database and other sources
- Alert you to known vulnerabilities - Telling you which versions are affected
- Suggest remediation - Recommending which versions to upgrade to
Why you need automation: There are over 200,000 CVEs in the database, with approximately 50 new ones added daily. Manual tracking is impossible. Even trying to stay current with vulnerabilities in your specific dependencies would be a full-time job.
The CVE program, historically run by MITRE and the U.S. government, is now evolving because of bottlenecks—there are simply too many vulnerabilities being reported. Modern SCA tools pull from multiple vulnerability databases beyond just CVE to ensure comprehensive coverage.
The key differentiator: Look for SCA tools that don't just detect issues but help you fix them. A tool that generates a massive backlog of unfixable issues is worse than useless—it creates security theater while providing false confidence.
These tools scan your codebase and git history for accidentally committed credentials. Given how common secret leaks are and how serious the consequences can be, secret detection should be part of every team's security toolkit.
Modern applications include infrastructure as code—Terraform files, CloudFormation templates, Kubernetes manifests, GitHub Actions workflows. These are code too, and they can contain security misconfigurations. Good SAST tools include IaC scanning as part of their static analysis.
DAST tools attempt to automate penetration testing. They:
- Test your running application from the outside (black-box testing)
- Send malicious payloads to see how your application reacts
- Look for error messages, delays, or unexpected behavior
- Attempt to exploit vulnerabilities the same way an attacker would
Why DAST is less developer-friendly: The feedback loop is much longer. You have to finish coding, deploy to a test environment, run the scan, then context-switch back to fix issues. This works better as a security team tool for additional verification rather than a developer tool for catching issues early.
Fuzzing is similar to DAST but typically used for compiled binaries, embedded software, and C/C++ applications. Fuzzers pass malformed or random input to your program, systematically "flipping every bit" in file formats or protocols to find crashes and undefined behavior. It's highly effective for finding memory corruption vulnerabilities but, like DAST, is better suited for security team use than daily developer workflows.
Choose tools that integrate into your development process, not tools that require separate security reviews. Security shouldn't be something that happens afterward or ad-hoc when the security team decides to run a scan. It should be part of the continuous development process, with developers getting immediate feedback and owning the remediation.
AI coding assistants have fundamentally changed the security landscape, introducing both new capabilities and new risks.
AI can generate working code in seconds, dramatically accelerating development. For straightforward tasks, this is genuinely helpful and has made many developers significantly more productive.
Sonar conducted extensive studies of popular LLMs (Claude, GPT-4, GPT-5, Llama, OpenCoder, etc.), analyzing the "personalities" of each—what kinds of issues they produce and what quality code they generate.
Key findings:
-
More verbose code - AI often produces more lines of code than necessary to solve a problem. GPT-5's reasoning mode actually produces fewer security issues but generates significantly more verbose code to do so.
-
Verbose code creates security problems - Remember: roughly one security issue per 1,000 lines of code. More code means more potential vulnerabilities, harder code reviews, and less maintainable systems.
-
Developer trust is low - A Stack Overflow survey found only 3% of developers trust their AI-generated code. This is healthy skepticism.
-
Quality issues compound - Low-quality, poorly structured AI code is harder to review, harder to maintain, and more likely to hide security issues. Even if the AI doesn't generate an immediate SQL injection, the poor structure makes future modifications risky.
Dependency hallucination (Slop squatting): AI sometimes suggests using libraries that don't exist. Attackers monitor AI-suggested package names and register them in npm or Maven Central, then inject malicious code. When developers blindly follow AI suggestions, they install backdoored packages.
This isn't an entirely new attack (dependency confusion existed before), but AI dramatically increases the prevalence because models regularly hallucinate package names, whereas humans rarely mistype dependencies.
Supply chain attacks via compromised developers: With AI agents and MCP (Model Context Protocol) servers gaining access to developer machines and internal systems, compromising a single developer becomes more valuable. An attacker who gains access to a developer's machine can:
- Compromise dependencies that developer maintains
- Inject backdoors into code being committed
- Access internal systems through the developer's credentials
- Use AI agents to propagate malicious code
Until now, developer machines were valuable targets, but with agents that can autonomously act on behalf of developers, the risk amplifies.
Prompt injection - the new code injection: As natural language becomes code through LLMs, prompt injection becomes the new attack vector. If your application uses an LLM in the backend and accepts user input that influences prompts, attackers can manipulate the system prompt or prompt engineering logic to change application behavior.
Just as SQL injection exploited the mixing of code and data in SQL queries, prompt injection exploits the mixing of instructions and data in natural language prompts.
Using AI to review AI-generated code is "like having students grade their own homework." If the AI generating code could prevent security issues, why would it detect them in review? This creates a fundamental problem:
You need deterministic, non-AI verification as a guardrail. AI security reviews are interesting for security research—finding novel, unexpected vulnerabilities in large codebases. But for systematic, reliable security verification that developers can depend on, you need deterministic analysis that produces consistent results.
- Don't blindly accept AI-generated code - Review it as carefully as you'd review a junior developer's code
- Watch for verbosity - Refactor AI code to be more concise and maintainable
- Verify dependencies exist - Don't just trust AI's package suggestions; verify they're legitimate
- Run security tools on AI code - Your SAST and SCA tools should analyze AI-generated code just like human-written code
- Maintain code quality standards - AI code should meet the same quality bar as human code
- Run an initial assessment - Use SAST and SCA tools to understand your current security posture
- Fix critical issues first - Don't try to fix everything at once; prioritize high-severity vulnerabilities
- Integrate security tools into your workflow - Add SAST to your CI/CD pipeline, enable IDE security extensions
- Learn the basics - Understand the OWASP Top 10, common vulnerability types, and how to prevent them
- Review regularly - Run assessments quarterly to track progress
- Establish security ownership - Make it clear that developers own code security
- Create security champions - Have developers who are particularly interested in security become go-to resources
- Bring in security expertise - Hire or contract security professionals for specialized knowledge, penetration testing, and security tool selection
- Standardize tooling - Choose organization-wide security tools so everyone uses the same scanning and gets consistent results
- Make security part of the definition of done - Don't consider a feature complete if it has unresolved security issues
There is no perfect security. Even the most popular open-source projects—with excellent maintainers, bug bounty programs, extensive testing, and large communities—still have vulnerabilities. Sonar's vulnerability research team regularly audits high-profile open-source projects and consistently finds new security issues.
The goal isn't perfection; it's making sure you've closed the obvious windows and doors. Think of it like securing a house:
- Lock the doors (basic authentication)
- Close the windows (input validation)
- Install an alarm system (monitoring and logging)
- Have good locks (encryption, proper access controls)
This won't stop a highly skilled or well-funded attacker, but it protects against opportunistic attacks and script kiddies, which represent the vast majority of threats most organizations face.
The ongoing challenge: With software, you're adding new windows and doors every day with each feature you build. This is why automation is crucial—you need tools that continuously check security as you develop, not just periodic audits.
Security is a moving target. New technologies bring new vulnerabilities:
- Traditionally: Remove the database, eliminate SQL injection
- With LLMs: Add an LLM to your backend, introduce prompt injection
- With microservices: Increase attack surface with more network boundaries
- With containers: Introduce container escape vulnerabilities
- With serverless: Create new IAM and access control challenges
The industry adapts, but it takes time. We're still seeing COBOL code vulnerabilities decades after COBOL's heyday. New vulnerability types emerge faster than old ones disappear.
As a developer, you need to stay informed about:
- Common vulnerabilities in your technology stack
- New attack vectors introduced by new technologies you adopt
- Updates to security best practices in your frameworks and languages
- CVEs affecting your dependencies
This doesn't mean becoming a security expert, but it does mean maintaining awareness and knowing when to consult security specialists.
Which programming languages are most secure? Newer languages tend to be more secure because they learn from predecessors' mistakes. Go is a good example—it has memory safety built in, reasonable defaults, and a security-conscious standard library.
Java, despite being older, remains quite secure, especially in enterprise environments where it's widely used and well-understood.
Rust is famous for its memory safety guarantees, eliminating entire classes of vulnerabilities common in C/C++.
But the language alone doesn't make your code secure. You can write insecure code in any language. What matters more is:
- Using secure frameworks and libraries
- Following security best practices for your stack
- Validating inputs and encoding outputs properly
- Using your language's security features correctly
The security industry has a financial incentive to sell products that promise complete security. CISOs face pressure to buy tools to demonstrate due diligence. But security isn't something you buy; it's something you build into your development process.
A thousand security tools won't help if they generate massive backlogs that never get addressed. What matters is tools that integrate into development, help developers fix issues, and make security part of the normal workflow.
There's a mystique around security—the idea that only top-notch hackers can understand it. This is partly true for advanced exploitation techniques, but developers don't need to know how to exploit buffer overflows; they need to know how to prevent and patch them.
The exploitation stage that security professionals find fascinating is less relevant for daily development. What developers need is understanding common vulnerability types and how to write secure code.
No system is perfectly secure. The question is: "What's good enough for your threat model?" A small startup faces different threats than a defense contractor. Match your security investment to your actual risk.
Done wrong, yes. Security teams that generate massive finding backlogs without helping developers fix issues definitely slow things down. But security tools that integrate into development and provide actionable feedback actually speed things up by catching issues before they become production incidents.
The fundamental shift happening now is that natural language is becoming code. When you write a prompt for an LLM, you're essentially programming in English (or whatever language you use).
This has profound security implications:
- Traditional code injection (SQL, command injection) focused on exploiting mixing of code and data
- Prompt injection is the same concept applied to natural language
- As more systems rely on LLM-generated content and LLM-mediated interactions, prompt injection becomes a primary attack vector
Security professionals and developers need to think about:
- How to sanitize natural language inputs
- How to prevent prompt manipulation
- How to verify LLM outputs before trusting them
- How to limit LLM capabilities and access
This is an evolving field, and best practices are still being established.