Claude Opus 4.6 Shows How AI Can Uncover Critical Open-Source Vulnerabilities

Anthropic has announced that its latest AI model, Claude Opus 4.6, has discovered over 500 previously unknown high-severity security flaws in widely used open-source libraries, including Ghostscript, OpenSC, and CGIF. This highlights the growing role of AI in modern cybersecurity.

Designed with stronger coding, review, and debugging capabilities, Claude Opus 4.6 can identify serious vulnerabilities without specialized prompts or custom tools. The model analyzes code much as a human security researcher would: examining commit history, recognizing risky patterns, and tracing the logic that leads to crashes or memory corruption.

All reported vulnerabilities were validated and have since been patched by project maintainers. Notable findings include buffer overflows, crash bugs, and a complex heap overflow in CGIF that required deep understanding of the LZW compression algorithm, a class of bug that traditional fuzzers often fail to reach.

Anthropic believes AI models like Claude can help defenders keep pace with evolving threats. At the same time, these findings reinforce a key message: as AI becomes more powerful, fundamental security practices such as timely patching remain essential.

Source: https://thehackernews.com/2026/02/claude-opus-46-finds-500-high-severity.html