<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Vulnerability-Discovery on ControlPlane</title><link>https://control-plane.io/tags/vulnerability-discovery/</link><description>Vulnerability-Discovery on ControlPlane</description><language>en-gb</language><copyright>© 2026 ControlPlane</copyright><lastBuildDate>Tue, 28 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://control-plane.io/tags/vulnerability-discovery/index.xml" rel="self" type="application/rss+xml"/><item><title>How LLMs Are Ending The Attacker-Defender Stalemate (And What to Do About It)</title><link>https://control-plane.io/posts/llms-ending-attacker-defender-stalemate/</link><pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate><guid>https://control-plane.io/posts/llms-ending-attacker-defender-stalemate/</guid><description>Frontier Large Language Models (LLMs) are reshaping how software is built, attacked, and secured. Their impact is most visible in code generation and vulnerability discovery, where they reduce the time and expertise required to produce outputs that previously demanded specialist knowledge. As organisations rush to adopt AI tools into development and operations, a practical question arises: in a world where AI can autonomously write exploits and generate patches, what is the role of human-driven security?</description></item></channel></rss>