AI Software Development Lifecycle on Kubernetes

Published on January 30, 2024
By ControlPlane

TL;DR: see our Secure AI Advantage offering.

At the London Stock Exchange for Open Source London, ControlPlane CEO Andrew Martin gave a detailed view of securing the AI software development lifecycle (SDLC) and data supply chain within Kubernetes environments. He described how Kubernetes is leveraged in AI software development, covering the industry’s current practices and potential future trends, and outlining the associated risks.

The talk explored the future, risks, and security of AI software development on Kubernetes:

  • The Future of AI Development: Potential future scenarios in which AI’s shift from data-analysis tooling to code authorship could sideline the majority of traditional software roles, and the continuing importance of human oversight in reinforcement learning and output verification. The application of machine learning across sectors such as financial services was highlighted, balancing technological progress against its vulnerabilities.

  • Risks in AI Systems: Vulnerabilities in AI systems include threats to integrity (e.g., prompt injection escalating to remote code execution) and to financial stability (e.g., denial-of-wallet attacks, where an attacker drives up inference costs). The traditional CIA triad of confidentiality, integrity, and availability remains fundamental to understanding these risks; a filtering sketch after the key takeaways below illustrates one basic control.

  • Securing Data and Supply Chains: The complexities of securing the AI data and model layers were analysed in depth: ensuring data accuracy, verifying supply chain integrity, and guarding against adversarial attacks (a digest-verification sketch follows this list).

  • Leveraging Kubernetes: Kubernetes plays a pivotal role in secure, flexible AI software development, integrating readily with ML frameworks such as PyTorch and TensorFlow and offering a robust, secure platform built on extensive operational experience and a proactive approach to threat modelling (a hardened-pod sketch also follows this list).
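
To make the supply chain point concrete, here is a minimal sketch of verifying a downloaded model artifact against a pinned digest before loading it. The file path and expected digest are hypothetical placeholders; in practice the pinned digest would come from a signed release manifest rather than being hard-coded.

```python
import hashlib
from pathlib import Path

# Pinned, out-of-band digest for the artifact (hypothetical value for
# illustration; in practice this comes from a signed release manifest).
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    """Refuse to proceed unless the artifact matches the pinned digest."""
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        raise RuntimeError(
            f"Digest mismatch for {path}: expected {EXPECTED_SHA256}, got {actual}"
        )

if __name__ == "__main__":
    # Hypothetical local path to downloaded model weights.
    verify_artifact(Path("models/classifier.safetensors"))
```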
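
And as a sketch of the Kubernetes point, the snippet below uses the official kubernetes Python client to launch a training pod with a restrictive security context. The namespace, image, and resource values are illustrative assumptions, not a complete hardening baseline.

```python
from kubernetes import client, config

def launch_hardened_training_pod() -> None:
    # Assumes a local kubeconfig; inside a cluster use
    # config.load_incluster_config() instead.
    config.load_kube_config()

    container = client.V1Container(
        name="trainer",
        # Hypothetical training image; pin by digest in practice.
        image="registry.example.com/ml/trainer:1.0.0",
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"cpu": "2", "memory": "4Gi"},
        ),
        security_context=client.V1SecurityContext(
            run_as_non_root=True,
            allow_privilege_escalation=False,
            read_only_root_filesystem=True,
            capabilities=client.V1Capabilities(drop=["ALL"]),
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="training-job", namespace="ml-training"),
        spec=client.V1PodSpec(
            containers=[container],
            restart_policy="Never",
            # The trainer has no need to talk to the Kubernetes API.
            automount_service_account_token=False,
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="ml-training", body=pod)

if __name__ == "__main__":
    launch_hardened_training_pod()
```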

The key takeaways from the talk were to threat model in order to quantify controls, and never to trust unrestricted prompts to a model: firewall and filter all interfaces. Common threats still apply to AI/ML systems, so common controls remain valid despite the complexity of new AI-specific threats; tomorrow’s AI poses a threat to today’s AI systems. With an evolving regulatory landscape, a collaborative approach is required to keep pace with change, and the new FINOS AI Readiness SIG is a great place to start.
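
To make the “firewall and filter all interfaces” takeaway concrete, here is a minimal sketch of a gateway-side check that rejects prompts matching known injection patterns and bounds per-caller spend against denial-of-wallet. The patterns, budget, and token estimate are illustrative assumptions; pattern denylists are easily bypassed and should complement, not replace, stronger controls such as output validation and least-privilege tooling.

```python
import re
from collections import defaultdict

# Illustrative denylist only: regexes alone are trivially bypassed and
# must be paired with output validation and monitoring.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

# Hypothetical per-caller budget to bound denial-of-wallet exposure.
MAX_TOKENS_PER_CALLER = 50_000
_spent_tokens: dict[str, int] = defaultdict(int)

def guard_prompt(caller_id: str, prompt: str) -> str:
    """Screen an inbound prompt before it reaches the model API."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected: matched an injection pattern")

    # Crude token estimate (~4 chars/token) purely to bound spend.
    estimated_tokens = max(1, len(prompt) // 4)
    if _spent_tokens[caller_id] + estimated_tokens > MAX_TOKENS_PER_CALLER:
        raise RuntimeError("Budget exceeded: possible denial-of-wallet")
    _spent_tokens[caller_id] += estimated_tokens

    return prompt  # safe to forward to the model

if __name__ == "__main__":
    print(guard_prompt("alice", "Summarise this quarterly report."))
```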

ControlPlane can assist with these challenges and offers enterprise services under the MLOps Security Framework:

  • AI Maturity Assessment and Threat Modelling
  • Model Red Teaming and Pipeline Pentesting
  • Secure, Verifiable Data and Software Supply Chain
  • Architecture Guidelines
  • Development Platform Guardrails
  • Secure Deployment Patterns
  • Continuous Observability

For more information, please see our AI Security offerings.