Levix

Levix's zone

x
telegram

AI Security: A new round of startups competing to protect the security of AI technology stack.

In the wave of the growing adoption of generative artificial intelligence (AI) applications, companies are collectively facing a prominent issue: security.

Of particular concern are the AI models that make up the core of the modern AI technology stack. These models need to handle large amounts of sensitive corporate data, rely on self-learning mechanisms that are difficult to control precisely, and are often deployed in error-prone environments. At the same time, cybercriminals equipped with equally advanced technology are creating new threats at an unprecedented pace. The widespread use of AI technology increases the risk of being targeted by cyber attacks, making large language models (LLMs) an attractive target for attacks.

Using existing tools to protect these models has proven to be impossible. As a result, enterprise customers have become extremely cautious, and their adoption rate has not kept up with the market's enthusiasm. (A report by Menlo on enterprise adoption of AI specifically points out that customers want assurances of data exchange security and open-source model security before deploying these models at scale.)

Chart revealing the main obstacles to the adoption of generative AI

The complexity of this challenge and the enormous scale of the opportunity have triggered a wave of security innovation. Below, we will outline the current state of the market, highlight the areas in which Menlo Ventures will invest, and emphasize the promising companies that are paving the way for secure and scalable deployments.

GenAI: A New Source of Threats#

AI models are increasingly becoming targets of cyber attacks. Last November, OpenAI confirmed that they experienced a DoS attack that caused multiple interruptions to their API and ChatGPT traffic. Foundational model providers like Anthropic and OpenAI have expressed the need to protect model weights from theft, which can be achieved through leaked authentication information or supply chain attacks.

In use, LLMs are vulnerable to issues such as prompt injection, insecure output handling, sensitive information leakage, and insecure plugin design (source: OWASP). At the 2023 Black Hat conference, cybersecurity experts publicly demonstrated a synthetic ChatGPT compromise that modified the chatbot through indirect prompt injection, leading users to disclose sensitive information. Prompt injection can also be used to drive LLMs to generate malicious software, engage in scams (such as phishing emails), or make improper API calls.

LLMs are also vulnerable to attacks during the development phase. For example, Mithril Security released a tampered open-source GPT-J-6B model on Hugging Face, which generates fake news for specific prompts. Mithril's tampering went unnoticed until they publicly disclosed the model, which could be integrated and deployed by enterprises. While this is just one example, it clearly conveys a message: LLMs that are maliciously exploited can cause widespread and difficult-to-detect damage.

Fortunately, cybersecurity and AI experts are working together to address these challenges.

image

The Time for Investment Has Come: Huge Opportunities in Governance, Observability, and Security#

We categorize emerging technologies into three main areas: governance, observability, and security, and we believe adoption will follow this sequence. However, the urgency of certain protective measures outweighs others. As the threat landscape exposes models to external factors, model consumption threats become an imminent issue that enterprise customers must consider. Future AI firewalls and security measures need to address this concern. For operators, more sophisticated attack methods such as prompt injection will also be a focus of attention.

Governance solutions, such as Cranium and Credo, help organizations establish AI service, tool, and responsible personnel directories for internal development and third-party solutions. They assign risk scores for security and security measures and help assess business risks. Understanding the usage of AI within an organization is the first step in monitoring and protecting LLM models.

Observability tools, whether they are comprehensive tools for model monitoring like Helicone or tools for specific security use cases like CalypsoAI, enable organizations to aggregate logs of access, input, and output to detect abusive behavior and provide a complete audit of the solution stack.

Security solutions focus on establishing trust boundaries during model construction and usage. Strict control of model usage boundaries is crucial for both internal and external models. We are particularly optimistic about AI firewall providers such as Robust Intelligence, Lakera, and Prompt Security, who prevent prompt injection, check the validity of inputs and outputs, and detect personally identifiable information (PII)/sensitive data. Additionally, companies like Private AI and Nightfall help enterprises identify and remove PII data from inputs and outputs. It is important for enterprises to continuously monitor threats to LLM models and the impact of attacks, and to perform continuous red team testing. Companies like Lakera and Adversa are dedicated to automating red team activities to help organizations assess the robustness of their security measures. Furthermore, threat detection and response solutions, such as Hiddenlayer and Lasso Security, aim to identify anomalies and potential malicious behavior to counter attacks on LLMs.

From licensing third-party models to fine-tuning or training custom models, there are various approaches to model construction. Any process involving fine-tuning or custom building LLMs requires inputting a large amount of sensitive commercial/proprietary data, which may include financial data, health records, or user logs. Federated learning solutions, such as DynamoFL and FedML, meet security requirements by training local models on local data samples without centralized data exchange, only exchanging model parameters. Companies like Tonic and Gretel address concerns about inputting sensitive data into LLMs by generating synthetic data. PII identification/masking solutions, such as Private AI or Kobalt Labs, help identify and mask sensitive information in LLM data storage. When building on open-source models that may have thousands of vulnerabilities, pre-production code scanning solutions like Protect AI are crucial. Finally, production monitoring tools like Giskard focus on continuously identifying, recognizing, and prioritizing vulnerabilities in models deployed in production.

It is worth mentioning that this field is evolving at a faster pace than ever before. While companies may start in a specific niche area of the market (e.g., building AI firewalls), they quickly expand their capabilities across the entire market spectrum (e.g., to data loss prevention, vulnerability scanning, observability, etc.).

Menlo has a long history of investing in pioneering companies in the field of cybersecurity, such as Abnormal Security, BitSight, Obsidian Security, Signifyd, and Immersive Labs. We are excited to invest in teams with deep AI infrastructure, governance, and security expertise who are facing an evolving and increasingly complex landscape of network threats, especially as AI models face more frequent attacks. If you are a founder innovating in the field of AI security solutions, we would love to connect with you.

Loading...
Ownership of this post data is guaranteed by blockchain and smart contracts to the creator alone.