Tech and Semiconductor Firms Top AI Security Risk in the S&P 500, New Analyses Find

    New work from the Autonomy Institute and Cybernews shows that technology, software, and semiconductor firms face the highest AI security risk in the S&P 500. Disclosures about AI risk rise across the index, and the tech group records the most cases linked to IP theft, insecure outputs, and data leakage. In response, nexos.ai sets out practical controls to help protect sensitive code and chip-design IP while enabling safe AI use.

    AI now sits at the centre of how many S&P 500 companies build products, run operations, and serve customers. As use grows, risk reporting grows with it. The Autonomy Institute finds that three out of four companies increased their AI risk disclosures this year. This shows that boards and leaders treat AI security as a core issue.

    Exposure is not the same across sectors. Firms in technology, software, and semiconductors concentrate large volumes of code, algorithms, and chip-design IP. Research highlighted by nexos.ai and Cybernews places this group at the top for documented AI security risks. The data points to the same conclusion: AI brings value, but without clear controls, it also brings real security risk.

    “AI is now a core business driver. Without the right guardrails, it carries strategic risks, especially in tech and semiconductors; IP theft, insecure outputs, and prompt-driven leaks are no longer theoretical. The solution is proactive: policy-first design, prompt redaction at the edge, strict model access controls, and audit-ready logs. This is how companies can protect their most valuable asset, their innovation, while still moving fast with AI,” says Žilvinas Girėnas, head of product at nexos.ai. 

    What the data shows

    • Disclosures increase across the index. The Autonomy Institute finds that 3 in 4 S&P 500 companies expand AI risk disclosures this year.

    • IP risk becomes mainstream. 1 in 5 S&P 500 companies now list proprietary data or IP exposure as a top AI risk, and every semiconductor company in the index updates its 2025 filings to acknowledge significant AI threats.

    • Sector concentration is highest in tech and semiconductors. Cybernews documents 202 AI security risks across 61 companies in this group, the most in the index, spanning 40 flagged cases of potential IP theft, 34 insecure AI outputs, and 32 instances of data leakage.

    • AI is embedded at scale. Cybernews reviews public disclosures of AI use across 327 S&P 500 companies, cataloguing nearly 1,000 real-world deployments (from internal analytics to customer chatbots) and 970 potential security issues, including prompt injection, model extraction, and accidental data exposure.

    How exposure happens

    For high-IP sectors, the threat path is often operational rather than exotic:

    • Prompt-driven leakage. A single malicious or careless prompt can elicit confidential code snippets or unreleased design details.

    • Model extraction. Sustained query patterns can reconstruct model behaviour and reveal trade secrets embedded in algorithms.

    • Misconfiguration. Internal assistants that lack strict scoping or routing can surface unreleased specifications during testing.

    These patterns align with Cybernews’ categorisation of risks such as prompt injection, model extraction, and accidental data exposure.
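The first of these paths, prompt-driven leakage, is the most mechanical to defend against: secrets can be masked before a prompt ever leaves the company. The sketch below is a minimal illustration of that idea; the secret patterns (credential assignments, private-key blocks, and an internal code-name format) are assumptions for the example, not a production-grade or exhaustive rule set.

```python
import re

# Illustrative patterns for material that should never reach an external
# model. Real deployments would use a maintained detector, not three regexes.
SECRET_PATTERNS = [
    (re.compile(r"(?:api[_-]?key|token)\s*[=:]\s*\S+", re.IGNORECASE),
     "[REDACTED_CREDENTIAL]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED_KEY]"),
    # Hypothetical internal code-name convention, e.g. "PROJECT-ATLAS7".
    (re.compile(r"\bPROJECT-[A-Z0-9]{4,}\b"), "[REDACTED_CODENAME]"),
]

def redact_prompt(prompt: str) -> tuple[str, int]:
    """Mask known secret patterns before a prompt leaves the perimeter.
    Returns the redacted text and the number of substitutions made."""
    total = 0
    for pattern, placeholder in SECRET_PATTERNS:
        prompt, n = pattern.subn(placeholder, prompt)
        total += n
    return prompt, total
```

A careless prompt such as `redact_prompt("api_key = sk-123 and the PROJECT-ATLAS7 layout")` would come back with both the credential and the code name masked, which is the "redaction at the edge" idea Girėnas describes.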

    Recent incidents that illustrate the risk

    • Samsung. Engineers paste confidential code into ChatGPT, risking its inclusion in model training data, and the company responds with an internal ban on generative AI tools.

    • EDA workflow exposure (2024). An electronic design automation (EDA) software company encounters a serious incident: internal prompts used to guide AI-assisted chip layout and verification circulate on developer forums after being entered into an unsecured third-party AI model.

    • Semiconductor testing. Multiple companies report that misconfigured AI assistants expose unreleased product specifications during internal testing.

    These events show how everyday usage, rather than advanced adversaries, creates leakage paths when policy, redaction, and model oversight are not enforced.

    “Tech companies are racing to ship AI features. That pace often skips the guardrails protecting code and designs. Centralised controls like policy, redaction, routing, and clear audit trails are the only way to keep innovation from becoming an IP liability,” says Girėnas.

    Why it matters for tech and semiconductor firms

    For these companies, IP is the business. Source code, chip schematics, and proprietary algorithms underpin product differentiation and time-to-market. A single leak can compress a multi-year advantage into weeks, undermine licensing revenue, and trigger regulatory and contractual consequences.

    The current disclosure trend, particularly universal acknowledgement from semiconductor filers in 2025, shows that boards and executive teams treat AI security as a strategic control area, not only a technical issue.

    Recommended controls for high-IP environments

    Drawing on the patterns above, nexos.ai highlights a control set that enterprises can implement across teams and toolchains:

    1. Centralised policy enforcement to block risky prompts and apply consistent output filters.

    2. Automated PII and token-level redaction to strip or mask secrets before they reach AI models.

    3. Strict model access controls and routing to keep sensitive workloads on private or approved systems.

    4. Comprehensive, audit-ready logs to support IP tracking, compliance, and post-incident investigation.

    These measures align AI use with corporate security baselines and make model-assisted workflows safer for code and design assets.

    About nexos.ai

    nexos.ai is an AI infrastructure company that provides a centralised platform for enterprises to integrate and manage multiple AI models. Founded in 2024 by Tomas Okmanas and Eimantas Sabaliauskas, who also co-founded several bootstrapped global ventures, including the $3B cybersecurity company Nord Security and Oxylabs, nexos.ai originates in the ecosystem of Lithuania-based accelerator Tesonet. The company attracts its first investment of €8M in early 2025 from Index Ventures, Creandum, Dig Ventures, and several angel investors, and it focuses on enabling secure, policy-first AI adoption at enterprise scale.