<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:breach="https://breachnotes.vulnetix.com/xmlns/breach/1.0"><channel><title>AI</title><link>https://breachnotes.vulnetix.com/ai/</link><description>AI-related cybersecurity incidents including prompt injection, model poisoning, deepfakes, and AI-assisted attacks</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>Breach Notes Project</managingEditor><lastBuildDate>Sun, 12 Apr 2026 12:18:39 +0000</lastBuildDate><atom:link href="https://breachnotes.vulnetix.com/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>LiteLLM PyPI Supply Chain Attack - Mercor AI Breach (TeamPCP / Lapsus$)</title><link>https://breachnotes.vulnetix.com/ai/2026-03_litellm-pypi-mercor-teampcp/</link><pubDate>Fri, 27 Mar 2026 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2026-03_litellm-pypi-mercor-teampcp/</guid><description>On March 27, 2026, TeamPCP (a threat group also linked to the European Commission cloud breach) compromised PyPI publishing credentials for LiteLLM, a widely used open-source library for calling AI/LLM APIs. Malicious versions were published to PyPI, enabling downstream compromise of users. Mercor …</description><content:encoded>On March 27, 2026, TeamPCP (a threat group also linked to the European Commission cloud breach) compromised PyPI publishing credentials for LiteLLM, a widely used open-source library for calling AI/LLM APIs. Malicious versions were published to PyPI, enabling downstream compromise of users. Mercor (a $10B AI data training startup) was a confirmed victim: attackers exfiltrated approximately 4 TB of data including 939 GB of platform source code, a 211 GB user database, and 3 TB of storage (video interviews and identity verification passport data for candidates). Lapsus$ subsequently claimed responsibility and auctioned data on dark web forums. Meta indefinitely paused work with Mercor. Five contractors filed lawsuits. 
TeamPCP is also attributed to the March 2026 European Commission AWS breach via the Trivy tool compromise.</content:encoded><category>ai</category><breach:sourceUrl>https://techcrunch.com/2026/03/31/mercor-says-it-was-hit-by-cyberattack-tied-to-compromise-of-open-source-litellm-project/</breach:sourceUrl><breach:dateOfBreach>2026-03-27</breach:dateOfBreach><breach:dateOfDisclosure>2026-03-31</breach:dateOfDisclosure><breach:dateOfCustomerNotification>2026-04-01</breach:dateOfCustomerNotification><breach:initialAttackVector>TeamPCP (linked to Lapsus$) compromised the PyPI publishing credentials for the LiteLLM open-source AI API library, injecting malicious code into two versions on March 27, 2026; downstream victim Mercor was compromised via the backdoored package</breach:initialAttackVector><breach:vendorProduct>LiteLLM (open-source AI/LLM API library); PyPI (Python package registry)</breach:vendorProduct><breach:softwarePackage>LiteLLM</breach:softwarePackage><breach:supplyChainClaimed>true</breach:supplyChainClaimed><breach:aiModelName>LiteLLM</breach:aiModelName><breach:aiModelProvider>BerriAI</breach:aiModelProvider><breach:aiAttackVector>supply chain attack</breach:aiAttackVector></item><item><title>LiteLLM Cascading Supply Chain Attack — TeamPCP Trivy Credentials Used</title><link>https://breachnotes.vulnetix.com/ai/2026-03_litellm-hit-in-cascading-supply-chain-attack/</link><pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2026-03_litellm-hit-in-cascading-supply-chain-attack/</guid><description>The LiteLLM PyPI supply chain attack by TeamPCP involved a cascading attack chain: TeamPCP first compromised
the Trivy security scanner's GitHub Actions CI/CD pipeline (March 19, 2026), used stolen credentials to access
LiteLLM's PyPI publishing infrastructure, and pushed malicious versions of …</description><content:encoded><![CDATA[The LiteLLM PyPI supply chain attack by TeamPCP involved a cascading attack chain: TeamPCP first compromised
the Trivy security scanner&rsquo;s GitHub Actions CI/CD pipeline (March 19, 2026), used stolen credentials to access
LiteLLM&rsquo;s PyPI publishing infrastructure, and pushed malicious versions of LiteLLM on March 27, 2026. LiteLLM
is a widely used Python library for calling LLM APIs (OpenAI, Anthropic, etc.) in AI application
development. Downstream victim Mercor (an AI data training startup) suffered a major breach via the LiteLLM
compromise. This &lsquo;supply chain of supply chains&rsquo; attack — one compromise enabling access to another trusted
package — is documented separately in the LiteLLM/Mercor/TeamPCP records.]]></content:encoded><category>ai</category><breach:sourceUrl>https://www.databreachtoday.com/litellm-hit-in-cascading-supply-chain-attack-a-31210</breach:sourceUrl><breach:dateOfBreach>2026-03-26</breach:dateOfBreach><breach:dateOfDisclosure>2026-03-26</breach:dateOfDisclosure><breach:initialAttackVector>TeamPCP (UNC6780) used credentials stolen in the Trivy GitHub Actions compromise to push malicious versions of LiteLLM to PyPI, creating a second-stage supply chain attack</breach:initialAttackVector><breach:supplyChainClaimed>true</breach:supplyChainClaimed><breach:aiModelName>LiteLLM</breach:aiModelName><breach:aiModelProvider>BerriAI</breach:aiModelProvider><breach:aiAttackVector>supply chain attack</breach:aiAttackVector></item><item><title>"Moonwell hit by $1.78M exploit as AI vibe coding debate reaches DeFi"</title><link>https://breachnotes.vulnetix.com/ai/2026-02_moonwell-exploit/</link><pubDate>Sun, 15 Feb 2026 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2026-02_moonwell-exploit/</guid><description>After an oracle misconfiguration, the Moonwell DeFi lending protocol accumulated $1.78 million in bad debt. When the protocol showed that cbETH was priced at just over a dollar, rather than its actual market price of around $2,200, bots and humans alike rushed to take advantage of the mispricing. …</description><content:encoded>&lt;p>After an oracle misconfiguration, the Moonwell DeFi lending protocol accumulated $1.78 million in bad debt. When the protocol showed that cbETH was priced at just over a dollar, rather than its actual market price of around $2,200, bots and humans alike rushed to take advantage of the mispricing. The error cascaded into liquidations across the platform. This is the second time Moonwell has suffered a loss thanks to an oracle misconfiguration. In November 2025, the platform was left with almost $3.7 million in bad debt after a different asset was mispriced. Although the vulnerable pull requests were at least partially developed by an AI tool, the security auditor who initially attributed the vulnerability to Claude Opus 4.6 later softened his criticism, noting that even senior developers could have made the same mistake. He did, however, criticize the project for a lack of sufficiently rigorous testing that should have caught the issue.&lt;/p>
&lt;p>Total loss estimated at $1,780,000.&lt;/p>
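&lt;p>The auditor&amp;rsquo;s testing criticism can be made concrete: a mispricing this large is exactly what an automated sanity check should catch. Below is a minimal sketch of such a price-deviation guard; the threshold, prices, and function name are hypothetical, not Moonwell&amp;rsquo;s actual code.&lt;/p>
&lt;pre>&lt;code># Hypothetical sketch of a price-deviation guard; not Moonwell code.
# Values mirror the incident: cbETH reported near $1 while trading around $2,200.
def check_oracle_price(reported: float, reference: float,
                       max_deviation: float = 0.10) -> None:
    """Raise if the reported price strays more than max_deviation
    (as a fraction) from an independent reference price."""
    deviation = abs(reported - reference) / reference
    if deviation > max_deviation:
        raise ValueError(
            f"oracle price {reported} deviates {deviation:.0%} "
            f"from reference {reference}; refusing update"
        )

try:
    check_oracle_price(reported=1.04, reference=2200.0)
except ValueError as err:
    print(err)  # the guard fires for the mispriced feed
&lt;/code>&lt;/pre>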
</content:encoded><category>ai</category><breach:sourceUrl>https://cointelegraph.com/news/moonwell-exploit-cbeth-oracle-misprice-ai-commits-testing-audits</breach:sourceUrl><breach:dateOfBreach>2026-02-15</breach:dateOfBreach><breach:dateOfDisclosure>2026-02-15</breach:dateOfDisclosure><breach:initialAttackVector>Oracle misconfiguration that mispriced cbETH at just over $1, introduced in pull requests developed at least partially with an AI coding tool</breach:initialAttackVector><breach:vendorProduct>Moonwell</breach:vendorProduct><breach:blockchain>ethereum</breach:blockchain><breach:financialLossUsd>1780000</breach:financialLossUsd><breach:aiModelName>Claude Opus 4.6</breach:aiModelName><breach:aiModelProvider>Anthropic</breach:aiModelProvider><breach:aiAttackVector>AI-generated vulnerable code</breach:aiAttackVector></item><item><title>OpenAI API Customer Data Exposed via Mixpanel Vendor Breach</title><link>https://breachnotes.vulnetix.com/ai/2025-11_openai-mixpanel-vendor/</link><pubDate>Wed, 26 Nov 2025 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2025-11_openai-mixpanel-vendor/</guid><description>Hackers breached Mixpanel, a third-party analytics vendor used by OpenAI to track user behavior on its API platform, on November 26, 2025. The breach exposed data belonging to OpenAI API platform business customers including names, email addresses, geographic locations, and technical details about …</description><content:encoded>Hackers breached Mixpanel, a third-party analytics vendor used by OpenAI to track user behavior on its API platform, on November 26, 2025. The breach exposed data belonging to OpenAI API platform business customers including names, email addresses, geographic locations, and technical details about customer systems. Standard ChatGPT consumer app users were not affected. No chat content, API keys, passwords, credentials, payment details, or government IDs were compromised. OpenAI confirmed the incident was at the vendor level, not within OpenAI&amp;rsquo;s own systems. The incident highlighted that AI platform providers — holding sensitive data on enterprises integrating AI into products — are attractive targets for attackers seeking business intelligence.</content:encoded><category>ai</category><breach:sourceUrl>https://techcrunch.com/2024/07/05/openai-breach-is-a-reminder-that-ai-companies-are-treasure-troves-for-hackers/</breach:sourceUrl><breach:dateOfBreach>2025-11-26</breach:dateOfBreach><breach:dateOfDisclosure>2025-11-27</breach:dateOfDisclosure><breach:dateOfCustomerNotification>2025-11-27</breach:dateOfCustomerNotification><breach:initialAttackVector>CWE-284: Improper Access Control (third-party analytics vendor breach)</breach:initialAttackVector><breach:vendorProduct>Mixpanel analytics platform (used by OpenAI)</breach:vendorProduct><breach:supplyChainClaimed>true</breach:supplyChainClaimed><breach:aiModelProvider>OpenAI</breach:aiModelProvider><breach:aiAttackVector>data exposure</breach:aiAttackVector></item><item><title>OpenAI Third-Party Breach (November 2025)</title><link>https://breachnotes.vulnetix.com/ai/2025-11_openai-mixpanel/</link><pubDate>Sat, 01 Nov 2025 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2025-11_openai-mixpanel/</guid><description>In November 2025, OpenAI experienced a data security incident via a third-party vendor relationship. The compromised
third-party vendor was Mixpanel. Source reporting:
https://www.bleepingcomputer.com/news/security/openai-discloses-api-customer-data-breach-via-mixpanel-vendor-hack/</description><content:encoded>In November 2025, OpenAI experienced a data security incident via a third-party vendor relationship. The compromised
third-party vendor was Mixpanel. Source reporting:
&lt;a href="https://www.bleepingcomputer.com/news/security/openai-discloses-api-customer-data-breach-via-mixpanel-vendor-hack/">https://www.bleepingcomputer.com/news/security/openai-discloses-api-customer-data-breach-via-mixpanel-vendor-hack/&lt;/a></content:encoded><category>ai</category><breach:sourceUrl>https://www.bleepingcomputer.com/news/security/openai-discloses-api-customer-data-breach-via-mixpanel-vendor-hack/</breach:sourceUrl><breach:dateOfBreach>2025-11-01</breach:dateOfBreach><breach:dateOfDisclosure>2025-11-01</breach:dateOfDisclosure><breach:initialAttackVector>Compromise of third-party service provider / vendor relationship</breach:initialAttackVector><breach:vendorProduct>Mixpanel</breach:vendorProduct><breach:supplyChainClaimed>true</breach:supplyChainClaimed><breach:aiModelProvider>OpenAI</breach:aiModelProvider><breach:aiAttackVector>data exposure</breach:aiAttackVector></item><item><title>OpenAI Mixpanel Product Analytics Data Exposure</title><link>https://breachnotes.vulnetix.com/ai/2025-11_openai-mixpanel-analytics-leak/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2025-11_openai-mixpanel-analytics-leak/</guid><description>In November 2025, OpenAI disclosed that customer data had been exposed via Mixpanel, its third-party product analytics platform. OpenAI had shared user behavioral data with Mixpanel for product improvement purposes, and the compromise of Mixpanel's systems exposed this data. Affected information …</description><content:encoded>In November 2025, OpenAI disclosed that customer data had been exposed via Mixpanel, its third-party product analytics platform. OpenAI had shared user behavioral data with Mixpanel for product improvement purposes, and the compromise of Mixpanel&amp;rsquo;s systems exposed this data. Affected information included user names, email addresses, approximate geographic location, operating system and browser information, and organizational/user identifiers. 
This was part of a broader Mixpanel breach that affected multiple technology companies including PornHub, Pinterest&amp;rsquo;s Shuffles app, CoinDCX, SoundCloud, SwissBorg, and CoinLedger in November-December 2025.</content:encoded><category>ai</category><breach:sourceUrl>https://www.bleepingcomputer.com/news/security/openai-discloses-data-exposure-via-mixpanel-analytics-provider/</breach:sourceUrl><breach:dateOfBreach>2025-10-01</breach:dateOfBreach><breach:dateOfDisclosure>2025-11-10</breach:dateOfDisclosure><breach:dateOfCustomerNotification>2025-11-10</breach:dateOfCustomerNotification><breach:initialAttackVector>OpenAI's product analytics vendor Mixpanel was compromised, exposing behavioral and account data that OpenAI had shared with Mixpanel for product analytics purposes</breach:initialAttackVector><breach:vendorProduct>Mixpanel (product analytics SaaS)</breach:vendorProduct><breach:aiModelProvider>OpenAI</breach:aiModelProvider><breach:aiAttackVector>data exposure</breach:aiAttackVector></item><item><title>Tweet by Oli Feldmeier</title><link>https://breachnotes.vulnetix.com/ai/2025-09_griffin-ai-exploit/</link><pubDate>Wed, 24 Sep 2025 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2025-09_griffin-ai-exploit/</guid><description>One day after Griffin AI launched its GAIN token on Binance Alpha, an attacker minted 5 billion fake GAIN tokens on the Ethereum blockchain, then exploited a cross-chain endpoint to trick the bridge to the Binance chain into recognizing them as the real thing. The attacker was only able to sell a …</description><content:encoded><![CDATA[<p>One day after Griffin AI launched its GAIN token on Binance Alpha, an attacker minted 5 billion fake GAIN tokens on the Ethereum blockchain, then exploited a cross-chain endpoint to trick the bridge to the Binance chain into recognizing them as the real thing. The attacker was only able to sell a small fraction of their tokens, but they made off with approximately $3 million as the token plunged in price. According to CEO Oliver Feldmeier, the exploit was enabled by &ldquo;a misconfigured layer Zero (cross-chain messaging) set-up and compromised key&rdquo;. Griffin AI promises to allow customers to &ldquo;build, deploy, and scale autonomous AI agents for crypto finance&rdquo;. These are essentially AI-powered bots that perform various functions — some of Griffin&rsquo;s advertised examples include a &ldquo;robo-adviser&rdquo; to provide &ldquo;tailored investment strategies&rdquo;, and bots to do arbitrage trading or manage staked assets.</p>
<p>Total loss estimated at $3,000,000.</p>
]]></content:encoded><category>ai</category><breach:sourceUrl>https://x.com/OliFeldmeier/status/1971270096535326728</breach:sourceUrl><breach:dateOfBreach>2025-09-24</breach:dateOfBreach><breach:dateOfDisclosure>2025-09-24</breach:dateOfDisclosure><breach:initialAttackVector>Misconfigured LayerZero cross-chain messaging set-up and a compromised key allowed 5 billion fake GAIN tokens minted on Ethereum to be bridged to the Binance chain and partially sold</breach:initialAttackVector><breach:vendorProduct>Griffin AI</breach:vendorProduct><breach:blockchain>bsc, ethereum</breach:blockchain><breach:financialLossUsd>3000000</breach:financialLossUsd><breach:aiModelName>Griffin AI</breach:aiModelName><breach:aiModelProvider>Griffin AI</breach:aiModelProvider><breach:aiAttackVector>smart contract exploit</breach:aiAttackVector></item><item><title>AI-Enabled Cyberattack Acceleration — Reduced Breakout Times, Autonomous Attack Chains</title><link>https://breachnotes.vulnetix.com/ai/2025-01_ai-accelerating-cyberattack-timelines/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2025-01_ai-accelerating-cyberattack-timelines/</guid><description>By 2025-2026, documented evidence shows AI is systematically accelerating cyberattack timelines and lowering barriers to entry for attackers, while defenders face structural disadvantages in AI adoption speed. Key documented impacts from CrowdStrike Global Threat Report 2026, Okta Security Report …</description><content:encoded>By 2025-2026, documented evidence shows AI is systematically accelerating cyberattack timelines and lowering barriers to entry for attackers, while defenders face structural disadvantages in AI adoption speed. Key documented impacts from CrowdStrike Global Threat Report 2026, Okta Security Report 2026, Microsoft Digital Defense Report 2025, and ENISA Threat Landscape 2025: (1) Breakout time — average time for attackers to move from initial access to lateral movement — fell from 62 minutes (2023) to under 40 minutes (2025) as AI-assisted tooling automates post-exploitation; (2) CrowdStrike documented adversaries using AI-generated malicious code and AI-driven fuzzing to discover zero-days faster; (3) AI-powered phishing campaigns achieve 3-5x higher click rates than traditional campaigns by personalising content from social media and breach data in real time; (4) Nation-state actors (China, Russia, North Korea, Iran) have all been observed integrating AI into attack workflows; (5) Ransomware negotiation bots using LLMs now conduct initial extortion communications autonomously; (6) Social engineering via deepfake voice and video bypasses human recognition even for security-trained staff. Okta&amp;rsquo;s Brett Winterford documented that attackers are using AI to identify which accounts to target for MFA fatigue attacks — prioritising accounts where success probability is highest. IBM X-Force found that AI-powered scanning tools can identify vulnerable systems in an organisation within 3 minutes of an IP range being provided. 
The structural challenge is that defenders must secure every attack vector while attackers need only find one path — and AI amplifies this asymmetry.</content:encoded><category>ai</category><breach:sourceUrl>https://www.crowdstrike.com/global-threat-report/2026/</breach:sourceUrl><breach:dateOfBreach>2025-01-01</breach:dateOfBreach><breach:dateOfDisclosure>2026-04-08</breach:dateOfDisclosure><breach:initialAttackVector>Threat actors use AI to automate reconnaissance, accelerate vulnerability exploitation, reduce time-to-breach, generate convincing phishing content at scale, and create adaptive malware that evades static detection; defenders face structural disadvantage as AI reduces skill barriers for attackers while defenders face integration and compliance costs</breach:initialAttackVector><breach:vendorProduct>Multiple sectors — financial services, healthcare, critical infrastructure, technology companies globally</breach:vendorProduct><breach:aiAttackVector>AI-assisted cyberattack</breach:aiAttackVector></item><item><title>AI-Powered Identity Theft Wave — Synthetic Identity Fraud, Deepfake KYC Bypass 2025-2026</title><link>https://breachnotes.vulnetix.com/ai/2025-01_ai-powered-identity-theft-wave/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2025-01_ai-powered-identity-theft-wave/</guid><description>By 2025-2026, AI-powered identity theft had emerged as a major and growing threat category, representing a structural shift in how identity fraud and credential theft are conducted at scale. Key developments documented by the Identity Theft Resource Center (ITRC), Okta, and industry researchers: (1) …</description><content:encoded>By 2025-2026, AI-powered identity theft had emerged as a major and growing threat category, representing a structural shift in how identity fraud and credential theft are conducted at scale. Key developments documented by the Identity Theft Resource Center (ITRC), Okta, and industry researchers: (1) Synthetic identity fraud — AI generates complete fake identities combining real SSNs obtained from data breaches with generated names, addresses, and photos, enabling new account fraud at financial institutions; (2) Deepfake KYC bypass — generative AI video and voice tools defeat liveness detection at banks, crypto exchanges, and identity verification services (Jumio, Onfido, Sumsub) enabling account takeover and fraudulent account creation; (3) AI voice cloning enables highly convincing vishing calls impersonating bank fraud departments, CEOs (BEC), and customer service agents, bypassing voice authentication systems used by major banks; (4) AI-generated phishing emails achieve higher click rates by personalising content using data scraped from social media and data breach dumps; (5) Deepfake impersonation of executives for wire transfer fraud (BEC 2.0) — video call fraud where a fake CFO or CEO authorises fraudulent transactions in real-time video meetings. Verified incidents include the 2024 Arup Engineering deepfake CFO video call fraud ($25M lost in Hong Kong), multiple crypto exchange KYC bypass incidents confirmed by Chainalysis, and a documented AI voice fraud against a major European energy company ($243,000 stolen). ITRC reports that data breach-enabled identity theft complaints exceeded 1.4 million in 2025. 
The UK NCSC, US FTC, and ENISA all issued 2025-2026 advisories on AI-enabled identity threats.</content:encoded><category>ai</category><breach:sourceUrl>https://www.idtheftcenter.org/post/2025-annual-data-breach-report/</breach:sourceUrl><breach:dateOfBreach>2025-01-01</breach:dateOfBreach><breach:dateOfDisclosure>2025-12-31</breach:dateOfDisclosure><breach:initialAttackVector>Threat actors use generative AI tools to create synthetic identities combining real and fabricated personal data; deepfake video and voice generation is used to bypass live KYC (Know Your Customer) verification at banks and cryptocurrency exchanges; AI-driven phishing and vishing attacks increase success rates and reduce costs for attackers</breach:initialAttackVector><breach:vendorProduct>Financial institutions, cryptocurrency exchanges, and identity verification platforms globally</breach:vendorProduct><breach:aiAttackVector>deepfake</breach:aiAttackVector></item><item><title>Tweet by NFPrompt</title><link>https://breachnotes.vulnetix.com/ai/2024-03_nfprompt-discloses-hack/</link><pubDate>Fri, 15 Mar 2024 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2024-03_nfprompt-discloses-hack/</guid><description>A Binance-incubated platform called NFPrompt claims to be "the first Prompt Artist Platform in Web3" — with "prompt artist" referring to people who come up with prompts to feed into large language models. More succinctly, it's a platform to sell the NFTs you've made out of AI-generated images. The …</description><content:encoded><![CDATA[A Binance-incubated platform called NFPrompt claims to be &ldquo;the first Prompt Artist Platform in Web3&rdquo; — with &ldquo;prompt artist&rdquo; referring to people who come up with prompts to feed into large language models. More succinctly, it&rsquo;s a platform to sell the NFTs you&rsquo;ve made out of AI-generated images. The platform announced on March 15 that it had suffered a &ldquo;critical security incident&rdquo; that it attributed to &ldquo;a group of hackers&rdquo; who were able to gain access to funds belonging both to the project&rsquo;s users and the project itself. They did not disclose how much was taken. The project announced that it was working with the FBI, and had contacted centralized exchanges to ask them to freeze stolen funds.]]></content:encoded><category>ai</category><breach:sourceUrl>https://twitter.com/nfprompt/status/1768558658697433464</breach:sourceUrl><breach:dateOfBreach>2024-03-15</breach:dateOfBreach><breach:dateOfDisclosure>2024-03-15</breach:dateOfDisclosure><breach:initialAttackVector>Undisclosed; NFPrompt attributed the incident to hackers who gained access to user and project funds</breach:initialAttackVector><breach:vendorProduct>NFPrompt</breach:vendorProduct><breach:blockchain>bsc</breach:blockchain><breach:aiAttackVector>AI platform breach</breach:aiAttackVector></item><item><title>"Two Men Charged for Operating $25M Cryptocurrency Ponzi Scheme"</title><link>https://breachnotes.vulnetix.com/ai/2023-12_ai-powered-crypto-ponzi/</link><pubDate>Tue, 12 Dec 2023 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2023-12_ai-powered-crypto-ponzi/</guid><description>Two fraudsters capitalized on the hype around both cryptocurrency and artificial intelligence, advertising an "artificial intelligence automated trading bot" that they promised would earn large returns for their investors. 
Instead, however, the fraudsters spent the money on themselves, paying for …</description><content:encoded><![CDATA[<p>Two fraudsters capitalized on the hype around both cryptocurrency and artificial intelligence, advertising an &ldquo;artificial intelligence automated trading bot&rdquo; that they promised would earn large returns for their investors. Instead, however, the fraudsters spent the money on themselves, paying for private chartered jet flights, luxury hotel accommodations, private mansion rentals, a personal chef, and private security guards. In addition to pulling off the original scam, the fraudsters also came up with a fake investigative agency called the &ldquo;Federal Crypto Reserve&rdquo;, where they directed victims who were seeking to recover their losses. The scammers were charged with wire fraud, money laundering, and obstruction of justice, which carry hefty maximum prison terms.</p>
<p>Total loss estimated at $25,000,000.</p>
]]></content:encoded><category>ai</category><breach:sourceUrl>https://www.justice.gov/opa/pr/two-men-charged-operating-25m-cryptocurrency-ponzi-scheme</breach:sourceUrl><breach:dateOfBreach>2023-12-12</breach:dateOfBreach><breach:dateOfDisclosure>2023-12-12</breach:dateOfDisclosure><breach:initialAttackVector>Fraudulent Ponzi scheme marketed as an "artificial intelligence automated trading bot"; investor funds were spent by the operators rather than traded</breach:initialAttackVector><breach:vendorProduct>"AI-powered" crypto ponzi</breach:vendorProduct><breach:financialLossUsd>25000000</breach:financialLossUsd><breach:aiAttackVector>AI-themed fraud</breach:aiAttackVector></item><item><title>OpenAI ChatGPT Redis Bug — Chat History &amp; Payment Info Leak</title><link>https://breachnotes.vulnetix.com/ai/2023-03_openai-chatgpt-redis-bug/</link><pubDate>Mon, 20 Mar 2023 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2023-03_openai-chatgpt-redis-bug/</guid><description>On March 20, 2023, OpenAI took ChatGPT offline after discovering a bug in its Redis client library (the open-source redis-py library) that caused some users to see other users' conversation history titles and partial personal information in their sidebar. In some cases, the first message of a new …</description><content:encoded>On March 20, 2023, OpenAI took ChatGPT offline after discovering a bug in its Redis client library (the open-source redis-py library) that caused some users to see other users&amp;rsquo; conversation history titles and partial personal information in their sidebar. In some cases, the first message of a new conversation could be visible. OpenAI later confirmed that a small subset of users (~1.2% of ChatGPT Plus subscribers who used the service during a 9-hour window on March 20) had their payment information exposed — including first and last names, email addresses, payment addresses, credit card type and last four digits, and credit card expiration dates. Full credit card numbers were not exposed. Approximately 100,000 ChatGPT Plus users were notified. The bug was triggered by a Redis connection pool race condition during a server-side configuration change that caused requests to return cached query data from another active connection. OpenAI patched the Redis library and confirmed the fix. This was the first major data exposure incident for ChatGPT and drew significant attention given OpenAI&amp;rsquo;s rapid growth and the sensitivity of private conversation data. 
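The failure mode can be illustrated with a small, self-contained simulation (pure Python, no real Redis; FakeConnection and the surrounding names are illustrative, not redis-py internals): a request is cancelled after its command is written but before its reply is read, so the pooled connection hands the stale reply to the next caller.
&lt;pre>&lt;code># Illustrative simulation: a cancelled request leaves its unread reply
# buffered on a pooled connection, and the next request on that
# connection receives the previous user's data.
import asyncio
from collections import deque

class FakeConnection:
    """Stands in for one pooled connection to a cache server."""
    def __init__(self):
        self.replies = deque()  # replies the "server" has sent back

    def send_command(self, key):
        # The server immediately queues a reply for this command.
        self.replies.append(f"data-for-{key}")

    async def read_reply(self):
        await asyncio.sleep(0.01)      # simulated network latency
        return self.replies.popleft()  # FIFO: oldest unread reply first

async def get(conn, key):
    conn.send_command(key)
    return await conn.read_reply()

async def main():
    conn = FakeConnection()  # one shared, pooled connection
    # User A's request is cancelled after the command is sent but
    # before the reply is consumed:
    task_a = asyncio.create_task(get(conn, "user-A"))
    await asyncio.sleep(0)   # let task_a write its command
    task_a.cancel()
    # The connection goes back to the pool still holding A's reply,
    # so user B's request reads it:
    print(await get(conn, "user-B"))  # prints "data-for-user-A"

asyncio.run(main())
&lt;/code>&lt;/pre>
The general remedy is to discard a connection whose request was interrupted mid-exchange rather than return it to the pool.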
OpenAI notified affected users directly and informed relevant regulators.</content:encoded><category>ai</category><breach:sourceUrl>https://openai.com/blog/march-20-chatgpt-outage</breach:sourceUrl><breach:dateOfBreach>2023-03-20</breach:dateOfBreach><breach:dateOfDisclosure>2023-03-24</breach:dateOfDisclosure><breach:dateOfCustomerNotification>2023-03-24</breach:dateOfCustomerNotification><breach:initialAttackVector>A bug in the Redis client library (redis-py) used by OpenAI caused race conditions in connection pooling under high load, resulting in users being served cached data from other users' sessions — exposing conversation titles and personal payment information</breach:initialAttackVector><breach:vendorProduct>OpenAI ChatGPT; Redis (redis-py library)</breach:vendorProduct><breach:softwarePackage>redis-py</breach:softwarePackage><breach:aiModelName>ChatGPT</breach:aiModelName><breach:aiModelProvider>OpenAI</breach:aiModelProvider><breach:aiAttackVector>data exposure</breach:aiAttackVector></item><item><title>"Scammers Created an AI Hologram of Me to Scam Unsuspecting Projects"</title><link>https://breachnotes.vulnetix.com/ai/2022-08_binance-exec-claims-deepfake-scam/</link><pubDate>Wed, 17 Aug 2022 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2022-08_binance-exec-claims-deepfake-scam/</guid><description>Binance's chief communications officer, Patrick Hillman, has come out with a blog post claiming that "Scammers created an AI hologram of me to scam unsuspecting projects". (Hologram?) He claimed that scammers were using these meetings to ask token creators to pay a listing fee for their tokens, …</description><content:encoded><![CDATA[Binance&rsquo;s chief communications officer, Patrick Hillman, has come out with a blog post claiming that &ldquo;Scammers created an AI hologram of me to scam unsuspecting projects&rdquo;. (Hologram?) He claimed that scammers were using these meetings to ask token creators to pay a listing fee for their tokens, something that Binance also does, but has been more squirrely about. The only evidence Hillman provided was a redacted conversation via LinkedIn, where he denies meeting with someone, and they reply: &ldquo;they impersonated your hologram. This person sent me a zoom link then your hologram was in the zoom&rdquo;. (Again, hologram?) Amusingly, Hillman waxes poetic about the importance of security at Binance throughout the whole post, while also including a LinkedIn screenshot with a name that&rsquo;s blurred so poorly it remains completely legible. Hillman goes on to claim, with no further evidence, that &ldquo;a sophisticated hacking team used previous news interviews and TV appearances over the years to create a &lsquo;deep fake&rsquo; of me&rdquo;. If so, this would be remarkable, as to date video deepfakes have mostly been limited to robotic-sounding and grainy pre-recorded Elon Musk impersonations, rather than anything that can respond naturally and quickly to a live conversation. Another possible explanation is that Hillman is trying to cover Binance&rsquo;s collective ass after being caught taking listing fees for tokens they never list. 
But who&rsquo;s to say, really — maybe deepfakers have made a considerable breakthrough with startling implications, and Hillman just didn&rsquo;t feel it was important to elaborate.]]></content:encoded><category>ai</category><breach:sourceUrl>https://www.binance.com/en/blog/community/scammers-created-an-ai-hologram-of-me-to-scam-unsuspecting-projects-6406050849026267209</breach:sourceUrl><breach:dateOfBreach>2022-08-17</breach:dateOfBreach><breach:dateOfDisclosure>2022-08-17</breach:dateOfDisclosure><breach:vendorProduct>Binance exec claims deepfake</breach:vendorProduct><breach:aiAttackVector>deepfake</breach:aiAttackVector></item><item><title>"Please Don't Invest in This Crypto Scam Because Deepfake Elon Musk Told You To"</title><link>https://breachnotes.vulnetix.com/ai/2022-05_elon-musk-deepfake/</link><pubDate>Fri, 27 May 2022 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2022-05_elon-musk-deepfake/</guid><description>A somewhat robotic-sounding deepfake Elon Musk speaks to a deepfaked interviewer, who asks "what can you tell us about your project and how can it help people get rich right now?" Fake-Musk explains that people who invest in the (scam) project, "BitVex", will "receive exactly 30% of dividends every …</description><content:encoded><![CDATA[<p>A somewhat robotic-sounding deepfake Elon Musk speaks to a deepfaked interviewer, who asks &ldquo;what can you tell us about your project and how can it help people get rich right now?&rdquo; Fake-Musk explains that people who invest in the (scam) project, &ldquo;BitVex&rdquo;, will &ldquo;receive exactly 30% of dividends every day&rdquo;, and that if Bitcoin falls in price they will still receive twice their investment back. According to BleepingComputer, only about $1,700 in deposits appeared to have gone to addresses associated with the scam, although they acknowledged that the addresses are likely rotated and so the true amount may be larger. Someone brought the scam to Musk&rsquo;s attention on Twitter, where he replied, &ldquo;Yikes. Def not me.&rdquo; The YouTube channel hosting the videos was taken down shortly after.</p>
<p>Total loss estimated at $1,700.</p>
]]></content:encoded><category>ai</category><breach:sourceUrl>https://gizmodo.com/elon-musk-deepfake-invest-bitcoin-scam-bitvex-1848982652</breach:sourceUrl><breach:dateOfBreach>2022-05-27</breach:dateOfBreach><breach:dateOfDisclosure>2022-05-27</breach:dateOfDisclosure><breach:initialAttackVector>Deepfake video of Elon Musk promoting the fraudulent "BitVex" trading platform lured victims into depositing cryptocurrency</breach:initialAttackVector><breach:vendorProduct>Elon Musk deepfake</breach:vendorProduct><breach:blockchain>bitcoin</breach:blockchain><breach:financialLossUsd>1700</breach:financialLossUsd><breach:aiAttackVector>deepfake</breach:aiAttackVector></item><item><title>Microsoft AI Research Team 38TB Exposure via Misconfigured Azure SAS Token</title><link>https://breachnotes.vulnetix.com/ai/2023-09_microsoft-ai-sas-token-38tb/</link><pubDate>Mon, 20 Jul 2020 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2023-09_microsoft-ai-sas-token-38tb/</guid><description>On July 20, 2020, Microsoft's AI research team published open-source AI training data to GitHub and inadvertently included an overpermissioned Azure SAS token in the repository. The token granted 'full control' (read, write, delete, and list) permissions to the entire Azure Blob Storage account — …</description><content:encoded>On July 20, 2020, Microsoft&amp;rsquo;s AI research team published open-source AI training data to GitHub and inadvertently included an overpermissioned Azure SAS token in the repository. The token granted &amp;lsquo;full control&amp;rsquo; (read, write, delete, and list) permissions to the entire Azure Blob Storage account — not just the intended folder of training data. The SAS token was valid for approximately three years, until Wiz Research discovered the exposure on June 22, 2023 and reported it to Microsoft; Microsoft remediated the issue on June 24, 2023, and the public disclosure occurred September 18, 2023. The exposed 38TB of internal data included: backups of workstation files belonging to two Microsoft employees (including sensitive personal data), over 30,000 internal Microsoft Teams messages from 359 Microsoft employees, private SSH keys, passwords, and other internal credentials. No customer data was exposed. Microsoft confirmed no evidence of unauthorized external access. The incident illustrated how Azure SAS tokens — which appear as simple URLs and are often treated casually by developers — can carry dangerous levels of privilege that are difficult to audit and can persist for years without revocation. 
Wiz Research used this finding to advocate for better SAS token controls and visibility in cloud environments.</content:encoded><category>ai</category><breach:sourceUrl>https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers</breach:sourceUrl><breach:dateOfBreach>2020-07-20</breach:dateOfBreach><breach:dateOfDisclosure>2023-09-18</breach:dateOfDisclosure><breach:initialAttackVector>Misconfigured Azure SAS (Shared Access Signature) token published to a public GitHub repository by Microsoft AI researchers; the SAS token was configured with 'full control' permissions on an entire Azure Blob Storage account rather than read-only access to a specific folder — granting any GitHub visitor read, write, and delete access to all 38TB of data in the account</breach:initialAttackVector><breach:vendorProduct>Microsoft Azure Blob Storage (SAS token misconfiguration)</breach:vendorProduct><breach:aiModelProvider>Microsoft</breach:aiModelProvider><breach:aiAttackVector>training data exposure</breach:aiAttackVector></item><item><title>Microsoft AI Research Division 38TB Data Exposure via SAS Token — GitHub Misconfiguration</title><link>https://breachnotes.vulnetix.com/ai/2020-07_microsoft-ai-38tb-sas-token/</link><pubDate>Wed, 01 Jul 2020 00:00:00 +0000</pubDate><guid isPermaLink="true">https://breachnotes.vulnetix.com/ai/2020-07_microsoft-ai-38tb-sas-token/</guid><description>In July 2020, Microsoft's AI research division accidentally published an Azure Shared Access Signature (SAS) token with overly permissive access when sharing an open-source training data contribution on GitHub. The SAS token granted anyone with the link full access to the entire Azure Storage …</description><content:encoded>In July 2020, Microsoft&amp;rsquo;s AI research division accidentally published an Azure Shared Access Signature (SAS) token with overly permissive access when sharing an open-source training data contribution on GitHub. The SAS token granted anyone with the link full access to the entire Azure Storage account — not just the intended public training data. The storage account contained 38 terabytes of sensitive data, including over 30,000 internal Microsoft Teams messages from 359 Microsoft employees, as well as secrets, private keys, passwords, and other sensitive internal Microsoft files. The token remained publicly exposed for approximately three years (July 2020 to September 2023), until Wiz.io security researchers discovered and reported it. Microsoft revoked the token and secured the storage account after Wiz&amp;rsquo;s notification. Microsoft stated that no customer data was exposed and no other internal Microsoft services were put at risk. The overly permissive SAS token also allowed write and delete access — meaning anyone who discovered it could have modified or deleted the exposed data or potentially planted malicious data into AI training datasets. 
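By contrast, a correctly scoped token is limited to one container, read and list only, with a short lifetime. A minimal sketch with the azure-storage-blob Python SDK follows; the account, container, and key values are placeholders, not values from the incident.
&lt;pre>&lt;code># Sketch: issue a narrowly scoped SAS instead of an account-wide,
# full-control, multi-year token. All values below are placeholders.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

ACCOUNT_NAME = "examplestorage"        # placeholder
ACCOUNT_KEY = "cGxhY2Vob2xkZXI="       # placeholder; never commit real keys
CONTAINER = "public-training-data"     # placeholder

sas = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),  # no write/delete
    expiry=datetime.now(timezone.utc) + timedelta(days=7),     # days, not years
)
url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}?{sas}"
&lt;/code>&lt;/pre>
A user-delegation SAS, signed with Microsoft Entra credentials instead of the account key, narrows exposure further.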
The case illustrated a fundamental risk with Azure SAS tokens: they grant access based on the URL alone (no authentication required), making accidental exposure in code or documentation particularly dangerous, and they can persist for years if not carefully managed.</content:encoded><category>ai</category><breach:sourceUrl>https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers</breach:sourceUrl><breach:dateOfBreach>2020-07-01</breach:dateOfBreach><breach:dateOfDisclosure>2023-09-18</breach:dateOfDisclosure><breach:initialAttackVector>Microsoft AI researchers accidentally included an overly permissive Azure Shared Access Signature (SAS) token when publishing open-source training data to a public GitHub repository; the SAS token granted full read-write-delete access to the entire Azure Storage account — not just the intended public dataset</breach:initialAttackVector><breach:vendorProduct>Microsoft Azure Storage (AI division internal data)</breach:vendorProduct><breach:aiModelProvider>Microsoft</breach:aiModelProvider><breach:aiAttackVector>training data exposure</breach:aiAttackVector></item></channel></rss>