Everything looked fine. This is the A.I., Privacy, and Security Weekly update for the week ending April 27th, 2026.
EP 289.

Let’s climb to the top of this week’s stories:
France's most trusted identity infrastructure has become its biggest liability, and nineteen million citizens are now paying the price.
The real lesson from Bitwarden's close call isn't about passwords; it's about how quietly an attack can move through the software you never see being built.
A newly uncovered rootkit predating Stuxnet has rewritten what we thought we knew about state-level sabotage, and its most dangerous feature was making everything look perfectly normal.
The arms race in AI security has hit a new threshold: machines are now the ones probing for weaknesses, and they don't need sleep to do it.
The browser is no longer just a window to the web; it's becoming an autonomous actor, and that changes everything about who's actually in control.
A restricted AI model, a contractor's borrowed credentials, and a private Discord channel: Anthropic's Mythos access story is a case study in how third-party trust becomes a front door.
A logging bug quietly turned one of the world's most trusted encrypted messaging apps into an inadvertent evidence locker, and it took FBI courtroom testimony to bring it to light.
OpenAI and Microsoft have redrawn the map of AI's most consequential partnership, and the shift from exclusivity to optionality signals a new phase in who controls the infrastructure layer.
Tighten your shoelaces, and let’s get to the bottom of this.
Fr: The French government's database used to secure identity documents has been breached.
A major cyberattack has hit France's national identity document agency, exposing what could be up to 19 million personal records tied to passports, ID cards, and driver's licenses.
The breach, confirmed by the government, affects a system used widely by citizens to manage official documents.
Hackers operating under aliases including 'breach3d' claim they accessed internal systems and are now offering the data for sale on criminal forums.
The dataset reportedly includes names, contact details, birth dates, and other identifying information, making it especially valuable for fraud and impersonation schemes.
Officials say the incident was detected in mid-April and has been reported to regulators, with a formal investigation now underway.
While the agency maintains that user accounts were not directly compromised, the exposure of personal data still creates serious downstream risks.
This breach also raises concerns about recurring weaknesses.
A similar dataset tied to the same agency surfaced in 2025, suggesting either repeated compromise or unresolved vulnerabilities.
At the same time, France has faced a series of cyber incidents across government systems, pointing to growing pressure on public infrastructure.
So what's the upshot for you?
The scale of the leak is significant, but its real impact lies in how usable the data is, as identity-linked records can fuel targeted scams long after the breach itself fades from headlines.
Global: Bitwarden's Supply Chain Scare Turned Into a Bigger Story About Developer Trust
Bitwarden had a rough moment, but not the kind most everyday users fear when they hear the words 'password manager breach.'
The problem was not that customer vaults were cracked open. The issue was that a malicious version of Bitwarden's command line tool was briefly pushed through npm during a supply chain incident tied to the wider Checkmarx campaign.
In plain English, this was a strike at developers and software pipelines, not at the average person storing passwords in Bitwarden.
That matters, because it shows how modern attacks increasingly go after the plumbing behind the software rather than the front door everybody is watching.
What makes this story interesting is how small the window was and how big the implications were.
The malicious package was available for a limited period, Bitwarden says it found no evidence that end user vault data or production systems were compromised, and affected users were told to rotate secrets and update to a safe version.
That is a good reminder that in 2026, 'secure software' is not just about code quality. It is about every build step, every dependency, and every automation script that touches the release process.
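One concrete defense against this class of attack is lockfile pinning: npm records a sha512 integrity hash for every dependency in package-lock.json, and a tampered tarball will not match it. Here is a minimal Python sketch of that check; the tarball bytes and the pinned value are hypothetical stand-ins, not real Bitwarden artifacts.

```python
import base64
import hashlib

def sri_sha512(data: bytes) -> str:
    """Compute an npm-style Subresource Integrity string (sha512-<base64 digest>)."""
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")

def verify_tarball(tarball_bytes: bytes, lockfile_integrity: str) -> bool:
    """Return True only if the downloaded tarball matches the pinned hash."""
    return sri_sha512(tarball_bytes) == lockfile_integrity

# Hypothetical example: pin the bytes you trust, then check a later download.
trusted = b"contents of the package tarball you audited"
pinned = sri_sha512(trusted)

assert verify_tarball(trusted, pinned)             # untouched bytes: passes
assert not verify_tarball(trusted + b"x", pinned)  # tampered bytes: fails
```

This is essentially what `npm ci` does automatically against a committed lockfile; the sketch just makes the mechanism visible.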
So what's the upshot for you?
Most people never install CLI tools from npm, but everybody relies on software that was built by someone who does. This is one of those stories that quietly explains why software updates can be both essential and a little nerve-racking at the same time.
Global: FAST16.SYS Rewrites the Timeline of Cyber Sabotage
Newly surfaced research has identified FAST16.SYS, a rootkit driver tied to an older malware framework that appears to predate Stuxnet by years.
The malware did something especially chilling. Instead of simply stealing data or breaking systems, it appears to have altered engineering and scientific calculations in memory, introducing subtle errors into high precision software. That is not ordinary cybercrime.
That is sabotage with patience. The detail that makes this unforgettable is the method. The malware reportedly modified executable behavior on the fly as programs were loaded, leaving the files on disk unchanged.
That means the target software could look perfectly normal at rest while behaving differently in action. SentinelLabs concluded that the patched routines were tied to precision calculation tools used in areas like engineering and physics, which suggests a deliberate attempt to distort real-world outcomes without announcing itself.
It is the digital equivalent of quietly bending a ruler so every measurement comes out just a little wrong.
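None of the code below reproduces the actual rootkit; it is a toy Python illustration of the blind spot the researchers describe. A scanner that only hashes files at rest will pass a system whose loaded image has been quietly altered, so detection has to compare the running copy against the disk copy.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Toy stand-ins: the program image as stored on disk, and the copy a
# compromised loader actually placed in memory (one constant nudged).
on_disk = b"\x01\x02\x03 RESULT_SCALE=1.000000 \x04\x05"
in_memory = on_disk.replace(b"1.000000", b"1.000001")  # subtle, silent change

# A file-integrity check that only hashes the disk copy sees nothing wrong...
assert fingerprint(on_disk) == fingerprint(on_disk)

# ...while comparing the running image against the disk copy exposes it.
assert fingerprint(in_memory) != fingerprint(on_disk)
```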
So what's the upshot for you?
This expands the usual mental picture of what malware does. Most people think theft, ransom, or outages. FAST16.SYS points to something colder and more strategic: changing the answer while making the system look fine. That idea sticks with you because it turns accuracy itself into a target.
Global: AI Systems Are Now Testing and Breaking Each Other
Researchers and hackers are beginning to use AI systems to probe and break other AI systems. One example involved using one model to manipulate and expose weaknesses in another, showing how quickly this space is evolving.
This is a shift from traditional security testing. Instead of humans slowly finding flaws, AI can now automate that process at scale. It can try thousands of variations, learn from failures, and refine attacks far faster than before.
The result is an environment where AI is both the tool and the target. Systems are being trained, attacked, and improved in a continuous loop.
So what's the upshot for you?
The technology shaping the future is being stress-tested at machine speed, which means both breakthroughs and risks are arriving faster than people are used to.
Global: Agent Browsers Like Interceptor Could Change How You Use the Web
A new concept known as an 'agent browser' is starting to take shape, with tools like Interceptor leading the way. Instead of simply displaying websites, these browsers allow AI agents to interact with pages, perform tasks, and automate workflows.
This changes the role of the browser itself. It is no longer just something you control directly. It becomes something that can act on your behalf, navigating interfaces and completing actions in the background.
That opens the door to convenience, but also raises new questions about oversight. If something else is clicking, typing, and interacting for you, understanding what it is doing becomes more important.
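One plausible answer to that oversight question is an audit trail: every action the agent takes gets logged before it runs, so a human can review exactly what was clicked and typed. The sketch below is hypothetical; `AuditedAgent` and its executor are illustrations, not any real agent-browser API.

```python
from datetime import datetime, timezone

class AuditedAgent:
    """Wraps a hypothetical browser-agent executor so every action is
    recorded before it runs -- the audit trail an agent browser would
    need for meaningful human oversight."""

    def __init__(self, perform):
        self._perform = perform  # the real action executor
        self.log = []            # (timestamp, action, detail) tuples

    def act(self, action: str, detail: str):
        self.log.append(
            (datetime.now(timezone.utc).isoformat(), action, detail)
        )
        return self._perform(action, detail)

# Stand-in executor: a real agent would drive the browser here.
agent = AuditedAgent(lambda action, detail: f"{action}:{detail}")

agent.act("click", "checkout-button")
agent.act("type", "shipping address field")

for timestamp, action, detail in agent.log:
    print(timestamp, action, detail)
```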
So what's the upshot for you?
The way you interact with the internet may shift from doing things yourself to supervising software that does them for you.
Global: Anthropic's Mythos Model Is Being Accessed by Unauthorized Users
Bloomberg reports that a small group of unauthorized users gained access to Anthropic's restricted Mythos model through a mix of contractor-linked access and online sleuthing.
Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems.
The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said.
The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub.
To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.
Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models.
Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said.
So what's the upshot for you?
The group has not run cybersecurity-related prompts on the Mythos model, the person reassuringly said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.
Global: Apple Stops Weirdly Storing Data That Let Cops Spy On Signal Chats
Apple has fixed a bug that could cause parts of Signal notifications to remain stored on iPhones even after messages disappeared and the app was deleted.
"Affected users concerned about push notifications can update their devices to stop what Apple characterized as 'notifications marked for deletion' that 'could be unexpectedly retained on the device,'" reports Ars Technica.
"According to Apple, the push notifications should never have been stored, but a 'logging issue' failed to redact data." Vulnerable users hoping to evade law enforcement surveillance often use encrypted apps like Signal to communicate sensitive information.
That's why users felt blindsided when 404 Media reported that Apple was unexpectedly storing push notifications displaying parts of encrypted messages for up to a month.
This occurred even after the message was set to disappear and the app itself was deleted from the device. 404 Media flagged the issue after speaking to multiple people who attended a hearing where the FBI testified that it "was able to forensically extract copies of incoming Signal messages from a defendant's iPhone, even after the app was deleted, because copies of the content were saved in the device's push notification database."
The shocking revelation came in a case that 404 Media noted was "the first time authorities charged people for alleged 'Antifa' activities after President Trump designated the umbrella term a terrorist organization." "We're grateful to Apple for the quick action here, and for understanding and acting on the stakes of this kind of issue," Signal's post said. "It takes an ecosystem to preserve the fundamental human right to private communication."
So what's the upshot for you?
In their post, Signal confirmed that after users update their devices, 'no action is needed for this fix to protect Signal users on iOS. Once you install the patch, all inadvertently-preserved notifications will be deleted, and no forthcoming notifications will be preserved for deleted applications.'
Global: OpenAI shakes up partnership with Microsoft, capping revenue share payments
OpenAI and Microsoft have reworked one of the most important partnerships in artificial intelligence, loosening ties that once gave Microsoft exclusive access to OpenAI's technology.
The updated agreement removes that exclusivity, allowing OpenAI to work with other major cloud providers and expand its enterprise reach. At the center of the change is money.
OpenAI will still share roughly 20 percent of its revenue with Microsoft through 2030, but those payments are now capped rather than open-ended.
At the same time, Microsoft will no longer share its own AI-related revenue with OpenAI, signaling a more balanced financial relationship between the two companies.
The shift also removes earlier constraints tied to artificial general intelligence, which had created uncertainty in the partnership. Microsoft remains a key partner and primary cloud provider, but OpenAI now has the flexibility to strike deals with companies like Amazon and Google as it scales its business and prepares for potential public markets.
For Microsoft, the move reflects a broader strategy to diversify its AI offerings rather than rely heavily on one partner. For OpenAI, it opens the door to faster growth and wider distribution, especially as competition intensifies across the AI sector and infrastructure demands continue to surge.
So what's the upshot for you?
The revised deal signals a shift from dependency to optionality, showing that in the AI economy, control over partnerships may matter just as much as control over technology.
And now to round it all up:
France's national identity agency confirmed a breach exposing up to 19 million records tied to passports, ID cards, and driver's licenses, with the stolen data already appearing on criminal forums. The deeper concern is that a similar dataset from the same agency surfaced in 2025, raising hard questions about whether the underlying vulnerabilities were ever truly resolved.
A malicious version of Bitwarden's CLI tool was briefly distributed through npm as part of the broader Checkmarx supply chain campaign, targeting developers and build pipelines rather than end users. No vault data was compromised, but the incident is a sharp reminder that in 2026, software security lives or dies in the spaces between the code you write and the code you depend on.
SentinelLabs research has surfaced FAST16.SYS, a rootkit that appears to predate Stuxnet and operated by silently corrupting precision calculations in memory while leaving files on disk completely untouched. It reframes what sophisticated sabotage actually looks like: not loud, not destructive, just quietly making every answer slightly wrong.
Researchers and threat actors are now deploying AI systems to probe, manipulate, and break other AI systems at a scale and speed no human red team can match. The implication is significant: the attack surface is expanding faster than defenses can be written, and the tool doing the probing learns as it goes.
Agent browsers like Interceptor allow AI to navigate, click, and complete tasks on your behalf, fundamentally shifting the browser from a tool you control to one that acts for you. The convenience is real, but so is the oversight gap: when software is doing the interacting, understanding exactly what it's doing and why becomes a security requirement, not just a preference.
A small group gained access to Anthropic's restricted Mythos model by combining contractor-linked credentials, open-source sleuthing, and data exposed in a breach at AI training startup Mercor. The incident is less about malicious intent and more about how third-party access chains create exposure that the primary organisation may not even be aware of until it's too late.
A logging bug in iOS was quietly retaining fragments of Signal push notifications for up to a month, even after messages were deleted and the app was removed, a flaw the FBI used to extract message content in a live criminal case. Apple has patched the issue and confirmed that updating to the latest iOS will delete any inadvertently retained data, but the episode exposed just how thin the line between ephemeral and permanent can be.
OpenAI and Microsoft have restructured their landmark partnership, removing exclusivity clauses and capping revenue share payments, while giving OpenAI the freedom to work with Amazon, Google, and other cloud providers. For anyone watching where AI infrastructure power is consolidating, this deal signals that the era of single-partner dependency is ending and a more competitive, distributed landscape is taking its place.
And our Quote of the week: "It's only when the tide goes out that you discover who's been swimming naked." Warren Buffett
That’s it for this week. Stay safe, stay secure, keep everything looking fine, and we’ll see you in se7en.