Assumed Safe. The AI, Privacy, and Security Weekly Update for the Week Ending May 5th, 2026.

Episode 290.

This week, we assume nothing in our collection of stories...

A flaw hiding in plain sight for nearly a decade has quietly turned every Linux system's most trusted layer into an open door.

Attackers have discovered that the easiest way to install malware is to convince users the malware is the cure.

A new phishing kit is lowering the barrier to industrial-scale credential theft to roughly the cost of a Netflix subscription.

Ransomware didn't slow down in Q1 2026; it mutated, and the new strain doesn't even need encryption to extort you.

The most methodical fraud playbook circulating underground right now doesn't involve a single line of malicious code.

A teenager with a forum alias just handed a third of France's population an identity problem they didn't ask for.

Six of the world's most serious cybersecurity agencies just issued a unified warning that most organizations deploying agentic AI are not ready for what they've built.

A new paper argues that the discipline meant to stress-test AI safety has itself become the thing it was designed to find: a vulnerability dressed up as a control.

The arc runs from infrastructure to brand to process to institution to the security function itself. Each story is a different flavor of the same failure: someone trusted something they shouldn't have, or built a system that assumed others would.

Let's go verify!


Global: One tiny exploit gives full Linux access: all kernels since 2017 are vulnerable

A newly disclosed Linux kernel flaw is raising alarms after researchers said every kernel version released since 2017 may be vulnerable to privilege escalation.

The bug, tracked as CVE-2026-31431 and nicknamed 'Copy Fail,' can let an ordinary user gain full root access, giving them the highest level of control over a system.

Researchers said the exploit is unusually small and simple, requiring only a short script to work across multiple major Linux distributions.

Tests reportedly succeeded on Ubuntu, Amazon Linux, Red Hat Enterprise Linux, and SUSE, with experts warning that Debian, Fedora, Arch, and others could behave similarly.

The flaw stems from how the kernel handles data in memory through its page cache.

Attackers can reportedly alter the cached version of trusted system files, such as the command used to switch users, allowing modified code to run with elevated privileges while leaving the original file unchanged on disk.

Security teams are especially concerned about containers and shared cloud systems.

Because Kubernetes and Docker workloads rely on the host kernel, a compromise inside one container could potentially spread to the wider machine and neighboring workloads.

That makes the issue more serious than a typical local-only bug.
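Since the patched kernel versions vary by distribution, the first practical step is comparing your running kernel against your distro's advisory. A minimal sketch of that check follows; the version threshold shown is a placeholder, not the real patched minimum, which you should take from your distribution's security advisory.

```python
# Minimal sketch: compare the running kernel release against a patched
# minimum from your distribution's advisory. The threshold below is a
# hypothetical placeholder -- consult your distro's advisory for the
# real number.
import platform
import re


def parse_release(release: str) -> tuple:
    """Extract the leading numeric components of a kernel release string."""
    match = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not match:
        raise ValueError(f"unrecognized kernel release: {release!r}")
    return tuple(int(part or 0) for part in match.groups())


def is_at_least(release: str, minimum: tuple) -> bool:
    """True if the running kernel meets or exceeds the patched minimum."""
    return parse_release(release) >= minimum


PATCHED_MINIMUM = (6, 12, 9)  # hypothetical -- not the real fix version

if __name__ == "__main__":
    current = platform.release()
    try:
        ok = is_at_least(current, PATCHED_MINIMUM)
        verdict = "at or above" if ok else "below"
        print(f"Kernel {current} is {verdict} the assumed patched minimum.")
    except ValueError:
        print(f"Could not parse kernel release: {current}")
```

For container fleets, the same check matters only on the host: every container on a node shares that node's kernel, so one unpatched host exposes all of its workloads.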

So what's the upshot for you?

Patches are available through Linux maintainers.

The most dangerous cyber risks are often not loud attacks from outside, but quiet flaws already sitting inside trusted systems.

Global: Bogus Avast website fakes virus scan, installs Venom Stealer instead

A new cyber scam is exploiting trust in well-known antivirus software by impersonating Avast, according to researchers at Malwarebytes.

The fake website looks convincing and even runs what appears to be a system scan.

It tells users their computers are infected, then urges them to download a fix.

That fix is not security software, but malware designed to quietly steal sensitive data.

The attack follows a familiar pattern.

The site simulates a scan, reports threats, and offers a solution that appears legitimate.

Once downloaded, the file installs Venom Stealer, a type of malware built to extract passwords, browser data, and cryptocurrency wallet information.

The tactic blends fear and brand recognition to push users into acting quickly without verifying the source.

After installation, the malware disguises itself inside system folders to avoid detection.

It mimics legitimate software processes and uses technical tricks to bypass many antivirus tools.

At the time of analysis, most security engines failed to flag it, allowing it to operate largely unnoticed while collecting data in the background.

The scope of data theft is extensive.

The malware pulls saved credentials and session cookies from browsers, which can let attackers access accounts without needing passwords, even if two-factor authentication is enabled.

It also targets cryptocurrency wallets and captures screenshots, packaging all of this data and sending it to remote servers controlled by attackers.
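The reason stolen session cookies defeat even two-factor authentication is that the server only sees the Cookie header, not how the value was obtained. A minimal sketch of the attacker's side, using a hypothetical URL and cookie name:

```python
# Sketch of why stolen session cookies bypass login entirely: the server
# sees only the Cookie header, not how the value was obtained. The URL
# and cookie name here are hypothetical placeholders.
from urllib.request import Request

STOLEN_COOKIE = "session_id=abc123stolen"  # value lifted from a victim's browser

# The attacker builds an ordinary request carrying the victim's session.
req = Request("https://example.com/account")
req.add_header("Cookie", STOLEN_COOKIE)

# From the server's perspective this request is indistinguishable from the
# victim's own browser -- no password or 2FA prompt is ever triggered.
print(req.get_header("Cookie"))
```

This is why post-breach response has to include invalidating active sessions, not just rotating passwords: the password was never part of the replayed request.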

Researchers say this campaign reflects a broader trend in cybercrime, where attackers reuse proven playbooks that combine convincing design with automated data theft.

The takeaway is clear: in today's threat landscape, the most dangerous attacks are not the most technical ones, but the ones that look just real enough to make you do the work for them.

So what's the upshot for you?

The most dangerous malware in 2026 still comes with a 'click here to stay safe' button.

Global: Just when you thought you had the upper hand... Meet the Bluekit phishing kit, enabling automated phishing with 40+ templates and AI tools

A newly discovered phishing kit called Bluekit is making waves in cybersecurity circles.

Identified by Varonis Threat Labs, the kit is still in active development but already packs a serious feature set: over 40 website templates, automated domain registration, anti-bot protection, geolocation emulation, and two-factor authentication bypass capabilities.

It also offers add-ons, including voice cloning and a built-in AI assistant.

Bluekit's template library targets a wide range of platforms, including iCloud, Gmail, Outlook, GitHub, ProtonMail, and cryptocurrency services like Ledger.

That breadth makes it a flexible tool, covering email, cloud, developer, and financial platforms all under one roof.

The kit's dashboard functions as a centralized command center.

Operators can build campaigns, register domains, manage harvested credentials, and route stolen data directly to Telegram.

A site-builder lets attackers select templates and target brands while configuring login detection, device filtering, and anti-analysis measures.

Bluekit also includes an AI assistant panel with model options such as Llama, GPT-4.1, Claude Sonnet, Gemini, and DeepSeek.

In testing, however, the AI produced only rough campaign drafts filled with placeholders rather than polished, ready-to-deploy phishing content, suggesting the AI component is more of a scaffolding tool than a finished attack engine at this stage.

Researchers note that Bluekit's rapid development pace is itself a concern.

New features and templates are being added frequently, and if adoption grows alongside that evolution, the kit is expected to appear in real-world campaigns soon.

The takeaway here is direct: a phishing kit that is still rough around the edges today can become a serious operational threat by next quarter, making early detection and awareness your most practical advantage.

So what's the upshot for you?

A phishing kit with 40 templates, AI assistance, and a Telegram dashboard is now cheaper and easier than your last security awareness training.

Global: Ransomware Is Getting Uglier As Cybercriminals Fake Leaks and Skip Encryption Entirely

https://reliaquest.com/blog/threat-spotlight-ransomware-and-cyber-extortion-in-q1-2026/

'Ransomware activity jumped again in Q1 2026,' with 2,638 victim posts on leak sites, up 22% year over year, according to a report from cybersecurity company ReliaQuest.

But the bigger shift is how messy the ecosystem has become.

Established groups like Akira and Qilin are still active, while newer players like The Gentlemen surged into the top tier with a 588% spike in activity.

At the same time, questionable leak sites such as 0APT and ALP-001 are muddying the waters by posting possibly fake breach claims, forcing companies to investigate incidents that may not even be real.

Meanwhile, actors like ShinyHunters are showing that ransomware does not always need encryption anymore.

By targeting identity systems and SaaS platforms, attackers can steal data using legitimate access, often through phishing or even phone-based social engineering, and then extort victims without deploying traditional malware.

With a record 91 active leak sites and faster attack timelines, the report suggests defenders should focus less on tracking specific groups and more on stopping common tactics like credential theft, remote access abuse, and large-scale data exfiltration.

So what's the upshot for you?

When attackers don't need to break in because you handed them the keys through phishing, your firewall is just expensive furniture.

US: Fraudsters Are Not Hacking Credit Unions. They Are Simply Applying for Loans.

Cybersecurity firm Flare recently uncovered a detailed loan fraud playbook circulating in underground forums.

What makes it notable is what it does not involve: no malware, no exploits, no system breaches.

Instead, threat actors are using stolen identities to navigate standard loan application workflows from start to finish, moving through credit checks, identity verification, and fund disbursement as if they were legitimate borrowers.

Small to mid-sized credit unions are the explicit target.

Fraudsters perceive these institutions as more reliant on traditional verification methods, with limited resources dedicated to fraud detection.

The structured approach involves selecting a target, submitting a consistent stolen identity application, passing knowledge-based authentication, securing loan approval, and rapidly moving funds through intermediary accounts.

What sets this apart from typical fraud is how methodical it has become.

Discussions in underground communities reflect an organized, process-driven approach where the fraud method is broken down step by step and designed to be consistently replicated.

The focus is not on breaking security systems but on blending into normal business operations entirely.

This matters beyond credit unions.

The same logic applies broadly across financial workflows where identity verification relies on data consistency rather than genuine validation.

As stolen credential markets grow and synthetic identity fraud matures, any institution using legacy onboarding processes faces similar exposure.

So what's the upshot for you?

The fraudsters didn't hack your system; they read your process documentation and followed it better than your staff.

FR: French Prosecutors Link 15-Year-Old To Mega-Breach At State's Secure Document Agency

French prosecutors say police detained a 15-year-old suspected of using the alias 'breach3d' in connection with a cyberattack on France Titres (ANTS), the state agency that handles passports, ID cards, and other secure documents.

The breach allegedly involved 12 million to 18 million lines of data offered for sale online, potentially affecting up to a third of France's population if the records are unique.

Prosecutors formally opened a judicial investigation on April 29, covering alleged fraudulent access to a state-run automated data processing system and the extraction of data from it.

Each offense carries a potential seven-year prison sentence and a maximum fine of roughly $350,000.

Public Prosecutor Laure Beccuau has requested that the minor, whose name and pronouns were not disclosed, be formally charged and placed under judicial supervision.

France's approach to punishing minors via its legal system is typically geared toward re-education and rehabilitation rather than prison time.

While those aged 13 to 16 can face time in juvenile detention, it is typically used only as a last resort.

The maximum sentences and fines for the charges this 15-year-old faces are upper limits for adult offenders, and would likely be reduced substantially for a minor.

So what's the upshot for you?

12 million government records, a teenager, and a forum username... your threat model needs a lower age floor.

AU: Careful adoption of agentic AI services

Six cybersecurity agencies from across the Five Eyes alliance, including CISA, the NSA, and their counterparts in Australia, the UK, Canada, and New Zealand, jointly released guidance on May 1, 2026, titled 'Careful Adoption of Agentic AI Services.'

The guidance outlines key security challenges and risks associated with agentic AI, and provides actionable steps for designing, deploying, and operating these systems safely.

The publication comes as businesses race to integrate agentic AI into core workflows.

Agentic AI can be misused or misappropriated, leading to productivity losses, service disruption, privacy breaches, or cybersecurity incidents.

Agentic AI systems insert data from tools and memory bases into the context window of LLM agents, greatly expanding the attack surface that malicious actors can exploit, particularly through prompt injection attacks.

Unlike traditional systems, errors in agentic pipelines can cascade across multiple steps and compromise the entire system.

Privilege risks are a key concern, and strict adherence to the principle of least privilege is critical.

Privileges assigned to agents directly determine the level of risk they can introduce.

Developers are advised to construct each agent as a distinct principal with a cryptographically anchored identity, its own unique keys or certificates, and strong identity management mechanisms.
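The guidance calls for per-agent keys or certificates; as a dependency-free illustration of the underlying idea, the sketch below uses per-agent HMAC secrets instead of asymmetric keys, so every action is attributable to exactly one agent. The agent names and actions are hypothetical.

```python
# Minimal sketch of treating each agent as a distinct principal. The
# guidance calls for keys or certificates per agent; this dependency-free
# illustration substitutes per-agent HMAC secrets for asymmetric keys.
import hashlib
import hmac
import secrets

# Each agent receives its own secret at provisioning time -- never shared.
AGENT_KEYS = {
    "research-agent": secrets.token_bytes(32),
    "billing-agent": secrets.token_bytes(32),
}


def sign_action(agent_id: str, action: bytes) -> str:
    """Sign an action so it is attributable to exactly one agent."""
    return hmac.new(AGENT_KEYS[agent_id], action, hashlib.sha256).hexdigest()


def verify_action(agent_id: str, action: bytes, signature: str) -> bool:
    """Verify in constant time that this agent produced this action."""
    expected = sign_action(agent_id, action)
    return hmac.compare_digest(expected, signature)


if __name__ == "__main__":
    sig = sign_action("research-agent", b"read:knowledge-base")
    print(verify_action("research-agent", b"read:knowledge-base", sig))
    # A different agent's identity cannot validate the same signature.
    print(verify_action("billing-agent", b"read:knowledge-base", sig))
```

The design point is separation: because no two agents share a key, a compromised agent cannot impersonate its neighbors, and audit logs can attribute every action to a single principal.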

Security begins at the design stage.

Practitioners should understand threats, anticipate risks, and proactively integrate mitigations before development and deployment.

A progressive deployment approach aims to limit initial risk until operators are more familiar with the limitations of the system, with configurations set to fail-safe by default, requiring agents to stop and escalate issues.

Human oversight throughout is treated as non-negotiable.

So what's the upshot for you?

'Strong governance, explicit accountability, rigorous monitoring, and human oversight are not optional safeguards but essential prerequisites.'

Until security practices and standards mature, organisations should assume agentic AI may behave unexpectedly, prioritising resilience and reversibility over efficiency.

For security and risk professionals, this guidance is less a suggestion and more a baseline; organisations that treat it as optional are already behind.


Global: The Rise of the AI Red Teamer

A new academic paper published on arXiv argues that AI red teaming, as it is currently practiced, is falling short of its own purpose.

Researchers Subhabrata Majumdar, Brian Pendleton, and Abhishek Gupta contend that while the practice has gained real traction in AI governance circles, it has drifted far from its roots as a rigorous, adversarial critical thinking discipline.

The central problem, according to the authors, is scope.

Most AI red teaming efforts today concentrate on finding flaws in individual models, such as jailbreaks or harmful outputs.

That is a narrow lens.

The paper argues that this approach misses the bigger picture: the complex, emergent risks that arise when AI models interact with real users and real environments over time.

To address this, the researchers propose a two-tiered framework.

The first tier operates at the macro level, covering the full AI development lifecycle from design through deployment.

The second tier targets micro-level model evaluation.

Together, the authors argue, this structure better reflects how AI actually fails in practice, which is rarely in isolation.

The paper draws heavily on cybersecurity principles and systems theory, fields where adversarial thinking has been applied with considerably more discipline and depth.

The authors make the case that AI governance has borrowed the terminology of red teaming without adopting its rigor, and that closing this gap requires genuinely multifunctional teams rather than narrow technical testers.

So what's the upshot for you?

If your organization is using AI red teaming as a compliance checkbox rather than a systems-level stress test, you are likely producing a false sense of security that the next real-world failure will expose.


Assume this is our end-of-update roundup!

Linux Copy Fail: The most dangerous vulnerabilities aren't the ones attackers invent, they're the ones that have been quietly waiting inside systems you already trust. Patch now, and treat your kernel with the same skepticism you'd apply to anything arriving from outside your perimeter.

Fake Avast / Venom Stealer: Brand recognition has become a weapon, and the more trusted the name, the more effective the trap. Before clicking anything that tells you you're in danger, verify the source independently. Fear is the oldest social engineering tool in the book.

Bluekit Phishing Kit: Phishing has graduated from a nuisance to a scalable, AI-assisted production line, and the kit doing it is still being refined. The window to get ahead of Bluekit is now, before it matures from rough prototype into a mainstream attack platform.

Encryption-Free Extortion: Ransomware no longer needs to break your defences when it can simply borrow your credentials and walk through the front door. Shift your focus from perimeter defence to identity hygiene; stopping the key handoff is now more important than stopping the breach.

Credit Union Loan Fraud: When fraud is indistinguishable from a legitimate transaction, the process itself is the vulnerability. Any institution still relying on data consistency as a proxy for identity verification needs to treat that assumption as an open risk, not a working control.

French Government Breach: Tens of millions of identity records compromised by a minor with a script is not an edge case; it is a stress test result, and the system failed. If a state agency handling national identity documents can be breached this way, the question every organisation should be asking is what their equivalent assumption of safety actually rests on.

Five Eyes Agentic AI Guidance: When six allied intelligence and cybersecurity agencies align on a warning, it is not a recommendation; it is a baseline that already reflects the minimum acceptable standard. Organisations deploying agentic AI without governance frameworks, least-privilege controls, and genuine human oversight are not innovating; they are assuming safety they haven't earned.

AI Red Teaming: Borrowing the language of rigorous security practice without adopting its discipline produces something more dangerous than no process at all: false confidence. If your AI red teaming effort wouldn't survive scrutiny from a serious adversarial thinker, it is time to rebuild it from the threat model outward, not the compliance checklist inward.


And our quote of the week - "Security is mostly a superstition. It does not exist in nature." - Helen Keller 


That’s it for this week. Stay safe. Stay secure. And don’t assume anything... except that we’ll expect to see you in se7en!



