Patience and the AI, Privacy, and Security Weekly Update for the Week Ending March 31st, 2026
Episode 285.
This week, we uncover some long-term offensive strategies that show how, in an attacker's hands, the virtue of patience can do real damage to victims.
A China-aligned threat group is quietly weaponizing telecom infrastructure with kernel-level backdoors, turning carriers into long-term strategic listening posts.
A low-tech but highly effective social engineering campaign is turning everyday users into their own worst enemy by coaching them to execute the attacker's commands.
A popular AI gateway narrowly avoided a cascading supply-chain breach after compromised packages exposed just how fragile modern dependency chains have become.
A leaked cache of internal documents has forced Anthropic to confirm a powerful new model, spotlighting both its rapid progress and the operational risks of secrecy at scale.
New research shows that AI graders systematically diverge from human judgment, rewarding polish over depth and raising red flags for automated assessment in high-stakes settings.
The US Defense Department is pushing AI vendors onto a single contractual and ethical footing, signaling that military requirements will increasingly define how models can be used.
China’s latest Five-Year Plan elevates AI from a growth priority to a full-spectrum instrument of national power, blending industrial policy with geopolitical strategy.
And finally… the Meta–Manus deal has evolved into a geopolitical flashpoint, illustrating how cross-border AI acquisitions can collide head-on with state control and national security anxieties.
You don’t even have to be patient with these discoveries. Let’s go!
First, we look at the shift from breaking systems to manipulating people and hiding in plain sight.
CN: China-Linked Red Menshen Uses Stealthy BPFDoor Implants to Spy via Telecom Networks
A China-linked cyber espionage group known as Red Menshen has been quietly infiltrating global telecommunications networks, according to new research.
The campaign, active since at least 2021, focuses on embedding long-term access inside critical infrastructure to monitor sensitive communications, including those tied to governments and key industries.
Security researchers describe the operation as highly strategic, relying on stealth rather than disruption.
The group deploys 'sleeper cell' style implants that remain hidden for extended periods, allowing continuous intelligence gathering without alerting defenders.
This approach gives attackers visibility into vast amounts of data moving through telecom systems, which underpin modern digital communication.
At the center of the campaign is a Linux-based backdoor known as BPFDoor.
Unlike typical malware, it does not open visible network ports or maintain standard command channels.
Instead, it uses the kernel's Berkeley Packet Filter (BPF) mechanism, which gives the malware its name, to watch traffic before it ever reaches a listening service, and it activates only when a specially crafted "magic" packet arrives, making detection extremely difficult.
The broader toolkit includes credential harvesting tools and cross-platform control frameworks, enabling persistent access across compromised environments.
Researchers say the attackers can quietly move within networks and maintain control over long periods, reinforcing the campaign's focus on sustained espionage rather than immediate damage.
The findings reinforce growing concerns about state-backed cyber operations targeting telecom infrastructure, where a single breach can expose communications at a population level, turning unseen network access into a powerful intelligence advantage that operates far beyond the reach of traditional defenses.
So what's the upshot for you?
If you run or rely on network infrastructure, assume stealthy, long-term compromise is the real threat, not outages, so detection needs to move from what's noisy to what's quietly, patiently persistent.
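For the hunters among you, one practical starting point is to look for packet sockets with BPF filters attached, the footprint this class of implant tends to leave. Here's a minimal sketch in Python using standard iproute2 tooling; treat anything it surfaces as a lead to investigate, not a verdict.

```python
#!/usr/bin/env python3
"""Minimal hunting sketch: list packet sockets with BPF filters attached.

BPFDoor-style implants attach a BPF filter to a packet socket so the
kernel hands them only the attacker's "magic" packets. Legitimate users
of packet sockets exist too (dhclient, tcpdump, monitoring agents), so
treat hits as leads, not verdicts. Needs iproute2's `ss` and root.
"""
import subprocess

def packet_sockets_with_bpf() -> str:
    # -0: packet sockets, -p: owning process, -b: dump attached BPF bytecode
    result = subprocess.run(["ss", "-0pb"], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(packet_sockets_with_bpf())
    # Any owner here that is not a known sniffer or DHCP client deserves a closer look.
```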
Global: ClickFix Keeps Winning Because It Makes People Attack Themselves
Recorded Future identified five distinct ClickFix clusters targeting Windows and macOS users.
The technique tricks victims into manually running malicious commands through familiar tools like the Windows Run box or macOS Terminal.
The report highlights Apple's macOS 26.4 Terminal paste prompt as a practical mitigation against this style of attack, sometimes called pastejacking.
This is one of the nastier stories in the batch because it is so low-tech in spirit.
ClickFix does not need magical zero-days if it can just convince a person to paste the bad command themselves.
That is what makes it effective and maddening.
It hijacks that small everyday reflex of “the computer says do this, so I guess I'll do this”, and turns the victim into the delivery mechanism.
Grimly clever, honestly.
The bigger point: modern cybercrime increasingly succeeds not by smashing the lock, but by getting you to open the door because the sign looked official enough.
So what's the upshot for you?
If an attack requires you to paste a command, you are the vulnerability. Slow down whenever a computer tells you to “just run this”.
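To make the pastejacking trick concrete, here's a harmless sketch in Python, assuming the third-party pyperclip library is installed. On a real malicious page the same swap happens in JavaScript on the copy event; the lesson is that what you see and what you paste are two different strings.

```python
"""Harmless pastejacking illustration. Assumes the third-party pyperclip
package (`pip install pyperclip`); a real malicious page does the same
swap in JavaScript on the browser's copy event."""
import pyperclip

displayed = "brew install coolutility"  # the command the page appears to show
# What actually lands on the clipboard. The trailing newline matters:
# many terminals execute a pasted line the moment they see it.
actual = 'echo "you just ran something you never read"\n'

pyperclip.copy(actual)
print(f"You think you copied: {displayed!r}")
print(f"Your clipboard holds: {pyperclip.paste()!r}")
```

Paste into a plain text editor first and the trick collapses.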
Global: LiteLLM Dodged a Supply-Chain Nightmare, Barely
LiteLLM is a popular open-source gateway that helps developers work with many different AI models through one interface.
Malicious LiteLLM packages were discovered after a researcher's machine started failing, leading to a rapid investigation and PyPI quarantine.
LiteLLM's official incident page says the issue involved unauthorized PyPI package publishes, with evidence suggesting a maintainer account compromise, while the project says its main codebase remained safe.
The whole modern software stack is now so interconnected that one compromised component can ripple outward with terrifying efficiency.
In this case, the disaster was caught quickly enough to become a warning shot instead of a crater.
The AI boom is creating incredible new tools, but it is also creating a gold rush of fragile software plumbing underneath them. The plumbing is now part of the story.
So what's the upshot for you?
If you rely on open source AI tooling, your supply chain is now your attack surface. Pin, verify, and monitor or inherit someone else's compromise.
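What "pin, verify, and monitor" can look like in practice: the sketch below fails a CI run whenever an installed dependency drifts from what you pinned (package names and versions are illustrative placeholders, not recommendations). Pair it with pip's hash-checking mode (--require-hashes) so a tampered artifact can't install at all.

```python
"""Minimal dependency-drift check (Python 3.9+). Versions below are
illustrative placeholders, not recommendations: the point is that an
unexpected version, such as one pushed from a hijacked maintainer
account, fails loudly in CI instead of running quietly in production.
"""
from importlib.metadata import PackageNotFoundError, version

PINNED = {
    "litellm": "1.52.0",   # hypothetical pin
    "requests": "2.32.3",  # hypothetical pin
}

def check_pins(pins: dict[str, str]) -> list[str]:
    problems = []
    for name, expected in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if installed != expected:
            problems.append(f"{name}: expected {expected}, found {installed}")
    return problems

if __name__ == "__main__":
    issues = check_pins(PINNED)
    for issue in issues:
        print("PIN MISMATCH:", issue)
    raise SystemExit(1 if issues else 0)  # fail the CI job on any drift
```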
Next, our “AI reality check” segment: progress is real, but alignment, evaluation, and control are lagging badly.
US: Anthropic's Most Capable Model Yet Surfaces via an Accidental Leak
AI firm Anthropic is testing a new, more powerful artificial intelligence model with a small group of early users, following an accidental data leak that exposed details of the project.
The company described the system as its most capable to date, signaling a significant jump in performance.
The model's existence came to light after internal materials were found in a publicly accessible data cache.
The leaked documents, including a draft announcement, referred to the model as Claude Mythos and warned it could introduce unprecedented cybersecurity risks.
Researchers, including Roy Paz and Alexandre Pauwels, identified thousands of unpublished files tied to the company's blog that had been left exposed.
The materials also referenced a private executive summit aimed at expanding enterprise adoption.
Anthropic confirmed the exposure resulted from a configuration error in its content management system and said the documents were early drafts.
The company moved quickly to restrict access once notified.
Despite the leak, it acknowledged ongoing development and testing of a next-generation model with major improvements in reasoning, coding, and security capabilities.
The draft outlined a new tier of AI systems, internally referred to as Capybara, positioned above its current Opus models in both capability and cost.
According to the document, the system significantly outperforms earlier versions on technical benchmarks but remains expensive to operate and is not yet ready for broad release.
Anthropic is proceeding cautiously, limiting access while refining the model's deployment strategy.
The episode exposes both the rapid acceleration of AI capability and the operational risks surrounding its rollout, showing that as systems grow more powerful, the margin for error in how they are handled becomes increasingly narrow.
So what's the upshot for you?
If you're adopting cutting-edge AI, assume capability is outpacing control. Today's beta feature is tomorrow's security incident.
Global: LLMs Do Not Grade Essays Like Humans
A new academic study titled “LLMs Do Not Grade Essays Like Humans” finds that large language models evaluate writing in ways that diverge sharply from human judgment, raising concerns about their growing role in education.
The research analyzes how AI systems score essays compared to human graders and identifies systematic differences in what each values.
The authors show that language models tend to reward surface-level features such as structure, length, and fluency, often over deeper qualities like reasoning, originality, and argument strength.
Human graders, by contrast, weigh nuance and critical thinking more heavily.
This mismatch can lead to inflated scores for essays that appear polished but lack substance.
The study also highlights consistency issues.
While AI grading can appear objective, models may produce variable results depending on prompt phrasing or subtle input changes.
Human graders, though imperfect, tend to follow more stable evaluation criteria shaped by training and institutional standards.
Researchers further note that students could exploit these differences by tailoring essays to what AI systems favor, rather than demonstrating genuine understanding.
This creates a risk where optimization for machine scoring undermines the educational goal of developing critical thinking and depth.
The findings suggest that while AI can assist in grading workflows, it does not replicate human judgment in meaningful ways, signaling that relying on it without oversight may reward performance over substance in ways that ultimately distort how learning is measured.
So what's the upshot for you?
If you use AI for evaluation (hiring, grading, performance), you may be rewarding polish over substance, so keep a human in the loop or accept gaming as a feature.
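The divergence the study describes is cheap to measure before you trust a model with the grade book. A minimal sketch, assuming scipy is installed and using invented scores purely for illustration:

```python
"""Sketch: measure human/LLM grader agreement before automating grades.
Assumes scipy (`pip install scipy`); all scores are invented for illustration.
"""
from scipy.stats import spearmanr

human_scores = [4, 7, 9, 5, 6, 8, 3, 7]  # hypothetical human grades
llm_scores = [6, 7, 8, 7, 7, 8, 6, 7]    # hypothetical model grades (note the compression)

rho, p_value = spearmanr(human_scores, llm_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# Low or unstable correlation against your own rubric is the signal to
# keep humans in the loop rather than handing the model the grade book.
```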
US: The Pentagon says it's getting its AI providers on the same baseline
The Pentagon is moving to standardize how it works with major artificial intelligence providers, aiming to ensure all vendors operate under the same baseline expectations for military use.
Officials say the goal is straightforward: any approved AI system must be usable across all lawful defense applications, without restrictions imposed by individual companies.
This effort reflects growing friction between the Department of Defense and some AI firms, particularly over ethical guardrails.
While several companies have aligned with the Pentagon's requirements, at least one has resisted loosening limits on how its models can be used.
Defense officials argue that such constraints are incompatible with military needs, which often involve scenarios that companies may find controversial.
At the same time, the Pentagon maintains that it is still operating within established ethical frameworks for AI.
Leaders emphasize that existing principles guiding responsible use remain intact, even as they push for broader access and fewer vendor-imposed limitations.
The tension highlights a deeper divide between commercial AI governance and national security priorities.
The push for a unified baseline also signals a strategic shift.
The military is accelerating the adoption of advanced AI across classified and operational environments, seeking flexibility to deploy tools wherever needed.
Standardizing contracts and expectations reduces dependency on any single provider and limits the risk of capability gaps if a company withdraws support.
The result is a clearer message to the AI industry: participation in defense work increasingly requires alignment with government-defined use cases, not company-defined boundaries, a shift that places control of AI deployment firmly in the hands of the customer rather than the creator.
So what's the upshot for you?
If you build or buy AI, expect acceptable use to get overridden by your largest customer, especially if that customer has a military budget.
We’re not watching a tech race; we’re watching a restructuring of global power through technology.
CN: The Global Implications of China's 5-Year Plan AI Ambitions
China's newly unveiled 15th Five-Year Plan places artificial intelligence at the center of its economic and geopolitical strategy, signaling a decisive shift in how the country intends to compete globally.
The plan emphasizes embedding AI across nearly all sectors, from manufacturing to healthcare, as part of a broader push to transform China into a technology-driven economy and reduce reliance on foreign systems.
Rather than focusing only on cutting-edge breakthroughs, Beijing is prioritizing the large-scale deployment of AI throughout industry.
This includes automating factories, optimizing supply chains, and integrating robotics into everyday economic activity.
The strategy reflects a belief that widespread adoption, not just innovation, will determine leadership in AI, potentially redefining how global competition in technology is measured.
The plan also underscores a strong push for technological self-reliance.
Facing export controls and geopolitical tensions, China is investing heavily in domestic semiconductor production, computing infrastructure, and homegrown AI ecosystems.
This effort is designed to insulate its economy from external pressure while strengthening its position in critical supply chains that influence global markets.
Internationally, China is pairing its technological ambitions with a more assertive global posture.
The strategy aims to expand influence by offering AI-driven industrial capabilities and digital infrastructure to partner countries, while raising economic and strategic costs for those that resist alignment.
This approach suggests a future where technology ecosystems increasingly reflect geopolitical alliances.
Taken together, the plan reframes AI as both an economic engine and a tool of statecraft, indicating that the next phase of global competition will hinge less on who invents the most advanced systems and more on who integrates them most effectively across society.
So what's the upshot for you?
If you compete globally, you're not just competing on innovation anymore, you're competing against entire state-backed AI ecosystems.
CN: The least surprising chapter of the Manus story is what's happening right now
A fast-rising Chinese AI startup has become the center of a geopolitical standoff after its $2 billion sale to Meta.
Manus, which quickly gained attention for its advanced AI agent capabilities, had relocated from Beijing to Singapore and restructured itself before the acquisition.
The move was widely seen as an effort to distance the company from China's regulatory reach and align more closely with global markets.
Manus first drew interest with a demo showing AI handling complex tasks like hiring decisions, travel planning, and financial analysis.
It rapidly secured major funding and scaled to millions of users, generating over $100 million in annual recurring revenue within months.
The speed of its growth and the eventual acquisition by Meta reflected the intense competition among global tech firms to secure leading AI talent and capabilities.
The deal, however, triggered immediate scrutiny from Chinese authorities.
Reports indicate that Manus’s co-founders were called in by regulators and restricted from leaving the country while officials review whether the transaction violated foreign investment rules.
Beijing has described the process as routine, but the response demonstrates a pattern of tight oversight over strategically important technology companies.
China has long maintained strict control over its tech sector, particularly in areas tied to national competitiveness such as artificial intelligence.
The government has previously intervened forcefully when companies or executives appeared to step outside regulatory boundaries.
In this context, Manus's attempt to relocate and sell itself abroad fits into a three-way tug of war between innovation, capital mobility, and state control.
The situation focuses on how AI is no longer just a commercial race but a geopolitical one, where talent, capital, and control are tightly contested, and any company operating across borders must navigate not just markets but governments that see technology as a matter of national power.
So what's the upshot for you?
If you operate across borders in AI, your biggest risk isn't technical. It’s getting caught between governments that both think they own you.
So to sum up these stories:
In a world of quiet, long-term compromise, the real danger is what you never see on your dashboards. If you own or depend on critical networks, the lesson is clear: invest in deep, behavioral detection or accept that someone else may patiently be reading over your shoulder.
The ClickFix campaign works because it turns a split-second “just paste this” moment into a full-blown breach. If a workflow asks you to run commands you don’t fully understand, your safest move is to pause, verify, and refuse to be the attacker’s helping hand.
The LiteLLM incident is a reminder that your most trusted tools are only as safe as the packages and people behind them. If you build on open source, treat dependency hygiene, pinning, verification, and monitoring as core security practices.
The Anthropic leak shows how quickly frontier AI can move from “internal experiment” to “external risk” when basic operational safeguards fail. If you’re bringing in cutting-edge models, assume capabilities will outpace your controls unless you deliberately design for safety and governance first.
AI essay graders may look efficient, but they can quietly reward style over genuine understanding. If you’re tempted to automate evaluation, keep humans in the loop and be explicit about what you value, or be prepared for people to learn to optimize for the wrong thing.
The Pentagon says it’s getting its AI providers on the same baseline. This is the old story that the biggest customer often gets to rewrite the rules. If you build AI, understand that entering defense or other strategic markets means your “acceptable use” boundaries will be tested, and you need to mark out a clear red line well before you cross it.
China’s 5-Year AI playbook shows that AI advantage now comes from deployment at scale, not just breakthrough labs. If you compete globally, the lesson is to think in ecosystems, talent, chips, infrastructure, regulation, and patience, not isolated products.
The Meta/Manus story pointed out that in AI, your cap table and headquarters can matter as much as your model weights. If you operate across jurisdictions, plan for regulatory whiplash and treat geopolitical risk as a first-order design constraint, not an afterthought.
The quote of the week - “All we can do is make sure that technology becomes the ally and protector of peace, that we build better shields rather than sharper and more deadly swords.” - Ronald Reagan
That’s it for this week. Stay Safe, Stay Secure, Stay Patient, and we’ll see you in se7en.