In AI, We Distrust. The IT Privacy and Security Weekly Update for March 17th, 2026

Episode 283.

What if the next cyberattack doesn’t break into your company… but gets hired by it?

What if the AI tools everyone is rushing to adopt are also the biggest security risk we’ve ever invited in?

What if your home internet, the one you trust every day, is secretly working for someone else?

And what happens when governments, tech giants, and criminals all collide around the same powerful new technology at the same time?

Today, we’re diving into the rise of AI agents, the hidden risks behind the hype, and why the biggest question isn’t what this technology can do…

…but whether we can trust it at all.


Global: Nvidia Bets On OpenClaw, But Adds a Security Layer Via NemoClaw

If OpenClaw is the wild west of AI agents…

Nvidia just showed up wearing a sheriff’s badge.

Over the past few months, AI agents have exploded in popularity.

Tools like OpenClaw can read your files, write code, send emails, and automate real work. All on your machine.

That’s powerful.

It’s also a little terrifying for companies.

Because giving an AI that much access is basically handing over the keys to your digital office.

And as we’ve seen, that can lead to unpredictable behavior, data exposure, or worse.

Enter Nvidia.

At its massive GTC conference, the company unveiled NemoClaw. A platform designed to take that same powerful agent technology and wrap it in something businesses actually trust.

Think of it as turning a brilliant but chaotic intern into a fully supervised employee.

It runs inside a controlled sandbox

Every action can be logged and audited

Sensitive data can stay on local machines instead of the cloud
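To make that guardrail idea concrete, here is a minimal sketch of the pattern being described: every agent action passes through a policy check and leaves an audit record. This is a hypothetical illustration in Python, not Nvidia's actual NemoClaw API; the function and policy names are invented for the example.

```python
import time

# Hypothetical guardrail layer: NOT Nvidia's actual NemoClaw API.
# It illustrates the pattern above: every agent action is checked
# against a sandbox policy, and every attempt is logged for audit.

ALLOWED_ACTIONS = {"read_file", "write_draft"}  # the sandbox policy
AUDIT_LOG = []  # in production this would be append-only storage

def run_agent_action(action: str, payload: dict) -> str:
    """Execute an agent action only if policy allows it; log everything."""
    record = {"ts": time.time(), "action": action, "payload": payload}
    if action not in ALLOWED_ACTIONS:
        record["result"] = "BLOCKED"
        AUDIT_LOG.append(record)
        return "BLOCKED"
    record["result"] = "OK"
    AUDIT_LOG.append(record)
    return "OK"

print(run_agent_action("read_file", {"path": "notes.txt"}))     # OK
print(run_agent_action("send_email", {"to": "x@example.com"}))  # BLOCKED
```

The point of the design is that blocked attempts are logged too: the audit trail shows what the agent tried to do, not just what it was allowed to do.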

And importantly, Nvidia isn’t trying to replace OpenClaw. They’re building on top of it.

That’s the key move.

Instead of fighting the agent revolution, Nvidia is trying to industrialize it. Making it safe, manageable, and scalable for companies that can’t afford surprises.

Because in the enterprise world, the question isn’t:

Can this AI do amazing things?

It’s:

Can we trust it not to do the wrong thing?


So what's the upshot for you?

This story connects directly to everything we’re seeing across the AI landscape.

China restricting OpenClaw

Companies worried about data exposure

Governments debating control and sovereignty

Nvidia’s move highlights the next phase.

AI is shifting from experimentation to infrastructure.

For a general audience, the takeaway is clear.

The future of AI won’t just be about smarter tools. It will be about controlled, accountable systems that organizations can safely rely on.

Or put simply.

The companies that win in AI won’t just build the most powerful models. They’ll build the ones people are willing to trust.

CN: China Moves To Curb OpenClaw AI Use At Banks, State Agencies

Chinese authorities are restricting use of OpenClaw AI apps across government agencies and state-owned enterprises.

Major entities, including state-run banks and government offices, have been told not to install the software on work devices.

Some restrictions extend further.

Employees banned from installing on personal phones connected to corporate networks

In some cases, even families of military personnel are included

Organizations must report prior installations for review and possible removal

The concern.

OpenClaw requires broad system access and external communication, raising data leakage and attack risks

At the same time, China is seeing a massive surge in adoption and investment.

Tech giants like Tencent and Alibaba are deploying versions

Local governments are offering multi-million-yuan subsidies

AI startup MiniMax surged 640 percent in value, now rivaling major incumbents

This is one of those stories where the contradiction is the headline.

On one side, you have a full-blown AI gold rush.

Companies are racing to launch products.

Investors are piling in.

Local governments are handing out subsidies to anyone building on top of this new agentic AI platform called OpenClaw.

One startup, MiniMax, has skyrocketed in value, leapfrogging some of China’s biggest tech names in just weeks.

It’s hype, speed, and momentum.

And then, almost simultaneously, you have the Chinese government stepping in and saying:

Not so fast.

Behind the scenes, agencies and state-owned companies are being quietly told to stop installing OpenClaw on work machines.

Some are being asked to report if they’ve already installed it.

Others are banning it outright. Not just on office devices, but even on personal phones connected to company networks.

Why?

Because OpenClaw isn’t just another app.

It’s an agentic AI system. Software that doesn’t just respond to commands, but can act, connect, retrieve, and execute tasks across systems.

To do that, it often needs deep access to data, systems, and external networks.

That’s powerful.

But it’s also exactly what makes it risky.

From a security perspective, it’s like installing an incredibly capable assistant… who can read your files, send messages outside your organization, and make decisions on your behalf.

So now you have two forces colliding in real time.

Innovation pressure pushing adoption as fast as possible

Security and national risk concerns trying to slow things down

And they’re happening at the same time, in the same country, around the same technology.


So what's the upshot for you?

This story is bigger than China.

It highlights a global tension we’re just starting to feel.

The more useful AI becomes, the more access it requires. And the more dangerous it becomes if misused.

For a general audience, the takeaway is straightforward.

Agentic AI isn’t just smarter software.

It’s software that acts on your behalf. Which means it needs trust, permissions, and oversight at a completely different level.

For organizations, this is a preview of what’s coming.

New tools that promise massive productivity gains

But require deep integration into sensitive systems

And introduce risks that traditional security models weren’t designed for

The balanced, professional takeaway.

Before adopting powerful AI tools, the question isn’t just What can this do for us?

It’s: What are we giving it access to? And are we comfortable with that?

Because in this new wave of AI, capability and risk are rising together.

US: Meta Buys Moltbook and Bets on AI Agents

Meta acquired Moltbook, a social platform where AI agents interact with one another.

Moltbook’s creators, Matt Schlicht and Ben Parr, are joining Meta Superintelligence Labs.

The deal appears to be at least as much about talent as product.

This story feels like a postcard from the near future.

Moltbook was weird, experimental, and exactly the kind of thing a giant platform might once have ignored.

But in the AI race, weird is now strategic.

Meta is not just buying software here. It is buying imagination, velocity, and two people who were already building in a space most of the market is still trying to define.

The cheerful version of this story is that big tech still knows how to spot a frontier.

The less cheerful version is that every frontier gets industrialized eventually.


So what's the upshot for you?

The next wave of consumer technology may not be apps you use directly, but software agents that act, negotiate, search, and make decisions on your behalf.

Global: AI Fever Is Fueling a Malware Gold Rush

Security researchers saw fake pages and ads impersonating tools like Claude Code and other AI agents.

Some campaigns used lookalike documentation pages and malicious installation instructions.

The broader pattern is clear.

If a tool is hot, criminals will build a fake version before the hype cycle cools.

This one is less about one brand and more about a law of the internet.

Wherever excitement goes, malware follows.

AI tools are the hottest thing in computing, so naturally criminals are creating counterfeit roads leading straight to them.

The trap works because the victim is not doing anything obviously reckless.

They are doing what the modern web trained them to do. Search, click the top result, follow the instructions.

That is what makes this wave especially effective.

It piggybacks on curiosity, urgency, and the fear of missing out on the next big thing.


So what's the upshot for you?

New and exciting should now automatically trigger verify twice.

Hype is a security signal.


Global: Downloading a VPN? You Might Be Installing Malware Instead

Microsoft says Storm-2561 used search-engine poisoning to lure people looking for VPN software.

Victims were redirected to spoofed sites and malicious GitHub-hosted ZIP files that installed signed trojans.

The goal was to steal credentials and VPN-related access data.

This story is a perfect snapshot of modern internet danger.

The attacker does not break down your door. They rent a billboard near the front entrance.

People searching for legitimate VPN software were steered by poisoned search results toward polished fake sites and booby-trapped downloads.

The old internet lesson was do not click suspicious links.

The new lesson is nastier.

Even the link that looks polished, sponsored, and professionally branded may be setting you up.

That is not user carelessness.

That is the industrialization of deception.


So what's the upshot for you?

Treat search results as advertisements first and truth second.

For security-sensitive downloads, start from the vendor’s official site, not from a search page.
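One concrete safeguard, when a vendor publishes one, is comparing the file you downloaded against the vendor's published checksum. Here is a minimal Python sketch of that check; the file bytes and hash are illustrative placeholders, and this assumes the vendor publishes a SHA-256 value on its official site.

```python
import hashlib

# Minimal sketch: verify a downloaded installer against the SHA-256
# checksum the vendor publishes on its official site. The bytes below
# stand in for a real file; the hash is computed, not a real product's.

def sha256_of(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, published_hash: str) -> bool:
    """True only if the file matches the vendor-published checksum."""
    return sha256_of(data) == published_hash.lower()

installer = b"example installer bytes"   # stand-in for the downloaded file
good_hash = sha256_of(installer)         # what the vendor would publish

print(verify_download(installer, good_hash))                # True
print(verify_download(installer + b"tampered", good_hash))  # False
```

A checksum only helps if you get it from the genuine vendor page, which is exactly why starting from the official site, not a search result, matters.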

DPRK/US: How One Company Finally Exposed North Korea's Massive Remote Workers Scam

North Korea is running a global scheme placing fake remote IT workers inside Western companies.

Operatives use stolen or fabricated identities to pass as U.S.-based developers and engineers.

A U.S. security firm (Nisos) uncovered one suspect by running a controlled hiring sting with a monitored laptop.

The broader operation funnels millions (even hundreds of millions) of dollars back to North Korea’s regime.

In some cases, these workers steal data, extort companies, or enable future cyberattacks once inside.

This story reads like a corporate thriller. But it’s real, and it’s happening right now.

A candidate applies for a remote engineering job.

The résumé checks out.

The interview is decent.

The location says Florida.

Everything looks normal. Until it doesn’t.

That’s when a U.S. security firm decided to lean in instead of walking away.

They hired the suspicious candidate on purpose, handed over a locked-down laptop, and watched what happened next.

What they uncovered wasn’t just one fake employee. It was a window into a much larger operation.

Investigators believe the employee was part of a North Korean network designed to quietly infiltrate Western companies.

These workers log in remotely, collect real salaries, and send most of that money back to Pyongyang.

But the money is only part of the story.

Once inside, these operatives can access internal systems, sensitive code, and proprietary data.

In some cases, they’ve been linked to data theft and even extortion. Turning a simple hiring mistake into a full-blown security incident.

The wild part?

Many of these hires look completely legitimate.

They show up on LinkedIn.

They pass interviews.

They blend in.

And they’re doing it at scale.

This is not hacking in the traditional sense.

It’s infiltration through trust.


So what's the upshot for you?

For companies, especially those hiring remote workers, this shifts the focus from just technical security to identity verification, hiring controls, and insider risk.

Background checks, live verification steps, and even basic does this person actually exist where they say they do questions are becoming critical security controls.

For everyone else, it’s a reminder of how dramatically the nature of cyber threats has evolved.

Sometimes the attacker doesn’t need to hack your system, they just need to take your job.

Global: 14,000 Routers Hijacked - And They’re Built to Never Go Away

Researchers uncovered a botnet of roughly 14,000 infected routers and edge devices, many of them Asus models.

The malware, dubbed KadNap, turns devices into a proxy network used for cybercrime traffic.

It spreads by exploiting known but unpatched vulnerabilities, not cutting-edge zero-days.

The botnet uses a peer-to-peer (Kademlia-based) architecture, hiding its command infrastructure.

This decentralized design makes it highly resistant to takedowns, unlike traditional botnets.

Infected routers can persist even after reboot, requiring factory reset plus patching to fully remove.

This story starts with a number that sounds big. But not terrifying. 14,000 routers.

Then you realize what those routers are doing.

They’re not crashing.

They’re not blinking red.

They’re not obviously broken at all.

Instead, they’re quietly working… just not for their owners.

Researchers discovered that thousands of everyday home and small business routers have been turned into a kind of invisible relay network.

Think of it like a criminal rideshare system for internet traffic. Where bad actors can route their activity through your connection to hide where it’s really coming from.

And here’s the clever part.

This malware doesn’t rely on a central command server that law enforcement can just shut down.

It uses a peer-to-peer design, meaning each infected router helps keep the network alive.

Take one down?

The others keep going.

Take down a hundred?

Still running.

It’s the difference between cutting the head off a snake… and dealing with something that doesn’t have a head at all.
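That headless-snake property can be shown with a toy graph simulation. This is not KadNap's real Kademlia protocol, just an illustration of why a decentralized mesh survives takedowns that kill a centralized botnet.

```python
# Toy illustration (not KadNap's actual protocol): compare how a
# centralized botnet and a peer-to-peer mesh survive node takedowns.

def reachable(adjacency: dict, start: str, removed: set) -> set:
    """Nodes still reachable from `start` after `removed` are taken down."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in removed or node in seen:
            continue
        seen.add(node)
        stack.extend(adjacency.get(node, []))
    return seen

# Centralized: every bot talks only to one command server, "C2".
central = {"C2": ["b1", "b2", "b3"], "b1": ["C2"], "b2": ["C2"], "b3": ["C2"]}

# Peer-to-peer: every bot relays for its neighbors (a simple ring here).
mesh = {"b1": ["b2", "b4"], "b2": ["b1", "b3"],
        "b3": ["b2", "b4"], "b4": ["b3", "b1"]}

# Seize the command server and the centralized network collapses:
print(reachable(central, "b1", removed={"C2"}))  # {'b1'}
# Seize one peer and the mesh stays connected:
print(reachable(mesh, "b1", removed={"b3"}))     # {'b1', 'b2', 'b4'}
```

In the centralized case one seizure isolates every bot; in the mesh, the surviving peers simply route around the loss, which is the takedown resistance researchers describe.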

Even more frustrating.

This isn’t exploiting cutting-edge, undisclosed vulnerabilities.

It’s mostly using old, known bugs that were never patched.

In other words, the attackers didn’t need brilliance. Just patience.

And maybe the most unsettling detail.

In many cases, simply rebooting your router won’t fix it.

The malware is designed to come back unless you take more drastic action.

So your perfectly normal-looking internet connection might actually be part of someone else’s operation. Right now.


So what's the upshot for you?

This story lands squarely in everyday life.

For most people, routers are set it and forget it devices.

But this is a reminder that they are actually internet-facing computers. And often the least maintained ones in your home or business.

The practical takeaway is simple and non-alarmist.

Keep router firmware updated

Disable remote access unless absolutely needed

Change default credentials

And occasionally treat your router like a device that needs attention. Not a cable box from 2005

More broadly, this story highlights a shift in cybercrime.

Attackers are no longer just targeting your data. They’re targeting your infrastructure.

Because sometimes, the most valuable thing you have is not your password.

It’s your IP address.

CA: OpenAI Has Shown It Cannot Be Trusted. Canada Needs Nationalized, Public AI

Who should control the most powerful technology of the next decade?

Bruce Schneier, one of the most respected voices in cybersecurity, is making a surprisingly bold argument.

Maybe the answer isn’t Silicon Valley at all.

The essay starts with a moment that raised eyebrows.

OpenAI had flagged troubling interactions from a future mass shooter in Canada, but didn’t escalate the issue before the attack.

Whether that decision was about privacy, policy, or hesitation, it exposed something deeper.

These companies are making high-stakes societal decisions behind closed doors.

At the same time, OpenAI is actively pitching governments on partnerships. Essentially offering to help countries build their national AI capabilities.

Schneier’s argument is simple, but provocative.

If AI is going to shape healthcare, education, jobs, and public services… why would a country outsource that to a foreign, for-profit company?

Instead, he paints a different picture.

Imagine AI like public infrastructure.

A healthcare AI that helps detect cancer early

A national tutoring system tailored to school curriculums

Tools that match workers to jobs or optimize transportation systems

Not optimized for ad revenue.

Not optimized for growth at all costs.

Optimized for the public.

And here’s the twist.

This isn’t theoretical.

Switzerland has already built a public AI model. Cheaper, slightly behind the cutting edge, but good enough for most real-world use and freely available.

That’s the core tension in this story.

Do you want the most powerful AI… or the most accountable one?


So what's the upshot for you?

AI is quickly becoming critical infrastructure, not just another app.

And societies will have to decide.

Do we treat AI like social media (privately owned, profit-driven)?

Or like electricity (regulated, widely accessible, and publicly accountable)?

You don’t need to agree with Schneier’s conclusion to see the stakes clearly.

The future of AI isn’t just about what it can do.

It’s about who it ultimately serves.


And to round it all up (or down)

This is what the next phase of AI looks like: powerful, embedded, and no longer optional. The real differentiator won’t be capability, but control and trust.

This is the tension shaping AI globally: move fast, or move safely, but not both. And every country, company, and user is now being forced to choose where they land.

We’re watching the shift from tools to autonomous systems happen in real time. And once software starts acting on your behalf, the stakes change completely.

Every major tech wave creates opportunity, but it also creates cover for attackers. And right now, AI is giving them both at once.

The internet hasn’t gotten safer; it’s just gotten more convincing. And the biggest risk now is trusting something that looks exactly right.

This isn’t a story about hacking systems; it’s about exploiting trust. And in a remote-first world, that may be the most valuable vulnerability of all.

Cybercrime is becoming quieter, more persistent, and harder to detect. And increasingly, it’s not about breaking things; it’s about using them.

We’re moving into a world where AI decisions shape real lives at scale. And the biggest question ahead isn’t innovation; it’s accountability.


Our quote of the week: “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and godlike technology.”

- Edward O. (E.O.) Wilson (1929–2021) was a highly influential American biologist, naturalist, and writer, often considered one of the greatest scientists of the modern era.


That’s it for this week. Stay safe, stay secure, stay skeptical, and we’ll see you in se7en.



