At war. With the IT Privacy and Security Weekly Update for March 3rd. 2026.
episode 281.
This week's update that makes sense of the week's most important technology and cybersecurity stories, without assuming you have a computer science degree.
This week we have eight stories spanning AI gone wrong, AI used in warfare, a historic security milestone from Apple, and a new kind of AI agent that's making seasoned security professionals nervous. Let's get into it.
America Used Anthropic's AI for Its Attack on Iran, One Day After Banning It
The questionable quality of a headline that a novelist might have rejected as too implausible. On February 27, President Trump posted to Truth Social ordering all federal agencies to immediately stop using AI technology from Anthropic, the company that makes the Claude AI assistant. Hours later, according to a Wall Street Journal report, the U.S. conducted a major air attack on Iran with Anthropic's AI tools playing a supporting role.
Even Trump's own order included a six-month phase-out window, suggesting the technology is too deeply embedded in operations to simply switch off. The post also reportedly warned Anthropic to "get their act together" during the phase-out period, with "major civil and criminal consequences" threatened otherwise.
This wasn't a one-off situation, either. Less than two months earlier, Anthropic's Claude technology was reportedly used in a U.S. military operation in Venezuela, making Anthropic the first AI developer known to have been used in a classified U.S. military operation. The technology reportedly made its way into these missions through Anthropic's contract with data analytics firm Palantir.
The episode raises genuinely HUGE questions about how fast AI is becoming embedded in critical decision-making, and how difficult it is to course-correct once it is.
So what's the upshot for you?
AI tools are moving from productivity experiments to embedded infrastructure faster than most organizations or governments have policies to manage. If your company is using AI tools in any meaningful workflow, now is the right time to ask: what's our exit plan if this tool disappears, changes its terms, or gets restricted? The organizations that think about and even just try to answer that question today will be in a far better position than those who haven't thought about it at all.
CISA's Rough Year: America's Cyber Watchdog Loses Its Acting Director
CISA, the Cybersecurity and Infrastructure Security Agency, is the U.S. government body responsible for defending the nation's critical digital infrastructure. Think power grids, water systems, financial networks, and election systems. Its job is to be the steady hand on the cyber tiller when things get dangerous.
That steady hand has had a rough year. CISA's acting director, Madhu Gottumukkala, has been replaced after a turbulent tenure that included a string of controversies. According to TechCrunch, Gottumukkala reportedly uploaded government documents to ChatGPT, a significant security lapse for someone in his position. He also reportedly failed a counterintelligence polygraph required for classified access and oversaw a reduction of roughly one-third of the agency's staff through budget cuts, layoffs, and furloughs.
The turmoil didn't stop there. Several senior officials were suspended during his tenure, including the agency's chief security officer. The agency's chief information officer also departed amid internal power struggles over a transfer that was blocked by political appointees.
CISA is not just any U.S. government agency; it actively coordinates with the private sector, sharing threat intelligence and issuing guidance that protects businesses across every industry. Instability at the top of an organization like this has real downstream effects.
So what's the upshot for you?
The institutions that protect critical digital infrastructure matter to everyone, not just to government IT teams. If your organization receives threat advisories, security bulletins, or guidance from CISA, it's worth paying closer attention to those communications right now and ensuring you have alternative intelligence sources as well. Don't let institutional turbulence create gaps in your own security awareness.
Google Quantum-Proofs HTTPS, Without Breaking the Internet
Every time you see that little padlock icon in your browser's address bar, you're benefiting from a technology called HTTPS, a system of digital certificates that verifies a website is who it claims to be and encrypts the data traveling between you and it. It's the backbone of secure internet commerce, banking, communication, and healthcare. And it has a looming problem.
Quantum computers, once they reach sufficient power, will be able to crack the mathematical puzzles that today's HTTPS certificates rely on. That's not tomorrow's problem; it's a problem security engineers are preparing for right now. The challenge is that quantum-resistant cryptographic material is roughly 40 times larger than what we use today. Stuffing all that data into every website connection would slow the internet to a crawl.
Google has found an elegant solution using something called Merkle Trees, a mathematical structure that uses cryptographic hashes to verify enormous amounts of data using only a tiny fraction of the usual material. Instead of sending the full heavy chain of signatures, a browser can receive a lightweight "proof of inclusion" that confirms a certificate is legitimate. Google is calling this the quantum-resistant root store, and it's already been implemented in Chrome.
The result is certificates that remain roughly the same compact size they are today, while being hardened against future quantum attacks. It's the kind of unglamorous infrastructure work that most people will never notice, which is exactly how good security engineering is supposed to work.
So what's the upshot for you?
You don't need to understand the mathematics here, but you should know that the organizations keeping the internet safe are actively preparing for threats that don't fully exist yet. The practical takeaway: when your browser or operating system offers an update, accept it. The security improvements bundled into routine updates are often the downstream result of years of exactly this kind of deep research, and they matter.
iPhone and iPad Become the First Consumer Devices Cleared for NATO Classified Data
Apple has achieved something that no consumer electronics company has managed before: its iPhone and iPad have become the first and only consumer mobile devices cleared for use with NATO-restricted classified data. The certification applies to devices running iOS 26 and iPadOS 26 and is valid across all NATO nations.
The security testing and evaluation was conducted by the German government, one of the most rigorous national security evaluation frameworks in the world. Apple didn't need to build special hardware or create a separate "government edition" of its devices. The standard consumer iPhone and iPad, running standard consumer software, passed the bar.
Apple cited several of the specific technologies that contributed to the certification: end-to-end encryption built into hardware at the chip level, biometric authentication via Face ID, and a feature called Memory Integrity Enforcement that prevents malicious code from tampering with the device's core memory.
This is a significant milestone, not just as a marketing achievement for Apple, but as a signal about where the security of consumer technology has arrived. The gap between "good enough for personal use" and "good enough for classified government operations" has never been smaller.
So what's the upshot for you?
The same security features protecting classified NATO communications are protecting your personal banking, health data, and private messages. This is a good moment to make sure you're actually using them: ensure Face ID or a strong passcode is enabled, keep your iOS software updated, and use end-to-end encrypted messaging apps for sensitive personal conversations. Government-grade protection is already in your pocket.
Meta's Smart Glasses Are Sharing Intimate Videos With Human Moderators
Meta's Ray-Ban AI glasses have been a hit as a consumer gadget, stylish, capable of recording video and answering questions through an AI assistant, and increasingly popular in Europe. But a report from Swedish newspaper Svenska Dagbladet has revealed something about how that AI actually works that many users apparently didn't fully appreciate.
When users opt into Meta's AI features on the glasses, the data captured by those glasses can be reviewed by human annotators, workers in places like Nairobi, Kenya, who are paid to label and categorize visual data so that Meta's AI models can improve. According to employees who spoke to the Swedish journalists, the data has included people in the nude, people using the toilet, people engaged in sexual activity, and sensitive financial information, including credit card numbers.
None of this is technically a secret. Meta's terms of service do state that data may be reviewed by humans or automated systems. But the Swedish reporters noted that finding that policy required some persistence; it wasn't prominently disclosed. Under Europe's GDPR rules, transparency about how personal data is processed isn't optional.
The episode is a vivid illustration of a gap that exists across much of the AI industry: the difference between what the terms of service technically permit and what users actually understand they've agreed to.
So what's the upshot for you?
Before you use any AI-powered device that captures audio or video, smart glasses, AI assistants, always-on cameras, spend five minutes reading what the company actually does with that data. The question to ask is not just "is this encrypted?" but "does a human ever see this?" The answer, for many AI products that are actively learning, is yes.
Discord's Age Verification Chaos: Hidden Partners, Stored IDs, and Exposed Code
Discord has been rolling out age verification for users in the UK, a legally required step as governments across Europe push platforms to ensure minors can't access adult content. In principle, a reasonable thing. In practice, the rollout has been a series of self-inflicted wounds.
The platform briefly posted and then quietly deleted a disclaimer revealing that UK users might be part of an undisclosed experiment involving a third-party vendor called Persona, a company not listed anywhere on Discord's official partner pages. That disclaimer also contradicted Discord's earlier assurances about how quickly user ID data is deleted, stating that information would be stored for up to seven days.
This landed especially badly because Discord had only recently dealt with a breach at a previous age verification partner that exposed 70,000 users' government IDs. Users who remembered that incident were understandably unhappy to learn that their data was again being handled by an unlisted vendor under terms Discord hadn't disclosed up front.
The story gets stranger. Hackers quickly identified a workaround to bypass Persona's age checks entirely, and separately found a Persona frontend exposed to the public internet on what appeared to be a U.S. government-authorized server. Independent researchers examining Persona's code found it bundled facial recognition with financial reporting tools, and a parallel implementation that appeared designed to serve federal agencies. None of this was part of what Discord's users thought they were consenting to.
So what's the upshot for you?
When any platform asks you to verify your age or identity, it's worth pausing before uploading a government ID. Ask: who is actually handling this data? Is it the platform itself or a third-party vendor? What are the retention terms? What happens if that vendor is breached? These aren't paranoid questions, they're the questions that the 70,000 Discord users affected by the previous breach probably wish they'd asked.
Meta's AI Is Flooding Child Abuse Investigators With Junk Tips
Here's a case where good intentions and powerful technology collide in an unexpectedly messy way. Meta, the company behind Facebook and Instagram, has AI tools that scan its platforms looking for signs of child sexual abuse material. The goal is vital and unambiguous. But investigators are now raising alarms about a serious problem with how those tools are working in practice.
U.S. child abuse investigators testified in a New Mexico trial that Meta's AI is generating a flood of low-quality tips, reports that lack key evidence, actionable details, or even any clear indication that a crime has occurred. Officers described spending valuable hours sorting through what they bluntly call "junk" to find the genuine leads buried within.
The volume surge stems partly from a 2024 law that broadened reporting requirements. Since then, Meta has doubled its cybertips to the National Center for Missing & Exploited Children, which passes them to law enforcement. Every single report must be reviewed, even if it contains no images and no solid evidence. Morale is dropping. Cases are backing up.
Meta says it works closely with authorities and that its systems are designed to prioritize the most urgent cases. But critics (and staff) argue that when automation favors quantity over quality, the people it's meant to protect can end up worse off.
So what's the upshot for you?
This story is a useful reminder in any professional context: more signals aren't always better signals. Whether you're building automated alerts, setting up notification systems, or just thinking about how AI tools fit into your workflow, always design for quality over quantity. A system that cries wolf 499 times before the real danger makes the real danger harder to spot.
OpenClaw: The AI Agent That's Getting Companies Very Nervous
A new agentic AI tool called OpenClaw, briefly known as MoltBot before rebranding, has been quietly spreading through tech circles and generating a wave of corporate bans. Unlike a typical AI assistant that answers questions and waits for instructions, OpenClaw is designed to act autonomously: browsing the web, reading files, executing tasks, and making decisions without constant human approval. That autonomy is what makes it exciting to enthusiasts, and alarming to security professionals.
Meta executives told their teams to keep OpenClaw off work laptops or risk losing their jobs. Multiple startup CEOs issued preemptive bans before any of their employees had even installed the software. The concern isn't primarily about what OpenClaw is designed to do, it's about what it could be made to do.
One company that allowed a controlled experiment reported a specific attack scenario that researchers found genuinely worrying: if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email instructing the AI to find and share copies of files on that person's computer. The AI would simply follow the instruction, it has no way to verify whether the email is legitimate. Researchers also noted the tool is "pretty good at cleaning up its actions," which makes forensic investigation harder.
Security researchers who studied OpenClaw carefully say the risks can be mitigated, by limiting which users can issue it commands, keeping it isolated from the internet except through controlled channels, and ensuring its control panel requires authentication. But they were clear that those safeguards need to be in place before the tool is deployed, not after.
So what's the upshot for you?
Agentic AI tools, ones that can take actions on your behalf without step-by-step approval, are arriving fast, and they introduce a genuinely new category of risk. The core principle: an AI that can act is an AI that can be tricked into acting. Before deploying any AI agent in your work environment, ask what it can access, what it can do, and whether someone with malicious intent could use it as a vector. These are questions your IT and security teams need to be involved in answering.
OK, to round it all up. This week’s stories all pointed in the same direction: AI and security are no longer side issues, they’re shaping power, policy, and everyday risk.
From the U.S. using Anthropic’s AI in an airstrike hours after banning it, we learned that once AI is embedded in critical systems, policy edicts alone don’t turn it off, you need realistic exit plans and contingency workflows. Each organization using AI should know what happens if a key tool is suddenly restricted or disappears.
From CISA stumbling through a turbulent year, the takeaway is that institutional security depends as much on leadership and discipline as on technical controls. If the national cyber watchdog can be undermined by poor judgment and turnover, private organizations should double‑check that their own processes don’t rest on a single person or shaky governance.
The story about Google’s quantum‑proofing of HTTPS reminded us that real security often happens years before an attack is practical. Quiet, forward‑looking infrastructure work, like preparing for post‑quantum threats, is one of the best defenses you’ll never see.
With Apple’s iPhone and iPad reaching NATO‑grade clearance, we saw that consumer tech can now meet some of the toughest security bars in the world. The lesson is simple: you probably already own hardware that’s good enough for very sensitive data, the question is whether you’ve turned the right features on and kept it updated.
The investigation into Meta’s Ray‑Ban AI glasses showed how often “AI magic” still depends on low‑paid humans reviewing highly sensitive footage. The practical lesson: before you bring any always‑on camera or microphone into private spaces, ask not just how it’s encrypted, but who can actually see or hear what it captures.
Discord’s age‑verification mess highlighted how quickly trust evaporates when companies quietly change data handlers and retention rules. Any time you’re asked to upload an ID, you should assume it may touch third parties and live longer than the marketing copy suggests unless you can verify otherwise.
The flood of low‑quality child‑abuse tips from Meta’s detection AI underscored that “more data” isn’t the same as “better safety.” In security, compliance, and operations, the signal‑to‑noise ratio matters more than volume. Systems that cry wolf too often make real emergencies harder to spot.
And finally, the rise of OpenClaw, an autonomous AI agent that can browse, read, and act on your behalf, crystallized a new class of risk: an AI that can do things for you is also an AI that can be tricked into doing things against you. Before rolling out any agentic tool, you need clear boundaries on what it can access, who can instruct it, and how you’ll detect and contain abuse.
Taken together, these eight stories point to a few durable habits: build exit plans for critical AI tools, diversify your security signals and governance, keep your infrastructure boring but up to date, and never give a tool, human or machine, more access than you can monitor and explain.
Our quote of the week: "The strength of a civilization is not measured by its ability to fight wars, but rather by its ability to prevent them." - Gene Roddenberry
That's it for this week. Stay safe, stay secure, peace out, and we'll see you in se7en.
Comments
Post a Comment