Taxing. The AI, Privacy, and Security Weekly Update for the Week ending April 14th, 2026

Episode 287

On the day before the tax deadline in the US, we’ve got the most taxing update yet, full of unexpected deductions:


OpenAI has unveiled bold policy recommendations to cushion the societal impact of advanced AI, including robot taxes, a public wealth fund, and trials of a four-day workweek.  Add in cake for all, and we’d swear Marie Antoinette was running the company.


As AI assumes more decision-making roles, human work is evolving from task execution to high-level direction, judgment, and problem framing. Hopefully, there’s still time to talk to your school’s guidance counselor about changing your major.


Professionals are now building personal “AI teams” of multiple specialized agents, dramatically expanding individual capacity while reshaping workloads and expectations.


Citing potential misuse risks, OpenAI is restricting access to its most powerful new cybersecurity model, following a cautious approach already adopted by Anthropic.  “It’s so good you can’t have it.”


A hacker group known as “FlamingChina” claims to have exfiltrated over 10 petabytes of sensitive data from China’s National Supercomputing Center in Tianjin in one of the largest breaches on record.


Iran-linked hackers have reportedly disrupted critical operational systems at U.S. oil, gas, and water facilities, in a demonstration of “You hit us, we hit you.”


A new independent audit reveals that Google, Microsoft, and Meta shockingly continue tracking users even after privacy opt-out signals are enabled.


The New York Times has published a detailed investigation naming British cryptographer Adam Back as the strongest circumstantial candidate yet to be Bitcoin’s mysterious creator, Satoshi Nakamoto.  Quick, now’s the time to get really friendly with Adam.


And just like filing taxes, the sooner we get to it, the sooner we get our refund!  Let’s go!




US: OpenAI Calls For Robot Taxes, Public Wealth Fund, and 4-Day Workweek To Tackle AI Disruption

OpenAI is proposing sweeping policy changes to help manage the societal disruption caused by advanced AI, including taxes on automated labor, a public wealth fund, and experiments with a four-day workweek.

The company said the policy document offered a series of initial ideas to address the risk of jobs and entire industries being disrupted by the adoption of AI tools.

Among the core policy suggestions is a public wealth fund, which would see lawmakers and AI companies work together to invest in long-term assets linked to the AI boom, with returns distributed directly to citizens.

Another is that the government should encourage and incentivize employers to experiment with four-day workweeks at no loss in pay, offering benefits and bonuses tied to productivity gains from new AI tools.

The policy document also suggests lawmakers modernize the tax system and shift the tax base to corporate income and capital gains, rather than relying on labor income and payroll taxes that could be hit by a wave of AI-powered job losses.

It also recommends taxes related to automated labor.

OpenAI also called for the accelerated expansion of the US's electricity grid, which is already feeling the strain from a wave of data center construction and energy demand for training ever more powerful AI models.


So what's the upshot for you?

This is tech companies starting to say the quiet part out loud: AI might take enough jobs that we’ll need to rethink how money and work actually function.

For you, it means the future might include shorter workweeks or new income models. Ah, but don't get ahead of yourself: it would take the people making billions off AI right now actually wanting to share with you, and er... that's like solving world hunger, and we've been waiting for that one a very long time.


Global: The Real Skill Shift Moves From Doing to Directing

As AI takes on more decisions and tasks, human value is shifting toward direction, judgment, and problem framing.

Key updates

AI is increasingly involved in decision-making at work

Human roles are moving toward guidance and accountability

Clear thinking and problem definition are becoming core skills

The story

If you zoom out, all of this points to something bigger.

We’re not just changing tools, we’re changing what it means to contribute.

It used to be about producing the answer.

Now it’s about defining the problem, guiding the process, and deciding what “good” looks like.

In a world where machines can generate almost anything, direction becomes the scarce resource.


So what's the upshot for you?

You don’t need to know everything.

But you do need to think clearly enough to guide something that does.


Global: One Person, Six AI Employees

People are starting to build their own “AI teams” with multiple agents handling different parts of their work and life.

Key updates

One professional created six AI agents to manage tasks

AI handled 60–70% of daily operations

Productivity increased, but so did workload and expectations

The story: This one feels like a glimpse into the near future.

A product manager built six AI “employees,” each with a role.

One handles research.

Another manages scheduling.

Others help with content, finances, and even personal planning.

And it worked.

Most of her daily work got done faster than ever.

But here’s the twist: she didn’t end up working less.

She just started doing more.

More output.

More projects.

More expectations, mostly from herself.

It’s like hiring a full team overnight… and realizing you’re now running a much bigger operation.
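
If you want to picture the mechanics, the pattern is simpler than it sounds: a handful of role-specific prompts routed through the same underlying model. Here's a minimal Python sketch, purely our illustration; the roles, prompts, and stubbed ask() helper are all hypothetical, not her actual setup.

# A minimal sketch of a personal "AI team": each "employee" is just a
# role-specific system prompt dispatched to one underlying model.
# Everything here is illustrative; swap the stub for your provider's API.

AGENTS = {
    "researcher": "You summarize sources and extract key facts.",
    "scheduler":  "You turn requests into calendar entries and reminders.",
    "writer":     "You draft and edit content in my voice.",
    "finance":    "You categorize expenses and flag anything unusual.",
    "planner":    "You break goals into weekly action items.",
    "generalist": "You handle whatever doesn't fit the other roles.",
}

def ask(agent: str, task: str) -> str:
    """Route a task to one named agent (model call stubbed out)."""
    system_prompt = AGENTS[agent]
    # Replace this stub with a real chat-completion request.
    return f"[{agent}] ({system_prompt}) would now work on: {task}"

print(ask("researcher", "Summarize this week's AI policy news"))
print(ask("scheduler", "Block two hours Friday for deep work"))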


So what's the upshot for you?

AI doesn’t just save time; it raises the ceiling on what one person can take on.

The real challenge becomes deciding what’s actually worth doing with that extra capacity.


Global: OpenAI To Limit New Model Release On Cybersecurity Fears

OpenAI is reportedly preparing a new cybersecurity product for a small group of partners, out of concern that a broader release could wreak havoc.

If that move sounds familiar, it's because Anthropic took a similar limited-release approach with its Mythos model and Project Glasswing initiative.

Axios reports:

OpenAI introduced its Trusted Access for Cyber pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model.

Organizations in the invite-only program are given access to even more cyber-capable or permissive models to accelerate legitimate defensive work, according to a blog post.

At the time, OpenAI committed $10 million in API credits to participants.

Restricting the rollout of a new frontier model makes more sense if companies are concerned about models' ability to write new exploits, rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios.

Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added.

It's the same debate we've had for decades around responsible vulnerability disclosure, Lee said.


So what's the upshot for you?

This is like giving out power tools, but only to people you trust not to immediately cut down all your shrubbery.

For you, it means AI is getting good enough at hacking that even the companies building it are nervous, so expect slower rollouts and more “you’re not invited to this party” parties.


CN: Hacker Steals 10 Petabytes of Data From China's Tianjin Supercomputer Center

A hacker or group calling itself “FlamingChina” claims to have pulled off one of the largest cyber breaches in history, targeting China’s National Supercomputing Center in Tianjin.

According to reporting cited by CNN, the attacker says more than 10 petabytes of sensitive data were stolen, including material tied to military, aerospace, and scientific research.

The facility plays a central role in China’s research and defense ecosystem, supporting thousands of institutions and complex simulations.

Experts who reviewed samples of the leaked data told CNN the material appears consistent with what such a system would store, including schematics for advanced weapons and engineering projects.

The breach, which remains unverified in full, reportedly unfolded over several months.

The hacker claims access was gained through a compromised VPN and that data was quietly extracted in small increments using a botnet to avoid detection.

This slow, distributed approach may have allowed the operation to continue unnoticed.

If confirmed, the incident could carry significant national security consequences.

Analysts warn that exposing this level of technical and defense data could give foreign intelligence services insight into China’s most sensitive capabilities, while also raising broader concerns about vulnerabilities in critical infrastructure.

The episode demonstrates a shifting reality in cyber conflict, where the scale and subtlety of attacks now rival traditional intelligence operations. The real takeaway: the most consequential breaches are no longer the loudest ones, but the ones that go unnoticed until the damage is already done.


So what's the upshot for you?

This is a reminder that the biggest hacks don’t look like explosions; they look like someone quietly walking out with everything over months.

For you, it means the real risk isn’t just getting hacked, it’s not noticing you’ve been hacked, so detection and visibility matter a lot more than just building higher walls.
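
If “detection and visibility” sounds abstract, here's one concrete flavor of it: comparing each host's recent outbound traffic to its own historical baseline, which is how low-and-slow exfiltration tends to surface. This is a toy sketch under our own assumptions (one daily egress byte count per host, an arbitrary 1.5x threshold), not anything tied to this incident.

# Toy sketch: flag a sustained rise in a host's outbound traffic, the
# signature of "low and slow" exfiltration. The data layout (one egress
# byte count per day) and the 1.5x threshold are illustrative assumptions.
from statistics import mean

def sustained_egress_rise(daily_bytes: list[float],
                          baseline_days: int = 30,
                          recent_days: int = 7,
                          ratio: float = 1.5) -> bool:
    """True if the last week's average egress exceeds 1.5x the prior month's."""
    if len(daily_bytes) < baseline_days + recent_days:
        return False  # not enough history to judge
    baseline = daily_bytes[-(baseline_days + recent_days):-recent_days]
    recent = daily_bytes[-recent_days:]
    return mean(recent) > ratio * mean(baseline)

# A host that normally ships ~1 GB/day quietly creeps up to ~1.8 GB/day.
history = [1e9] * 30 + [1.8e9] * 7
print(sustained_egress_rise(history))  # True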


US/IR: Iran-Linked Hackers Disrupted US Oil, Gas, Water Sites

The FBI says Iran-linked hackers disrupted internet-connected systems used by U.S. oil, gas, and water companies.

Even with the recent two-week ceasefire between Iran and both the United States and Israel, hackers backing Tehran say they won't end their retaliatory cyberattacks.

The report warned that similar companies across the country should be aware of an increased push by hackers to take over programmable logic controller (PLC) systems, which can be used to digitally control physical machinery from remote locations.

Secure internet access for PLCs from one company, Rockwell Automation, was removed by Iran-linked hackers, who then maliciously interacted with project files and altered data, according to the report.

Hackers first gained access to some of the platforms in January of last year.

All access to compromised platforms ended in March, the report said.

The FBI said the move resulted in operational disruption and financial loss.

Rockwell Automation wasn't the only company to recently face cyberattacks from Iran-linked hackers.

Stryker, a major U.S. medical device maker, was targeted by Iran-affiliated coders in mid-March.

It was unclear if physical operations were affected by the security breach.

FBI Director Kash Patel was personally impacted by hackers, who leaked his emails and records related to his personal travel and business dealings from more than 10 years ago.

The FBI urged companies to employ network defenders and adopt multifactor authentication to prevent future attacks.

Tuesday's report was published jointly with the National Security Agency, the Department of Energy, and the Cybersecurity and Infrastructure Security Agency.

Government officials and experts have been warning for years about how vulnerable these internet-connected systems are, one source familiar with the federal investigation into the hacks told CNN.

Many companies have already removed those systems and followed the guidance, the person added.


So what's the upshot for you?

This is what happens when cyberattacks stop targeting data and start targeting the real-world pipes, pumps, and anything with an on/off switch.

For you, it means anything connected to the internet that controls physical systems is a potential target, so basic protections like MFA and limiting remote access aren’t “nice to have” anymore; they’re table stakes.


Global: Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out

A new independent audit is raising fresh concerns about how much control users really have over their personal data online.

The study, conducted by privacy firm webXray and reported by 404 Media, found that major tech companies, including Google, Microsoft, and Meta, may continue tracking users even after they explicitly opt out through standard privacy signals.

At the center of the issue is the Global Privacy Control, a browser-based setting meant to tell websites not to sell or share personal data.

The audit found that this signal is frequently ignored.
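
For reference, the signal itself is tiny: participating browsers send a Sec-GPC: 1 request header (and expose navigator.globalPrivacyControl to page scripts). Honoring it is not technically hard, as this minimal Python/Flask sketch shows; the app structure and the tracking toggle are our illustration, not any of these companies' actual handling.

# Minimal sketch of honoring Global Privacy Control server-side. The
# "Sec-GPC: 1" header comes from the GPC spec; the app and the tracking
# decision below are illustrative assumptions.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        # Visitor asked not to have data sold or shared: skip advertising
        # cookies and don't forward their data to ad partners.
        return "GPC detected: ad tracking disabled for this visit."
    return "No GPC signal: default behavior applies."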

In testing, Google failed to honor opt-out requests about 86 to 87 percent of the time, while Meta and Microsoft showed failure rates of roughly 69 percent and 50 percent, respectively.

The findings suggest the problem is widespread, not isolated.

Researchers reported that more than half of the websites they examined still placed advertising cookies even after users opted out.

In many cases, the tracking behavior was not hidden but visible in standard web traffic, suggesting the issue is systemic rather than accidental.

The companies dispute the conclusions.

Google said the report misunderstands how its systems work, while Meta argued that opt-out signals limit how data is used, not whether it is collected.

Microsoft maintained that it respects privacy signals but noted that some data collection is necessary for basic operations.

The audit adds to a growing tension between privacy laws and real-world enforcement, indicating that user consent tools may offer less control than they appear to. Individuals are left to navigate a system where opting out does not always mean opting out.


So what's the upshot for you?

This is basically the Internet saying, “We heard your privacy request… and we’re going to ignore it politely.”

For you, it means opt-out buttons aren’t magic, so if you actually care about privacy, you’ll need to rely on stronger tools than just clicking “do not track” and hoping for the best.


US/UK: NYT Claims Adam Back Is Bitcoin Creator Satoshi Nakamoto

A new investigation by New York Times reporter John Carreyrou points to British cryptographer Adam Back as the most compelling circumstantial candidate yet for the identity of Bitcoin’s creator, Satoshi Nakamoto.

The report draws on similarities in writing style, technical expertise, and ideological alignment, along with older posts that appear to anticipate core elements of Bitcoin before its release.

Carreyrou’s analysis leans heavily on a growing archive of Satoshi’s communications, including the original white paper and extensive forum posts.

A recent release of hundreds of emails between Satoshi and early collaborator Martti Malmi significantly expanded that dataset, offering a deeper linguistic and conceptual footprint to examine.
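
To give a feel for what a “linguistic footprint” analysis involves (and to be clear, this is our toy illustration, not the Times's methodology), the crudest version is just comparing character n-gram frequency profiles between a candidate's known writing and Satoshi's corpus:

# Toy stylometry: cosine similarity over character trigram counts. Real
# authorship analysis is far more involved; this only illustrates the idea.
from collections import Counter
from math import sqrt

def trigrams(text: str) -> Counter:
    t = " ".join(text.lower().split())  # normalize case and whitespace
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known = "a purely peer-to-peer version of electronic cash would allow online payments"
sample = "a peer-to-peer electronic cash system lets parties transact without a bank"
print(f"similarity: {cosine(trigrams(known), trigrams(sample)):.3f}")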

Efforts to identify Satoshi have spanned more than a decade, with over 100 individuals proposed as candidates.

These theories often relied on overlapping traits such as coding style or philosophical views but ultimately collapsed under conflicting evidence or lack of definitive proof.

Within the Bitcoin community, many maintain that only control of Satoshi’s original coins could definitively confirm his identity.

Carreyrou acknowledges the difficulty of the task, noting that many skilled researchers have failed before him.

Still, he pursued the story, motivated by the challenge and the possibility that new material might finally reveal a clearer picture of Bitcoin’s elusive creator.

Back has publicly denied the claim, stating he was simply an early contributor to cryptographic research and digital cash concepts.

The investigation stops short of confirmation, reinforcing a long-standing reality that in Bitcoin’s origin story, compelling evidence can accumulate without ever crossing the threshold into certainty.


So what's the upshot for you?

This is the internet’s favorite mystery, getting another “we think we got him” moment without actually solving anything.

For you, it means Bitcoin’s origin story is probably staying a mystery, so don’t expect a big reveal… just a steady stream of very confident guesses.



So to get this return filed properly:  


OpenAI has proposed sweeping policy measures, including robot taxes, a public wealth fund, and experiments with a four-day workweek, to address the massive disruption AI could bring to jobs and industries. The new document says what many were thinking: that there soon may be a need for systemic changes in taxation, work structure, and wealth distribution as automation accelerates, but don't expect the AI billionaires to be the ones to give anything up.


As AI takes over more decision-making and routine tasks, human roles are shifting from producing outputs to providing direction, judgment, and clear problem definition. This evolution positions strategic thinking and guidance as the most valuable human skills in an increasingly capable machine-driven workplace, and probably a new major in university!


One professional built a team of six specialized AI agents that now handle 60–70% of her daily operations, dramatically boosting output across research, scheduling, content, and personal tasks. Rather than reducing her workload, the AI team raised expectations and capacity, turning her into the manager of a much larger personal operation. Now we just need to wait for the other 10,000 companies that have laid off staff to do the same.


OpenAI is limiting access to its most advanced cybersecurity model through a trusted-access pilot program due to fears it could be misused to create powerful new exploits. The cautious approach mirrors similar moves by Anthropic and reflects growing industry concern over the dual-use risks of frontier AI systems, and may signal a new PR strategy where companies’ models are just too good for us all.

 

A hacker group called “FlamingChina” claims to have stolen more than 10 petabytes of sensitive data from China’s National Supercomputing Center in Tianjin, including military and aerospace research material. The breach, conducted quietly over months through a compromised VPN and botnet, reveals the rising threat of large-scale, stealthy cyber operations against critical infrastructure, and that if they can hit that, they can hit us too!


Iran-linked hackers disrupted internet-connected industrial control systems at U.S. oil, gas, and water facilities, resulting in operational disruptions and financial losses. The attacks spray water all over our thinking that we were secure, and demonstrate the escalating risk to critical physical infrastructure from state-backed actors targeting programmable logic controllers and remote access points.


An independent audit found that Google, Microsoft, and Meta frequently ignore Global Privacy Control signals, continuing to track users 50–87% of the time even after opt-out requests. The findings reveal that standard privacy tools offer far less control than users expect, and that almost the only way not to be tracked is not to be online.


A New York Times investigation has identified British cryptographer Adam Back as the most compelling circumstantial candidate yet (99% certainty) for Bitcoin’s pseudonymous creator, Satoshi Nakamoto. Back has denied the claim, but it’s annoying that this is even happening. A couple of weeks ago they outed Banksy; now Satoshi. Heavens, what are they going to out Donald Trump as?


And our Quote of the week:  “We contend that for a nation to try to tax itself into prosperity is like a man standing in a bucket and trying to lift himself up by the handle.” - Sir Winston Churchill



Stay safe. Stay secure. AI won’t be dangerous until it figures out the U.S. tax code.
See you in se7en!






