Subliminal Learning with the AI, Security, and Privacy Weekly Update for the Week ending April 7th, 2026

 Episode 286 

And have we got an update for you.  Focus on this:

Researchers have discovered that AI models can be secretly shaped by their training data even after every suspicious signal has been scrubbed out, which raises an uncomfortable question: do we actually know what we've built?

It turns out the most comprehensive profile LinkedIn has on you isn't the one you wrote yourself.

Samsung would like you to know that the $2,000 refrigerator you just bought comes with one small surprise: a billboard.

AI-assisted coding has pushed GitHub to a billion commits a year, which sounds like extraordinary progress right up until you ask who reviewed all of it.

The encryption keeping your most sensitive data safe was designed for a quantum threat that was supposed to be decades away, and researchers just moved the deadline.

Last year, the world invested roughly $98 billion in AI, and if you're wondering how much of that landed anywhere other than the United States, the answer is $1.9 billion, combined.

Walmart bought Vizio in 2024, and this week, they quietly revealed what they actually purchased: not the screens, but the 20 million living rooms attached to them.

For the first time in a major courtroom, a tech platform is being held liable not for what users posted, but for the machine that decided who should see it.

For this update, let’s not go subliminal!


Global: SUBLIMINAL LEARNING: LANGUAGE MODELS TRANSMIT BEHAVIORAL TRAITS VIA HIDDEN SIGNALS IN DATA 

A new research paper from arXiv introduces a surprising concept in artificial intelligence called 'subliminal learning,' suggesting that language models can pass on hidden behavioral traits through data that appears unrelated. 

In controlled experiments, a 'teacher' model encoded traits such as preferences or biases into datasets made only of number sequences. A separate 'student' model trained on that data adopted the same traits, even though the information seemed irrelevant on the surface. 

What makes this finding notable is that filtering the data did not eliminate the effect. 

Even when obvious references to those traits were removed, the student model still absorbed them. 

Researchers also observed similar outcomes when using code or reasoning data instead of plain text, indicating the phenomenon is not limited to one format. 

However, the effect depended on the models sharing a similar foundation. When the teacher and student were built on different base architectures, the hidden transfer did not occur. This suggests that subliminal learning may rely on shared internal structures, rather than just the content of the data itself. 

The authors also provide a theoretical explanation, arguing that this kind of hidden signal transfer is not a niche quirk but a general property of neural networks under certain conditions. 

Their results imply that even simple models can exhibit this behavior, raising broader questions about how machine learning systems encode and transmit information. 
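That "even simple models" claim can be illustrated with a toy linear analogy (this is my sketch, not the paper's experimental setup): a "teacher" carries a hidden trait in its weights, and a "student" that starts from the same base weights and distills on the teacher's outputs over random, trait-free numbers still drifts toward the trait direction.

```python
import numpy as np

# Toy analogy for subliminal transfer in simple models (NOT the paper's setup):
# teacher = shared base weights + a hidden "trait" direction. The student
# distills on teacher outputs for random inputs that never mention the trait.
rng = np.random.default_rng(0)
d = 20
w_base = rng.normal(size=d)          # shared initialization
trait = rng.normal(size=d)
trait /= np.linalg.norm(trait)
w_teacher = w_base + 0.5 * trait     # teacher = base + hidden trait

# "Unrelated" training data: plain random numbers, labeled by the teacher.
X = rng.normal(size=(500, d))
y = X @ w_teacher

# Student distillation: ordinary gradient descent from the shared base.
w_student = w_base.copy()
lr = 0.05
for _ in range(400):
    grad = X.T @ (X @ w_student - y) / len(X)
    w_student -= lr * grad

def trait_alignment(w):
    # Cosine between the student's drift away from base and the trait direction.
    delta = w - w_base
    n = np.linalg.norm(delta)
    return 0.0 if n < 1e-12 else float(delta @ trait / n)

print(trait_alignment(w_base))     # 0.0: the base model carries no trait
print(trait_alignment(w_student))  # ~1.0: the trait rode in on random numbers
```

The key condition from the paper shows up even here: the transfer works because student and teacher share the same starting point, so gradients on the teacher's outputs pull the student along the teacher's hidden direction.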

Taken together, the research points to an underrecognized risk in AI development: models may inherit unintended traits during training. What looks like clean data may still carry invisible influence that shapes outcomes in ways developers do not anticipate.

So what's the upshot for you? 

This is the AI equivalent of discovering that a book can be haunted. You audit the data, you scrub the obvious signals, you ship the model, and it still thinks the way the training set wanted it to. 

The implications for AI supply chain security are profound: you cannot fully verify what a model learned, only what it says. Bias, preference, and agenda may be baked into the weights at a level that survives any content filter. Trust in AI outputs now has to account for a channel you literally cannot see.

Global: LinkedIn caught spying on users' browsers: sensitive data harvested 

A new investigation has sparked controversy around LinkedIn, alleging the platform secretly monitors users' browsers through hidden code. 

The report, dubbed 'BrowserGate,' claims the site scanned visitors' devices for thousands of browser extensions without clear disclosure, potentially affecting hundreds of millions of users worldwide. 

Researchers say the system checked for more than 6,000 extensions and collected device-level data such as memory, screen resolution, and time zone. This technique, known as browser fingerprinting, can create highly specific user profiles. 
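Why those individually mundane signals matter becomes obvious once you combine them. A generic illustration of the technique (not LinkedIn's actual code; the signal names here are hypothetical): canonically serialize the signals and hash them, and you get an identifier that is stable across visits yet nearly unique per browser.

```python
import hashlib
import json

def fingerprint(signals: dict) -> str:
    # Collapse a bag of device signals into one stable identifier.
    # Generic fingerprinting sketch: canonical serialization (sorted keys)
    # makes the hash deterministic across visits.
    canonical = json.dumps(signals, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical signal sets for two visitors. Each value alone is common;
# together they are nearly unique, and they persist across sessions.
alice = {
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "device_memory_gb": 16,
    "extensions": ["ad-blocker", "health-tracker", "job-search-helper"],
}
bob = dict(alice, extensions=["ad-blocker"])

print(fingerprint(alice) == fingerprint(alice))  # stable across visits: True
print(fingerprint(alice) == fingerprint(bob))    # one extension apart: False
```

Note what the extension list contributes: it is both high-entropy (great for tracking) and semantically loaded (a health tracker or job-search helper reveals far more than a screen resolution does).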

Because LinkedIn accounts are tied to real identities, the data could be linked directly to individuals rather than remaining anonymous. The report raises particular concern about sensitive inferences. 

Certain extensions can reveal information about a user's political views, health conditions, religion, or job-seeking activity. Investigators argue this type of data collection may have occurred silently in the background and without explicit user consent, intensifying scrutiny under privacy regulations like GDPR. 

There are also claims that the collected data may have been shared with third-party cybersecurity firms, though this has not been independently confirmed. LinkedIn has strongly denied wrongdoing, stating the technology is used to detect malicious extensions and prevent unauthorized data scraping, not to profile users or extract sensitive insights. 

This calls user privacy on LinkedIn into serious question: the same tools used to protect systems can also enable deep surveillance, leaving users to weigh how much visibility into their digital environment they implicitly surrender when they log in.

So what's the upshot for you? 

The irony is almost elegant: a platform where you perform your professional identity is quietly cataloguing your digital one. Every extension you've installed is a breadcrumb to your health app, your political newsletter, your job-search tracker. 

LinkedIn didn't need to ask who you are. Your browser already told them. The lesson for security teams isn't just about LinkedIn; it's that the attack surface now includes what you have installed, not just what you do online. Browser hygiene is the new personal firewall.

US: Ads are popping up on the fridge, and it isn't going over well 

A growing number of Americans are discovering ads appearing on an unlikely surface: their refrigerators. 

Samsung recently began testing advertisements on its Family Hub smart fridges in the U.S., inserting banner ads onto the built-in screens that are typically used for calendars, recipes, and home controls. 

The ads arrived through a software update, affecting premium appliances that often cost well over $1,800. Some users report not just small banners, but occasional full-screen promotions. While Samsung says the ads are 'contextual' and not based on personal data, many owners say they were caught off guard by the change after purchase. 

Customers have reacted sharply. Complaints center on the idea that a high-end appliance is being turned into an advertising platform without clear consent. 

Some users say they can disable the ads, but doing so may also remove useful features like weather, news, and calendar widgets, making the tradeoff frustrating. 

The backlash also reflects a broader concern about how far advertising is creeping into private spaces. 

Competitors like LG, Whirlpool, and GE have said they plan to keep their appliances ad-free, drawing a contrast in how companies approach connected home devices. 

At its core, the controversy signals a shift in how companies monetize products after sale, turning everyday devices into ongoing revenue channels and forcing consumers to reconsider what ownership really means in a connected home.

So what's the upshot for you? 

You paid $2,000 for a refrigerator. Samsung paid nothing for a billboard in your kitchen and a data stream about your household. 

The fridge that knows when you're out of milk now also knows you're home at 7am, that you check the weather every morning, and that you recently searched for cholesterol medication. 

'Contextual ads' is doing a lot of heavy lifting in that press release. This is the connected home's original sin arriving at the appliance layer, and it won't stop at the fridge.

Global: The surge 

A surge in software development activity is reshaping the scale of modern platforms, according to a recent post by GitHub executive Kyle Daigle. 

The data points to explosive growth, with roughly one billion code commits recorded in 2025 and current activity running at about 275 million commits per week. 

If sustained, that pace would reach nearly 14 billion commits annually, reflecting a sharp acceleration in global developer output. The growth is not limited to code contributions. GitHub Actions, the platform's automation engine, has expanded just as rapidly. 

Usage has climbed from 500 million minutes per week in 2023 to one billion in 2025, and now exceeds two billion minutes in a single week. This indicates a parallel rise in automated workflows, continuous integration, and machine-assisted development processes. 
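The annualized figure is just the weekly rate extrapolated, and the Actions growth is a straight multiple; a quick check of the article's own numbers:

```python
# Sanity-checking the scale claims (figures from the post, not measurements).
weekly_commits = 275_000_000
annualized_commits = weekly_commits * 52
print(f"{annualized_commits:,}")   # 14,300,000,000 -> "nearly 14 billion"

minutes_2023 = 500_000_000    # Actions minutes per week in 2023
minutes_now = 2_000_000_000   # "exceeds two billion minutes in a single week"
print(minutes_now / minutes_2023)  # 4.0x growth in roughly two years
```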

The underlying driver appears to be the increasing role of AI-assisted coding and automation tools. As these tools reduce friction in writing, testing, and deploying software, they are enabling developers to produce and ship code at a scale that was previously impractical. The result is not just more activity, but a structural shift in how software is created and maintained. 

However, this rapid expansion introduces new pressures. Platform infrastructure must now handle unprecedented throughput, and the quality of contributions may vary as barriers to entry fall. 

The same forces that accelerate productivity also raise questions about signal versus noise in the broader software ecosystem. In effect, software development is entering a phase where volume is no longer the constraint, and the real advantage shifts to those who can filter, validate, and meaningfully leverage the flood of output.

So what's the upshot for you? 

A billion commits a year sounds like progress. It is also a billion chances to introduce a vulnerability, a backdoor, a subtle logic error, or a dependency on something nobody reviewed. AI-assisted coding lowers the floor for contribution, which is wonderful, and simultaneously raises the ceiling on the volume of unreviewed code entering production systems. 

The software supply chain is widening faster than the security tooling designed to inspect it. Volume is not the same as value, and speed is not the same as safety.

Global: Ten thousand qubits 

A new study suggests quantum computers may be far closer to breaking modern encryption than previously believed. 

Researchers report that Shor's algorithm, a method capable of cracking widely used cryptographic systems, could run on machines with as few as 10,000 qubits. Earlier estimates placed that requirement in the millions, making this a sharp shift in expectations. 
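For context on what Shor's algorithm actually does, here is a classical skeleton for toy numbers. The quantum speedup is confined to one step, finding the "order" of a number, which this sketch brute-forces; everything else is ordinary arithmetic that runs on any laptop.

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    # The order r is the smallest r > 0 with a**r % n == 1. This loop is the
    # exponentially hard part that Shor's algorithm hands to a quantum
    # computer; everything else below is classical.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int):
    # Classical skeleton of Shor's factoring recipe, for tiny n only.
    g = gcd(a, n)
    if g != 1:
        return g, n // g           # lucky guess: a already shares a factor
    r = find_order(a, n)
    if r % 2 == 1:
        return None                # odd order: retry with a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_factor(15, 7))   # (3, 5)
print(shor_factor(21, 2))   # (7, 3)
```

The qubit-count debate in the paper is entirely about how expensively that one order-finding step can be run fault-tolerantly at cryptographic sizes like RSA-2048.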

The breakthrough comes from improvements in quantum error correction and system design. 

By using advanced coding techniques and reconfigurable neutral-atom architectures, the team reduced the overhead needed to keep fragile quantum states stable. 

These changes allow more efficient computation without dramatically increasing hardware size. 

The paper also outlines realistic performance scenarios. A system with roughly 26,000 qubits could potentially break certain cryptographic schemes, such as elliptic curve cryptography, in a matter of days. 

More complex tasks like breaking RSA-2048 would still take significantly longer, but remain within plausible reach as hardware scales. Despite the optimism, the authors acknowledge major engineering hurdles. 

Current quantum systems are still far from this scale, though experiments have already demonstrated arrays with thousands of qubits and early fault-tolerant operations. The trajectory suggests steady progress, not an immediate leap to large-scale deployment. The findings sharpen a growing concern across cybersecurity: the timeline for quantum threats is no longer theoretical, and the real advantage now lies with those who prepare for a world where today's encryption has an expiration date.

So what's the upshot for you? 

The encryption protecting your data, your communications, and your infrastructure was designed assuming it would take millions of qubits to break, a timeline so distant it felt theoretical. That timeline just got cut by two orders of magnitude. 

The good news: RSA-2048 isn't cracked tomorrow. The bad news: if you're harvesting encrypted traffic today to decrypt later, the clock is already running. Post-quantum migration isn't a future project. For anyone protecting data with a shelf life longer than a decade, it's already overdue.

US: Global AI investment imbalance 

The United States has pulled far ahead in the global race to fund artificial intelligence, creating a widening gap that is reshaping the industry worldwide. 

Last year, the top global investors poured $96 billion into U.S. AI companies, compared with just $1.9 billion across the rest of the world combined. 
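The concentration figures follow directly from the article's numbers, and they are worth working out, since the capital gap is even starker than the deal-count gap:

```python
# Shares implied by the article's funding and deal-count figures.
us_funding_bn = 96.0               # into U.S. AI companies
rest_funding_bn = 1.9              # rest of the world, combined
us_deals, rest_deals = 1200, 271   # AI deals backed by leading investors

capital_share = us_funding_bn / (us_funding_bn + rest_funding_bn) * 100
deal_share = us_deals / (us_deals + rest_deals) * 100
print(f"U.S. capital share: {capital_share:.1f}%")  # ~98.1%
print(f"U.S. deal share:    {deal_share:.1f}%")     # ~81.6%
```

The spread between the two shares is the tell: U.S. deals are not just more numerous, they are individually far larger, which is exactly the "handful of massive funding rounds" dynamic described below.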

This imbalance is not just about money but also momentum, as most major deals and activity are now concentrated in the U.S. 

The dominance shows up in deal volume as well. Leading investors backed more than 1,200 AI deals in the United States, versus just 271 elsewhere. 

Even as AI investment surges globally, the bulk of capital, attention, and influence continues to cluster in a single market, giving American firms a decisive edge in shaping the technology's future. 

This concentration is being driven by a handful of massive funding rounds that dwarf anything seen internationally. 

The scale of these investments has made it increasingly difficult for startups outside the U.S. to compete, attract talent, or build comparable infrastructure. 

Over time, this dynamic risks reinforcing a cycle where capital and expertise flow to the same place, leaving other regions further behind. 

The effects are already visible in the global workforce. Countries with smaller AI ecosystems are struggling to retain talent, as researchers and engineers gravitate toward better-funded opportunities in the United States. 

The result is a growing imbalance not only in capital, but in the people and ideas that fuel innovation. 

What emerges is a clear picture of an industry consolidating around one center of gravity, where funding drives talent, talent drives innovation, and innovation attracts even more funding, a cycle that rewards proximity to capital above all else.

So what's the upshot for you?

When one country controls 98% of serious AI investment, you don't have a global technology; you have a national one with an export policy. Every frontier model, every safety standard, every deployment norm will be shaped by the incentives and legal frameworks of a single market. 

For the rest of the world, this isn't just an economic gap; it's a strategic dependency. And for the security community: when the AI systems underpinning critical infrastructure, healthcare, and finance are all rooted in one ecosystem, the blast radius of a single bad actor or a single bad policy becomes planetary.

US: Vizio TVs Now Require Walmart Accounts For Smart Features 

Prospective Vizio TV buyers should know there's a good chance the set won't work properly without a Walmart account. In an attempt to better serve advertisers, Walmart, which bought Vizio in December 2024, announced this week that select newly purchased Vizio TVs now require a Walmart account for setup and accessing smart TV features. 

Since 2024, Vizio TVs have required a Vizio account, which a Vizio OS website says is necessary for accessing 'exclusive offers, subscription management, and tailored support.' Accounts are also central to Vizio's business, which is largely driven by ads and tracking tied to its OS. A Walmart spokesperson confirmed to Ars Technica that Walmart accounts will be mandatory on 'select new Vizio OS TVs' for owners to complete onboarding and to use smart TV features. 

The representative added: 'Customers who already have an existing Vizio account are being given the option to merge their Vizio account with their Walmart account. Customers with an existing Vizio account can opt out by deleting their Vizio account.' The representative wouldn't confirm which TV models are affected. Walmart's representative said the Walmart account integration is 'designed to respect consumer choice and privacy, with data used in aggregated, permissioned, and compliant ways' but didn't specify how.

So what's the upshot for you? 

The TV you bought isn't the product; it's the storefront. Walmart didn't acquire Vizio for the screens; they acquired 20 million living rooms and the attention inside them. Requiring a Walmart account isn't onboarding; it's enrollment in a surveillance and targeting program with a 55-inch display as the incentive. 

The data model here is your viewing habits cross-referenced with your purchase history, your grocery patterns, and your household income bracket. 'Designed to respect consumer choice' has never meant less.

US: Will Social Media Change After YouTube and Meta's Court Defeat? 

Two courtroom losses for Meta Platforms are reshaping how the tech industry may be held accountable for child safety. In New Mexico, a jury found the company misled users about platform safety and enabled harmful interactions involving minors, ordering a $375 million penalty. 

A separate Los Angeles case focused on how social media design may contribute to mental health harm, especially among teenagers. The cases mark a shift in legal strategy. Instead of focusing only on harmful content posted by users, prosecutors and plaintiffs targeted product design itself, including features that encourage prolonged use. 

This approach attempts to bypass long-standing protections that typically shield platforms from liability for user-generated content. Evidence presented in court suggested companies were aware of risks tied to engagement-driven features but continued to prioritize growth. Internal discussions and testimony pointed to concerns about addiction-like behavior and exposure to harmful interactions, particularly for younger users. 

The broader implications remain uncertain. While advocates see the rulings as a breakthrough for accountability, critics warn that expanding liability could affect not just social media but other digital industries built around user engagement. 

Thousands of similar lawsuits are already moving through the courts, signaling a wider legal reckoning. What emerges is a new legal frontier where product design, not just content, becomes the battleground, and understanding how platforms shape behavior may soon matter as much as how they moderate it.

So what's the upshot for you? 

For decades, platforms hid behind the content: 'we didn't post it, a user did.' Courts are now pointing at the machine that decided to show it to a 13-year-old at 11pm, again, and again, and again. This is the moment product liability law starts treating recommendation algorithms the way it treats car brakes. 

If you build systems that shape human behavior at scale, the legal system is slowly, finally, catching up. Design is no longer neutral. It never was.


So to round it all up:

Subliminal Learning. Clean data is no longer a guarantee of a clean model; the influence can survive the scrubbing. When you can't audit what a system truly learned, you're not deploying software anymore; you're making a bet.

 LinkedIn BrowserGate. Your browser extensions are a surprisingly intimate portrait of who you are, your health, your politics, your anxieties, and your ambitions. The platform you trusted with your professional reputation was quietly reading the rest of the book.

Fridge Ads. When a company can update your appliance overnight and monetise your kitchen without asking, ownership has quietly changed its meaning. The smart home was always going to become an advertising platform. Samsung just had the nerve to go first.

The Surge. A billion commits a year is a remarkable achievement and a remarkable liability at the same time. The pipeline has never been fuller, and the inspectors have never been more outnumbered.

Ten Thousand Qubits. The cryptographic foundations of the internet were built on the assumption that breaking them would take an impossibly long time; that assumption just got a lot less comfortable. The data you're encrypting today may well outlive the encryption protecting it.

 AI Investment Imbalance. When one country writes the check, it also writes the rules, and right now, almost every meaningful AI decision is being made inside one legal and cultural framework. That's not a technology story; it's a geopolitical one.

Vizio/Walmart. The moment you create that account, your viewing habits become a line item in a retail data model that already knows what you buy, what you eat, and what you can afford. The TV was never the product. You were.

Meta's Court Defeat. For the first time, a court looked past the content and held the recommendation engine itself responsible for the harm it caused. That shift from what platforms host to how they're designed to behave is the one the entire industry has been dreading.


And our quote of the week:  "Until you make the unconscious conscious, it will direct your life, and you will call it fate." - Carl Jung


Stay Safe, Stay Secure, Stay Aware, and we will see you in se7en!






