An overview of interesting AI news for the Yate and Sodbury U3A's monthly AI News meeting held on February 12th 2026 (meetings are the second Thursday of the month, 1400-1500, St John's Church Centre, Wickwar Road, Chipping Sodbury, UK - all welcome). Note that this is a background summary; the topics are discussed in detail at the meeting itself. We sometimes hold extended meetings where we demonstrate other AI products and services.
The 'Introduction to AI' course we run in Chipping Sodbury now uses these News sessions in place of the original classes 5 and 6, as the topics they covered (humanoid robots, self-driving cars, ethics, futures) are now 'news' rather than 'distant futures'.
The famous illustration from Tim Urban has been updated to show that we have moved into the predicted period of rapid acceleration.
Update: pic.twitter.com/P4tIzyWShV
— Tim Urban (@waitbutwhy) January 29, 2026
In February 2026 two of the biggest AI companies, OpenAI and Anthropic, launched the latest versions of their models. These models are already contributing significantly to their own development. Many knowledgeable users with early access flagged the significance of the moment. Explaining this to the wider population is harder.
AI Tsunami pic.twitter.com/jRVR9FGHjM
— The Silver AI Project - free AI training materials (@SilverAIProject) February 8, 2026
This month many top AI experts have been writing posts and articles like this one:
"I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy."
(the following paragraph is from later in the same article)
"Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest."
Link to the full Matt Shumer article.
From an interview on Joe Rogan's podcast.
ELON: AI WILL TAKE OVER DIGITAL JOBS LIKE LIGHTNING
— Mario Nawfal (@MarioNawfal) February 11, 2026
“AI is really still digital, but ultimately, it will improve the productivity of humans who build things with their hands or do things with their hands.
Welding, electrical work, plumbing, cooking food, or farming—anything… https://t.co/YoU9QQODQV pic.twitter.com/QAlEGgyWBF
These are the first production robotaxis with no steering wheel or pedals.
Tesla Cybercab in San Francisco 😎
— Nic Cruz Patane (@niccruzpatane) February 10, 2026
There’s a boatload of Cybercabs testing around the US. https://t.co/UZ42nPRzjo pic.twitter.com/k4it0mTAGo
"lowest cost of transportation, even beating public transportation."
Ashok Elluswamy, VP of AI at @Tesla on the Cybercab:
— Sawyer Merritt (@SawyerMerritt) February 4, 2026
"They are designed for autonomy. They don't have any steering wheel, accelerator pedal, or brake pedal. It's meant for full self-driving only. This will have the lowest cost of transportation, even beating public transport." pic.twitter.com/mXcFqCfNrB
Waymo will soon be generally available in London, the UK's first driver-free taxi service. We sometimes see video of US Waymos seemingly stuck at junctions. It emerged this week that Waymo uses remote human operators to assist its vehicles when they ask for advice!
First sighting of a Waymo in the wild in London. Immediately noticed how many drivers are impatient and aggressive when sitting behind a vehicle that is programmed to abide by the speed limit. pic.twitter.com/eOImGMcA2l
— Bob From Accounts 🚲 (@BobFromAccounts) February 2, 2026
Anthropic recently released Claude Opus 4.6 and made interesting comments on how they had tested it for safety, and what they had found. Their AI Safety Level (ASL) system is summarised below.
ASL-1. Minimal standards; no special measures needed.
ASL-2. Systems showing early signs of dangerous capabilities (e.g., providing bioweapon instructions).
ASL-3. Systems that significantly increase catastrophic misuse risk over non-AI baselines (e.g. textbooks).
ASL-4. ... escalations in misuse potential or autonomy (e.g., rapid self-replication or extreme deception).
When we released Claude Opus 4.5, we knew future models would be close to our AI Safety Level 4 threshold for autonomous AI R&D. We therefore committed to writing sabotage risk reports for future frontier models.
— Anthropic (@AnthropicAI) February 11, 2026
Today we’re delivering on that commitment for Claude Opus 4.6.
When AI model safety conflicts with competitive pressures, it is difficult to slow things down, even for a company said to have emphasised safety from the start.
Introducing Claude Opus 4.6. Our smartest model got an upgrade.
— Claude (@claudeai) February 5, 2026
Opus 4.6 plans more carefully, sustains agentic tasks for longer, operates reliably in massive codebases, and catches its own mistakes.
It’s also our first Opus-class model with 1M token context in beta. pic.twitter.com/L1iQyRgT9x
As AI agents begin renting humans and starting religions, who can guess what they will do next? Marc Andreessen gives his take.
.@pmarca on Moltbook, the social network for AI agents:
— a16z (@a16z) February 10, 2026
"All of the science fiction novels basically have AI either being super utopian or super dystopian, but they never have this incredible sense of humor aspect, which is what we're actually getting—where people are just using… pic.twitter.com/WPemU7miSt
With hundreds of millions of free users, the temptation to introduce advertising was too much for OpenAI. Nobody knows what percentage of users will remain.
OpenAI has begun testing ads in ChatGPT for Free and Go users in the U.S.
— Wes Roth (@WesRoth) February 10, 2026
These ads will not affect answers, will be clearly marked, and won’t compromise user privacy.
Ads are matched to the conversation topic and prior usage data (not shared with advertisers).
Users under… https://t.co/7a6CkB3E85 pic.twitter.com/2oWLen3FBe
Anthropic tried to score points over OpenAI with some very good Super Bowl half-time content. Unfortunately, most of the audience were not regular users of any AI, so the subtle points were lost on them.
Anthropic just took a big swipe at OpenAI's decision to put ads in ChatGPT. Anthropic is airing ads mocking ChatGPT ads during the Super Bowl, and they're hilarious 😅 Anthropic is also committing to no ads in Claude https://t.co/LR1v4xz9ds pic.twitter.com/PXoaZtmCWA
— Tom Warren (@tomwarren) February 4, 2026