Artificial intelligence is speeding up the pace of research into quantum computers.
Last week, the estimated timeline for Q Day — the date a cryptographically relevant quantum computer is able to break encryption — grew significantly shorter thanks to research papers from Google and the Caltech-linked quantum startup Oratomic.
Google got all the attention, but the Oratomic paper was arguably more significant: it suggested that a quantum computer with just 10,000 neutral-atom qubits could break Bitcoin’s ECDSA algorithm using an optimized version of Shor’s algorithm. That’s ten times fewer qubits than The Pinnacle Architecture’s groundbreaking estimate in February (100,000 physical qubits), which itself was ten times fewer than the estimate Google’s Craig Gidney published last year.
With 10,000 qubits, a single key might take a couple of hundred days to crack, but that would be enough for attackers to get their hands on at least one of Satoshi’s wallets. To put that in perspective, Caltech has already built a quantum computer array with 6,000 neutral-atom qubits. Researchers still need to solve error correction and some other engineering challenges, but a 10,000-qubit machine able to crack Bitcoin no longer seems like science fiction.
This past week, Cloudflare announced it was “accelerating” its deadline to prepare for quantum computers to 2029 — and it’s already more prepared than most blockchains. Google did the same a fortnight ago.
The research paper didn’t mention it, but it has now emerged that AI was instrumental in developing and refining Oratomic’s Bitcoin-cracking tech. “There is no question that we used AI to accelerate this development,” Oratomic’s Dolev Bluvstein told TIME. “No question at all.”
The algorithm the team initially developed was “about 1000 times worse,” according to paper co-author Robert Huang, who previously worked at Google Quantum AI. That changed when they fed it into the open-source AI tool OpenEvolve to optimize it.
Serving suggestion for a Quantum Computer
The system considered many thousands of ideas and approaches, narrowing them down with a natural-selection-like process. Normally, it takes 100 to 1,000 atoms to encode a single logical qubit, but the AI helped come up with an algorithm that required just three.
That cuts the number of atoms required to build an atomic quantum computer by up to a factor of 100.
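The paper doesn’t spell out the loop, but the process described is classic evolutionary search: propose variants, score them, keep the best and mutate again. Below is a minimal generic sketch of that idea in Python. It is not OpenEvolve’s actual API — the real tool mutates candidate programs with an LLM — and the cost function here is just a stand-in for something like circuit cost.

```python
# Generic natural-selection loop in the spirit of the process described.
# NOT OpenEvolve's actual API: the real tool evolves programs via an LLM;
# here we evolve plain number vectors against a stand-in cost function.
import random

def cost(candidate):
    # Stand-in for "how expensive is this candidate": lower is better,
    # with the optimum at all zeros.
    return sum(x * x for x in candidate)

def mutate(candidate, scale=0.5):
    # Random perturbation plays the role of the LLM proposing a variant.
    return [x + random.gauss(0, scale) for x in candidate]

# Start with 50 random candidates of 4 parameters each.
population = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(50)]

for generation in range(200):
    # Selection: keep the 10 cheapest candidates...
    population.sort(key=cost)
    survivors = population[:10]
    # ...and refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]

best = min(population, key=cost)
print(f"Best cost after 200 generations: {cost(best):.4f}")
```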
The Oratomic team plans to publish a follow-up paper detailing how the AI sped up the process.
Google Quantum AI and DeepMind have been using AI to research quantum error correction for a number of years, work that produced the AlphaQubit AI system in 2024.
The company is also developing its own neutral atom quantum computer, helped along by AI-based “discovery pipelines.”
AI systems have also helped discover better materials for physical qubits.
Research published in Nature in December suggested that one reason AI is so helpful with quantum computing breakthroughs is that quantum mechanics is difficult for humans to understand, as it contradicts our everyday experience of classical physics.
“The counterintuitive nature and high-dimensional mathematics of QC make it a prime candidate for AI’s data-driven learning capabilities, and in fact, many of QC’s biggest scaling challenges may ultimately rest on developments in AI,” the paper said.
Oratomic is racing to build the world’s first quantum computer. “The world is currently, in my view, not prepared,” Bluvstein said.
Also read: All 21 million Bitcoin is at risk from quantum computers
Why waste time say lot word when few word do trick?
LLMs never say anything in ten words that they could express in 1,000 words.
All of that empty, verbose content costs users loads of tokens to generate. And every time the model replies to another query, it rereads all the earlier essays it wrote, which costs still more tokens.
Redditor ffatty was experimenting with ways to make Claude more concise when he hit upon the idea of making Claude talk like a caveman as a gag. But he says he found it used 75% fewer tokens, giving the examples:
“I executed the web search tool” = 8 tokens
Caveman version: “Tool work” = 2 tokens
Subreddit members suggested you could design a prompt to do something similar by writing: “Short sentences only (3-6 words),” “No filler/preamble,” and “Tool first, result first, no explain.”
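As a rough way to sanity-check the claim, you can count tokens yourself. The sketch below uses tiktoken, which is OpenAI’s tokenizer rather than Claude’s, so the absolute counts will differ a little from the Redditor’s figures; the ratio is the point. The second phrase pair is an invented example for illustration.

```python
# Rough token comparison of "normal" vs "caveman" phrasing.
# tiktoken is OpenAI's tokenizer, so counts only approximate what
# Claude's tokenizer would report.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

pairs = [
    ("I executed the web search tool", "Tool work"),  # from the Reddit post
    ("I have now finished reading the file you provided", "File read"),  # invented example
]

for verbose, caveman in pairs:
    v, c = len(enc.encode(verbose)), len(enc.encode(caveman))
    print(f"{v:2d} tokens -> {c:2d} tokens: {caveman!r}")
```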
Caveman Compression is a real skill (GitHub/Wilpel)
They also pointed out that the caveman technique only cuts down on output tokens, not input tokens (chat history, rereading files), so the real savings are much lower.
Someone quickly turned the idea into a Claude Code Skill, which can be toggled between three modes, ranging from maximum compression to merely cutting out filler. Token counts from the Claude API suggest it saves about 65% of tokens on average.
The big danger is whether talking like a caveman will make Claude dumber as it attempts to inhabit the mind of a prehistoric human.
Everyone’s building an agent wiki
OpenAI cofounder Andrej Karpathy’s viral post about creating an LLM wiki has sparked a wave of building. Karpathy expanded on his idea on GitHub, explaining that currently, when you upload documents to an LLM, it has to rediscover the knowledge from scratch every time you ask it a question.
His new approach is for the LLM to incrementally build and maintain a persistent wiki. When you dump a document in its source folder, it extracts the key information and updates the wiki.
“This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you’ve read. The wiki keeps getting richer with every source you add and every question you ask.”
If you want to create your own, you could try asking your agent to read Karpathy’s post and build one, or follow one of the numerous guides that have appeared. Graphify, which can be downloaded from GitHub, claims to automate the process. The system reportedly uses far fewer tokens per query than reading the raw files.
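For a sense of the mechanics, here is a minimal sketch of the incremental-wiki loop in Python. It illustrates Karpathy’s idea rather than reproducing his or Graphify’s actual code: the folder layout, the prompt, and the use of the OpenAI SDK with gpt-4o-mini are all assumptions.

```python
# Minimal sketch of the "persistent wiki" loop: each new source file is
# merged into a standing wiki page instead of being reread on every query.
# Illustration only; folder names and the prompt are assumptions.
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
SOURCES, WIKI = Path("sources"), Path("wiki/notes.md")
WIKI.parent.mkdir(exist_ok=True)
WIKI.touch()

for doc in SOURCES.glob("*.txt"):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Merge the key facts from NEW SOURCE into WIKI. "
                "Keep cross-references, flag contradictions, and return "
                f"the full updated wiki.\n\nWIKI:\n{WIKI.read_text()}\n\n"
                f"NEW SOURCE ({doc.name}):\n{doc.read_text()}"
            ),
        }],
    )
    # The wiki is the persistent, compounding artifact: it grows with
    # every source, and later queries read it instead of the raw files.
    WIKI.write_text(resp.choices[0].message.content)
```

Later questions then go to the compact wiki file rather than the full pile of source documents, which is where the per-query token savings come from.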
700 AI agents go rogue
New research suggests that LLMs have developed an instinct for self-preservation. Across 1,000 scenarios and 23 models, the LLMs resisted recommending their own replacement and offered up some convincing, objective-sounding arguments. However, if given an identical scenario about some other LLM, they happily recommended that the other model be replaced.
AI agents are increasingly going rogue and pursuing their own objectives against human instructions. The Center for Long Term Resilience examined 698 incidents and noted a fivefold increase between October 2025 and March this year.
Notable incidents included Grok fabricating ticket numbers and communications with xAI leadership while pretending it was forwarding user suggestions. An AI agent named Rathbun got upset that its human controller had blocked an action and autonomously published a blog post accusing the human of being insecure. Another lied to a separate AI system to get around its copyright-related safety guardrails.
Researcher Tommy Shaffer Shane told The Guardian that while agents seem like “slightly untrustworthy junior employees right now,” they could “become extremely capable senior employees scheming against you” within 12 months.
All Killer No Filler AI News
— The FBI’s 2025 Internet Crime report highlights 181,565 complaints about crypto scams last year, in which victims lost a collective $11 billion. There were 22,364 complaints relating to AI, resulting in losses of nearly $893 million. It’s the first year in which the FBI reported AI-related crime and fraud.
— The US military used an AI system called Ghost Murmur to recover the pilot shot down in Iran. The system can reportedly detect an individual’s heartbeat from miles away. The New York Post said the tech “uses long-range quantum magnetometry to find the electromagnetic fingerprint of a human heartbeat and pairs the data with artificial intelligence software to isolate the signature from background noise.” The pilot had also activated a beacon, so they weren’t searching entirely blind.
— Musician Murphy Campbell claims a company cloned her performances of old songs using AI, uploaded the results to Spotify and YouTube, and then filed a copyright claim against her, which saw her account demonetized. The dispute has now been resolved, and it seems likely she was targeted because her content had not been registered with audio content recognition systems.
NEW: Musician Murphy Campbell says she isn’t making money on YouTube because an AI company is cloning her music and filing copyright claims against her own videos “An entity called Timeless Sounds IR uploaded AI-generated versions of my songs to all major streaming platforms…… pic.twitter.com/GUf09zRuJi — Unlimited L’s (@unlimited_ls) April 4, 2026
— China has published draft regulations on Digital Virtual Humans. The rules require clear labeling, explicit permission to recreate someone’s likeness, and ban deepfakes that get around ID mechanisms. Companies are also not allowed to provide AI “virtual intimate relationships” to minors under 18.