
Amazing CTO | More happiness and success
🚀 121.4

by Stephan Schmidt

Happy 🌞 Sunday,

Welcome to my newsletter. This week’s opinionated insights:

  • 🚌 ‘AI First’ and the Bus Factor of 0
  • ⚡ Google drops AI energy cost by 33x
  • 🧠 Microsoft boss troubled by ‘AI psychosis’ - ultra-processed minds?
  • 🔓 CodeRabbit exploited: When AI security tools aren’t secure
  • đŸ« Annual cybersecurity training has no observable benefit
  • 💰 Where to find ideas because you need MORE ideas with AI
  • 📈 You have too many metrics - and most are vanity ones
  • 🔄 GitHub nuked my account at midnight - #YouHaveBeenWarned

Good reading, have a nice Sunday ❤️ and a great week,

Stephan
CTO-Coach and CTO-veteran

PS: I relaunched my website, take a look at amazingcto.com

Need support as an engineering manager? Thought about coaching? Let's talk - I have helped many CTOs and engineering leaders with growth and with making the right decisions under pressure, and I can help you too.
🎁

If you only read one thing

‘AI First’ and the Bus Factor of 0 - MindFlash - Programming and more (3 minute read)

Interesting idea. The bus factor is the number of people that need to get hit by a bus before you get into trouble with your code. With LLMs, the article argues, “Before LLMs, provided that your team did some of their due diligence, you could always expect to have some help when tackling new code-bases. [..] With LLMs, this is gone.” - so the bus factor is zero, according to the article. Interesting, but wrong of course. The assumption that LLMs/AIs are only ever useful for generating code is wrong. AIs are very useful for finding bugs, explaining the reason for a bug, extracting and explaining the architecture, explaining the codebase, summarizing the technologies used, ranking all the technologies used, suggesting migrations, reading git logs to find out whom to ask about some code, and on and on and on. If you think the best AI can do is generate code, you’re mistaken. Sadly, even the CEOs I meet are 150% focused on “write more code faster!” instead of letting the AI rank and judge the roadmap, the strategy, how the roadmap fits the strategy, or check the customer support database and the sales CRM to find out what customers need, and on and on and on. But AI = gen code it is. #SadFact
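
To make the “whom to ask” point concrete: a minimal sketch (Python, hypothetical file path, assumes you run it inside a git repository) of the kind of git-log digging an AI can do or summarize for you.

# Count commit authors for a file to find out whom to ask about it.
import subprocess
from collections import Counter

def whom_to_ask(path, top=3):
    # Ask git for the author of every commit that touched the path.
    log = subprocess.run(
        ["git", "log", "--follow", "--format=%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(a for a in log.splitlines() if a).most_common(top)

print(whom_to_ask("src/billing/invoice.py"))  # hypothetical path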

https://www.mindflash.org/coding/ai/ai-and-the-bus-factor-of-0-1608


🚀

Stories I’ve enjoyed this week

Google says it dropped the energy cost of AI queries by 33x in one year (8 minute read)

I think talking about AI energy usage is important. Good that Google dropped energy usage. That said, current AI energy usage is 0.1% - 0.3% of world electricity usage - so there are bigger fish to fry. Keep this in mind when reading “the environmental impact of AI is looking pretty bad” from a reporter who might have flown around the world to be corrupted by a company for the launch of a new mobile phone that is the same as the last one. Generally it’s hard to predict or even talk about what is going to happen in disruptive, fast-changing fields - the fundamentals change too much (hardware, models, …).

https://arstechnica.com/ai/2025/08/google-says-it-dropped-the-energy-cost-of-ai-queries-by-33x-in-one-year/


Microsoft boss troubled by rise in reports of ‘AI psychosis’ (6 minute read)

First, interesting: “AI psychosis”, when people believe the AI. I hadn’t heard the term before. Then “[..] he has this advice: ‘Don’t be scared of AI tools, they’re very useful. But it’s dangerous when it becomes detached from reality.’” Hmm, like everything else? “We already know what ultra-processed foods can do to the body and this is ultra-processed information. We’re going to get an avalanche of ultra-processed minds” - alchemy never died. But everyone is entitled to their own opinion, just like the things I write here. Still, “ultra-processed minds”, really? I think for people vulnerable to escapist behaviour, AIs are just the next step. Perhaps this way conspiracy theories go out of fashion, at least when echo chambers break. Or we get micro-echo-chambers: you and your AI. Interesting times at least.

https://www.bbc.com/news/articles/c24zdel5j18o


How We Exploited CodeRabbit: From a Simple PR to RCE and Write Access on 1M Repositories (22 minute read)

We have all those discussions about how AI-generated code is insecure, how developers are so much better, “beware of the security problems of AIs”, and on and on and on. My experience is different. You can ask a developer to write secure code, but for many reasons they won’t. You can ask an AI to write secure code, and you’ll get 80% of the way there.

Here a company deployed a product, #CodeRabbit, with such big, glaring security mistakes that it was a breeze to exploit. Tools that could execute arbitrary code didn’t run in a sandbox. The configuration of the tools could be changed by anyone. Company-wide security tokens sat in ENV variables. Does your code have the same glaring security mistakes? Why not ask Claude Code about it, like now?
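
To make the class of mistake concrete, a minimal sketch (Python, hypothetical tool name, not CodeRabbit’s actual code): the difference between handing a tool that processes attacker-controlled input your whole environment and handing it only what it needs.

import subprocess

def run_tool_unsafely(repo_path):
    # BAD: the child inherits every ENV variable, including company-wide
    # tokens and cloud credentials, while executing untrusted repo content.
    return subprocess.run(["some-analyzer", repo_path], capture_output=True, text=True)

def run_tool_safer(repo_path):
    # Better: pass a minimal environment and a timeout. A real fix also means
    # running the tool in a sandbox with no network and no write access.
    minimal_env = {"PATH": "/usr/bin:/bin", "HOME": "/tmp/sandbox-home"}
    return subprocess.run(
        ["some-analyzer", repo_path],
        env=minimal_env, capture_output=True, text=True, timeout=60,
    )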

https://research.kudelskisecurity.com/2025/08/19/how-we-exploited-coderabbit-from-a-simple-pr-to-rce-and-write-access-on-1m-repositories/


Black Hat Phishing Talk 2025 - US-25-Dameff-Pwning-Phishing-Training-Through-Scientific-Lure-Crafting-Wednesday.pdf (16 minute read)

“Annual cybersecurity training has no observable benefit.” a.) Interesting. b.) After undergoing cybersecurity trainings in large companies and being responsible for making everyone in my departments watch some videos for a few hours, I believe this. c.) I would still do the training - with consequences. It seems clicking on a simulated phishing email had no consequences whatsoever for the employees. If things have no consequences, they don’t change behavior.

https://i.blackhat.com/BH-USA-25/Presentations/US-25-Dameff-Pwning-Phishing-Training-Through-Scientific-Lure-Crafting-Wednesday.pdf


Cursed Knowledge | Immich (5 minute read)

CURSED KNOWLEDGE! “PostgreSQL does everything in a transaction, including NOTIFY. This means using the socket.io postgres-adapter writes to WAL every 5 seconds.” I didn’t know that. And I also didn’t know about the other cursed knowledge items in the list. For a long time now I have had a LEARNINGS.md file in all of my projects - for the exact same reason as the cursed knowledge list (Oopsie: there are four items concerning PostgreSQL in that list).
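
A minimal sketch of the NOTIFY point (Python with psycopg2, hypothetical connection string): NOTIFY is transactional, so it only fires on COMMIT, and the commit is what ends up in the write-ahead log.

import psycopg2

conn = psycopg2.connect("dbname=app")  # hypothetical connection string
with conn:  # opens a transaction, commits on exit
    with conn.cursor() as cur:
        cur.execute("NOTIFY heartbeat, 'ping'")
        # Listeners only see the notification after the commit, and the
        # commit itself goes through the WAL.
# Do this every 5 seconds, like the socket.io postgres-adapter, and you write
# to the WAL every 5 seconds even though no table changed.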

https://immich.app/cursed-knowledge/


Where to find ideas (8 minute read)

If you lay off people because of AI, it means you don’t have enough good ideas. If you don’t want to let people go, you need more ideas. So where do you get good ideas? Seeing companies consistently fail on product-market fit (PMF), most often skipping it and jumping right into scaling (and failing), my hopes are low. BUT you’re here, so read the article, there is hope. I didn’t know about the PULL framework, for example: users have a PROJECT that is UNAVOIDABLE, and the options they LOOK into are LACKING - PULL for short. And while you’re at it, fix your PMF.

https://howtogrow.substack.com/p/where-to-find-ideas


You Have Too Many Metrics (6 minute read)

What I have been saying for decades now: “The golden rule of metrics is this: any metric you maintain should directly drive action if outside expected bounds.” - I’m always glad when at last someone says the same thing; we all want to be acknowledged. If a metric isn’t actionable, it’s vanity. There is also surprisingly good stuff in old books: “One Page Management” (1986), which wants to convince you that computers can help your company, tells us to act on a metric that goes out of bounds twice in a row. Good advice for knowing what to focus on. Also: people don’t know the difference between a metric, a KPI, and a goal - or the KR in OKR. Ask me if you want to know. Also: drop every metric where you don’t know what to do when it runs out of bounds.
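
A minimal sketch of the “out of bounds twice in a row” rule (Python, made-up metric name, thresholds, and numbers):

def needs_action(samples, low, high):
    # Act only if the last two samples are both outside [low, high].
    if len(samples) < 2:
        return False
    return all(not (low <= s <= high) for s in samples[-2:])

weekly_signup_conversion = [0.041, 0.038, 0.022, 0.019]  # hypothetical numbers
print(needs_action(weekly_signup_conversion, low=0.03, high=0.06))  # True -> act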

https://staysaasy.com/management/2025/08/02/metrics.html


GitHub Nuked My Account at Midnight During Alpha Release: Why I Rage-Quit to GitLab (10 minute read)

When I talk to my CTO clients about AWS closing their accounts and what to do then, they say “AWS would never do that!” Well, GitHub closed an account without warning or explanation and doesn’t answer support mails. You have been warned.

https://www.stack-junkie.com/blog/GitHub-Nuked-My-Account-at-Midnight-During-Alpha-Release-Why-I-Rage-Quit-to-GitLab

