
Amazing CTO | More happiness and success
šŸš€ 120.4

by Stephan Schmidt

Happy 🌞 Sunday,

Welcome to my opinionated newsletter. This week’s insights:

  • šŸ“Š Trust in AI falls, Stack Overflow Developer Survey 2025
  • ⚔ Inference speed makes the difference for coding
  • šŸ—£ļø What outsiders get wrong about AI (you too!)
  • šŸ¤– 99% of AI Startups Will Be Dead by 2026
  • šŸ‘©ā€šŸ’¼ How to overcome self-doubt
  • šŸ“š Doesn’t matter that you don’t learn coding with AI
  • āš–ļø US vs. EU: The myth of work–life balance is dead
  • šŸ  My Plan B: Self-Hosting AI Models After Claude’s Usage Limits
  • 🪐 At last a Stargate, or not?!

Good reading, have a nice Sunday ā¤ļø and a great week,

Stephan
CTO-Coach and CTO-veteran

Need support as an engineering manager? Thought about coaching? Let's talk. I have helped many CTOs and engineering leaders with growth and with making the right decisions under pressure, and I can help you too.
šŸŽ

If you only read one thing

Stack Overflow Developer Survey 2025 (8 minute read)

Some results from the 2025 Stack Overflow Developer Survey:

  • the future of code is about trust, not just tools (or Control, see my Theory of Control)
  • trust in the accuracy of AI has fallen from 40% in previous years to just 29% this year
  • 66% of developers say they are spending more time fixing ā€œalmost-rightā€ AI-generated code
  • while Redis has been on the survey as a database option since 2017, this year it shines as the top choice for AI agent data storage
  • 75% describe themselves as ā€œcomplacentā€ or ā€œnot happy at work.ā€

The first backlash? Are developers right about AI? Worth a read. Of course we get some propaganda pushed in between the lines, ā€œWhen developers visit Stack Overflow, their top-ranked activity is reading comments, showing a deep interest in human-to-human context.ā€ I’d still rather ask ChatGPT for an answer than Stack Overflow 90% of the time.

https://stackoverflow.blog/2025/07/29/developers-remain-willing-but-reluctant-to-use-ai-the-2025-developer-survey-results-are-here/


šŸš€

Stories I’ve enjoyed this week

Inference speed (10 minute read)

ā€œIt’s generatingā€ is the new ā€œCompiling!ā€ Watching Claude Code churn through the tokens, working and working, reminds me of the old days when I had to let an Amiga program run over lunch, or overnight, because it took that long. Faster is always better; we want faster inference! ā€œCerebras claim they can serve the model at an astonishing 2,000 tokens per secondā€ (again with the Qwen Coder model, which seems to be the next big thing). ā€œI’ve experimented with Cerebras in the past and found that the speed really does make iterating on code with live previews feel a whole lot more interactive.ā€ https://simonwillison.net/2024/Oct/31/cerebras-coder/

https://simonwillison.net/2025/Aug/1/faster-inference/
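To put numbers on why inference speed changes how agent coding feels, here is a rough back-of-the-envelope sketch. Only the 2,000 tokens per second figure comes from the article; the per-turn token counts and the slower speeds are my own assumptions for illustration.

```python
# Back-of-the-envelope: time spent waiting on generation per coding task.
# Token counts and the slower speeds are assumptions, not benchmarks;
# only the 2,000 tok/s figure is the Cerebras claim quoted above.
TOKENS_PER_TURN = 3_000   # assumed output tokens per agent iteration
TURNS_PER_TASK = 10       # assumed iterations until the change works

for name, tok_per_s in [("typical hosted model", 60),
                        ("fast provider", 300),
                        ("Cerebras claim", 2_000)]:
    seconds_per_turn = TOKENS_PER_TURN / tok_per_s
    minutes_per_task = seconds_per_turn * TURNS_PER_TASK / 60
    print(f"{name:>20}: {seconds_per_turn:5.1f}s per turn, "
          f"~{minutes_per_task:.1f} min of waiting per task")
```

Under these assumptions, 60 tok/s means roughly eight minutes of pure waiting per task; at 2,000 tok/s it drops to about 15 seconds, which is the difference between ā€œgo get a coffeeā€ and staying in flow.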


Noam Chomsky Speaks on What ChatGPT Is Really Good For (18 minute read)

A mixed bag of an article. In computer science I first became aware of Noam Chomsky through his work on formal languages and the Chomsky hierarchy. But we get the same problem we get everywhere: someone great at one thing talks about other things. ā€œUse even more of California’s energyā€ might be a minor thing (AI uses 0.2% of world energy - people make it look like a catastrophe). But then there is ā€œJust to clarify terminology, the term machine here means program, basically a theory written in a notation that can be executed by a computer–and an unusual kind of theory in interesting ways that we can put aside here.ā€ No, AI is not a program written in a notation. The model on a meta level, yes; the code executing the model - like TensorFlow - yes; but the results are not based on the deterministic execution of codified requirements. Everyone seems to get that wrong. AI is not software. On the upside, it’s good to listen to people outside the bubble to see what you miss. In general there seems to be a very weak understanding of what LLMs are, even from a language scientist.

https://chomsky.info/20230503-2/
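To make the ā€œAI is not a program written in a notationā€ point concrete, here is a toy sketch (my own illustration, not from the article or from Chomsky): classic software encodes requirements as rules you can read, while an LLM-style system’s behaviour lives in learned weights and sampled outputs.

```python
import random

# Classic software: the behaviour is a codified requirement you can read.
def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total >= 50 else 4.95   # the rule IS the program

# Toy stand-in for an LLM (illustration only): behaviour comes from learned
# weights, and the output is sampled, so identical inputs can differ per run.
LEARNED_WEIGHTS = {"cat": 0.5, "dog": 0.3, "code": 0.2}  # assumed, ā€œtrainedā€

def toy_next_token(context: str, temperature: float = 1.0) -> str:
    words, weights = zip(*LEARNED_WEIGHTS.items())
    if temperature == 0:
        return words[weights.index(max(weights))]        # greedy decoding
    return random.choices(words, weights=weights)[0]     # stochastic sampling

print(shipping_cost(60))                          # always 0.0, by specification
print([toy_next_token("the") for _ in range(5)])  # varies from run to run
```

The first function is a theory written in a notation; the second only behaves because of numbers nobody wrote by hand, which is why calling the whole thing ā€œa programā€ misses what matters.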


99% of AI Startups Will Be Dead by 2026 (30 minute read)

For me the jury is still out: do AI wrappers provide value? Is the value in the model, the tool/UI, the prompts (I think so, see my prompting tool Marvai ;-) but I’m sure you disagree), or in the GPUs (gaming hardware!) provided by Nvidia? From the article: ā€œBecause if the wrappers go down, OpenAI’s reach shrinks. They can try to convert those users directly — but most of them weren’t signing up for ChatGPT Pro. They showed up for workflow, not raw model access.ā€ A very long article, which I love, because it goes deeper and shows some thinking - not just business-TikTok aka LinkedIn bait.

https://skooloflife.medium.com/99-of-ai-startups-will-be-dead-by-2026-heres-why-bfc974edd968


How Female Entrepreneurs Can Overcome Self-Doubt (18 minute read)

I feel this goes way beyond female entrepreneurs and touches many people, especially introverts. ā€œOur research offers practical guidance for individuals who are looking to attain an aspired identity or achieve aspirational goals, but face significant internal obstacles such as self-doubt or fear of failure [..] These individuals need to recognize that the internal obstacles they experience stem from the conflict they perceive between their aspirations and one or more of their existing identities.ā€ I have two clients who, as CTOs, lost out on AI because of their identity. I think a lot of developers struggle with AI adoption because of ā€œtheir existing identityā€. Identity is the strongest thing we cling to; we do everything to keep it intact and stable.

https://journals.aom.org/doi/10.5465/amj.2022.1104.summary


you’re probably not learning with ai (4 minute read)

This misconception is everywhere: that we don’t become experts by using LLMs. You’re not learning to become a senior Python developer when using LLMs, but you are going to learn to be an expert in managing an AI. Most people don’t know machine code; the compiler took that away decades ago (I loved writing Z80 code, I’m that old, plus white, plus shouting at clouds sometimes). One step deeper: do you know what microcode is executed for your code? I believe people only learn one abstraction layer down; painters in the past became experts in mixing and creating colors, but they weren’t experts on mining the minerals for those colors (I don’t know one personally, but I assume the people who painted cave walls were experts on mining the minerals for their colors). We shift up, and we lose expertise one abstraction layer down.

https://aryas.dev/post/llmstudy


Howard Levitt: The myth of work–life balance is dead, and employers aren’t afraid to say it (6 minute read)

Do your employees need to work 60h/week so you can become a tech giant? Or is it the over-abundance of money in the valley and a homogeneous, rich market of 350 million people? I guess we’ll see in the next years, with the US rolling back hard on home office and reduced hours, while the EU, although 4-day weeks seem off the table for now, still focuses on a better work-life balance. Personally I think you can’t do 60h/week of creative work, and with AI the manual work falls away and the creative work remains. The ratio of creative work skyrockets; can you fill 60h/week with it? I recently wrote about this in Too Fast to Think: The Hidden Fatigue of AI Vibe Coding if you’re interested.

https://financialpost.com/fp-work/howard-levitt-work-life-balance-dead-employers


Self-Hosting AI Models After Claude’s Usage Limits (15 minute read)

As CTO I once depended on a piece of software that was deeply woven into our stack with no anti-corruption layer. Not by my design, but it could easily have been. The vendor increased the price 10x year over year, and not from $5 to $50, let me assure you, which nearly led to my firing. VMware customers tied their company IT deeply to VMware; when prices went up, which it seems resulted in 10x increases for some customers, they had nowhere to go: no plan B. If, next year, all of your development depends on AI APIs and they increase the price 10x, can you cope? What is your plan B? Sadly I think not enough people experiment with self-hosting. Do you use old EPIC processors, or ā€œA few months ago I bought a Mac Studio with 512GB ram, top of the line, to experiment with modelsā€? But then ā€œYes, I can run Deepseek Coder V2 at ~25 tok/s or a quantified version of R1 at ~8–15 tok/s, but that’s not fun, and 128k context size is quite a bit of a downgrade compared to Claude’s 200k or Gemini’s 1Mio context.ā€ At least there is hope on models: ā€œwe also need capable models. These days, chinese labs releases better and more capable models on an almost weekly basis. The release of Qwen3-Coder-480B got my attention, as it’s the first model that achieves a similar score on SWE-bench as Claude Sonnet 4.ā€ Overall a very welcome article. #PlanB

https://steipete.me/posts/2025/self-hosting-ai-models
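What a plan B can look like in code: a minimal sketch of an anti-corruption layer that keeps the model provider behind one interface, so moving from a hosted API to a self-hosted model is a configuration change rather than a rewrite. All class, model, and endpoint names here are illustrative assumptions, not the article’s code or any vendor’s actual API.

```python
from dataclasses import dataclass
from typing import Protocol


class CompletionProvider(Protocol):
    """The only surface the rest of the codebase is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class HostedProvider:
    """Wraps a hosted AI API; only this class knows the vendor exists."""
    api_key: str
    model: str = "hosted-coder-model"        # hypothetical model name

    def complete(self, prompt: str) -> str:
        # Vendor HTTP call would go here (omitted in this sketch).
        raise NotImplementedError("wire up the hosted API client here")


@dataclass
class SelfHostedProvider:
    """Wraps a local inference server, e.g. one serving a Qwen3-Coder build."""
    base_url: str = "http://localhost:8000"  # assumed local endpoint

    def complete(self, prompt: str) -> str:
        # Call to the self-hosted server would go here (omitted in this sketch).
        raise NotImplementedError("wire up the self-hosted client here")


def make_provider(config: dict) -> CompletionProvider:
    """The 10x price hike becomes a config change, not a migration project."""
    if config.get("provider") == "self_hosted":
        return SelfHostedProvider(base_url=config.get("base_url", "http://localhost:8000"))
    return HostedProvider(api_key=config["api_key"])
```

The point is not the few lines of Python; it is that the day the invoice grows 10x, the blast radius is one factory function instead of every feature that calls a model.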


Introducing Stargate Norway (3 minute read)

Stargate Norway! I guess it has nothing to do with a ā€œStargateā€ :-( Sigh. Why Norway? Latency? Cheap energy? (When we hiked from Oslo to Trondheim, lights were on during the day; when asked, a Norwegian said some people do that because energy is so cheap. Same with EVs.) EU GDPR considerations? Sovereignty? Interesting at least, although I don’t have an answer yet. I wish for a Stargate though.

https://openai.com/index/introducing-stargate-norway/

