If you only read one thing

"AI First" and the Bus Factor of 0 - MindFlash - Programming and more (3 minute read)
Interesting idea. The bus factor is the number of people that need to get hit by a bus before you get into trouble with your code. With LLMs, the article argues, "Before LLMs, provided that your team did some of their due diligence, you could always expect to have some help when tackling new code-bases. [..] With LLMs, this is gone." - the bus factor is zero, according to the article. Interesting, but wrong of course. The assumption that LLMs/AIs are only ever useful to generate code is wrong. AIs are very useful for finding bugs, explaining the reason for a bug, extracting and explaining the architecture, explaining the codebase, summarizing the technologies used, ranking all the technologies used, suggesting migrations, reading the git log to find out whom to ask about some code, and on and on and on. If you think the best AIs can do is generate code, you're mistaken. Sadly, even the CEOs I meet are 150% focused on "write more code faster!" instead of letting the AI rank and judge the roadmap and the strategy, check how the roadmap fits the strategy, or mine the customer support database and the sales CRM to find out what customers need, and on and on and on. But AI = gen code it is. #SadFact
https://www.mindflash.org/coding/ai/ai-and-the-bus-factor-of-0-1608
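To make the "whom to ask" point concrete: a minimal sketch (not from the article; the "src" path is just an example, and it assumes git is installed and the script runs inside a repository) that counts distinct authors per file from the git log - the kind of summary you can eyeball or hand straight to an LLM:

```python
# Rough per-file bus-factor estimate: how many distinct authors ever
# touched each tracked file? Files with a single author are the risky ones.
# Assumes git is on PATH and this runs inside a repository; "src" is only
# an example path.
import subprocess

def authors_per_file(path="src"):
    files = subprocess.run(
        ["git", "ls-files", path],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    result = {}
    for f in files:
        emails = subprocess.run(
            ["git", "log", "--follow", "--format=%ae", "--", f],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        result[f] = set(emails)
    return result

if __name__ == "__main__":
    for f, authors in sorted(authors_per_file().items(), key=lambda kv: len(kv[1])):
        if len(authors) <= 1:
            print(f"{f}: only {sorted(authors)} ever touched this file")
```

Files with a single author are exactly where the classic bus factor still bites, LLM or not.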
Stories I've enjoyed this week

Google says it dropped the energy cost of AI queries by 33x in one year (8 minute read)
I think talking about AI energy usage is important. Good that Google dropped energy usage. That said, current AI energy usage is 0.1% - 0.3% of world electricity usage - so there are bigger fish to fry. Keep this in mind when reading "the environmental impact of AI is looking pretty bad" from a reporter who might have flown around the world on a company's dime for the launch of a new mobile phone that is the same as the last one. Generally it's hard to predict or even talk about what is going to happen in disruptive, fast-changing fields - the fundamentals change too much (hardware, models, ...).

Microsoft boss troubled by rise in reports of "AI psychosis" (6 minute read)
First, interesting: "AI psychosis", when people believe the AI. I hadn't heard the term before. Then "[..] he has this advice: 'Don't be scared of AI tools, they're very useful. But it's dangerous when it becomes detached from reality.'" Hmm, like everything else? "We already know what ultra-processed foods can do to the body and this is ultra-processed information. We're going to get an avalanche of ultra-processed minds" - alchemy never died. But everyone is entitled to their own opinion, just like the things I write here. Still, "ultra-processed minds", really? I think for people vulnerable to escapist behaviour, AIs are just the next step. Perhaps this way conspiracy theories go out of fashion, at least when echo chambers break. Or we get micro-echo-chambers: you and your AI. Interesting times, at least.
https://www.bbc.com/news/articles/c24zdel5j18o

How We Exploited CodeRabbit: From a Simple PR to RCE and Write Access on 1M Repositories (22 minute read)
We have all those discussions about how AI-generated code is insecure, how developers are so much better, "beware of the security problems from AIs", and on and on and on. My experience is different. You can ask a developer to write secure code, but for many reasons they won't. You can ask an AI to write secure code, and you'll get 80% of the way there. Here a company deployed a product, #CodeRabbit, with such big, glaring security mistakes that it was a breeze to exploit: tools that could execute arbitrary code didn't run in a sandbox, the configuration of those tools could be changed by anyone, and company-wide security tokens sat in ENV variables. Does your code have the same glaring security mistakes? Why not ask Claude Code about it, like now?

Black Hat Phishing Talk 2025 - US-25-Dameff-Pwning-Phishing-Training-Through-Scientific-Lure-Crafting-Wednesday.pdf (16 minute read)
"Annual cybersecurity training has no observable benefit."
a.) Interesting.
b.) After undergoing cybersecurity trainings in large companies, and after being responsible for making all the people in my departments watch some videos for some hours, I believe this.
c.) I would still do that training - with consequences. It seems clicking on a simulated phishing email had no consequences whatsoever for the employees. If things have no consequences, they don't change behavior.

Cursed Knowledge | Immich (5 minute read)
CURSED KNOWLEDGE! "PostgreSQL does everything in a transaction, including NOTIFY. This means using the socket.io postgres-adapter writes to WAL every 5 seconds." I didn't know. And I also didn't know about the other cursed knowledge items in the list. For a long time now I have kept a LEARNINGS.md file in all of my projects - for exactly the same reason as the cursed knowledge list (Oopsie: there are four items concerning PostgreSQL in that list).
https://immich.app/cursed-knowledge/
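The NOTIFY item is easy to see for yourself. A minimal sketch, assuming psycopg2 and a local database called "test" (the DSN and the channel name are made up): nothing reaches the listener while the sender's transaction is open; the notification only ships on commit, which - per the Immich list - is why the socket.io postgres-adapter's heartbeat ends up as WAL writes every 5 seconds.

```python
# NOTIFY is transactional in PostgreSQL: it is queued inside the sender's
# transaction and only delivered to listeners when that transaction commits.
# Assumes psycopg2 and a local database "test"; DSN and channel are examples.
import select
import psycopg2

listener = psycopg2.connect("dbname=test")
listener.autocommit = True                       # LISTEN takes effect immediately
listener.cursor().execute("LISTEN demo")

sender = psycopg2.connect("dbname=test")         # psycopg2 opens a transaction implicitly
sender.cursor().execute("NOTIFY demo, 'hello'")  # queued, not yet delivered

def pending(conn, timeout=1.0):
    """Return notifications that arrived on conn within `timeout` seconds."""
    select.select([conn], [], [], timeout)       # wait for data on the socket
    conn.poll()                                  # let psycopg2 process it
    notes = list(conn.notifies)
    del conn.notifies[:]
    return notes

print(pending(listener))   # [] - the sender's transaction is still open
sender.commit()            # the commit is what actually ships the notification
print(pending(listener))   # one Notify with channel='demo', payload='hello'
```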
Where to find ideas (8 minute read)
If you lay off people because of AI, it means you don't have enough good ideas. If you don't want to let people go, you need more ideas. Where do you get good ideas? Seeing companies consistently fail on product-market fit (PMF), most often skipping it and jumping right into scaling (and failing), my hopes are low. BUT you're here, so read the article, there is hope. I didn't know about the PULL framework, for example: users have a PROJECT that is UNAVOIDABLE, and the options they LOOK into are LACKING - PULL for short. And while you're at it, fix your PMF.
https://howtogrow.substack.com/p/where-to-find-ideas

You Have Too Many Metrics (6 minute read)
What I have been saying for decades now: "The golden rule of metrics is this: any metric you maintain should directly drive action if outside expected bounds." Always glad when at last someone says the same thing; we all want to be acknowledged. If a metric isn't actionable, it's vanity. Then there is surprisingly good stuff in old books: "One Page Management", 1986, which wants to convince you that computers can help your company, tells us to look at a metric that goes out of bounds twice in a row. Good advice for knowing what to focus on (a sketch of that rule is in the P.S. below). Also: people don't know the difference between a metric, a KPI and a goal - or the KR in OKR. Ask me if you want to know. Also: drop all the metrics where you don't know what to do when they run out of bounds.
https://staysaasy.com/management/2025/08/02/metrics.html

GitHub Nuked My Account at Midnight During Alpha Release: Why I Rage-Quit to GitLab (10 minute read)
When I talk to my CTO clients about AWS closing their accounts, and what to do then, they say "AWS would never do that!" Well, GitHub closed an account without warning or explanation and doesn't answer support mails. You have been warned.

Join the CTO newsletter!
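P.S. The "out of bounds twice in a row" rule from the metrics item above, as a minimal sketch - the metric name, bounds, and observations are all made up:

```python
# Toy version of "only act when a metric is out of bounds twice in a row".
# Metric name, bounds, and values are invented for illustration.
BOUNDS = {"p95_latency_ms": (0, 800)}   # expected range per metric

def out_of_bounds(metric, value):
    low, high = BOUNDS[metric]
    return not (low <= value <= high)

def should_act(metric, history):
    """Act only if the last two observations are both outside the bounds."""
    return len(history) >= 2 and all(out_of_bounds(metric, v) for v in history[-2:])

observations = [650, 910, 870]                       # one blip, then a real trend
print(should_act("p95_latency_ms", observations))    # True: two in a row -> act
```

The shape is the point: one expected range per metric, and an alert that only fires when you would actually act.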