
Amazing CTO | More happiness and success
🚀 83.3

by Stephan Schmidt

Happy 🌞 Sunday,

Welcome to my opinionated newsletter.

This week’s insights

  • 🧠 Why everything takes longer: Learning is log-normal
  • 💥 Actionable LLM learnings
  • 🤖 Just like #CSS: ChatGPT reset

Good reading, have a nice Sunday ❤️ and a great week,

Stephan
CTO-Coach and CTO-veteran

🎁

If you only read one thing

💥 What We Learned from a Year of Building with LLMs (50 minute read)

Very, very long. Block your calendar, manage disruptions, and read the article. Everything is in there. If you need to transition your company to an AI product, this article is IT. I’ll keep it bookmarked. There is too much in it, across “tactical, operational and strategic” topics, to go into detail here. One minor quote: "[Prompting] It’s underestimated because the right prompting techniques, when used correctly, can get us very far." If you have not started adding AI to your org (and becoming a CAIO), this article is the kick to do so. Block your calendar, read it, act.

https://www.oreilly.com/radar/what-we-learned-from-a-year-of-building-with-llms-part-i/
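To make the prompting point concrete, here is a minimal few-shot sketch, assuming the OpenAI Python SDK; the model name, labels and tickets are placeholders I made up for illustration, not something from the article.

```python
# Minimal few-shot prompting sketch (assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set; model name and examples are illustrative only).
from openai import OpenAI

client = OpenAI()

few_shot = [
    {"role": "system", "content": "Classify support tickets as 'bug', 'billing' or 'other'. Answer with one word."},
    # Two worked examples steer the model more than a longer instruction would.
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The export button crashes the app."},
    {"role": "assistant", "content": "bug"},
    # The actual ticket to classify.
    {"role": "user", "content": "Can I invite my co-founder to the workspace?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=few_shot)
print(response.choices[0].message.content)  # expected: "other"
```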

Tweet of the week

Resume Tip

Sounds simple to me.

Source:
https://x.com/CupcakeGoth/status/1794205777131163808

Video of the week

6 weeks

THIS IS REAL! And how you handle it ;-)

Source: https://www.youtube.com/watch?v=1-5s4JlBesc

🚀

Stories I’ve enjoyed this week

🧠 Doing is normally distributed, learning is log-normal (5 minute read)

Alluring article. It says steps in a process follow the bell curve: it is improbable that a step takes 3x longer than normal. But learning is not like that; learning for 1 month, 2 months or 3 months is roughly equally likely. So for processes where we learn a lot, it is kind of random (my words) how long they will take. If you do things you have never done before, you don’t know how long they take and have to learn the steps first. Because we assume everything follows the bell curve, we mis-estimate. I believe coordination and alignment costs behave the same way, non-bell curve!

https://hiandrewquinn.github.io/til-site/posts/doing-is-normally-distributed-learning-is-log-normal/
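A quick way to feel the difference is to simulate it. This is my own illustration, not from the article: a "doing" step drawn from a normal distribution versus a "learning" step drawn from a log-normal one with the same median.

```python
# Illustration only: compare a normal "doing" step with a log-normal "learning" step.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# "Doing": 10 days on average, small spread; 3x the mean is essentially impossible.
doing = rng.normal(loc=10, scale=2, size=n)

# "Learning": same median (~10 days), but a heavy right tail.
learning = rng.lognormal(mean=np.log(10), sigma=0.8, size=n)

for name, sample in [("doing", doing), ("learning", learning)]:
    print(
        f"{name:8s} median={np.median(sample):5.1f}  "
        f"p95={np.percentile(sample, 95):5.1f}  "
        f"P(>30 days)={np.mean(sample > 30):.3%}"
    )
# The learning distribution routinely produces 3x-the-median outcomes,
# which is exactly what bell-curve intuition fails to estimate.
```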

🤖 What is your ChatGPT customization prompt? (5 minute read)

What is a ChatGPT customization prompt? One example: “NEVER mention that you’re an AI. Refrain from disclaimers about you not being a professional or expert.” I call it the ChatGPT reset. It’s like CSS resets that set sane defaults for CSS. Plus there are many interesting points in that discussion!

https://news.ycombinator.com/item?id=40474716
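If you use the API instead of the ChatGPT UI, the same "reset" idea maps to the system message. A minimal sketch, assuming the OpenAI Python SDK; the reset text itself is just an example, tune it to your taste.

```python
# "ChatGPT reset" as a system message (illustration; assumes the OpenAI Python SDK).
from openai import OpenAI

client = OpenAI()

# Like a CSS reset: sane defaults applied before every conversation.
CHATGPT_RESET = (
    "NEVER mention that you're an AI. "
    "Refrain from disclaimers about you not being a professional or expert. "
    "Be concise. If information is missing, ask instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": CHATGPT_RESET},
        {"role": "user", "content": "Should we migrate our monolith to microservices?"},
    ],
)
print(response.choices[0].message.content)
```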

Leaked Google Search API Documents (15 minute read)

Google internals leaked. I’m so happy! “Many of their claims directly contradict public statements made by Googlers.” I have been saying this for more than a decade: Google tells you things because they want to shape the internet, not because that is how they do things. What they say about SEO has nothing to do with how Google works. It’s always about the direction they want the internet to move in. And now, finally, this. A must read, not only for SEO of course.

https://sparktoro.com/blog/an-anonymous-source-shared-thousands-of-leaked-google-search-api-documents-with-me-everyone-in-seo-should-see-them/

Three Laws of Software Complexity (or: why software engineers are always grumpy) (4 minute read)

“The Third Law of Software Complexity: There is no fundamental upper limit on Software Complexity.” Ouch. Yes, you can add limitless abstractions. That is accidental complexity, and it has no upper bound. Essential complexity has a lower bound, though. So you can’t get as simple as you want, but you can get as complex as you want (also read the 1st and 2nd laws of complexity in the article).

https://maheshba.bitbucket.io/blog/2024/05/08/2024-ThreeLaws.html
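A toy illustration (mine, not from the article) of accidental complexity having no upper bound: the essential logic stays one line, while the abstraction layers around it can be stacked forever.

```python
# Essential complexity: one line, cannot get simpler.
def add_tax(price: float, rate: float) -> float:
    return price * (1 + rate)

# Accidental complexity: every layer below can be wrapped again, forever.
class TaxStrategy:
    def __init__(self, rate: float):
        self.rate = rate

    def apply(self, price: float) -> float:
        return add_tax(price, self.rate)

class TaxStrategyFactory:
    @staticmethod
    def create(country: str) -> TaxStrategy:
        return TaxStrategy({"DE": 0.19, "US": 0.07}.get(country, 0.0))

class PriceCalculationService:
    def __init__(self, factory=TaxStrategyFactory):
        self.factory = factory

    def gross(self, price: float, country: str) -> float:
        return self.factory.create(country).apply(price)

# Both compute the same number; only one of them had a lower bound.
print(add_tax(100.0, 0.19))                          # 119.0
print(PriceCalculationService().gross(100.0, "DE"))  # 119.0
```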

The Way We Are Building Event-Driven Applications is Misguided. (5 minute read)

I agree that event-driven applications are a problem. I don’t agree with the proposed solution. It feels like SOAP all over again, or the myriad extensions to CORBA (I’m that old :-( ). Something doesn’t work, so we add something on top. In this case a workflow engine that orchestrates calls to microservices on a bus (first you move to microservices, then from API-driven microservices to bus-driven microservices, ending up with very high complexity). For this to work, the workflow manager has to know all the microservices, so you create tight coupling. We have this pattern in our industry: some tech is hyped, rolled out to millions of companies, we see the problems, and the people who want to save the technology add more on top. We saw it with JavaScript. The simplest language, no build step, put it in an HTML file and run it in the browser; you can’t get simpler. Yet the build/pack/transmogrify ecosystem of JavaScript is BY FAR the most complex, because every few months something new is added, like another bundler. Same with React. Instead of adding, subtract. For many companies, a Modulith with an internal message bus solves most problems by taking away complexity, not adding it. I’m sure there is something in your stack that can be taken away.

https://gillesbarbier.medium.com/the-way-we-are-building-event-driven-applications-is-misguided-7f7c5f1a9061
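What a "Modulith with an internal message bus" can mean in practice: a minimal in-process pub/sub sketch (my illustration, the event and module names are made up), so modules stay decoupled without a broker, a workflow engine, or network hops.

```python
# Minimal in-process message bus for a modulith (illustration only).
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict[str, Any]], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict[str, Any]], None]) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, payload: dict[str, Any]) -> None:
        # Synchronous and in-process: no broker, no retries, no network partitions.
        for handler in self._handlers[event]:
            handler(payload)

# Two modules of the same deployable, coupled only by event names.
bus = MessageBus()
bus.subscribe("order.placed", lambda e: print(f"billing: invoice for order {e['order_id']}"))
bus.subscribe("order.placed", lambda e: print(f"shipping: pack order {e['order_id']}"))

bus.publish("order.placed", {"order_id": 42})
```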

✨ Cloudflare took down our website after trying to force us to pay 120k$ within 24h (12 minute read)

Cloudflare called and wants $120k from you NOW, or else. I’ve got that call. Not from Cloudflare, but from another company we depended on. If you depend on a single service, they can increase the price as much as they want; in this case, from “free” to $120k/year. For every critical service you use (I’m looking at you, ChatGPT!), have a backup plan. Every time those companies need money, they’ll shake the tree and see which customers fall off. It might be you next time. And your business that ends. Be a responsible CTO.

https://robindev.substack.com/p/cloudflare-took-down-our-website

Why you shouldn’t use AI to write your tests (8 minute read)

I disagree and disagree and disagree with the article. “When’s the last time you saw tests catch a bug?” Every time I write tests for a codebase that has no tests, I find a bug. And if you don’t find bugs, it is probably because you write the wrong tests. Or the bugs don’t surface because you don’t have enough users. Or customer support ignores your customers, like Google does. “In my experience, good TypeScript (or other static types) coverage removes much of the need for unit tests.” NO. Again, you write the wrong tests. I recently wrote “How Unit Tests Really Help Preventing Bugs” if you want to know more. The key mistake in the article is: “And because the AI derived the test from the code, you can’t even apply the Beyonce rule.” AI can write better tests than you, because the AI knows all the edge cases of your code base. It knows your codebase better than you. And it knows a million other code bases. And their edge cases. Let AI write your tests, but review and understand every one of them. The mistake is not using AI to write tests; it is not reviewing and understanding (and adapting if needed) those tests.

https://swizec.com/blog/why-you-shouldnt-use-ai-to-write-your-tests/
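What "review and understand" looks like in practice: a sketch of the kind of edge-case tests an LLM might propose (the pagination helper and cases are made up for this sketch), which you then read, adapt, and only keep if they pin down behavior you actually want.

```python
# Illustration: the kind of edge-case tests an LLM tends to propose for a
# hand-rolled pagination helper (function and cases made up for this sketch).
import pytest

def paginate(items: list, page: int, per_page: int) -> list:
    if per_page <= 0:
        raise ValueError("per_page must be positive")
    start = (page - 1) * per_page
    return items[start : start + per_page]

@pytest.mark.parametrize(
    "page, per_page, expected",
    [
        (1, 2, ["a", "b"]),   # happy path
        (3, 2, ["e"]),        # last, partial page
        (4, 2, []),           # past the end: empty, not an error
        (-1, 2, ["b", "c"]),  # negative page slices from the end and silently "works" -- do we want that?
    ],
)
def test_paginate_edges(page, per_page, expected):
    assert paginate(["a", "b", "c", "d", "e"], page, per_page) == expected

def test_paginate_rejects_zero_per_page():
    with pytest.raises(ValueError):
        paginate(["a"], 1, 0)
```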

Dataherald/dataherald (6 minute read)

The end for data teams is near! Between this and DuckDB (SQL on S3) there is not a lot left: “Dataherald is a natural language-to-SQL engine built for enterprise-level question answering over relational data. It allows you to set up an API from your database that can answer questions in plain English.” What are you going to do with your data team? Do they know?

https://github.com/Dataherald/dataherald
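This is not Dataherald’s actual API, just the underlying idea in miniature: translate a question plus the schema into SQL with a generic LLM call and run it against the database. A rough sketch, assuming the OpenAI Python SDK; the schema, question and prompt are made up.

```python
# The idea behind NL-to-SQL engines, in miniature (NOT Dataherald's API;
# assumes the OpenAI Python SDK, schema and question are made up).
import sqlite3
from openai import OpenAI

SCHEMA = "CREATE TABLE orders (id INTEGER, country TEXT, amount REAL, created_at TEXT);"
QUESTION = "What was the total order amount per country in 2024?"

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Translate the question into a single SQLite query. Schema:\n{SCHEMA}\nAnswer with raw SQL only, no markdown."},
        {"role": "user", "content": QUESTION},
    ],
)
sql = completion.choices[0].message.content.strip()
# Real engines validate and clean the generated SQL; we trust it here for brevity.

conn = sqlite3.connect(":memory:")  # stand-in for your real database
conn.execute(SCHEMA)
conn.execute("INSERT INTO orders VALUES (1, 'DE', 100.0, '2024-03-01'), (2, 'US', 50.0, '2024-05-02')")
print(sql)
print(conn.execute(sql).fetchall())
```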

SRE Google Simplicity (13 minute read)

“Unlike just about everything else in life, ‘boring’ is actually a positive attribute when it comes to software.” From a CTO point of view, yes. From an SRE point of view, yes. For many developers, no way!

https://sre.google/sre-book/simplicity/

mistralai/mistral-finetune (24 minute read)

You’ve learned that you can adapt an AI model to your business with finetuning. But then, how? HOW? Read the article (a GitHub repo, actually), which is a simple and great way to get into finetuning.

https://github.com/mistralai/mistral-finetune
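Before any finetuning run you need training data; check the repo for the exact format it expects. As a rough illustration (my assumption of a typical chat-style JSONL layout, not the repo’s spec), preparing instruction data tends to look like this:

```python
# Rough illustration of preparing chat-style finetuning data as JSONL.
# The exact schema mistral-finetune expects is documented in the repo;
# this layout is an assumption for illustration only.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Summarize ticket 4711 for the weekly report."},
            {"role": "assistant", "content": "Checkout bug on Safari, fixed in release 2.3.1, no data loss."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Draft a status update for the migration project."},
            {"role": "assistant", "content": "Database migration 80% done, cutover planned for Friday night."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")  # one JSON object per line
```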

🧙 Gamers Have Become Less Interested in Strategic Thinking and Planning (15 minute read)

Gamers have become less interested in strategic thinking. Less strategy, perhaps because, as the article puts it, “Put simply, we may be too worn out by social media to think deeply about things.” Ouch! I hope you increase your strategic thinking, not reduce it. Most CTOs I have met do not do enough strategic thinking but are trapped in day-to-day work. I wonder what “too worn out” means for CTOs.

https://quanticfoundry.com/2024/05/21/strategy-decline/

Why We Shift Testing Left: A Software Dev Cycle That Doesn’t Scale (17 minute read)

“Now multiple teams are trying to release branches at the same time, and QA is stressed trying to find problems before they release to production.” Ah yes, the QA-is-testing model. So they shifted testing left (Euro-centric), meaning developers do more of it. I agree with the article and encourage everyone to shift all testing into development. And I’ll add: QA should only help with product discovery, review tests and do exploratory testing. Otherwise QA will not scale, and developers won’t own the quality of their work.

https://thenewstack.io/why-we-shift-testing-left-a-software-dev-cycle-that-doesnt-scale/
