I’m joining Qodo


Hello Reader,

If you’ve been reading this newsletter for a while, you know that quality engineering is the hill I’ll always choose to stand on. And this week, I get to share something personal that ties directly into that.

I’m joining Qodo

I’ve been following Qodo for almost a year now, and I’ve grown more impressed every day. So I’m thrilled to share that I’m joining Qodo as a DevRel engineer.

Qodo is an enterprise multi-agent platform for AI-driven code reviews. As AI accelerates development, Qodo is on a mission to ensure that quality scales alongside it. As a long-time advocate for quality engineering, that’s a mission I can fully get behind.

Code velocity is at an all-time high, and so are concerns about quality. Everyone’s shipping faster, but faster doesn’t mean better, not by default. Qodo is building a solution that’s uniquely positioned to tackle this, and in my opinion it’s the best-equipped one. Their multi-agent review system doesn’t just suggest naming changes and call it a day. It draws on full-repository signals, codebase history, and prior PR decisions to deliver feedback that’s actionable and specific to how your team defines quality.

I’m especially looking forward to working with Nnenna and Itamar, people I’ve admired for a long time. It feels like the right place, at the right time, working on the right problem.

Speaking of right time, Qodo just raised a $70M Series B led by Qumra Capital. This is a big vote of confidence in the company and the team.

Rethinking the development process

Last week, this tweet by Addy Osmani got stuck in my head: “Build the thing that feels too ambitious. If you’re using agents to do exactly what you were doing before - just faster - you may be thinking too small. Ambitious is the right size side-project.”

AI has been a big “unlock” for many of us. Many question its real impact on the job market and how it will change the way we work. I really liked this tweet by Dex Horthy: “If you’d been walking everywhere and one day you got a car, you wouldn’t just get where you were going faster, you would go to way more places.”

The job market tells a different story

It seems this instinct can be backed by data. Lenny Rachitsky shared data this week showing that engineering job openings are at their highest levels in over three years: over 67,000 engineering openings at tech companies globally, with 26,000 in the U.S. alone. AI roles are exploding, and PM openings are up. Despite all the doom-scrolling and anxiety about AI replacing developers, the demand for people who can build things keeps going up.

This suggests that AI may well have the opposite effect on the job market from what many claim. But there also seems to be a shift in the way many of us work.

Fix the process, not just the speed

Steve Sewell wrote a piece on the Builder.io blog titled AI Won’t Save Your Development Process. Rebuilding It Will. The core argument is simple but important: faster code generation alone doesn’t fix broken workflows. Teams that just bolt AI onto their existing process and expect magic are going to be disappointed.

The real competitive advantage comes from teams that use AI as a catalyst for rethinking their entire approach. It’s not about how quickly you write code, but about how quickly you learn whether the code you wrote was the right code.

Testing events

I went to TestCrunch last week to meet many of my friends in testing. AI was the hot topic. I loved that many talks focused on practical examples of how to use AI to improve testing workflows. There aren’t recordings of these talks, but if you want to dive deeper, I recommend checking out a workshop on April 29th with Ivan Davidov and Debbie O’Brien on orchestrating AI-native testing with Playwright.


That’s it for this week. A lot of change on my end, but the mission stays the same: helping teams ship with confidence. If you have thoughts on any of this, just hit reply. I read every email.

See you next week!

Filip Hric

