Abstractions in Test Automation: Friend or Foe?


Filip Hric

14th January 2025

Hello Reader,
I’ve been recently doing some thinking about abstractions in tests. The promise is that they simplify test code, but reality seems to suggest the opposite.

Here’s the dilemma: the bigger the codebase, the more sense it seems to make not to repeat yourself. But while abstractions prevent repetition, they often come with unintended consequences: changing one abstraction can ripple through the entire codebase. This is a common problem in app development (I edit something here, the app breaks there), and it’s the very reason we write tests in the first place. Yet when it comes to writing test code, we run into the same problems.

I brought this discussion to my social networks, and some of you shared valuable perspectives. To sum up the arguments I found most compelling:

  • Repetition Isn’t Always Bad: Tests benefit from clarity over cleverness. Small, targeted utility functions often outperform sprawling abstractions.
  • Focus on the Essentials: Instead of obsessing over eliminating repetition, channel your energy into crafting meaningful assertions and a strong test strategy.
  • Ask This Question: Can I delete this code easily? If not, your abstraction might be doing more harm than good.

My takeaway is that test code should be simple and descriptive, not clever. Next time you’re tempted to abstract, ask yourself: Is this solving a problem, or creating one?

What’s your approach to balancing abstraction and clarity? Hit reply—I’d love to hear your thoughts!


Blogposts, discussions, events

k6.io workshop 🇨🇿

My friend Dan is launching a web performance workshop that you do not want to miss. You’ll learn about modern approaches to performance testing and performance observability. The workshop is held in Czech.

​Read more →​

v0.dev

The hardest part of building my course was manually coding the test apps for the examples. v0.dev has been a godsend; I honestly can't believe how much time I've saved.

​Watch the video →​

99 Cypress tips

I'm still preparing a special course for you! It's taking a bit longer than I expected, but I’m truly excited about it! It will feature bite-sized advice to help you master configuration, API testing, advanced networking, DOM manipulation, performance, and debugging in Cypress. Stay tuned!

​Watch the update →​


Test automation tip

When creating custom functions and helpers for your Cypress tests, you can create custom log groups. These will couple all the commands inside the function into a visual group in the Cypress timeline, which is quite helpful when debugging your tests.

const createCard = (cardName) => {
  cy.then(() => {
    // open a collapsible log group in the Cypress timeline
    const log = Cypress.log({
      name: "createCard",
      message: cardName,
      groupStart: true,
    })

    cy.contains('Add another card')
      .click()

    cy.get('[data-testid="new-card-input"]')
      .type(`${cardName}{enter}`)

    // close the group after all queued commands have run
    cy.then(() => {
      log.endGroup()
    })
  })
}

The resulting code will appear in the Cypress timeline like this:


Meme of the week


Keep learning and growing 💪

Filip Hric

Teaching testers about development, and developers about testing

filip@filiphric.sk, Senec, Slovakia 90301
​Unsubscribe · Preferences​
