Adopting CRO Frameworks to Thrive in High-Competition SERPs

At a high level, most SEO professionals have an intuitive understanding of Conversion Rate Optimization (CRO).

I think this is because CRO and SEO share a lot of common ground: best practices, data-driven prioritization methods, and even UX-based optimization considerations.

CRO is an indispensable arrow in the quiver of our T-shaped marketing belts.

(Pardon my botched analogy there. I wrote it down, realized I like it anyway, and I can’t let that brain teaser go to waste.)

In this article, I’ll be covering the following ground:

  • Why future SEO programs may start to rely more heavily on experimentation frameworks
  • How CRO frameworks help our team thrive in high-competition SERPs
  • CRO principles & frameworks that SEOs can incorporate today

Why future SEO programs may start to rely more heavily on experimentation frameworks

We already appear to be seeing a rise in SEO experimentation. Split testing, for example, seems to be more popular than it was even 3, 4, or 5 years ago.

I’m of the opinion that this trend will continue, and that we SEOs will need to get savvier with our experimentation programs. As with all predictions, I might be wrong, but I have a strong hunch that experimentation will play a much larger role in future SEO programs.

I’m basing this prediction on two known characteristics of Google’s search evolution.

Characteristic #1: ML-based SERP evolutions

In recent years, we’ve witnessed Google conducting staggering numbers of SERP layout experiments, results experiments, and a miles-long list of machine-learning microtests. Looking ahead, it’s easy to assume that we’ll see a lot more experimentation on Google’s part.

That means the same content that performs well in today’s SEO programs could easily fall by the wayside in tomorrow’s search engine results pages.

We should ask ourselves: if Google is running all of these experiments, what do we need to do as SEOs to keep up?

You guessed it… we can dial up our own SEO experimentation processes & test to figure out which pages, content, and techniques will perform in the ever-changing search environment.

Characteristic #2: Rising saturation

Another known factor in search is the rising saturation level of cookie-cutter SEO content.

Case in point: this tweet from my colleague, the wizard of SaaS, John-Henry Scherck.

FWIW, i don’t think the search results are going to look like this for too much longer: pic.twitter.com/Jcp2GZr5Im

— JH Scherck (@JHTScherck) January 17, 2022

How do you compete with dozens of articles that say the same thing as your article?

In reality, there are many ways to help your content stand out. Some teams like to add data, embed multimedia, increase link velocity, and so on.

I’ve also found that simple, low-resource experimentation efforts can deliver the same performance increases, giving our content a competitive advantage in these saturated search environments.

How our team leverages CRO frameworks to thrive in high-competition SERPs

For our team here at Tipalti, SEO experimentation has become the competitive advantage that we like to hang our hat on. If you look at our SEO program as a whole, we do most things the same way you’d see them done in any other SaaS company’s SEO program.

Except that we do a lot of SEO experiments.

Last year, I calculated that we averaged about 3 experiments per week.

As is natural with experimentation, we had a lot of wins, and we also had a lot of losses.

Hey, they can’t all be winners, right?

The important lesson for us was that our wins helped our team remain competitive in high-competition SERPs, and even recover from core algorithm updates.

The main things we test are:

  1. Title tests
  2. Answer box tests
  3. FAQ, How-to, PAA & other featured snippet tests
  4. Full-page revamp tests

This list shouldn’t be surprising to most SEOs. If you’re reading this, chances are you probably run these tests already.

There are two primary differences in the way that we test. One is that we maintain a high testing velocity, with weekly and monthly goals attached. The second is our ongoing refinement of processes, most of which we’ve adapted from carefully studying conversion rate optimization and applying CRO methods to our experimentation program.
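To make that concrete, here’s a minimal sketch of what a single split test might look like under the hood: comparable pages are randomly assigned to a control or variant group, the variant pages get the change (say, a new title pattern), and the difference in click changes between the groups estimates the effect. The URLs and numbers below are invented for illustration; this isn’t our actual tooling.

```python
import random

# Hypothetical illustration: split comparable pages into control and
# variant groups for a title-tag test. URLs and click counts are made
# up; real numbers would come from a Search Console export.
random.seed(42)  # reproducible assignment
pages = [f"/blog/post-{i}" for i in range(1, 21)]
random.shuffle(pages)
control, variant = pages[:10], pages[10:]

clicks_before = {p: random.randint(80, 120) for p in pages}
clicks_after = {p: round(clicks_before[p] * random.uniform(0.9, 1.3)) for p in pages}

def avg_change(group):
    """Average relative change in clicks across a group of pages."""
    return sum((clicks_after[p] - clicks_before[p]) / clicks_before[p]
               for p in group) / len(group)

# The variant's change minus the control's change estimates the test's
# effect, net of seasonality and algorithm shifts that hit both groups.
effect = avg_change(variant) - avg_change(control)
print(f"Estimated uplift from the title change: {effect:+.1%}")
```

The control group is what lets you separate your change from everything else moving the SERPs that week.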

CRO frameworks that SEOs can use (if you aren’t using them already)

Okay, enough about our team. Let’s talk CRO frameworks now.

Please bear in mind that this was a very difficult list to narrow down. So many valuable CRO ideas, so little room to write about them.

These are the top 5 concepts that I think help bring an SEO testing program from good to great.

1. Execution Velocity

To borrow a quote here from CXL’s Peep Laja about execution velocity:

“The success of your testing program is a sum of these two: the number of tests run (volume) and the percentage of tests that provide a win.

Those two add up to indicate execution velocity. Add average sample size and impact per successful experiment, and you get an idea of total business impact.”
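To make that arithmetic concrete, here’s a hypothetical back-of-the-envelope model. Every number below is a placeholder to swap for your own program’s figures.

```python
# Back-of-the-envelope model of execution velocity (all numbers hypothetical).
tests_per_month = 12        # volume: roughly 3 tests per week
win_rate = 0.25             # share of tests that produce a win
avg_uplift_per_win = 0.08   # average relative traffic lift per winning test
avg_monthly_clicks = 5_000  # average organic clicks on a tested page

wins_per_month = tests_per_month * win_rate
added_clicks = wins_per_month * avg_uplift_per_win * avg_monthly_clicks
print(f"~{wins_per_month:.0f} wins/month, ~{added_clicks:.0f} extra clicks/month")
# Raising either volume or win rate raises velocity, and the impact
# per win scales the total business impact from there.
```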

SEOs, if you’re only testing here and there, there are probably wins you’re leaving on the table.

2. Experimentation roadmaps

Experimentation roadmaps are a positive sign of a well-developed SEO program. I’ve come across a few teams that use roadmaps (or something akin to them) for their SEO experiments, but most of the SEOs I engage with don’t have a testing roadmap, relying instead on a handful of ad-hoc testing initiatives throughout the year.

I like to refer to our roadmap as a “tracking log” because we use it to ideate, prioritize, and measure historical tests, so it does more than a roadmap alone.

Without it, our experimentation program would be utterly lost and impossible to keep track of, let alone scale.
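If you’re curious what a minimal tracking log entry might look like, here’s one possible shape. The field names are my own illustration, not the exact columns we use.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExperimentRecord:
    """One row in an SEO experiment tracking log (illustrative fields)."""
    idea: str                  # what we want to test, and why
    hypothesis: str            # the expected outcome, stated up front
    test_type: str             # e.g. "title", "answer box", "full-page revamp"
    priority_score: float      # from PIE/ICE or a custom framework (next section)
    start: date | None = None
    end: date | None = None
    result: str = "pending"    # "win", "loss", "inconclusive", or "pending"
    learning: str = ""         # what the team should remember afterward

backlog = [
    ExperimentRecord(
        idea="Add the current year to comparison-post titles",
        hypothesis="Fresher-looking titles lift CTR on stale SERPs",
        test_type="title",
        priority_score=8.5,
    ),
]
```

A spreadsheet with the same columns works just as well; the point is that ideation, prioritization, and results all live in one place.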

3. Prioritization frameworks

Most expert-led CRO programs use prioritization models to help sort through the idea roadmap and focus their time on the tests most likely to compound the impact of their experiments.

The two models that seem to be most popular in CRO are the PIE framework and the ICE framework.

PIE stands for:

  • Potential – How much improvement can be made on the pages?
  • Importance – How valuable is the traffic to the pages? (amount of traffic, etc.)
  • Ease – How complicated will the test be to implement on the page or template?

While ICE stands for:

  • Impact – What will the impact be if this works?
  • Confidence – How confident am I that this will work?
  • Ease – What is the ease of implementation?

Each of these is a strong way to prioritize, and professionals spin off other prioritization frameworks from them. I have my own custom prioritization framework, and there’s a suuuper robust PXL framework that I learned about from CXL.
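To show how little machinery a scoring pass actually needs, here’s a minimal ICE scorer over a tiny invented backlog. The ideas and ratings are made up for illustration, and PIE is the same arithmetic with different inputs.

```python
# Minimal ICE scoring sketch: rate each idea 1-10 on Impact, Confidence,
# and Ease, average the three, and sort the backlog by score.
ideas = [
    {"name": "Rewrite titles on decaying posts", "impact": 7, "confidence": 8, "ease": 9},
    {"name": "Full revamp of the pricing page",  "impact": 9, "confidence": 5, "ease": 3},
    {"name": "Add FAQ markup to how-to guides",  "impact": 5, "confidence": 7, "ease": 8},
]

def ice_score(idea):
    return (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{ice_score(idea):.1f}  {idea['name']}")
# PIE works the same way, with Potential / Importance / Ease in place
# of Impact / Confidence / Ease.
```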

Prioritizing experiments so that your team focuses on the highest-impact ideas first is one of the most valuable lessons SEO teams can borrow from the CRO community.

4. Iterative testing vs. Innovative testing

When conducting experiments, SEOs commonly focus on iterative testing. These are tests that are typically easy to execute and that make small, incremental improvements to the page (or pages) we want to test.

Iterative tests are things like:

  • Title & meta description tests
  • Featured snippet tests
  • Schema markup tests
  • Etc.

Innovative tests are bigger bets that usually require more effort (the risk) and sometimes can create larger outcomes (the reward).

Innovative test ideas for SEO might include:

  • A site migration project
  • An acquisition
  • Architecting (or re-architecting) topic clusters
  • Revamping a full page (or pages) with unique content &/or design layout
  • Etc.

It’s tempting in SEO to focus only on iterative tests because of how much more effort & risk it takes to deploy innovative tests, but innovative testing is a key part of any well-developed experimentation program.

5. Continuous innovation loops (& reporting)

One more high-level framework that we borrow from CRO is building a continuous innovation loop. At the end of an experiment, most professionals understand the value of reporting the success or failure of their results.

Beyond reporting, however, it’s important to build a continuous innovation loop into your experimentation program.

A continuous innovation loop acts as a feedback mechanism for your team. It goes beyond just reporting to do two things:

1. Consolidates the learnings from all your experiments over time so that the broader team can learn from each one.

2. Builds a body of knowledge that the team can draw from, so that future hypotheses can be formed with experiential knowledge.

Basically, if all you’re doing is reporting on the success or failure of the test, it’s harder for your team to build up a body of knowledge that can increase the success rates of future experimentation ideas.
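One lightweight way to close the loop is to have every completed test append its takeaway to a shared, queryable learnings log, so new hypotheses start from what past tests showed. Here’s a minimal sketch; the file name and fields are my own assumptions, not a prescribed tool.

```python
import json
from pathlib import Path

LOG = Path("experiment_learnings.jsonl")  # hypothetical shared log file

def record_learning(test_type: str, result: str, learning: str) -> None:
    """Append one completed experiment's takeaway to the shared log."""
    entry = {"test_type": test_type, "result": result, "learning": learning}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def past_learnings(test_type: str) -> list[dict]:
    """Pull prior takeaways when forming a new hypothesis of this type."""
    if not LOG.exists():
        return []
    entries = [json.loads(line) for line in LOG.read_text().splitlines()]
    return [e for e in entries if e["test_type"] == test_type]

record_learning("title", "win", "Numbers in titles lifted CTR on listicles")
print(past_learnings("title"))
```

Whether it lives in a JSON file, a wiki, or the tracking log itself matters far less than the habit: report the result, then store the lesson where the next test can find it.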

Conclusion

There are still so many more incredible principles and methodologies that I would have liked to include in this article. The cross-education that we’ve applied from CRO frameworks to our team’s SEO experimentation process has been extremely cool and exciting, which is why I love talking about experiments, running experiments, and getting that rush of excitement whenever an experiment pays off for us.

Some of these ideas are my own subjective opinions about the future of SEO experimentation, so I’d love to hear from you. Feel free to reach out to me on Twitter and let me know if you have anything you’d like to add to the conversation.