Insights

Archive for the 'Metrics' Category

Barriers To Data Driven Web Optimization

Wednesday, July 29th, 2009
Photo: StarrGazr

One of the greatest advantages of online marketing is the marketer’s ability to quickly adjust and respond to the data obtained directly from interactions with the market.  However, each organization is at a different stage of online measurement sophistication, and each faces its own barriers to becoming more effective at optimizing its web marketing.

Avinash Kaushik recently posted a good blog entry on Barriers To An Effective Web Measurement Strategy.  In the blog, he recommends some first steps in overcoming the following 11 barriers to an effective online measurement strategy that were cited in Econsultancy’s Online Measurement and Strategy Report.

1.  Lack of budget/resources (45%)
2.  Lack of strategy (31%)
3.  Siloed organization (29%)
4.  Lack of understanding (25%)
5.  Too much data (18%)
6.  Lack of senior management buy-in (18%)
7.  Difficulty reconciling data (17%)
8.  IT blockages (17%)
9.  Lack of trust in analytics (16%)
10. Finding staff (12%)
11. Poor technology (9%)

While I did find Avinash’s insights on each of the barriers valuable, I think the list of barriers is flawed in that it confuses symptoms with the actual barriers.  Imagine, for example, if a baseball manager explained that his team was losing because he was getting “too much data” from his statistician (point 5 on the list).  If he did, he would not be manager for long.  It’s the manager’s job to determine which stats are important and will make a difference in his game strategy.  While I’m sure there are many executives and analysts who feel they have to sift through too much data, the true barrier is the lack of a good strategy that focuses the organization on a few core metrics that get to the heart of what it is trying to accomplish.

Given our experience helping many organizations with their data-driven optimization, I think the 11-point barrier list can be distilled into the following 4 core barriers.

1.  Strategy
It all starts with a good strategy.  It’s necessary for obtaining management buy-in and for ultimately attaining the ROI on any investments made in optimization initiatives.  It is critical that the strategy identify a small set of critical metrics that are meaningful to management and drive any ROI business case.

It’s also important to recognize that the web optimization strategy needs to evolve as the web optimization capabilities of the organization evolve.  The strategy is about pinpointing the most important constraints that you need to overcome and focusing your time and resources on making that happen.  Often this may require small steps first to gain credibility and management buy-in.

2.  Management Buy-in
Not surprisingly, once you obtain management buy-in, many of the symptoms begin to disappear.  Certainly lack of budget/resources and IT blockages begin to dissipate if you have the support of senior management.  While this is likely the most important barrier to overcome, it can often be the most difficult.  It’s also important to recognize that until management is willing to make some level of investment in analytics and optimization, they have not really bought in.

This often creates a chicken-and-egg situation where the online marketer asks, “How do I credibly demonstrate ROI on the investment prior to making the investment?”   Early on, this is where the hard work and innovative thinking must occur to implement examples of how data-driven initiatives can provide the ROI necessary to develop a strong business case.  The good news is that with the use of inexpensive, often free, tools in combination with some good analysis, it is not too difficult to gain credibility by demonstrating substantive ROI.  This is often the key objective of organizations that are relatively early in their web optimization evolution.

3.  Infrastructure for Testing, Analysis, and Change
The infrastructure I am addressing here includes the website or ecommerce management technology as well as the implementation of the analytics tools.  We have come across many situations where online marketers have properly identified potentially significant improvements that would really move the needle on their core metrics, but the amount of work involved to test or implement the changes was believed to be too significant to justify the investment.

This can be a daunting challenge in that the investment in the current infrastructure and the staff to support it may have been significant and modifications may be difficult and expensive.  In many cases this requires working actively with someone who is technically proficient enough to implement some relatively small tests across several areas that when viewed together make a strong case for an investment in a new infrastructure or new component to that infrastructure.  In other cases, this is more of an uphill battle and the key becomes just recognizing the constraints and focusing on those areas where improvements can be made.

4.  Skills and Time for Testing, Analysis, and Change
In my opinion, this is the most overlooked and most underestimated barrier that must be overcome.   Many organizations that have implemented analytics either largely ignore the analytics reports, don’t have time to analyze them, or generate reports that provide little in the way of insights that stimulate changes that improve the critical metrics and provide a high ROI.  For an organization to effectively implement data-driven optimization, it must have time from the personnel who know what data to focus on, what tests to implement, and what actions to take as a result of the analysis.  This, by the way, does not mean that an analyst should be doing all of the analysis.  Quite the opposite: they need to know how to get the appropriate people throughout the organization involved in the analysis (the “why” behind the “what” happened).   As the benefits of this time become apparent, it becomes easier to build this testing and analysis into normal work processes.

Most organizations also tend to underinvest in the time required from the personnel who are necessary to effectively carry out tests.  For example, designing multiple versions of a page for A/B or multivariate testing can pay significant dividends.  Organizations need to plan accordingly for copywriting and design resources in order to make these tests successful.

By focusing on and addressing these four key areas, the other issues on the Econsultancy list will likely be addressed in the process.  What do you think of the 11 barriers identified in the Econsultancy study?

Improving Email Marketing Results With Pre-Testing

Monday, April 20th, 2009

As Internet marketers, we love our testing, and one of the greatest benefits of email marketing over direct marketing is the immediacy of testing results.  By ‘pre-testing’, we can use that immediacy to improve the performance of our email campaigns.

A typical A/B test usually involves developing two different versions of an email (e.g. different subject lines, including personalization, etc.), splitting the list of subscribers into two randomly selected groups, and sending a different version to each group.  The test is often run multiple times, results are analyzed, and the information is used to inform future campaigns. 

Most marketers start with what I’ll call ‘macro tests’, which involve larger issues such as testing different layouts, the best time of day and day of week to send, etc.  All of these types of macro tests are very important and establish best practices and guidelines for an email program.

However, there are situations in which elements specific to a campaign need to be tested – I’ll refer to those as ‘micro tests’.  For example, maybe the creative director and product manager disagree on which photo should be used in the email as the hero shot, or there are questions about the arrangement of words in the subject line (i.e. which are most important to place toward the front).  You could just A/B test the two approaches, sending each version to half of the list.  However, if one version significantly outperforms the other, you will have lost opportunity by sending the worse-performing version to 50% of your list. 

Let’s look at the results (similar to one of our client’s recent campaigns) of an email that was A/B tested with 200,000 subscribers and in which version A outperformed version B:

Typical A/B Test Scenario

The good news is that we did 20% better than if we had sent version B to the entire list. However, the bad news is that we performed 20% worse than if we had sent version A to the entire list.  Of course, we didn’t know which version would be the best prior to the send.  Pre-testing allows us to reduce the risk of sending a worse-performing email to a large percentage of our list.

A pre-test involves deploying the initial A/B test to a smaller, but statistically significant, percentage of subscribers first and then sending the ‘winning’ version to the remainder of the list.   For example, using the same number of subscribers and response rates as in the example above, a pre-test sent to 20% of the list would generate the following results:

Pre-Testing Scenario

In this example, pre-testing improved results by 16% over straight A/B testing.  The greater the performance difference between the two versions, the more benefit (and risk reduction) pre-testing provides.
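The arithmetic behind the two scenarios can be sketched in a few lines.  The 3% and 2% response rates below are illustrative assumptions (the actual campaign figures weren’t published), chosen so the numbers land close to the example above:

```python
# Compare total responses from a straight 50/50 A/B split vs. a 20% pre-test
# followed by a full-list rollout of the winner. Response rates are assumed.
LIST_SIZE = 200_000
RATE_A, RATE_B = 0.03, 0.02          # assumed response rates; version A wins

# Straight A/B test: half the list receives each version.
ab_responses = (LIST_SIZE // 2) * RATE_A + (LIST_SIZE // 2) * RATE_B

# Pre-test: 20% of the list is split A/B, then the winner (A) goes to the rest.
pretest_size = int(LIST_SIZE * 0.20)
pretest_responses = (pretest_size // 2) * RATE_A + (pretest_size // 2) * RATE_B
rollout_responses = (LIST_SIZE - pretest_size) * RATE_A
total_pretest = pretest_responses + rollout_responses

lift = total_pretest / ab_responses - 1
print(f"A/B: {ab_responses:.0f}  pre-test: {total_pretest:.0f}  lift: {lift:.0%}")
```

With these assumed rates, the straight A/B split yields 5,000 responses while the pre-test approach yields 5,800 – the 16% improvement described above.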

A few caveats about pre-testing:

  • Pre-tests are not suitable for all situations.  For example, there are some tests (like testing a new enewsletter layout) that you are going to want to run multiple times involving as many subscribers in the sample as possible.  Also, you need to allow at least 24 hours between the pre-test and the send to the remainder of the list so that you have enough data to reach a conclusion; if the email is time sensitive, you may not have time for the pre-test.
  • Even though you want the pre-test groups to be small, the groups need to be large enough to be statistically significant (for more on sample sizes and statistical significance, read Wayde Nelson’s response in a MarketingProfs knowledge exchange answer).
  • To help validate your approach to pre-testing, run a few tests where you conduct a pre-test with your two versions and then deploy an A/B test to the remaining subscribers.  If you don’t see the same results between your pre-test and full A/B tests, you need to pre-test with a larger sample size or check whether something else is impacting results (e.g. day of send).
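For a rough sense of how large a “statistically significant” pre-test group needs to be, a standard two-proportion sample-size approximation can be sketched as follows.  The response rates, confidence level, and power used here are illustrative assumptions, not prescriptions:

```python
# Approximate subscribers needed per pre-test group to detect a difference
# between two response rates (normal approximation, two-proportion test).
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Rough per-group sample size for detecting p1 vs. p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p1 - p2) ** 2)
    return int(n) + 1

# e.g. detecting a 3% vs. 2% response rate difference
n = sample_size_per_group(0.03, 0.02)
```

Under these assumptions, each pre-test group needs several thousand subscribers – which is why pre-testing is practical for large lists but not for small ones.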

Free Website Traffic Estimation Services – How Accurate Are They?

Tuesday, March 24th, 2009

There are several reasons why an online marketer wants to know how much traffic a site other than their own is receiving:

  • You may want to compare how much traffic a competitor’s site is getting.
  • In reviewing the landscape of sites your target customers are visiting, you may want to ballpark the volume of traffic to each in order to get a sense of their relative prominence.
  • When reviewing the sources of traffic to your website, you may identify a new source that has had good conversion success, and you may want to determine whether it has a significant audience and whether that audience is growing. If so, you may want to pursue a more in-depth relationship with that source.
  • It may also help to provide focus in trying to establish high quality links to your site for SEO purposes.

There are now several free traffic estimation services (Amit Agarwal did a nice job of outlining these options in his article, Find Out How Much Traffic a Website is Getting). However, one obvious question is how accurate is the data?

In order to help answer that question, we evaluated the quality of these estimation services against the web analytics data collected through Omniture, Google, etc. for a subset of our clients. We thought that the quality of those estimates might vary significantly depending upon how heavy the volume of traffic was to the site being estimated. As such, we grouped the results based on site activity: heavy, moderate, and light. When available, we also evaluated how accurately the services reflected the trend of the site traffic as well as the volume of visits and visitors for the following traffic estimation services: Alexa, Compete, Google Ad Planner, Google Trends, QuantCast, and StatBrain.

The following table shows how the free sources compared to the data collected by our clients’ analytics programs (e.g. Compete’s estimation of traffic for moderate sites was lower than reported by the analytics tools used by those sites).

Free Traffic Estimation Comparisons

I should note that this was not a formal study, was based upon a relatively small sample set, and other factors may impact the results (e.g. relative volume of paid search marketing may influence the accuracy of some sites versus others).

A few observations/conclusions:

  1. Not surprisingly, the quality of the estimates is considerably better for higher volume websites.
  2. At this point, there does not seem to be a reliable source for viewing site trends for lower volume websites and the trends for moderate traffic websites are not much better.
  3. The growing volume of incremental demographic information being provided by some of these services is encouraging. Most of the demographic information is fairly rudimentary, but it is definitely more than what has been available in the past.
  4. You should definitely review Dataopedia.com. In addition to pulling in data from Alexa, Compete, and QuantCast, this service displays other non-traffic related data such as Google Page Rank and Twitter posts related to the site.
  5. Given that none of the free services provided accurate estimates in every scenario, you may be able to use our findings to make adjustments for your specific situation.

Has anyone else conducted a similar comparison? If so, what type of results did you find?

Getting Tutored In Analytics

Wednesday, May 23rd, 2007

I was talking to someone who tutors school children, and he commented that it can be difficult because some parents mistakenly believe that they will see significant progress from sending their child to a tutor for just one hour a week and doing nothing else.

The point he tries to make is that the real success will only come when the parents and child commit to spending the necessary time during the rest of the week studying and practicing and refining skills.

The conversation reminded me of the issues companies have with their web analytics programs.  Too often, organizations install analytics, and after the initial excitement, the applications go the way of Wheezy, the squeaky penguin from Toy Story 2 (he gathers dust at the bottom of a toy box).

As with the tutoring, analytics applications by themselves aren’t a panacea.  Tools like Google Analytics, Omniture, and Hitbox provide useful data (just as a tutor provides valuable help), but companies will only realize the true benefits when they dedicate the time and resources to go beyond page views and visitor counts to unearth issues such as products that are getting a significant number of views but few conversions, significant fallout during the check-out process, etc.  Whether it is learning algebra or trying to improve your site conversion, dedicating the right amount of time and effort will pay dividends.


The Less Talked About, But Most Significant, Hidden Cost of Google Analytics

Thursday, April 19th, 2007

Since storming onto the scene in 2005, Google Analytics has turned a lot of heads and garnered serious consideration from some large companies that would previously have considered only the top-tier vendor solutions.  Given that it is a feature-rich solution and it’s free, the value proposition seems difficult to dispute.

Yet, despite it being free, there have been a lot of ‘buyer beware’ articles written describing some of the hidden costs and risks associated with using Google Analytics.  Among the costs and risks cited, for example, are the lack of customer support and training, data ownership issues, and the fact that, despite its rich set of features, it does not provide the depth of analysis that a high-end analytics solution does.  And unfortunately, you have to invest a lot of time with the tool to really identify its limitations.

So while these hidden costs are certainly relevant and should definitely be taken into consideration when evaluating analytics vendor options, I do believe Google Analytics is a compelling tool that can provide enormous value in evaluating the effectiveness of online marketing initiatives, understanding web site visitor behavior, and optimizing the performance of a web site.

But here’s the rub: the decision to use Google Analytics frequently conveys an implicit devaluation of the analysis of the data, which is what actually provides the real value to the organization.

The use of Google Analytics is often a reflection of the diminished importance an organization places on obtaining the actionable insights that can provide extremely high returns.  Of course, there is no doubt that organizations are frequently guilty of not investing the time and energy necessary to take advantage of a tool even after making a fairly significant investment in it.

However, there is an exponentially higher risk when no investment has been made. No one in the organization is saying, “we invested all of this money in the tool and we are not taking advantage of it,” because there was no investment.  In fact, to most it would seem almost ridiculous to suggest that a company should hire someone full-time to analyze data derived from Google Analytics.

Yet more and more firms are adding full-time analysts to their web optimization teams because they recognize the dividends it pays. Unfortunately, many companies are now asking, “Can’t we get as much from Google Analytics for free instead of paying for our current tool?” instead of asking, “Shouldn’t we be doing more and better analysis to gain actionable insights from our current tool?”

I believe this is by far the most significant hidden cost of Google Analytics and it has nothing to do with the quality of the tool.
