Monday, January 28, 2013

Dashboards, Advanced Segments, And Custom Reports For Your Business Needs

We’ve heard you loud and clear that getting started on Google Analytics can be challenging. It’s such a robust tool with a variety of reports, filters, and customizations that for a new user it can be overwhelming to figure out where to look first for the data and insights that will enable you to make better decisions. For more advanced users it can be time consuming to build out different variations of reports and dashboards to get the clearest snapshot of your performance. That is why we’ve created the Google Analytics Solution Gallery.

The Google Analytics Solution Gallery hosts the top Dashboards, Advanced Segments and Custom Reports, which you can quickly and easily import into your own account to see how your website is performing on key metrics. It helps you filter through the noise to see the metrics that matter for your type of business: Ecommerce, Brand, or Content Publishers. If you're not familiar with Dashboards, Advanced Segments and Custom Reports, check out these links to our help center for detailed descriptions of how they work and the insights they can help provide.

Solution examples

Here are a few examples of the solutions that you can download into your account to see how the analysis works with your data.

  • Social sharing report - Content is king, but only if you know what it's up to. Learn what content visitors are sharing from your website and how they're sharing it.

  • Publisher dashboard - Bloggers can use this dashboard to see where readers come from and what they do on the site.

  • Engaged traffic advanced segment - Measure traffic from high-value visitors who view at least three pages AND spend more than three minutes on your site. Why do these people love your site? Find out!

How do I add these to my account?

We’ve designed it so it’s easy to get started. Simply go to the Google Analytics Solution Gallery and pick from the drop-down menu the solutions that will be most helpful for your business: Publisher, Ecommerce, Social, Mobile, Brand, and so on. Hit “Download” for the solution you want to see in your account. If you are not already logged into Google Analytics, we’ll ask you to sign in. Then you’ll be asked whether you want to accept this solution into your account and which Web Profile you want to apply it to. After you select that, the solution will be in your account and your own data will populate the report.

We’re planning on expanding on this list of top solutions throughout the year so be sure to check back and see what we’ve added. A big thank you to Justin Cutroni & Avinash Kaushik for supplying many of the solutions currently available.

Posted by Ian Myszenski, Google Analytics team

Friday, January 25, 2013

Digital Analytics Association Awards Are Back

It’s that time of year again - award season. No, not Hollywood awards, Digital Analytics awards! 

The Digital Analytics Association has announced its list of nominees for the DAA Awards of Excellence. These awards celebrate the outstanding contributions of individuals, agencies, vendors and practitioners to our profession.

This year we’re honored to be nominated for two awards.

Google Tag Manager has been nominated for New Technology of the Year. Launched in October 2012, Google Tag Manager has helped many companies simplify the tag management process.

Google, as an organization, has been nominated in the category Agency/Vendor of the Year.

We’re incredibly humbled by these nominations - thank you. Our goal is to provide all businesses with the ability to improve their performance using data. We’re excited to be part of this community and we look forward to an even more amazing future.

In addition, a few Googlers have been nominated for individual awards:

Eduardo Cereto Carvalho and Krista Seiden have been nominated for Digital Analytics Rising Star.

Our Analytics Advocate, Justin Cutroni, and our Digital Marketing Evangelist, Avinash Kaushik, who travel the world sharing Analytics love, have each been nominated as Most Influential Industry Contributor (individual).

If you’re a DAA member make sure you vote by February 6. Winners will be announced at the 2013 DAA Gala in San Francisco on April 16. Tickets are available now.

Posted by the Google Analytics Team

A leading travel brand finds that traditional conversion tracking significantly undervalues non-brand search

The following post originally appeared on the Inside AdWords Blog.

Understanding the true impact of advertising

Advertisers have a fundamental need to understand the effectiveness of their advertising. Unfortunately, determining the true impact of advertising on consumer behavior is deceptively difficult. This difficulty in measurement is especially applicable to advertising on non-brand (i.e. generic) search terms, where ROI may be driven indirectly over multiple interactions that include downstream brand search activities. Advertising effectiveness is often estimated using standard tracking processes that rely upon ‘Last Click’ attribution. However, ‘Last Click’ based tracking can significantly underestimate the true value of non-brand search advertising. This fact was recently demonstrated by a leading travel brand using a randomized experiment - the most rigorous method of measurement.

Experimental Approach

The advertiser recently conducted an online geo-experiment to measure the effectiveness of its non-brand search advertising on Google AdWords. The study included offline and online conversions. The analysis used a mathematical model to account for seasonality and city-level differences in sales. Cities were randomly assigned to either a test or a control group. The test group received non-brand search advertising during the 12-week test period, while the control group did not receive such advertising during the same period. The benefit of this approach is that it allows statements to be made regarding the causal relationship between non-brand search advertising and the volume of conversions - the real impact of the marketing spend.

Download the full case study here.


The results of the experiment indicate that the overall effectiveness of the non-brand search advertising is 43% greater1 than the estimate generated by the advertiser’s standard online tracking system.

The true impact of the non-brand search advertising is significantly larger than the ‘Last Click’ estimate because it accounts for

  • upper funnel changes in user behavior that are not visible to a ‘Last Click’ tracking system, and

  • the impact of non-brand search on sales from online and offline channels.

This improved understanding of the true value of non-brand search advertising has given the advertiser the opportunity to revise its marketing strategy and make better budgeting decisions.

How can you benefit?

As proven by this study, ‘Last Click’ measurement can significantly understate the true effectiveness of search advertising. Advertisers should look to assess the performance of non-brand terms using additional metrics beyond ‘Last Click’ conversions. For example, advertisers should review the new first click conversions and assist metrics available in AdWords and Google Analytics. Ideally, advertisers will design and carry out experiments of their own to understand how non-brand search works to drive sales.

Read more about AdWords Search Funnels

Read more about Google Analytics Multi-Channel Funnels

-- Anish Acharya, Industry Analyst, Google; Stefan F. Schnabl, Product Manager, Google; Gabriel Hughes, Head of Attribution, Google; and Jon Vaver, Senior Quantitative Analyst, Google contributed to this report.

1 This result has a 95% Bayesian confidence interval of [1.17, 1.66].

Posted by Sara Jablon Moked, Google Analytics Team

Thursday, January 24, 2013

Increasing Your Analytics Productivity With UI Improvements

We’re always working on making Analytics easier for you to use. Since launching the latest version of Google Analytics (v5), we’ve been collecting qualitative and quantitative feedback from our users in order to improve the experience. Below is a summary of the latest updates. Some you may already be using, but all will be available shortly if you’re not seeing them yet. 

Make your dashboards better with new widgets and layout options

Use map, device, and bar chart widgets to create a perfectly tailored dashboard for your audience. Get creative with these and produce, share and export custom dashboards that look exactly how you want with the metrics that matter to you. We have also introduced improvements to customize the layout of your dashboards to better suit individual needs. In addition, dashboards now support advanced segments!

Get to your most frequently used reports quicker

You’ll notice we’ve made the sidebar of Google Analytics even more user-friendly, including quick access to your all-important shortcuts:

If you’re not already creating Shortcuts, read more about them and get started today. We have also enabled shortcuts for real-time reports, which lets you, for example, set up a shortcut for a specific region and see its traffic in real time.

Navigate to recently used reports and profiles quicker with Recent History

Ever browse around Analytics and want to go back to a previous report? Instead of making you dig for it, Recent History takes you straight back.

Improving search functionality

Better search allows you to look across all reports, shortcuts and dashboards at once to find what you need.

Keyboard shortcuts

In case you've never seen them, Google Analytics does have some keyboard shortcuts. Be sure you’re using them to move around faster. Here are a few useful ones:

Search: s or / (open the quick search list)

Account list: Shift + a (open the quick account list)

Set date range: d + t (set the date range to today)

On-screen guide: Shift + ? (view the complete list of shortcuts)

Easier YoY Date Comparison

The new quick-selection option lets you choose the previous year to prefill the date range, making year-over-year analysis faster.

Export to Excel & Google Docs 

Exporting keeps getting better, and now includes native Excel XLSX support and export to Google Docs.

We hope you find these improvements useful and always feel free to let us know how we can make Analytics even more usable for you to get the information you need to take action faster.

Posted by Nikhil Roy, Google Analytics Team

Wednesday, January 23, 2013

Multi-armed Bandit Experiments

This article describes the statistical engine behind Google Analytics Content Experiments. Google Analytics uses a multi-armed bandit approach to managing online experiments. A multi-armed bandit is a type of experiment where:

  • The goal is to find the best or most profitable action

  • The randomization distribution can be updated as the experiment progresses

The name "multi-armed bandit" describes a hypothetical experiment where you face several slot machines ("one-armed bandits") with potentially different expected payouts. You want to find the slot machine with the best payout rate, but you also want to maximize your winnings. The fundamental tension is between "exploiting" arms that have performed well in the past and "exploring" new or seemingly inferior arms in case they might perform even better. There are highly developed mathematical models for managing the bandit problem, which we use in Google Analytics content experiments.

This document starts with some general background on the use of multi-armed bandits in Analytics. Then it presents two examples of simulated experiments run using our multi-armed bandit algorithm. It then addresses some frequently asked questions, and concludes with an appendix describing technical computational and theoretical details.


How bandits work

Twice per day, we take a fresh look at your experiment to see how each of the variations has performed, and we adjust the fraction of traffic that each variation will receive going forward. A variation that appears to be doing well gets more traffic, and a variation that is clearly underperforming gets less. The adjustments we make are based on a statistical formula (see the appendix if you want details) that considers sample size and performance metrics together, so we can be confident that we’re adjusting for real performance differences and not just random chance. As the experiment progresses, we learn more and more about the relative payoffs, and so do a better job in choosing good variations.


Experiments based on multi-armed bandits are typically much more efficient than "classical" A-B experiments based on statistical-hypothesis testing. They’re just as statistically valid, and in many circumstances they can produce answers far more quickly. They’re more efficient because they move traffic towards winning variations gradually, instead of forcing you to wait for a "final answer" at the end of an experiment. They’re faster because samples that would have gone to obviously inferior variations can be assigned to potential winners. The extra data collected on the high-performing variations can help separate the "good" arms from the "best" ones more quickly.

Basically, bandits make experiments more efficient, so you can try more of them. You can also allocate a larger fraction of your traffic to your experiments, because traffic will be automatically steered to better performing pages.


A simple A/B test

Suppose you’ve got a conversion rate of 4% on your site. You experiment with a new version of the site that actually generates conversions 5% of the time. You don’t know the true conversion rates of course, which is why you’re experimenting, but let’s suppose you’d like your experiment to be able to detect a 5% conversion rate as statistically significant with 95% probability. A standard power calculation1 tells you that you need 22,330 observations (11,165 in each arm) to have a 95% chance of detecting a .04 to .05 shift in conversion rates. Suppose you get 100 visits per day to the experiment, so the experiment will take 223 days to complete. In a standard experiment you wait 223 days, run the hypothesis test, and get your answer.
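As a rough check on those figures, the standard normal-approximation formula for a two-sided two-proportion test can be sketched in Python. This is an approximation to the R calculation cited in the footnote, so it lands within a few observations of the 11,165 per arm quoted above:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.95):
    """Approximate n per arm for a two-sided two-proportion test
    (normal approximation, pooled-variance form)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = sample_size_per_arm(0.04, 0.05)
print(n)  # ~11,163 -- within a few visits of power.prop.test's 11,165
```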

Now let’s manage the 100 visits each day through the multi-armed bandit. On the first day about 50 visits are assigned to each arm, and we look at the results. We use Bayes' theorem to compute the probability that the variation is better than the original2. One minus this number is the probability that the original is better. Let’s suppose the original got really lucky on the first day, and it appears to have a 70% chance of being superior. Then we assign it 70% of the traffic on the second day, and the variation gets 30%. At the end of the second day we accumulate all the traffic we’ve seen so far (over both days), and recompute the probability that each arm is best. That gives us the serving weights for day 3. We repeat this process until a set of stopping rules has been satisfied (we’ll say more about stopping rules below).
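The daily reweighting step described above can be sketched with a small Monte Carlo simulation: give each arm a Beta posterior over its conversion rate, draw many samples, and use the fraction of draws in which each arm wins as its serving weight for the next day. (This is an illustrative sketch of the general approach, not the exact Google Analytics formula.)

```python
import random

def serving_weights(successes, trials, draws=10_000, rng=None):
    """Estimate P(each arm is best) from Beta(1+s, 1+f) posteriors."""
    rng = rng or random.Random(0)
    wins = [0] * len(successes)
    for _ in range(draws):
        samples = [rng.betavariate(1 + s, 1 + (n - s))
                   for s, n in zip(successes, trials)]
        wins[samples.index(max(samples))] += 1
    return [w / draws for w in wins]

# Day 1: the original got lucky (4 conversions in 50 visits)
# while the variation converted 2 of 50.
weights = serving_weights([4, 2], [50, 50])
print(weights)  # the original gets the larger share of traffic tomorrow
```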

Figure 1 shows a simulation of what can happen with this setup. In it, you can see the serving weights for the original (the black line) and the variation (the red dotted line), essentially alternating back and forth until the variation eventually crosses the line of 95% confidence. (The two percentages must add to 100%, so when one goes up the other goes down). The experiment finished in 66 days, so it saved you 157 days of testing.

Figure 1. A simulation of the optimal arm probabilities for a simple two-armed experiment. These weights give the fraction of the traffic allocated to each arm on each day.

Of course this is just one example. We re-ran the simulation 500 times to see how well the bandit fares in repeated sampling. The distribution of results is shown in Figure 2. On average the test ended 175 days sooner than the classical test based on the power calculation. The average savings was 97.5 conversions.

Figure 2. The distributions of the amount of time saved and the number of conversions saved vs. a classical experiment planned by a power calculation. Assumes an original with 4% CvR and a variation with 5% CvR.

But what about statistical validity? If we’re using less data, doesn’t that mean we’re increasing the error rate? Not really. Out of the 500 experiments shown above, the bandit found the correct arm in 482 of them. That’s 96.4%, which is about the same error rate as the classical test. There were a few experiments where the bandit actually took longer than the power analysis suggested, but only in about 1% of the cases (5 out of 500).

We also ran the opposite experiment, where the original had a 5% success rate and the variation had 4%. The results were essentially symmetric. Again the bandit found the correct arm 482 times out of 500. The average time saved relative to the classical experiment was 171.8 days, and the average number of conversions saved was 98.7.

Stopping the experiment

By default, we force the bandit to run for at least two weeks. After that, we keep track of two metrics.

The first is the probability that each variation beats the original. If we’re 95% sure that a variation beats the original then Google Analytics declares that a winner has been found. Both the two-week minimum duration and the 95% confidence level can be adjusted by the user.

The second metric that we monitor is the "potential value remaining in the experiment", which is particularly useful when there are multiple arms. At any point in the experiment there is a "champion" arm believed to be the best. If the experiment ended "now", the champion is the arm you would choose. The "value remaining" in an experiment is the amount of increased conversion rate you could get by switching away from the champion. The whole point of experimenting is to search for this value. If you’re 100% sure that the champion is the best arm, then there is no value remaining in the experiment, and thus no point in experimenting. But if you’re only 70% sure that an arm is optimal, then there is a 30% chance that another arm is better, and we can use Bayes’ rule to work out the distribution of how much better it is. (See the appendix for computational details).
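A hedged sketch of this "value remaining" idea: reuse posterior draws, and in each draw record how much the best arm beats the current champion by, relative to the champion. The 95th percentile of that distribution is the potential value remaining. (Again, an illustration of the concept rather than the exact production formula.)

```python
import random

def value_remaining(successes, trials, draws=10_000, rng=None):
    """95th percentile of (best - champion) / champion over posterior draws."""
    rng = rng or random.Random(0)
    # Champion = arm with the best observed conversion rate so far.
    champ = max(range(len(trials)), key=lambda i: successes[i] / trials[i])
    gains = []
    for _ in range(draws):
        samples = [rng.betavariate(1 + s, 1 + (n - s))
                   for s, n in zip(successes, trials)]
        gains.append((max(samples) - samples[champ]) / samples[champ])
    gains.sort()
    return gains[int(0.95 * draws)]

# Two arms after 1,000 visits each; Google Analytics stops the experiment
# when this quantity falls below 1% of the champion's conversion rate.
print(value_remaining([40, 52], [1000, 1000]))
```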

Google Analytics ends the experiment when there’s at least a 95% probability that the value remaining in the experiment is less than 1% of the champion’s conversion rate. That’s a 1% improvement, not a one percentage point improvement. So if the best arm has a conversion rate of 4%, then we end the experiment if the value remaining in the experiment is less than .04 percentage points of CvR.

Ending an experiment based on the potential value remaining is nice because it handles ties well. For example, in an experiment with many arms, it can happen that two or more arms perform about the same, so it does not matter which is chosen. You wouldn’t want to run the experiment until you found the optimal arm (because there are two optimal arms). You just want to run the experiment until you’re sure that switching arms won’t help you very much.

More complex experiments

The multi-armed bandit’s edge over classical experiments increases as the experiments get more complicated. You probably have more than one idea for how to improve your web page, so you probably have more than one variation that you’d like to test. Let’s assume you have 5 variations plus the original. You’re going to do a calculation where you compare the original to the largest variation, so we need to do some sort of adjustment to account for multiple comparisons. The Bonferroni correction is an easy (if somewhat conservative) adjustment, which can be implemented by dividing the significance level of the hypothesis test by the number of arms. Thus we do the standard power calculation with a significance level of .05 / (6 - 1), and find that we need 15,307 observations in each arm of the experiment. With 6 arms that’s a total of 91,842 observations. At 100 visits per day the experiment would have to run for 919 days (over two and a half years). In real life it usually wouldn’t make sense to run an experiment for that long, but we can still do the thought experiment as a simulation.
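Under the same normal-approximation sketch used for the two-armed case, the Bonferroni adjustment simply shrinks the significance level to .05/5 = .01, and the result lands close to the 15,307-per-arm figure from power.prop.test:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.95):
    """Approximate n per arm for a two-sided two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Bonferroni: divide alpha by the number of comparisons (6 arms - 1 = 5).
n = sample_size_per_arm(0.04, 0.05, alpha=0.05 / 5)
print(n, 6 * n)  # ~15,303 per arm; the article's exact figures are 15,307 and 91,842
```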

Now let’s run the 6-arm experiment through the bandit simulator. Again, we will assume an original arm with a 4% conversion rate, and an optimal arm with a 5% conversion rate. The other 4 arms include one suboptimal arm that beats the original with conversion rate of 4.5%, and three inferior arms with rates of 3%, 2%, and 3.5%. Figure 3 shows the distribution of results. The average experiment duration is 88 days (vs. 919 days for the classical experiment), and the average number of saved conversions is 1,173. There is a long tail to the distribution of experiment durations (they don’t always end quickly), but even in the worst cases, running the experiment as a bandit saved over 800 conversions relative to the classical experiment.

Figure 3. Savings from a six-armed experiment, relative to a Bonferroni adjusted power calculation for a classical experiment. The left panel shows the number of days required to end the experiment, with the vertical line showing the time required by the classical power calculation. The right panel shows the number of conversions that were saved by the bandit.

The cost savings are partially attributable to ending the experiment more quickly, and partly attributable to the experiment being less wasteful while it is running. Figure 4 shows the history of the serving weights for all the arms in the first of our 500 simulation runs. There is some early confusion as the bandit sorts out which arms perform well and which do not, but the very poorly performing arms are heavily downweighted very quickly. In this case, the original arm has a "lucky run" to begin the experiment, so it survives longer than some other competing arms. But after about 50 days, things have settled down into a two-horse race between the original and the ultimate winner. Once the other arms are effectively eliminated, the original and the ultimate winner split the 100 observations per day between them. Notice how the bandit is allocating observations efficiently from an economic standpoint (they’re flowing to the arms most likely to give a good return), as well as from a statistical standpoint (they’re flowing to the arms that we most want to learn about).

Figure 4. History of the serving weights for one of the 6-armed experiments.

Figure 5 shows the daily cost of running the multi-armed bandit relative to an "oracle" strategy of always playing arm 2, the optimal arm. (Of course this is unfair because in real life we don’t know which arm is optimal, but it is a useful baseline.) On average, each observation allocated to the original costs us .01 of a conversion, because the conversion rate for the original is .01 less than arm 2. Likewise, each observation allocated to arm 5 (for example) costs us .03 conversions because its conversion rate is .03 less than arm 2. If we multiply the number of observations assigned to each arm by the arm’s cost, and then sum across arms, we get the cost of running the experiment for that day. In the classical experiment, each arm is allocated 100 / 6 visits per day (on average, depending on how partial observations are allocated). It works out that the classical experiment costs us 1.333 conversions each day it is run. The red line in Figure 5 shows the cost to run the bandit each day. As time moves on, the experiment becomes less wasteful and less wasteful as inferior arms are given less weight.

Figure 5. Cost per day of running the bandit experiment. The constant cost per day of running the classical experiment is shown by the horizontal dashed line.

1The R function power.prop.test performed all the power calculations in this article.

2See the appendix if you really want the details of the calculation. You can skip them if you don’t.

Posted by Steven L. Scott, PhD, Sr. Economic Analyst, Google

Tuesday, January 22, 2013

Google Tag Manager: Technical Implementation Deep Dive Webinar

Just three months ago we launched Google Tag Manager to make it easier for marketers (or anyone in the organization) to add and update website tags, such as conversion tracking, site analytics, remarketing, and more. The tool provides an easy-to-use interface with templates for tags from Google and from other vendors, as well as customizable options for all your tagging needs. This minimizes site-coding requirements and simplifies the often error-prone tagging process.

In November, we held an introductory webinar (watch the recording here, plus read Q&A), and next week we’re holding a second webinar going beyond the basics and diving into the technical details and best practices for how to implement Google Tag Manager. This webinar will be hosted by Rob Murray, our Engineering Manager, and Dean Glasenberg, Sales Lead.

Webinar: Google Tag Manager Technical Implementation

Date: Tuesday, January 29, 2013

Time: 10 am PST / 1pm EST / 6pm GMT

Register here:

Recommended Audience: IT or webmaster team members

During the webinar we’ll go through a step-by-step process for implementation, and we’ll cover some more advanced topics (e.g. deploying more complex tags). We’ll introduce the role of a Data Layer and use it in conjunction with Events to show how you can set up a site to gather detailed usage metrics - for example, to help you understand why users are dropping off at a specific page. We’ll also show you how common browser Developer Tools, as well as the Google Tag Manager Debug mode, can be used to help verify that your tags are working correctly (and fix them if they’re not).

Hope to see you on Tuesday!

Wednesday, January 16, 2013

Kapitall Uses Content Experiments To Drive A 44% Conversion Increase

Video game entrepreneur Gaspard de Dreuzy and financial technologist Serge Kreiker had a thought: why not use the gaming experience to break the traditional online investing mold? Their idea took hold and Wall Street firm Kapitall, Inc. was born in 2008. Based in New York, Kapitall now has 15 full-time employees providing a unique online investing platform and brokerage.

Kapitall has used Google Analytics Certified Partner Empirical Path since 2011 for analytics services on its JavaScript website. The complex implementation required custom JavaScript to allow for Google Analytics tracking within the trading interface as well as on landing pages. Empirical Path implemented Google Analytics tracking directly within the Kapitall interface so that decision makers could understand pivotal actions, such as how often brokerage accounts were being funded or where in the sign-up process potential investors were dropping out.

Challenge: Refining the landing page for maximum response 

Kapitall wanted to do more than simply capture data, however; they also wanted to test the content of their landing page and then optimize it by targeting visitors with messages and options that would lead to conversions. Why was creating a truly effective landing page so critical? Kapitall’s gaming-style interface enlists traders to sign up for brokerage accounts and use the site to trade stocks or create practice portfolios. Every incremental sign-up is key to the company’s success.

Approach: Split testing to identify a winning landing page 

Kapitall understood that there was little point in making one-off ad hoc responses to analytics insights, or doing before-and-after comparisons that would inevitably be confounded by differences in the before and after audiences. Empirical Path recommended taking their analytics efforts to the next level with a closed-loop solution to eliminate complications and identify the best page version. 

The team proposed automated experiments to compare different versions of the landing page to see which performed best among a random sample of visitors. To accomplish this, Empirical Path first set Google Analytics’ Event Tracking and Custom Variables on brokerage accounts to distinguish current customers from traders. The team then designed Content Experiments in Google Analytics to understand which version of the landing page drove the greatest number of sign-ups.

Results: A new landing page with proven success

The outcomes from the test were illuminating, clearly identifying that the Angry Birds landing page was most effective. The winning version showed a dramatic increase in sign-ups of 44 percent and a 98 percent probability that this version would continue to beat the original. “Kapitall was impressed by how quickly Content Experiments was able to zero in on the Angry Birds version,” says Jim Snyder, principal at Empirical Path Consulting. “Having the ability  to quickly surface the best performing version directly resulted in attracting more investors at a faster rate, and that was a huge value-add to Kapitall.” Thanks to the split testing approach, Kapitall possesses valuable insights into the perfect blend of messaging and creative elements to optimize the page. With the strongest version now implemented, Kapitall is able to realize the true power of its online real estate. 

View the entire case study as a PDF here.

Posted by the Google Analytics Team

Friday, January 11, 2013

10 Google Analytics Resolutions for 2013

The following is a guest post from Michael Loban, CMO of InfoTrust, a Google Analytics Certified Partner and Google Analytics Premium Reseller based in Cincinnati, OH.

New Year’s is the ideal time for making resolutions (that we keep until the second week of January). To avoid this cliché, I decided to actually wait until the second week of January to put together my resolutions/ideas/tactics for taking Google Analytics in 2013 to the next level.

1. Address your data phobia. Maybe it is a little extreme to say that many digital marketers have a fear of web analytics data, but it is safe to assume that data is what often causes migraines. Staring at pie charts, graphs and percentages without knowing what you are looking for is the wrong way to start. The remedy for data phobia is simple – ignore the data you do not need to make a marketing decision. And always remember to align your Google Analytics configuration with your business goals.

2. Assign a monetary value to your goals even if you do not sell anything on your website. Each submitted form, played video, or downloaded PDF is worth something. Otherwise, why did you put it on the site? It is not enough to say that you need to decrease your bounce rate by 5%. Equate a 5% decrease in your bounce rate to the amount of money that you can make when those visitors submit completed “contact us” forms or other micro conversions. For example, work with the email marketing team to determine the value of each new email subscriber. If you get more people to join your email list, then you will be able to sell more products via email marketing.

3. Not all marketing strategies are created equal. In order to turn a prospect into a customer you might have to engage in remarketing, email marketing and social media marketing. Use attribution modeling inside Google Analytics to examine how each marketing tactic contributes to a sale/conversion. Here is a blog post from the Google Analytics team on how to get started with attribution modeling. In 2012, Attribution Modeling was only available to Google Analytics Premium customers; in 2013 it will be available across all of Google Analytics.

4. For some reason, social media measurement is something companies are still unsure about. It is difficult and complex, but you have to start somewhere. Why not start with Google Analytics Social Reports? This will help you track visitors that social media channels brought to your website, measure the value of those channels by tracking conversions and examine how your content is being shared across social networks. It feels good to say that last month, 10 people from Facebook who came to your website became your customers. Learn more about Social Reports

5. Tools are great, but great analysts are awesome. The true value of analytics is fully unlocked when you get to work on your data and turn it into something actionable. Make sure that you have a team (even if it is a team of 1) that knows how to analyze data to help you reach your marketing goals. 

6. If you begin to analyze your data, and realize that you do not have enough context to make a decision, get more data. Now, you can do a cost data import and Google Analytics will display how non-Google paid search campaigns perform and compare to your other marketing campaigns. Here is a how-to article from Google Analytics on Measuring Performance of Paid Campaigns.  

7. While we are on the subject of trying new things, Remarketing is something that I have promised myself to do more of. Remarketing is now available right inside Google Analytics. Here is a PDF document and a Webinar about Remarketing with Google Analytics. 

8. Mobile optimization is the most exciting digital opportunity for marketers in the coming year, according to a new Econsultancy report. Since this is the case, mobile analytics will become more important than ever. Segmenting and understanding your mobile visitors will help you create a winning mobile experience that will lead to conversions and sales. In October, Google Analytics announced a public beta launch for mobile app analytics.

9. Begin measuring your analytics ROI. Time that you spend on collecting, reporting and analyzing data is not free – there is an opportunity cost. In order to prove the value of analytics inside your organization, begin measuring your Return on Analytics. When you accurately collect data, and properly analyze it, you are able to make accurate marketing decisions. Measure the impact of your analytics.

10. ACTION! This is a common phrase on any movie set. This should become a common phrase for everyone who uses Google Analytics. Turn your data into actionable marketing reports and smart dashboards that will help with the analysis. When you see true data analysis, you will want to scream ACTION! This means that the data and data analysis are so clear and crisp that you know exactly what needs to be done to reach your marketing objectives. Don’t settle for anything less. DATA, ANALYSIS, ACTION!

Happy New Year!

Thursday, January 10, 2013

Must-Have Analytics Customizations for Any Business

The following is a guest post by Mike Pantoliano, a web marketing consultant at Distilled in Seattle.

Out of the box, Google Analytics is really powerful. It's amazing how much awesome data we have at our fingertips by just implementing a couple of lines of code across our entire site. Having worked in an agency setting for a number of years now, I'm fortunate to have overseen hundreds of websites' Google Analytics implementations. And while no two businesses' analytics needs are the same, I've found there are a few must-have customizations that can be applied across almost any GA implementation.

While the following tips will help you get more out of Google Analytics, there's no replacing a solid understanding of how Google Analytics operates by default. I consider this post a successor to Daniel Waisberg's 5 Ways To Ensure Google Analytics Is Running Perfectly and Simply Business's Small Business Guide to Google Analytics. Once you've got a good hold on how things work, give some of the following a shot in your accounts.

Build a Branded RegEx

Regular expressions can be scary, but in many cases this will only have to be done once. Once you have one built it can be applied to advanced segments or multi-channel funnel channel groupings to get a really enlightening look at how visitors coming from non-branded keywords are interacting with your site. If you're actively trying to grow your traffic from search, the biggest gains can be had from visitors that do not yet know your brand.

Even if you're not a RegEx pro, your Google Analytics keyword report will allow you to tinker until you get it just right. Once you have some of the basics down, you can begin to build your branded RegEx:

  1. Head to your keyword reports and click "advanced".

  2. Begin to build your RegEx. Simply typing in your brand name is a good start.

  3. Watch out for brand-name-less keywords that are technically still branded. For instance, Distilled's conference brand would still appear in our keyword reports with our original RegEx.

  4. Make adjustments to your RegEx as necessary, using pipes ("|") to indicate an OR, and other RegEx operators like "?", "*", and parentheses.

  5. Apply the RegEx to your advanced segments and compare behavior and conversion data, or create custom channels in your multi-channel funnel reports.
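The branded/non-branded split can be sanity-checked before you paste the RegEx into GA. Here is a minimal sketch using a fictional brand, "acme", whose conference name "widgetcon" counts as branded even though it lacks the brand name:

```python
import re

# Hypothetical branded-keyword RegEx. The pipe (|) means OR, so this
# matches any keyword containing "acme" or "widgetcon".
branded = re.compile(r"acme|widgetcon")

keywords = [
    "acme shoes",         # branded
    "buy running shoes",  # non-branded
    "widgetcon 2013",     # branded: no brand name, but still branded
]

for kw in keywords:
    label = "branded" if branded.search(kw) else "non-branded"
    print(f"{kw!r:22} -> {label}")
```

Testing against a sample of your real keyword report this way catches misses (like the conference name above) before the RegEx goes into a segment or channel grouping.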

And speaking of MCF channel groupings…

Create Custom MCF Channel Groupings

The world of digital measurement is increasingly becoming aware of the fact that the customer journey is far too complicated to work solely off of last touch attribution. Many marketing channels like social, display advertising, and organic search (especially non-branded) inherently act as "exposers". Looking only at last touch attribution isn't fair to these channels that are bringing potential customers in the door. This is the problem that the multi-channel funnels reports and the (soon to be released for everyone) attribution modeling tool are built to solve.

But those reports are only as good as the input channel segmentation. By default, Google Analytics offers a solid basic channel grouping from which to work. Right off the bat I like to create a copy of the default channels, and customize for the site I'm analyzing.

Now I can create custom channels. This will vary greatly between websites, but the following are some channel ideas that might be useful:

  1. The aforementioned non-branded and branded channels

  2. Separate out partner sites or special relationships from the default "referral" channel into their own group.

  3. An affiliate channel

  4. A separate social network channel. Perhaps separate channels for just the networks you're active on (Pinterest, Google+, Twitter, etc.), or maybe pulling out any networks that are used in the "closer" role (say, if you're mainly using Twitter to post coupon codes).

  5. A channel for a subset of your visitors that were exposed to a specific portion of your site before anything else. Check out Josh Braaten's How to Prove the Value of Content Marketing with Multi-Channel Funnels for a great example of this.

Once you've built your custom channels, take a look at the assisted conversion reports. Watch for channels with high assisted/last interaction conversion ratios. Those are your exposers! They've been acting mostly in the assist role and might deserve a bit more credit than they've been getting with standard last touch attribution.

Build a Custom Dashboard or Two (or 10!)

Dashboards are a great way to create an at-a-glance snapshot of what matters. It's here that we can make sure everything is operating normally, all in one view. Any general marketing dashboard worth its weight in pixels will include a 10,000-foot look at acquisition, behavior, and outcomes. How is traffic? How are time on site, bounce rate, and the like? How are conversions and revenue?

Larger organizations may have stakeholders in various parts of the business who would love a 10,000-foot view of the metrics that matter to them. Got a team that runs the blog? Build a dashboard that offers a view of dimensions like top landing pages and entrance keywords, as well as metrics like bounce rate and comments (set up as a goal). Need a view for just the C-level folks? Build one with revenue, overall site traffic, and time on site.

The previously mentioned Simply Business "Small Business Guide to Google Analytics" includes an example dashboard that can be copied into your account and modified as needed.

Set Up Custom Intelligence Data Alerts

Even with daily checkups on your site's health, major problems can sometimes go unnoticed until it's too late. Enter custom data alerts. This handy feature lets you define triggers that will alert you via email or text message should a given threshold be passed. It's really easy to set up alerts for site-wide drops in traffic, conversions, revenue, etc.

And we can take it a step further and apply our triggers to a subset of our site's traffic, for example:

  • drop in traffic from search

  • increase in bounce rate from direct

  • drop in conversions from

  • drop in impressions from ppc

And even more advanced:

One client I've worked with was sending events whenever an error was triggered in their checkout process. With custom data alerts, it's then totally possible to get an alert whenever there's an increase in checkout error events.

Both Luna Metrics and Justin Cutroni have written some great posts on data alerts if you'd like more ideas.

Wrapping Up

These are just some of the most common enhancements I make to GA's out-of-the-box setup. Even after the above, there's so much more that can be tweaked to make for the perfect analysis reports; the possibilities are endless. I didn't even touch on filters, custom reports, and advanced segments! And now, with even more features like cost analysis, dimension widening, and Universal Analytics being rolled out, the possibilities will be even more endless-er.

What are your go-to Google Analytics customizations?