Wednesday, February 24, 2016

Does Ad Viewability Always Equal Views?

Posted by rMaynes1

[Estimated read time: 6 minutes]

There’s a lot of talk about ad viewability at present, and with big players such as Google announcing in 2015 that it would only charge advertisers for 100% viewable impressions, it’s easy to see how it’s become such a hot topic in the digital marketing world.

But what exactly does it mean to be “viewable?” Does this mean people will look at your ad? We recently conducted a research study that set out to answer these questions.

Conducting the eye-tracking study

The study was conducted in two parts: an online survey of 1,400 participants for quantitative data, and an eye-tracking study designed to observe the actual behaviors of searchers online for more qualitative data.

The goal was to measure the type of ads people noticed and engaged with the most, determining whether behavior changed depending on the intent behind the search task (research or purchase) and the relevancy of the ad. We also wanted to determine how viewable online display ads truly are and what other factors influenced whether or not people viewed them.

Participants performed tasks in Mediative’s lab while being recorded using the Tobii T60 desktop eye tracker. The key metrics measured were:

  • Time to First Fixation – How long it took the searcher to fixate on an ad. A fixation is when we hold our eyes still and actually take in visual information. A typical fixation lasts from 100–300 milliseconds, and we generally fixate 3–4 times every second (Source: tobii.com).
  • Total Visit Duration – How long the searcher spent in total fixating on the ad.
  • Visit Count – How many times the searcher came back to look at the ad.
  • Percentage Fixated – The percentage of all participants who looked at the ad.
  • Percentage Clicked – The percentage of all participants who clicked on the ad.
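To make the metric definitions above concrete, here's a rough sketch of how they fall out of raw gaze data. The fixation-log format below is a hypothetical simplification, not Tobii's actual export format:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: int     # time since page load when the fixation began
    duration_ms: int  # how long the gaze held still (typically 100-300 ms)
    on_ad: bool       # whether the fixation landed inside the ad's bounding box

def ad_metrics(fixations):
    """Compute the per-participant ad metrics from one gaze log."""
    ad_fix = [f for f in fixations if f.on_ad]
    if not ad_fix:
        return {"fixated": False, "visits": 0,
                "time_to_first_ms": None, "total_visit_ms": 0}
    # Visit count: each transition from off-ad to on-ad is a new visit
    visits = 0
    prev_on_ad = False
    for f in fixations:
        if f.on_ad and not prev_on_ad:
            visits += 1
        prev_on_ad = f.on_ad
    return {
        "fixated": True,                                   # Percentage Fixated counts these
        "time_to_first_ms": ad_fix[0].start_ms,            # Time to First Fixation
        "total_visit_ms": sum(f.duration_ms for f in ad_fix),  # Total Visit Duration
        "visits": visits,                                  # Visit Count
    }
```

Averaging `time_to_first_ms` and `total_visit_ms` over participants, and dividing the number of `fixated` participants by the total, yields the page-level figures reported in the study.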

A participant conducting a calibration test on the T60 Tobii Eye-tracker in Mediative’s research lab

“The findings in this study are a powerful reminder to create engaging advertising programs that responsibly leverage first- and second-party data. Marketers are still better off complementing user experiences than disrupting them.”

– Sonia Carreno, President, Interactive Advertising Bureau of Canada

What is viewability?

“Viewability,” as defined by the Media Rating Council, means an ad has at least 50% of its pixels in view for a minimum of one second. Essentially, an ad is viewable if there’s an opportunity for it to be viewed. 76% of ads in Mediative’s study were served in a “viewable” position as defined above.
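That definition is easy to express as a rule: an ad counts as viewable once at least 50% of its pixels stay inside the viewport for a continuous second. Here's a minimal sketch; the rectangle format and the 100 ms sampling interval are assumptions for illustration, not any vendor's API:

```python
def visible_fraction(ad, viewport):
    """Fraction of the ad's pixels inside the viewport.
    Rectangles are (left, top, right, bottom) in page coordinates."""
    w = max(0, min(ad[2], viewport[2]) - max(ad[0], viewport[0]))
    h = max(0, min(ad[3], viewport[3]) - max(ad[1], viewport[1]))
    area = (ad[2] - ad[0]) * (ad[3] - ad[1])
    return (w * h) / area if area else 0.0

def is_viewable(samples, interval_ms=100):
    """MRC display standard: >=50% of pixels in view for >=1 continuous second.
    `samples` are visible_fraction readings taken every interval_ms."""
    run = 0
    for frac in samples:
        run = run + interval_ms if frac >= 0.5 else 0  # reset on any dip below 50%
        if run >= 1000:
            return True
    return False
```

For example, a 300x250 big box whose top edge sits 900 px down the page is only 40% visible in a 1,000 px-tall viewport scrolled to the top, so it wouldn't count as viewable until the user scrolls.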

An opportunity for an ad to be viewed, however, does not mean that it will be viewed. 16.6% of the ads that were served throughout the study were viewed. 50% more ads were viewed above the fold compared to below the fold, and ads above the fold were viewed for 87% longer.

Increasing viewability to increase views

Although a click is a clear indicator that an ad was viewed, click-through rate alone doesn’t give the whole picture. It gives no measurement of how many ads were seen but not clicked. Therefore, CTR cannot be the sole measurement of a display ad campaign’s success. Ads can be seen, noticed, and influence a purchase without ever generating a click.

Buying a viewable ad impression is only the first step, however. Here are some areas for you to consider improving in order to maximize the chances of your display ads being seen.

1. Ad relevancy

The research showed that ads relevant to the searcher’s current task are 80% more likely to be noticed than ads relevant to something the searcher had looked for in the past. Additionally, ads relevant to the search query are viewed for 67% longer than irrelevant ads. Relevant ads were visited on average 2.59 times per visitor per page, versus only 1.6 times for irrelevant ads. Relevant ads received 5.7x more clicks.

Below are heat maps for web pages containing two big box ads. The page on the left features an ad that was irrelevant to the search task. The page on the right features a relevant ad. The areas in red had the most views, followed by orange, yellow, and green.

Your action item:

You can advertise on sites that are relevant to the audience you are trying to reach (e.g. a car ad on a car information site). However, adding data into campaigns and purchasing impressions in real-time will increase the relevancy of your ads, no matter the site the user is visiting. With demographic data and/or intent- and interest-based data, specific audiences of people can be targeted, rather than specific sites. This is more likely to result in a higher return on ad investments, as impressions land on the most likely buyers.

2. Ad type

In a survey, we asked people which ads they pay attention to the most on a webpage. The responses show that people believe they pay attention to the leaderboard ads at the top of the page the most. Our eye-tracking study confirmed that, yes, this ad type was noticed the fastest, and by the most people.

However, it was the ads to the side of the page (skyscraper ads) and within the page content (big box ads) that were viewed for the longest and received the most clicks. A November 2014 report by Google had similar findings, reporting that the most viewable ads on a page are those that are positioned just above the fold, not at the top of the page.

Your action item:

Don’t rule out ads that might traditionally have poor click performance. This doesn’t mean the ad isn’t seen!

3. Ad design

Poor display ad design is often to blame for a poor click-through rate; if people don’t notice the ad, they won’t click it. When it comes to online display ads, images, videos, and animations are more important than what’s actually being said with the text.

Your action item:

Invest in good ad creative. Keep ads simple, yet eye-catching. Ensure the ad features a clear call-to-action to indicate why the searcher should click on your ad so that they don’t lose interest.

4. Multiple ad exposures

Multiple relevant ads on the same page were viewed, on average, by 2.7x more participants and captured 2.8x more clicks than the individual relevant ads.

Engagement with an ad increases the more times it is shown across different pages. The average number of clicks increased by 162% between one exposure and two, and by 39% between two exposures and three.

Your action item:

Consider advertising placements such as home page takeovers or run-of-site/run-of-network advertising, where multiple exposures of the same ad will be served. Retargeted ads will also likely result in multiple exposures to the same ad. When retargeted ads were presented to a searcher, they were viewed, on average, 65% faster than ads that were not retargeted.

In summary

Ultimately, what we’ve discovered through this research is that buying a “viewable” ad impression does not guarantee that it’s going to be seen and/or clicked on, and that there are many ways you can maximize the chances of your ad being viewed. It’s critical to understand, however, that online ad success cannot be determined by views and clicks to an ad alone. The entire customer’s purchase journey must be considered, and how ads can influence behavior at different stages. Display advertising is just one part of an integrated digital campaign for most advertisers.

For more tips on how to maximize display ad viewability, download the full Mediative paper for free.

Let us know your thoughts and questions in the comments!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Tuesday, February 23, 2016

How to Find Your Brand's Disruptive Opportunity

Posted by ronell-smith

[Estimated read time: 9 minutes]

In 2009, I wrote a magazine story about a Japanese lure company selling high-end fishing lures in the US for $20 to $50 each. With anglers buying the lures in droves, it was only a matter of time before competitors followed suit, creating carbon copies of the best sellers, which led to a market frenzy the industry had never seen.

Months later, I interviewed the owner of the lure company everyone was copying. His comments were eye-opening and accurate.

“I don’t get it,” he said, referring to the competition. “I sell one million lures for $25. They sell 10 to 15 million lures for $5 to $10. I should be copying them.”

Basic math highlights the truth of his words: $25 million < $50 million–$150 million.

A good idea doesn’t mean good for your brand

What looked like a good idea (selling more expensive products to eager buyers) acted as a blinder to what would become an amazing opportunity: finding a way to sell more low-cost lures.

For those of us involved in content marketing, we’re used to scenarios like these. Right?

The competition does something cool or interesting or that gets links, likes, or conversions, and we lose our minds in an attempt to copy them, even if it makes zero sense for us to do so.

Buzzfeed, anyone?

Sure, list posts can be and have been effective, but other than throwing some traffic your way, for most businesses the long-term value simply isn’t there.

But we live in a monkey-see, monkey-do world, so whatever the competition does, we attempt to do it better.

Never mind the fact that (a) we don’t really know how successful they are, and (b) we don’t know how successful attempting the same tactics will prove for us.

Most important, because our resources are limited, we don’t often see how choosing to chase others’ ideas means we typically cannot adequately focus on the opportunities right in front of us.

Opportunities > ideas

At MozCon 2015, the word “disruption” kept coming up on stage. In fact, a central theme of Rand’s opening talk was how a number of prominent brands were willing to disrupt themselves:

  • Facebook: The brand’s Little Red Book, given to all employees, contains many useful, guiding tidbits, among them, “If we don’t create the thing that kills Facebook, someone else will.”
  • Microsoft: After years of openly expressing contempt for Linux, the brand now embraces working with the open-source outsider.

And as someone who was fortunate enough to stumble onto disruption (correctly called disruptive innovation) after college, hearing Rand talk about this theory made me very proud.

Problem is, what these businesses he described are doing is not really disruption.

In a strict business sense, the examples he shared are known as a pivot, where businesses re-imagine (or refashion) themselves and/or their assets in an entirely different light, as a way to grow, ward off competitors, solve big problems, or reach a new audience.

What is disruption?

Disruption is something far more significant, especially for brands looking to set themselves apart in competitive markets.

Coined by current Harvard Business School professor Clayton Christensen in his 1997 book “The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail,” disruptive innovation refers to the transformation of a product or a service in such a way that it becomes more affordable and more accessible to a wider audience.

These products start at the bottom, catering to a market that cannot afford the more expensive (and more popular) option. But, as in the case of the lure companies, by focusing on the bottom of the market, there is less competition and greater numbers who can afford the product.

One of the most prominent and most vivid examples of disruptive innovation is how smartphones disrupted the laptop market, which in the 1980s disrupted the desktop market, which itself disrupted the mainframe computer.

Disruptors enter the market at the bottom, where people or industries are being underserved, which is a result of the major players in a vertical choosing to go upstream and over-serve the market, often through added features and benefits people cannot use and don’t need.

But while they continue to cater to the high end of the market, disruptors slide in and gobble up market share at the bottom, before moving upstream to challenge their biggest competitors as the former’s products or services improve in quality, grow in popularity, and suffice as a viable option for even those with more discerning taste.

Why should content marketers care?

For those of you wondering what any of this has to do with content marketing, I say “plenty.” Think about the biggest challenges currently weighing down content marketing, beginning with brands…

  • Laboring to create engaging content
  • Failing to understand their audience and their needs
  • Lacking clarity on which metrics to use in determining the success of their marketing efforts

I’m convinced this occurs because brands are too focused on better serving audiences they neither own nor enjoy the full support of, instead of looking for the next opportunity on the horizon. Or they’re too focused on ideas, instead of opportunities.

Finding your brand’s disruptive opportunity means you’re not competing in the same sandbox as everyone else, which means your chances of dominating the category are much, much higher.

Brands making it work

When Facebook announced 2G Tuesdays, whereby it asks some of its employees to use the company’s apps over a slower 2G connection, it was billed as an opportunity for the company to see the challenges people in the developing world face when using its products.

Maybe.

That’s probably only part of the story. It’s just as likely that the company, which has nearly one billion active users, is looking at what additional services it can offer people in developing countries. It’s a good bet that, to garner interest from the other 6.4 billion people on earth, the service they offer won’t look anything like the Facebook we now know and (mostly) love.

Or what about Wavestorm, a company whose surfboards sell for $99.99, exclusively at discount warehouse Costco? The company entered the bottom of the market, where surfboards regularly cost $300 to $1,000 or more. Even the brand’s owner is not shy in saying that he realizes the folks who can only pay $99 today will likely be willing to spend far more in the future.

Is your brand ready to find its disruptive opportunity?

Seize your brand’s disruptive opportunity

The beginning is always the most painful part, and this exercise will be no different. Begin by pulling the marketing team together for a brainstorming session.

Then throw two ideas on the table:

  1. What are the verticals where a large portion of the audience is underserved?
  2. What are we uniquely qualified to offer at least one of these markets that the competition would have a hard time beating us on?

Here’s the kicker: You cannot limit yourself to any specific vertical.

To many of you, the idea will sound crazy at first. That is, until you re-read the questions and see that what you’re really being asked to do is think about where you should be looking for growth: expanded opportunities you might not otherwise have considered.

When I talk to brands, what I frequently hear is that they have maxed out in a market, lack the skilled staff to compete in their vertical, or are no longer able to connect with the audience in a meaningful way.

As tastes have changed, these brands have not been able to keep up.

So instead of playing catch-up, my suggestion is to slowly but surely start charting a new path, one where the territory is fertile (i.e., lots of opportunities) and the competition is not entrenched.

Don’t let fear get in your way.

Maybe you’re an agency sick of losing clients. You could take options off the table, offering only those services that you’ve determined, during the discovery process, will benefit the client. Any prospect who wants to cherry-pick services would have to look elsewhere.

How is this disruptive? The vast majority of agencies offer a smorgasbord of services, many of which they do poorly, while many others specialize in areas where they have deep expertise. The sweet spot is often in the middle, where you identify needs but only take on the most glaring of those needs, or very specific needs, which could move the business forward. (The management consulting field is currently being disrupted in similar fashion.)

Let’s take a look at a couple of examples of disruption at work.

Disruption in action

When I first encountered strength coaches Dean Somerset and Tony Gentilcore, they were both making a name for themselves as bloggers and trainers in Canada and Boston, respectively. Fast forward seven years, and they are now two of the most well-known, most-sought-after young experts in the field.

While most trainers go after the largest piece of the pie — fat loss clients — they’ve focused on helping folks to move better (i.e., mobility) rather than just look better, which means clients can enjoy their newfound size and weight. Also, they spend a considerable amount of time traveling the US, Canada and Europe, teaching other strength coaches how to be better at their jobs.

One of my favorite examples of a small brand that’s taken up the challenge to disrupt a sector is Boston-based Wistia, the video-hosting company that makes it easier for businesses to add their videos to the web.

The brand was founded in 2006, which is significant because video-hosting juggernaut YouTube had launched just a year earlier, in 2005.

You might ask yourself, “What were [Wistia] thinking?” One word: Opportunity.

Where others saw a dominant player owning a category, they saw a dominant player opening a category so wide that others had room to thrive.

(Not shown to scale, of course)

“We had an opportunity to go deeper on one segment of this market and create specialized features that YouTube would never build as a broad-based platform,” says Wistia co-founder and CEO Chris Savage.

So while YouTube focuses on being everything to everyone, Wistia has singled out a lucrative, largely ignored piece of the pie they can own and dominate.

Next steps

Let me be emphatic: I have no expectation that businesses will read this post, then dramatically reshape their products, product lines, or services overnight. The point of this article is to make it clear that opportunities are all around, and the more open we are to these opportunities, the more we’ll increase our chances of continued success and limit the number of missed opportunities.

In the end, the lure companies chasing the Japanese brands realized their error too late: The category is now dominated by low-cost alternatives that cost a fraction of the price of the originals.

If only one of the copycats had looked more closely at the numbers, they could have seen the opportunity ahead.


Friday, February 19, 2016

Four Ads on Top: The Wait Is Over

Posted by Dr-Pete

For the past couple of months, Google has been testing SERPs with 4 ads at the top of the page (previously, the top ad block had 1-3 ads), leading to a ton of speculation in the PPC community. Across the MozCast data set, 4 ads accounted for only about 1% of SERPs with top ads (which matches testing protocol, historically). Then, as of yesterday, this happened:

Over the past 2 weeks, we’ve seen a gradual increase, but on the morning of February 18, the percentage of top ads blocks displaying 4 ads jumped to 18.9% (it’s 19.3% as of this morning). Of the 5,986 page-1 SERPs in our tracking data that displayed top ads this morning, here’s how the ad count currently breaks down:

As you can see, 4-ad blocks have overtaken 2-ad blocks and now account for almost one-fifth of all top ad blocks. Keep in mind that this situation is highly dynamic and will continue to change over time. At the 19% level, though, it’s unlikely that this is still in testing.
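The breakdown is just a tally of top-ad-block sizes across SERPs. With illustrative numbers (not the actual MozCast data, which covered 5,986 page-1 SERPs with top ads), it can be sketched as:

```python
from collections import Counter

# Hypothetical per-SERP top-ad counts; each entry is the number of ads
# in one SERP's top ad block (1-4 since the change).
serp_top_ads = [4, 3, 2, 4, 3, 1, 4, 3, 2, 3]

counts = Counter(serp_top_ads)
total = sum(counts.values())
# Share of top ad blocks showing each ad count
breakdown = {n: counts[n] / total for n in sorted(counts)}
```

With these toy numbers, 4-ad blocks make up 30% of top ad blocks; the real data set put the figure at roughly 19% on the morning described.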

Sample SERPs & Keywords

The 4-ad blocks look the same as other, recent top ad blocks, with the exception of the fourth listing. Here’s one for “used cars,” localized to the Chicago area:

Here’s another example, from an equally competitive search, “laptops”:

As you can see, the ads continue to carry rich features, including sitelinks and location extensions. Other examples of high-volume searches that showed 4 top ads in this morning’s data include:

  • “royal caribbean”
  • “car insurance”
  • “smartphone”
  • “netbook”
  • “medicare”
  • “job search”
  • “crm”
  • “global warming”
  • “cruises”
  • “bridesmaid dresses”

Please note that our data set tends toward commercial queries, so it’s likely that our percentages of occurrence are higher than the total population of searches.

Shift in Right-column Ads

Along with this change, we’ve seen another shift – right-hand column ads seem to be moving to other positions. This is a 30-day graph for the occurrence of right-hand ads and bottom ads in our data set:

The same day that the 4-ad blocks jumped, there was a substantial drop in right-column ad blocks and a corresponding increase in bottom ad blocks. Rumors are flying that AdWords reps are confirming this change to some clients, but confirmation is still in progress as of this writing.

Where is Google Headed?

We can only speculate at this point, but there are a couple of changes that have been coming for a while. First, Google has made a public and measurable move toward mobile-first design. Since mobile doesn’t support the right-hand column, Google may be trying to standardize the advertising ecosystem across devices.

Second, many new right-hand elements have popped up in the last couple of years, including Knowledge Panels and paid shopping blocks (PLAs). These entities push right-hand column ads down, sometimes even below the fold. At the same time, Knowledge Panels have begun to integrate with niche advertising in verticals including hotels, movies, music, and even some consumer electronics and other products.

This is a volatile situation and the numbers are likely to change over the coming days and weeks. I’ll try to update this post with any major changes.


Should SEOs Only Care About DIRECT Ranking Signals in Google? – Whiteboard Friday

Posted by randfish

Can a new friend you connect with at a conference be as strong of a ranking signal as a quality backlink? Can it be stronger? The power of indirect ranking signals is something that can often be overlooked or brushed aside in favor of what we know as hard truth from Google, but doing so is a mistake. In today’s Whiteboard Friday, Rand talks about the importance of broadening your perspective and tactics when it comes to considering both direct and indirect ranking signals in your SEO.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about direct ranking signals versus indirect ranking signals. I see people ask questions like this all the time. Should I only care about the direct ranking signals? Do I even really care, as an SEO, if something indirectly impacts my rankings in Google, because I can’t really influence that, can I? Or I can’t be confident that Google is going to make those changes. The answer, from my perspective, is “Well, hang on a second. Let’s walk through this together.”

Direct ranking signals

Direct ranking signals are pretty obvious. These are things like links. You earn a bunch of new links. They point to your page. Your rankings go up. Now my page for bookcases ranked a little higher because all these people just linked over to me. Great, that’s very nice.

Direct ranking signal, my page takes eight seconds to load. Now I improve a bunch of things about it and it only takes three seconds to load. Maybe I move up. Maybe I move up a lot less. I put this from 3 to 2 and this one from 32 to 31. Page load speed, a very small ranking factor, but as Google says a direct one. All right, great. Page load speed, that’s improved.

Direct ranking signal, keyword and language usage. I go from having my page about media storage furniture, and then I realize no one looks for media storage furniture. They look for bookcases, because in fact the thing that I’m storing here, the media is books. Boom, I’m moving up for bookcases again.

These are direct ranking signals. They’re very obvious. We know that they are in Google’s ranking algorithm. Google is public about a lot of them. They talk about them. We can test them and observe them. They are consistent. They move the needle. Great.

Indirect ranking influencers

What about stuff like this? I go to a conference. I meet people, this friendly person with a hat. Friendly person with a hat goes home and they write an article. Friendly person with a hat’s article contains a link to my bookcase website. Well, Google would never say that going to conferences gives you higher rankings. That’s not a ranking signal. Even building relationships with friendly people who have hats on, also not a ranking signal. Did it move the needle? Yeah, it probably did, because it ended up indirectly influencing a direct ranking signal.

Maybe Gillian Anderson — she probably has better things to do what with “The X-Files” being back on the air, very exciting — Gillian Anderson: “Rand’s bookcases are my favorite thing in the apartment.” Wow, look. She sent thousands of people searching for Rand’s bookcases. Hmm, what happens then? Well, people pick it up and write about it. Lots of people searching for it. Maybe Google’s entity associations and topic modeling algorithm starts to associate Rand’s bookcases as being an entity and associates the word “Rand’s” with the word “bookcases.” Maybe now I rank higher. Is it suddenly the case then that tweets equal higher rankings? Google would certainly tell you tweets don’t impact rankings. Tweets are not a ranking factor. What’s going on? It’s indirect.

What about I go to my page and I decide, “Hey, you know what I think would be really cool is if I had a feature where you take a photo of your bookshelf and I will tell you all the books that you’ve got on it and then I’ll even show them on my bookshelf on the website so that you can see how your books look on Rand’s bookcases.” You upload a photo. Bam, you can see all your books on there. Super sweet feature. Gets me some news. Gets me a bunch of shares. Adds time to my time on site. Improves my conversion rate. Also, weirdly, influences maybe a bunch of direct ranking factors that lead to higher rankings. Is it the case that photo upload features mean higher rankings? Again, Google would never tell you that. It’s not consistent. It’s not like every time I add a photo upload feature to a website it’s suddenly going to rank higher.

The problems

1. What Google says

What’s going on here? Well, indirect ranking features, indirect ranking influencers are powerful. It doesn’t matter that they’re indirect. They can have powerful impacts on your rankings. I think for SEOs this is really hard, because Google’s representatives will often say things like, “We don’t use that in the rankings. That signal, that is not a ranking signal for us,” which shouldn’t shut down the conversation, but it really does in our industry. A lot of times we hear that Google says social signals are not ranking signals, tweets are not a ranking signal, or time on site is not a ranking signal, so why should SEOs try and influence that?

2. What clients, teams, and managers will say

That brings us to the second problem. Clients, teams, managers, what do they say? They say, “That’s not SEO. That’s not your job, SEO person.” They’ll go back and they’ll cite Google saying, “Hey, this doesn’t influence rankings.” Well, guess what? Both of these are really problematic because they may be technically accurate, but they don’t capture the big picture, and because of that you miss out.

3. It takes time

The other one I hear is “indirect influences take time.” This is absolutely the case. You get a bunch of links. They’re probably going to be counted ASAP as soon as Google finds them. You add this new feature here, it’s going to take a while for all of these other things to propagate over to the ranking signals that are actually going to impact your position. That’s tough. They only have the desired impact when (a) they get counted by the direct signals, and (b) when they actually work to influence the direct signals. So indirect signals are tough in all these ways.

My advice

Focus on what leads to improvements

My advice is to take a broader perspective. Stop focusing exclusively on “this directly impacts SEO and therefore is my job,” and “this doesn’t directly influence SEO and therefore is not my job.” Say holistically I know that lots of things impact searcher satisfaction:

  • User experience,
  • Amplification and amplification likelihood,
  • Engagement,
  • Branding through memory and association,
  • Relationships with people,
  • Brand coverage, and
  • Saturation.

All of these things will indirectly positively impact your rankings, but not just your rankings. They’ll impact positively your conversion rate. They’ll impact positively your user experience. They’ll impact positively your bottom line. That’s the one that you really care about. SEO is just a path to the bottom line to sales and brand building and amplification and the things that you are actually trying to grow.

If you focus on these and you’re aware of what is direct versus indirect and how the indirect things impact the direct things, I think you can craft a very smart holistic SEO strategy. If you throw out all the indirect stuff, because it’s not direct, you are killing yourself. You’re shooting yourself in the foot. Your competitors, frankly, the smart ones are the ones who are going to concentrate on both of these. Sometimes indirect stuff can be more powerful in the short term and the long term than direct stuff. That’s just how it goes.

Fight and work influencers for the future

I’d urge you to fight for the ability to have influence on these indirect signals, especially when they also have positive impacts on other channels. Don’t let rank influence be a short-term measurement only. I think one of the big problems is that folks look at their rankings and they say, “Okay, we did this. It moved up the next week. That clearly had an impact.” No. Look, a lot of the work that we do in SEO has rank impact for months and years to come. We can’t just measure things right away. If it has a positive impact on other signals you care about on the bottom line, on all of these types of factors, then it’s going to influence rankings positively as well.

All right, everyone, look forward to your comments, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Thursday, February 18, 2016

Success Metrics in a World Without Twitter Share Counts

Posted by EricaMcGillivray

On November 20, 2015, Twitter took away share counts on their buttons and from their accessible free metrics. Site owners lost an easy signal of popularity of their posts. Those of us in the web metrics business scrambled to either remove, change, or find alternatives for the data to serve to our customers. And all those share count buttons, on sites across the Web, started looking a tad ugly:

Where's my shares?Yep, this is a screenshot from our own site.

Why did Twitter take away this data?

When asked directly, Twitter’s statement about the removal of tweet counts has consistently been:

“The Tweet counts alone did not accurately reflect the impact on Twitter of conversation about the content. They are often more misleading to customers than helpful.”

On the whole, I agree with Twitter that tweet counts are not a holistic measurement of actual audience engagement. They aren’t the end-all-be-all to showing your brand’s success on the channel or for the content you’re promoting. Instead, they are part of the puzzle — a piece of engagement.

However, if Twitter were really concerned about false success reports, they would’ve long ago taken away follower counts, the ultimate social media vanity metric. Or taken strong measures to block automated accounts and follower buying. Not taking action against shallow metrics, while “protecting” users from share counts, makes their statement ring hollow.

OMG, did Twitter put out an alternative?

About a year ago, Twitter acquired Gnip, an enterprise metrics solution. Gnip mostly looks to combine social data and integrate it into a brand’s customer reputation management software, making for some pretty powerful intelligence about customers and community members. But since it’s focused on an enterprise audience, it’s priced out of the reach of most brands. Plus, the fact that it’s served via API means brands must have the knowledge and development skills/talent in order to really customize the data.

Since the share count shutdown, Gnip released a beta Engagement API and has promised an upcoming Audience API. This API seems to carry all the data you’d need to put those share counts back together. However, an important note:

“Currently only three metrics are available from the totals endpoint: Favorites, Replies, and Retweets. We are working to make Impressions and Engagements available.”

For those of you running to your favorite tools — Gnip’s TOS currently forbids the reselling of their data, which essentially blocks integration into tools, although some companies like Buzzsumo have paid and gotten permission to use the data in their software. Meanwhile, Apple quietly killed Topsy, removing another alternative.

Feel social media’s dark side, Twitter

Killing share counts hasn’t been without its damage to Twitter as a brand. In his post about brands that lost and won in Google search, Dr. Pete Meyers notes that Twitter dropped from #6 to #15. That has to hurt their traffic.

Twitter lost as a major brand on Google in 2015

However, Twitter also made a deal with Google in order to show tweets directly in Google searches, which means Twitter’s brand may not be as damaged as it appears.

Star Wars tweet stream in Google results

Perhaps the biggest ding to Twitter is in their actual activity and sharing articles on their platform. Shareaholic reports sharing on Twitter is down 11% since the change was implemented.

Share of voice chart on Twitter from Shareaholic

It’s hard to sell Twitter as a viable place to invest social media time, energy, and money when there’s no easy proof in the pudding. You might have to dig further into your strategy and activities for the answers.

Take back your Twitter metrics!

The bad news: Almost none of these metrics actually replicate or replace the share count metric. Most of them cover only what you tweet, and they don’t capture the other places your content’s getting shared.

The good news: Some of these are probably better metrics and better goals.

Traffic to your site

Traffic may be an oldie, but it’s a goodie. You should probably already be tracking this. And please don’t just use Google Analytics’ default settings, as they’re probably slightly inaccurate.

Google Analytics traffic from Social and Twitter

Some defaults for one of my blogs, since I’m lazy.

Instead, make sure you tag what you’re sharing on social media with campaign (UTM) parameters, and you’ll be better able to attribute your hard, hard work to the proper channels. Then you can really figure out whether Twitter is the channel for your brand’s content (or whether you’re using it right).
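As a concrete illustration, here’s a minimal Python sketch of campaign tagging. The function name and example values are my own, but `utm_source`, `utm_medium`, and `utm_campaign` are Google Analytics’ standard campaign parameters:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tag_url(url, source, medium, campaign):
    """Append Google Analytics UTM parameters to a URL,
    preserving any query string already present."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,      # e.g. "twitter"
        "utm_medium": medium,      # e.g. "social"
        "utm_campaign": campaign,  # your own campaign label
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical example URL and campaign name
tagged = tag_url("https://example.com/blog/post", "twitter", "social", "feb-launch")
print(tagged)
```

Share the tagged URL on Twitter, and the visit shows up in Google Analytics attributed to that source, medium, and campaign instead of lumped into generic referral traffic.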

Use shortening services and their counters

Alternatively, especially if you’re sharing content not on your own site, you can use share and click counting from various URL shortening services. But this will only count toward individual links you share.

Bit.ly's analytics around share counts for individual links

Twitter’s own free analytics

No, you won’t find the share count here, either. Twitter’s backend analytics are pretty much limited to specific stats on individual tweets and some audience demographics. It can be especially challenging to work with if you have multiple accounts and a team. There is, however, the ability to download reports for further Excel wizardry.

Tweet impressions and Twitter's other engagement metrics

Twitter’s engagement metric is “the number of engagements (clicks, retweets, replies, follows, and likes) divided by the total number of impressions.” While this calculation seems like a good idea, it’s not my favorite, because the rate is hard to maintain as you grow your audience. A larger audience always means proportionally more lurkers than people engaging with your content, so it takes a lot of massaging of metric reporting to explain why you grew your audience and those numbers went down, or why a company with 100 followers does way better on Twitter’s engagement metric.
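To see why the metric penalizes growth, here’s a quick sketch of Twitter’s stated formula with hypothetical numbers: a larger account earns four times the engagements but, with twenty times the impressions, reports a far lower rate.

```python
def engagement_rate(clicks, retweets, replies, follows, likes, impressions):
    """Twitter's stated engagement rate: total engagements
    divided by total impressions."""
    engagements = clicks + retweets + replies + follows + likes
    return engagements / impressions

# Small account: 50 engagements on 1,000 impressions
small = engagement_rate(20, 10, 5, 5, 10, 1_000)
# Larger account: 4x the engagements, but 20x the impressions (more lurkers)
large = engagement_rate(80, 40, 20, 20, 40, 20_000)
```

The small account reports 5%, the large one 1%, even though the large account is driving four times the absolute engagement.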

TrueSocialMetric’s engagement numbers

Now these are engagement metrics that you can scale, grow, and compare. Instead of looking at impressions, TrueSocialMetrics gives conversation, amplification, and applause rates for your social networks. This digs into the type of engagement you’re having. For example, your conversation rate for Twitter is calculated by taking how many comments you got and dividing it by how many times you tweeted.

TrueSocialMetric's engagement numbers
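A rough sketch of this style of per-tweet calculation. The conversation rate matches the definition above; the amplification and applause divisions follow the same pattern and are my assumption, not TrueSocialMetrics’ published formulas:

```python
def social_rates(tweets_sent, comments, retweets, likes):
    """Per-tweet engagement rates in the TrueSocialMetrics style:
    each raw count divided by the number of tweets you sent."""
    return {
        "conversation": comments / tweets_sent,   # replies per tweet
        "amplification": retweets / tweets_sent,  # retweets per tweet
        "applause": likes / tweets_sent,          # likes per tweet
    }

# Hypothetical month of activity
rates = social_rates(tweets_sent=40, comments=10, retweets=30, likes=60)
```

Because each rate is normalized per tweet sent, the numbers stay comparable as your posting volume and audience grow, which is exactly what makes them easier to scale and benchmark than raw counts.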

At Moz, we use a combination of TrueSocialMetrics and traffic to report on the success of our social media efforts to our executives. We may use other metrics internally for testing or for other needs, depending on that specific project.

Twitcount

Shortly after the removal of share counts was announced, Twitcount popped up. It works by installing their share counters on your site, but the numbers only start counting from the day you add the code and button; there’s no historical data. There are limitations, since they use Twitter’s API, and these limitations may cause data inaccuracies. I haven’t used their solution, but if you have, let us know in the comments how it went!

Buffer’s reach and RT metrics

Again, this only covers your individual tweets’ metrics, and Buffer only grabs metrics on tweets sent out via their platform. Buffer’s reach metric is similar to what many traditional advertisers and people in public relations are used to, and it is similar to Twitter’s general impressions metric. Reach looks at how far your tweet may have traveled based on the size of each retweeter’s audience.
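A hedged sketch of how a reach-style estimate might be computed; Buffer’s actual formula isn’t documented here, so treat this as purely illustrative:

```python
def estimated_reach(author_followers, retweeter_followers):
    """Rough reach estimate: your own audience plus the audiences
    of everyone who retweeted you. A hypothetical calculation,
    not Buffer's actual formula."""
    return author_followers + sum(retweeter_followers)

# You have 2,000 followers; three accounts retweeted you
reach = estimated_reach(2_000, [500, 1_200, 80])
```

Note this counts potential impressions, not actual views, which is why reach numbers always look flattering next to click or engagement counts.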

Like most analytic tools, you can export the metrics and play with them in Excel. Or you can pay for Buffer’s business analytics, which runs between $50–$250/month.

Trending topics and hashtag reports

There are many tools out there where you can track specific trends and hashtags around your brand. At MozCon, we know people are tweeting using #MozCon. But not every brand has a special hashtag, or even knows the hot topics around their brand.

SproutSocial’s trends report is unique in that it pulls both the topics and hashtags most associated with your brand and the engagement around those.

Obviously, last July, #MozCon was hot. But you can also see from what else is trending that there’s positive community sentiment around our brand.

Buzzsumo

Our friends at Buzzsumo can be used as a Topsy topic replacement and share counter. They did a great write-up on how to use their tool for keyword research. They are providing share counts from Gnip’s data.

Share counts from BuzzSumo

However, when I ran some queries on Moz’s blog posts, there seemed to be a big gap in their share counts. While we’d expect Moz’s counts to dip a bit on the weekends, there should still be something there:

BuzzSumo on Moz's share counts over the week

I’m unsure whether this is a Buzzsumo or a Gnip data issue. It’s also possible that there are limits on the data, especially since Moz has a large number of followers and gets large numbers of shares on our posts.

Use Fresh Web Explorer’s Mention Authority instead

While Fresh Web Explorer’s index only covers recent data — the tool’s main function being to find recent mentions of keywords around the web a la Google Alerts — it can be helpful if you’re running a campaign and relying on fresh data no older than a month. Mention Authority does include social data. (Sorry, the full formula behind the score is one of Moz’s few trade secrets.) What’s nice about this score is that it’s directly comparable across different disciplines, especially publicity campaigns, and can serve as a holistic alternative.

Fresh Web Explorer's mention authority

Embedded tweets for social proof

Stealing this one from our friends at Buffer, but if you’re looking to get social proof back for people visiting your post, embedded tweets can work well. This allows others to see that your tweet about the post was successful, perhaps choosing to retweet and share with their audience.

Obviously, this won’t capture your goals to hand to a boss. But this will display some success and provide an easy share option for people to retweet your brand.

Predictions for the future of Twitter’s share count removal

Twitter will see this as a wash for engagement

With tweets now included directly in Google search results, the need for direct social proof is partly balanced out. That said, with the recent timeline discussions and other changes, people are watching Twitter closely, with many predicting the death of Twitter. (Oh, the irony of trending hashtags when #RIPTwitter is popular.)

Twitter may not relent fully, but it may offer the data more cheaply through Gnip. Alternatively, it may release some kind of “sample” share count metric instead. Serving up share count data on all links certainly costs a lot of money on the technical side. I’m sure this removal decision was reached with a “here’s how much money we’ll save” attached to it.

Questions about Twitter’s direction as a business

For a while, Twitter focused itself on being a breaking news business. At SMX East in 2013, Twitter’s Richard Alfonsi spoke about Twitter being in competition with media and journalism and being a second screen while consuming other media.

Lack of share counts, however, make it hard for companies to prove direct value. (Though I’m sure there are many advertisers wanting only lead generation and direct sales from the platform.) Small businesses, who can’t easily prove other value, aren’t going to see an easy investment in the platform.

Not to mention that issues around harassment have caused problems for even celebrities with large followings, like Sue Perkins (UK comedian), Joss Whedon (director and producer), Zelda Williams (daughter of Robin Williams), and Anne Wheaton (wife of Wil Wheaton). This garners extremely bad publicity for the company, especially since most were active users of Twitter.

No doubt Twitter shareholders are on edge after stock prices went down and the platform added a net of 0 new users in Q4 of 2015. Is the removal of share counts one item on the long list of reasons why Twitter didn’t grow in Q4? Twitter has made some big revenue and shipping promises to shareholders in response.

Someone will build a tool to scrape Twitter and sell share counts.

When Google rolled out (not provided), every SEO software company clamored to make tools to get around it. Since Gnip data is so expensive, it’s pretty impractical for most companies. The only way to actually build this tool would be to scrape all of Twitter, which has many perils. Companies like Hootsuite, Buffer, and SproutSocial are the best set up to do it more easily, but they may not want to anger Twitter.

What are your predictions for Twitter’s future without share counts? Did you use the share counts for your brand, and how did you use them? What will you be using instead?

Header image by MKH Marketing.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Tuesday, February 16, 2016

Revved Up Rankings: History & Filtering at Your Fingertips, New in Moz Pro

Posted by jmodjeska

Today I’m proud to announce some new features in Moz Pro that help you get a lot more value out of your keyword rankings reports. You can now view your full rankings history for any campaign, select specific date ranges for your charts and tables, better segment your rankings data to get a clearer understanding of your performance and visibility, and effectively manage large campaigns with numerous keywords. Did I also mention it’s lightning-fast? To get started, visit the keyword rankings page in any of your campaigns or test drive Moz Analytics with a free trial today.

Want a quick recap? Tori goes over the highlights in this quick 1:22 video!

Historical rankings: getting from 12 to infinity

The major value of today’s release is that it enables customers to visualize their campaign’s entire rankings history. This is thanks to an ongoing effort to completely overhaul our data assembly architecture. I’m excited about today’s release because it lets loose the first phase of this overhaul initiative, and marks the end of the 12-cycle limitation in our rankings reports.

As of today, timeframe selection has no bounds. You can report on rankings data with start and end dates anywhere in the life of your campaign, up to and including the entire campaign’s history, even on campaigns with long histories and lots of keywords. Your full rankings histories have been liberated.

12 weeks of keyword rankings history in Moz Analytics — a limitation until today

Success! A campaign’s entire rankings history in Moz Analytics

And more new features

In addition to unlimited rankings history, we’re giving users the freedom to compare rankings, search visibility, engine performance, and competitive metrics within customizable timeframes. We want our users’ reporting needs to drive the application, and not vice-versa. Here are some other features available as of today:

  • Customizable timeframe selection. In addition to weekly and monthly views, you can now select and display start and end dates, and export reports for specific timeframes. Rankings deltas (changes over time) are now calculated over the duration of the selected timeframe.

Calendar controls to select your data display range

Quick-select menu for common timeframes

  • Flexible, universal filtering. Fast response times and full keyword history mean no more limits on how you view and filter your data. Use the new universal filter to narrow displayed keywords by locality, labels, and keyword text.

  • On-the-fly aggregate calculations. Rankings summaries, deltas, search visibility, and universal results all update on-demand whenever you select a new timeframe.
  • Flexible, fast sorting. Data points — like difference between rankings by engine — that previously took so much overhead to calculate that they couldn’t be sorted in-place, are now easily sortable on-demand.

Sort by anything, anywhere
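To make the timeframe-based deltas concrete, here’s a hypothetical sketch (the data model is invented for the example): the delta is the rank at the start of the selected range minus the rank at the end, so a positive number means the keyword moved up.

```python
from datetime import date

def rank_delta(history, start, end):
    """Change in rank over a selected timeframe, in the spirit of
    the new rankings deltas. `history` maps collection dates to rank
    positions; a positive delta means improvement (a lower position
    number at the end of the range)."""
    in_range = {d: r for d, r in history.items() if start <= d <= end}
    first, last = min(in_range), max(in_range)
    return in_range[first] - in_range[last]

# Hypothetical weekly rankings for one keyword
history = {
    date(2016, 1, 4): 12,
    date(2016, 1, 11): 9,
    date(2016, 2, 1): 5,
}
delta = rank_delta(history, date(2016, 1, 1), date(2016, 2, 15))
```

Here the keyword climbed from #12 to #5 within the selected range, a delta of +7, regardless of how many collection cycles fall inside the window.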

And performance improvements, too

These new features are built on an entirely new architecture. We’ve been running the new and old systems in full parallel mode for about two months now to ensure everything was ready to switch over. This has also given us the opportunity to measure some key performance improvements:

  • 30X faster pipeline. Our data assembly and storage processes run up to 30X faster, eliminating delays between data collection and in-app availability. The low latency between data collection and availability is what facilitates the delivery of full campaign histories.
  • 20X faster server response times. For most in-app requests, our response times are dramatically faster than the previous system. We’re seeing rankings datasets delivered in 50 ms for average-sized campaigns (compared to 800+ ms in the previous system). We’ve also moved many calculations into the browser, reducing network calls and wait times for filter and sort requests.

Why we did all of this

Rankings data is important to our customers

Keyword rankings data is a core component of the Moz Pro suite of tools. We gather localized and national data on millions of keywords each day across hundreds of search engine locales so that our customers can analyze their SEO keyword performance. Moz Analytics users spend the bulk of their time in the Rankings section, where we present metrics that include mobile and desktop keyword rankings, historical SERP analysis, local and national keywords, search visibility scores, and competitive metrics.

The data was already there

We store deep historical rankings data going back to the moment of a campaign’s creation. While this information has always been accessible via historical rankings CSV downloads, we’ve been aware for some time that this is frustrating and this data would be much more useful in the UI. What held us back was our architecture. If you’re interested in the technical challenges and how we overcame them to deliver these new features, I offer a detailed explanation on our Developer Blog, covering the project background and architecture that makes all of this possible.

Where we’ll go next

We plan to round out our rankings overhaul project with backend and UI updates to the Analyze a Keyword page. We’ll also speed up Page Optimization, at which point the entire corpus of ranking-related data will be on our new platform.

Ultimately, all of our numerous datasets, including crawl and links, will be assembled and stored on the new architecture, unlocking new features and delivering data faster as we go. We’ll continue to be agile and iterative, progressively releasing updates as soon as they’re ready.

So go check it out!

To experience the new features in the rankings section, visit your ranking report in any Moz Analytics campaign. If you’re not already a Moz Pro subscriber, why not take a free trial and see how our software can help you do better marketing? As always, we would love to hear your feedback below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Beyond App Streaming & AMP: Connection Speed's Impact on Mobile Search

Posted by Suzzicks

Most people in the digital community have heard of Facebook’s 2G Tuesdays. They were established to remind users that much of the world still accesses the Internet on slow 2G connections, rather than 3G, 4G, LTE or WiFi.

For an online marketer in the developed world, it’s easy to forget about slow connections, but Facebook is particularly sensitive to them. A very high portion of their traffic is mobile, and a large portion of their audience uses their mobile device as their primary access to the Internet, rather than a desktop or laptop.

Facebook and Google agree on this topic. Most digital marketers know that Google cares about latency and page speed, but many don’t realize that Google also cares about connection speed.

Last year they began testing their revived mobile transcoding service, which they call Google Web Lite, to make websites faster in countries like India and Indonesia, where connection speed is a significant problem for a large portion of the population. They also recently added Data Saver Mode in Chrome, which has a similar impact on browsing.

AMP pages begin ranking in mobile results this month

This February, Google will begin ranking AMP pages in mobile search results. These will provide mobile users access to news articles that universally render in about one second. If you haven’t seen it yet, use this link on your phone to submit a news search, and see how fast AMP pages really are. The results are quite impressive.

In addition to making web pages faster, Google wants to make search results faster. They strive to provide results that send searchers to sites optimized for the device they’re searching from. They may alter mobile search results based on the connection speed of the searcher’s device.

To help speed up websites and search results at the same time, Google is also striving to make Chrome faster and lighter. They’re even trying to ensure that it doesn’t drain device batteries, which is something that Android users will especially appreciate! Updated versions of Chrome actually include a new compression algorithm called Brotli, which promises to compress website files 26% more than previous versions of Chrome.

We’ll review the impact of Google’s tests on changing search results based on connection speed. We’ll outline how and why results from these tests could become more salient and impact search results at various different speeds. Finally, we’ll explain why Google has a strong incentive to push this type of initiative forward, and how it will impact tracking and attribution for digital marketers now and in the future.

The table below provides a sneak peek of the various connection speeds at which Google products are best accessed and how these relationships will likely impact cross-device search results in the future.

Connection Speed | Best for these Google Products | Impact on SERP
WiFi & Fiber | Fiber, ChromeCast, ChromeCast Music, Google Play, Google Music, Google TV, ChromeBooks, Nest, YouTube, YouTube Red | Streaming Apps, Deep Linked Media Content
3G, 4G, LTE | Android Phones, Android Wear, Android Auto, ChromeBooks, YouTube, YouTube Red | Standard Results, App Packs, Carousels, AMP Pages
2G & EDGE | Android Phones, Android Auto | Basic Results, Google Web Lite, AMP Pages

Basic vs. standard mobile search results

The image below shows the same search on the same phone. The phone on the left is set to search on EDGE speeds, and the one on the right is set to 4G/LTE. Google calls the EDGE search results “Basic,” and the 4G/LTE results “Standard.” They even include a note at the bottom of the page explaining “You’re seeing a basic version of this page because your connection is slow” with an option to “switch to standard version.” In some iterations of the message, this sentence was also included: “Google optimized some pages to use 80% less data, and rest are marked slow to load.”

Notice that the EDGE connection has results that are significantly less styled and interactive than the 4G/LTE results.

Serving different results for slower connection speeds is something that Google has tested before, but it’s a concept that seems to have been mostly dormant until the middle of last year, when these Basic results started popping up. Google quietly announced it on Google+, rather than with a blog post. These results are not currently re-creatable (at least for me), but the concept and eventual implementation of this kind of variability could have a significant impact on the SEO world, further deprecating our ability to monitor keyword rankings effectively.

The presentation of the mobile search results isn’t all that’s changing. The websites included and the order in which they’re ranked changes, as well. Google knows that searchers with slow connections will have a bad experience if they try to download apps, so App Packs are not included in any Basic search results. That means a website ranking in position #7 in Standard search results (after the six apps in the App Pack) can switch to ranking number one in a Basic search. That’s great news if you’re the top website being pushed down by the App Pack!
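The behavior described above can be sketched as a toy function; the result structure and names are invented for illustration and are in no way Google’s implementation:

```python
def build_serp(results, connection):
    """Hypothetical sketch of Basic vs. Standard results: on slow
    connections, app results are dropped entirely and the remaining
    web results move up the page."""
    if connection in ("2G", "EDGE"):
        return [r for r in results if r["type"] == "web"]
    return results

# Invented miniature Standard result set
standard = [
    {"type": "app", "name": "City Jump"},
    {"type": "app", "name": "Man of Steel"},
    {"type": "web", "name": "herogamesworld.com"},
]
basic = build_serp(standard, "EDGE")
```

On the fast connection the web result sits at #3 behind the apps; on EDGE it becomes the #1 (and only) result, which is the ranking jump described above.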

The full list of search results is included below (in the original, items that appear in only one result were bolded).

Standard Search Result – “Superman Games”
1. App – City Jump
2. App – Man of Steel
3. App – Superman Homepage
4. App – Superbman
5. App – Batman Arkham Origins
6. App – Subway Superman Run
7. Web – Herogamesworld.com>superman-games
8. Web – Heroesarcade.com>play-free>sub…
9. Web – Wikipedia>wiki>List_of_Superman_vi…
10. Web – LEGO>dccomicssuperheroes>games

Basic Search Result – “Superman Games”
1. Web – herogamesworld.com>superman-games
2. Web – http://ift.tt/1XuK3hU
3. Web – LEGO>dccomicssuperheroes>games
4. Web – Wikipedia>wiki>List_of_Superman_vi…
5. Web – http://ift.tt/1KTitsG
6. Web – YouTube>watch (Superman vs Hulk – O Combate – YouTube)
7. Web – http://ift.tt/1Lr0sNl
8. Web – fanfreegames.com > superman-games
9. Web – moviepilot.com>posts > 2015/06/25
10. Web – m.batmangamesonly.com > superman-ga…

You may have the urge to write this off, thinking all of your potential mobile customers have great phones and fast connections, but you’d be missing the bigger picture here.

First, slow connection speeds can happen to everyone: when they’re in elevators, basements, subways, buildings with thick walls, outside of city centers, or simply in places where the mobile connection is overloaded or bad. Regardless of where they are, users will still try to search, often ignorant of their connection speed.

Second, this testing probably indicates that connection speed is an entirely new variable which could even be described as a ranking factor.

Responsive design does not solve everything

Google’s desire to reach a growing number of devices might sound fantastic if you’re someone who’s recently updated a site to a responsive or adaptive design, but these new development techniques may have been a mixed blessing. Responsive design and adaptive design can be great, but they’re not a panacea, and have actually caused significant problems for Google’s larger goals.

Responsive sites face speed and development challenges.

Responsive design sites are generally slow, which means there is a strong chance that they won’t rank well in Basic search results. Responsive sites can be built to function much more quickly, but it can be an uphill battle for developers. They face an ever-growing set of expectations, frameworks are constantly changing, and they’re already struggling to cram extra functionality and design into clean, light, mobile-first designs.

They can have negative repercussions.

Despite Google’s insistence that responsive design is easier for them to crawl, many webmasters that transitioned saw losses in overall conversions and time-on-site. Their page speed and UX were both negatively impacted by the redesigns. Developers are again having to up their skills and focus on pre-loading, pre-rendering, and pre-fetching content in order to reduce latency — sometimes just to get it back to what it was before their sites went responsive. Others are now forced to create duplicate AMP pages, which only adds to the burden and frustration.

Wearables/interactive media pose new problems.

Beyond the UX and load time concerns, responsive design sites also don’t allow webmasters to effectively target these new growth channels that Google cares about — wearables and interactive media. Unfortunately, responsive design sites are nearly unusable on smartwatches, and probably always will be.

Similarly, Google is getting much more into media, linking search with large-screen TVs, but even when well-built, responsive design sites look wonky on popular wide-screen TVs. It seems that the development of mobile technology may have already out-paced Google’s recommended “ideal” solution.

Regardless, rankings on all of these new devices will likely be strongly influenced by the connection speed of the device.

Is AMP the future of mobile search for slow connections?

The good news is that AMP pages are great candidates for ranking in a Basic search result, because they work well over slow connections. They’ll also be useful on things like smart watches and TVs, as Google will be able to present the content in whichever format it deems appropriate for the device requesting it — thus allowing them to provide a good experience on a growing number of devices.

App streaming & connection speed

A couple months ago, Google announced the small group of apps in a beta test for App Streaming. In this test, apps are hosted and run from a virtual device in Google’s cloud. This allows users to access content in apps without having to download the app itself. Since the app is run in the cloud, over the web, it seems that this technology could eventually remove the OS barrier for apps — an Android app will be able to operate from the cloud on an iOS device, and an iOS app will be able to run on an Android device the same way. Great for both users and developers!

Since Google is quietly working on detecting and perfecting their connection-speed-based changes to the algorithm, it’s easy to see how this new ranking factor will be relied upon even more heavily when App Streaming becomes a reality. App Streaming will only work over WiFi, so Google will be able to leverage what it’s learned from Basic mobile results to provide yet another divergent set of results to devices that are on a WiFi connection.

The potential for App Streaming will make apps much more like websites, and deep links much more like…regular web links. In some ways, it may bring Google back to its “Happy Place,” where everything is device and OS-agnostic.

How do app plugins & deep links fit into the mix?

The App Streaming concept actually has a lot in common with the basic premise of the Chrome OS, which was native on ChromeBooks (but has now been unofficially retired and functionally replaced with the Android OS). The Chrome OS provided a simple software framework that relied heavily on the Chrome browser and cloud-based software and plugins. This allowed the device to leverage the software it already had, without adding significantly more to the local storage.

This echoes the plugin phenomenon that we’re seeing emerge in the mobile app world, where mobile operating systems and apps use deep links to local app plugins. Options like emoji keyboards and image aggregators like GIPHY can be downloaded and automatically pulled into the Facebook Messenger app.

Deep-linked plugins will go a long way toward freeing storage space and improving UX on users’ phones. That’s great, but App Streaming is also resource-intensive. One of the main problems with the Chrome OS was that it relied so heavily on WiFi connectivity — that’s relevant here, too.

What does music & video casting have to do with search?

Most of the apps that people engage with on a regular basis, for hours at a time, are media apps used over WiFi. Google wants to be able to index and rank that content as deep links, so that it can open and run in the appropriate app or plugin.

In fact, the indexing of deep-linked media content has already begun. The ChromeCast app is using new OS crawler capabilities in the Android Marshmallow OS to scan a user’s device for deep-linked media. They then create a local cache of deep links to watched and un-watched media that a user might want to “cast” to another device, then organize it and make it searchable.

For instance, if you want to watch a documentary on dogs, you could search your Netflix and Hulu apps, then maybe Amazon Instant Video, and maybe even the NBC, TLC, BBC, or PBS apps for a documentary on dogs.

Or, you could just do one search in the ChromeCast app and find all the documentaries on dogs that you can access. Assuming the deep links on those apps are set up correctly, you will be able to compare the selection across all the apps that you have, choose one, and cast it. Again, these types of results are less relevant if you’re on a 2G or 3G connection and thus unable to cast the media over WiFi.
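That one-search-across-apps idea can be sketched as a query over a local cache of deep links; the data model and deep link URIs below are invented for the example:

```python
def search_deep_links(index, query):
    """Illustrative only: one search across a local cache of
    deep-linked media from several apps, in the spirit of how the
    ChromeCast app's search is described."""
    q = query.lower()
    return [item for item in index if q in item["title"].lower()]

# Invented cache of deep links scanned from installed media apps
index = [
    {"app": "Netflix", "title": "Dogs: A Documentary", "deep_link": "netflix://title/123"},
    {"app": "Hulu", "title": "Cooking Show", "deep_link": "hulu://show/9"},
    {"app": "PBS", "title": "The Secret Life of Dogs", "deep_link": "pbs://video/77"},
]
matches = search_deep_links(index, "dogs")
```

One query surfaces matching titles from every app at once; each result carries the deep link needed to open the content in the right app and cast it.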

This is an important move for Google. Recently, they’ve been putting a lot of time and energy into their media offerings. They successfully launched Chromecast 2 and Chromecast Music at about the same time as they dramatically improved their Google Music subscription service (a competitor to Spotify and Pandora) and launched YouTube Red (their rival to Hulu, Netflix, and Amazon Prime Video). They may eventually even begin to include the “cast” logo directly in SERPs, as they have in the default interfaces of Google+ and YouTube.

Google’s financial interest in adapting results by connectivity

Google’s interest in varying search results by connection speed is critical to their larger goals. A large portion of mobile searches are for entertainment, and the need for entertainment is unending and easy to monetize. Subscription models provide long-term stable revenue with minimal upkeep or effort from Google.

Additionally, the more time searchers spend consuming media, either by surfacing it in Google or the ChromeCast app, or through Now on Tap, the more Google can tailor its marketing messages to them.

Finally, the passive collection and aggregation of people’s consumption data also allows Google to quickly and easily evaluate which media is popular or growing in popularity, so they can tailor Google Play’s licensing strategy to meet users’ demands, improving the long-term value to their subscribers.

As another line of business, Google also offers ChromeCast Music and Google Music, which are subscription services designed to compete with Amazon Music and iTunes. You might think that all this streaming — streaming apps, streaming music, streaming video and casting it from one device to another — would slow down your home or office connection speed as a whole, and you would be right. However, Google has a long-term solution for that too: Google Fiber. The more reliant people become on streaming content from the cloud, the more important it will be for them to get on Google’s super-fast Internet grid. Then you can stream all you want, and Google can collect even more data and monetize as they see fit.


Image credit: The NextWeb

What’s the impact of connection variability in SERPs on SEO strategy & reporting?

So what might this mean for your mobile SEO strategy? Variability by connection speed will make mobile keyword rank reporting and attribution nearly impossible. Currently, most keyword reporting tools either work by aggregating ranking results that are reported from ISPs, or by submitting test queries and aggregating the results.

Unfortunately, while that’s usually sufficient for desktop reporting (though still error-prone and very difficult for highly local searches), it’s nearly impossible for mobile. All of the SEO keyword reporting tools out there struggle to report on mobile search results, and none take connection speed into account. Most don’t even take OS into account, so App Packs and the website rankings around them are not accurately reported.

Similarly, most tools are not able to report on anything about deep links, so it’s hard to know if click-through traffic is even getting to the website, or if it might be getting to a deep screen in an app instead. In short, ranking tools have a long way to go before they will be accurate in mobile, and this additional factor makes the reporting even harder.

In mobile, there are additional factors that can change the mobile rankings and click-through rates dramatically:

  • Localization
  • Featured Rich Snippets (Answer Boxes)
  • Results that are interactive directly in the SERP (playable YouTube videos, news, Twitter and image carousels)
  • AJAX expansion opportunities

All of these things are nightmares for the developers who write ranking software that scrapes search results. Even worse, Google is constantly testing new presentation schemes, so even if the tools could briefly get it right, they risk a constant game of catch-up.

One of the reasons Google is constantly testing new presentation schemes? They’re trying to make their search results work on an ever-growing list of new devices while minimizing the need for additional page loads or clicks.

If you think about a traditional set of search results, they’re an ordered list that goes from top to bottom. Google has gotten so fast that the ten-link restriction actually hurts the user experience when the mobile connection is good.

In response, Google has started to include carousels that scroll left to right. Only one or two search results can show on a smart watch at one time, so this feature allows searchers to delve deeper into a specific type of result without the additional click or page load.

However, carousels don’t appear in Basic search results. Also, a carousel counts as only one result in the vertical list, but can add as many as 5 or 10 results to the page. Again, SEOs and SEO software really haven’t settled on a way to represent this effectively in their tracking, and little has been reported about the impact on CTR for either the items in the carousel or the items below it.
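One plausible (and entirely hypothetical) representation a tracking tool could use is to give each carousel item a sub-position within the single vertical slot the carousel occupies. The SERP structure and field names below are illustrative, not taken from any real tool.

```python
# Hedged sketch: flattening a SERP that contains a carousel into trackable
# positions. A carousel occupies one vertical slot but contributes several
# sub-positions. The result format here is hypothetical.

serp = [
    {"type": "organic", "url": "example.com/a"},
    {"type": "carousel", "items": ["video1", "video2", "video3", "video4", "video5"]},
    {"type": "organic", "url": "example.com/b"},
]

def flatten(serp):
    """Assign each visible item a (slot, sub) position; sub=0 means a
    plain vertical result, sub>=1 means the Nth item inside a carousel."""
    flat = []
    for slot, result in enumerate(serp, start=1):
        if result["type"] == "carousel":
            for sub, item in enumerate(result["items"], start=1):
                flat.append({"slot": slot, "sub": sub, "item": item})
        else:
            flat.append({"slot": slot, "sub": 0, "item": result["url"]})
    return flat

rows = flatten(serp)
print(len(rows))  # 2 organic results + 5 carousel items = 7 tracked rows
```

Note that the second organic result still reports as slot 3, even though seven items actually sit above it on the page, which is exactly the ambiguity tools have not settled on how to handle.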

Conclusion

Speed matters.

Not just latency and page speed, but also connection speed. While we can’t directly impact the connection speed of our mobile users, we should at least anticipate that search results may vary based on the context and use-case of their searches, and strategize accordingly.

In the meantime, SEOs and digital marketers should be wary of tools that report mobile keyword rankings without specifying things like OS, app pack rankings, location and, eventually, connection speed.
