Monday, November 30, 2015

It's Here! The MozCon Local 2016 Agenda

Posted by EricaMcGillivray

*drumroll* The MozCon Local 2016 agenda is here! For all your local marketing and SEO needs, we’re pleased to present a fabulous lineup of speakers and topics for your enjoyment. MozCon Local is Thursday and Friday, February 18–19, 2016, in Seattle. On Thursday, our friends at LocalU will present a half-day of intensive workshops, and on Friday we’ll have an entire day of keynote-style conference fun. (You do need to purchase the workshop ticket separately from the conference ticket.)

If you’ve just remembered that you need to purchase your ticket, do so now:

Buy your MozCon Local 2016 ticket!

Otherwise, let’s dig into that agenda!

MozCon Local 2016


Thursday workshops

12:00–12:30pm
Registration


12:30–12:35pm
Introduction and Housekeeping


12:35–12:55pm
The State of Local Search with David Mihm

Already one of the most complex areas in all of search marketing, local has never been more fragmented than it is today. Following a brief summary of the Local Search Ranking Factors, David will give you his perspective on which strategies and tactics are worth paying attention to, and which ones are simply “nice to have.”

David Mihm is one of the world’s leading practitioners of local search engine marketing. He has created and promoted search-friendly websites for clients of all sizes since the early 2000s. David co-founded GetListed.org, which he sold to Moz in November 2012.


12:55–1:35pm
Local Search Processes with Aaron Weiche, Darren Shaw, Mike Ramsey, and Paula Keller


Panel discussion and Q&A on the best processes to use in marketing local businesses online.


1:35–2:35pm
How to do Competitive Analysis for Local Search with Aaron Weiche, Darren Shaw, David Mihm, Ed Reese, Mary Bowling, and Mike Ramsey

Each panelist will demonstrate their methods and the tools they use to audit a specific area of the online presence of a single local business. The end result will be a complete picture of how a thorough competitive analysis for a local business can be done.


2:35–2:50pm
Break


During this time period, each attendee will choose any three 30-minute workshops to attend. Some workshops are offered in all time slots, while others are only offered at specific times. Present your challenges, discuss solutions, and get your burning questions answered in these small groups.

LocalU Workshops

2:50–3:20pm

  • Tracking and Conversions with Ed Reese
  • Solving Problems at Google My Business with Willys DeVoll and Mary Bowling
  • Ask Me Anything About Local Search with David Mihm
  • Local Targeting of Paid Advertising with Paula Keller
  • Using Reviews to Build Your Business with Aaron Weiche
  • Local Links with Mike Ramsey
  • Citations: Everything You Need to Know with Darren Shaw

3:20–3:50pm

  • Tracking and Conversions with Ed Reese
  • Solving Problems at Google My Business with Willys DeVoll and Mary Bowling
  • Ask Me Anything About Local Search with David Mihm
  • Local Targeting of Paid Advertising with Paula Keller
  • Using Reviews to Build Your Business with Aaron Weiche
  • Agency Issues with Mike Ramsey
  • Local Links with Darren Shaw

3:50–4:20pm

  • Tracking and Conversions with Ed Reese
  • Solving Problems at Google My Business with Willys DeVoll and Mary Bowling
  • Ask Me Anything About Local Search with David Mihm
  • Local Targeting of Paid Advertising with Paula Keller
  • Using Reviews to Build Your Business with Aaron Weiche
  • Local Links with Mike Ramsey
  • Citations: Everything You Need to Know with Darren Shaw

4:20–5:00pm
Live Site Reviews

The group will come back together for live site reviews!


5:00–6:00pm
Happy Hour!


Friday conference

Mary Bowling talks to the local crowd

8:00–9:00am

Breakfast


9:00–9:05am
Welcome to MozCon Local 2016! with David Mihm

David Mihm is one of the world’s leading practitioners of Local search engine marketing. He has created and promoted search-friendly websites for clients of all sizes since the early 2000s. David co-founded GetListed.org, which he sold to Moz in November 2012.


9:05–9:35am
Feeding the Beast: Local Content for RankBrain with Mary Bowling

We now know that searcher behavior and continual testing via machine learning do indeed affect Google rankings and algorithm refinements. Learn how to create local content that satisfies both Google and our human visitors.

Mary Bowling’s been in SEO since 2003 and has specialized in local SEO since 2006. When she’s not writing about, teaching, consulting, and doing internet marketing, you’ll find her rafting, biking, and skiing/snowboarding in the mountains and deserts of Colorado and Utah.


9:35–10:05am
Local Links: Tests, Tools, and Tactics with Mike Ramsey

Going beyond the map pack, links can bring you qualified traffic, organic rankings, penalties, or filters. Mike will walk through lessons, examples, and ideas for you to utilize to your heart’s content.

Mike Ramsey is the president of Nifty Marketing and a founding faculty member of Local University. He is a lover of search and social with a heavy focus in local marketing and enjoys the chess game of entrepreneurship and business management. Mike loves to travel and loves his home state of Idaho.


10:05–10:35am
Citation Investigation! with Darren Shaw

Darren investigates how citations travel across the web and shares new insights into how to better utilize the local search ecosystem for your brands.

Darren Shaw is the president and founder of Whitespark, a company that builds software and provides services to help businesses with local search. He’s widely regarded in the local SEO community as an innovator, one whose years of experience working with massive local data sets have given him uncommon insights into the inner workings of the world of citation-building and local search marketing. Darren has been working on the web for over 16 years and loves everything about local SEO.


10:35–10:55am
AM Break


10:55–11:20am
Technical Site Audits for Local SEO with Lindsay Wassell

Onsite SEO success lies in the technical details, but extensive SEO audits can be too expensive and impractical. Lindsay shows you the most important onsite elements for local search optimization and outlines an efficient path for improved performance.

Lindsay Wassell’s been herding bots and wrangling SERPs since 2001. She has a zeal for helping small businesses grow with improved digital presence. Lindsay is the CEO and founder of Keyphraseology.


11:20–11:45am
Optimizing and Hacking Email for Mobile with Justine Jordan

Email may be an old dog, but it has learned some new mobile tricks. From device-a-palooza and preview text to tables and triggers, Justine will break down the subscriber experience so you (and your audience) get the most from your next campaign.

In addition to being an email critic, cat lover, and explain-a-holic, Justine Jordan also heads up marketing for Litmus, an email testing and analytics platform. She’s strangely passionate about email, hates being called a spammer, and still codes like it’s 1999.


11:45am–12:10pm
Understanding App-Web Convergence and the Impending App Tsunami with Emily Grossman

People no longer distinguish between app and web content; both compete for the same space in local search results. Learn how to keep your local brand presence afloat as apps and deep links flood into the top of search results.

Emily Grossman is a Mobile Marketing Specialist at MobileMoxie, and she has been working with mobile apps since the early days of the app stores in 2010. She specializes in app search marketing, with a focus on strategic deep linking, app indexing, app launch strategy, and app store optimization (ASO).


12:10–12:35pm
Building Customer Love and Loyalty in a Mobile World with Robi Ganguly

How the best companies in the world relate to customers, create a personal touch, and foster customer loyalty at scale.

Robi Ganguly is the co-founder and CEO of Apptentive, the easiest way for every company to communicate with their mobile app customers. A native Seattleite, Robi enjoys building relationships, running, reading, and cooking.


12:35–1:35pm
Lunch



1:35–2:05pm
The Past, Present, and Future of Local Listings with Luther Lowe and Willys DeVoll

Two of the biggest kids on the local search block, Google and Yelp, share their views on the changing world of local listings, their place in the broader world of local search, and what you can do to keep up, in this Q&A moderated by David Mihm.

Luther Lowe is VP of Public Policy at Yelp.

Willys DeVoll is the content strategist for Google My Business, and he spends his time designing and writing online content to help business owners enhance their presence online. He’s also a major proponent of broccoli and gorillas.


2:05–2:35pm
Fake It Til You Make It: Brand Building for Local Businesses with Paula Keller

Explore real-world examples of how your local business can establish a brand that both customers and Google will recognize and reward.

As Director of Account Management at Search Influence, Paula Keller strategizes with businesses on improving their search, social, and online ads results, and she works to scale those tactics for her team’s 800+ local business clients. Paula views online marketing the same way she views cooking (her favorite way to spend her free time): trends come and go, but classic tactics are always the foundation of success!


2:35–3:05pm
Your Marketing Team is Larger Than You Think with Dana DiTomaso

Imagine doing such a great job with your branding that you become a part of your customer’s life. They trust your brand as part of their community. This magic doesn’t happen by dictating the corporate voice from a head office, but by empowering your locations to build customer community.

Whether at a conference, on the radio, or in a meeting, Dana DiTomaso likes to impart wisdom to help you turn a lot of marketing bullshit into real strategies to grow your business. After 10+ years, she’s (almost) seen it all. It’s true, Dana will meet with you and teach you the ways of the digital world, but she is also a fan of the random fact. Kick Point often celebrates “Watershed Wednesday” because of Dana’s diverse work and education background. In her spare time, Dana drinks tea and yells at the Hamilton Tiger-Cats.


3:05–3:25pm
PM Break


3:25–3:55pm
Mo’ Listings, Mo’ Problems: Managing Enterprise-Level Local Search with Cori Shirk

Listings are everyone’s favorite local search task…not. Cori takes you through how to tackle them at large scale, keep up, and not burn out.

Cori Shirk is a member of the SEO team at Seer Interactive, where she specializes in managing enterprise local search accounts and guiding strategy across all of Seer’s local search clients. When she’s not sitting in front of a computer, you can usually find her out at a concert enjoying a local craft beer.


3:55–4:10pm
The Enterprise Perspective on Local Search with Matthew Moore

Learn how the person responsible for local visibility across a portfolio of nearly 1,000 locations tackles this space on a daily basis. Matthew from Sears Home Services shares his experiences and advice in this Q&A moderated by David Mihm.

Matthew Moore is Senior Director, Marketing Analytics at Sears Holdings Corporation.


4:10–4:40pm
How to Approach Social Media Like Big Brands with Adria Saracino

Facebook, Twitter, LinkedIn, Instagram, Pinterest, YouTube, Snapchat, Periscope…the seemingly never-ending world of social media can leave even the most seasoned marketer flailing among too many tasks and not enough results. Adria will help you cut through the noise and share actionable secrets that big brands use to succeed with social media.

Adria Saracino is a digital strategist whose marketing experience spans mid-stage startups, agency life, and speaking engagements at conferences like SearchLove and Lavacon. When not marketing things, you can see her cooking elaborate meals and posting them on her Instagram, @emeraldpalate.


4:40–5:10pm
Analytics for Local Marketers: The Big Picture and the Right Details with Rand Fishkin

Are your marketing efforts taking your organization where it needs to go, or are they just boosting your vanity metrics? Rand explains how to avoid being misled by the wrong metrics and how to focus on the ones that will keep you moving forward. Learn how to determine what to measure, as well as how to tie it to objectives with clear, concise, and useful data points.

Rand Fishkin uses the ludicrous title “Wizard of Moz.” He’s the founder and former CEO of Moz, co-author of a pair of books on SEO, and co-founder of Inbound.org.


6:00–10:00pm
MozCon Local Networking Afterparty, location TBA

Join your fellow attendees and Moz and LocalU staff for a networking party after the conference. Light appetizers and drinks included. See you there!

Buy your MozCon Local 2016 ticket!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Persona Research in Under 5 Minutes

Posted by CraigBradford

Well-researched personas can be a useful tool for marketers, but doing the research correctly takes time. What if you don’t have that extra time? Using a mix of Followerwonk, Twitter, and the Alchemy language API, it’s possible to do top-level persona research very quickly. I’ve built a Python script that can help you answer two important questions about your target audience:

  1. What are the most common domains that my audience visits and spends time on? (Where should I be trying to get mentions/links/PR?)
  2. What topics are they interested in or reading about on those sites? (What content should I potentially create for these people?)

You can get the script on Github: Twitter persona research

Once the script runs, the output is two CSV files: one is a list of the domains most commonly shared by the group, and the other is a list of the topics the audience is interested in.

A quick introduction to Watson and the Alchemy API

The Alchemy API has been around for a while, and it was recently acquired by the IBM Watson group. The language tool has 15 functions; I’ve used it in the past for language detection, sentiment analysis, and topic analysis. For this personas tool, I’ve used the Concepts feature. You can upload a block of text or ask it to fetch a URL for analysis, and the output is a list of concepts that are relevant to the page. For example, I put the Distilled homepage into the tool and looked at the concepts it returned.

Notice there are some strange things like Arianna Huffington listed, but running this tool over thousands of URLs and counting the occurrences takes care of any strange results. This highlights one of the interesting features of the tool: Alchemy isn’t just doing a keyword extraction task. Arianna Huffington isn’t mentioned anywhere on the Distilled homepage.

Alchemy has found the mention of Huffington Post and expanded on that concept. Notice that neither search engine optimization nor Internet marketing is mentioned on the homepage, yet both have been listed as the two most relevant concepts. Pretty clever. The Alchemy site sums it up nicely:

“AlchemyAPI employs sophisticated text analysis techniques to concept tag documents in a manner similar to how humans would identify concepts. The concept tagging API is capable of making high-level abstractions by understanding how concepts relate, and can identify concepts that aren’t necessarily directly referenced in the text.”
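To make the concept-tagging step concrete, here is a minimal Python sketch of fetching concepts for a single URL. It assumes the legacy AlchemyAPI REST endpoint (URLGetRankedConcepts) and its apikey/url/outputMode parameters as I recall them from the old documentation; treat the endpoint, parameter names, and response shape as assumptions rather than a description of the actual script.

    import requests

    # Legacy AlchemyAPI concepts endpoint -- recalled from old docs, so verify before relying on it.
    ALCHEMY_ENDPOINT = "http://access.alchemyapi.com/calls/url/URLGetRankedConcepts"

    def get_concepts(page_url, api_key):
        """Return the concept labels Alchemy associates with a page (illustrative sketch)."""
        params = {"apikey": api_key, "url": page_url, "outputMode": "json"}
        data = requests.get(ALCHEMY_ENDPOINT, params=params, timeout=30).json()
        # The response is assumed to contain a "concepts" list of {"text": ..., "relevance": ...} items.
        return [concept["text"] for concept in data.get("concepts", [])]

    # Example with a hypothetical key:
    # print(get_concepts("https://www.distilled.net/", "YOUR_ALCHEMY_KEY"))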

My thinking for this script is simple: If I get a list of all the links that certain people share and pass the URLs through the Alchemy tool, I should be able to extract the main concepts that the audience is interested in.

To use an example, let’s assume I want to know what topics the SEO community is interested in and what sites are most important in that community. My process is this:

  1. Find people that mention “SEO” in their Twitter bio using Followerwonk
  2. Get a sample of their most recent tweets using the Twitter API
  3. Pull out the most common domains that those people share
  4. Use the Alchemy Concepts API to summarize what the pages they share are about
  5. Output all of the above to a spreadsheet
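As a rough illustration of steps 2 and 3, here is a minimal Python 3 sketch that pulls recent tweets for each handle with the tweepy library and tallies the domains they link to. The real get_tweets.py may be organized quite differently; the handles and credential placeholders below are purely hypothetical.

    import tweepy
    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical credentials -- substitute your own Twitter API keys.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    def shared_domains(handles, tweets_per_user=200):
        """Count the domains linked to in each user's recent tweets (illustrative sketch)."""
        counts = Counter()
        for handle in handles:
            for tweet in api.user_timeline(screen_name=handle, count=tweets_per_user):
                for url in tweet.entities.get("urls", []):
                    domain = urlparse(url["expanded_url"]).netloc
                    if domain:
                        counts[domain] += 1
        return counts

    # Example with hypothetical handles:
    # print(shared_domains(["some_seo_handle", "another_seo_handle"]).most_common(30))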

Follow the steps below. Sorry, but the instructions are for Mac only; the script will work on PCs, but I’m not sure of the terminal setup.

How to use the script

Step 1 – Finding people interested in SEO

Searching Followerwonk is the only manual part of the process. I might build it into the script in the future, but honestly, it’s too easy to just download the usernames from the interface.

Go into the “Search Bios” tab and enter the job title in quotes. In this case, that’s “SEO.” More common jobs will return a lot of results; I recommend setting some filters to avoid bots. For example, you might want to only include accounts with a certain number of followers, or accounts with less than a reasonable number of tweets. You can download these users in a CSV as shown in the bottom-right of the image below:

Everything else can be done automatically using the script.

Step 2 – Downloading the script from GitHub

Download the script from Github here: Twitter API using Python. Use the Download Zip link on the right hand side as shown below:

Step 3 – Sign up for Twitter and Alchemy API keys:

It’s easy to sign up using the links below:

  • Get a Twitter API key
  • Get a free API key for Alchemy

Once you have the API keys, you need to install a couple of extra requirements for the script to work.

The easiest way to do that is to download Pip here: http://ift.tt/1mn7OFn — save the page as “get-pip.py”. Create a folder on your desktop and save the Git download and the “get-pip.py” file in it. You then need to open your terminal and navigate into that folder. You can read my previous post on how to use the command line here: The Beginner’s Guide to the Command Line.

The steps below should get you there:

Open up the terminal and type:

"cd Desktop/"

"cd [foldername]"

You should now be in the folder with the get-pip.py file and the folder you downloaded from Github. Go back to the terminal and type:

"sudo python get-pip.py"

"sudo pip install -r requirements.txt"

Create two more files:

  1. usernames.txt – This is where you will add all of the Twitter handles you want to research
  2. api_keys.py – The file with your API keys for Alchemy and Twitter

In the api_keys file, paste the following and add the respective details:

watson_api_key = "[INSERT ALCHEMY KEY]"

twitter_ckey = "[INSERT TWITTER CKEY]"

twitter_csecret = "[INSERT CSECRET]"

twitter_atoken = "[INSERT TOKEN]"

twitter_asecret = "[INSERT ASECRET]"

Save and close the file.

Step 4 – Run the script

At this stage you should:

  1. Have a usernames.txt file with the Twitter handles you want to research
  2. Have downloaded the script from Github
  3. Have a file named api_keys.py with your details for Alchemy and Twitter
  4. Installed Pip and the requirements file

The main code of the script can be found in the “get_tweets.py” file.

To run the script, go into your terminal and navigate to the folder where you saved the script (you should still be in the correct directory if you followed the steps above; use "pwd" to print the directory you’re in). Once you’re in the folder, run the script by typing "python get_tweets.py" in the terminal. Depending on the number of usernames you entered, it should take a couple of minutes to run. I recommend starting with one or two usernames to check that everything is working.

Once the script finishes running, it will have created two CSV files in the folder you created:

  1. “domain + timestamp” – This includes all the domains that people tweeted and the count of each
  2. “concepts + timestamp” – This includes all the concepts that were extracted from the links that were shared
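If you want to reproduce that output format yourself, here is a minimal sketch of dumping a Counter (of domains or concepts) to a timestamped CSV. The column headers and file-naming pattern are my own guesses, not necessarily what get_tweets.py produces.

    import csv
    import time
    from collections import Counter

    def write_counts(counts, prefix):
        """Write a Counter to '<prefix> <timestamp>.csv', most common items first (illustrative sketch)."""
        filename = "{} {}.csv".format(prefix, int(time.time()))
        with open(filename, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([prefix, "count"])  # hypothetical header row
            for item, count in counts.most_common():
                writer.writerow([item, count])
        return filename

    # Example:
    # write_counts(Counter({"moz.com": 42, "searchengineland.com": 37}), "domain")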

I did this process using “SEO” as the search term in Followerwonk. I used 50 or so profiles, which created the following results:

Top 30 domains shared:

Top 40 concepts

For the most part, I think the domains and topics are representative of the SEO community. The output above seems obvious to us, but try it for a topic that you’re not familiar with and it’s really helpful. The bigger the sample size, the better the results should be, but this is restricted by the API limitations.

Although it looks like a lot of steps, once you have this set up, it’s very easy to repeat — all you need to change is the usernames file. Using this tool can get you some top-level persona information in a very short amount of time.

Give it a try and let me know what you think.


Wednesday, November 25, 2015

30+ Important Takeaways from Google's Search Quality Rater's Guidelines

Posted by jenstar

For many SEOs, a glimpse at Google’s Search Quality Rater’s Guidelines is akin to looking into Google’s ranking algorithm. While they don’t give away the secret sauce to rank number one on Google, they do offer some incredible insight into what Google views as quality – and not-so-quality – and the types of pages they want to serve at the top of their search results.

Last week, Google made the unprecedented move of releasing the entire Search Quality Rater’s Guidelines, following an analysis of a leaked copy obtained by The SEM Post. While Google released a condensed version of the guidelines in 2013, until last week it had never released the full guidelines that its search quality raters receive.

First, it’s worth noting that quality raters themselves have no bearing on the rankings of the sites they rate. So quality raters could assign a low score to a website, but that low rating would not be reflected at all in the actual live Google search results.

Instead, Google uses the quality raters for experiments, assessing the quality of the search results when it runs these experiments. The guidelines themselves describe what Google feels searchers are looking for and want to find when they do a Google search. The types of sites that rate highest are the sites and pages Google wants to rank well. So while the guidelines aren’t directly search algorithm-related, they show what Google wants its algorithms to rank best.

The document itself weighs in at 160 pages, with hundreds of examples of search results and pages with detailed explanations of why each specific example is either good, bad, or somewhere in between. Here’s what’s most important for SEOs and webmasters to know in these newly-released guidelines.

Your Money or Your Life Pages (aka YMYL)

SEOs were first introduced to the concept of Your Money or Your Life pages last year in a leaked copy of the guidelines. These are the types of pages that Google holds to the highest standards because they’re the types of pages that can greatly impact a person’s life.

While anyone can make a webpage about a medical condition or offer advice about things such as retirement planning or child support, Google wants to ensure that these types of pages that impact a searcher’s money or life are as high-quality as possible.

In other words, if low-quality pages in these areas could “potentially negatively impact users’ happiness, health, or wealth,” Google does not want those pages to rank well.

If you have any web pages or websites that deal in these market areas, Google will hold your site to a higher standard than it would a hockey team fan page or a page of rice cooker recipes.

It is also worth noting that Google does consider any website that has a shopping component, such as an online store, as a type of site that also falls under YMYL for ratings. Therefore, ensuring the sales process is secure would be another thing raters would consider.

If a rater wouldn’t feel comfortable ordering from the site or submitting personal information to it, then it wouldn’t rate well. And if a rater feels this way, it’s very likely visitors would feel the same too — meaning you should take steps to fix it.

Market areas for YMYL

Google details five areas that fall into this YMYL category. If your website falls within one of these areas, or you have web pages within a site that do, you’ll want to take extra care that you’re supporting this content with things like references, expert opinions, and helpful supplementary or additional content.

  • Shopping or financial transaction pages
    This doesn’t apply merely to sites where you might pay bills online, do online banking, or transfer money. Any online store that accepts orders and payment information will fall under this as well.
  • Financial information pages
    There are a ton of low-quality websites that fall under this umbrella of financial information pages. Google considers these types of pages to be in the areas of “investments, taxes, retirement planning, home purchase, paying for college, buying insurance, etc.”
  • Medical information pages
    Google considers this category to go well beyond standard pages about medical conditions and pharmaceuticals; it also covers things such as nutrition and very niche health sites for sufferers of specific diseases or conditions — the types of sites that are often set up by people suffering from the medical condition themselves.
  • Legal pages
    We’ve seen a ton of legal-related sites pop up from webmasters who are looking to cash in on AdSense or affiliate revenue. But Google considers all types of legal information pages as falling under YMYL, including things such as immigration, child custody, divorce, and even creating a will.
  • All-encompassing “Other”
    Then, of course, there are a ton of other types of pages and sites that can fall under YMYL that aren’t necessarily in any of the above categories. These are still things where having the wrong information can negatively impact the searcher’s happiness, health, or wealth. For example, Google considers topics such as child adoption and car safety information as falling under this as well.

Google makes frequent reference to YMYL pages within the quality guidelines and repeatedly stresses the importance of holding these types of sites to a higher bar than others.

Expertise / Authoritativeness / Trustworthiness, aka E-A-T

Expertise / Authoritativeness / Trustworthiness — shortened to E-A-T — refers to what many think of as a website’s overall value. Is the site lacking in expertise? Does it lack authoritativeness? Does it lack trustworthiness? These are all things that raters are asked to consider when judging the overall quality of the website or web page, particularly for pages that fall into the YMYL category.

This is also a good rule of thumb for SEO in general. You want to make sure that your website has a great amount of expertise, whether it’s coming from you or from contributors. You also want to show people why you have that expertise. Is it the experience, relevant education, or other qualities that give the writer of each page that stamp of expertise? Be sure to show and include it.

Authoritativeness is similar, but from the website’s perspective. Google wants websites that have high authority on the topic. This can come from the expertise of the writers, or even from the overall quality of the community if it’s something like a forum.

When it comes to trustworthiness, Google again wants raters to decide: Is this a site you feel you can trust? Or is it somewhat sketchy, leaving you with trouble believing what the website is trying to tell you?

Why you need E-A-T

This also comes down to something that goes well beyond just the quality raters and how they view E-A-T. It’s something that you should consider for your site even if these quality raters didn’t exist.

Every website should make a point of either showing how it has a high E-A-T value or figuring out what it can do to increase it. Does that mean bringing contributors on board? Or do you merely need to update things like author bios and “About Me” pages? What can you do to show that you have the E-A-T that not only quality raters, but also your site’s general visitors, are looking for?

If you run a forum, can your posters show their credentials on publicly-visible profile pages, with additional profile fields for anything specific to the market area? This can really help to show expertise, and your forum contributors will appreciate being showcased as experts, too.

This comes back to the whole concept of quality content. When a searcher lands on your page and they can easily tell that it’s created by someone (or a company) with high E-A-T, this not only tells that searcher that this is great authoritative content, but they’re also that much more likely to recommend or share it with others. It gives them the confidence that they’re sharing trustworthy and accurate information in their social circles.

Fortunately for webmasters, Google does discuss how someone can be an authority with less formal expertise; they’re not looking for degrees or other formal education for someone to be considered an expert. Things like great, detailed reviews, experiences shared on forums or blogs, and even life experience are all things that Google takes into account when considering whether someone’s an authority.

Supplementary content

Supplementary content is where many webmasters are still struggling. Sometimes it’s not easy to add supplementary content, like sidebar tips, into something like your standard WordPress blog for those who are not tech-savvy.

However, supplementary content doesn’t have to require technical know-how. It can comprise things such as similar articles. There are plenty of plug-ins that allow users to add suggested content and can be used to provide helpful supplementary content. Just remember: the key word here is helpful. Things like those suggested-article ad networks, particularly when they lead to Zergnet-style landing pages, are not usually considered helpful.

Think about the additional supporting content that can be added to each page. Images, related articles, sidebar content, or anything else that could be seen as helpful to the visitor of the page is all considered supplementary content.

If you’re questioning whether something on the page can be considered supplementary content or not, look at the page — anything that isn’t either the main article or advertising can be considered supplementary content. Yes, this includes strong navigation, too.

Page design

By now you’d think this is a no-brainer, but there are still some atrocious page designs out there with horrible user experiences. But this goes much further than how easy the website is to use.

Google wants raters to consider the focus of the pages. Ideally, the main content of the page, such as the main article, should be “front and center” and the highlight of the page. Don’t make your user scroll down to see the article. Don’t have a ton of ads above the fold that push the content lower. And don’t try to disguise your ad content. These are all things that will affect the rating.

They do include a caveat: Ugly does not equal bad. There are some ugly websites out there that are still user-friendly and meet visitors’ needs; Google even includes some of them as examples of pages with positive ratings.

More on advertising & E-A-T

Google isn’t just looking for ads that are placed above the fold and in a position where one would expect the article to begin. They examine some other aspects as well that can impact the user experience.

Are you somehow trying to blend your advertising too much with the content of the page? This can be an issue. In Google’s words, ads can be present for visitors who may want to interact with them, but they should also be easy to ignore for those who aren’t interested.

They also want there to be a clear separation between advertising and content. This doesn’t mean you must slap a big “ads” label on them, or anything along those lines. But there should be a distinction that differentiates the ads from the main content. Most websites do this, but many try to blur the lines between ads and content to incite accidental clicks from those who don’t realize they’re clicking an ad.

All about the website

There are still a ton of websites out there that lack basic information about the site itself. Do you have an “About” page? Do you have a “Contact Us” page so that visitors can contact you? If you are selling a service or a product, do you have a customer service page?

If your site falls into the YMYL category, Google considers this information imperative. But if your site isn’t a YMYL page, Google suggests that just a simple email address is fine, or you can use something like a contact form.

Always make sure there’s a way for a visitor to find a little bit more about you or your site, if they’re so inclined. But be sure to go above and beyond this if it’s a YMYL site.

Reputation

For websites to get the highest possible rating, Google is looking at reputation as well. They ask the raters to consider the reputation of the site or author, and also ask them to do reputation research.

They direct the raters to look at Wikipedia and “other informational sources” as places to start doing reputation research when it comes to more formal topics. So if you’re giving medical advice or financial advice, for example, make sure that you have your online reputation listed in places that would be easy to find. If you don’t have a Wikipedia page, consider professional membership sites or similar sites to showcase your background and professional reputation.

Google also acknowledges that there are some topics where this kind of professional reputation isn’t available. In these cases, they say that the rater can look at things such as “popularity, user engagement, and user reviews” to discover reputation within the community or market area. This can often be represented simply by a site that is highly popular, with plenty of comments or online references.

What makes a page low-quality?

On the other end of the spectrum, we have pages that Google considers low-quality. And as you can imagine, a lot of what makes a page low-quality should be obvious to many in the SEO industry. But as we know, webmasters aren’t necessarily thinking from the perspective of a user when gauging the quality of their sites, or they’re looking to take advantage of shortcuts.

5 clues

Google does give us insight into exactly what they consider low-quality, in the form of five things raters should look for. Any one of these will usually result in the lowest ratings.

  1. The quality of the main content is low.
    This shouldn’t be too surprising. Whether it’s spun content or just poorly-written content, low-quality content means a low rating. Useless content is useless.
  2. There is an unsatisfying amount of main content for the purpose of the page.
    This doesn’t mean that short content cannot be considered great-quality content. But if your three-sentence article needs a few more paragraphs to fully explain what the title of that article implies or promises, then you need to rethink that content and perhaps expand it. Thin content is not your SEO friend.
  3. The author of the page or website doesn’t have enough expertise for the topic of the page, and/or the website is not trustworthy or authoritative enough for the topic. In other words, the page/website is lacking E-A-T.
    Again, Google wants to know that the person has authority on the subject. If the site isn’t displaying the characteristics of E-A-T, it can be considered low-quality.
  4. The website has a negative reputation.
    This is where reputation research comes back into play. Ensure you have a great online reputation for your website (or your personal name, if you’re writing under your own name). That said, don’t be overly concerned about it if you have a couple of negative reviews; almost every business does. But if you have overwhelmingly negative reviews, it will be an issue when it comes to how the quality raters see and rate your site.
  5. The supplementary content is distracting or unhelpful for the purpose of the page.
    Again, don’t hit your visitors over the head with all ads, especially if they’re things like autoplay video ads or super flashy animated ads. Google wants the raters to be able to ignore ads on the page if they don’t need them. And again, don’t disguise your ads as content.

Sneaky redirects

If you include links to affiliate programs on your site, be aware that Google does consider these to be “sneaky redirects” in the Quality Rater’s Guidelines. While there isn’t necessarily anything bad about one affiliate link on the page, bombarding visitors with those affiliate links can impact the perceived quality of the page.

The raters are also looking for other types of redirects. These include the ones we usually see used as doorway pages, where you’re redirected through multiple URLs before you end up at the final landing page — a page which usually has absolutely nothing to do with the original link you clicked.

Spammy main content

There’s a wide variety of things that Google is asking the raters to look at when it comes to quality of the main content of the page. Some are flags for what Google considers to be the lowest quality — things that are typically associated with spam. A lot of things are unsurprising, such as auto-generated main content and gibberish. But Google wants their raters to consider other things that signal low quality, in their eyes.

Keyword stuffing

While we generally associate keyword stuffing with content so heavy with keywords that it comes across as almost unreadable, Google also considers it keyword stuffing when the overuse of those keywords seems only a little bit annoying. So for those of you that think you’re being very clever about inserting a few extra keywords in your content, definitely consider it from an outsider’s point of view.

Copied content

This shouldn’t come as a surprise, but many people feel that unless someone is doing a direct comparison, they can get away with stealing or “borrowing” content. Whether you’re copying or scraping the content, Google asks the raters to look specifically at whether the content adds value or not. They also instruct them on how to find stolen content using Google searches and the Wayback Machine.

Abandoned

We still come across sites where the forum is filled with spam, where there’s no moderation on blog comments (so they’re brimming with auto-approved pharmaceutical spam), or where they’ve been hacked. Even if the content seems great, this still signals an untrustworthy site. If the site owner doesn’t care enough to prevent it, why should a visitor care enough to consider it worthy?

Scam sites

Whether a site is trying to solicit extensive personal information, is a known scam, or is a phishing page, these are all signs of a lowest-quality page. Also included are pages with suspicious download links. If you’re offering a download, make sure it looks as legitimate as possible, or use a third-party verified service for offering downloads.

Mobile-friendly

If you haven’t taken one of the many hints from Google to make your site mobile friendly, know that this will hurt the perceived quality of your site. In fact, Google tells their raters to rate any page that is not mobile-friendly (a page that becomes unusable on a mobile device) at the lowest rating.

In this latest version of the quality guidelines, all ratings are now being done on a mobile device. Google has been telling us over and over for the last couple of years that mobile is where it’s at, and many countries have more mobile traffic than desktop. So, if you still haven’t made your site mobile-friendly, this should tell you emphatically that it needs to be a priority.

If you have an app, raters are also looking at things like app installs and in-app content in the search results.

Know & Know Simple Queries

Google added a new concept to their quality guidelines this year. It comes down to what they consider “Know Queries” and “Know Simple Queries.” Why is this important? Because Know Simple Queries are the driving force behind featured snippets, something many webmasters are coveting right now.

Know Simple

Know Simple Queries are the types of searches that could be answered in either one to two sentences or in a short list. These are the types of answers that can be featured quite easily in a featured snippet and contain most of the necessary information.

These are also queries where there’s usually a single accepted answer that most people would agree on. These are not controversial questions or types of questions where there are two very different opinions on the answer. These include things such as how tall or how old a particular person is – questions with a clear answer.

These also include implied queries. These are the types of searches where, even though it’s not in the form of a question, there’s clearly a question being asked. For example, someone searching for “Daniel Radcliffe’s height” is really asking “How tall is Daniel Radcliffe?”

If you’re looking for featured snippets, these are the types of questions you want to answer with your webpages and content. And while the first paragraph may only be 1–2 sentences long as a quick answer, you can definitely expand on it in subsequent paragraphs, particularly for those who are concerned about the length of content on the page.

Know Queries

The Know Queries are all the rest of the queries that would be too complex or have too many possible answers. For example, searches related to stock recommendations or a politician wouldn’t have a featured snippet because it’s not clear exactly what the searchers are looking for. “Barack Obama” would be a Know Query, while “Barack Obama’s age” would be a Know Simple Query.

Many controversial topics are considered to be Know Queries, because there are two or more very different opinions on the topic that usually can’t be answered in those 1–2 sentences.

The number of keywords in the search doesn’t necessarily determine whether it is a Know Query or a Know Simple Query. Many long-tail searches would still be considered Know Queries.

Needs Met

Needs Met is another new section in the Quality Rater’s Guidelines. It looks at how well a search result meets the needs behind the searcher’s query. This is where sites trying to rank for queries they don’t have supporting content for will have a hard time, since those landing pages won’t satisfy what searchers are actually looking for.

Ratings for this range from “Fully Meets” to “Fails to Meet.”

The most important thing to know is that any site that is not mobile-friendly will get “Fails to Meet.” Again, if your site is not mobile-friendly, you need to make this an immediate priority.

Getting “Highly Meets”

Essentially, your page needs to be able to answer whatever the search query is. This means the searcher can find all the information they were looking for on your page, without having to visit other pages or websites for the answer. This is why it’s so crucial to make sure that your titles and keywords match your content, and that your content is of high enough quality to fully answer whatever searchers are looking for when your page surfaces in the SERPs.

Local Packs & “Fully Meets”

If your site is coming up in a local 3-pack, as long as those results in the 3-pack match what the query was, they can be awarded “Fully Meets.” The same applies when it’s a local business knowledge panel — again, provided that it matches whatever the search query is. This is where local businesses that spam Google My Business will run into problems.

Product pages

If you have a quality product page and it matches the search query, this page can earn “Highly Meets.” It can be for both more general queries — the type that might lead to a page on the business website that lists all the products for that product type (such as a listing page for backpacks) — or for a specific product (such as a specific backpack).

Featured snippets

Raters also look at featured snippets and gauge how well those snippets answer the question. We’ve all seen instances where a featured snippet seems quite odd compared to the search query, so Google seems to be testing how well its algorithm is choosing those snippets.

“Slightly Meets” and “Fails to Meet”

Google wants the raters to look at things like whether the content is outdated, or is far too broad or specific to what the page is primarily about. Also included is content that’s created without any expertise or has other signals that make it low-quality and untrustworthy.

Dated & updated content

There’s been a trend lately where webmasters change the dates on some of their content to make it appear more recent than it really is, even if they don’t change anything on the page. In contrast, others add updated dates to their content when they do a refresh or check, even when the publish date remains the same. Google now takes this into account and asks raters to check the Wayback Machine if there are any questions about the content date.

Heavy monetization

Often, YMYL sites run with heavy monetization. This is one of the things that Google asks the raters to look for, particularly if it’s distracting from the main content. If your page is YMYL, then you’ll want to balance the monetization with usability.

Overall

First and foremost, the biggest takeaway from the guidelines is to make your site mobile-friendly (if it’s not already). Without being mobile-friendly, you’re already missing out on the mobile-friendly ranking boost, which means your site will get pushed down further in the results when someone searches on a mobile device. Clearly, Google is also looking at mobile-friendliness as a sign of quality. You might have fabulous, high-quality content, but Google sees those non-mobile-friendly pages as low-quality.

Having confirmation about how Google looks at queries when it comes to featured snippets means that SEOs can take more advantage of getting those featured snippets. Gary Illyes from Google has said that you need to make sure that you’re answering the question if you want featured snippets. This is clearly what’s at the heart of Know Simple Queries. Make sure that you’re answering the question for any search query you hope to get a featured snippet on.

Take a look at your supplementary content on the page and how it supports your main content. Adding related articles and linking to articles found on your own site is a simple way to provide additional value for the visitor — not to mention the fact that it will often keep them on your site longer. Think usefulness for your visitors.

And while looking at that supplementary content, make sure you’re not going overboard with advertising, especially on YMYL sites. It can sometimes be hard to find the balance between monetization and user experience, but this is where looking closely at your monetization efforts and figuring out what’s actually making money can really pay off. It’s not uncommon to find that some ad units generate pennies a month and really aren’t worth cluttering up the page for fifty cents of monthly revenue.

Make sure you provide sufficient information to a visitor, or a quality rater, that can answer simple questions about your site. Is the author reputable? Does the site have authority? Should people consider the site trustworthy? And don’t forget to include things like a simple contact form. Your site should reflect E-A-T: Expertise, Authoritativeness and Trustworthiness.

Bottom line: Make sure you present the highest-quality content from highly reputable sources. The higher the perceived value of your site, the higher the quality ratings will be. While this doesn’t translate directly into higher rankings, doing well with regards to these guidelines can translate into the type of content Google wants to serve higher in the search results.


Tuesday, November 24, 2015

RankBrain Unleashed

Posted by gfiorelli1

Disclaimer: Much of what you’re about to read is based on personal opinion. A thorough reflection about RankBrain, to be sure, but still personal — it doesn’t claim to be correct, and certainly not “definitive,” but has the aim to make you ponder the evolution of Google.

Introduction

Whenever Google announces something as important as a new algorithm, I always try to hold off on writing about it immediately, to let the dust settle, digest the news and the posts that talk about it, investigate, and then, finally, draw conclusions.

I did so in the case of Hummingbird. I do it now for RankBrain.

In the case of RankBrain, this caution is even more warranted, because — let’s be honest — we know next to nothing about how RankBrain works. The only things Google has said publicly are in the video Bloomberg published and the few things unnamed Googlers told Danny Sullivan for his article, FAQ: All About The New Google RankBrain Algorithm.

Dissecting the sources

As I said before, the only direct source we have is the video interview published on Bloomberg.

So, let’s dissect that video and what Greg Corrado — senior research scientist at Google and one of the founding members and co-technical lead of Google’s large-scale deep neural networks project — said.

RankBrain is already worldwide.

I wanted to say this first: If you’re wondering whether or not RankBrain is already affecting the SERPs in your country, now you know — it is.

RankBrain is Artificial Intelligence.

Does this mean that RankBrain is our first evidence of Google as the Star Trek computer? No, it does not.

It’s true that many Googlers — like Peter Norvig, Corinna Cortes, Mehryar Mohri, Yoram Singer, Thomas Dean, Jeff Dean and many others — have been investigating and working on machine/deep learning and AI for a number of years (since 2001, as you can see when scrolling down this page). It’s equally true that much of the Google work on language, speech, translation, and visual processing relies on machine learning and AI. However, we should consider the topic of ANI (Artificial Narrow Intelligence), which Tim Urban of Wait But Why describes as: “Machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.”

Considering how Google is still buggy, we could have some fun and call it HANI (Hopefully Artificial Narrow Intelligence).

All jokes aside, Google clearly intends for its search engine to be an ANI in the (near) future.

RankBrain is a learning system.

With the term “learning system,” Greg Corrado surely means “machine learning system.”

Machine learning is not new to Google. We SEOs discovered how Google uses machine learning when Panda rolled out in 2011.

Panda, in fact, is a machine learning-based algorithm able to learn through iterations what a “quality website” is — or isn’t.

In order to train itself, it needs a dataset and yes/no factors. The result is an algorithm that is eventually able to achieve its objective.

Iterations, then, are meant to provide the machine with a constant learning process, in order to refine and optimize the algorithm.
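To make the "dataset plus yes/no factors" idea concrete, here is a toy sketch of the general pattern: human yes/no quality judgments train a classifier, which can then score unseen sites, and each iteration retrains on more labeled data. This is purely illustrative (scikit-learn, made-up features) and says nothing about how Panda is actually implemented.

    from sklearn.linear_model import LogisticRegression

    # Made-up features per site: [ad density, share of thin pages, avg. words per page / 1000]
    X = [
        [0.6, 0.8, 0.2],   # rated "not a quality website"
        [0.1, 0.1, 1.5],   # rated "quality website"
        [0.5, 0.7, 0.3],
        [0.2, 0.0, 1.1],
    ]
    y = [0, 1, 0, 1]  # the human yes/no judgments

    model = LogisticRegression().fit(X, y)

    # A later iteration would retrain on a bigger labeled dataset, refining the learned boundary.
    print(model.predict_proba([[0.3, 0.4, 0.6]]))  # probability a new site looks like a "quality website"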

Hundreds of people are working on it, and on building computers that can think by themselves.

Uhhhh… (Sorry, I couldn’t resist.)

RankBrain is a machine learning system, but — from what Greg Corrado said in the video — we can infer that in the future, it will probably be a deep learning one.

We do not know when this transition will happen (if ever), but assuming it does, then RankBrain won’t need any input — it will only need a dataset, over which it will apply its learning process in order to generate and then refine its algorithm.

Rand Fishkin visualized in a very simple but correct way what a deep learning process is:

Remember — and I repeat this so there’s no misunderstanding — RankBrain is not (yet) a deep learning system, because it still needs inputs in order to work. So… how does it work?

It interprets languages and interprets queries.

Paraphrasing the Bloomberg interview, Greg Corrado gave this information about how RankBrain works:

It works when people make ambiguous searches or use colloquial terms, and it tries to solve a classic breakdown for computers, which don’t understand those queries or have never seen them before.

We can consider RankBrain to be the first 100% post-Hummingbird algorithm developed by Google.

Even if we had some new algorithms rolling out after the Hummingbird release (e.g. Quality Update), those were based on pre-Hummingbird algos and/or were serving a very different phase of search (the Filter/Clustering and Ranking ones, specifically).

Credit: Enrico Altavilla

RankBrain seems to be a needed “patch” to the general Hummingbird update. In fact, we should remember that Hummingbird itself was meant to help Google understand “verbose queries.”

However, as Danny Sullivan wrote in the above-mentioned FAQ article at Search Engine Land, RankBrain is not a sort of Hummingbird v.2, but rather a new algorithm that “optimizes” the work Hummingbird does.

If you look at the image above while reading Greg Corrado’s words, we can say with a high degree of confidence that RankBrain acts in between the “Understanding” and “Retrieving” phases of the overall search process.

Evidently, the too-ambiguous queries and the ones based on colloquialisms were too hard for Hummingbird to understand — so much so, in fact, that Google needed to create RankBrain.

RankBrain, like Hummingbird, generalizes and rewrites those kinds of queries, trying to match the intent behind them.

In order to understand a never-before-seen or unclear query, RankBrain uses vectors, which are — to quote the Bloomberg article — “vast amounts of written language embedded into mathematical entities,” and it tries to see if those vectors may have a meaning in relation to the query it’s trying to answer.
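As a toy illustration of what "vast amounts of written language embedded into mathematical entities" could mean in practice, here is a sketch that compares a query against known phrasings by cosine similarity of made-up vectors. Real systems learn such embeddings from huge corpora; this says nothing about RankBrain's actual internals.

    import numpy as np

    # Made-up 3-dimensional "embeddings"; real ones have hundreds of dimensions learned from text.
    vectors = {
        "how tall is the obelisk in st peter's square": np.array([0.90, 0.10, 0.30]),
        "height of the vatican obelisk": np.array([0.88, 0.15, 0.28]),
        "best pizza near the vatican": np.array([0.10, 0.90, 0.40]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = vectors["how tall is the obelisk in st peter's square"]
    for text, vec in vectors.items():
        print(round(cosine(query, vec), 3), text)
    # The semantically similar phrasing scores close to 1.0; the unrelated query scores much lower.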

Vectors, though, don’t seem to be a completely new feature in the general Hummingbird algorithm. We have evidence of a very similar thing in 2013 via Matt Cutts himself, as you can see from the Twitter conversation below:

At that time, Google was still a ways from being perfect.

Upon discovering web documents that may answer the query, RankBrain retrieves them and lets them proceed, following the steps of the search phase until those documents are presented in a visible SERP.

It is within this context that we must accept the definition of RankBrain as a “ranking factor,” because in regards to the specific set of queries treated by RankBrain, this is substantially the truth.

In other words, the more RankBrain considers a web document to be a potentially correct answer to an unknown or not understandable query, the higher that document will rank in the corresponding SERP — while still taking into account the other applicable ranking factors.

Of course, it will be the choice of the searcher that ultimately informs Google as to what the answer to that unclear or unknown query is.

As a final note, necessary in order to head off the claims I saw when Hummingbird rolled out: No, your site did not lose visibility because of a mysterious RankBrain penalty.

Dismantling the RankBrain gears

Kristine Schachinger, a wonderful SEO geek whom I hold in deep esteem, relates RankBrain to Knowledge Graph and Entity Search in this article on Search Engine Land. However — while I’m in agreement that RankBrain is a patch of Hummingbird and that Hummingbird is not yet the “semantic search” Google announced — our opinions do differ on a few points.

I do not consider Hummingbird and Knowledge Graph to be the same thing. They surely share the same mission (moving from strings to things), and Hummingbird uses some of the technology behind Knowledge Graph, but still — they are two separate things.

This is, IMHO, a common misunderstanding SEOs have. So much so, in fact, that I even tend to not consider the Featured Snippets (aka the answers boxes) part of Knowledge Graph itself, as is commonly believed.

Therefore, if Hummingbird is not the same as Knowledge Graph, then we should think of entities not only as named entities (people, concepts like “love,” planets, landmarks, brands), but also as search entities, which are quite different altogether.

Search entities, as described by Bill Slawski, are as follows:

  • A query a searcher submits
  • Documents responsive to the query
  • The search session during which the searcher submits the query
  • The time at which the query is submitted
  • Advertisements presented in response to the query
  • Anchor text in a link in a document
  • The domain associated with a document

The relationships between these search entities can create a "probability score," which may determine whether a web document is shown in a given SERP or not.

We cannot exclude the possibility that RankBrain utilizes search entities in order to find the most probable and correct answers to a never-before-seen query, then uses the probability score as a qualitative metric in order to offer reasonable, substantive SERPs to the querying user.

The biggest advancement with RankBrain, though, is in how it deals with the quantity of content it analyzes in order to create the vectors. It seems bigger than the classic “link anchor text and surrounding text” that we always considered when discussing, for instance, how the Link Graph works.

There is a patent filed by Google that lists one of the AI experts cited by Greg Corrado — Thomas Strohmann — as an author.

That patent, very well explained (again) by Bill Slawski in this post on Gofishdigital.com, describes a process through which Google can discover potential meanings for otherwise non-understandable queries.

In the patent, huge importance is attributed to context and “concepts,” and the fact that RankBrain uses vectors (again, “vast amounts of written language embedded into mathematical entities”). This is likely because those vectors are needed to secure a higher probability of understanding context and detecting already-known concepts, thus resulting in a higher probability of positively matching those unknown concepts it’s trying to understand in the query.

Speculating about RankBrain

As the section title says, I'm now entering the most speculative part of this post.

What I wrote before, though it may also be considered speculation, has the distinct possibility of being true. What I am going to write now may or may not be true, so please, take it with a grain of salt.

DeepMind and Google Search

In 2014, Google acquired DeepMind, a company specializing in learning systems. I cannot help but think that some of its technology, and the evolutions of that technology, are being used by Google to improve its search algorithm — hence the machine learning process of RankBrain.

This article, published last June on technologyreview.com, explains in detail how the lack of a correctly formatted database is the biggest obstacle to an effective machine and deep learning process. Without one, the neural computing behind machine and deep learning cannot work.

In the case of language, then, having "vast amounts of written language" is not enough if there's no context, especially without n-grams the machine can use to understand it.

However, Karl Moritz Hermann and some of his DeepMind colleagues described in this paper how they were able to discover the kind of annotations they were looking for in classic “news highlights,” which are independent from the main news body.

Allow me to quote the Technology Review article in explaining their experiment:

Hermann and co anonymize the dataset by replacing the actors in sentences with a generic description. An example of some original text from the Daily Mail is this: “The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the “Top Gear” host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon “to an unprovoked physical and verbal attack.”

An anonymized version of this text would be the following:

The ent381 producer allegedly struck by ent212 will not press charges against the “ent153” host, his lawyer said friday. ent212, who hosted one of the most – watched television shows in the world, was dropped by the ent381 wednesday after an internal investigation by the ent180 broadcaster found he had subjected producer ent193 “to an unprovoked physical and verbal attack.”

In this way it is possible to convert the following Cloze-type query to identify X from “Producer X will not press charges against Jeremy Clarkson, his lawyer says” to “Producer X will not press charges against ent212, his lawyer says.”

And the required answer changes from "Oisin Tymon" to "ent193."

Written this way, the anonymized actor can only be identified through some kind of understanding of the grammatical links and causal relationships between the entities in the story.
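As a toy illustration of that anonymization step (my own sketch, not DeepMind's actual pipeline — they detect and number entities automatically rather than reading them from a hand-written list), the substitution itself is simple:

    import re

    # Hypothetical, hand-written entity map; DeepMind derived these automatically
    # through coreference and entity detection.
    entity_ids = {
        "Jeremy Clarkson": "ent212",
        "BBC": "ent381",
        "Top Gear": "ent153",
        "British": "ent180",
        "Oisin Tymon": "ent193",
    }

    def anonymize(text):
        """Replace every known entity mention with its opaque ID."""
        for name, ent_id in entity_ids.items():
            text = re.sub(re.escape(name), ent_id, text)
        return text

    sentence = "Producer X will not press charges against Jeremy Clarkson, his lawyer says"
    print(anonymize(sentence))
    # -> "Producer X will not press charges against ent212, his lawyer says"
    # The model must now answer "ent193" (the producer) from structure alone,
    # not from any prior knowledge about the real people involved.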

Using the Daily Mail, Hermann was able to provide a large, useful dataset to the DeepMind deep learning machine, and thus train it. After the training, the computer was able to correctly answer up to 60% of the questions asked.

Not a great percentage, we might be thinking. Besides, not all documents on the web are presented with the kind of highlights the Daily Mail or CNN sites have.

However, let me speculate: What are the search index and the Knowledge Graph if not a giant, annotated database? Would it be possible for Google to train its neural machine learning computing systems using the same technology DeepMind used with the Daily Mail-based database?

And what if Google were experimenting and using the Quantum Computer it shares with NASA and USRA for these kinds of machine learning tasks?

Or… What if Google were using all the computers in all of its data centers as one unique neural computing system?

I know, science fiction, but…

Ray Kurzweil’s vision

Ray Kurzweil is usually known for the “futurist” facets of his credentials. It’s easy for us to forget that he’s been working at Google since 2012, personally hired by Larry Page “to bring natural language understanding to Google.” Natural language understanding is essential both for RankBrain and for Hummingbird to work properly.

In an interview with The Guardian last year, Ray Kurzweil said:

When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.

The DeepMind technology I cited above seems to be going in that direction, even though it’s still a non-mature technology.

The biggest problem, though, is not really being able to read billions of documents, because Google is already doing that (go read the EULA of Gmail, for instance). The biggest problem is understanding the implicit meaning within the words, so that Google may properly answer users' questions, or even anticipate the answers before the questions are asked.

We know that Google is hard at work on this, because Kurzweil himself told us as much in the same interview:

“We are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.”

The vectors used by RankBrain may be our first glimpse of the technology Google will end up using for understanding all context, which is fundamental for giving a meaning to language.

How can we optimize for RankBrain?

I’m sure you’re asking this question.

My answer? This is a useless question, because RankBrain targets non-understandable queries and those using colloquialisms. Therefore, just as it’s not very useful to create specific pages for every single long-tail keyword, it’s even less useful to try targeting the queries RankBrain targets.

What we should do is insist on optimizing our content using semantic SEO practices, in order to help Google understand the context of our content and the meaning behind the concepts and entities we are writing about.

What we should do is consider the factors of personalized search as priorities, because search entities are strictly related to personalization. From this perspective, branding is surely a strategy that may correlate positively with how RankBrain and Hummingbird interpret and classify web documents and their content.

RankBrain, then, may not mean that much for our daily SEO activities, but it is offering us a glimpse of the future to come.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Monday, November 23, 2015

We're Pleased to Announce Moz Content!

Posted by JayLeary

Stating the obvious here, but content is a massively important part of any inbound marketing campaign. The problem that most of us run into — and I know this well from years of SEO consulting with publishers — is that even “good” content can fade from view without a share, link, or conversion. Engaging an audience isn’t as simple as clicking “publish.”

So, how do we avoid making phantom content a habit?

For Moz, timely data has been a big part of the answer. Over the years, we’ve built internal tools like 1Metric to guide our work. It’s a simple strategy, but the more analysis we perform, the better we understand our audience. The better we understand our audience, the easier it is to produce engaging content.

When we blog and talk about those tools, folks in the community remind us that having something similar for their own use would be really helpful.

Well, we took that feedback to heart and, about a year ago, set out to create a product that helps marketers and content creators optimize their content efforts. Now, after lots of hard work, we’re ready to roll back the curtain on our latest offering: Moz Content.

Here’s a quick overview of we came up with…

The Content Audit

At the heart of Moz Content is the Content Audit. With an Audit, you can crawl and analyze any site, including a competitor’s. The Audit inventories a site’s pages and uncovers wins based on social and link activity… In other words, the basic analysis you’re probably already cobbling together in Excel.

More importantly, Moz Content helps you find meaning in that mess of data with automatic tagging and filtering based on topics, authors, and even content types (think lists, videos, news articles, and more). With an Audit, you can answer important questions about a site’s strategy, like:

How do Guides on the Moz Blog stack up against Lists?


Average links and shares are almost double for Guides. Let’s keep it up!

The filtering lets you segment content to easily surface insights about your current strategy. Are “social media” or “link building” articles generating more links? How do Whiteboard Fridays compare with other videos? Audits let you shortcut the analysis and answer pressing questions about your audience’s interests.

That point-in-time analysis is helpful when you’re researching or course-correcting, but we also know that ongoing performance reporting is critical to a content marketer’s workflow. That’s where Tracked Audits come in.

With a Tracked Audit, Moz Content will automatically re-crawl a site every week and trend your performance metrics. Then, with the handy Audit Selector, you can compare the Audits we’ve archived in order to measure your progress.

By comparing two Audits, you can easily surface gains or losses and learn if your latest efforts are resonating.

Content Search

When we built Moz Content, we knew that we’d need to help sites at both ends of the content creation spectrum. Tracked Audits are great if a site has an active audience, but if you’re just getting started, the focus is usually on research. That’s where Content Search comes in.

Content Search lets you explore popular articles from across the web with simple topic searches. Interested in SEO and content? A quick search for (no surprises here) “SEO AND content” surfaces competitor articles that have garnered lots of attention.

(You can also search for content on domains with the “site:” operator.)

Moz Content monitors hundreds of thousands of English-language sites in order to surface new content about the niches you play in. Use the tool to analyze competitors or research topics that are important to your audience.

For social media marketers, Content Search also helps with curation. After you find something interesting, you can share it directly with your followers:

It’s worth mentioning that our index is still growing and you may see some gaps in the reporting. If that’s the case, feel free to reach out with topics you’d like covered in the future.

And a final note: you’ll probably notice we’re not reporting Twitter shares. Twitter, as of a few days ago, shut down the endpoint that many of us were using to measure Tweet counts. We didn’t want this wrinkle to hold up the launch, but we’re on the case and working on alternatives.

Time to test drive

There are other details we could cover, but I’m guessing you’d rather just dive in and see for yourself. With Moz Content, we’re providing free, limited access to the Audit and Content Search. Just head over to https://moz.com/content and take it for a spin. (Tip: Log in to your Community account first for elevated page limits, more searches, and a saved Audit.)

If you need more data and higher limits, you can always subscribe to Moz Content on a monthly or annual plan. The Strategists tier goes for $59/month and we’ll be adding higher limit tiers with Google Analytics integration soon.

This is just the beginning for Moz Content — we’ll need your help as we improve and expand upon the functionality. Don’t hesitate to let us know what you’d like to see, and feel free to send any feedback our way with a comment below, a note to our Help Team, or outreach on social.


5 Steps to Content Marketing Success

Posted by Paddy_Moogan

Content marketing is hard.

The problem is that the process looks easy. You brainstorm some ideas, choose one that you like, design and build it, do some outreach and you get traffic, links and social shares. Job done.

It’s a bit like link building, where someone may say, “Just build great content and the links will come.”

Unfortunately, it’s very rarely that straightforward.

Yes, sometimes you can get lucky and something will fly with little effort. But anyone that says that content marketing is easy has probably never done it over and over again. This is one of the reasons that I really liked this post last week by Simon Penson, because he admitted that he’d failed many times before getting it right. Simon pointed out that the plan he shared just increases the possibility of success — it doesn’t guarantee it.

In this post, I’m going to share our process for putting together a content marketing campaign. It doesn’t guarantee success either, but I’m positive that it puts us in a much better position than if we didn’t have a process at all. We’re always trying to improve this process, and it’s never going to be 100% perfect. With each campaign we do, there’s usually something we add or take away which also reflects the ever-changing nature of our industry. It’s also hard to manufacture and force that “ah-ha” moment, when you get a great insight into something and which then generates a great idea. Although this slide deck by Mark Johnstone helps make sense of how we get those moments in an excellent way.

One thing to point out before we get into the meat of this post is that it’s not just about “big” content. Our role as digital marketers (many of us with an SEO background) goes much wider than content that is purely designed to generate links and social shares.

A content strategy needs to include more than just one type of content, and for most clients at Aira, we do multiple types of content based on their objectives. But that’s a post for another day, because today I’m going to talk about our process in the context of content that is designed to generate links and social shares, driving traffic as a result.

There are five broad steps in the process:

  1. Research and idea generation
  2. Idea validation
  3. Production
  4. Promotion
  5. Conversion

Step 1 – Research and idea generation

It’s easy to dive straight into brainstorming and idea generation, and sometimes, that can work. However, I’d always recommend a period of research into an industry prior to this so that you can get a feel for what’s been done before, what has worked, and what hasn’t. This can mean that you go into a brainstorming section far better equipped to generate ideas that may work.

One thing to point out at this stage is that you shouldn't put yourself under pressure to come up with a completely new idea. It's great if you can, but the reality is that it's unlikely that something has never, ever been done before in some form or another. The following quote is an apt one:

“An idea is nothing more or less than a new combination of old elements.”

This is from the book A Technique for Producing Ideas, published in 1939 by James Webb Young. It’s a short — but excellent — read, and I’d highly recommend it.

I think you’d agree that over 75 years later, this quote is even more true now!

Therefore, a big part of the thinking behind our process is looking for inspiration in what others have done and asking ourselves if we can do it a little bit better or a bit differently. I’m certainly not saying you shouldn’t try to come up with brand new ideas, but don’t let an idea fall by the wayside just because it has been done before.

I’m going to frame the rest of this step by saying something: The most successful* content that you find will come down to at least one of three things:

  1. The story – If something has a strong story or hook behind it, it’s more likely to grab attention and be picked up by mainstream news websites and publishers.
  2. The data – Often tied into the story but not always explicitly, if you have unique data or data that has been sliced/interpreted differently, it can be of more interest to someone.
  3. The production – Sometimes a piece of content may just look visually stunning, and that is enough to generate links and shares.

There is one more factor I want to point out, but it's been deliberately left out of the list above. The other thing that can make a piece of content successful is an existing audience to market that content to. A prime example in our industry is Moz, which has a very large existing audience. This means that this very blog post is more likely to get links and shares than it would if I published it on my own blog, which has a very small audience.

This is important to remember because, when looking at your competitors and the success of their content, the numbers may be skewed a bit because of the audience they have. I’ll show you how to offset this below.

* Successful, in the context of this post, means generating links and social shares that drive quality traffic. Success can mean many things to different businesses, so I just wanted to remind you of this.

Find your content competitors

The first key step is to research your content competitors, and it’s very important to recognize the difference between your product/service competitors and your content competitors. Let’s look at an example.

Let’s say you’re a travel website. You may be trying to rank for keywords such as “flights to New York” or “holiday apartments in Italy” because you provide those things. You’ll have competitors who are trying to rank for the same kind of keywords and of course, you should take a look at what they’re up to. However, there is a whole other section of websites who don’t compete for these type of keywords, but whom you can learn a lot from when it comes to content. In this example, those websites are travel bloggers and publishers who have travel sections. They produce the exact kind of content that generates links, social shares and traffic — exactly what we’re trying to do with our own websites.

Examples in the travel world could be Nomadic Matt and Jayne Gorman, who both produce great content that generates links and social shares. If I ran a travel website and wanted to learn what content can work well in my industry, I'd definitely take a closer look at these kinds of people for inspiration. They may even be people I could partner with on content ideas, but that's a bit outside the scope of this post.

It’s pretty simple to find our content competitors. The quickest way is to think of a few non-commercial keywords. Examples related to travel may be “guide to New York City” or “planning a trip to Italy,” which are likely to show search results that include publishers/blogs as opposed to direct competitors. You can also use the keyword search function in Buzzsumo to do these kind of searches:

The results will show you content that contains this keyword, ordered by social shares.

If you’re not familiar with Buzzsumo and would like to learn the basics, take a look at this post that I wrote on Moz a few months ago, which talks about this and shows how we use the API for one of our internal tools at Aira.

Finding stories and topics

Once you’ve found a handful of content competitors (we try and find at least 4–5), it’s time to start taking a closer look at what they’re doing. Buzzsumo allows us to do this quickly and easily; all we need to do is run a domain search and use an advanced search operator to search multiple domains at once:

You just need to put OR in between the domains that you want to do research on. The resulting search looks like this:

As you can see, Nomadic Matt is dominating the results, which is likely to be because of a combination of writing great content and having a larger audience than the other websites we searched for. This is a good example of where we may actually want to temporarily remove him from the list, so that we see a more diverse set of results. However, you can also just download a CSV from Buzzsumo and filter his domain out if you wish.

The important step here is to scan the list of results to try and find patterns and trends. In the screenshot below, I can immediately see a pattern:

Some of the best-performing posts are lists. We can see this quickly by noticing the numbers at the start of the title. Going a bit further down, I notice another pattern:

Lots of these posts are "How to"-style posts, which are clearly popular with his audience, given how high they feature in the list of results.

It doesn’t take long to start noticing these patterns. Make a note of them and we’ll come back to how we’re going to use them later.

Another way to find patterns is to analyze the titles in bulk. We can do this by doing an export from Buzzsumo so that we get a list of titles:

You can then copy and paste these titles into a word cloud generator tool, such as Wordle, and get something like this:

You’ll need to remove common words, such as the website names and domains, but the result above is basically a summary of the words that get the most shares — which is really handy to know in bulk. Again, make a note of these kind of themes.

I know what you may be thinking at this point: What about links? Buzzsumo can give you backlink data, but you have to click on each individual result to get it. This is fine for a small number of articles, but we’re trying to do bulk analysis. So instead, we’re going to export the results to a CSV and then upload those results into URL Profiler, which can fetch link metrics for us in bulk.

These are the settings you want:

You can select your choice of Mozscape, Majestic, or Ahrefs data, or all three — it’s up to you. The point is that we need to know how many links our content competitors are generating to their individual content pieces. The results will then look something like this when you export the results to Excel:

Once you’ve got this, you can do some pivot table magic to make the data easier to consume. Here are the settings that you need:

Then you’ll end up with a graph that looks something like this (you can, of course, make it prettier!):

As we can see, A Luxury Travel Blog is leading the way in terms of generating links to their content, so they’re worthy of a closer look. The beauty of this process is that Buzzsumo does a pretty good job of excluding the homepage from their results, so the results are showing links just to the content they produce. From here, we can do a deeper dive into their links using Mozscape, Majestic, or Ahrefs — whichever you prefer.
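If you prefer code to Excel for that aggregation step, here's a rough pandas sketch of the same merge-and-pivot. The file names and columns ("total_shares", "linking_root_domains", and so on) are assumptions, so swap in whatever your Buzzsumo and URL Profiler exports actually contain:

    from urllib.parse import urlparse
    import pandas as pd

    # Hypothetical file and column names; adjust to match your actual exports.
    shares = pd.read_csv("buzzsumo_export.csv")        # one row per article
    links = pd.read_csv("url_profiler_export.csv")     # link metrics per URL

    merged = shares.merge(links, left_on="url", right_on="URL", how="left")
    merged["domain"] = merged["url"].map(lambda u: urlparse(str(u)).netloc)

    # Total links and shares per content competitor, biggest link earners first.
    summary = merged.pivot_table(
        index="domain",
        values=["total_shares", "linking_root_domains"],
        aggfunc="sum",
    ).sort_values("linking_root_domains", ascending=False)

    print(summary)

The output is the same per-domain comparison as the pivot table above, which makes it easy to re-run every time you refresh the export.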

Before moving on, I want to mention a few other tools that we use in this step of the process. Epic Beat is very similar to Buzzsumo in that you can enter a domain or keyword to find what content is being shared the most. Combining the results from Epic Beat and Buzzsumo can give you lots of information on what is working for competitors in your industry:

Another cool tool — which is more for qualitative analysis than quantitative — is Brandtale, which curates digital content/advertising campaigns on large publishers. Sticking with our travel example, I can browse their travel section to find brands who are running campaigns:

I can drill into any of these and see what these brands are doing and if I can learn anything. Trust me, running content campaigns like this on large publishers, such as National Geographic or the New York Times, is expensive. A lot of work will have gone into them, which means they’re worth looking at.

Finding data sources

Our next step is to try and find data sources that could lead to us creating a piece of content or a story that can be pitched to publications. I’d highly recommend Statista for this, which is a growing resource of statistics and facts. Sticking with our travel example, here is a snapshot of the kind of data it has available with a simple search:

If Statista doesn’t have what you need, a few simple searches on Google will often yield good results. Just remember to do a bit of due diligence on where the data comes from and make sure that it’s as sound as possible.

Failing that, can you get your own data? There are many organizations and services out there who will gather data on your behalf. Yes, you have to pay for them, but if you think the data can help you generate links and shares for your website, then it could be worth the investment. Here are a few options:

  • Google Consumer Surveys
  • One Poll
  • You Gov
  • Toluna
  • Usurv
  • Survata

Some of these can be expensive to use, so I recommend using something like Google Consumer Surveys to poll a small sample of people. Then, if the data is looking promising, run the full survey.

Finding visual content

The final piece of this research is finding visual content which has done well and seeing if we can do better. Like finding data, don’t overthink this, and start with a few simple searches. Google Images is always a good place to start with keywords such as this:

You can get more specific based on the website you’re working with, but what we’re looking to do here is scan the results quickly and see if anything stands out to us:

If we find any that look particularly good or interesting, we take a closer look and ask ourselves the question, "Can we do it better?" While some visuals may look okay and have performed well, there are often ways to improve on something, such as:

  • Making the core story or headline more obvious
  • Making it interactive to make the key messages easier to consume
  • Making the design cleaner so that key messages are communicated better

There are any number of ways a good designer can make an existing idea much better, and as we discussed earlier, making something beautiful can sometimes be enough to make it successful.

Once we’ve found something we think we can do better, the next question is how successful was it? One way to do this is to use this Google Chrome extension to automatically do a Google Image reverse search to see how many other websites have used that visual. If the answer is a fair few, then you know that a better version is likely to be of interest to a number of websites.

Putting all of this research together

That was a lot to go through! But trust me, it’s worth it. The next step is to take all of this information and put it into a brainstorm session brief for your team. When it comes to brainstorming, many people will say “all ideas are good ideas” — but this simply isn’t true.

A brief is very important here, because your team needs to walk into that session with the right information and context. If they don’t, then the majority of ideas that are generated may not actually be usable — which isn’t a very good use of time.

To make this easier, I’ve put together a Google Doc template which you’re welcome to download and make a copy of. You can find it here.

Step 2 – Idea validation

The more I work on content marketing campaigns, the more I value this step in the process. You can think that you have a great idea, but how do you know for sure? The fact is that you can never predict this 100%, but you can increase the possibility by using a framework to validate an idea.

The key thing here is not the specific frameworks that I talk about below, but to make sure you use some kind of framework so that you can consistently and fairly assess the quality of your ideas.

One of the frameworks I’d recommend, which some of you may have heard of, is from Made to Stick by Chip and Dan Heath. I’m not going to go into too much detail here simply because lots has already been written on the topic, including this post from Distilled and this more recent post by Hannah Smith, which references the framework. There is also this summary of the book, which talks about the key takeaways and what the principles of Made to Stick are.

In summary, the book outlines six principles which, through their research, the authors feel make an idea stick in our minds.

  1. Simplicity – An idea needs to be easy for us to comprehend quickly. A good way to test this is to write the headline and see if you can communicate the idea within the constraints of a headline (i.e. you only have a short sentence).
  2. Unexpectedness – While the idea doesn’t have to be 100% brand new, there needs to be something new or unexpected about it.
  3. Concreteness – This can often be mixed up with simplicity, but is subtly different. Concreteness is all about the idea not allowing room for ambiguity or misinterpretation of what you’re trying to say.
  4. Credibility – The basis of the idea needs to be credible. This can be via credible data, a credible (expert) author or a credible company behind the idea.
  5. Emotion – If an idea provokes an emotional response, we’re much more likely to remember it.
  6. Story – We touched upon this earlier, and it goes back to when we were children. We were told stories, and all of us can remember certain ones. We're used to the structure of a story and how it piques our interest.

The key here isn’t the framework itself, although that is very important. The key is the ability for you and your team to give each other valuable, constructive feedback on an idea. It’s often easy to just say “I don’t like that idea” or “That idea won’t work,” which, even if you’re right, isn’t the most useful feedback to receive. With a good framework, someone can reference it in their feedback. So if you’re using the framework above, you can say “I don’t think the idea is simple” or “It’s not concrete enough” — this is far more useful feedback to hear and it may mean that an idea simply needs tweaking rather than dumping completely.

As mentioned earlier, this isn’t the only framework you can use. Another one goes back to what we talked about earlier:

  1. Do we have a story or an interesting hook?
  2. Do we have unique, interesting data?
  3. Can we make the idea look beautiful?

Answering yes to at least one of these questions can increase the chances of your idea being a success.
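If it helps to keep the scoring consistent across your team, you can even capture a framework in something as simple as the sketch below. The principles come from Made to Stick; the 0–2 scoring and the cut-off are purely illustrative assumptions, not part of the book:

    # Score an idea 0-2 on each Made to Stick principle, then total it up.
    principles = ["simplicity", "unexpectedness", "concreteness",
                  "credibility", "emotion", "story"]

    def score_idea(name, scores):
        """scores: dict mapping each principle to 0 (weak), 1 (okay), or 2 (strong)."""
        total = sum(scores.get(p, 0) for p in principles)
        verdict = "worth producing" if total >= 8 else "needs more work"  # arbitrary cut-off
        return f"{name}: {total}/{len(principles) * 2}: {verdict}"

    print(score_idea("Interactive map of overlooked NYC neighbourhoods", {
        "simplicity": 2, "unexpectedness": 1, "concreteness": 2,
        "credibility": 1, "emotion": 1, "story": 2,
    }))

Whether or not you formalize it this far, the value is the same: every idea gets judged against the same criteria, and feedback points at a specific principle rather than a gut feeling.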

Step 3 – Production

I’m not a designer or a developer, so I’m not going to tell you how to design or develop a piece of content. But there are some things that we’ve learned (sometimes the hard way) when it comes to producing a piece of content.

Function over form

The first thing I want to share here which is important is to remember function over form. Never, ever say “I want an infographic” or “I want a video” or “I want an interactive piece of content.” You should focus on getting the right idea first, then ask what the best way to present that idea is. If it turns out that an infographic is the best way to present your idea, then great. But don’t start with the form; start with the idea and see where it takes you.

This may help reduce the number of terrible infographics on the web which, unfortunately, our industry is at least partly responsible for!

Mobile-first design

There are stats upon stats showing the growth of mobile, so I’m not going to tell you those again. If you want to do some digging, I’d highly recommend the work and analysis from Ben Evans, who specializes in this area.

In relation to content, what we need to remember is that content discovery is becoming more and more mobile-centric. We typically think of content discovery as someone browsing on their laptop/desktop machines and clicking through from a blog, Twitter, or Facebook. In reality, though, it actually looks more like this:

When someone clicks on a link like this on their mobile device, they expect the content they land on to work perfectly on their device. If it doesn't, the user is not likely to enjoy or engage with the content, let alone share it or link to it from somewhere.

This deck from Vicke Cheung does a great job of showing the importance of designing for mobile, along with practical tips for doing this:

Ten Lessons in Designing Content for Mobile from Vicke Cheung

Another key thing here is to let designers design. Try not to restrict them by providing a brief that tells them 100% how something needs to be done. Give them the goals of the piece and some guidelines, then let them design. Of course give them feedback along the way, but try not to be too prescriptive.

Go-live checklist

One of the lessons we’ve learned the hard way is that in your excitement to get something live, you can forget some of the basics. A few common things that need to be thought about, but are easily forgotten, can be:

  • Social/Open Graph tagging
  • Analytics code
  • Responsive testing

To help with this, here is another Google Doc that you can download and use, which contains a few things to remember:

While the things on this list seem basic, they can be very easy to forget!
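You can even automate part of this checklist. Here's a rough sketch (assuming the requests and beautifulsoup4 libraries are installed, and using a placeholder analytics ID) that flags missing Open Graph tags, analytics code, and the viewport meta tag on a staging URL before you push a piece live:

    import requests
    from bs4 import BeautifulSoup

    def golive_checks(url, analytics_id="UA-XXXXXXX"):
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        # Open Graph tags that social networks read when the piece gets shared.
        required_og = ["og:title", "og:description", "og:image", "og:url"]
        present = {tag["property"] for tag in soup.find_all("meta", property=True)}
        for prop in required_og:
            print(f"{prop}: {'OK' if prop in present else 'MISSING'}")

        # Very rough check that the analytics snippet made it onto the page.
        print("analytics:", "OK" if analytics_id in html else "MISSING")

        # Viewport meta tag is a minimum requirement for mobile rendering.
        viewport = soup.find("meta", attrs={"name": "viewport"})
        print("viewport meta:", "OK" if viewport else "MISSING")

    golive_checks("https://example.com/staging/our-new-content-piece/")

It won't replace a proper responsive test on real devices, but it catches the "oops, we forgot the tags" moments cheaply.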

Step 4 – Promotion

Here is one of the key takeaways: Spend just as much time on promotion as you do on production. It's so easy to get caught up in design, development, and the idea itself that you can end up spending most of your time on producing it and not nearly enough time on promoting it.

There are three different types of promotion we work on at Aira: paid promotion, earned promotion, and owned promotion (each covered below). The mix differs by client, but ideally, we spend time on all of them.

A combination of all three can help ensure that your content reaches as many people as possible. I used to rely solely on organic content promotion via traditional link building outreach/digital PR, but this may not be enough and ignores some useful techniques.

Paid promotion

Paying to promote your content can be very useful in generating traffic to a piece of content, which in turn, can also help generate social shares and sometimes links. Larry Kim goes into detail on this in his post over on Search Engine Land. The basic principle is that you can use paid promotion to get your content in front of writers, bloggers, journalists, and influencers.

There are a few options for how you can do this. Firstly, to reach a wide audience, you can use platforms such as Taboola or Outbrain. These can work well for reaching a very big audience, but targeting options for specific demographics on these platforms are still rather limited.

Wil Reynolds ran an experiment using these (and other) platforms, which is definitely worth looking at:

The $10,000 Paid Content + Paid Linking Test that is 100% Google Safe from Wil Reynolds

Our experience with these particular platforms is very mixed, with it working well for some clients but sending very untargeted traffic for others. So we’d advise starting with a small budget and assessing the quality of traffic before spending too much.

Other options are more regular social channels such as:

  • Facebook
  • Twitter
  • LinkedIn
  • Pinterest

The one I want to focus on is Facebook, where the targeting options are almost scary. But they’re useful to us nonetheless. You can do things such as specifically targeting journalists using options such as:

You can put whatever list you’d like in here, but I’m sure you get the idea!

You can also go one step further in targeting people by uploading their email address into the custom audience feature of Facebook:

It’s straightforward here to upload your outreach list; if Facebook can find a match for the email addresses, you can advertise directly to those people. If you’d like to go into more detail on this, take a look at this post I wrote last year.

Earned promotion

This is likely to be more familiar to most of us because this section covers traditional link building outreach and digital PR. Essentially, we need to find a list of influencers and contact them in order to promote our content to them. This sounds simple, but can often be the trickiest part of the process… because it’s here that you may find out that you don’t actually have a great idea! This is why the idea validation step is so important — because it reduces the chances of promotion going wrong.

I’ve written multiple times about finding outreach prospects before, so I won’t repeat everything here. But I will point out my favourite techniques for doing this.

Finding existing lists of prospects

I honestly start every single piece of link building research with these kinds of searches:

Simply switch out [INDUSTRY] for your own industry and you’ll find more than enough prospects to keep you busy!

Finding mainstream publications and journalists

Here, we’re trying to find high-level outreach targets who write for national newspapers and mainstream publications. The value of these can be huge, because many websites like this have the ability to send a LOT of traffic to your website.

Here are a few tools (mostly paid, unfortunately) that you can use for this kind of research:

  • Gorkana
  • Cision
  • Muckrack
  • A News Tip

You can of course do manual research as well, but these tools can help speed things up a bit.

Owned promotion

This one will depend heavily on what your client already has, but essentially we’re talking about using their own channels, such as:

  • Social channels like Facebook, Twitter, Instagram, LinkedIn, or Pinterest
  • Their existing blog
  • Their email marketing list

This may sound easy, but I’ve worked with some companies where the social team sits separately to the SEO/content team — which can make it harder to get them to work together! If you can bridge this gap, though, it’s a pretty easy win to get eyeballs on your content.

Step 5 – Conversion and tracking

So here we are, at the final step of our process, and I want to be really honest about this bit. It can be quite hard to convert a visitor who lands on a piece of content that is designed for links and social shares. These kinds of content pieces are often not designed to "sell" to the visitor, so getting them to click across to the main website or a product page (let alone getting them to buy something) is difficult. There are exceptions; this piece from Bellroy is one that comes to mind which is informational but very related to their product:

Generally though, this is difficult to pull off. So what can we do instead?

Micro-conversions

If we can’t convert someone into a buyer, what else can we do? One thing we’ve done for some clients is to try and capture a visitor’s email address so that we can then target them on Facebook or via email marketing. Or it could be any other number of things, such as:

  • Commenting on a piece of content (so you also get their email address)
  • Sharing a piece of content
  • Spending a certain amount of time on your content

Build retargeting lists

If someone visits a piece of content, you can build a retargeting list and then advertise to them in the future. There are two ways you can do this:

  1. Advertise your products and services to try and encourage clicks and future purchases
  2. Advertise your future content pieces — this can work very well if you’re working on a content series, i.e. a series of blog posts that all tie together

Build retargeting lists based on interactions with a page

This is a post for another day, but it is possible to go more targeted when building your retargeting lists, by building them based on how someone interacts with your content. For example, you can fire Facebook retargeting pixels when someone clicks on a certain link or when someone selects a certain option (if your content is interactive). This means that you can build lists that are very specific, and you can cater your advertising based on the interactions that users have carried out.

To wrap up

So that’s about it for today’s post! These are the five broad steps that we take for a content marketing campaign, and while we’re always iterating on them and improving them, they have increased our chances of success — which is what this is all about. You’ll never guarantee success, but whether you use the process above or your own, you certainly should utilize a process to enhance your chances.

I’d love to hear your feedback in the comments!
