“Ok. I really need to optimize organic search rankings. It’s time to hire an expert consultant. Or an agency. Or, wait – maybe a full-time employee? Hell, this is too important and too complicated to muck up – I’d better hire all three. Where can I cut budget?”
It’s true that SEO is time-consuming and requires a degree of technical comfort. And I have no doubt that, as with every area of marketing, there are individuals and agencies out there that can make a huge difference.
But that’s no excuse not to tackle some of the basics yourself. It’s time to stop overcomplicating the technical aspects of search engine optimization, and get comfortable with some of the basics.
Before we get started, I’d like to set the stage by reminding you that SEO is about content. Content for humans, and content for search engine spiders. Your job, as a digital marketer, is to make sure your website content is valuable to human visitors and easy for search engine spiders to read and understand.
It’s that last part that confuses everyone, but doesn’t have to. Search engine spiders are trying to understand your site. You’re trying to make it dead simple for spiders to read your site through relevant language and clean code.
At the end of the day, there are two parts to technical SEO: getting your content read, and getting your content understood.
The universal language of the web is HTML. That’s what search engines depend on to figure out what content you’re offering visitors. But a lot of the fancier things you see online – like infinite scroll pages or dynamic content – rely heavily on JavaScript. There’s nothing inherently wrong with JavaScript; it just hasn’t traditionally been something search engine spiders are fluent in. The good news: spiders want to read content. That’s their sole purpose. And JavaScript is pretty popular, so Google has an incentive to get better at handling it. But to play it safe, if you rely heavily on JavaScript to render your content, you may want to provide fallback versions of your pages.
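One simple way to hedge your bets, sketched below with placeholder file names and products, is to include the essential content in plain HTML inside a noscript block (or, better yet, render it on the server) so crawlers that don’t execute scripts can still read it:

<!-- Hypothetical example: this list is normally filled in by JavaScript -->
<div id="product-list"></div>
<script src="/js/render-products.js"></script>

<!-- Plain-HTML fallback for visitors (and spiders) without JavaScript -->
<noscript>
  <ul>
    <li><a href="/products/blue-widget">Blue Widget</a></li>
    <li><a href="/products/red-widget">Red Widget</a></li>
  </ul>
</noscript>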
A sitemap is a shortcut to seeing what’s on your site. Think of a sitemap as doing the work for the spiders – sort of a CliffsNotes for robots. You can use a sitemap to provide Google with metadata about specific types of content on your pages, including videos and images.
Now, creating a sitemap does mean coding an XML file. There are tools that you can use to do it, but this is probably not something you want to attempt if you’re not comfortable with code. Instead, make a point to ask your nearest front-end dev to make sure you have one for your site.
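For reference, a minimal sitemap.xml looks roughly like this (the URLs and dates are placeholders). The file usually lives at the root of your site and can be submitted to Google through Search Console:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/technical-seo-basics</loc>
    <lastmod>2021-05-20</lastmod>
  </url>
</urlset>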
The purpose of the robots.txt file is to inform spiders which directories you do not want indexed – or to block an entire site. It’s a standard followed by all major search engines. Now, keep in mind – you can’t use it to make a site private; if someone has the link, they’ll be able to see the page even if you’ve told search engines not to index your content. But you can use robots.txt to keep your blog that’s dedicated to embarrassing 80s pictures from showing up in Google search results.
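Here’s a sketch of what that file might contain (the directory names are placeholders). It lives at the root of your domain, e.g. yoursite.com/robots.txt:

# Applies to all crawlers
User-agent: *
Disallow: /embarrassing-80s-pictures/
Disallow: /drafts/

# Optional: point crawlers at your sitemap
Sitemap: https://www.example.com/sitemap.xml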
Search engine spiders want to make sure the pages they’re reading are really yours. They also don’t like having to arbitrarily choose which version of a page is the “right” one to display in search results. So having the same thing hosted on more than one URL can hurt you – and duplicate content is more common than you think: the same page is often reachable at both the www and non-www versions of your domain, over both http and https, or with tracking parameters tacked onto the URL.
That doesn’t mean you can’t ever have the same content in two places – you just need to use canonical links to designate the preferred version of a page when multiple versions of the URL exist.
To set up a canonical link, just place the following code in the page header:
<link rel="canonical" href="ENTER YOUR PREFERRED URL" />
And if you have two entirely separate pages that host the same content, use a “noindex” robots meta tag in the page header of the version you don’t want showing up in search results.
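That tag is a single line, for example:

<!-- Placed in the <head> of the duplicate page you don't want indexed -->
<meta name="robots" content="noindex">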
You can also use redirects to avoid publishing content in two places for the sake of something like a vanity URL. There are a couple of types of redirects, but the ones you want to ask your development team for are most commonly 301, or permanent, redirects.
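As an illustration only (your server or CMS may handle this differently), on an Apache server a permanent redirect can be declared in an .htaccess file with a single line; both paths below are placeholders:

# Send the vanity URL permanently to the real page
Redirect 301 /go/special-offer https://www.example.com/landing-pages/special-offer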
Your title tags are the first stop a search engine spider makes to understand what your page is all about. The HTML <title> tag specifies your page title. A good title tag is descriptive and unique. Words near the front of the title generally carry the most weight, so lead with the important ones. But your title tag is not strictly for robots: it also controls the headline that appears when your site is returned in a search query. Typically, Google will display the first 50-60 characters. So remember: good title tags are keyword-rich but still user friendly. You want someone clicking on it!
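For example, a descriptive, keyword-first title (the copy and brand name here are placeholders) sits in the page head:

<title>Technical SEO Basics: A Beginner's Checklist | Example Co.</title>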
Search engine spiders consider your header tags the most important description of your page. Your primary header is indicated by an <h1> tag. A good h1 is relevant and consistent with the other content on the page. You do not always need to style it to look like your headline – you can use CSS and style overrides to make it visually look like a subhead, for example. And while you can use more than one h1 tag per page, that tells the search engine spider you have two different things of equal importance – which isn’t always a good thing. Use your h1 tags wisely!
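To illustrate the styling point above: what matters to the spider is the tag itself, not how it looks. The class name and copy below are placeholders:

<!-- Styled as a subhead via CSS, but still the page's primary header -->
<h1 class="subhead-style">Technical SEO Basics Anyone Can Tackle</h1>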
Your URL structure is another indication of what your page is about. Spiders (and humans) like “user-friendly” URLs. A general best practice is to relate your URL structure to your page title.
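A quick illustration with made-up URLs:

Good:  https://www.example.com/blog/technical-seo-basics
Avoid: https://www.example.com/index.php?p=4827&cat=12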
Remember, search engine spiders understand “text” – i.e. HTML – not images or videos. If you want your image to contribute to the description of your page – and you should! – use a descriptive file name and add alt text so crawlers know what they’re looking at.
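For example (the file name and alt text are placeholders):

<img src="/images/technical-seo-checklist.png" alt="Checklist of basic technical SEO tasks for marketers">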
Meta descriptions aren’t a direct ranking factor, so they’re easy to overlook. But they do show up in search results as the description for the page returned – Google will typically display the first 120-150 characters. That means they play a large role in click-through rate (CTR). Think of your meta description as ad copy – you want to use it to optimize your CTR, because higher engagement will help you increase your search rank over time.
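Like the title tag, it’s a single line in the page head; the copy below is just a placeholder:

<meta name="description" content="Learn the technical SEO basics any marketer can handle: sitemaps, robots.txt, canonical links, title tags, and more.">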
Once you have all of these things in place, use this SEO dashboard to track the performance of your content in SERPs.
If I’ve convinced you that there are some basic aspects of search engine optimization you can get started on today, your next step is a little digging to figure out where you should focus your SEO efforts. Start with some basic research on Google: search for the keywords you care about most and note where (and whether) your pages show up.
Once you’ve gotten a basic handle on where you stand, you can always do a more in-depth SEO audit.
At the end of the day, the technical side is probably the easier aspect of search engine optimization. The real power comes from creating lots of great content that people want. After all, Google’s purpose is to direct people to content they are actively looking for. Every change it makes to how websites show up in search results is designed to support that purpose. Truly, the best SEO strategy is making sure you’re giving your website visitors a great experience and improving over time based on insights from your SEO dashboard software.
At Databox, we’re obsessed with helping companies more easily monitor, analyze, and report their results. Whether it’s the resources we put into building and maintaining integrations with 100+ popular marketing tools, enabling customizability of charts, dashboards, and reports, or building functionality to make analysis, benchmarking, and forecasting easier, we’re constantly trying to find ways to help our customers save time and deliver better results.