4 Easy Ways to Make Your URL Structure SEO-Friendly
There has been an age-old debate over whether SEOs and web developers can work in an equal partnership. The relationship between these two professions has been complex, and at times strained, and one area where the two clash is how a site's URL structure should be set up.
On the one hand you have SEOs, who look primarily at how Google will respond to the structure; on the other hand you have web developers, who look at all the technical aspects. So the question is: how can both professions work together to create the perfect SEO-friendly URL structure for your site?
Well, as an SEO Manager myself, I understand why a strong relationship between a web developer and myself is so important at this stage. I need a web developer just as much as they need me, and working in harmony is paramount to getting things done right.
That is why we have come up with four easy ways to help you improve your site's URL structure.
How To Create Keyword-Rich URLs
I think the first point we should address is the way the URLs on your site look. For example, if you create a post or page on WordPress with the default settings, you will most likely get a URL along these lines (the domain here is illustrative): http://example.com/?p=123
This kind of URL is what we call dynamic, as it is generated from a query parameter rather than the page's content. You will also notice that it tells you nothing about what you can expect to find on the page.
Therefore, we want something that is user-friendly, and that also helps the likes of Google determine what the page is about from the URL alone.
A URL like this is perfect, as it is static and contains everything we need to know about the post (again, the domain is illustrative): http://example.com/4-easy-ways-to-make-your-url-structure-seo-friendly/
Tip: If you are using the WordPress Content Management System, then head over to the Permalinks section, where you can customise the structure of your posts using variables. This section can be found in the main left-hand navigation, under Settings and then Permalinks.
To get your URL structure to look like the link above, you will need to enter /%postname%/ in the custom structure field of the Permalinks section. The screenshot below should help you.
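For reference, WordPress supports several structure tags that can be combined in that custom structure field. A quick sketch of two common setups (the domain and slugs are illustrative):

```text
/%postname%/                    →  http://example.com/sample-post/
/%year%/%monthnum%/%postname%/  →  http://example.com/2013/05/sample-post/
```

The first keeps URLs short and keyword-rich; the second adds the publish date, which some news-style sites prefer.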
How To Prevent Google From Indexing Certain Pages
We all tend to have some pages on our sites that either contain sensitive information or are simply not that important. What I am talking about here are pages such as Privacy Policies and Terms & Conditions.
These are pages that you do not need to rank, and pages that will only dilute the rest of your site's profile.
The easiest way to stop Google from crawling these pages is to make use of your robots.txt file. This is a plain text file, usually housed in the root of your server, which contains crawling instructions for search engines.
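A minimal robots.txt along those lines might look like this (the paths are illustrative, so adjust them to match your own page slugs):

```text
User-agent: Googlebot
Disallow: /privacy-policy/
Disallow: /terms-and-conditions/
```

Each Disallow line tells the named crawler not to fetch URLs beginning with that path.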
If you wanted to extend that instruction to all search engines, you would simply replace Googlebot with the wildcard character *.
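For example, the same rules applied to every crawler would read (paths illustrative):

```text
User-agent: *
Disallow: /privacy-policy/
Disallow: /terms-and-conditions/
```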
How To Design a Sitemap For The Search Engines
The next point I wanted to address is the use of an XML Sitemap. This Sitemap helps the likes of Google and other search engines discover the pages on your site that you want indexed. I would not recommend adding every URL on your site to it, as it is only intended to serve as a base list of the URLs that you want to see in Google's results.
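As a sketch, a minimal XML Sitemap follows the sitemaps.org protocol and looks like this (the domain, URL and date are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/4-easy-ways-to-make-your-url-structure-seo-friendly/</loc>
    <lastmod>2013-05-01</lastmod>
  </url>
</urlset>
```

Add one `<url>` entry per page you want in the results, then submit the file in Webmaster Tools.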
Alongside your Sitemap, you can also deal with any duplicated content on your site by using the rel="canonical" tag, a link element placed in a page's head which sets a preferred URL for Google to select and show in results.
So for example, let's say that three URLs take you to the same page (these are illustrative): http://example.com/post/, http://example.com/post/?ref=sidebar and http://example.com/?p=123.
This will be classed as duplicate content in Google's eyes, and it will also devalue the strength of that page, as any link equity is split over three URLs. Therefore, you will need to tell Google that only one of these URLs should be displayed in the search engine results, which consolidates the weight onto that page.
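A sketch of the tag itself, placed in the <head> of each duplicate page and pointing at an illustrative preferred URL:

```html
<link rel="canonical" href="http://example.com/post/" />
```

Google will then fold the duplicates' signals into the canonical URL and show only that one in results.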
Tip: If you are running a WordPress blog, then there is a nifty plugin called Google XML Sitemaps which can automatically generate the Sitemap for you, as well as notify all major search engines when you create a new post.
Watch Matt Cutts Discuss The Canonical Tag Element
Use Webmaster Tools To Check For Broken Links
404 errors do not directly harm your site's performance in search, but they do cause usability issues that should be addressed as soon as possible. The best place to see any broken links that are reported is in Google Webmaster Tools.
Broken links could be costing you sales, and they could also be losing you potential customers, as visitors who hit dead ends are unlikely to return to the site in the future.
So, to combat these errors, first log in to Webmaster Tools, then locate the Health link on the left. You will now see an option labelled Crawl Errors. This section shows you how many "Not Found" errors are being recorded, and if you scroll a little further down the page, you can see which links are broken and when they were detected.
Tip: In Google Webmaster Tools you also have the option to configure your preferred domain, which establishes a link between the non-www and www versions of your site. Therefore, if we set our preferred URL to http://www.websiteplanet.com and links have been built to http://websiteplanet.com, Google will treat both URLs as the same, so we won't lose any link love.
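The preferred-domain setting only tells Google which version you favour; many site owners also add a server-side 301 redirect so that visitors and crawlers always land on one version. A minimal Apache .htaccess sketch, assuming mod_rewrite is enabled and using an illustrative domain:

```text
RewriteEngine On
# Redirect the non-www host to the www version with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```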
What Have We Learnt?
Many sites neglect having an SEO-friendly URL structure; however, as the tips here show, having a good, solid URL structure in place can give your site a nudge forward to better search engine placement.
- Make use of static URLs rather than dynamic ones.
- Make use of robots.txt to exclude certain URLs from being indexed in search engines.
- Make use of an XML sitemap so that Google can pick up which URLs you want indexed.
- Make use of the canonical tag to tell Google which page you want indexed when duplicates are accessible.
- Make use of Google Webmaster Tools to tidy up any broken links and to set up your preferred domain.
You are not only helping your site in the search engines, but you are also pushing for improved site usability, so everyone's a winner!