On this page:
- Explaining Natural Search to business managers
- Improving indexing: mostly a technical task
- Improving ranking: mostly a business/marketing strategy
- What works now may not work in the future
- It takes time
- Terminology confuses matters?
A common expectation I find (from technical and non-technical people alike) is that web developers should be able to develop sites in such a way that they will rank higher.
I have seen IT tenders that require a site that “must” appear on the first page of a search query. From a business perspective, this requirement is understandable, but for an IT/web development/search company to somehow guarantee it is misleading and irresponsible. It is, however, an opportunity to explain “natural search.”
Explaining Natural Search to business managers
I usually try to explain that natural search engine optimisation (SEO) involves at least two areas:
- Search engine indexing: getting search engines into your site so they understand the content
- Search engine ranking: determining how that content should rank
I then try to explain that the issues/techniques for each of these areas can be quite different (though some overlap), and I often sum it up as:
- Improving indexing is mostly a technical task
- Improving ranking is mostly a business/marketing strategy
- What might work now may not work in the future
- It takes time
Improving indexing: mostly a technical task
The technical things that web developers do typically help increase the chance a site is indexed well. For example (in no particular order, and not a complete list!):
- Sufficiently clean URLs, to avoid diluting page weight. This usually means ensuring everyone uses the same link to a given page, with no variation in how querystrings or paths are written (otherwise search engines will assume these are different pages).
- Good use of the <title> element as well as meta keywords and descriptions, so that if a page does show up in the results, the summary information will be useful (these elements have little impact on ranking these days, but are useful for indexing).
- Proper use of redirects and other HTTP status codes to help search engines follow or not follow certain pages. For example, an HTTP 302 signals a temporary redirect, so search engines treat the original URL as still current; a 301 permanent redirect tells them to index the new URL and transfer any weighting to it.
- Sitemaps, especially if your content is vast.
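As a concrete illustration of the clean-URLs point, here is a minimal Python sketch of URL canonicalisation. The normalisation rules chosen here (lowercase host, no trailing slash, sorted querystring) are illustrative assumptions, not a standard; a real site would pick rules to match how its own URLs are generated:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonicalise(url):
    """Normalise a URL so every variant of the same page yields one
    canonical form (illustrative rules only)."""
    scheme, netloc, path, query, _ = urlsplit(url)
    netloc = netloc.lower()             # host names are case-insensitive
    path = path.rstrip("/") or "/"      # treat /products and /products/ as one page
    pairs = sorted(parse_qsl(query))    # querystring order should not matter
    return urlunsplit((scheme, netloc, path, urlencode(pairs), ""))
```

Emitting only the canonical form in your own links (and redirecting the variants to it) helps ensure search engines see one page rather than several near-duplicates.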
Side note about HTTP status codes
In some web technologies such as ASP/ASP.NET, Response.Redirect issues an HTTP 302 and is the most common way to do redirects. Although the headers can be set manually, there is no convenient method for permanent redirects, so developers often overlook this crucial difference.
It is especially important during site redesigns to use 301s to redirect old URLs to new ones. Otherwise, all those people linking to your old pages will be left with broken links, and search engines won’t pass on the weighting/recognition of those links to the new pages.
Returning a proper 404 Not Found status code can also be useful on database-driven pages you no longer want indexed (perhaps your site no longer sells those products).
Similarly, a 500 Internal Server Error status is very important. If a temporary glitch makes a page show error information without the right status code, the search engine will index that error content, even replacing your previously good indexed content!
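The status-code choices above can be sketched as a small request handler. Everything here is hypothetical: the URL mappings, the `respond` function, and the `lookup_product` hook are illustrative stand-ins, not any framework’s API:

```python
# Hypothetical handler sketch: pick the right HTTP status code for a path.
OLD_TO_NEW = {"/old-catalogue": "/products"}   # URLs moved in a redesign
DISCONTINUED = {"/products/widget-2000"}       # pages we no longer want indexed

def respond(path, lookup_product=None):
    """Return an (status, headers/body) pair for a requested path."""
    if path in OLD_TO_NEW:
        # 301: search engines transfer the old page's weighting to the new URL
        return 301, {"Location": OLD_TO_NEW[path]}
    if path in DISCONTINUED:
        # 404 (or 410 Gone): stops engines indexing a dead product page
        return 404, {}
    try:
        body = (lookup_product or (lambda p: "ok"))(path)
    except Exception:
        # 500: prevents a transient error page replacing good indexed content
        return 500, {}
    return 200, {"body": body}
```

For example, `respond("/old-catalogue")` yields a 301 with a `Location` header, while a database failure inside `lookup_product` yields a 500 rather than a 200 full of error text.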
What about using web standards?
Some web standards advocates will be surprised to read that I have not included the use of standards-compliant HTML markup and appropriate use of headers from a search engine perspective.
While these techniques are undoubtedly crucial for accessibility and form the basis of any modern web development strategy, their value for search engine indexing or ranking is questionable (unfortunately), because spammers can so easily abuse such elements.
That being said, avoiding table-based layout and following web standards can help because:
- Standards help to minimize code bloat (some search engines limit how much of a page they will index, though this seems increasingly less important).
- A valid page does not hurt in ensuring a search engine can understand your content. An invalid page might be so broken that even if it somehow renders okay, a program such as a search engine robot may struggle to make sense of it.
- Use of proper markup such as headers does help users, which can help with ranking indirectly (explained below).
There was a time when putting content first and navigation last in the source helped too (something done reasonably easily with CSS, and to some extent with HTML layout tables). But even this technique is less important now. If something like HTML 5 becomes more prominent, elements such as <nav> will make source order less relevant (although search engines will still need to deal with abuse of those elements!).
Improving ranking: mostly a business/marketing strategy
To get good ranking, however, there is typically very little a web developer can do. Ranking these days boils down to search engines trying to determine how popular your site is, which they judge by how many links your pages attract and the nature of those inbound links. (Internal links from some pages to others can sometimes help too, but most SEO experts find that external inbound links are key.)
This means the task of getting good ranking is ultimately a business/marketing strategy: sites need to have compelling enough content for others to want to link to them.
There are of course always caveats or exceptions. For example, a new site on a very niche area may rank highly on those niche keywords (but the number of people searching for such niche words may be small too).
One of the few technical things that may help (though not strictly “technical” as such) is training business/marketing staff to encourage those linking to your pages to use relevant text in those links, such as the title of the page, instead of “click here” or “more info.”
Providing content management systems that allow content creators to supply proper titles, keywords, descriptions, etc. is also important.
What about link farms, keyword stuffing, etc?
You may have received lots of requests to join various link farms so that members can promote each other. Search engines watch out for these schemes and only count relevant inbound links, analysing the topics and keywords of the linking site. Also, if the site linking to you is itself determined to be popular, your page’s weighting increases accordingly. It is as if search engines are on the lookout for who “votes” for your page.
Some people have tried to stuff keywords everywhere in the content (I am surprised some developers even advocate using the HTML title attribute in many places in the belief it will aid search engine ranking!). Again, this misses the point: the trick is unlikely to work, and it actually makes the site noisier, especially for those using assistive technologies.
Trying to trick search engines isn’t worth it; you will likely get found out and delisted, and rebuilding your ranking and reputation afterwards is difficult. For example, BMW was delisted from Google’s listings for serving different content to search engines and users. After changing their practices they were relisted, but few online businesses can afford such a delisting.
Search engine companies such as Google provide guidelines that ultimately advise webmasters not to trick search engines but instead to concentrate on creating sites for humans to consume; search engines will pick up on that and visit accordingly.
Here are some examples from Google:
Don’t fill your page with lists of keywords, attempt to “cloak” pages, or put up “crawler only” pages. If your site contains pages, links, or text that you don’t intend visitors to see, Google considers those links and pages deceptive and may ignore your site.
Don’t feel obligated to purchase a search engine optimization service. Some companies claim to “guarantee” high ranking for your site in Google’s search results. While legitimate consulting firms can improve your site’s flow and content, others employ deceptive tactics in an attempt to fool search engines. Be careful; if your domain is affiliated with one of these deceptive services, it could be banned from our index.
Don’t use images to display important names, content, or links. Our crawler doesn’t recognize text contained in graphics. Use ALT attributes if the main content and keywords on your page can’t be formatted in regular HTML.
— How can I create a Google-friendly site?, Google.com, accessed September 9, 2007
Quality guidelines – basic principles
- Make pages for users, not for search engines. Don’t deceive your users or present different content to search engines…
- Avoid tricks intended to improve search engine rankings. … ask, “Does this help my users? Would I do this if search engines didn’t exist?”
- Don’t participate in link schemes designed to increase your site’s ranking or PageRank. … avoid links to web spammers … as your own ranking may be affected adversely by those links.
— Webmaster Guidelines, Google.com, accessed September 9, 2007
What works now may not work in the future
Search engine companies are always looking to improve their algorithms, so what seems true today may not be the case tomorrow. They also guard their algorithms as closely as possible, so much of the above comes from trial and error.
I have often come across people who encourage techniques that are no longer as relevant as they may once have been. It is a rapidly changing area. For example, a useful search engine resource, SEOmoz, provides an excellent summary of ranking factors for 2007 and compares them to those of just two years earlier, showing that even in that short time the factors have changed considerably.
Here is a part of their summary:
Top 10 Ranking Factors in 2005:
- Title Tag
- Anchor Text of Links
- Keyword Use in Document Text
- Accessibility of Document
- Links to Document from Site-Internal Pages
- Primary Subject Matter of Site
- External Links to Linking Pages
- Link Popularity of Site in Topical Community
- Global Link Popularity of Site
- Keyword Spamming
Top 10 Ranking Factors in 2007:
- Keyword Use in Title Tag
- Global Link Popularity of Site
- Anchor Text of Inbound Link
- Link Popularity within the Site’s Internal Link Structure
- Age of Site
- Topical Relevance of Inbound Links to Site
- Link Popularity of Site in Topical Community
- Keyword Use in Body Text
- Global Link Popularity of Linking Site
- Topical Relationship of Linking Page
For me, this is one of the most valuable documents on the web for determining how to approach an overall SEO strategy. While the factors may not be perfect, they give a remarkably concise and trustworthy view of what makes a site rank well at Google. I hope you all enjoy it as much as I have – please add your thoughts in the comments!
— Ranking Factors Version 2 Released, April 3, 2007, SEOmoz
It takes time
Businesses understandably expect a new site to start ranking highly quickly, especially for a prominent brand. The reality, however, is that it takes time (and effort) to build up a critical mass of quality inbound links. Some search engine companies, such as Google, are even factoring the age of a site into their ranking algorithms.
Other times, new content (e.g. on a news-based site), even from a new site, can rank highly quickly (temporarily or for a long time), especially on niche topics.
The bottom line, though, is that nothing is really guaranteed!
Terminology confuses matters?
“SEO” is probably a misleading phrase: you don’t optimise Google (unless you work there!).
Maybe two terms should be used to help with communication: Search Engine Marketing (SEM) is already often used when talking about making a site compelling enough for others to link to it and improve ranking. So how about something like Search Engine Visibility (SEV) when talking about indexing?
Or maybe I contribute to the confusion by trying to introduce another TLA!