Blog posts of September 2011

Promoting Your Site to Increase Traffic - Wednesday, September 07, 2011

The main purpose of SEO is to make your site visible to search engines, thus leading to higher rankings in search results pages, which in turn brings more traffic to your site. And having more visitors (and above all buyers) is ultimately the goal of site promotion. To be fair, SEO is only one way to promote your site and increase traffic – there are many other online and offline ways to accomplish the goal of getting high traffic and reaching your target audience. We are not going to explore them in this tutorial, but keep in mind that search engines are not the only way to get visitors to your site, although they seem to be a preferable choice and a relatively easy way to do it.

1. Submitting Your Site to Search Directories, forums and special sites

After you have finished optimizing your new site, the time comes to submit it to search engines. Generally, with search engines you don't have to do anything special in order to get your site included in their indices – they will come and find you. It cannot be said exactly when they will visit your site for the first time or at what intervals they will visit it later, but there is hardly anything you can do to invite them. Sure, you can go to their Submit a Site pages and submit the URL of your new site, but do not expect that they will hop over to you right away. What is more, even if you submit your URL, most search engines reserve the right to judge whether to crawl your site or not. Anyway, here are the URLs of the submission pages of the three major search engines: Google, MSN, and Yahoo.

In addition to search engines, you may also want to have your site included in search directories. Although search directories also list sites that are relevant to a given topic, they differ from search engines in several respects. First, search directories are usually maintained by humans and the sites in them are reviewed for relevancy after they have been submitted. Second, search directories do not use crawlers to get URLs, so you need to go to them and submit your site, but once you do this, you can stay there forever and no further effort on your side is necessary. Some of the most popular search directories are DMOZ and Yahoo! (the directory, not the search engine itself) and here are the URLs of their submission pages: DMOZ and Yahoo!.

Sometimes posting a link to your site in the right forums or special sites can work miracles in terms of traffic. You need to find the forums and sites that are leaders in the fields of interest to you, but generally even a simple search in Google or the other major search engines will retrieve their names. For instance, if you are a hardware freak, type “hardware forums” in the search box and in a second you will have a list of sites that are favorites of other hardware freaks. Then you need to check the sites one by one, because some of them might not allow posting links to commercial sites. Posting in forums is more time-consuming than submitting to search engines, but it can also be pretty rewarding.

2. Specialized Search Engines

Google, Yahoo!, and MSN are not the only search engines on Earth, nor even the only general-purpose ones. There are many other general-purpose and specialized search engines, and some of them can be really helpful for reaching your target audience. You just can't imagine how many niches have specialized search engines – from law, to radio stations, to education! Some of them are actually huge sites that gather Web-wide resources on a particular topic, but almost all of them have sections for submitting links to external sites of interest. So, after you find the specialized search engines in your niche, go to their sites and submit your URL – this could prove more traffic-worthy than striving to get to the top of Google.

3. Paid Ads and Submissions

We have already mentioned some alternatives to search engines – forums, specialized sites and search engines, search directories – but if you need to make sure that your site will be noticed, you can always resort to paid ads and submissions. Yes, paid listings are a fast and guaranteed way to appear in search results, and most of the major search engines accept payment to put your URL in the Paid Links section for keywords of interest to you. However, keep in mind that users generally do not trust paid links as much as they do the normal ones – in a sense it looks like you are bribing the search engine to place you where you can't get on your own. So think twice about the pros and cons of paying to get listed.



Static Versus Dynamic URLs - Wednesday, September 07, 2011

Based on the previous section, you might have gotten the impression that the algorithms of search engines try to thwart every effort designers make to create a gorgeous site. It has already been explained why search engines do not like images, movies, applets and other extras. Now you might think that search engines are far too picky for disliking dynamic URLs as well. But honestly, users are not in love with URLs like http://domain.com/product.php?cid=1&pid=5 either, because such URLs do not say much about the contents of the page.

There are a couple of good reasons why static URLs score better than dynamic URLs. First, dynamic URLs are not always there – i.e. the page is generated on request after the user performs some kind of action (fills a form and submits it or performs a search using the site's search engine). In a sense, such pages are nonexistent for search engines, because they index the Web by crawling it, not by filling in forms.

Second, even if a dynamic page has already been generated by a previous user request and is stored on the server, search engines might just skip it if it has too many question marks and other special symbols in it. Once upon a time search engines did not index dynamic pages at all, while today they do index them but generally slower than they index static pages.

The idea is not to revert to static HTML only. Database-driven sites are great, but it is much better if you serve your pages to search engines and users in a format they can easily handle. One solution to the dynamic URL problem is called URL rewriting. There are special tools (different for different platforms and servers) that rewrite URLs in a friendlier format, so they appear in the browser like normal HTML pages. Try the URL Rewriting Tool below; it will convert the cryptic text from the previous example into something more readable, like http://mydomain.com/product-categoryid-1-productid-5.
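
To make this more concrete, here is a minimal sketch of what such a rewrite rule could look like on an Apache server with mod_rewrite enabled; the script name product.php and the parameters cid and pid come from the example above, while the friendly URL pattern is just an assumed convention.

   # .htaccess sketch – assumes Apache with mod_rewrite enabled
   RewriteEngine on
   # Map product-categoryid-1-productid-5 back to product.php?cid=1&pid=5
   RewriteRule ^product-categoryid-([0-9]+)-productid-([0-9]+)$ product.php?cid=$1&pid=$2 [L]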


URL Rewriting Tool




Visual Extras and SEO - Wednesday, September 07, 2011

As already mentioned, search engines have no means to directly index extras like images, sounds, Flash movies and JavaScript. Instead, they rely on you to provide meaningful textual descriptions, and based on them they can index these files. In a sense, the situation is similar to that with text 10 or so years ago – you provide a description in the metatag and search engines use this description to index and process your page. If technology advances further, one day it might be possible for search engines to index images, movies, etc., but for the time being this is just a dream.

1. Images

Images are an essential part of any Web page and from a designer's point of view they are not an extra but a mandatory item for every site. However, here designers and search engines are at opposite poles, because for search engines every piece of information that is buried in an image is lost. When working with designers, it sometimes takes a while to explain to them that having textual links (with proper anchor text) instead of shiny images is not a whim and that clear text navigation really is mandatory. Yes, it can be hard to find the right balance between artistic ambition and SEO-friendliness, but since even the finest site is lost in cyberspace if it cannot be found by search engines, a compromise in its visual appearance cannot be avoided.
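
As a small illustration of text navigation, the links below are plain HTML that any spider can read; the page names and anchor texts are made up for the dog-adoption example used in this tutorial.

   <!-- Plain text navigation with descriptive anchor text (hypothetical pages) -->
   <a href="adopt-a-dog.html">Adopt a dog</a>
   <a href="dog-care-tips.html">Dog care tips</a>
   <a href="contact.html">Contact us</a>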


With all that said, the idea is not to skip images altogether. Nowadays that is impossible anyway, because the result would be a very ugly site. Rather, the idea is that images should be used for illustration and decoration, not for navigation or, even worse, for displaying text (in a fancy font, for example). Most importantly, in the alt attribute of the <img> tag, always provide a meaningful textual description of the image – search engines rely on it heavily. It also does not hurt to give meaningful names to the image files themselves rather than naming them image1.jpg, image2.jpg, imageN.jpg. For instance, in the next example the image file has an informative name and the alt attribute provides enough additional information: <img src="one_month_Jim.jpg" alt="A picture of Jim when he was a one-month puppy">. That said, don't go to extremes like writing 20-word alt texts for 1-pixel images, because this looks suspicious and starts to smell like keyword stuffing.

2. Animation and Movies

The situation with animation and movies is similar to that with images – they are valuable from a designer's point of view but are not loved by search engines. For instance, it is still pretty common to have an impressive Flash introduction on the home page. You just cannot imagine what a disadvantage with search engines this is – it is a number one rankings killer! And it gets even worse if you use Flash to tell a story that could have been written in plain text and hence crawled and indexed by search engines. One workaround is to provide search engines with an HTML version of the Flash movie, but in this case make sure that you have excluded the original Flash movie from indexing (this is done in the robots.txt file, but the explanation of this file is not a beginner's topic and that is why it is excluded from this tutorial); otherwise you can be penalized for duplicate content.
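
For reference, excluding such a Flash file from indexing in robots.txt could look roughly like the sketch below; the file name intro.swf is made up for illustration.

   User-agent: *
   Disallow: /intro.swf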

There are rumors that Google is building new search technology that will allow searching inside animation and movies, and that the .swf format will contain new metadata that can be used by search engines. Until then, you'd better either refrain from using (too much) Flash, or at least provide a textual description of the movie on the page that embeds it.

3. Frames

It is good news that frames are slowly but surely disappearing from the Web. Five or ten years ago they were an absolute hit with designers but never with search engines. Search engines have difficulties indexing framed pages because the URL of the page stays the same no matter which of the separate frames is open. This is a problem because there are actually 3 or 4 pages behind only one URL, while for search engines 1 URL is 1 page. Of course, search engines can follow the links to the pages in the frameset and index them, but this is a hurdle for them.

If you still insist on using frames, make sure that you provide a meaningful description of the site in the <noframes> tag. The following example is not for beginners, but even if you do not understand everything in it, just remember that the <noframes> tag is the place to provide an alternative version (or at least a short description) of your site for search engines and for users whose browsers do not support frames. If you decide to use the <noframes> tag, you may want to read more about it before you start.

Example: <noframes> <p> This site is best viewed in a browser that supports frames. </p><p> Welcome to our site for prospective dog adopters! Adopting a homeless dog is a most noble deed that will help save the life of the poor creature. </p></noframes>

4. JavaScript

This is another hot potato. Everybody knows that pure HTML alone cannot build complex sites with a lot of functionality (HTML was never intended to be a programming language for building Web applications, so nobody expects you to use HTML to write to a database or even to store session information) as required by today's Web users, and that is why other languages (like JavaScript or PHP) come to enhance HTML. For now, search engines largely ignore the JavaScript they encounter on a page. As a result, first, if you have links that are generated inside JavaScript code, chances are that they will not be spidered. Second, if the JavaScript is in the HTML file itself (rather than in an external .js file that is invoked when necessary), it clutters the HTML file, and spiders might just skip it and move on to the next site. Just for your information, there is a <noscript> tag that lets you provide an alternative to running the script in the browser, but because most of its applications are pretty complicated, it is hardly suitable to explain here.
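
As a simple sketch of these points, the snippet below keeps the script in an external file and the navigation as plain HTML links that can be spidered even if the script never runs; the file and page names are hypothetical.

   <!-- Script kept in an external .js file instead of cluttering the page -->
   <script type="text/javascript" src="menu.js"></script>
   <!-- Navigation as plain HTML links, crawlable without running any script -->
   <a href="adopt-a-dog.html">Adopt a dog</a>
   <a href="contact.html">Contact us</a>
   <noscript><p>JavaScript is disabled in your browser – the links above still work.</p></noscript>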

Content Is King - Wednesday, September 07, 2011

If you were writing SEO text solely for machines, optimization would be simple. Sprinkle in some keywords, rearrange them at random and watch the hit counter skyrocket. Sometimes SEO copywriters forget that this isn't the case. Real people read your text and they expect something in return for the time and attention they give you. They expect good content, and their expectations have shaped how search engines rank your site.

What Is Good Content?

Good SEO content has three primary characteristics:

  • Offers useful information presented in an engaging format to human readers
  • Boosts search engine rankings
  • Attracts plenty of links from other sites

Note that human readers come first on the list. Your site must deliver value to its visitors and do it in an engaging way. Few sites specialize in a subject so narrow that they have an information niche all to themselves. You'll have competition. Set yourself apart from it with expert interviews, meaningful lists and well-researched resources. Write well or invest in someone who does; your investment will pay off in increased traffic.

Although search engines aren't your primary audience, they still influence your page rankings. In the days of early SEO, using keyword-stuffed META tags brought in plenty of traffic. People didn't hang around on a site that promised low air fares and delivered advertisements, but that didn't affect the search engines. Each iteration of the engines' algorithms got better at discerning valuable sites from clutter, though, so site creators had to sharpen their technique as well. Instead of META tags, they used keywords sprinkled throughout an article.

In April 2011, Google's algorithm change devalued keyword and keyphrase "spam" in favor of more nuanced means of determining a web site's value to viewers. This update sent ripples throughout the Internet. From major commerce sites to hobbyists' blogs, search engines boosted high-value sites and cast down some once-mighty sites that relied too much on keyword-stuffing. Keywords haven't lost their value, but they no longer provide the only cue to search engines.

If SEO keywords have become devalued, links have grown in value. If other sites link to yours as an engaging read, controversial screed or authoritative text, search engines view your page as a site that viewers will want to see and bump it up accordingly. Filling your site with link bait will get you noticed by search engines and the people who use them, and the best way to draw links is with strong, fresh content. Social media sites provide even more buzz for pages with great content. Those links count too, so court them with content-rich pages.

Writing SEO Content for Search Engines -- And for People

SEO no longer means scattering keywords like Hansel and Gretel throwing breadcrumbs. The newest search engines scan pages almost as your readers might. Jakob Nielsen, a usability researcher and expert in human-computer interaction, found that almost 80 percent of a web site's visitors scanned the page rather than reading it line by line. They spent their first fractions of a second on the page deciding if it was worth their time. Search engine programmers still use this research to devise algorithms that provide more organic and meaningful rankings.

The same things that catch a visitor's eye will get a search engine's attention. The upper left corner of the page is the most valuable real estate on the page, as it's where a reader's eyes go first. Put important text there so search engines and people will see it immediately. It's also a good spot for boxed text and itemized lists, both of which appeal equally to carbon-based and silicon-based brains.

Bold text makes people and machines take notice, but use those tags judiciously. Too much bold text looks like an advertisement and will cause search engines to devalue your site. Italic and bold HTML tags should surround meaningful concepts, not emphasis words. Bolding a "very" or italicizing a "more" means nothing to a search engine, so apply those tags to important concepts and sub-headings.

Searches now look for associated terms and relevant phrases, not just keywords. A person picks up meaning from context and readily distinguishes the term "clipping" as it applies to hair from the same word as it refers to film stock or video game graphics. Let your visitors -- human and machine -- know whether you're talking about German shepherds as a dog breed or as an exciting career in European wool and mutton. In your SEO text, include synonyms and relevant terms to let search engines recognize the purpose of your site.

Happily, there's a way to work these terms into your content without monitoring keyword and keyphrase percentages: simply write the kind of engaging copy that people like to read. If you write for readers, the search engines will follow.

SEO Killers - Duplicate Content, Spam and Filler

You have a handle on what modern SEO content should be, but it's also vital to understand what it shouldn't be. Nielsen's research described what kept readers on web sites and shed light on what drove them away. Search engines take these same factors into account and rank pages down or even remove them from ranking altogether.

Duplicate content can sink a site. Even legally obtained duplicate content, such as articles reproduced whole from news feeds and large blocks of attributed quotes, diminishes a site's SEO value. Readers have no reason to visit a site that gives them other sites' news verbatim. Page ranks will decline over time without original content.

While you don't want large blocks of duplicate content on your site, you want the timely information that your news feeds deliver. Build fresh new content on the foundation of other information whenever possible. It takes more effort to assimilate and summarize a news story or to use it as a link within an original article, but doing so will cast your site in a more positive light. If you add sufficient value with sharp writing and relevant links, you'll find yourself in the search engine stratosphere.

The old method of following keyword formulas and meeting keyword percentages is not only outdated, it will actively lower your site's rank. Heavy keyword-loading is the hallmark of advertising web sites, and search engines know it. Using related words and relevant phrases to enhance topic recognition marks your site as valuable and drives its search engine value higher. Varied writing is also more readable to your human visitors.

Nielsen found that human readers shunned sites full of filler phrases. Clear, concise web writing has greater value than sprawling pages full of fluff. Hyperbole and promotional language -- describing a product as "the best ever" or "the perfect solution," for example -- contribute nothing to the meaning of the text. Human readers filter out fluff and software ranks down sites with too much of it, so eliminate it from your site.

Search engines change their algorithms regularly in an effort to provide their users with more relevant results. The state of SEO art changes with them. The only constant in web writing is its human audience. Pages that provide novel, appealing content in a reader-friendly format will rise to the top of the rankings.

Try the Similar Page Checker to check the similarity between two URLs.


Similar Page Checker




Metatags - Wednesday, September 07, 2011

What are Meta tags?

Meta tags are used to summarize information about a page for search engine crawlers. This information is not directly visible to humans visiting your website. The most popular are the meta keywords and meta description tags. These meta tags are placed in the <head> section of your page.
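
For orientation, the placement could look roughly like the sketch below; the title and attribute values simply reuse the dog-adoption example from this tutorial.

   <head>
     <title>Adopt a Dog – Save a Life and Bring Joy to Your Home</title>
     <meta name="Description" content="Adopting a dog saves a life and brings joy to your house.">
     <meta name="Keywords" content="adopt, adoption, dog, dogs, puppy">
   </head>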

A couple of years ago meta tags were the primary tool for search engine optimization and there was a direct correlation between keywords in the meta tags and your ranking in the search results. However, algorithms have become much better and today the importance of meta tags is decreasing day by day.

Meta Description

The meta Description tag is one more way for you to write a description of your site, thus pointing search engines to the themes and topics your Web site is relevant to. Some search engines (including Google) use the meta description to display a summary of the listing on the search results page. So if your meta descriptions are well written, you might be able to attract more traffic to your website.

For instance, for the dog adoption site, the meta Description tag could be something like this:
<Meta Name="Description" Content="Adopting a dog saves a life and brings joy to your house. All you need to know when you consider adopting a dog.">

Meta Keywords

A potential use of the Meta Keywords tag is to include a list of keywords that you think are relevant to your pages. The major search engines will not take this into account, but it is still a chance for you to emphasize your target keywords. You may consider including alternative spellings (or even common misspellings of your keywords) in the meta Keywords tag. It might give a very small boost to your search engine rankings, but why miss the chance?
e.g.
<Meta name="Keywords" Content="adopt, adoption, dog, dogs, puppy, canine, save a life, homeless animals">

Meta Robots

In this tag you specify the pages that you do NOT want crawled and indexed. It happens that your site has content that you need to keep there but do not want indexed. Listing these pages in the Meta Robots tag is one way to exclude them from being indexed (the other way is by using a robots.txt file, and generally that is the better way to do it).
e.g.
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
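
By way of comparison, the robots.txt alternative mentioned above might look roughly like this; the directory and page names are made up for illustration.

   User-agent: *
   Disallow: /private/
   Disallow: /thank-you.html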



Backlinks – Another Important SEO Item - Wednesday, September 07, 2011

What are Backlinks?

In layman's terms, there are two types of links: inbound and outbound. Outbound links start from your site and lead to an external site, while inbound links, or backlinks, come from an external site to yours. For example, if cnn.com links to yourdomain.com, the link from cnn.com is a backlink (inbound) for yourdomain.com; from cnn.com's perspective, however, it is an outbound link. Backlinks are among the main building blocks of good Search Engine Optimization (SEO).

Why Backlinks Are Important

The number of backlinks to a website is an indication of its popularity or importance. Backlinks are important for SEO because some search engines, like Google, give more credit to websites that have a large number of quality backlinks, and consider those websites more relevant than others in their results pages for a search query.

Therefore, when search engines calculate the relevance of a site to a keyword, they not only consider the number of backlinks to that site but also their quality. In order to determine the quality, a search engine considers the content of the sites. When backlinks to your site come from other sites, and those sites have content related to your site, these backlinks are considered more relevant to your site. If backlinks are found on sites with unrelated content, they are considered less relevant. The higher the relevance of backlinks, the greater their quality.

For example, if a webmaster has a website about how to rescue orphaned dogs and receives a backlink from another website about dogs, that would count for more in a search engine's assessment than, say, a link from a site about car racing. Therefore, the higher the relevance of the site linking back to your website, the better the quality of the backlink.

Search engines want websites to have a level playing field, and look for natural links built slowly over time. While it is fairly easy to modify your webpages to make them more SEO friendly, it is a lot harder to influence other websites and get them to link to yours. This is the reason search engines regard backlinks as such an important factor. Furthermore, search engines' criteria for quality backlinks have gotten even tougher, thanks to unscrupulous webmasters trying to obtain backlinks by deceptive or sneaky techniques, such as hidden links or automatically generated pages whose sole purpose is to provide backlinks to websites. These pages are called link farms, and they are not only disregarded by search engines, but linking to a link farm could get your site banned entirely.

Anchor Text

When a link incorporates a keyword into the text of the hyperlink, we call this anchor text. A link's anchor text may be one of the most powerful resources a webmaster has. Backlinks from multiple websites with the anchor text "orphaned dogs" would help your website rank higher for the keyword "orphaned dogs". Using your keyword in the link is far better than links with generic words like "click here", which do not relate to your website. The Backlink Anchor Text Analysis Tool will help you find your backlinks and the text being used to link to your website. If you find that your site is being linked to from another website but the anchor text is not being utilized properly, you should request that the website change the anchor text to something which incorporates relevant keywords. This will also help boost your rankings.
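
In HTML terms, the difference is only in the link text; the URL below is a placeholder.

   <!-- Keyword-rich anchor text -->
   <a href="http://yourdomain.com/">orphaned dogs</a>
   <!-- Generic anchor text that tells search engines nothing about the target page -->
   <a href="http://yourdomain.com/">click here</a>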



Backlink Anchor Text Analyzer



Ways to Build Backlinks

Even if plenty of backlinks come to your site the natural way, additional quality backlinks are always welcome.

1. The Backlink Builder Tool

When you enter the keywords of your choice, the Backlink Builder tool below gives you a list of relevant sites from which you might get some backlinks.


Backlink Builder




2. Getting Listed in Directories

If you are serious about your Web presence, getting listed in directories like DMOZ and Yahoo is a must, not only because this is a way to get some quality backlinks for free, but also because this way you are easily noticed by both search engines and potential visitors. Generally inclusion in search directories is free but the drawback is that sometimes you have to wait a couple of months before you get listed in the categories of your choice.

3. Forums and Article Directories

Generally search engines index forums, so posting in forums and blogs is also a way to get quality backlinks with the anchor text you want. If the forum or blog is a respected one, a backlink from it is valuable. However, in some cases the forum or blog administrator can edit your post, or even delete it if it does not fit the forum or blog policy. Also, sometimes administrators do not allow links in posts unless they are relevant ones. In some rare cases (more an exception than a rule) the owner of a forum or blog may have banned search engines from indexing it, and in that case posting backlinks there is pointless.

4. RSS Feeds

You can offer RSS feeds to interested sites for free. When another site publishes your RSS feed, you get a backlink to your site and potentially a lot of visitors, who will come to your site for more details about the headlines and abstracts they read on the other site.
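
For reference, a single item in an RSS 2.0 feed might look roughly like the sketch below; the title, link and description are made up, and the <link> element is what points readers (and a backlink) to your site.

   <item>
     <title>Five things to check before adopting a dog</title>
     <link>http://dog-adopt.net/adoption-checklist.html</link>
     <description>A short checklist for prospective dog adopters.</description>
   </item>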

5. Affiliate Programs

Affiliate programs are also good for getting more visitors (and buyers) and for building quality backlinks, but they tend to be an expensive way because the affiliate commission is generally in the range of 10 to 30%. But if you have an affiliate program anyway, why not use it to get some more quality backlinks?

6. News Announcements and Press Releases

Although this is hardly an everyday way to build backlinks, it is an approach that gives good results if handled properly. There are many sites that publish news announcements and press releases for free or for a small fee. A professionally written press release about an important event can bring you many visitors, and a backlink from a respected site is a good boost to your SEO efforts. The tricky part is that you cannot issue press releases when there is nothing newsworthy. That is why we say that news announcements and press releases are not a routine way to build backlinks.

Link Practices That Are To Be Avoided

There has been much discussion in the last few months about reciprocal linking. In the past few Google updates, reciprocal links were one of the targets of the search engine's latest filters. Many webmasters had agreed upon reciprocal link exchanges in order to boost their sites' rankings. In a link exchange, one webmaster places a link on his website that points to another webmaster's website, and vice versa. Many of these links were simply not relevant and were just discounted. So while the irrelevant backlinks were ignored, the outbound links still got counted, diluting the relevancy score of many websites. This caused a great many websites to drop off the Google map.

There is a Google patent in the works that will deal with not only the popularity of the sites being linked to, but also how trustworthy a site is that you link to from your own website. This will mean that you could get into trouble with the search engine just for linking to a bad apple.

Many webmasters have more than one website. Sometimes these websites are related, sometimes they are not. You also have to be careful about interlinking multiple websites on the same IP. If you own seven related websites, then a link to each of those websites on a page could hurt you, as it may look to a search engine like you are trying to do something fishy. Many webmasters have tried to manipulate backlinks in this way, and too many links to sites with the same IP address is referred to as backlink bombing.

One thing is certain: interlinking sites doesn't help you from a search engine standpoint. The only reason to interlink your sites in the first place might be to provide your visitors with extra resources to visit. In that case, it would probably be okay to provide visitors with a link to another of your websites, but try to keep instances of linking to the same IP address to a bare minimum. One or two links on a page here and there probably won't hurt you.

Keywords – the Most Important Item in SEO - Wednesday, September 07, 2011

Keywords are the most important SEO element for every search engine – they are what search strings are matched against. Choosing the right keywords to optimize for is thus the first and most crucial step to a successful SEO campaign. If you fail at this very first step, the road ahead is very bumpy and most likely you will only waste your time and money. There are many ways to determine which keywords to optimize for, and usually the final list is made after a careful analysis of what the online population is searching for, which keywords your competitors have chosen and, above all, which keywords you feel describe your site best.

1. Choosing the Right Keywords to Optimize For

It seems that the time when you could easily top the results for a one-word search string is long gone. Now that the Web is so densely populated with sites, it is next to impossible to achieve consistently top ratings for a one-word search string. Achieving consistently top ratings for two-word or three-word search strings is a more realistic goal.

For instance, if you have a site about dogs, do NOT try to optimize for the keyword "dog" or "dogs". Instead, focus on keywords like "dog obedience training", "small dog breeds", "homemade dog food", "dog food recipes", etc. Success for very popular one- or two-word keywords is very difficult and often not worth the trouble; it is best to focus on less competitive, highly specific keywords.

The first thing you need to do is come up with keywords that describe the content of your website. Ideally, you know your users well and can correctly guess what search strings they are likely to use to search for you. You can also try the Website Keyword Suggestions Tool below to come up with an initial list of keywords. Run your initial list through the Google Keyword Suggestion tool; you'll get a related list of keywords. Shortlist a couple of keywords that seem relevant and have a decent global search volume.


Website Keyword Suggestions


When choosing the keywords to optimize for, you need to consider not only the expected monthly number of searches but also the relevancy of these keywords to your website. Although narrow keywords get fewer searches, they are a lot more valuable than generic keywords because the users searching for them will be more interested in your offerings. Let's say you have a section on your website where you give advice on what to look for when adopting a dog. You might discover that the "adopt german shepherd" keyphrase gives you better results than a keyword like "german shepherd dogs". This page is not of interest to current German shepherd owners but only to potential ones. So, when you look at the number of searches per month, consider the ones that fit the theme of your site.

2. Keyword Density

After you have chosen the keywords that describe your site and are supposedly of interest to your users, the next step is to make your site keyword-rich and to have good keyword density for your target keywords. Keyword density, although no longer a very important factor in SEO, is a common measure of how relevant a page is. Generally, the idea is that the higher the keyword density, the more relevant the page is to the search string. The recommended density is 3-7% for the 2 or 3 major keywords and 1-2% for minor keywords. Try the Keyword Density Checker below to determine the keyword density of your website.
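
As a quick worked example with made-up numbers: if a 500-word page uses a target keyword 20 times, its density for that keyword is 20 / 500 = 4%, which falls inside the recommended 3-7% range for a major keyword.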


Keyword Density Checker


Although there are no strict rules, try optimizing for a reasonable number of keywords – 5 or 10 is OK. If you attempt to optimize for a list of 300, you will soon see that it is just not possible to have good keyword density for more than a few keywords without making the text sound artificial and stuffed with keywords. And what is worse, there are severe penalties (including a ban from the search engine) for keyword stuffing, because it is considered an unethical practice that tries to manipulate search results.

3. Keywords in Special Places

Keywords matter not only in quantity but also in placement – keywords in the page title, the headings and the first paragraphs count for more than keywords at the bottom of the page. The reason is that the URL (and especially the domain name), file names and directory names, the page title and the headings of the separate sections are more important than ordinary text on the page. Therefore, all else being equal, if you have the same keyword density as your competitors but you also have keywords in the URL, this can boost your ranking considerably, especially with Yahoo!.

a. Keywords in URLs and File Names

The domain name and the whole URL of a site tell a lot about it. The presumption is that if your site is about dogs, you will have “dog”, “dogs”, or “puppy” as part of your domain name. For instance, if your site is mainly about adopting dogs, it is much better to name it “dog-adopt.net” than “animal-care.org”, because in the first case you have two major keywords in the URL, while in the second you have at most one potential minor keyword.

When hunting for keyword-rich domain names, don't get greedy. While from an SEO point of view it is better to have 5 keywords in the URL, just imagine how long and difficult to memorize such a URL would be. So you need to strike a balance between the keywords in the URL and site usability, which says that more than 3 words in the URL is way too much.
Probably you will not be able to come up with tons of good suggestions on your own. Additionally, even if you manage to think of a couple of good domain names, they might already be taken. In such cases a domain name suggestion tool like the one below can come in very handy.

Domain Name Suggestion Tool

File names and directory names are also important. Often search engines will give preference to pages that have a keyword in the file name. For instance, http://mydomain.com/dog-adopt.html is not as good as http://dog-adopt.net/dog-adopt.html but is certainly better than http://mydomain.com/animal-care.html. The advantage of keywords in file names over keywords in the domain name is that they are easier to change if you decide to move to another niche, for example.

b. Keywords in Page Titles

The page title is another special place, because the contents of the <title> tag are usually displayed by most search engines (including Google) in the search results. While you can technically leave the <title> tag empty (the title bar of the browser will then read “Untitled Document” or something similar), for SEO purposes you should not; instead, write a descriptive, keyword-bearing page title in it.

Unlike URLs, with page titles you can get a little wordy. If we go on with the dog example, the <title> tag of the home page of http://dog-adopt.net could include something like this: <title>Adopt a Dog – Save a Life and Bring Joy to Your Home</title>, <title>Everything You Need to Know About Adopting a Dog</title> or even longer.

c. Keywords in Headings

Normally headings separate paragraphs into related subtopics. From a literary point of view it may be pointless to have a heading after every other paragraph, but from an SEO point of view it is very good to have as many headings on a page as reasonably possible, especially if they have keywords in them.

There are no technical length limits for the contents of the <h1>, <h2>, <h3>, ... <hn> tags, but common sense says that headings that are too long are bad for page readability. So, as with URLs, you need to be wise about the length of headings. Another issue to consider is how the heading will be displayed. If it is a Heading 1 (<h1>), this generally means a larger font size, and in that case it is advisable to keep the heading under 7-8 words; otherwise it might wrap onto 2 or 3 lines, which does not look good, so avoid it if you can.
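
To make the idea concrete, a keyword-bearing heading structure for the dog-adoption example might look like the short sketch below; the wording is made up.

   <h1>Adopt a Dog</h1>
   <h2>Why adopting a homeless dog matters</h2>
   <h2>Preparing your home for an adopted dog</h2>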


Introduction – What Is SEO - Wednesday, September 07, 2011

Whenever you enter a query in a search engine and hit 'enter' you get a list of web results that contain that query term. Users normally tend to visit websites that are at the top of this list as they perceive those to be more relevant to the query. If you have ever wondered why some of these websites rank better than the others then you must know that it is because of a powerful web marketing technique called Search Engine Optimization (SEO).

SEO is a technique which helps search engines find and rank your site higher than the millions of other sites in response to a search query. SEO thus helps you get traffic from search engines.

This SEO tutorial covers all the necessary information you need to know about Search Engine Optimization – what it is, how it works and how the ranking criteria of the major search engines differ.

1. How Search Engines Work

The first basic truth you need to know to learn SEO is that search engines are not humans. While this might be obvious for everybody, the differences between how humans and search engines view web pages aren't. Unlike humans, search engines are text-driven. Although technology advances rapidly, search engines are far from intelligent creatures that can feel the beauty of a cool design or enjoy the sounds and movement in movies. Instead, search engines crawl the Web, looking at particular site items (mainly text) to get an idea what a site is about. This brief explanation is not the most precise because as we will see next, search engines perform several activities in order to deliver search results – crawling, indexing, processing, calculating relevancy, and retrieving.

First, search engines crawl the Web to see what is there. This task is performed by a piece of software called a crawler or a spider (or Googlebot, as is the case with Google). Spiders follow links from one page to another and index everything they find on their way. Given the number of pages on the Web (over 20 billion), it is impossible for a spider to visit a site daily just to see if a new page has appeared or if an existing page has been modified; sometimes crawlers may not visit your site for a month or two.

What you can do is check what a crawler sees on your site. As already mentioned, crawlers are not humans and they do not see images, Flash movies, JavaScript, frames, password-protected pages or directories, so if you have tons of these on your site, you'd better run the Spider Simulator below to see if these goodies are viewable by the spider. If they are not viewable, they will not be spidered, not indexed, not processed, etc. – in a word, they will be non-existent for search engines.


Spider Simulator


After a page is crawled, the next step is to index its content. The indexed page is stored in a giant database, from where it can later be retrieved. Essentially, the process of indexing is identifying the words and expressions that best describe the page and assigning the page to particular keywords. For a human it will not be possible to process such amounts of information but generally search engines deal just fine with this task. Sometimes they might not get the meaning of a page right but if you help them by optimizing it, it will be easier for them to classify your pages correctly and for you – to get higher rankings.

When a search request comes, the search engine processes it – i.e. it compares the search string in the search request with the indexed pages in the database. Since it is likely that more than one page (practically it is millions of pages) contains the search string, the search engine starts calculating the relevancy of each of the pages in its index with the search string.

There are various algorithms to calculate relevancy. Each of these algorithms has different relative weights for common factors like keyword density, links, or metatags. That is why different search engines give different search results pages for the same search string. What is more, it is a known fact that all major search engines, like Yahoo!, Google, Bing, etc. periodically change their algorithms and if you want to keep at the top, you also need to adapt your pages to the latest changes. This is one reason (the other is your competitors) to devote permanent efforts to SEO, if you'd like to be at the top.

The last step in search engines' activity is retrieving the results. Basically, it is nothing more than simply displaying them in the browser – i.e. the endless pages of search results that are sorted from the most relevant to the least relevant sites.

2. Differences Between the Major Search Engines

Although the basic principle of operation of all search engines is the same, the minor differences between them lead to major changes in results relevancy. Different factors matter to different search engines. There were times when SEO experts joked that the algorithms of Bing were intentionally made just the opposite of those of Google. While this might have a grain of truth, the fact remains that the major search engines like different things, and if you plan to conquer more than one of them, you need to optimize carefully.

There are many examples of the differences between search engines. For instance, for Yahoo! and Bing, on-page keyword factors are of primary importance, while for Google links are very, very important. Also, for Google sites are like wine – the older, the better – while Yahoo! generally has no expressed preference for sites and domains with tradition (i.e. older ones). Thus you might need more time for your site to mature enough to be admitted to the top in Google than in Yahoo!.


Duplicate Content Filter: What it is and how it works - Tuesday, September 06, 2011

Duplicate Content has become a huge topic of discussion lately, thanks to the new filters that search engines have implemented. This article will help you understand why you might be caught in the filter, and ways to avoid it. We'll also show you how you can determine if your pages have duplicate content, and what to do to fix it.

Search engine spam is any deceitful attempt to deliberately trick the search engine into returning inappropriate, redundant, or poor-quality search results. Many times this behavior is seen in pages that are exact replicas of other pages, created to receive better results in the search engine. Many people assume that creating multiple or similar copies of the same page will either increase their chances of getting listed in search engines or help them get multiple listings, due to the presence of more keywords.

In order to make a search more relevant to a user, search engines use a filter that removes duplicate content pages from the search results, and the spam along with them. Unfortunately, good, hardworking webmasters have fallen prey to the filters imposed by the search engines that remove duplicate content. Often these webmasters unknowingly spam the search engines, even though there are things they can do to avoid being filtered out. In order for you to truly understand the concepts you can implement to avoid the duplicate content filter, you need to know how this filter works.

First, we must understand that the term "duplicate content penalty" is actually a misnomer. When we refer to penalties in search engine rankings, we are actually talking about points that are deducted from a page in order to come to an overall relevancy score. But in reality, duplicate content pages are not penalized. Rather they are simply filtered, the way you would use a sieve to remove unwanted particles. Sometimes, "good particles" are accidentally filtered out.

Knowing the difference between the filter and the penalty, you can now understand how a search engine determines what duplicate content is. There are basically four types of duplicate content that are filtered out:

  1. Websites with Identical Pages - Pages within a website that are identical to each other are considered duplicate, and websites that are identical to another website on the Internet are also considered to be spam. Affiliate sites with the same look and feel which contain identical content, for example, are especially vulnerable to a duplicate content filter. Another example would be a website with doorway pages. Many times, these doorways are skewed versions of landing pages. However, these landing pages are identical to other landing pages. Generally, doorway pages are intended to spam the search engines in order to manipulate search engine results.
  2. Scraped Content - Scraped content is taking content from a web site and repackaging it to make it look different, but in essence it is nothing more than a duplicate page. With the popularity of blogs on the internet and the syndication of those blogs, scraping is becoming more of a problem for search engines.
  3. E-Commerce Product Descriptions - Many eCommerce sites out there use the manufacturer's descriptions for the products, which hundreds or thousands of other eCommerce stores in the same competitive markets are using too. This duplicate content, while harder to spot, is still considered spam.
  4. Distribution of Articles - If you publish an article and it gets copied and put all over the Internet, this is good, right? Not necessarily for all the sites that feature the same article. This type of duplicate content can be tricky, because even though Yahoo and MSN determine the source of the original article and deem it most relevant in search results, other search engines like Google may not, according to some experts.

So, how does a search engine's duplicate content filter work? Essentially, when a search engine robot crawls a website, it reads the pages, and stores the information in its database. Then, it compares its findings to other information it has in its database. Depending upon a few factors, such as the overall relevancy score of a website, it then determines which are duplicate content, and then filters out the pages or the websites that qualify as spam. Unfortunately, if your pages are not spam, but have enough similar content, they may still be regarded as spam.

There are several things you can do to avoid the duplicate content filter. First, you must be able to check your pages for duplicate content. Using our Similar Page Checker, you will be able to determine similarity between two pages and make them as unique as possible. By entering the URLs of two pages, this tool will compare those pages, and point out how they are similar so that you can make them unique.

Since you need to know which sites might have copied your site or pages, you will need some help. We recommend using a tool that searches for copies of your page on the Internet: www.copyscape.com. Here, you can put in your web page URL to find replicas of your page on the Internet. This can help you create unique content, or even address the issue of someone "borrowing" your content without your permission.

Let's look at the issue of some search engines possibly not crediting the source of the original content from distributed articles. Remember, some search engines, like Google, use link popularity to determine the most relevant results. Continue to build your link popularity, while using tools like www.copyscape.com to find how many other sites have the same article; if allowed by the author, you may be able to alter the article so as to make the content unique.

If you use distributed articles for your content, consider how relevant the article is to your overall web page and then to the site as a whole. Sometimes, simply adding your own commentary to the articles can be enough to avoid the duplicate content filter; the Similar Page Checker can help you make your content unique. Further, the more relevant articles you can add to complement the first article, the better. Search engines look at the entire web page and its relationship to the whole site, so as long as you aren't exactly copying someone's pages, you should be fine.

If you have an eCommerce site, you should write original descriptions for your products. This can be hard to do if you have many products, but it really is necessary if you wish to avoid the duplicate content filter. Here's another example of why using the Similar Page Checker is a great idea: it can tell you how to change your descriptions so as to have unique and original content for your site. This works well for scraped content too. Many scraped content sites offer news. With the Similar Page Checker, you can easily determine where the news content is similar, and then change it to make it unique.

Do not rely on an affiliate site which is identical to other sites, and do not create identical doorway pages. These types of behavior are not only filtered out immediately as spam; if another site or page is found to be a duplicate, there is generally no comparison of the page to the site as a whole, and that can get your entire site in trouble.

The duplicate content filter is sometimes hard on sites that don't intend to spam the search engines. But it is ultimately up to you to help the search engines determine that your site is as unique as possible. By using the tools in this article to eliminate as much duplicate content as you can, you'll help keep your site original and fresh.

Dynamic URLs vs. Static URLs - Tuesday, September 06, 2011

The Issue at Hand
Websites that utilize databases which can insert content into a webpage by way of a dynamic script like PHP or JavaScript are increasingly popular. This type of site is considered dynamic. Many websites choose dynamic content over static content because, if a website has thousands of products or pages, writing or updating each static page by hand is a monumental task.

There are two types of URLs: dynamic and static. A dynamic URL is a page address that results from the search of a database-driven web site or the URL of a web site that runs a script. In contrast to static URLs, in which the contents of the web page stay the same unless the changes are hard-coded into the HTML, dynamic URLs are generated from specific queries to a site's database. The dynamic page is basically only a template in which to display the results of the database query. Instead of changing information in the HTML code, the data is changed in the database.

But there is a risk when using dynamic URLs: search engines don't like them. Those at most risk of losing search engine positioning due to dynamic URLs are e-commerce stores, forums, sites utilizing content management systems or blog platforms like Mambo or WordPress, and any other database-driven website. Many times the URL that is generated for the content in a dynamic site looks something like this:

   http://www.somesites.com/forums/thread.php?threadid=12345&sort=date

A static URL on the other hand, is a URL that doesn't change, and doesn't have variable strings. It looks like this:

   http://www.somesites.com/forums/the-challenges-of-dynamic-urls.htm

Static URLs are typically ranked better in search engine results pages, and they are indexed more quickly than dynamic URLs, if dynamic URLs get indexed at all. Static URLs are also easier for the end-user to view and understand what the page is about. If a user sees a URL in a search engine query that matches the title and description, they are more likely to click on that URL than one that doesn't make sense to them.

A search engine wants to list only unique pages in its index. Some search engines combat the issue of near-identical dynamic URLs by cutting the URLs off after a specific number of variable-string characters (e.g.: ?, &, =).

For example, let's look at three URLs:

   http://www.somesites.com/forums/thread.php?threadid=12345&sort=date
   http://www.somesites.com/forums/thread.php?threadid=67890&sort=date
   http://www.somesites.com/forums/thread.php?threadid=13579&sort=date

All three of these URLs point to three different pages. But if the search engine purges the information after the first offending character, the question mark (?), now all three pages look the same:

   http://www.somesites.com/forums/thread.php
   http://www.somesites.com/forums/thread.php
   http://www.somesites.com/forums/thread.php

Now, you don't have unique pages, and consequently, the duplicate URLs won't be indexed.

Another issue is that dynamic pages generally do not have any keywords in the URL. It is very important to have keyword rich URLs. Highly relevant keywords should appear in the domain name or the page URL. This became clear in a recent study on how the top three search engines, Google, Yahoo, and MSN, rank websites.

The study involved taking hundreds of highly competitive keyword queries, like travel, cars, and computer software, and comparing factors involving the top ten results. The statistics show that of those top ten, Google has 40-50% of those with the keyword either in the URL or the domain; Yahoo shows 60%; and MSN has an astonishing 85%! What that means is that to these search engines, having your keywords in your URL or domain name could mean the difference between a top ten ranking, and a ranking far down in the results pages.

The Solution
So what can you do about this difficult problem? You certainly don't want to have to go back and recode every single dynamic URL into a static URL. This would be too much work for any website owner.

If you are hosted on a Linux server, then you will want to make the most of the Apache Mod Rewrite Rule, which gives you the ability to inconspicuously redirect one URL to another, without the user's (or a search engine's) knowledge. You will need to have this module installed in Apache; for more information, you can view the documentation for this module here. This module saves you from having to rewrite your URLs manually.

How does this module work? When a request comes in to a server for the new static URL, the Apache module redirects the URL internally to the old, dynamic URL, while still looking like the new static URL. The web server compares the URL requested by the client with the search pattern in the individual rules.

For example, when someone requests this URL:
   http://www.somesites.com/forums/thread-threadid-12345.htm

The server looks for and compares this static-looking URL to what information is listed in the .htaccess file, such as:

   RewriteEngine on
   RewriteRule thread-threadid-(.*)\.htm$ thread.php?threadid=$1

It then converts the static URL to the old dynamic URL that looks like this, with no one the wiser:
   http://www.somesites.com/forums/thread.php?threadid=12345

You now have a URL that will not only rank better in the search engines, but that your end-users can understand at a glance, while Apache's Mod Rewrite Rule handles the conversion for you and the dynamic URL keeps working behind the scenes.

If you are not particularly technical, you may not wish to attempt to figure out the complex Mod Rewrite code and how to use it, or you simply may not have the time to embark upon a new learning curve. Therefore, it would be extremely beneficial to have something do it for you. This URL Rewriting Tool can definitely help you. What this tool does is generate the Mod Rewrite rules for your .htaccess file that transparently map one URL to another, such as a static-looking URL to a dynamic one.

With the URL Rewriting Tool, you can opt to rewrite single pages or entire directories. Simply enter the URL into the box, press submit, and copy and paste the generated code into your .htaccess file on the root of your website. You must remember to place any additional rewrite commands in your .htaccess file for each dynamic URL you want Apache to rewrite. Now, you can give out the static URL links on your website without having to alter all of your dynamic URLs manually because you are letting the Mod Rewrite Rule do the conversion for you, without JavaScript, cloaking, or any sneaky tactics.

Another thing you must remember to do is to change all of your links in your website to the static URLs in order to avoid penalties by search engines due to having duplicate URLs. You could even add your dynamic URLs to your Robots Exclusion Standard File (robots.txt) to keep the search engines from spidering the duplicate URLs. Regardless of your methods, after using the URL Rewrite Tool, you should ideally have no links pointing to any of your old dynamic URLs.

You have multiple reasons to utilize static URLs in your website whenever possible. When that is not possible and you need to keep your database-driven content at those old dynamic URLs, you can still give end-users and search engines a static URL to navigate, while behind the scenes they are still your dynamic URLs in disguise. When a search engine engineer was asked if this method was considered "cloaking", he responded that it was not, and that in fact search engines prefer you do it this way. The URL Rewrite Tool not only saves you time and energy by helping you use static URLs and converting them transparently to your dynamic URLs, but it will also save your rankings in the search engines.
