Tag Archives: SEO

Search Engine Optimization & SEO-rich Web Content Development

Search Engine Optimization (SEO) is the process of improving a website's ranking in search engine results. It involves editing the site's content, HTML, and images.

Content is always king, and it should be closely related to its target keywords. The content portion of the web development process is vital to the overall success of the website, its Google ranking, and its PageRank.


Web content is the main reason people come to your Web pages, but if your design, architecture, and interactivity don’t deliver that content, they will leave.


Keep in mind that content is still king.

Basically, there are two types of Web Content:



Text Web Content
This is the written content on the page, both inside images and in text blocks. The best textual Web content is text written specifically for the Web, rather than simply copied and pasted from a print source. Good textual Web content also has internal links to help readers find more information and to aid in scanning the text. Finally, Web text should be written for a global audience, as even local pages can be read by anyone around the world.


Multimedia Web Content
The other type of Web content is multimedia. Simply put, multimedia is any content that isn’t text; it includes the following types.

Images are the most common way to add multimedia to websites. Images on Web pages should be optimized so that they download quickly, since they affect the site's load speed.


Animation can be created using GIF images or using Flash, JavaScript, Ajax or other animation tools.


Sound is embedded in a Web page so that readers hear it when they enter the site or when they click a link to turn it on. Always keep in mind that sound on Web pages can create frustration, especially if it plays automatically and there is no easy way to turn it off.


Video is becoming more and more popular on Web pages, but it can be challenging to add video that works reliably and well across different browsers.


What should your Web content contain?


1)  It should be simple, lucid, and easy to read and understand.


2)  It should be original content (no copying and pasting).


3)  Search engines prefer grammatically correct content.


4)  Use optimum keywords and LSI (Latent Semantic Indexing) keywords after proper keyword analysis.


5)  Web Content should be written after proper SEO analysis.


Hopefully the above article will help you write content that is more SEO-friendly and rich.


The Best Web Search Engines List

There are many search engines available on the Internet.
Most users want a single search engine that delivers a few basic key features:


  • Relevant results (results you are actually interested in)
  • Uncluttered, easy to read interface
  • Helpful options to broaden or tighten a search
  • Quick response (less time consuming)

I prefer Google. Google is fast, relevant, and the largest single catalog of Web pages available today.
Below are some other widely used search engines.

Alexa Rank : 27
Estimated Unique Monthly Visitors : 165,000,000

Alexa Rank : 4
Estimated Unique Monthly Visitors : 160,000,000

Alexa Rank : 52
Estimated Unique Monthly Visitors : 125,000,000

Alexa Rank : 64
Estimated Unique Monthly Visitors : 33,000,000

Alexa Rank : 118
Estimated Unique Monthly Visitors : 19,000,000

Alexa Rank : 1640
Estimated Unique Monthly Visitors : 4,300,000

Alexa Rank : 3617
Estimated Unique Monthly Visitors : 2,900,000

Alexa Rank : 6358
Estimated Unique Monthly Visitors : 2,700,000

Alexa Rank : 3038
Estimated Unique Monthly Visitors : 2,600,000

Alexa Rank : 1862
Estimated Unique Monthly Visitors : 2,000,000

Alexa Rank : 5903
Estimated Unique Monthly Visitors : 1,450,000

Alexa Rank : 4422
Estimated Unique Monthly Visitors : 1,150,000

Alexa Rank : 7676
Estimated Unique Monthly Visitors : 700,000


Note: All of the above search engine data was collected on 03/31/2012.


SEO Tips to Improve Your Website’s Google Ranking & Website Traffic

Boost your website's ranking in major search engines by following the SEO tips below:


1) Website content is the most important factor in SEO. Good, well-written (correct spelling and grammar), unique, quality content that contains your primary keywords and phrases will always lift your site's ranking. But search engines prefer natural-language content 🙂 so don't overload your text with keywords.


2) Create a network of quality back-links using your keyword phrase as the link text, i.e. if your target is ‘Scriptarticle’ then link to ‘Scriptarticle’ instead of a ‘Click here’ link. Not only should your links use keyword anchor text, but the surrounding (description) text should also be related to your keywords.
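For instance, a keyword-anchored link compared with a generic one (‘Scriptarticle’ and the URL here stand in for whatever your target phrase and page are):

```html
<!-- Good: the anchor text carries the target keyword -->
<a href="http://www.scriptarticle.com/">Scriptarticle</a>

<!-- Weak: the anchor text tells search engines nothing about the target page -->
<a href="http://www.scriptarticle.com/">Click here</a>
```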


3) You must have a unique, keyword-focused title tag and heading (h1) on every page of your site. If optimizing blog posts, optimize each post title independently of your blog title. Focus on search phrases, not single keywords, and put your location in your text (‘Our Jaipur Centre’, not just ‘Our Centre’) to help you get found in local search.


4) When link building, think quality, not quantity. A single good, authoritative link can do a lot more for your site than a dozen poor-quality links. Links from a high PageRank site are good, as high PR indicates high trust, so those backlinks carry more weight.


5) Give each page a focus on a single keyword phrase. Don’t try to optimize the page for several keywords at once.


6) Check for canonical issues, i.e. www and non-www domains. Decide which you want to use and 301 redirect the other to it. In other words, if you prefer http://www.yourdomain.com, then http://yourdomain.com should redirect to it.
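A minimal sketch of such a 301 redirect in an Apache .htaccess file, assuming mod_rewrite is enabled and using ‘yourdomain.com’ as a placeholder:

```apache
# Send non-www requests to the www hostname with a 301 (permanent) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^yourdomain\.com$ [NC]
RewriteRule ^(.*)$ http://www.yourdomain.com/$1 [R=301,L]
```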


7) Frames, Flash and AJAX all share a common problem: you can't link to a single page. Spiders and crawlers can crawl text, not Flash or images. Don't use frames at all, and use Flash and AJAX as little as you can for the best SEO results.


8) If you want a new website to be spidered, submitting it through Google's regular submission process can take many days. The quickest and easiest way to get your site crawled is to get a link to it from another quality site.


9) SEO doesn't matter if you have a weak or non-existent call to action. Be sure your call to action is clear and present.


10) Optimize the text in your RSS feeds just as you do your posts and web pages. Use descriptive, keyword-rich text in your titles and descriptions.


11) Use captions and alt attributes with your images. As with newspaper photos, place keyword-rich captions with your images. Many searches are for a keyword plus one of the words in a caption.
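As a sketch, an image with a keyword-rich alt attribute and caption (the file name and wording are made-up examples):

```html
<figure>
  <img src="jaipur-seo-training.jpg" alt="SEO training session in Jaipur" />
  <figcaption>Our SEO training centre in Jaipur</figcaption>
</figure>
```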


12) Good global navigation, linking and paging will serve you much better than relying only on an XML Sitemap.


13) When purchasing or exchanging links, check the Google cache date of the page where your link will be located. Search for ‘cache:URL’, substituting the actual page address for ‘URL’. If the page isn't there, or the cache date is more than a month old, the page isn't worth much.


14) Make sure your URLs return server headers with a ‘200 OK’ or ‘301 Moved Permanently’ status. If the status shows anything else, check that your URLs are set up properly. You can find online tools for checking server headers.
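As a sketch, the check can be scripted with Python's standard urllib; a HEAD request fetches only the headers, and any URL you pass to header_status would be one of your own pages:

```python
# Minimal header-status checker: flag anything that is not 200 or 301.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

OK_STATUSES = {200, 301}  # the statuses this tip recommends

def header_status(url: str) -> int:
    """Return the HTTP status code for a HEAD request to `url`."""
    req = Request(url, method="HEAD")
    try:
        with urlopen(req) as resp:
            return resp.status
    except HTTPError as err:
        # 4xx/5xx responses still carry a status code
        return err.code

def is_healthy(status: int) -> bool:
    """True only for the statuses the article considers acceptable."""
    return status in OK_STATUSES

print(is_healthy(200), is_healthy(404))  # prints: True False
```

A 302 (temporary) redirect would also be flagged here, which matches the tip: for a permanent canonical redirect you want 301, not 302.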


15) Social marketing is a part of SEO. The more you understand sites like Twitter, Facebook, LinkedIn and Digg, the better you will be able to compete in search.


16) Some of your most valuable links may not appear on websites at all, but in e-mail communications such as newsletters.


17) Add components to your blog such as reviews, sharing functions, ratings, images, visitor comments and a photo gallery.


Googlebot & Site Crawl

How to Get Googlebot (Google) to Crawl Your Site


Googlebot is Google's web-crawling bot, or spider. It collects data from web pages to build a searchable index for the Google search engine. Crawling is simply the process by which Googlebot visits new and updated pages; algorithmic programs determine which sites to crawl, how often, and how many pages to fetch from each site.


As Googlebot visits a website, it detects links (src and href attributes) on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
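That link-detection step can be sketched with Python's standard-library HTML parser, collecting every href and src value on a page (the snippet of HTML is a made-up example):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href/src values, the attributes a crawler follows."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

page = '<a href="/about.html">About</a><img src="/logo.png">'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # prints: ['/about.html', '/logo.png']
```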


If webmasters wish to control which information on their site is available to Googlebot, they can do so with the appropriate directives in a robots.txt file, or by adding the meta tag


<meta name="Googlebot" content="nofollow" />


to the web page.


Once you’ve created your robots.txt file, there may be a small delay before Googlebot discovers your changes.


Googlebot discovers pages by visiting all of the links on every page it finds, then following those links to other web pages. New web pages must be linked from other known pages on the web in order to be crawled and indexed, or be submitted manually by the webmaster.


How to Control or Stop Search Engines from Crawling Your Website Using robots.txt


Website owners can instruct search engines on which pages to crawl and index; they can use a robots.txt file to do so.

When a search engine robot wants to visit a website URL, say http://www.domainname.com/index.html (as defined by the directory index), it first checks http://www.domainname.com/robots.txt to see if there are specific directives to follow. Let's suppose it finds the following code in the robots.txt:

User-agent: *
Disallow: /


The “User-agent: *” line means this directive applies to all robots (the * symbol means all).
The “Disallow: /” line tells the robot that it should not visit any pages on the site.
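You can sanity-check rules like these before deploying them with Python's standard-library robots.txt parser (urllib.robotparser); the URL below is just the article's example domain:

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# With "Disallow: /" every path is off-limits to every robot.
print(rp.can_fetch("*", "http://www.domainname.com/index.html"))  # prints: False
```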


Important considerations when using a robots.txt file:

1) Robots that choose to follow the instructions will try to fetch this file and read the instructions before visiting the website. If this file doesn't exist, web robots assume that the site owner wishes to provide no specific instructions.

2) A robots.txt file on a website functions as a request that the specified robots ignore the specified files or directories during a crawl.

3) For websites with multiple sub domains, each sub domain must have its own robots.txt file. If domainname.com had a robots.txt file but sub.domainname.com did not, the rules that would apply for domainname.com would not apply to sub.domainname.com.

4) The robots.txt file is available to the public to view. Anyone can see what sections of your server you don’t want robots to use.

5) Robots can ignore your /robots.txt.

6) Your robots.txt file should be in the root of your domain. In our server's configuration this would be the public_html folder in your account. If your domain is “domainname.com” then bots will look for the file at http://domainname.com/robots.txt. If you have add-on domains and want to use a robots.txt file in those as well, you will need to place a robots.txt file in the folder you specified as the root of the add-on domain.


Some examples:

User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/


In this example, the site owner tells ALL robots (remember, the * means all) not to crawl four directories on the site (cgi-bin, images, tmp, private). If you do not specify files or folders to be excluded, the bot is understood to have permission to crawl everything else.


To exclude ALL bots from crawling the whole server.
User-agent: *
Disallow: /


To allow ALL bots to crawl the whole server.
User-agent: *


To exclude A SINGLE bot from crawling the whole server.
User-agent: BadBot
Disallow: /


To allow A SINGLE bot to crawl the whole server.
User-agent: Google

User-agent: *
Disallow: /


To exclude ALL bots from crawling the ENTIRE server except for one file.
🙂 Tricky, since there is no ‘Allow’ directive in the original robots.txt standard (though some crawlers, including Googlebot, do support one). What you have to do is simply place all the files you do not want crawled into one folder, and leave the file to be crawled above it. So if we placed all the files we didn't want crawled in a folder called SCT, we would write the robots.txt rule like this.


User-agent: *
Disallow: /SCT


Or you can do each individual page like this.
User-agent: *
Disallow: /SCT/home.html


To create a Crawl Delay for the whole server.
User-agent: *
Crawl-delay: 10


If you wish to block one page, you can add a <meta> robots tag to that page:
<meta name="robots" content="noindex" />

You can learn more about the robots.txt file at http://www.robotstxt.org/


Meta refresh redirect (tag) and Search Engines

The meta refresh tag, or meta redirect, is a tool for reloading and redirecting web pages. The meta refresh tag is easy to use, but many people don't know that innocent use of this tag may significantly lower your page rank, or even get your pages banned by some search engines.


The meta tag belongs within the <head> of your HTML document. When used to refresh the current page, the syntax looks like this:


<meta http-equiv="refresh" content="600">


<meta> – This is the HTML tag. It belongs in the <head> of your HTML document.


http-equiv="refresh" – This attribute tells the browser that this meta tag is sending an HTTP command rather than a standard meta tag. Refresh is an actual HTTP header that can also be sent by the web server. It tells the browser that the page is going to reload or redirect somewhere else.


content="600" – This is the amount of time, in seconds, until the browser reloads the current page.
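The same tag can also redirect to a different page after the delay by adding a url parameter to the content attribute; the destination below is a placeholder:

```html
<!-- Reload the current page every 600 seconds -->
<meta http-equiv="refresh" content="600">

<!-- Redirect to a placeholder destination after 5 seconds -->
<meta http-equiv="refresh" content="5; url=http://www.yourdomain.com/new-page.html">
```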


However, when using this HTML redirect code, please ensure that you don't use it to trick search engines, as this could get your website banned. It is always better to work hard and learn quality ways to drive traffic to your website.


Meta refresh tags have some drawbacks:


  • Meta refresh redirects have been used by spammers to fool search engines, so search engines remove such sites from their databases. If you use a lot of meta refresh tags to redirect pages, the search engines may decide your site is spam and delete it from their index. It's better to use a 301 server redirect instead.
  • If the redirect happens quickly (in less than 2–3 seconds), readers with older browsers can't hit the “Back” button. This is a usability problem.
  • If the redirect happens quickly and goes to a non-existent page, your readers won't be able to hit the “Back” button either, a usability problem that will cause people to leave your site completely.
  • Refreshing the current page can confuse people. If they didn't request the reload, some people may become concerned about security.
Alternatives to META refresh, or the best use of the meta refresh tag
  • Since search engines constantly change their algorithms and spam policies, a tag that is fine one week could drop you to the bottom of the rankings the next. It's best not to use the META refresh attribute on pages you want indexed, but if you do, set it to at least 10 seconds.
  • Server-side redirection is a better way to ensure that visitors can still find your Web pages after you make changes, because there are no spamming penalties associated with it. The most common use of server-side redirects is to send visitors to a custom error document when they enter an invalid URL.
  • Although it's a safer, more elegant solution, server-side redirection is more technically demanding than using META tag or JavaScript redirects. But it won't get you banned either! You'll need to edit the .htaccess file on your server.
  • If you're using a web host instead of running your own server, the server administrator will probably have to make the change for you. Contact your Web host to see if they offer that service.