
Saturday, May 19, 2012

Why Search Engine Optimization may Count for more Conversions than you Think

At SEO Inc. we measure success based on three factors: rankings, traffic and conversions – conversions of course being the most important indicator of success. I mean, it is the main thing you are trying to achieve online, right?
Interestingly, SEO in many cases delivers more conversions than it is given credit for, especially when a business has a long consumer purchase process. Let’s take a moment to consider why. We will create an example from the consumer perspective.
Imagine you are looking for a diamond ring. You begin your search for the perfect ring by using Google. You perform a few searches such as:
  • Diamond Rings
  • Best Diamond Rings
  • Engagement Rings
The search results provide you with a variety of top brands that sell the product. Over the next couple weeks you browse these websites, perform a few more searches and then finally decide on the ring you want to buy.
conversion analytics infographic
Now when you started the search you weren’t ready to make the purchase. Because of this, months go by. But then you have enough money, go back to the website and purchase the ring. But how did you go back to the website? Well, this time you didn’t do a Google search for it. You already knew the website you wanted to buy from, so you simply became a direct visit. Or maybe you did one last browse on a site that rates diamond rings. Finally, you clicked a link back to the site you originally found through search.
The issue is, if it weren’t for the website’s high search rankings for the non-branded term in the first place, you may never have become a customer.
Here are some things to consider on this from Google.
AdWords cookies expire 30 days after a customer’s click, while Analytics uses a cookie that lasts six months to two years. That means if a customer completed a conversion 31 days after clicking on an AdWords ad, the conversion wouldn’t be recorded in AdWords but would be in Analytics.
While this is true, keep in mind a user can always clear cookies in their browser. This would mean that the tracking method would be deleted and the goal would not be attributed to the first visit.
Referrals and organic searches can steal conversions from AdWords and vice-versa, as Analytics attributes the conversion to the last source.
I search for diamond rings and find your site and the ring I want. Six months later I am browsing the internet and find a great review of the ring on another site. I click a link they have and visit the same site and convert. Google will attribute that conversion to the referring site. Where is the credit for search?!

Search Engine Optimization March Madness! [Infographic]

Sure, March Madness is a crazy time for basketball. But it is also a crazy time for search engine optimization! In this graphic we take a look at March Madness from an SEO perspective. We didn’t include everything to be considered. For instance, rich snippets are missing, but you get the idea!
Search Engine Optimization March Madness

Can Google Crawl the new Facebook Business Page Timelines?

There has been an ugly rumor going around, started by some very large companies in fact, that Google cannot crawl Facebook pages.
As many of you know, Google has had problems crawling JavaScript and Ajax in the past. There are of course modifications you can make to help get this type of online content indexed, but you have to make the effort. Furthermore, there are really two categories of JavaScript and Ajax: JavaScript that, once activated, generates indexable content on the page, and JavaScript that, when activated, simply modifies the indexable content already there. It is really the activation that has been the problem for Google.
For example, the SEO Inc. Facebook page has plenty of great content on its new Facebook Business Page Timeline, but turn off JavaScript and visit the page and this is what you get.
As you can see, all of the posts are now missing due to turning off the JavaScript.
Can Google Crawl Facebook
So can Google crawl the content on the new Facebook Business Page Timelines? We believe so. First, let’s take a look at a post by Matt Cutts.
Can Google Crawl Ajax and JavaScript Matt Cutts
As we can see, back in November of 2011 Matt Cutts confirmed that the Googlebot has the ability to execute Ajax and JavaScript to index some dynamic content. We would assume that Google would do everything in their power to index Facebook content. So it is pretty safe to believe that they are using these or similar abilities to index Facebook pages.
Furthermore, any SEO can look at their backlink profile in webmaster tools and see links from Facebook pages.
Webmaster Tools Facebook Backlinks
As of now, all signs point to Google being able to crawl Facebook pages and index Facebook backlinks. Have question or comment? Leave it below!
Footnotes
  • Facebook has commented out content. However, Google would still have the ability to access this. Google may be making an exception for Facebook.
  • The webmaster tools backlink report shows links from Facebook pages as well as from the http://en-gb.facebook.com subdomain.

How Google Classifies You

Google AdWords has a very useful targeting feature for delivering ads to users within its display network called interest categories. Interest categories are buckets of users who have been classified as interested in a category based on their browsing behavior. There are over a thousand of these categories and some 500 million users classified. If you browse the Web, you’ve been classified.
From an advertiser perspective interest categories are awesome. We use them here at SEO Inc to deliver outstanding results both for client branding and direct response initiatives. We like interest categories because they expand the targeted audience available to an advertiser and provide a layer of control when combined with other display targeting options such as contextual or remarketing.
On the other side of the coin are the internet users who are being classified. Their perspective is not always so rosy because for some it seems like a violation of privacy. Personally I think that behavioral targeting is a great thing. I want to be shown ads that are highly relevant to my interests. I don’t want to see ads for senior living or baby toys – I want to see ads for solutions to my particular wants and needs. If an advertiser can figure out what I need based on my browsing history they have saved me time and made my life more efficient.
Like them or not, you’ve probably been classified. Here’s how to view, edit, remove and even opt out of interest category ads. Navigate to the following URL: http://www.google.com/settings/ads/onweb. Under the “Your Categories” section select “remove or edit”, which should bring you to a page that contains a table like this:


It’s definitely an interesting experience to see the categories that Google has matched to your browsing history for the first time. My demographics are spot on, and I’ve been browsing eCard sites for a client, which explains that category.
If you’re like me you’d rather not see ads for Cards & Greetings, but marketing and search engine solutions would be useful. First click “remove” on any existing categories you don’t like, then add your preferences:


Google’s transparency and user enabled controls for interest categories are impressive, but there’s much more to the Web than Google. If you’d like to know more about how you’re being tracked online beyond Google try downloading Collusion, a Firefox add-on that allows you to see all the third parties that are tracking you across the Web.
How have you been tracked and categorized, and what is your opinion of behaviorally targeted advertisements? If you have an interesting experience as a user or advertiser, let’s continue the conversation in the comments.

Google Over-Optimization Penalty (Penguin Update) Is your Website at Risk?

We’ve been hearing about it for weeks. It’s sparked debate in many SEO Inc. meetings and now it is finally here. Google has launched an update targeting webspam in search results. It has been interesting to read the buzz online about this update. First, we’ll tell you what the update entails according to Google. Next, we will get into some winners and losers. Then we will end this post with a little SEO insight and some reactions.

What is the Update Targeting?

In this portion I would like to list a few choice quotes from Google and then we can talk about them. These quotes are pulled from Google’s Post on Another step to reward high-quality sites.
In the post Google mentions, “Earlier this year we launched a page layout algorithm that reduces rankings for sites that don’t make much content available ‘above the fold.’”
I thought this was pretty interesting. I think a lot of people missed this update. If you did, get that content higher up on the page.
Google goes on to say the following. “In the next few days, we’re launching an important algorithm change targeted at webspam. The change will decrease rankings for sites that we believe are violating Google’s existing quality guidelines. We’ve always targeted webspam in our rankings, and this algorithm represents another improvement in our efforts to reduce webspam and promote high quality content. While we can’t divulge specific signals because we don’t want to give people a way to game our search results and worsen the experience for users, our advice for webmasters is to focus on creating high quality sites that create a good user experience and employ white hat SEO methods instead of engaging in aggressive webspam tactics.”
Here we see that Google wants you to create high quality websites. So let’s find out more concerning what they view as a high quality site. Google starts off with an example of a website that is keyword stuffing.
“Here’s an example of a webspam tactic like keyword stuffing taken from a site that will be affected by this change:”
This one is pretty hard to argue with. I mean, no one wants to Google something and then come to a page with keywords stuffed into it like this. So I have to say, they got it right here.
Google then goes on to show another example. “Of course, most sites affected by this change aren’t so blatant. Here’s an example of a site with unusual linking patterns that is also affected by this change. Notice that if you try to read the text aloud you’ll discover that the outgoing links are completely unrelated to the actual content, and in fact the page text has been “spun” beyond recognition.”
Again, bravo to Google for making this not OK in the search world, but wait, weren’t these things already not OK? The answer is yes! Almost all of the items targeted in this update have been frowned upon and spoken out against for some time. However, Google says they are now making better updates to the algorithm to catch this type of spam.
Google says that sites affected by this change might not be easily recognizable as spamming without deep analysis or expertise, but the common thread is that these sites are doing much more than white hat SEO; they are being spammy.
This is also a multilingual update; the algorithm affects about 3.1% of queries in English to a degree that a regular user might notice. The change affects roughly 3% of queries in languages such as German, Chinese, and Arabic, but the impact is higher in more heavily-spammed languages. For example, 5% of Polish queries change to a degree that a regular user might notice.
So now we have a decent idea of what Google says the update entails. Let’s take a look around the web.
Winners and Losers from the Google Webspam (Penguin) Update
There have already been some big gains and massive losses in this algorithm change. In this post by Danny Sullivan, we see who won and lost.
Big Winners Include
  • Spotify.com – Traffic up 30%
  • Yellowbook.com – Traffic up 30%
  • Observer.com – Traffic up 30%
  • MensHealth.com – Traffic up 30%
Big Losers Include
  • Similarsites.com – Traffic down 73%
  • Doc-txt.com – Traffic down 72%
  • Cubestat – Traffic down 69%
  • 5ty.org – Traffic down 65%
Overall, it looks like the update has really made an impact on sites. It’s pretty interesting to see how drastic it really has been. Seeing how drastic this change in traffic has been for these websites leads me to believe that Google may be looking at much more than they let on.
SEO Strategies for Google Webspam Update
This Google Penguin Webspam Update is still very new. So it is hard to make any large SEO policy changes without fully evaluating the change. That being said, we do know a few things.
Briefly:
1. Get content high on the page
2. Do not keyword stuff
3. Do not spin content
4. Do not internally link non-relevant content
Nothing really new here… We don’t do this anyway.
Comments from Around the Web
The outcry has been tremendous to this update. Here are some of the things people are saying.
  • There is no way it only affected 3% of searches
  • People who have lost rankings are angry at Google (duh)
  • People feel that this is an effort to kill SEO and drive everyone to PPC and Google+
  • People feel that this is an update done to strategically increase Google’s income

Why HTML5? Where Did It Come From?

There is a W3C presentation that helps frame the historical reasons for HTML5. The slides are available in plain text [1] and Technicolor.[2] At the risk of confusing the facts,* I’ll try to explain why this is important and what is important about it.
For over a decade, XHTML1/1.1, a successor to HTML4, has been the most current version of HTML. XHTML1/1.1 leveraged the strengths of XML to create well-formed Web pages. These pages could be validated against a schema to test for compliance to a standard. Perhaps most importantly, it helped fix the issue of cross-browser incompatibility.
XHTML2 was going to be the successor to XHTML1/1.1, as the name suggests. However, this ended up not being the case; HTML5 is. Here’s why: XHTML2 was actually a different language, a new abstract approach to HTML. At least in the immediate future, making Web pages would have become more difficult. It was a departure from the trajectory of many HTML traditions:
  • IMG elements were being phased out in favor of OBJECT elements.[3]
  • The anchors, A elements, were being phased out because “all elements may now play the role of a hyperlink.” [4]
The objections to adopting XHTML2 were compounded by the fact that XHTML2 was not backward compatible, by design. This meant that browsers that could already read HTML4 and XHTML1/1.1 could not read XHTML2. It kinda seemed like an effort by a consortium of smarty-pants engineers to force an idealized hypertext markup language onto the World Wide Web, with disregard for the immediate needs – and sights set on the long run.
Fortunately, there was outrage about all of this [5] and the W3C took a different tack. The next generation of Web pages would be made using HTML5, rather than a new markup language. HTML5 would incrementally change HTML, instead of completely overhauling it. It would be a forgiving syntax, one that anticipates that there will be deviation from standards. Instead of forcing compliance – it makes recommendations for how Web browsers should adapt. It also adds some new features.[6]
*If nothing else, read this: Misunderstanding Markup: XHTML2/HTML5 Comic Strip by Brad Colbow.
  1. HTML5, XHTML2 – Learning from history how to drive the future of the Web: http://www.w3.org/2009/Talks/05-20-smith-html5-xhtml2/
  2. HTML5, XHTML2 – Learning from history how to drive the future of the Web: http://www.slideshare.net/sideshowbarker/html5-and-xhtml2
  3. XHTML™ 2.0 XHTML Image Module: http://www.w3.org/TR/xhtml2/mod-image.html#sec_20.1.
  4. XHTML™ 2.0 XHTML Hypertext Module: http://www.w3.org/TR/2004/WD-xhtml2-20040722/mod-hypertext.html#sec_10.1.
  5. Jeffrey Zeldman Presents: The Daily Report – XHTML 2 and all that (The Sky is Falling): http://www.zeldman.com/daily/0103b.shtml#skyfall
  6. HTML5 Differences from HTML4: http://www.w3.org/TR/html5-diff/

Social Media: Search Engine Optimization’s Best Friend?

Organic search engine optimization has changed over the years; there is no doubt about it. And today, it is more competitive than ever. Now social media is peeking around the corner (more like running full speed around the corner, actually), so how do these two types of internet marketing solutions work together?
With this question in mind, I have thrown together these bullet points. Take a look and let me know what you think.

How Social Media Helps SEO

  • Social media sites rank for targeted branded and non-branded terms helping secure a larger search space.
  • When blog content is pushed out through social media it gets exposure & stimulates organic linking.
  • Content must be generated for social media. Fresh content greatly benefits search.
  • Twitter content is indexed and shown in Google real time search.
  • The Twitter search engine is used by millions. Tweets allow you to optimize keywords for that engine.
  • YouTube videos rank in Google general search and are important for optimizing the video search space.
  • Social media sites drive traffic!

But that’s not all; social media is pretty powerful on its own. Here are some reasons why I think it’s important:

 Social Media Benefits

  • Build an online community of loyal users
  • Get customer mindshare
  • Strengthen brand identity
  • Create your own media channel
  • Get important demographic information
  • Watch the size of your reach grow exponentially with time
  • Have a presence in the world’s largest online communities
  • Grow your online presence
  • Make your brand likable and positive
  • Watch your community do your marketing for you
  • Manage your brand identity by responding to customer complaints
  • Get people talking about your products, announcements and content
  • Become the source of your own news
  • Optimize your brand for the millions of users/searchers on YouTube, Twitter and Facebook
So this is just a quick list. I’m sure there is plenty we can add to it. So you tell me, what did I forget? Why is social media important to search engine optimization? And why is it important on its own?

Building Links for Search Engine Optimization

Building links may be the toughest part of search engine optimization. People literally ask me about it all the time. And they have every right to do so, because let’s face it, it is not an easy part of search engine marketing.
Today, Matt Cutts released a new blog on effective techniques for link building in search engine optimization and I must say I really like his approach.  To me, building links is all about creating something of value online. Whether you are writing inspiring articles, information rich blogs or creating useful applications, links will come to you, but you have to offer the online world something worthwhile. And that is really what his post is all about.

In addition to touching on the basic search engine optimization principle that “content is king,” Cutts also describes the use of social media, email newsletters and RSS feeds in link building. This is something that we at Search Engine Optimization Inc. have been preaching for a long time. Each of these outlets is a delivery method that should be utilized to get your content exposure. The more exposure you can get, the larger potential you will have to build links with each new piece of content.
But that is enough talking by me. Please watch Matt’s video blog on link building in search engine optimization below. If you have any questions make sure to leave a comment.

 

PHP Session HTTP Header Optimization

Let’s make the Web faster. I noticed something that may be of interest with regard to PHP, sessions, and SEO. On at least one installation, the default HTTP headers sent by the function session_start( ) were set to disable cache. In some cases, I think these headers may not be “good for Googlebot.”
As you may know, Google made an algorithm update that takes into account many aspects of site performance, often called Page Speed. Optimizing HTTP headers is one of the checkpoints and enabling browser caching via HTTP headers is a sub-point. If Googlebot respects site performance enhancing HTTP headers then they’ll probably also acknowledge wacky ones like the PHP session_start( ) default HTTP headers that say, “do not store a copy of this page for cache; always check with the server to see if you have the most recent copy of a page immediately before and after you request a page because this content expired back in 1981 and today is April 13, 2010.”
The perceived solution for the installation was to start sessions only on pages that needed them, instead of everywhere. Now, revisiting the topic, I’ve also learned that using the session_cache_limiter( ) function to control which headers are sent may also work.
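Here is a minimal sketch of both ideas, assuming a stock PHP setup; treat it as a starting point to test rather than a drop-in fix, since the right cache policy depends on your site:

<?php
// Sketch only: send friendlier cache headers instead of PHP's default
// "nocache" set (Expires back in 1981, Cache-Control: no-store, etc.).
session_cache_limiter('private');   // or 'public' / 'private_no_expire'
session_cache_expire(30);           // cache lifetime in minutes

// And/or: only start the session on pages that actually need one.
$pageNeedsSession = !empty($_GET['cart']);   // hypothetical condition
if ($pageNeedsSession) {
    session_start();
}
?>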
What are your experiences with this?
-Sean

Search Engine Optimization vs. Website Usability

Mike Moran posted an article on Search Engine Guide today discussing a common folly in perspective regarding SEO, that being, optimizing a web page for search engines as opposed to customer interaction. While this is clearly an issue which can arise in SEO, in this post I would like to comment on a sister subject: the tug of war that can occur between search engine optimization and website usability.
When you are optimizing a site for search, the search marketer is often enticed to pull out all the stops. I mean, let’s face it, search engine optimization is competitive and a few keywords in the right place can sometimes make a big difference. But one very important item to keep in mind is usability. If the implementation of your SEO best practices is at all misleading, and causes the usability of the site to diminish, then visitors will bounce from your pages and your conversions will most likely go down. Also, eventually your rankings will slip.

In SEO, just like in life, honesty is the best policy. If you try to cut corners by not thoroughly thinking through an SEO strategy that makes sense for both the website users and the search engines, your site will suffer. This is really what Mike Moran is hinting at in his post. Take a look at this excerpt below:
In my talk Friday, I posed a list of things to do for improving your search marketing campaign:
  • Discovering the most popular keywords.
  • Using those keywords in your copy.
  • Conducting link campaigns to get links to your pages.
That’s not terribly controversial advice, but suppose you had a somewhat different list:
  • Discover what your customers want.
  • Provide the information they need.
  • Create information so useful that other sites link to your pages.
Ultimately, both of these perspectives need to be considered in every SEO strategy. You must have keywords in the right places, you must engage in some type of link building, and you have to optimize your site with best practices. But without considering what your customers want, giving them the information they need, and making sticky, popular pages, your site will never reach greatness.
Now this is nothing new for search engine optimization. People have been saying, “Content is King” for a long time. But in today’s online space it’s becoming even more important.
Matt Cutts, the head of web spam at Google, perhaps put it best in his recent video blog. To summarize, his main point was to create amazing content while using best SEO practices and get that content exposure through all your means. Cutts’ comments hit home for me. If you create something worthwhile online it will get the credit it deserves. But you have to optimize it and promote it properly, otherwise it may never be discovered. And when you do your optimization, make sure it’s done in a way that makes sense for your site and your customer first, then refine it for the search engines.

SEO Strategy for New Domains

When you first launch a new website, it’s a blank slate in the eyes of Google and the other search engines. Sure, the search engine spiders are able to glean some information about your site from the content and keywords that are present, but at first, they aren’t able to effectively measure the authority of your site based on these words alone.
Over time, they’ll make this value determination based on a number of different factors, including the number and quality of sites linking to your page, the number of links and shares your site receives on social networking sites and the hundreds of other elements that go into the natural search algorithm. But when your site has been newly launched, you’ll have the unprecedented opportunity to control how the search engines value your site, based on the actions you take.
Search expert David Wood refers to this breaking point as the “trust barrier” between Google and your site:
Search engines like Google automatically put up a wall when they come into contact with new sites. It’s like a trust barrier you can only lower over a period of time… And once you’ve successfully lowered the barrier, you can go wild!
To maximize the potential of this evaluation period, consider the following tips on how to implement an appropriate SEO strategy for your new domain:

Step #1 – Start out strong

Before you even begin thinking about conducting a backlinking campaign, make sure that your site is set up as effectively as possible from an internal SEO standpoint. Here’s what to consider:
  • Choose the best possible domain name for your site. SEOBook offers a great tutorial on selecting this important piece of digital real estate, which relies on three crucial elements: branding potential, how likely people will be to link to your site and how easily you’ll be able to get your domain ranked.
  • Make sure your internal navigation system is structured so that every page can be reached within three clicks. This helps the search engines to crawl all of your pages and ensures that any authority that is assigned to your pages over time will be distributed evenly.
  • Add breadcrumb navigation to all pages. In addition to enhancing the structure of your internal navigation system, adding breadcrumbs to each of your pages improves the user experience – a big consideration for Google.
  • Optimize your existing content. At first, you may not have a lot of content on your site, but be sure that any existing pages that are present at your launch have been optimized to include important keywords in your title tags, headlines, URL permalinks and body copy.
  • Improve your site’s loading times. The amount of time it takes your site to load is playing an increasingly large role in its SERPs rankings, so take the necessary steps to create a fast-loading site from the beginning by compressing image files, combining Javascript functions into a single file and eliminating white space in your site’s code and CSS files.
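To make that last point a little more concrete, here is a rough PHP build-script sketch for combining JavaScript files; the folder and file names are made up for the example, and a real project would usually add minification as well:

<?php
// Hypothetical build step: merge every script in js/ into one file
// so browsers make a single request instead of many.
$combined = '';
foreach (glob('js/*.js') as $file) {
    $combined .= "/* " . basename($file) . " */\n";
    $combined .= file_get_contents($file) . "\n";
}
file_put_contents('js/combined.js', $combined);
echo "Wrote js/combined.js (" . strlen($combined) . " bytes)\n";
?>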
At this stage in your site’s development, it’s more important to worry about making sure things are set up correctly “inside your house”. So in addition to the structural considerations listed above, it’s also important to take the keywords you’re using into account…

Step #2 – Put some effort into your keyword research

The web runs on keywords, so targeting the wrong keywords at the outset of your SEO campaign will make it difficult for you to see results. For this reason, it’s important to invest some time in ensuring that you’re building your site around the right target phrases.
Essentially, you want to find the “Goldilocks” of keywords – the words that get good search volume but that aren’t too competitive to rank for. Targeting the keyword “lose weight”, for example, would be a mistake. Even though the search volume is good, you’ll have to invest years of effort before you could even hope to get ranked well in the SERPs.
When it comes to specifics, the ideal search volume for your site will depend on your goals. If you’re planning a small niche-oriented site, keywords with as few as 500-1,000 exact match monthly searches might provide enough traffic to meet your earning expectations. Even if you anticipate growing your site to be much larger, it might still be a good idea to start with these long-tail keywords, as focusing on them will give you the confidence to tackle keywords with higher search volume and, consequently, higher competition.
Determining your ideal keyword competition is a bit trickier. Although many sites will suggest that you simply search Google for your exact-match target keyword and measure difficulty based on the number of competing pages that appear, this approach is naïve. Instead, it’s important to fully analyze the Top 10 results in Google for your target keyword, taking things like domain age, PageRank, page optimization and page backlinks into account.

Step #3 – Start small and be mindful of your link velocity

The goal of any good off-page SEO plan should be to create a backlink profile that’s as natural looking as possible, as Google likes to reward sites that appear natural versus those whose link profiles are cluttered with spam. So how do you think it looks in Google’s eyes when a new site with only five pages of content suddenly has thousands of low quality backlinks pointing at it? If you were Google, you’d penalize this site, as they’re obviously trying to game the system.
Of course, that’s an obvious example. The question of exactly how many links per day are acceptable is one that will never be really answered (unless Google suddenly decides to release their search algorithm), although most SEO experts recommend building no more than 20-100 links per day during your site’s first few months. But instead of worrying about this specific number, it’s a better idea to…

Step #4 – Focus on building high quality links first

Yes, I know you can go out and buy 10,000 links for just $10. But guess what? $10 is an extremely generous payment for what these links are worth…
When your site is first starting out, not only do you have the trust barrier to contend with, the search engines are still trying to puzzle out exactly what your site is about and where it should fall in the SERPs. Take advantage of this golden opportunity to pursue backlinks from authority sites in your specific niche to set your site up for maximum success in the search engines.

Step #5 – Plan for ongoing optimization

Finally, recognize that SEO isn’t a “set it and forget it” type of thing. Anyone who tells you that he “already did his SEO work” is sadly misinformed as to the ongoing nature of this work.
So for the first three months of your site’s existence, commit regular time to improving your site and courting the high quality backlinks that will help you to earn trust in Google’s eyes. Over time, you’ll find that your efforts are well worth the investment.

How to Optimize your URL Structure for Search

Of course it is usually best to keep your URL stable and unchanged once it is created, but sometimes change is necessary when URLs were not properly optimized in the initial setup. Improving your site’s URLs to be more SEO- and user-friendly can greatly benefit search engine ranking and click-through rates, but be careful—if done improperly, you may face a significant decrease in ranking that could take a long time to recover from. Here are some search engine optimization tips for URL optimization, as well as what to look out for and avoid.

Dynamic vs. Static URL

The first step is to consider changing your URL structure from dynamic to static. URLs with characters like “=”, “&” and “?” are not only difficult to read for search engines (leading to indexing problems), but also for users who are more likely to follow a link indicating what content the URL leads to.

Another problem is that dynamic URLs are often indexed in a variety of ways that can cause duplicate content issues and decrease link value. For example, if you have multiple pages with dynamic URLs that are identical until deep in the URL, search engines will often stop reading the URL once it reaches a symbol such as the “?”. Now the search engines see multiple pages but think they are all the same, moving on before reaching the part of the URL that indicates the pages are different. Changing your URLs to a static structure will ensure proper indexing, while preserving link value, with one unique URL for each page on your site.
Though it is sometimes best to leave the URL alone (especially if it’s ranking well), changing dynamic URLs to static URLs is almost always necessary. But of course, there’s more to it.
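One common way to serve static-looking URLs without rewriting a whole application is a small front controller. The sketch below is only an illustration; the path pattern and the product lookup are invented, and most sites would pair something like this with a rewrite rule in the web server configuration:

<?php
// Hypothetical front controller (index.php): the server routes requests here
// and we read the friendly path instead of ?product_id=123 style parameters.
$path  = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);  // e.g. "/products/blue-widgets"
$parts = array_values(array_filter(explode('/', $path)));

if (isset($parts[0], $parts[1]) && $parts[0] === 'products') {
    $slug = $parts[1];  // "blue-widgets" - a real site would look this up in its catalog
    echo 'Showing product: ' . htmlspecialchars($slug);
} else {
    http_response_code(404);
    echo 'Not found';
}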

Make the Change Useful

It’s one thing to upgrade to a static URL, but to make the change without making it more meaningful to your audience will cut your growth short. For example, the URL http://example.com/product tells the audience very little about the content on the page. Including keywords to actually describe what your customers will find on the ‘product’ page will make your link much more valuable to users and increase click-through rates.
Incorporating page-related keyword phrases into the URL will also support SEO efforts, but be cautious of stuffing and follow a limit of fewer than 10 keywords (eliminating “in”, “the”, “of”, etc.). Also, while keeping URLs short and descriptive, separating multiple keywords with dashes (hyphens) will make them more SEO-friendly, as search engines can better understand the individual keywords of the URL structure.
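As one possible way to automate this, here is a small PHP helper (one of many reasonable implementations, with an invented function name) that turns a page title into a short, hyphen-separated slug and drops the filler words mentioned above:

<?php
// Rough sketch: "The Best Blue Widgets in the World" becomes "best-blue-widgets-world".
function make_slug($title, $maxWords = 6) {
    $stopWords = array('in', 'the', 'of', 'a', 'an', 'and', 'for', 'to');
    $slug  = strtolower($title);
    $slug  = preg_replace('/[^a-z0-9\s-]/', '', $slug);        // strip punctuation
    $words = preg_split('/[\s-]+/', $slug, -1, PREG_SPLIT_NO_EMPTY);
    $words = array_values(array_diff($words, $stopWords));     // remove filler words
    return implode('-', array_slice($words, 0, $maxWords));    // keep it short
}

echo make_slug('The Best Blue Widgets in the World');  // best-blue-widgets-world
?>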

Redirect to New URL

Once the new SEO- and user-friendly URL is live, it’s important to put in place a 301 redirect to let search engines know that the URL location has permanently changed. Because 301 redirected pages do lose some of their link value, you should be prepared for a drop in rankings whenever a URL’s structure is changed. But by redirecting all of the old URLs to the newly optimized URLs, the accumulated link value will not be wasted on a 404 error page.
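On a PHP-based site, the redirect itself can be as simple as the sketch below; the URLs are placeholders, and many sites would set this up in the server configuration instead:

<?php
// Hypothetical example: permanently send visitors (and search engines)
// from the old dynamic URL's script to the new static URL.
header('HTTP/1.1 301 Moved Permanently');
header('Location: http://www.example.com/diamond-engagement-rings');
exit;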
To expedite the recovery process of gaining back link value, invite the search engines to come crawl your new site by updating and resubmitting your XML sitemap. Rather than waiting for search engines to find the new URL, providing an updated sitemap can improve site rankings by getting your site re-indexed more quickly. The sooner, the better in this case!
One of the final and most time consuming steps of URL optimization is to now update the internal links on your site to lead visitors to the new URL. Even though a 301 redirect was set up, it’s definitely worth the time and energy to update these links in order to preserve internal link value and avoid confusing and losing visitors.

More on Search Engine Optimization and URLS

Matt Cutts Talks about URLs and Search Engine Optimization
Click the link above to read a blog by Matt Cutts on URLs and search engine optimization or simply watch the video below!

Final Note

In the end, whether or not it’s a good idea to optimize your URL to be more SEO- and user-friendly is at your discretion. If your site is already ranking well, you may be better off letting it be. In essence, these guidelines can be very useful, but it’s important to consider these changes from a case-by-case perspective. If you’re really not sure, consult a qualified search engine optimization specialist before making risky changes.
 

55 Quick SEO Tips Even Your Mother Would Love

Everyone loves a good tip, right? Here are 55 quick tips for search engine optimization that even your mother could use to get cooking. Well, not my mother, but you get my point. Most folks with some web design and beginner SEO knowledge should be able to take these to the bank without any problem.
1. If you absolutely MUST use JavaScript drop-down menus, image maps or image links, be sure to put text links somewhere on the page for the spiders to follow.
2. Content is king, so be sure to have good, well-written and unique content that will focus on your primary keyword or keyword phrase.
3. If content is king, then links are queen. Build a network of quality backlinks using your keyword phrase as the link. Remember, if there is no good, logical reason for that site to link to you, you don’t want the link.
4. Don’t be obsessed with PageRank. It is just one itsy-bitsy part of the ranking algorithm. A site with lower PR can actually outrank one with a higher PR.
5. Be sure you have a unique, keyword focused Title tag on every page of your site. And, if you MUST have the name of your company in it, put it at the end. Unless you are a major brand name that is a household name, your business name will probably get few searches.

6. Fresh content can help improve your rankings. Add new, useful content to your pages on a regular basis. Content freshness adds relevancy to your site in the eyes of the search engines.
7. Be sure links to your site and within your site use your keyword phrase. In other words, if your target is “blue widgets” then link to “blue widgets” instead of a “Click here” link.
8. Focus on search phrases, not single keywords, and put your location in your text (“our Palm Springs store” not “our store”) to help you get found in local searches.
9. Don’t design your web site without considering SEO. Make sure your web designer understands your expectations for organic SEO. Doing a retrofit on your shiny new Flash-based site after it is built won’t cut it. Spiders can crawl text, not Flash or images.
10. Use keywords and keyword phrases appropriately in text links, image ALT attributes and even your domain name.
11. Check for canonicalization issues – www and non-www domains. Decide which you want to use and 301 redirect the other to it. In other words, if http://www.domain.com is your preference, then http://domain.com should redirect to it.
12. Check the link to your home page throughout your site. Is index.html appended to your domain name? If so, you’re splitting your links. Outside links go to http://www.domain.com and internal links go to http://www.domain.com/index.html.
Ditch the index.html or default.php or whatever the page is and always link back to your domain.
13. Frames, Flash and AJAX all share a common problem – you can’t link to a single page. It’s either all or nothing. Don’t use Frames at all and use Flash and AJAX sparingly for best SEO results.
14. Your URL file extension doesn’t matter. You can use .html, .htm, .asp, .php, etc. and it won’t make a difference as far as your SEO is concerned.
15. Got a new web site you want spidered? Submitting through Google’s regular submission form can take weeks. The quickest way to get your site spidered is by getting a link to it through another quality site.
16. If your site content doesn’t change often, your site needs a blog because search spiders like fresh text. Blog at least three times a week with good, fresh content to feed those little crawlers.
17. When link building, think quality, not quantity. One single, good, authoritative link can do a lot more for you than a dozen poor quality links, which can actually hurt you.
18. Search engines want natural language content. Don’t try to stuff your text with keywords. It won’t work. Search engines look at how many times a term is in your content and if it is abnormally high, will count this against you rather than for you.
19. Not only should your links use keyword anchor text, but the text around the links should also be related to your keywords. In other words, surround the link with descriptive text.
20. If you are on a shared server, do a blacklist check to be sure you’re not on a proxy with a spammer or banned site. Their negative notoriety could affect your own rankings.
21. Be aware that by using services that block domain ownership information when you register a domain, Google might see you as a potential spammer.
22. When optimizing your blog posts, optimize your post title tag independently from your blog title.
23. The bottom line in SEO is Text, Links, Popularity and Reputation.
24. Make sure your site is easy to use. This can influence your link building ability and popularity and, thus, your ranking.
25. Give link love, Get link love. Don’t be stingy with linking out. That will encourage others to link to you.
26. Search engines like unique content that is also quality content. There can be a difference between unique content and quality content. Make sure your content is both.
27. If you absolutely MUST have your main page as a splash page that is all Flash or one big image, place text and navigation links below the fold.
28. Some of your most valuable links might not appear in web sites at all but be in the form of e-mail communications such as newsletters and e-zines.
29. You get NOTHING from paid links except a few clicks unless the links are embedded in body text and NOT obvious sponsored links.
30. Links from .edu domains are given nice weight by the search engines. Run a search for possible non-profit .edu sites that are looking for sponsors.
31. Give them something to talk about. Linkbaiting is simply good content.
32. Give each page a focus on a single keyword phrase. Don’t try to optimize the page for several keywords at once.
33. SEO is useless if you have a weak or non-existent call to action. Make sure your call to action is clear and present.
34. SEO is not a one-shot process. The search landscape changes daily, so expect to work on your optimization daily.
35. Cater to influential bloggers and authority sites who might link to you, your images, videos, podcasts, etc. or ask to reprint your content.
36. Get the owner or CEO blogging. It’s priceless! CEO influence on a blog is incredible as this is the VOICE of the company. Response from the owner to reader comments will cause your credibility to skyrocket!
37. Optimize the text in your RSS feed just like you should with your posts and web pages. Use descriptive, keyword rich text in your title and description.
38. Use captions with your images. As with newspaper photos, place keyword rich captions with your images.
39. Pay attention to the context surrounding your images. Images can rank based on text that surrounds them on the page. Pay attention to keyword text, headings, etc.
40. You’re better off letting your site pages be found naturally by the crawler. Good global navigation and linking will serve you much better than relying only on an XML Sitemap.
41. There are two ways to NOT see Google’s Personalized Search results:
(1) Log out of Google
(2) Append &pws=0 to the end of your search URL in the search bar
42. Links (especially deep links) from a high PageRank site are golden. High PR indicates high trust, so the back links will carry more weight.
43. Use absolute links. Not only will it make your on-site link navigation less prone to problems (like links to and from https pages), but if someone scrapes your content, you’ll get backlink juice out of it.
44. See if your hosting company offers “Sticky” forwarding when moving to a new domain. This allows temporary forwarding to the new domain from the old, retaining the new URL in the address bar so that users can gradually get used to the new URL.
45. Understand social marketing. It IS part of SEO. The more you understand about sites like Digg, Yelp, del.icio.us, Facebook, etc., the better you will be able to compete in search.
46. To get the best chance for your videos to be found by the crawlers, create a video sitemap and list it in your Google Webmaster Central account.
47. Videos that show up in Google blended search results don’t just come from YouTube. Be sure to submit your videos to other quality video sites like Metacafe, AOL, MSN and Yahoo to name a few.
48. Surround video content on your pages with keyword rich text. The search engines look at surrounding content to define the usefulness of the video for the query.
49. Use the words “image” or “picture” in your photo ALT descriptions and captions. A lot of searches are for a keyword plus one of those words.
50. Enable “Enhanced image search” in your Google Webmaster Central account. Images are a big part of the new blended search results, so allowing Google to find your photos will help your SEO efforts.
51. Add viral components to your web site or blog – reviews, sharing functions, ratings, visitor comments, etc.
52. Broaden your range of services to include video, podcasts, news, social content and so forth. SEO is not about 10 blue links anymore.
53. When considering a link purchase or exchange, check the cache date of the page where your link will be located in Google. Search for “cache:URL” where you substitute “URL” for the actual page. The newer the cache date the better. If the page isn’t there or the cache date is more than a month old, the page isn’t worth much.
54. If you have pages on your site that are very similar (you are concerned about duplicate content issues) and you want to be sure the correct one is included in the search engines, place the URL of your preferred page in your sitemaps.
55. Check your server headers. Search for “check server header” to find free online tools for this. You want to be sure your URLs report a “200 OK” status or “301 Moved Permanently” for redirects. If the status shows anything else, check to be sure your URLs are set up properly and used consistently throughout your site.
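If you would rather script that check than use an online tool, a quick PHP sketch like this (the URL is a placeholder) prints the status line a page returns:

<?php
// Quick-and-dirty header check: confirm a URL answers "200 OK" or a 301.
$headers = get_headers('http://www.example.com/old-page', 1);
echo $headers[0] . "\n";   // e.g. "HTTP/1.1 301 Moved Permanently"
if (isset($headers['Location'])) {
    $location = $headers['Location'];
    echo 'Redirects to: ' . (is_array($location) ? end($location) : $location) . "\n";
}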
Richard V. Burckhardt, also known as The Web Optimist, is an SEO trainer based in Palm Springs, CA with over 10 years experience in search engine optimization, web development and marketing.

SEO tips

Search engine optimization (commonly abbreviated SEO) is a series of processes carried out systematically with the aim of increasing the volume and quality of visitor traffic from search engines to a particular website by taking advantage of how those search engines, and their algorithms, work. The goal of SEO is to place a website in the top position of the search results, or at least on the first page, for certain targeted keywords. Logically, a website that occupies the top position in the search results has a greater chance of attracting visitors.
As the use of the internet as a business medium keeps growing, the need for SEO also keeps increasing. Being at the top of the search results increases the chances of a web-based marketing company acquiring new customers. A number of parties have taken advantage of this opportunity to offer search engine optimization services to companies that do business on the internet.


History

According to Danny Sullivan, the term search engine optimization was first used on July 26, 1997, in a spam message posted on Usenet. At that time search engine algorithms were not yet very complex, so they were easy to manipulate.
Early versions of search algorithms relied entirely on information supplied by webmasters through meta tags in the HTML code of their websites. Meta tags provide information about the content of a web page as a series of keywords. Some webmasters manipulated this by writing keywords that did not match the actual content of their sites, so that search engines misplaced and misranked those sites. This made search results inaccurate and caused losses both for the search engines and for internet users expecting relevant, high-quality information.
Larry Page and Sergey Brin, two computer science doctoral students at Stanford University, tried to overcome this problem by building Backrub, a simple search engine that relied on mathematical calculation to rank web pages. The algorithm, named PageRank, is a complex mathematical function combining a count of the links pointing to a web page with an analysis of the quality of each of those links.
Based on how PageRank works, it can generally be said that the web pages that obtain high rankings are the ones that are linked to by many other web pages. A page’s PageRank will also be higher when the pages linking to it are themselves of high quality. The value of a link from a high-quality site such as Yahoo! or DMOZ can be worth more than the combined value of links from a hundred low-quality websites.
Backrub was only a beginning. In 1998 Page and Brin founded Google, an advanced version of Backrub. In a short time Google gained a reputation and the trust of the internet-using public because it managed to deliver search results that were high quality (not manipulated), fast, and relevant. PageRank then became a standard, both for other search engines and for webmasters trying to earn their sites the highest PageRank possible so they would occupy the top positions in the search results.

Webmasters and Search Engines

Since 1997, search engine companies have been aware that some webmasters (website operators) will do anything to be indexed at the top of the search results, including manipulative and illegitimate methods. Infoseek, one of the first-generation search engines, improved its algorithm to prevent manipulation through irrelevant "meta tags."
However, in some respects search engines also recognized the large economic value of search rankings, and they sometimes had a hidden interest in the activities of SEO companies. Some search engine companies have sent representatives to, or been guests at, events regularly held by the SEO community.
Major search engines such as Google and Yahoo! provide programs and guidelines that allow webmasters to optimize their sites so that they are indexed properly. Google provides the Webmaster Tools application (you need a Google account to use it) and introduced its XML-based sitemap standard, while Yahoo! provides the Site Explorer program (which also requires signing in with your Yahoo! account), which lets webmasters submit site URLs, check how many of their web pages have been indexed in Yahoo!'s database, and view inbound link information. Even so, search engines do not tolerate SEO methods that are manipulative and treat anything as fair game.

Ethics and Legality

The PageRank system, although it applies a complex algorithm, has lately no longer been fully able to cope with tricks and manipulation. A number of webmasters and SEO practitioners have developed methods that exploit the way PageRank works in order to put their clients' web pages in first place in the search results. Google has officially banned the use of several illegitimate techniques such as link farming, doorway pages, keyword stuffing, and auto-generated or scraper pages. Sites or SEO services that keep using them risk being removed from the search index.
The threats from Google and other search engines are not just bluffs. Several SEO service companies and their clients that ignored those prohibitions have received serious penalties for their illegitimate practices. In 2005, Matt Cutts of Google said that the URL of an SEO firm named Traffic Power, along with its clients, had been removed from Google's index for using banned SEO techniques. Another well-known case was when Google, in February 2006, removed the websites of BMW and Ricoh Germany from its listings because they were shown to be using manipulative SEO methods. BMW and Ricoh promptly apologized to Google and fixed their sites, and Google then re-included their websites in its search index. The scandal nevertheless left a bad and embarrassing image for both companies.
According to the search engines' official guidelines, SEO is not an illegitimate technique as long as it is carried out by following the existing ethics and rules. This is meant to guarantee that every website gets an equal opportunity in search, and that ranking is done objectively, where the factors that matter most in determining a web page's ranking are its quality and its usefulness to internet users.

Prominent SEO Figures

Besides Google founders Larry Page and Sergey Brin, several people have become respected figures whose opinions serve as references on the search engine business and SEO.

Danny Sullivan

A former LA Times journalist who founded Search Engine Watch, a website that actively covers developments in the search engine business and technology. He now actively writes and reports at Search Engine Land.

Matt Cutts

A programmer and former employee of the United States NSA (National Security Agency) who joined Google in 2001 and currently heads Google's webspam team. Besides being a Google employee, Matt Cutts is a prominent blogger. The articles on his blog are a reference for SEO practitioners all over the world, because his blog is often the first source for any news about developments in Google's search technology. Matt Cutts is often regarded as Google's unofficial spokesperson.

Vanessa Fox

A former Google employee. Vanessa is known among webmasters as the architect and programmer who headed the Google Webmaster Central project.

International Marketing Strategy

The SEO business and SEO services have grown rapidly along with the growth of the web, which means a site has to work harder for its address to be found by visitors among the millions of competing site addresses from all over the world. Search engines are the main point of entry, because internet users can no longer memorize millions of websites; instead, they rely on the search results of Google, Yahoo!, Bing, and other search engines.
Being at the top, or at least on the first page, of the search results for a particular subject gives an internet marketing company a double advantage:
  • The chance that potential customers will visit their website is greater. That can lead to a higher rate of conversion from ordinary visitors into buyers.
  • Being first in the search results gives a site a good image and reputation in the eyes of visitors.
Search engines generally do not take direct profit from organic search results. Their revenue comes from the advertising displayed above or beside those organic results. Companies that are less successful in applying an SEO strategy, so that their site addresses are not at the top of the organic search results, can still get visitors by advertising on those search engines.
On Google, ads can be placed through Google AdWords. Google AdWords applies a pay-per-click mechanism, meaning that for every ad clicked by a visitor, the advertiser is charged a fee, the amount of which varies depending on the keyword bidding process carried out by the advertisers.

Why Not Knowing About rel=“next” and rel=“prev” vs View All Could Hurt You

Google is offering a new way to deal with paginated content on the web from an SEO best practices perspective. Previously, many paginated pages would feature a rel=”canonical”, list the pagination in the meta title, or simply ignore the duplicate content errors in webmaster tools. The rel=”canonical” option was often chosen, as it acted as a strong hint to Google to rank a certain URL in a series of paginated content. Now we can use new HTML markup known as rel=”next” and rel=”prev”. These new rel attributes relate specifically to paginated content and offer some interesting options to Google as far as ranking that content.

What does rel=”next” and rel=”prev” do?

rel=”next” and rel=”prev” indicate to Google that content is linked together through a paginated series. This could be a multipart article, product category, etc. If you use this piece of HTML on your webpage it will tell Google to consolidate the pages and to view the series as a whole. This means that link weight will be applied to the entire series, as opposed to one specific page, as is the case with rel=”canonical”. Google notes that when you use rel=”next” and rel=”prev” the search engines will, “send users to the most relevant page/URL—typically the first page of the series.”
Google is now referring to paginated pages as component pages. The counterpart to a component page is a view-all page. So instead of listing content in a paginated structure, it is listed in full on a single page. Google has stated that they prefer view-all pages. Google states that, “Because view-all pages are most commonly preferred by searchers, we do our best to surface this version when appropriate in results rather than a component page (component pages are more likely to surface with rel=”next” and rel=”prev”).”

Why Consider the View All Option?

Google seems to prefer the view-all option. In fact, they are pretty clear about it. Google says, “User testing has taught us that searchers much prefer the view-all, single-page version of content over a component page containing only a portion of the same information with arbitrary page breaks (which cause the user to click “next” and load another URL).”
The biggest issue Google has with ranking the view-all page is latency. When the page loads too slowly, both users and Google get upset. For view-all page optimization, Google recommends a few best practices. While Google will most likely detect the view-all version on its own from your content structure, you can make it crystal clear which page is the view-all page by using rel=”canonical”: simply specify the view-all page as the canonical URL on each component page in the pagination, as in the sketch below.
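As a minimal sketch, assuming a hypothetical article whose view-all version lives at a separate URL, each component page would carry a canonical link to that view-all page in its <head>:
<link rel="canonical" href="http://www.example.com/article?story=abc&view=all" />
Here the view=all parameter is just an illustrative convention; use whatever URL actually serves your full, single-page version.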
If you do not have a view-all page, or if you want to surface individual component pages, you have the option of using rel=”next” and rel=”prev”, or of simply using rel=”canonical” to rank the first page in the series.

3 Options for dealing with Paginated Content

So here are your three options for dealing with paginated content.
  1. Leave your content as it is without adding rel=”next” and rel=”prev” and hope it gets indexed correctly.
  2. Optimize your view all page.
  3. Use rel=”next” and rel=”prev” and hope Google ranks the correct page in the pagination. In most cases it will rank the main entry page, just as it would if you were using rel=”canonical”. However, if a specific section of the paginated content relates to a specific keyword, we could see that section surface as a result of this markup.

How to Implement rel=”next” and rel=”prev”

So how do you actually implement rel=”next” and rel=”prev”? Google provides the following information.
Let’s say you have content paginated into the URLs:
http://www.example.com/article?story=abc&page=1
http://www.example.com/article?story=abc&page=2
http://www.example.com/article?story=abc&page=3
http://www.example.com/article?story=abc&page=4
On the first page, http://www.example.com/article?story=abc&page=1, you’d include in the <head> section:
<link rel="next" href="http://www.example.com/article?story=abc&page=2" />
On the second page, http://www.example.com/article?story=abc&page=2:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />
On the third page, http://www.example.com/article?story=abc&page=3:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2" />
<link rel="next" href="http://www.example.com/article?story=abc&page=4" />
And on the last page, http://www.example.com/article?story=abc&page=4:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3" />
Pretty straightforward, right?

Google also offers these important points:

  • Page one contains rel=”next” and no rel=”prev” markup.
  • Pages two to the second-to-last page should be doubly-linked with both rel=”next” and rel=”prev” markup.
  • The last page only contains the rel=”prev”, not rel=”next”. It is not needed, as there is no next page.
  • rel=”next” and rel=”prev” values can be either relative or absolute URLs (as allowed by the <link> tag).
  • rel=”next” and rel=”prev” should be added to the <head> section.
  • You can actually use rel=”next”, rel=”prev”, and rel=”canonical” on the same page. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
<link rel="canonical" href="http://www.example.com/article?story=abc&page=2" />
<link rel="prev" href="http://www.example.com/article?story=abc&page=1&sessionid=123" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3&sessionid=123" />
  • Just like with rel=”canonical”, rel=”prev” and rel=”next” act as hints to Google, not absolute directives.
  • If you add this code incorrectly, Google will still do its best to rank your content.
This new way of dealing with paginated content raises an interesting SEO topic. The rel=”prev” and rel=”next” markup allows link weight to be distributed across the whole series. Working from that premise, we can speculate that pages within the paginated structure will be ranked largely independently of their individual link weight and more on the strength of their on-site optimization. This presents an interesting opportunity. Suppose you have a series of content that covers several subtopics: the aggregate link juice acquired by that content will be spread across the series, and individual sections of the paginated structure will be ranked according to their content theme. Google has said it will rank the first page of the series in most cases, but we never really know how these things will work until we see them in action. It will be interesting to see how this new SEO technique develops.

Search Engine Optimization

Search engine optimization (SEO) is the process of improving the visibility of a website or a web page in search engines' "natural," or un-paid ("organic" or "algorithmic"), search results. In general, the earlier (or higher ranked on the search results page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users. SEO may target different kinds of search, including image search, local search, video search, academic search,[1] news search and industry-specific vertical search engines.
As an Internet marketing strategy, SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines and which search engines are preferred by their targeted audience. Optimizing a website may involve editing its content and HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.
The acronym "SEOs" can refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site and site content, SEO tactics may be incorporated into website development and design. The term "search engine friendly" may be used to describe website designs, menus, content management systems, images, videos, shopping carts, and other elements that have been optimized for the purpose of search engine exposure.

History

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the address of a page, or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[2] The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.
Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997.[3] The first documented use of the term Search Engine Optimization was John Audette and his company Multimedia Marketing Group as documented by a web page from the MMG site from August, 1997.[4]
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[5][unreliable source?] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[6]
By relying so heavily on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, allowing those results to be false would drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. Graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[7] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
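For reference, one commonly cited simplified form of the PageRank calculation, with damping factor d (often taken to be about 0.85), N total pages, and C(T_i) the number of outbound links on a page T_i that links to A, is:
PR(A) = \frac{1 - d}{N} + d \sum_i \frac{PR(T_i)}{C(T_i)}
In words, a page's score grows with both the number of pages linking to it and the amount of PageRank those linking pages have available to pass along.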
Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[8] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[9]
By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. Google says it ranks sites using more than 200 different signals.[10] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. SEO service providers, such as Rand Fishkin, Barry Schwartz, Aaron Wall and Jill Whalen, have studied different approaches to search engine optimization, and have published their opinions in online forums and blogs.[11][12] SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.[13]
In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users.[14] In 2008, Bruce Clay said that "ranking is dead" because of personalized search. It would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.[15]
In 2007, Google announced a campaign against paid links that transfer PageRank.[16] On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting through use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[17] As a result of this change, the use of nofollow leads to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additional solutions have been suggested that include the use of iframes, Flash, and JavaScript.[18]
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[19]
Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[20]
In February 2011, Google announced the "Panda" update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites have copied content from one another and benefited in search engine rankings by engaging in this practice; however, Google implemented a new system which punishes sites whose content is not unique.[21]

Relationship with search engines

By 1997, search engines recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.[22]
Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEO service providers. In 2005, an annual conference, AIRWeb, Adversarial Information Retrieval on the Web,[23] was created to discuss and minimize the damaging effects of aggressive web content providers.
Companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[24] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[25] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[26]
Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. Major search engines provide information and guidelines to help with site optimization.[27][28] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[29] Bing Toolbox provides a way for webmasters to submit a sitemap and web feeds, and lets them determine the crawl rate and see how many pages have been indexed by the search engine.

Methods

Imagine a set of websites where each site may link to the others, so that a user can click a link within, say, website F to go to website B, but not vice versa. Search engines begin by assuming that each website has an equal chance of being chosen by a user. Next, crawlers examine which websites link to which other websites, on the assumption that websites receiving more incoming links contain valuable information that users want.
Search engines use complex mathematical algorithms to guess which websites a user seeks, based in part on examining how websites link to each other. In this example, because website B is the recipient of numerous inbound links, B ranks highly and will come up early in a web search. Further, since B is popular and has an outbound link to C, C ranks highly too.

Getting indexed

The leading search engines, such as Google, Bing, and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted, because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or a cost per click.[30] Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results.[31] Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review.[32] Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.[33]
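As a minimal sketch of such a feed, following the standard Sitemap protocol and using a hypothetical URL set, a sitemap.xml file might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2012-05-19</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>http://www.example.com/article?story=abc&amp;page=1</loc>
  </url>
</urlset>
The file is then submitted through Webmaster Tools (or referenced from robots.txt) so the crawler can discover URLs that are not well linked internally.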
Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[34]

Preventing crawling

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[35]
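As a minimal sketch with hypothetical paths, a robots.txt file in the domain root that blocks crawling of a shopping cart and internal search results might look like this:
User-agent: *
Disallow: /cart/
Disallow: /search/
and an individual page can additionally be kept out of the index with a robots meta tag in its <head>:
<meta name="robots" content="noindex" />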

Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross-linking between pages of the same website to provide more links to the most important pages may improve its visibility.[36] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[36] Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the "canonical" link element[37] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
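To make this concrete, here is a minimal sketch of the kind of <head> markup involved, with a purely hypothetical store and URL:
<head>
  <title>Diamond Engagement Rings | Example Jewelry Store</title>
  <meta name="description" content="Browse certified diamond engagement rings with free shipping and easy returns." />
  <link rel="canonical" href="http://www.example.com/engagement-rings" />
</head>
The title and description target the phrases searchers actually use, while the canonical tag keeps alternate URLs for the same page from splitting its link popularity.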

White hat versus black hat

SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[38] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[39]
An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[27][28][40] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[41] although the two are not identical.
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[42] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.[43]

As a marketing strategy

SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, depending on the site operator's goals.[44] A successful Internet marketing campaign may also depend upon building high quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[45]
SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[46] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic. According to Google's CEO, Eric Schmidt, in 2010 Google made over 500 algorithm changes, almost 1.5 per day.[47] It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.[48] SEOmoz.org has suggested that "search marketers, in a twist of irony, receive a very small share of their traffic from search engines." Instead, their main sources of traffic are links from other websites.[49]

International markets

Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[50] In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.[51] As of 2006, Google had an 85-90% market share in Germany.[52] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[52] As of June 2008, the marketshare of Google in the UK was close to 90% according to Hitwise.[53] That market share is achieved in a number of countries.
As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable markets where this is the case are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.
Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[52]

Legal precedents

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[54][55]
In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. Kinderstart's website was removed from Google's index prior to the lawsuit and the amount of traffic to the site dropped by 70%. On March 16, 2007 the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[56][57]
