Tag Archives: Search Engines

Google Gets Serious About Webspam and Advertising Tricks Finally

For all of its talk, sometimes it seems that Google does very little to stop the continuous rise of webspam and SEO tricks aimed at drawing unaware users to webpages filled with advertising (or worse).

Recently, however, Google finally took a concrete step toward improving the average user’s search results, and in the process knee-capped certain webspammers and their black-hat (or gray-hat, depending upon your point of view) SEO gimmicks. The move will have the effect of making the Internet a more accurate and reliable source of information over time. As junk websites and their “made for AdSense” (MFA) pages see their traffic dry up, the incentive to keep them going, and to keep creating more low-quality but high-ranking junk sites, diminishes. That, in turn, improves the odds for quality writers trying to make money writing online with AdSense.

How did Google finally manage to hurt webspam and garbage websites? Was it a secret improvement in its oft-vaunted, and overrated, ranking algorithm? Did new duplicate-content monitors, or better detection of low-quality websites, come online? Did the company finally start taking seriously the numerous reports of garbage search results?

Nope. Instead, a simple change in the way a common search error is handled will end up making a huge difference.

Misspelled Searches Cash Cow Killed

For years, it was a dirty secret that by targeting misspelled searches, one could make lots of money online.

This exploit let sham websites win out over legitimate webmasters who focused on honest, quality pages, and left search engine users reading dubious information about their search keywords, assuming they could find their way past the abundance of ads.

It was a relatively easy exploit, a form of social engineering. Social engineering is a way of hacking computers, or scamming users, by doing something in such a way that most people make an incorrect assumption about what is going on and therefore hand over valuable information without knowing a mistake was being made. The best part (worst part?) of social engineering tricks is that they circumvent the carefully constructed security systems, firewalls, and policies that might otherwise have stopped the hacker from gaining access to anything valuable.

One common example of social engineering is an email pretending to be an official communication from a bank, a company, or even another person, asking the user to verify their username and password. The average user makes the incorrect assumption that the only way they would get such an email is if it were legitimate, and, being good people, they try to be helpful by following the instructions to click a link and enter their personal account information. Upon doing so, the website, which looks exactly like the real company’s website, says thank you and that everything is fine now. The user goes on about their day, while the crooks empty their bank accounts.

Although much less nefarious, the most common (until recently) hack of search engines and searchers was to target keywords that were commonly (or not so commonly) misspelled. When a searcher typed those misspelled keywords into Google, they matched the misspelled words on the scam webpages better than the correctly spelled words on legitimate websites. As a result, the search engine results pages (SERPs) would show the junk webpages above the real websites’ pages.

For example, if a searcher was looking to buy a new computer monitor they might go to Google and type in “computer moniter” in an effort to do research or check prices. Quality websites, including those of the companies that make and sell computer monitors, would spell “monitor” correctly. Junk websites would create webpages with “moniter”. Google’s ranking algorithm would, not unexpectedly, rank the pages with the “same” word as the search (the misspelled word) higher than those with the close, but not exact, word monitor.
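As a toy illustration (a purely hypothetical scoring function, nothing like Google’s real algorithm), naive exact-term matching shows why the misspelled page wins:

```python
# Toy illustration of exact-term matching: a page scores one point for
# each occurrence of each query word it contains verbatim.
def match_score(query, page_text):
    page_words = page_text.lower().split()
    return sum(page_words.count(word) for word in query.lower().split())

spam_page = "cheap computer moniter deals best moniter prices"
real_page = "computer monitor reviews and monitor price comparisons"

print(match_score("computer moniter", spam_page))  # 3: "computer" + "moniter" x2
print(match_score("computer moniter", real_page))  # 1: only "computer" matches
```

Under this kind of literal matching, the junk page built around “moniter” beats the legitimate page every time the searcher misspells the word.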

For the last year or two, Google tried to help searchers in this situation by including a note at the top of the search results saying, “Did you mean monitor?” However, the results themselves were still based on the misspelled word. Many users, MOST users in fact, would just scan down the results and use them instead of clicking the link that would take them to results for the correctly spelled word.

The same tactic created another issue for Google. Ads purchased through Google’s AdWords online advertising program typically targeted properly spelled keywords, and those bids often were not extended to misspellings. That meant a double problem for Google. First, the search result accuracy on which its livelihood depends was compromised. Second, with fewer ads targeting misspelled words, those ads won the top spots for less money than they would have if the automated ad auction had included all the bids on the properly spelled words.

Google eliminated both problems with one tiny change in the way it handles misspelled search queries.

Now, instead of merely notifying users that they misspelled a word, Google displays, by default, the results for the correctly spelled word, along with a note telling users that if they really meant to spell the word the other way, they can click a link to see those results. In other words, Google now does the opposite of what it once did: by default the correct spelling is displayed, and the incorrect spelling is listed as an alternate search, instead of vice versa.
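The new default behavior can be sketched roughly like this (a small assumed vocabulary and Python’s `difflib` stand in for Google’s actual, far more sophisticated spell-correction system):

```python
import difflib

# Hypothetical vocabulary of correctly spelled terms; Google's real
# system works from query logs and a vastly larger dictionary.
VOCABULARY = ["computer", "monitor", "keyboard", "printer"]

def interpret_query(query):
    """Correct the query first; offer the literal query as the alternate."""
    corrected = []
    for word in query.lower().split():
        close = difflib.get_close_matches(word, VOCABULARY, n=1, cutoff=0.8)
        corrected.append(close[0] if close else word)
    corrected_query = " ".join(corrected)
    if corrected_query != query.lower():
        return corrected_query, f'Search instead for "{query}"'
    return corrected_query, None

print(interpret_query("computer moniter"))
# ('computer monitor', 'Search instead for "computer moniter"')
```

The key design point is the inversion: the corrected query drives the results shown, while the literal misspelling survives only as an opt-in link.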

The result?

Higher quality websites now show up even for average users who misspell their search words and the lower quality sites thrown up by those hoping to make a quick buck on a little bit of user ignorance have seen their traffic dry up. Additionally, Google has increased its advertising income by ensuring that the full gamut of ads participates in the computerized ad auction that determines which ads show up on top of those same search results.

This change is a win-win for honest webmasters and quality vendors, as well as for Google. The only ones hurt by this action are the underworld Internet marketer community, and frankly, most people are glad to finally have even a small whack made at them.


Relevant Backlinks vs Unrelated Backlinks – Does It Matter For Improving Google Search Engine Ranking?

A lot of the information floating around the search engine optimization world is simply old news. A lot of the so-called accepted wisdom is based on flimsy, or even non-existent, research. And much of the search ranking conventional wisdom repeated, again and again, on websites and blogs isn’t actually relevant to the most common scenarios.

So, when a disagreement arose between colleagues regarding the importance of relevant backlinks versus backlinks from unrelated sites, we looked at trusted resources and found that they all said the same thing: in order for links to count toward a website or page’s Google PageRank or its search engine ranking, they had to come from relevant sites. However, we realized that this bit of information often came coupled with SEO strategies and tips that we knew were no longer true, if they ever were.

Thus, the question remains: does it matter whether a page’s incoming links come from websites or webpages related to the subject matter they are linking to?

Google Search Ranking Algorithm

To understand why this question matters, and to be able to use the data found in the answer, it is important to have a basic understanding of Google’s search algorithm that ranks those results you see listed on the page after doing a Google search.

The search results page, or more specifically the order in which links are displayed on it, is often referred to by the acronym SERP, for Search Engine Results Page. That order can be very important, depending upon what is being searched for and what the goal of the website on the other end of the link is. Microsoft’s Bing blog says that in researching how people search, they found that people stop looking closely at results after #5, and in many cases after just the top 3 results.

In the online advertising world, Internet marketers claim that the #1 position on a Google search can be worth anywhere from three to ten times as much traffic as the #2 position. They will also tell you that anything below #10 isn’t worth having, since it won’t be on the first page.

Whether any of this is true is irrelevant to our question here. What is important to know is that the results on any given SERP are not listed at random, nor are they listed alphabetically, by date, or by any other non-discriminatory method. Rather, pages are listed in order of how well they match the term entered into the search box on Google’s home page. These terms are known as keywords, even when they are actually key phrases.

More accurately, the webpages listed high on Google search results pages are ranked based on how well they score on a secret algorithm that Google uses. The intention of that algorithm is to determine which one of all the webpages that match the query is most likely to provide what the searcher wanted to find. The reality is that a very small number of easily manipulated parameters determine the order from top to bottom of every Google search query.

One of the most important of these parameters is how many links point to a given website using the exact words entered into the search. This is by no means the only criterion, but it is very important.

Obviously, this evaluation can be gamed very easily. A determined webmaster or online ad salesman need only create a million links across a dozen of his own websites to earn the #1 ranking over more legitimate websites.

Fortunately, the raw number of incoming links, or backlinks, is not how rankings are scored. In fact, ever since the paper with the original ranking strategy that led to the founding of Google and its famous search engine, much time and many resources have been devoted to determining which links should not count at all, which links should count more, which should count less, and so on.

Thus, our million link creating Internet Marketer will get nowhere with his strategy.

However, the core of every search ranking improvement effort, or SEM engagement, is building more links. They just can’t all come from your own websites, or from just two or three websites, or all from the same article.

Theoretically, one of the criteria for determining how much a link should count for is how much the site providing the link is related to the site receiving the link. The idea is that a website about Credit Cards would be more likely to provide "good" links on topics related to credit cards, like banking, loans, credit scores, and credit card reward programs. On the other hand, a website about plumbing would not be a good source to get information about financial topics.

Whether or not this concept is valid is open for debate. However, virtually any SEO consultant or SEM consultant (or whatever else they call themselves) will tell you that Google believes it, and thus related backlinks count for more than unrelated backlinks.
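The theory can be sketched as a simple weighted sum (the authority and relevance numbers below are invented purely for illustration; Google’s actual weights, if topical weighting exists at all, are secret):

```python
# Hypothetical model of topically weighted backlinks: each link's value
# is its source page's authority scaled by a topical-relevance factor
# between 0.0 (unrelated) and 1.0 (same topic). All numbers are made up.
def weighted_link_score(backlinks):
    """backlinks: list of (source_authority, topical_relevance) pairs."""
    return sum(authority * relevance for authority, relevance in backlinks)

related_only = [(3.0, 1.0), (2.0, 0.9)]                           # few, on-topic
any_and_all = [(3.0, 0.3), (2.0, 0.2), (4.0, 0.5), (5.0, 0.4)]    # many, mixed

print(weighted_link_score(related_only))  # 4.8
print(weighted_link_score(any_and_all))   # 5.3
```

Note that even under this relevance-weighted model, enough moderately weighted links can outscore a handful of perfectly related ones, which is exactly what the real-world results discussed below suggest.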

Do Related Links Count More Than Unrelated Links?

It is technically impossible to say with 100% certainty that something does or does not count in the Google ranking algorithm. However, what can be demonstrated is which features have so little value that they are easily pushed aside by the factors that actually determine webpage rankings under real-world conditions.

In this case, it seems that whether a link comes from a related webpage or website is of so little value that its effect cannot be replicated in the real world! Instead, a host of other factors carry so much more weight that restricting oneself to only related backlinks is foolish.

That is not to say that getting links from spammy or MFA (Made For AdSense) sites is good. Those sites can pass some of their negative marks on to your site, especially when they form a large share of your incoming links. However, a link to your home mortgages website from a legitimate site about Mickey Mouse collectibles will end up being worth every bit as much to your website’s PageRank and search engine rankings as a link from a related site, so you shouldn’t bother restricting yourself to related sites. Instead, just collect all the links you can.

Add those incoming links to your other SEO efforts, and your site’s rank will increase faster. Soon your website could be a high-ranking Google search result.

***


Search Engine Rankings Link Building Myths

There is a lot of misinformation out there about search engine rankings and how they are determined. Most SEO advice is based on a paper published by Google founders Larry Page and Sergey Brin when they were students at Stanford, and on the subsequent patent application covering the same work. The catch is that the algorithm contained in those documents is over a decade old. Google has updated its search ranking algorithm thousands of times since then.

While the methodology published in the original Google patent application and other documents no doubt remains the basis of Google’s search rankings, there can be no question that the simplified version often given as the basis for SEO activities is very out of date.

What Is PageRank?

Essentially, the original search ranking methodology involves a page passing "link juice" to each page it links to. The power of that link juice is determined by the original page’s authority, as measured by its PageRank, which is in turn derived from how many pages, and of what authority, link to the original page. The outgoing link juice is divided evenly over all outgoing links. All of this results in a numerical value, or other ranking, which then leads directly to the results listed for any given search, and the order in which they appear.

At first, it sounds complicated, but a moment or two of study makes clear that this particular algorithm is pretty simple, especially for a computer.
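In fact, the published version of the algorithm fits in a few lines. Here is a minimal sketch (a basic power-iteration model using the 0.85 damping factor from the original PageRank paper; the page names and link graph are made up, and this is the decade-old published version, not whatever Google runs today):

```python
# Minimal sketch of the published PageRank idea: each page splits its
# rank ("link juice") evenly among its outgoing links each iteration,
# with a damping factor redistributing a little rank to every page.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)  # evenly split juice
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy link graph: A links to B and C; B links to C; C links back to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # prints C (it collects the most links)
```

The page everyone links to ends up on top, and a link from a two-link page passes half as much juice as a link from a one-link page, which is exactly the property link sculptors later tried to exploit.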

The original Google algorithm described here would seem both very easy to copy and very easy to abuse. Yet as Microsoft’s repeated efforts at crafting a search engine have shown, it is actually not easy to duplicate Google’s quality of search results. After failing with both MSN Search and Live Search, Microsoft finally launched a new search engine called Bing. Microsoft’s Bing strategy has been not to duplicate Google’s level of quality in search results, but rather to shake up the user interface so as to increase the likelihood that a user will perform a search that is more easily interpreted.

When it comes to the various methods of abusing the Google search engine, most often referred to as "black-hat SEO", Google has entire teams of people working to thwart them. Matt Cutts is one of Google’s most public engineers, thanks to his long-running blog. He is also the head of the so-called Webspam Team, whose job it is to keep junk webpages from clogging up the company’s search results. Since a full-time engineer heads up a "team" of people, one can only assume that the efforts in this arena are substantial. Each action they take has the potential to render numerous "white-hat SEO" techniques worthless, or at least worth much less than they once were.

PageRank Reality Check and SEO Myths

One recent example was Matt’s blog post in which he made a few very important points.

First, Google drastically changed how it interprets the nofollow tag after another Google team, the Search Quality team, determined that webmasters were setting nofollow on links to valuable webpages in order to "channel" link juice to certain, presumably more valuable, webpages. This practice, known as link sculpting, was actively recommended by virtually all SEO experts and Search Engine Optimization consultants right up to the very minute the post debunking it as an effective technique was published.

Second, Matt commented that it was necessary for him to publicly and blatantly announce the change in treatment of the nofollow tag because the people who research and test such things had NOT NOTICED. Not only that, but in a throwaway statement a few sentences earlier, he noted that they had also missed other, bigger changes Google had made to its search engine ranking system.

Third, before getting into the specifics of this particular announcement, Mr. Cutts reiterated the basic version of the search algorithm outlined above, the one repeated all over the Internet. Then he said that in 2000, when he joined the company, Google was already doing "more sophisticated link computation" than was shown in the original PageRank papers that everyone quotes. In other words, this is NOT the way the Google search algorithm works anymore.

Modern Search Engine Optimization

The reason that SEOs and other Internet gurus continue to espouse the old version of authority and PageRank is that it appears to still work. That is, if you look at a small enough subset of pages, and you compare them to a similarly small subset of pages based on another small set of factors, then yes, the old model explains things nicely.

However, like the Newtonian model of physics, the long-standing theory leaves a great many things unexplained. I have personally seen webpages with a PageRank of 4 whose only incoming links were the blogroll links of 3 or 4 websites, each with nothing but a PageRank 2 landing page and PageRank 0 pages everywhere else.

It doesn’t take long messing around with any of the SEO toolbars out there to notice that a PageRank 3 page will sometimes outrank a PageRank 5 page, even when both have all of the standard "onsite SEO" things set up right and the PageRank 5 page has more incoming links.

So, what does this all mean?

If you want to go out and build a bunch of quickly thrown-together websites, use SEO techniques to push them up high in the search results, and watch the money start rolling in, your only hope is to go with the tried-and-true Link Juice + PageRank + Backlinks model and hope for the best. Just don’t be surprised when it doesn’t seem to work for you as well as it does for "everyone else."

If, on the other hand, you want to earn money by writing online, then start building websites about topics you enjoy writing about. Write plenty of content and THEN see if there is any traction there to make money with ads or by selling things online. If so, then give yourself a high-five and keep writing, especially on those high paying topics and their keywords.

If it turns out that this particular passion does not have any future as a money making enterprise, then keep writing about it for fun. You never know if or when what you are writing becomes a hotter topic or just finds an audience.

But, and this is the important part that so many writers who fail to make money by writing websites forget, you also have to move on and create a new website. Fill that one up with quality content and see how that works out. Yes, things like keyword research can help give you an educated guess at what will and will not pay off, but in the end, there are just too many variables to know for sure. So, keep writing, and watch for your opportunities to arise. When they do, hit ’em and hit ’em hard.


HubPages HubRank Minimum to Avoid NoFollowed Links

I’ve started a bit of an experiment regarding the all-comers content publishing site called HubPages.

Recently, there was a bit of a hub bub (Hah!) when a well known Internet marketing website personality suggested that writing 30 Hubs in 30 Days could lead to improved search engine rankings for a website.

At the time, I was too busy to look into it, and frankly, I’m not really the type to jump in and do something because everyone else is doing it. However, at the conclusion of the experiment, not only had they gotten their search engine rankings to improve, but they were also actually making money off of the published Hubs.

I put it in the back of my mind as something to check into at a later date. That later date is now.

HubPage Nofollow Rules

There is a catch. As some sort of method to weed out spammers and other unsavory publishers, HubPages automatically nofollows the links of all Hubs from starting authors, or Hub Builders. My HubPages NoFollow Guide is a good place to get the juicy details.

Each Hubber, as HubPages authors are called, is given a HubRank. Your HubRank is essentially an automated ranking of you as an author. Everyone starts out low. (I don’t remember the exact number, I’ll have to look it up.) By publishing Hubs, and by “participating” on HubPages your score rises. Until your score reaches at least 75, all of your outbound links, like those being bragged about during the 30 hubs in 30 days posts, are nofollowed.

Each individual Hub is also ranked; this rank is called a HubScore. HubScores seem to start at 50 and work their way higher based on things like how much traffic the Hub gets, how many people vote it up, and so on. So long as the HubScore is above 40, the links will not be nofollowed, and the power of writing for HubPages is in your hands.

According to my profile, I joined 5 weeks ago, but I only wrote my first hub 4 days ago. So far, I have published 5 Hubs and commented on a dozen or so posts. My HubRank has risen to 71, so I’m 4 points away from the promised land of 75, where all of my links have their nofollow tags removed.

HubPage AdSense Challenge

While reading various hubs, I came across one where the author noted how many highly ranked (several #1 results) pages he had in Google search results and yet how little money he made from his AdSense ads.

It didn’t take long to spot a couple of common misconceptions about how online advertising programs like Google AdSense work. I wrote up a Hub (natch) describing the misunderstandings many people have about working with AdSense, and, as challenged by the original Hub poster, I also laid out step-by-step instructions for how to make some AdSense income from his hubs.

Basically, it amounts to finding a better-paying keyword with low competition and then leveraging those high-ranking pages’ authority to drive more, and better-paying, traffic to a new Hub targeting that keyword. Hopefully it works for him and helps him make money with AdSense.

Drop by and check out my profile: Hub Llama

If you have a HubPages account, do me a favor and add me as a favorite so it doesn’t say no one has added Hub Llama as a favorite on every page. I don’t need to be favorited by thousands, but the “no one” thing isn’t very fun 🙂
