Google Panda is a change to Google’s search results ranking algorithm that was first released in February 2011. The change aimed to lower the rank of “low-quality sites” or “thin sites”, and return higher-quality sites near the top of the search results. CNET reported a surge in the rankings of news websites and social networking sites, and a drop in rankings for sites containing large amounts of advertising. This change reportedly affected the rankings of almost 12 percent of all search results.

Soon after the Panda rollout, many websites, including Google’s webmaster forum, became filled with complaints of scrapers/copyright infringers getting better rankings than sites with original content. At one point, Google publicly asked for data points to help detect scrapers better. Google’s Panda has received several updates since the original rollout in February 2011, and the effect went global in April 2011. To help affected publishers, Google provided an advisory on its blog, giving some direction for self-evaluation of a website’s quality. Google has provided a list of 23 bullet points on its blog answering the question of “What counts as a high-quality site?” that is supposed to help webmasters “step into Google’s mindset”.
Google Panda is a filter that prevents low-quality sites and/or pages from ranking well in the search engine results page. The filter’s threshold is influenced by Google Quality Raters. Quality Raters answer questions such as “would I trust this site with my credit card?” so that Google can distinguish between high- and low-quality sites.
The Google Panda patent (patent 8,682,892), filed on September 28, 2012, was granted on March 25, 2014. The patent states that Google Panda creates a ratio from a site’s inbound links and its reference queries (search queries for the site’s brand). That ratio is then used to create a sitewide modification factor. The sitewide modification factor is in turn used to create a modification factor for a page based upon a search query. If the page fails to meet a certain threshold, the modification factor is applied and, therefore, the page would rank lower in the search engine results page.
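The ratio-and-threshold logic described in the patent can be sketched roughly as follows. This is a minimal illustration only: the function names, the threshold value, and the exact way the factor is applied are assumptions for clarity, not values disclosed in the patent or used by Google.

```python
# Illustrative sketch of the logic described in patent 8,682,892.
# All names, numbers, and the exact formula are hypothetical;
# the patent does not disclose Google's actual values.

def sitewide_modification_factor(reference_queries: int, inbound_links: int) -> float:
    """Ratio of brand ("reference") queries to inbound links for a site."""
    if inbound_links == 0:
        return 0.0
    return reference_queries / inbound_links

def page_score(base_relevance: float, site_factor: float, threshold: float = 0.5) -> float:
    """Demote a page's query-specific score when the sitewide factor
    falls below the (hypothetical) quality threshold."""
    if site_factor < threshold:
        return base_relevance * site_factor  # page ranks lower in results
    return base_relevance  # page's ranking is left untouched

# Example: a site with few brand searches relative to its inbound links
factor = sitewide_modification_factor(reference_queries=100, inbound_links=1000)
demoted = page_score(base_relevance=0.8, site_factor=factor)
```

In this sketch a site that attracts many inbound links but few searches for its own brand gets a low factor, and every page on it is demoted sitewide, matching the patent's description of a sitewide rather than per-page adjustment.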
Google Panda affects the ranking of an entire site or a specific section rather than just the individual pages on a site.
In March 2012, Google updated Panda.
Google says it only takes a few pages of poor quality or duplicated content to hold down traffic on an otherwise solid site, and recommends such pages be removed, blocked from being indexed by the search engine, or rewritten. However, Matt Cutts, head of webspam at Google, warns that rewriting duplicate content so that it is original may not be enough to recover from Panda—the rewrites must be of sufficiently high quality, as such content brings “additional value” to the web. Content that is general, non-specific, and not substantially different from what is already out there should not be expected to rank well: “Those other sites are not bringing additional value. While they’re not duplicates they bring nothing new to the table.”
For the first two years, Google Panda updates were rolled out about once a month, but Google stated in March 2013 that future updates would be integrated into the algorithm and would therefore be continuous and less noticeable.
Google released a “slow rollout” of Panda 4.1 over the week of September 21, 2014.
SOURCE: Wikipedia – Google Panda