Perficient Digital Author, Perficient Blogs (https://blogs.perficient.com/author/pdauthor/)

Facebook Post Types: Picking a Winner Isn’t Easy
https://blogs.perficient.com/2014/09/25/facebook-post-types-picking-a-winner-isnt-easy/
Thu, 25 Sep 2014 14:36:45 +0000

It’s no secret that at StoneTemple we are Google+ enthusiasts.
It’s widely known in the Google+ community that image posts are the most effective. And it makes sense; today’s users expect a rich experience when they interact with the web, with quality, eye-catching visuals that convey the tone and message of the content.
For most digital marketing campaigns, compelling and engaging images are a key component, whether it’s a content marketing campaign centered on an infographic or great pictures to build a Pinterest audience.
So we were a bit surprised to find that when it comes to Facebook, the effectiveness of image posts is much less clear.
If you are not familiar with Facebook post types, there are three:

  1. Image posts (where you upload an image)
  2. Link shares (where you share a link in the post and Facebook generates a preview of the linked web page)
  3. Status updates, which are text-based

A short history of Facebook changes for the different post types

Image posts used to work wonderfully on Facebook, but as early as the spring of 2013, Facebook marketers noticed the reach of their image posts declining. “Reach” is a metric defined by Facebook as “the number of unique users who saw your Page post in news feed, ticker, or on your Page’s timeline.”
Then on August 23, 2013, Facebook announced an update to their News Feed ranking algorithm that would reward “high-quality posts.” This algorithmic update was based on feedback from a survey Facebook conducted that asked participants questions such as “Would you call this a low-quality post or meme?” On that same day, TechCrunch published comments from Facebook confirming that “Pages that are exclusively posting low quality, meme content might see a bigger drop.”
Now, not all image posts are memes, but Facebook marketers continued to report that all image posts (not just memes) had significantly smaller reach than other types of posts, especially status updates. Coupled with the overall (and continuing) decline in organic reach of Facebook Page posts, the seemingly poor performance of image posts caused consternation among marketers who had come to rely on this “free” form of marketing.
Then on January 21, 2014, Chris Turitzin of Facebook announced a change to the algorithm that would decrease the reach of status updates from Pages. In the announcement Chris said:

“Page admins can expect a decrease in the distribution of their text status updates, but they may see some increases in engagement and distribution for other story types … we recommend that you use the story type that best fits the message that you want to tell – whether that’s a status, photo, link or video.”

Did this mean that link shares and image posts would perform better in 2014?

Not according to one of our clients, savings.com. Their social media team consistently observed that status updates had a drastically higher reach than either image posts or link shares. The difference was so large that when a post contained a link, the team removed the link preview so the post would be categorized as a status update rather than a link share. This practice certainly did not follow Facebook’s recommendation to “use the story type that best fits the message.”
On Facebook, a post’s reach can depend on the time of day (reach is higher when your audience is online) as well as the engagement it gets. So to confirm our impressions of the performance, we collaborated on a test: we published pairs of very similar posts, one as an image post and one as a status update, at the same time of day a day or two apart.
Here is a sample of the results:

Post Type   Post Date             Organic Post Reach
Photo       8/5/2014 10:37 AM     516
Status      8/1/2014 10:24 AM     2155
Photo       7/21/2014 10:35 AM    559
Status      7/23/2014 11:00 AM    4030
Photo       7/4/2014 11:00 AM     620
Status      7/3/2014 10:30 AM     3784
Photo       6/30/2014 10:37 AM    406
Status      7/1/2014 11:35 AM     2728
Photo       5/28/2014 10:41 AM    677
Status      5/27/2014 09:55 AM    3864

As you can see, the reach of the status posts is a staggering four to seven times that of the paired image posts! And while we didn’t test link shares as extensively, their reach was similarly small.
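If you want to sanity-check that multiple, here is a quick sketch using the reach figures from the table above. Pairing adjacent photo/status rows is my assumption; the article says each pair was published within a day or two of the other.

```python
# Organic reach pairs from the table above: (status reach, photo reach).
pairs = [(2155, 516), (4030, 559), (3784, 620), (2728, 406), (3864, 677)]

# Ratio of status reach to photo reach for each pair
ratios = [status / photo for status, photo in pairs]
print(min(ratios), max(ratios))  # roughly 4.18 and 7.21
```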

But is reach the most important stat to look at?

As I mentioned before, Reach is a measure of how many people saw your post. While a high reach is excellent for branding, your organization might be looking for engagement instead, as measured by clicks, likes or comments on the post.
Even with savings.com’s small reach, I did observe that link shares and, to a greater extent, image posts had better engagement. You would expect this; link shares and image posts take up more real estate in the news feed and catch the reader’s eye, while a small text post might slip by unnoticed. But the next question was: did the higher engagement compensate for the drastically smaller reach?

The virality of image posts 

To answer that question I sifted through over two and a half months of post-performance data from Facebook Insights. I decided to focus on Viral Reach (the number of unique users who saw your Page post in a story from a friend) and Engaged Users (the number of unique users who clicked anywhere in your post) as measures of a post’s effectiveness.
I first excluded any boosted posts, as well as any posts that had a strong call to action. For example, a post letting people know they could get a free drink at an LA bar by mentioning “SAVINGS,” or a post advertising a time-sensitive deal, was excluded as a CTA post. These were mostly image posts that enjoyed higher reach and engagement, and it didn’t seem fair to compare them to a status update that included a link to a blog post.
However, even with the best-performing image posts excluded, and handicapped by significantly smaller reach, the image posts still drove more engagement and viral activity than the status posts. To get the numbers below, I calculated, for each post, the percentage of its total reach that saw the post virally or engaged with it, then averaged those percentages per post type.
(Although I’ve included link share data below, please note link shares only comprised 10% of the posts).

Post Type       Avg Reach   Avg Viral Reach (%)   Avg Viral Reach (#)   Avg Engaged Users (%)   Avg Engaged Users (#)
Link Share      754         0.0385                0                     1.4329                  10
Image           670         1.7406                5                     2.9378                  18
Status Update   3591        0.0449                1                     0.3443                  8

As you can see from the data, image posts, despite their much smaller reach, are the winners when it comes to virality and engagement.
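The per-post averaging described above can be sketched as follows. The Insights rows here are hypothetical illustrations, not savings.com’s actual data:

```python
from collections import defaultdict

# Hypothetical Insights rows: (post_type, total_reach, viral_reach, engaged_users)
posts = [
    ("status", 3600, 2, 12),
    ("status", 3400, 1, 10),
    ("image", 650, 11, 20),
    ("image", 700, 12, 19),
]

def avg_pct(rows, value_index):
    """Average, per post type, of each post's metric as a % of that post's own reach."""
    sums, counts = defaultdict(float), defaultdict(int)
    for post_type, reach, viral, engaged in rows:
        value = (viral, engaged)[value_index]
        sums[post_type] += 100.0 * value / reach
        counts[post_type] += 1
    return {ptype: sums[ptype] / counts[ptype] for ptype in sums}

viral_pct = avg_pct(posts, 0)    # % of reach seen virally, averaged per type
engaged_pct = avg_pct(posts, 1)  # % of reach that engaged, averaged per type
```

Note that this averages each post’s own percentage rather than dividing aggregate totals, which is why a type’s average percentage times its average reach won’t exactly reproduce the average counts.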

So what’s the takeaway here?

So does this mean that to get higher engagement and more viral activity you should exclusively post image posts on your Facebook page? I wouldn’t recommend it.
Your mileage may vary. First of all, if this data proves anything, it is that you should take any published statements (including from me) with a large grain of salt and do your own testing. And although I have no proof to support this speculation, I wouldn’t be surprised if a page heavy with image posts trips some Facebook filter that further hurts the performance of your posts.
Know your goals. After you have analyzed your own Facebook insights data, get clear on your social media goals. If you are looking for better reach but don’t want to sacrifice engagement, study the image posts and link shares that have had better reach (you’ll likely find a few that have done much better than their peers). One link share that had 3x the reach of the average asked a compelling question.
Beware link shorteners. One change that we will be making that could improve engagement for status updates is to discontinue using bit.ly links. This Social Media Examiner article cites a Buddy Media study that found engagement rates were three times higher for Facebook posts that used a full URL rather than a link shortener. Using a branded URL shortener instead could remove the hesitation some readers might have about clicking the links. Whether this change will make status updates the clear winner in engagement, as well as reach, remains to be seen.
What has been your experience with the performance of the different Facebook post types? Does your data look different?

HTTPS Now a Google Ranking Factor: Some Questions Answered
https://blogs.perficient.com/2014/08/11/https-now-a-google-ranking-factor-some-questions-answered/
Mon, 11 Aug 2014 18:31:30 +0000

It isn’t often that Google tells us something that will directly affect site rankings. When they do, it’s often because they want to modify our behavior in a certain direction. We saw this tactic from Google when they first began talking about web performance a few years back, and later announced that site speed is in fact a ranking factor.
A similar motivation seems to be in play with their recent announcement that sites that use HTTPS encryption will get a small ranking boost. In fact the rollout of the HTTPS support is eerily similar to the buildup of site speed evangelism at Google, and even shares one of its main actors, Ilya Grigorik. Read the official announcement here.
Why is Google rewarding such sites? Whatever gives users a better and safer experience after clicking a Google result ultimately benefits Google. With Google suggesting that over half of all searches will take place on mobile devices by the end of this year, and with one significant growth market for Google being regions that don’t yet have broadband internet, it’s clear that this is a big part of the motivation for speed.
In a similar vein, in a time of increasing identity crime and privacy challenges, issues likely to keep growing in the coming Internet of Things era, people will likely prefer Google Search if they feel they are being served well when they click a result there. Google realizes that privacy and data protection are an important part of a good user experience.
Google says that for now the boost will be very small, affecting only 1% of all queries, and with less effect than important ranking factors such as links and content quality. But over time they say they may boost its ranking effect as they want to encourage all sites to adopt HTTPS security.

Resolving Some Questions

I think it’s a no-brainer that sites should adopt HTTPS as soon as they can, and we are going to recommend that our clients put this on their development radar. But I had a couple of questions to consider before making up my mind about such a switch.

1. Is there a performance penalty?

HTTPS can affect site load speed, and as mentioned, Google has said that sites with slow loading times may get reduced rankings. How is Google resolving this potential dilemma?

Among other things, they’re throwing some serious evangelism at it, and doing so with some really smart people. Ilya Grigorik, who continues to lead a similar charge on the web performance front, has recently been quite visible on the topic of HTTPS performance.  (See a few excellent resources listed below.)

The bottom line is that Google seems to say that properly implemented, HTTPS won’t cause a significant performance penalty.  I doubt they would promote this so heavily if they felt that it seriously compromised their equally zealous effort to promote web performance.

2. Do you redirect http to https?

Yes, if you can. Put simply, http://www.example.com is different from https://www.example.com, and serving the same content on two different URLs is, of course, duplicate content.

Most of us have seen this problem on one site or another, and we approach the solution as we would with any other duplicate content problem.  That is, if a 301 redirect is feasible and practical, we do that.
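As a sketch of what that looks like in practice, here is a minimal site-wide HTTP-to-HTTPS 301 redirect for Apache’s .htaccess, assuming mod_rewrite is enabled. The rules are illustrative, not from the original post:

```apache
# Permanently redirect every HTTP request to its HTTPS equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

A 301 (permanent) redirect, rather than a 302, is what signals search engines to consolidate the two URLs.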

Alternatively, consider adding a canonical link element pointing from the HTTP version to the https version. Use this Google Webmaster Tools help page to guide you through the process. The wonderfully accessible John Mueller reaffirms this advice in this post, and answers some other questions. Take particular note of his comment about why “content only” sites (with no eCommerce) still benefit from encryption.
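Where a redirect isn’t practical, the canonical link element described above looks like this in the head of the HTTP page (URL illustrative):

```html
<link rel="canonical" href="https://www.example.com/page.html">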

Some Resources

Here are some additional resources related to this that you might find useful:
  • Ilya Grigorik’s detailed discussion of TLS performance tuning
  • Google’s guidelines for site security
  • A Google Webmaster Central Hangout in which John Mueller answered more questions about the new ranking factor
  • Ilya Grigorik and Pierre Far at Google I/O issuing Google’s call for “HTTPS Everywhere”

Two Exercises for a Simple, Real-Life Mobile SEO Audit
https://blogs.perficient.com/2013/01/03/two-exercises-for-a-simple-real-life-mobile-seo-audit/
Thu, 03 Jan 2013 15:30:23 +0000

Mercedes made headlines recently with their revamped mobile site. The good news is that they were able to increase mobile traffic 85% year to date, and 170% over last year. The bad news is that when I looked at the revamped site, it was evident that they didn’t account for SEO as part of the redesign, and they could have driven mobile traffic up much more if they had. Recently in Search Engine Land I explained that mobile SEO is not a myth. To further prove that it exists, I’m going to walk through a basic mobile SEO audit of Mercedes’ new site, to demonstrate how one brand failed to take full advantage of mobile search traffic by not considering how mobility affects search behavior and site architecture. Hopefully, this exercise will help the rest of you avoid the same mistakes.

Basic [brand + “mobile”] Search in Mobile

When I audit a mobile site, one of the first things I’ll do is search on the phrase “[insert brand name here] mobile site” to see if the brand can be found for navigational mobile queries. In this case we’ll use “Mercedes mobile site”, which according to the Google AdWords Keyword Tool gets about 1,300 searches per month in Google.
Entering these keywords in Google should return the m.mbusa.com site, since the query is navigational and there’s little competition. However, when I entered the query, no such website was found.
In fact, the first result was Mercedes.mobi, which was the only thing that looked like an official site on the page. The rest of the articles had to do with the recent site redesign.
Doing a site: search in Google, it became evident that the site was not listed for the navigational query because it was not eligible for it: the site developers had neutered it by blocking the entire site with robots.txt.


Here is a look at the robots.txt file for the site:
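The screenshot of the file isn’t reproduced here, but for illustration, a robots.txt that excludes an entire site from crawling is as simple as this (a generic sketch, not Mercedes’ actual file):

```text
User-agent: *
Disallow: /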

Two clear and common problems become evident:
A) Multiple sites competing for the same keywords
A site appears for the navigational keyword, but it’s not the site they just redesigned. It’s very likely that Mercedes would prefer that their best performing, most relevant site appears in search results, and Mercedes.mobi likely isn’t it.
To remedy this, some brand manager at Mercedes would have to decide which mobile site she wants to appear, and that site would then be optimized for search. If it’s not Mercedes.mobi, then we would focus our optimization efforts on m.mbusa.com.
To be fair, this problem exists with the desktop site as well, as both mbusa.com and Mercedes-benz.com/en often compete for the same keywords, even if they theoretically have different audiences. It is a mobile-specific problem, however, because Mercedes also has the dotmobi site indexed, which compounds the problem with a mobile-specific domain.
B) One site excluded from search entirely
The most pressing problem is the exclusion with robots.txt, which happens more often with mobile sites than you might think. Often the developers or well-meaning but uninformed SEOs will exclude the site with robots.txt in order to ensure that the site is not seen as duplicate content by Google. However, this is unnecessary, as I’ve been saying for a few years now, and as Google has confirmed in their recent mobile search guidelines. If mobile URLs are used, it’s only necessary to add switchboard tags to the mobile and desktop sites to let Google know which site is preferred for mobile searchers. If you’re not familiar with switchboard tags, Google explained them in their recent smartphone guidelines:
1. Annotation in the HTML
On the desktop page, add a link element pointing at the mobile URL (URLs here are illustrative):

<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1">

and on the mobile page, the required annotation should be:

<link rel="canonical" href="http://www.example.com/page-1">

This rel=”canonical” tag on the mobile URL pointing to the desktop page is required.
Using this method, sites can ensure Google shows the mobile site to mobile searchers and the desktop site to desktop users, and not have to worry about split link equity.
As it stands, the m.mbusa.com site is essentially invisible in Google mobile search, as the only hope it has for getting traffic from Google is the redirect from the desktop URL. Adding the switchboard tags removes the redirect, puts the m.mbusa.com URL in search results (which might give a slight boost in CTR), and improves the user experience from search overall.
2. Align Site Architecture with Mobile Search Behavior
The next exercise is simple: comparing mobile and desktop searches for the brand and nonbrand terms, and comparing those to the site architecture of the site being audited.
All of this is moot for Mercedes since the entire mobile site has been blocked with robots.txt, but we can still do the exercise as a hypothetical, for when the indexing problem is fixed.
The first thing that we want to do is look at their brand keywords to ensure the concepts people are looking for on mobile devices related to the brand are prominent (or at least represented somewhere) on the site.
For this you can use the Google keyword tool, and enter just the brand name.

Download both the “All mobile devices” report and the “Desktops and laptops” report. Use Excel’s VLOOKUP function to map the information into a report that looks like this:
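The screenshot of the report is omitted here, but the same VLOOKUP-style mapping can be sketched in a few lines of Python. The keyword volumes below are hypothetical, not real AdWords figures:

```python
# Hypothetical keyword volumes from the two AdWords exports (not real figures)
mobile = {"mercedes amg": 9900, "mercedes suv": 4400, "mercedes wallpaper": 2900}
desktop = {"mercedes amg": 22200, "mercedes suv": 12100, "mercedes dealers": 8100}

report = []
for kw in sorted(set(mobile) | set(desktop)):
    m, d = mobile.get(kw, 0), desktop.get(kw, 0)
    mobile_share = round(100.0 * m / (m + d), 1)  # mobile % of combined volume
    report.append((kw, m, d, mobile_share))

# Flag keywords with meaningful mobile volume and >= 30% mobile share,
# mirroring the screening described in the text
flagged = [kw for kw, m, d, share in report if m >= 1000 and share >= 30]
```

The 30% threshold is the one used in the article; the 1,000-search floor mirrors the volume cutoff mentioned below.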

Note that for this example I’ve only pulled U.S. search volume for mobile and desktop keywords with over 1,000 searches each, but for your purposes the long tail may be valuable.
Once we have this report we can look for keywords that both have a lot of volume from mobile devices and have more than 30% of the total available search volume between mobile and desktop (represented in green). A couple of things stand out for me when scanning the list:
A) There are a few concepts with a lot of volume from mobile devices that aren’t represented on m.mbusa.com. For example, Mercedes AMG has a lot of search volume from mobile devices, and a lot relative to the total; but the site is on a separate domain: Mercedes-amg.com. There is a mobile version of that site, but it might as well be a desktop site considering how it looks when the phone is oriented vertically.

What’s more, there is a page on m.mbusa.com devoted to this model of Mercedes, but it doesn’t resolve.

Not supported yet? Or not still supported? Either way, it’s a bad user experience and something that is unlikely to rank for these contextually relevant, high volume terms.
AMG isn’t the only concept for which there is search volume but no content. The most egregious gap is probably car types, represented by keywords like [Mercedes suv]. There are no category pages representing all of Mercedes’ SUVs, coupes, sports cars, etc. This is because the site uses a transcoder that crawls the desktop site and builds reformatted versions of its pages automatically, and a transcoder can’t reformat a page that doesn’t exist.
If you’re not familiar with transcoders, many brands choose mobile solutions that transcode desktop pages because they don’t require internal resources or a lot of budget to implement, but they can have many disadvantages when it comes to SEO. The biggest disadvantage with most is that they only transcode desktop pages and can’t add pages that make sense for the mobile paradigm, or pages that don’t exist on the desktop site.
Because the car type pages on the desktop site use hash fragments in their URLs, they are inaccessible to spiders and can’t be indexed as individual pages. There are workarounds, like SWFAddress, for making these URLs accessible with this technology, or Mercedes could choose to redesign the site with static URLs.

Either way they’re missing out on a big opportunity, as our initial research has shown that those in the market for a new vehicle who don’t yet know what brand they’re looking for mostly search by car type. We categorized the non-brand keywords and put them in a pivot table to get a better sense of the opportunity available, and discovered that the lion’s share of the search volume comes from car types.

There are also many searches for mobile wallpaper, which Mercedes has on their Mercedes.mobi site but not on m.mbusa.com.

When we look at the search volume for mobile wallpaper, and mobile wallpaper for cars, it’s clearly the one concept that is relevant to the brand that searchers are looking for on mobile devices much more than desktops; but it’s not included at all on this site.
It’s difficult to be visible for a query if you don’t have content that’s relevant to it, so for this client, we would recommend building out specific pages for relevant car types and characteristics of those car types, as well as combining the Mercedes.mobi site with the m.mbusa.com site (including the wallpaper) in order to better align their site’s information architecture with what consumers are actually looking for.
B) The second part of this that struck me as odd is that when we look at local search terms the volume doesn’t appear to support such prominent placement on the homepage.
When we look at the home page we see that the primary navigation consists of three calls to action: select a vehicle, find a dealer and special offers.

Now it could be that these were selected for reasons other than search behavior, but from the keywords we’ve seen, local information is not as important to this audience as it is to mobile searchers in general. In fact, if we look at the branded keywords above, the few local keywords that appear (Mercedes Benz Houston, Fletcher Jones Mercedes, Mercedes Benz Chicago, Mercedes Benz of Buckhead, Mercedes Benz Dallas, etc.) actually have less search volume from mobile devices than the average. I don’t know that I would remove the dealer link from the homepage, but I would at least make car types more prominent, as they’re currently absent from the site.

The Results

In performing these two exercises we can see clearly that Mercedes needs help on the mobile SEO front. Based on the initial audit we would recommend the following (in order of priority):

  1. Allow Google to crawl m.mbusa.com by changing the site’s robots.txt file

  2. Implement switchboard tags in the short term to allow Google to understand the relationship between mobile and desktop pages

  3. Consolidate duplicate content on Mercedes.mobi and m.mbusa.com with canonical tags or redirects.

  4. Address information architecture of the desktop site, as it’s preventing traffic from the desktop and mobile site by excluding certain pages like category pages for car types and characteristics.

  5. Address microsites that are duplicating content both for mobile and desktop properties.

  6. In the long term we would recommend either using responsive web design for one site, or ideally a hybrid of responsive and mobile-only pages so that we can continue to offer things like mobile wallpaper that are important for loyalty marketing and branding. The search behavior is different enough to warrant a few mobile pages, but probably not an entire site.

If this were an actual mobile SEO audit it would be much longer and more in-depth, but I wanted to present a limited version here for this reason:
Mobility changes the SEO game. Whether you have a mobile or responsive site, there are issues with search behavior, site architecture, and link building that SEOs need to address now if they are going to do their jobs fully in a world where one out of every seven people on the planet owns a smartphone. This is one example of how I might do it for a client. Hopefully, you have your own ideas and we can have a discussion that will advance the practice, as we’ve done with traditional SEO for more than ten years. The last thing we can afford to do is ignore it, because these problems aren’t going to detect and resolve themselves.
Hopefully, this simple audit helps you do actual mobile SEO on your own sites. If you have questions, feel free to ask in the comments, or on my blog.
Disclosure: Resolution Media does some paid search for Mercedes Benz, but has not been engaged for SEO.

Google Local Business Center Adds Detailed Statistics
https://blogs.perficient.com/2009/06/01/google-local-business-center-adds-detailed-statistics/
Mon, 01 Jun 2009 16:33:16 +0000

Some time last week, Google introduced a fascinating new feature into one of the Local Business Center accounts I manage for a client. I haven’t seen anything written about this in the Local blogger community, or on the Google blogs, so it appears to be a bit of a stealth feature that Google is rolling out quietly.
The following links showed up in this GLBC account (and notably not in any others I use) around the middle of last week:
(Screenshot: the new “View Report” links in the GLBC account.)
Clicking on one of the “View Report” links leads to a detailed set of statistics. To keep the identity of my client private, I’ve sanitized the report and broken it into several pieces. The first, and possibly most interesting, piece is the Activity report, which shows “Impressions” and “Actions” for this particular listing graphed over time:
(Screenshot: the Activity report, graphing Impressions and Actions over time.)
You can float over the data points in the chart and get a small information “bubble” displayed on the chart showing you the date and the data value for the point. You can adjust the time scale — either using the pre-defined past 7-day or past 30-day window, or using your own custom date range.
Google’s on-page help defines impressions as follows: “We add 1 to your total count of impressions each time your business listing is shown as a local search result on Google or Google Maps”. So impressions count 1-, 3-, and 10-packs shown as part of Universal Search results, as well as searches performed on maps.google.com.
Actions include:

  • Clicks for More Info
  • Clicks for Driving Directions
  • Clicks through to the Web Site

Overall, Google is providing two major improvements over the very simple clicks-and-impressions data it has provided in the past: historical trending, and a breakout of the kinds of clicks/actions taken by users. I’m particularly pleased with the historical trending, as it should allow one to carefully monitor the performance of listings over time based on optimization efforts, seasonality, market changes, etc.
We can also begin to understand actions taken from click-throughs at a much finer level of granularity.  Clearly clicks through to the website are very desirable, but we can also begin to understand our geographic market by looking at the volume and distribution of requests for driving directions.
Indeed, Google is providing a wealth of useful information in the bottom section of the report, labeled “Where driving directions requests come from”, but we’ll get back to that in a moment.
The next two sections of the report, appearing just below the Activity and Totals section, cover the keywords driving impressions and the origins of driving-directions requests. (Again, I’ve broken this up for formatting and discussion, but all of these sections appear on a single integrated web page for each location in the GLBC.) These sections are shown below:
(Screenshot: the “Top search queries” and “Where driving directions requests come from” sections of the report.)
In language similar to the top queries report found in Google Webmaster Tools, Google defines “Top search queries” as: “The top Google search queries for which your business listing appeared, along with the number of times users saw your business listing in the search results for those queries.” In the screenshot above, I’ve sanitized the search queries, but envision this as a list of 10 keyword phrases. Next to each phrase is the number of impressions that phrase drew in local queries (i.e., impressions, as defined above), along with a horizontal bar proportional to this value. This, of course, is quite valuable keyword intelligence.
Finally, below that is yet more business intelligence in the section titled “Where driving directions requests come from”. Here you can see what appears to be a count, aggregated by zip code, of the locations of users requesting directions. In a nice bit of Google Maps eye candy, each city/zip phrase in the ranked list turns out to be a link that, when clicked, causes the map to pan and display the region containing the zip code. Further, when you float your mouse over the map marker with the count number for that zip code, Google visually highlights the zip code.
Altogether, this is a stunning little bit of wizardry that would actually seem pretty useful for visualizing your geographic market.
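Conceptually, the aggregation Google is doing here is just a count of direction requests grouped by the requester’s zip code. A sketch, with hypothetical request data:

```python
from collections import Counter

# Hypothetical direction requests, one originating zip code per request
requests = ["60611", "60611", "60614", "46032", "60611", "60614"]

by_zip = Counter(requests)
ranked = by_zip.most_common()  # [("60611", 3), ("60614", 2), ("46032", 1)]
```

Ranking zip codes by request count like this is exactly the view that makes the geographic spread of a listing’s audience visible.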
In addition to the statistical data, the report page includes a nice pane displaying most all of your data – much like the main data entry page in the “Add new listing” (or “Edit”) panel of the GLBC. I’ve omitted this pane as, with one exception, there’s nothing new here and since it has so much of my client’s identifiable data it was difficult to sanitize. However, there was one intriguing tidbit worth pointing out. Above the info pane on the right, the following indicator appeared on my listings:
(Screenshot: the listing’s info pane, showing an “86% complete” indicator.)
Note the “86% complete” indicator. I’m a little unsure of how this is being calculated. For this particular listing, the only GLBC data we haven’t provided are a “Mobile phone”, “Fax”, and “TTY/TDD” phone number. We’ve included everything else, including photos, videos, hours of operation, categories, and “Additional Details” (i.e., custom attributes). I’m pretty curious whether Google is asserting that the lack of those three phone numbers is what constitutes my “14% missing data”, or if there’s something else I’m missing (unrealized opportunities?). Guess I can test this and report back.
Well, there’s lots more analysis and discussion of the various data elements of the report, but in the interest of getting this out, and of getting a bit more insight into the mystery of how widespread the preview of this feature is, I’m going to close out for today. I’d love to get people’s comments and questions, and if my client’s account is indeed a rarity at the moment, I’d be glad to provide more observations and feedback on what I’m seeing.

Drupal and Search Engine Optimization https://blogs.perficient.com/2007/09/26/drupal-and-search-engine-optimization/ https://blogs.perficient.com/2007/09/26/drupal-and-search-engine-optimization/#respond Wed, 26 Sep 2007 23:26:58 +0000 http://www.stonetemple.com/blog/?p=189

Drupal is known for being a very SEO friendly content management system (CMS). The way it assembles its pages is crawler friendly. This makes it a popular choice for people looking to build dynamic web sites. However, there are a number of potential SEO problems with Drupal as well. These need to be dealt with to ensure that you get optimal results.
The very fact that Drupal is such a dynamic system is a factor that leads to some of its SEO problems. The content is stored in a database and retrieved at runtime. Almost all information is stored as a “node”, a basic, unstructured unit of content. Often, each “node” is associated with groups of keywords, known as “taxonomies”, and Drupal makes it easy to retrieve and sort information by these taxonomies. Since all content can be retrieved dynamically, Drupal generates generic URLs for the content, such as www.example.com/?q=node/3 or www.example.com/node/3.
These “internal” URLs are always present in Drupal, even though Drupal provides features that allow you to hide them, and instead present much friendlier URLs, known as aliases, to website users. There are multiple optional modules that may affect the generation of pages and the naming of URLs, and there are many modules that remain aware of the internal naming conventions, even when user-friendly URLs are being used. As a result, Drupal may expose both the internal URLs and the user-friendly URLs to users and web crawlers.
As a result of these kinds of architectural issues, many Drupal sites end up exposing content to the web via multiple URLs. When this happens, the multiple URLs can be crawled by the search engines, creating duplicate content problems. Here are some examples of duplicate content issues, and some other problems that can arise in Drupal.
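To make the duplicate-URL problem concrete, here’s a small sketch of how the same node can surface under two addresses. The alias table is invented for illustration; it simply mimics what Pathauto stores:

```python
# Conceptual sketch: both an internal node path and its friendly alias
# resolve to the same piece of content, so a crawler that finds both
# sees duplicate pages. The alias table below is invented.
from collections import defaultdict

alias_to_node = {
    "/content/how-to-surf": "/node/5",
    "/music/concert/beethoven": "/node/12",
}

def canonical(path: str) -> str:
    """Resolve a path to its internal node URL, if it is an alias."""
    return alias_to_node.get(path, path)

# URLs a crawler might discover on the site:
urls = ["/node/5", "/content/how-to-surf", "/node/12"]

groups = defaultdict(list)
for u in urls:
    groups[canonical(u)].append(u)

# Any node reachable under more than one URL is a duplicate-content risk.
duplicates = {node: paths for node, paths in groups.items() if len(paths) > 1}
print(duplicates)  # {'/node/5': ['/node/5', '/content/how-to-surf']}
```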
  1. Problem: Duplicate content from aliases.
     Example: www.example.com/node/5 and www.example.com/content/how-to-surf both point at the same physical document.
     Solution: Use robots.txt to disallow URLs that include “/node/”. For example, you can include the following lines in robots.txt:
     Disallow: /node
     Disallow: /*/node/
     Considerations: This assumes that all URLs are available via friendly aliases, which should be the case if you’re using the Pathauto module.

  2. Problem: Drupal’s default robots.txt has errors.
     Example: The default robots.txt uses “Disallow: /search”. This disallows only a page ending with /search, but not all of Drupal’s internal search results pages, which is what you actually want blocked.
     Solution: Update the robots.txt entry to read:
     Disallow: /search/

  3. Problem: Pathauto can create many extra pages on the site if configured incorrectly.
     Example: If you turn on “Create index aliases” and you have a hierarchical alias (e.g., a page with a path containing a slash, such as music/concert/beethoven), Drupal automatically generates index pages that contain all pages in each category (for example, all music, and all concerts).
     Solution: Do not check the “Create index aliases” checkbox in the Pathauto module.

  4. Problem: An incorrect setting of the Pathauto “Update action” in a production environment can cause the URLs of published pages, which may already be indexed by the search engines, to change.
     Solution: In development (before exposing the site to the search engines), use “Create a new alias, replacing the old one” to regenerate URLs whenever necessary (for example, if your Pathauto rules change). In production, once the site is exposed, set this to “Do nothing, leaving the old alias intact”.

  5. Problem: Some modules, such as Forums and Views, create sortable lists that can generate multiple URLs with duplicate content.
     Solution: If you use such a module, be sure to exclude the sorted variations using the following robots.txt rule:
     Disallow: /*sort=

  6. Problem: The Forward module creates a link on each page that allows the page to be forwarded to a friend. You can easily end up with hundreds or thousands of such low-quality pages that are essentially boilerplate.
     Solution: If you use this module, be sure to exclude the forward pages using the following robots.txt rule:
     Disallow: /forward/
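As a sanity check on the prefix-based rules above, you can run them through Python’s standard-library robots.txt parser. Note that urllib.robotparser only does simple prefix matching, so the wildcard rules (Disallow: /*/node/ and Disallow: /*sort=), which rely on a search-engine extension to the original robots.txt convention, are left out of this sketch:

```python
# Sketch: verify that the prefix-based robots.txt rules from this post
# block Drupal's internal URLs while leaving friendly aliases crawlable.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /node
Disallow: /search/
Disallow: /forward/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Internal node URLs and forward pages should be blocked...
print(parser.can_fetch("TestBot", "http://www.example.com/node/5"))      # False
print(parser.can_fetch("TestBot", "http://www.example.com/forward/5"))   # False
# ...while the friendly alias stays crawlable.
print(parser.can_fetch("TestBot", "http://www.example.com/content/how-to-surf"))  # True
```

Because “Disallow: /node” is a prefix match, it also covers /node/5, /node/12, and so on; just be sure you don’t have any legitimate friendly aliases that begin with /node.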
These problems can crop up on many Drupal systems, and all Drupal users should review their sites for these issues. Drupal may also have other issues, depending on the site and the degree of customization. For example, on several sites, we’ve seen Drupal generate complex CSS hierarchies that end up building hidden text into the pages. While search engines try to detect hidden text scenarios that are not a result of bad intent, this is a risk you don’t need. As long as you recognize what the issues are, they can be dealt with, and Drupal can be a great choice as a content management system. Most content management systems present even greater challenges to SEO.

]]>
https://blogs.perficient.com/2007/09/26/drupal-and-search-engine-optimization/feed/ 0 236519
Can Google Drop You from Your Own Custom Search Engine? https://blogs.perficient.com/2006/11/03/can-google-drop-you-from-your-own-custom-search-engine/ https://blogs.perficient.com/2006/11/03/can-google-drop-you-from-your-own-custom-search-engine/#respond Fri, 03 Nov 2006 18:12:19 +0000 http://www.stonetemple.com/blog/?p=70

Has your mother created a Google Custom Search Engine yet?
Yes, I’m being facetious. But I want to make a point: Custom Search Engines are drop-dead easy to create. According to Google’s new Custom Search Blog, “tens of thousands of people have already started contributing”. The volume of posts on the Google Co-op Group, which I monitor and contribute to daily (handle = “greyhound”), is high and growing. The program looks like it’s off to a roaring start. But amid all this success lies a hidden problem that is baffling many new CSE users. We’ll call it the “Supplemental Results Syndrome”, and if it bites you, it can be fatal to your CSE.
The ease of creating CSEs belies the tremendously powerful framework upon which they sit. Because they’re so easy, many people who might otherwise have stayed away from advanced search engine technology are flocking to it. And herein lies the problem: when you’re literally building your own search engine, some things are going to be a little bit harder than filling out forms.
Today, I want to describe the problem that smacks sufferers of Supplemental Results Syndrome (SrS) right in the face, and how you can diagnose it. (Over the next few days, we’ll provide some recommendations about how to cure this devastating website illness). To SrS sufferers, Custom Search Engines are broken. “Why don’t I see my site when I do a search on my custom search engine?”, they cry. “I can see it when I search at google.com, but not in my own search engine!” Why not? The answer is simple and complex. The simple answer is that their site is locked up in Google’s supplemental index and… drum roll… Google Custom Search Engines do not include results from the supplemental index.
Let’s talk more about the supplemental index. Before CSEs, the supplemental index was transparent to most people. Do a search on some unique term that turns up a page on your site, and there — in glorious color for all to behold — is your page nestled in the Search Engine Results Page. But look closer at those results. If the words “Supplemental Result” appear next to your URL, then I’m sorry to say it, but this page is afflicted, and it ain’t gonna make it into anyone’s CSE, no way, no how.
Sadly, many low-traffic, niche web sites — run by some of the very people who are so attracted to CSEs in the first place — have much or all of their site in the supplemental index. Why? Reasons vary, but far and away the biggest cause of SrS is a lack of quality inbound links. Put simply, Google maintains two indexes. The “main” index, where all the big boys live and where you definitely want to be, is link party central. If you’re in the main index, it’s because you’ve got lots of inbound links, you’ve probably got some PageRank, and life is good. If you’re in the supplemental index, you’re a worldwide web wallflower. Sticking with the high school dance analogy: not enough people know you (link to you), and you aren’t one of Google’s favorite dance partners (you’re not showing up in a lot of searches, so you’re not getting the level of search-engine-driven traffic that you want).
We’ll wrap up today’s post by showing you the definitive diagnostic test for SrS. Go to google.com and search on “site:yourdomain.com” (no quotes, and with your own domain name). Examine the results. If many or most of your pages display “Supplemental Result” next to the URL, you’re an SrS sufferer, and we need to prescribe some search engine optimization medicine to make your site well. Remember, if you’ve got SrS, CSEs are not the cure; they will only frustrate you. The good news is that once you discover you have SrS, perhaps by accident when you built your first baffling CSE, you’ve taken the first step toward recovery (that is, knowing you have a problem!). And when you get your website all healthy, you can build a CSE that even your mother will be proud of.
Stay tuned for upcoming posts that will take you through the steps needed to cure SrS once and for all.

]]>
https://blogs.perficient.com/2006/11/03/can-google-drop-you-from-your-own-custom-search-engine/feed/ 0 255640