In my experience, there is no direct benefit to running your websites on Google's cloud services instead of Amazon's. Yet one aspect cannot be denied: you are closer to the potential customer, aka FASTER. Your data reaches the search engine sooner (the crawler does not have to travel as far to get to you), and customers reach you over Google's freaking fast backbone network.
So, if you want to bring your server closer to the search engine, a very simple trick is to install a Squid box on a dedicated computing unit. I have a SlideShare deck online that shows you how EASY it is to install a Squid proxy.
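For reference, here is a minimal sketch of what that setup looks like on a CentOS box. The install commands use the CentOS 6-era tool names, and the config values below are illustrative placeholders, not my actual setup:

```shell
# Install and enable Squid (run as root on CentOS 6):
#   yum install -y squid && chkconfig squid on && service squid start
# A minimal forward-proxy config; the allowed network is a placeholder.
cat > squid.conf.sample <<'EOF'
http_port 3128
acl localnet src 10.0.0.0/8
http_access allow localnet
http_access deny all
EOF
wc -l < squid.conf.sample   # 4 config lines written
```

After dropping a config like this into /etc/squid/squid.conf and restarting the service, the box forwards requests for the allowed network on port 3128.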
How Google Cloud Compute Engine Works
You log into the Google Cloud services with your standard Google account. My first step was to enable billing with a credit card so I could access all the billable services they offer. On the overview page you pick the service you want to run, and for my first test I started a standard CentOS 6.x machine under “Compute Engine”.
It is quite clear that they mainly target app developers on Android, but the Compute Engine server I installed is basically the same virtual machine you get from other providers like Amazon or Softlayer.
You select the storage size and your disk image, give the instance a name, and fire it up. A fresh CentOS Linux box as a dedicated server is up in a few seconds.
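The same steps can be done from the command line. A sketch using gcutil — flag names as of the gcutil era (check `gcutil help addinstance`), and the instance name, image, machine type and zone are purely illustrative:

```shell
# Guarded so the snippet runs harmlessly on a machine without gcutil:
if command -v gcutil >/dev/null 2>&1; then
  gcutil addinstance my-squid-box \
      --image=centos-6 \
      --machine_type=n1-standard-1 \
      --zone=us-central1-a
else
  echo "gcutil not found - download it per the Google docs first"
fi
```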
Accessing a computing unit on Google Cloud
As a system administrator, you probably want to access your machine over SSH to install all the tools you need. Google asks you to use a shell tool called “gcutil”, which is a Linux command-line tool, so it is best if you log in to your server FROM a Linux box!
Downloading gcutil and logging in are nicely described in the Google documentation.
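Once gcutil is set up, the login itself is a single command — again a sketch, with `my-squid-box` as an illustrative instance name:

```shell
# Guarded so the snippet runs harmlessly on a machine without gcutil:
if command -v gcutil >/dev/null 2>&1; then
  gcutil ssh my-squid-box   # opens an SSH session on the instance
else
  echo "gcutil not found - download it per the Google docs first"
fi
```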
And once logged in, you get a normal shell prompt on your fresh CentOS box.
Speed and stability of Infrastructure
Now to the core of this setup: speed and its SEO implications. Our homepage is delivered fully dynamically from different sources. It is a little complicated, but to sum it up: our homepage calculates a lot before it starts to deliver content. That happens in Texas, on our own cluster at our hoster Softlayer. If I call the page from Germany, you have to add some milliseconds for the data to travel across the pond (roughly 200ms), and the response time from Germany is around 600ms for tradebit.com.
Now, as I always say: a response time over 400ms is not good for conversions; others even want to push it below 150ms. But as a small company with fully dynamic pages, I settle for anything below 400ms for complex server-side calculations, including a delivery distance of half-way around the world. The page should load within 200ms if you are right next to it.
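If you want to reproduce these numbers yourself, curl's write-out timers are a quick way to measure time-to-first-byte and total response time. The URL is the one from this post; the trailing `|| echo` just keeps the snippet from aborting on a machine without network access:

```shell
# -w prints curl's built-in timing variables after the request completes
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' \
     http://www.tradebit.com/ || echo "no network from this machine"
```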
And that is exactly what happens with the Google compute engine as a buffer: delivery time came down by around 35%. That is freaking fast for data transit around the globe. We will continue to test-drive this setup, especially regarding the pricing of the whole Squid show we want to start, but I am pretty amazed by the results already and thought I'd share them with you.
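For completeness: the “buffer” role means Squid runs in accelerator (reverse-proxy) mode in front of the origin cluster. A minimal sketch of that config — the hostname and origin IP are placeholders, not our real setup:

```shell
cat > squid-accel.conf.sample <<'EOF'
# Listen on port 80 as an accelerator for one site
http_port 80 accel defaultsite=www.example.com
# Send cache misses to the origin cluster in the other data center
cache_peer 203.0.113.10 parent 80 0 no-query originserver name=origin
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site
EOF
grep -c . squid-accel.conf.sample   # 7 non-empty lines
```

The `originserver` option is what turns the peer into an origin rather than another proxy, so the Google box answers from its cache where it can and only crosses the pond on a miss.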
It should have a positive effect on sales, conversions and ultimately SEO aspects as well.