SaaS vs Cloud Computing
I really don't like buzzwords, but these two have been appearing consistently in various media and literature for at least the past five years, and it appears they aren't going to die a merciful death any time soon.
First, we need a little history on the evolution of computers:
1. Computers had command-line interfaces, but you had to log in at the console locally
2. Computers with networking allowed "timesharing", or remote access from "dumb terminals" via something like telnet
3. Computers with GUI systems appeared, but you had to use the GUI locally
4. Systems for operating a GUI remotely appeared. These ranged from X-Windows to VNC and similar systems like Remote Desktop, PC Anywhere, and Citrix.
5. Client-server models (and three-tier architectures) appeared, with so-called "fat clients".
6. The internet became popular, and web servers/browsers appeared.
Ok, so what is SaaS? SaaS literally stands for "Software as a Service", which means you pay someone for access to software, and they provide it. This could happen in a number of ways, but usually it means accessing a web site that hosts the software you want. Sometimes it is done through Citrix or something similar. Logically speaking, though, it could be any feasible method.
Now, if we look back in time, computers used to be big and expensive. When people logged onto mainframes remotely via dumb terminals, all of the power was at big fancy servers, and the terminals did very little except display text and allow input.
Things later moved to client-server, which basically means there was a GUI program running locally that handled things like scrolling on the PC side, while database requests were sent over the network. The advantage was that the power of your local machine could be used. The disadvantage was that clients had to be installed on the local machines and kept up to date with the server software. Configuration, including database links and so on, had to be set up as well. All in all, it was fragile. Also, in many cases, it was much slower than it theoretically should have been.
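To make the fat-client idea concrete, here is a minimal sketch in Python of what such a client boils down to: a locally installed program that keeps the UI logic on the PC and ships queries to a database server over the network. The driver, host name, credentials, and table are all invented for illustration, not taken from any particular product.

```python
# Minimal sketch of a "fat client": the UI runs locally, but every data
# request travels over the network to a central database server.
# The host name, credentials, and table are hypothetical examples.
import psycopg2  # one common PostgreSQL driver; any networked DB driver follows the same pattern

def fetch_employees(search_term: str):
    # Connection details like these are exactly the kind of per-machine
    # configuration that made fat clients fragile to deploy and maintain.
    conn = psycopg2.connect(
        host="payroll.example.internal",  # hypothetical server
        dbname="payroll",
        user="clerk",
        password="secret",
    )
    try:
        with conn.cursor() as cur:
            # Only the query and its results cross the network;
            # scrolling, sorting, and display happen on the local PC.
            cur.execute(
                "SELECT id, name, salary FROM employees WHERE name ILIKE %s",
                (f"%{search_term}%",),
            )
            return cur.fetchall()
    finally:
        conn.close()
```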
Later, things came full circle with tools like VNC, Citrix, and Remote Desktop. These tools basically let you view and control the screen of the server, where all the software runs. Although this means all of the graphics are sent over the network, it was often, amazingly, faster than fat client/server applications.
Finally, there has recently been a trend towards web applications. These turn your web browser into the client, which can run some things (like JavaScript) locally but submits requests to the server for processing. The advantage here is that the web browser is a relatively standardized client, and web servers are common and freely available too. The disadvantage is that they don't usually allow pixel-perfect control.
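As a rough sketch of that split, here is a tiny server-side endpoint, using Flask purely as an illustrative choice, that does the processing while the browser merely submits a request and renders the response. The route and payload fields are made up for the example.

```python
# Sketch of the web-application model: the browser is the client and the
# server does the processing. Flask is just one convenient framework;
# the route and payload fields are invented for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/payroll/run", methods=["POST"])
def run_payroll():
    data = request.get_json()            # the browser submits a request...
    hours = float(data.get("hours", 0))
    rate = float(data.get("rate", 0))
    gross = hours * rate                 # ...and the server does the work.
    return jsonify({"gross_pay": round(gross, 2)})

if __name__ == "__main__":
    # Any browser (the "standardized client") can now talk to this endpoint.
    app.run(port=8000)
```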
At any rate, when companies want to use an external third-party solution, they can access normal desktop software via something like Citrix, or they can use a web client. In either case, the software is "hosted" (that is, managed and run on a server owned and operated by the provider). This means a company offering, for example, an outsourced payroll system can operate one system for many clients and achieve good economies of scale. However, while this is Software as a Service, it is not "Cloud Computing", at least not in my book.
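A hedged sketch of what "one hosted system, many clients" means in practice: the provider runs a single application and scopes every request to a tenant, rather than installing anything at each customer. The tenant names and fields below are invented for illustration.

```python
# Toy sketch of a multi-tenant hosted ("SaaS") service: one application,
# operated by the provider, serves many client companies at once.
# Tenant names and data are invented for the example.
TENANT_DATA = {
    "acme-corp":  {"employees": 120, "pay_cycle": "monthly"},
    "globex-inc": {"employees": 45,  "pay_cycle": "biweekly"},
}

def handle_request(tenant_id: str, action: str):
    # Every request is scoped to the tenant making it; the provider
    # maintains the single shared system, which is where the
    # economies of scale come from.
    record = TENANT_DATA.get(tenant_id)
    if record is None:
        return {"error": "unknown tenant"}
    if action == "summary":
        return {"tenant": tenant_id, **record}
    return {"error": "unsupported action"}

print(handle_request("acme-corp", "summary"))
```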
The thing is, if you are using a third-party provider via Citrix, you still need to know specifically which machine to connect to. It's likely that there is a machine assigned to you which you must use. Even if the service provider has a fail-over server, or even a cluster of several machines you can use, the environment is still very static. Most web applications fall into a similar category: the remote side may have a few static web servers, a few dynamic web servers, and a database server or two that they are connected to.
On the other hand, some companies (mainly Amazon, Google, and Yahoo) have worked out true "cloud" systems. What sets cloud computing apart from normal hosted solutions? Well, for example, when you use Gmail, it stores your messages in the "cloud". Google's "cloud" consists of thousands of machines. Rather than storing your mail on a specific machine, the mail program sends a request to store or retrieve the data in the cloud of machines, and it is magically done. This isn't just sleight of hand: when Google adds a machine, they just plug it in, it becomes part of the "cloud", and it automatically starts taking its share of the load. Google keeps every message of yours on several different servers, so if one explodes tomorrow, nobody will even notice. They just replace it, and things are replicated to it automatically as if nothing ever happened. This isn't done only at the database level, but at the application level as well. What this means is that there is effectively no single point of failure in these systems, and they can grow almost without limit without hitting the bottlenecks that plague normal client/server and web applications.
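That idea is easier to see in miniature. The toy class below is plain Python and entirely made up (nothing like Google's real infrastructure): it stores each value on several nodes so that losing one node loses nothing, and a freshly added node simply joins the pool and starts taking a share of new writes (a real system would also rebalance existing data).

```python
# Toy illustration of the "cloud" storage idea: every write is replicated
# to several nodes, so no single machine is a point of failure, and new
# nodes can be added to the pool at any time. A deliberately simplified
# sketch, not how Gmail or any real system is implemented.
import hashlib

class ToyCloudStore:
    def __init__(self, nodes, replicas=3):
        self.nodes = list(nodes)        # each "node" is just a dict here
        self.replicas = replicas

    def _pick_nodes(self, key):
        # Deterministically spread keys across the pool (a stand-in for
        # real placement schemes such as consistent hashing).
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(min(self.replicas, len(self.nodes)))]

    def put(self, key, value):
        for node in self._pick_nodes(key):
            node[key] = value           # the write lands on several machines

    def get(self, key):
        for node in self._pick_nodes(key):
            if key in node:             # any surviving replica can answer
                return node[key]
        raise KeyError(key)

    def add_node(self):
        self.nodes.append({})           # "plug one in" and it joins the pool


store = ToyCloudStore(nodes=[{} for _ in range(5)])
store.put("msg-1", "Hello from the cloud")
store.nodes[0].clear()                  # simulate one machine exploding
print(store.get("msg-1"))               # the message is still there
```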
This type of architecture is difficult to design, and thus usually isn't worth the trouble for smaller sites, which is why the companies using it now are typically large, specialized ones. Still, the technology is available for anyone to implement (look up "Hadoop", for example, for more information).
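For a flavor of what that technology looks like, here is a word count written in the map/reduce style that Hadoop popularized. This is plain Python rather than Hadoop's actual API, and the input lines are just an inline example.

```python
# Word count in the map/reduce style that Hadoop popularized.
# Plain Python stand-ins are used here instead of Hadoop's real API;
# on a real cluster, the map and reduce steps run in parallel across
# many machines over data stored in a distributed filesystem.
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, one per occurrence.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Group by key and sum the counts.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["the cloud is not the same as hosting",
         "the cloud replicates data across machines"]
print(reduce_phase(map_phase(lines)))
# e.g. {'the': 3, 'cloud': 2, 'is': 1, ...}
```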
The point I am trying to make here is that if you are a service provider, think about what you are saying before you spout off the words "Cloud Computing", and consider whether your infrastructure really justifies the term. If you are a client, ask for more details, and see whether the salespeople are stretching the truth. Real cloud computing may be worth paying a little extra for in cases where availability and redundancy are key. On the other hand, true cloud computing systems offer slightly weaker data consistency, which makes them less suitable for applications like finance. If the salesperson tells you they have both "Cloud Computing" and "100% ACID compliance" (immediate database consistency), they are lying to you.
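To see why "cloud" and strict, immediate consistency pull against each other, the sketch below (again a toy, not any vendor's system) shows a write that reaches its replicas asynchronously: a read that happens to hit a lagging replica returns stale data until replication catches up, which is exactly the behavior that is acceptable for mail and dubious for a ledger.

```python
# Toy illustration of eventual consistency: a write is acknowledged before
# it reaches every replica, so a read against a lagging replica can return
# stale data for a moment. A made-up sketch, not any real product.
import random

class LaggyReplicas:
    def __init__(self, count=3):
        self.replicas = [{} for _ in range(count)]
        self.pending = []               # replication queue

    def write(self, key, value):
        self.replicas[0][key] = value   # acknowledged after just one replica
        self.pending.append((key, value))

    def replicate(self):
        # In a real system this happens continuously in the background.
        for key, value in self.pending:
            for replica in self.replicas[1:]:
                replica[key] = value
        self.pending.clear()

    def read(self, key):
        # A load balancer might send the read to any replica.
        return random.choice(self.replicas).get(key, "<stale or missing>")


db = LaggyReplicas()
db.write("balance", 100)
print(db.read("balance"))   # may print 100 or "<stale or missing>"
db.replicate()
print(db.read("balance"))   # now always 100
```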