Chapter 01: Introduction to Cloud Computing

What is Cloud Computing?

Cloud Computing is the practice of using a network of remote servers hosted on the internet to store, manage, and process data, rather than a local server or personal computer. It is the on-demand delivery of computing resources through a cloud service platform with pay-as-you-go pricing.

Advantages of Cloud Computing

  1. Trade capital expense for variable expense

Pay only for the resources consumed instead of heavily investing in datacenters and servers before knowing your requirements.

  2. Benefit from massive economies of scale

Achieve lower variable costs than you can get on your own. Cloud computing providers, such as Amazon, build their own data centers and achieve higher economies of scale, which results in lower prices.

  3. Stop guessing capacity

Access as many or as few resources as you need instead of buying too much or too little by guessing your requirements. Scale up and down as required with no long-term contracts.

  4. Increase speed and agility

New IT resources are readily available, so you can scale up as demand grows. The result is a dramatic increase in agility for the organization.

  5. Stop spending money on running and maintaining data centers

Eliminate the traditional expense of running and maintaining data centers; these are managed by the cloud provider.

  6. Go global in minutes

Provide lower latency at minimal cost by easily deploying your application in multiple regions around the world, as the brief sketch below illustrates.
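The following is a minimal sketch of this idea using the AWS SDK for Python (boto3); the particular Regions in the list are illustrative assumptions, not recommendations from this chapter. The same code targets any Region simply by changing the region name.

```python
# A hedged sketch: query several AWS Regions with identical code.
# The Region list is an illustrative assumption.
import boto3

for region in ["us-east-1", "eu-west-1", "ap-southeast-1"]:
    # The same API works in every Region; only the endpoint changes.
    ec2 = boto3.client("ec2", region_name=region)
    zones = ec2.describe_availability_zones()["AvailabilityZones"]
    print(f"{region}: {len(zones)} Availability Zones")
```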

Types of Cloud Computing

Figure 1-01: Types of Cloud Computing

Cloud Computing Deployment Models

Figure 1-02: Cloud Computing Deployment Model

Amazon Web Services Cloud Platform

Amazon Web Services (AWS) is a secure cloud services platform, offering computing power, database storage, content delivery, and other functionality on demand to help businesses scale and grow. AWS cloud products and solutions can be used to build sophisticated applications with increased flexibility, scalability, and reliability.

Figure 1-03: AWS Platform

The Cloud Computing Difference

This section compares cloud computing with the traditional environment and reviews why these new and better practices have emerged.

IT Assets Become Programmable Resources

In a traditional environment, it can take days to weeks to set up IT resources such as servers and networking hardware, depending on the complexity of the environment. On AWS, servers, databases, storage, and higher-level application components can be instantiated within seconds. These instances can be used as temporary, disposable resources that meet actual demand, while you pay only for what you use.
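As an illustration of IT assets as programmable resources, the sketch below launches and then disposes of a server using the AWS SDK for Python (boto3). The AMI ID, instance type, and Region are placeholder assumptions, not values from this chapter.

```python
# A hedged sketch of a disposable, programmable server with boto3.
# The AMI ID, instance type, and Region are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Instantiate a server in seconds instead of procuring hardware
# over days or weeks.
response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)

# Treat the resource as temporary and disposable: terminate it once
# the demand it served has passed, and stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```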

Global, Available, and Unlimited Capacity

With the AWS cloud platform, you can deploy your infrastructure into different AWS Regions around the world. Virtually unlimited on-demand capacity is available to enable future expansion of your IT architecture, and the global infrastructure ensures high availability and fault tolerance.

Higher-Level Managed Services

Apart from compute resources in the cloud, AWS also provides higher-level managed services for storage, databases, analytics, application integration, and deployment. These services are instantly available to developers, which reduces dependency on in-house specialized skills.

Security Built-in

In a non-cloud environment, security auditing is typically a periodic, manual process. The AWS cloud provides a wide range of security and encryption features, along with governance capabilities that enable continuous monitoring of your IT resources. Your security policy can be embedded in the design of your infrastructure itself.
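As one hedged example of embedding policy in the infrastructure itself (this specific scenario is an assumption, not taken from the chapter), the sketch below turns on default server-side encryption for an S3 bucket in code, so the rule is enforced continuously rather than checked in periodic audits.

```python
# A hedged sketch of a security policy expressed as code with boto3.
# The bucket name is an illustrative placeholder.
import boto3

s3 = boto3.client("s3")

# Every object written to the bucket will be encrypted (AES-256)
# by default; no manual, periodic audit is needed to enforce this.
s3.put_bucket_encryption(
    Bucket="example-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "AES256"
                }
            }
        ]
    },
)
```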

AWS Cloud Economics

Weighing the financial aspects of a traditional environment against cloud infrastructure is not as simple as comparing hardware, storage, and compute costs. You also have to manage other investments, such as:

  • Capital expenditures
  • Operational expenditures
  • Staffing
  • Opportunity costs
  • Licensing
  • Facilities overhead

Figure 1-04: Typical Data Center Costs

On the other hand, a cloud environment provides scalable and powerful computing solutions, reliable storage, and database technologies at lower cost, with reduced complexity and increased flexibility. When you decouple from the data center, you are able to:

  • Decrease your TCO: Eliminate the costs related to building and maintaining data centers or co-location deployment. Pay for only the resources that you have consumed.
  • Reduce complexity: Reduce the need to manage infrastructure, investigate licensing issues, or divert resources.
  • Adjust capacity on the fly: Scale resources up and down depending on the business needs using secure, reliable, and broadly accessible infrastructure.
  • Reduce time to market: Design and develop new IT projects faster.
  • Deploy quickly, even worldwide: Deploy applications across multiple geographic areas.
  • Increase efficiencies: Use automation to reduce or eliminate IT management activities that waste time and resources.
  • Innovate more: Try out new ideas as the cloud makes it faster and cheaper to deploy, test, and launch new products and services.
  • Spend your resources strategically: Free your IT staff from handling operations and maintenance by switching to a DevOps model.
  • Enhance security: Cloud providers have teams of people who focus on security, offering best practices to ensure you are compliant.

Figure 1-05: Cost Comparisons of Data Centers and AWS

AWS Virtuous Cycle

The AWS pricing philosophy is driven by a virtuous cycle: lower prices mean more customers take advantage of the platform, which in turn drives costs down further.

Figure 1-06: AWS Virtuous Cycle

AWS Cloud Architecture Design Principles

A good architectural design should take advantage of the inherent strengths of the AWS cloud computing platform. Below are the key design principles to take into consideration when designing.

Scalability

Systems need to be designed so that they are capable of growing and expanding over time with no drop in performance. The architecture should take advantage of the virtually unlimited on-demand capacity of the cloud platform and scale in a manner where adding extra resources increases its ability to serve additional load.

There are generally two ways to scale an IT architecture: vertically and horizontally.

Scale Vertically – Increase specifications such as RAM, CPU, I/O, or networking capabilities of an individual resource.

Scale Horizontally – Increase the number of resources, such as adding more hard drives to a storage array or adding more servers to support an application.
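The sketch below contrasts the two approaches using the AWS SDK for Python (boto3); the instance ID, instance type, and Auto Scaling group name are illustrative assumptions.

```python
# A hedged sketch of vertical vs. horizontal scaling with boto3.
# The instance ID, instance type, and group name are placeholders.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Scale vertically: resize one (stopped) instance to a larger
# instance type, i.e., more CPU and RAM on a single resource.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",     # placeholder instance ID
    InstanceType={"Value": "m5.xlarge"},  # bigger box, same count
)

# Scale horizontally: keep the instance size, but run more copies
# behind an Auto Scaling group.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="example-web-asg",  # placeholder group name
    DesiredCapacity=4,                       # more boxes, same size
)
```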

  • Stateless Applications – An application that needs no knowledge of previous interactions and stores no session information. Given the same input, it provides the same response to any end user. A stateless application can scale horizontally, since any request can be serviced by any available compute resource (e.g., Amazon EC2 instances, AWS Lambda functions); see the handler sketch after this list. With no session data to be shared, you can simply add more compute resources as needed and terminate them when the capacity is no longer required.
  • Stateless Components – Most applications need to maintain some kind of state information; for example, web applications need to track previous activity, such as whether a user is signed in. A portion of these architectures can be made stateless by storing state in the client’s browser using cookies. This keeps the servers relatively stateless, because the sessions are stored in the user’s browser.
  • Stateful Components – Some layers of the architecture are stateful, such as the database, so you need databases that can scale. Amazon RDS can scale up, and by adding read replicas it can also scale out. Amazon DynamoDB, by contrast, scales automatically and is often the better choice, since it does not require the continual addition of read replicas.
  • Distributed Processing – Processing very large amounts of data requires a distributed approach, in which the big data set is broken into pieces and compute instances work on the pieces separately, in parallel. On AWS, the core service that handles this is Amazon EMR (Elastic MapReduce), which manages a fleet of EC2 instances that work on the fragments of data simultaneously.
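As a hedged sketch of the stateless pattern from the list above (the function name and event fields are assumptions for illustration), the handler below keeps no state between invocations, so any available compute resource can service any request:

```python
# A minimal sketch of a stateless handler in the style of an AWS
# Lambda function. The event shape and field names are assumptions.
import json

def handler(event, context):
    # Everything needed to serve the request arrives in the request
    # itself; no session data is read from or written to the server.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Given the same input, the handler always returns the same response.
print(handler({"name": "cloud"}, None))
```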