Chapter 1: Introduction to Cloud Computing

What is Cloud Computing?

Cloud Computing is the practice of using a network of remote servers hosted on the internet to store, manage and process data rather than using a local server or personal computer. It is the on-demand delivery of computing resources through a cloud service platform with pay-as-you-go pricing.

Advantages of Cloud Computing

  1. Trade capital expense for variable expense

Pay only for the resources you consume, instead of investing heavily in data centers and servers before you know your requirements.

  2. Benefit from massive economies of scale

Achieve lower variable costs than you can get on your own. Cloud computing providers, such as Amazon, build their own data centers and achieve higher economies of scale, which results in lower prices.

  3. Stop guessing capacity

Access as many or as few resources as you need instead of buying too much or too little by guessing your requirements. Scale up and down as required with no long-term contracts.

  4. Increase speed and agility

New IT resources are readily available, so you can scale virtually without limit as demand grows. The result is a dramatic increase in agility for organizations.

  5. Stop spending money on running and maintaining data centers

Eliminate the traditional need to spend money on running and maintaining data centers; the cloud provider manages them for you.

  6. Go global in minutes

Provide lower latency at minimal cost by easily deploying your application in multiple regions around the world.

Types of Cloud Computing

Figure 1-01: Types of Cloud Computing

Cloud Computing Deployment Models

Figure 1-02: Cloud Computing Deployment Model

Amazon Web Services Cloud Platform

Amazon Web Services (AWS) is a secure cloud services platform, offering computing power, database storage, content delivery, and other functionality on demand to help businesses scale and grow. AWS cloud products and solutions can be used to build sophisticated applications with increased flexibility, scalability, and reliability.

Figure 1-03: AWS Platform

The Cloud Computing Difference

This section compares cloud computing with the traditional environment and explains why these new and better practices have emerged.

IT Assets Become Programmable Resources: In a traditional environment, it can take days to weeks to set up IT resources such as servers and networking hardware, depending on the complexity of the environment. On AWS, servers, databases, storage, and higher-level application components can be instantiated within seconds. These instances can be treated as temporary, disposable resources that meet actual demand, while you pay only for what you use.

Global, Available, and Unlimited Capacity: With the AWS cloud platform, you can deploy your infrastructure into different AWS Regions around the world. Virtually unlimited on-demand capacity is available to enable future expansion of your IT architecture, and the global infrastructure ensures high availability and fault tolerance.

Higher-Level Managed Services: Apart from compute resources in the cloud, AWS also provides higher-level managed services such as storage, database, analytics, application, and deployment services. These services are instantly available to developers, reducing dependency on in-house specialized skills.

Security Built-in: In a non-cloud environment, security auditing would be a periodic and manual process. The AWS cloud provides plenty of security and encryption features with governance capabilities that enable continuous monitoring of your IT resources. Your security policy can be embedded in the design of your infrastructure.

AWS Cloud Economics

Weighing the financial aspects of a traditional environment against a cloud infrastructure is not as simple as comparing hardware, storage, and compute costs. You must also weigh other investments, such as:

  • Capital expenditures
  • Operational expenditures
  • Staffing
  • Opportunity costs
  • Licensing
  • Facilities overhead

Figure 1-04: Typical Data Center Costs

On the other hand, a cloud environment provides scalable and powerful computing solutions, reliable storage, and database technologies at lower costs with reduced complexity, and increased flexibility. When you decouple from the data center, you are able to:

  • Decrease your TCO: Eliminate the costs of building and maintaining data centers or co-location deployments. Pay only for the resources you consume.
  • Reduce complexity: Reduce the need to manage infrastructure, investigate licensing issues, or divert resources.
  • Adjust capacity on the fly: Scale resources up and down depending on the business needs using secure, reliable, and broadly accessible infrastructure.
  • Reduce time to market: Design and develop new IT projects faster.
  • Deploy quickly, even worldwide: Deploy applications across multiple geographic areas.
  • Increase efficiencies: Use automation to reduce or eliminate IT management activities that waste time and resources.
  • Innovate more: Try out new ideas as the cloud makes it faster and cheaper to deploy, test, and launch new products and services.
  • Spend your resources strategically: Free your IT staff from handling operations and maintenance by switching to a DevOps model.
  • Enhance security: Cloud providers have teams of people who focus on security, offering best practices to ensure you are compliant.

Figure 1-05: Cost Comparisons of Data Centers and AWS

AWS Virtuous Cycle

The AWS pricing philosophy is driven by a virtuous cycle: lower prices mean more customers take advantage of the platform, which in turn drives costs, and therefore prices, down further.

Figure 1-06:  AWS Virtuous Cycle

AWS Cloud Architecture Design Principles

A good architectural design should take advantage of the inherent strengths of the AWS cloud computing platform. Below are the key design principles to take into consideration when designing a system.

Scalability

Systems need to be designed in such a way that they are capable of growing and expanding over time with no drop in performance. The architecture needs to be able to take advantage of the virtually unlimited on-demand capacity of the cloud platform and scale in a manner where adding extra resources results in an increase in the ability to serve additional load.

There are generally two ways to scale an IT architecture: vertically and horizontally.

Scale Vertically– Increase the specifications of an individual resource, such as its RAM, CPU, I/O, or networking capabilities.

Scale Horizontally– Increase the number of resources such as adding more hard drives to a storage array or adding more servers to support an application.

  • Stateless Applications– An application that needs no knowledge of previous interactions and stores no sessions. It can be an application that when given the same input, provides the same response to an end user. A stateless application can scale horizontally since any request can be serviced by any of the available compute resources (e.g., Amazon EC2 instances, AWS Lambda functions). With no session data to be shared, you can simply add more compute resources as needed and terminate them when the capacity is no longer required.
  • Stateless Components– Most applications need to maintain some kind of state information, for example, web applications need to track previous activities such as whether a user is signed in or not, etc. A portion of these architectures can be made stateless by storing state in the client’s browser using cookies. This can make servers relatively stateless because the sessions are stored in the user’s browser.
  • Stateful Components– Some layers of the architecture are stateful, such as the database, and you need databases that can scale. Amazon RDS can scale up, and it can also scale out, although that requires the ongoing addition of Read Replicas; Amazon DynamoDB scales automatically and is often the better choice. (A sketch of externalizing session state to DynamoDB follows this list.)
  • Distributed Processing– Processing very large datasets requires a distributed approach: the data is broken into pieces, and compute instances work on the pieces separately, in parallel. On AWS, the core service that handles this is Amazon EMR (Elastic MapReduce), which manages a fleet of EC2 instances that work on fragments of the data simultaneously.
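To make the stateless-component idea concrete, the following sketch externalizes session state to an Amazon DynamoDB table so that any web server behind a load balancer can serve any request. This is a minimal sketch: the table name and attribute names are illustrative assumptions, and the table is assumed to already exist.

    # Sketch (Python/boto3): keep session state in DynamoDB instead of on the
    # web server, so the compute tier stays stateless and scales horizontally.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    sessions = dynamodb.Table("user-sessions")  # assumed pre-created table

    def save_session(session_id, data):
        # Persist the session outside the server that handled the request.
        sessions.put_item(Item={"session_id": session_id, **data})

    def load_session(session_id):
        # Any instance can fetch the same session on the next request.
        return sessions.get_item(Key={"session_id": session_id}).get("Item", {})

Because no instance holds session data of its own, instances can be added or terminated freely as load changes.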

Figure 1-07: Vertical vs. Horizontal Scalability

Disposable Resources Instead of Fixed Servers

In a cloud computing environment, you can treat your servers and other components as temporary, disposable resources instead of fixed components. Launch as many as you need, and use them only for as long as you need them. If a server goes down or requires a configuration update, it can be replaced with a newly launched server carrying the latest configuration instead of updating the old one.

Instantiating Compute Resources– When deploying resources for a new environment or increasing the capacity of an existing system, it is important to make configuration and coding an automated, repeatable process, avoiding human error and long lead times.

  • Bootstrapping– Executing a bootstrapping action after launching a resource with its default configuration enables you to reuse the same scripts without modification (see the sketch after this list).
  • Golden Image– Certain resource types, such as Amazon EC2 instances, Amazon RDS DB instances, and Amazon Elastic Block Store (Amazon EBS) volumes, can be launched from a golden image, which is a snapshot of a particular state of that resource. This is used in Auto Scaling, for example: by creating an Amazon Machine Image (AMI) of a customized EC2 instance, you can launch as many instances as needed with the same customized configuration.
  • Hybrid– Using a combination of both approaches, where some parts of the configuration are captured in a golden image, while others are configured dynamically through a bootstrapping action. AWS Elastic Beanstalk follows the hybrid model.
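As a minimal illustration of the bootstrapping approach, the sketch below launches an EC2 instance from a base AMI and configures it at first boot with a user-data script. The AMI ID and instance type are placeholders, not recommendations.

    # Sketch (Python/boto3): bootstrap an EC2 instance at launch via user data.
    import boto3

    ec2 = boto3.client("ec2")

    user_data = """#!/bin/bash
    yum update -y
    yum install -y httpd
    systemctl enable --now httpd
    """

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder base AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,               # executed once, at first boot
    )

Because configuration lives in the script rather than on a hand-built server, the same launch can be repeated identically whenever capacity is needed.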

Infrastructure as Code– AWS assets are programmable, allowing you to treat your infrastructure as code. This lets you repeatedly deploy the infrastructure across multiple regions without provisioning everything manually. AWS CloudFormation and AWS Elastic Beanstalk are two such provisioning services.
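A minimal infrastructure-as-code sketch, assuming all we want is an S3 bucket: the template is defined in code and handed to AWS CloudFormation, so the same stack can be recreated in any region. The stack name and template contents are illustrative only.

    # Sketch (Python/boto3): provision infrastructure from a template in code.
    import json
    import boto3

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AssetsBucket": {"Type": "AWS::S3::Bucket"},
        },
    }

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(
        StackName="demo-assets-stack",      # hypothetical stack name
        TemplateBody=json.dumps(template),  # the infrastructure, as code
    )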

Automation

One of the best design practices is to automate wherever possible to improve the system's stability and the organization's efficiency, using the various AWS automation technologies. These include AWS Elastic Beanstalk, Amazon EC2 auto recovery, Auto Scaling, Amazon CloudWatch alarms, Amazon CloudWatch Events, AWS OpsWorks lifecycle events, and AWS Lambda scheduled events.

Loose Coupling

IT systems should ideally be designed with reduced interdependencies. As applications become more complex, break them down into smaller, loosely coupled components so that the failure of any one component does not cascade to other parts of the application. The more loosely coupled a system is, the more resilient it is.

Well-Defined Interfaces– Components can interact with each other through technology-agnostic interfaces such as RESTful APIs, reducing interdependency. This hides the technical implementation details, allowing teams to modify the underlying implementation without affecting other components. The Amazon API Gateway service makes it easy to create, publish, maintain, and monitor APIs handling thousands of concurrent calls, taking care of all the tasks involved in accepting and processing them, including traffic management, authorization, and access control.

Service Discovery– Applications deployed as a set of smaller services need a way to find and interact with each other, since the services may be running across multiple resources. Implementing service discovery allows the smaller services to be consumed irrespective of their network topology details, preserving loose coupling. On the AWS platform, service discovery can be achieved through stable DNS endpoints, such as those provided by the Elastic Load Balancing service; similarly, because an Amazon RDS database is reached through a DNS endpoint, if your RDS instance goes down and you have Multi-AZ enabled on that database, requests are redirected to the standby copy of the database in the other Availability Zone.

Asynchronous Integration– Asynchronous integration is a form of loose coupling used when an immediate response between services is not needed and an acknowledgment of the request is sufficient. One component generates events while the other consumes them; the two interact through an intermediate durable storage layer, not through point-to-point interaction. An example is an Amazon SQS queue: if a process fails while reading messages from the queue, messages can still be added to the queue and processed once the system recovers.
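The following sketch shows this pattern with Amazon SQS; the queue name and the process function are assumptions. The producer never waits for the consumer, and a failed consumer can retry because the message remains on the durable queue until it is deleted.

    # Sketch (Python/boto3): asynchronous integration through an SQS queue.
    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]

    # Producer: emit the event and move on; no immediate response is needed.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

    # Consumer: poll for work; a message reappears after its visibility
    # timeout if processing fails before delete_message is called.
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for message in messages.get("Messages", []):
        process(message["Body"])  # hypothetical business logic
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=message["ReceiptHandle"])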

Figure 1-08: Tight and Loose Coupling

Graceful Failure– Increase loose coupling by building applications that handle component failure gracefully. In the event of a component failure, this reduces the impact on end users and increases the ability to make progress through offline procedures.

Services, Not Servers

Developing large-scale applications requires a variety of underlying technology components. Best design practice would be to leverage the broad set of computing, storage, database, analytics, application, and deployment services of AWS to increase developer productivity and operational efficiency.

Managed Services– Always rely on services, not servers. Developers can power their applications with AWS managed services that include databases, machine learning, analytics, queuing, search, e-mail, notifications, and many more. For example, Amazon S3 can be used to store data without having to think about capacity, hard disk configuration, replication, and so on. Amazon S3 also provides a highly available static web hosting solution that can scale automatically to meet traffic demand, as the sketch below shows.
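A small sketch of the S3 static hosting capability mentioned above; the bucket name is hypothetical, and the bucket is assumed to exist with a policy permitting public reads.

    # Sketch (Python/boto3): serve a static page directly from Amazon S3.
    import boto3

    s3 = boto3.client("s3")
    bucket = "example-static-site"  # hypothetical bucket name

    # Upload the page and declare the bucket a website in two API calls.
    s3.put_object(Bucket=bucket, Key="index.html",
                  Body=b"<h1>Hello from S3</h1>", ContentType="text/html")
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
    )

No web server is provisioned or managed; S3 scales the site with traffic on its own.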

Exam tip: Amazon S3 is great for static website hosting.

Serverless Architectures– Serverless architectures reduce the operational complexity of running applications. Both event-driven and synchronous services can be built without managing any server infrastructure. For example, your code can be uploaded to the AWS Lambda compute service, which runs the code on your behalf, and scalable synchronous APIs powered by AWS Lambda can be developed using Amazon API Gateway. Finally, by combining this with Amazon S3 for serving static content, a complete web application can be produced.
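A minimal Lambda handler sketch for an API Gateway proxy integration is shown below; the event shape follows the proxy integration contract, and the greeting logic is purely illustrative.

    # Sketch (Python): an AWS Lambda handler; no server to provision or patch.
    import json

    def lambda_handler(event, context):
        # API Gateway (proxy integration) passes query parameters in the event.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }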

Exam tip: For event-driven managed service/server-less architecture, use AWS Lambda. If you want to customize to your own needs, then Amazon EC2 offers flexibility and full control.

Databases

AWS managed database services remove constraints such as licensing costs and the burden of supporting diverse database engines. While designing a system architecture, keep these different kinds of database technologies in mind:

Relational Databases

  • Often called RDBMS or SQL databases.
  • Consist of normalized data stored in well-defined tabular structures known as tables, which are made up of rows and columns.
  • Provide a powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables quickly and efficiently.
  • Amazon Relational Database Service (Amazon RDS) and Amazon Aurora.
  • Scalability: Can scale vertically by upgrading to a larger Amazon RDS DB instance or adding more and faster storage. For read-heavy applications, use Amazon Aurora to scale horizontally by creating one or more Read Replicas (a Read Replica sketch follows this list).
  • High Availability: The Amazon RDS Multi-AZ deployment feature creates a synchronously replicated standby instance in a different Availability Zone (AZ). In case of failure of the primary node, Amazon RDS performs an automatic failover to the standby without manual administrative intervention.
  • Anti-Patterns: If your application does not need joins or complex transactions, consider a NoSQL database instead. Store large binary files (audio, video, and image) in Amazon S3 and only hold the metadata for the files in the database.
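As a sketch of the horizontal-scaling point above, a Read Replica can be added to an existing Amazon RDS instance with a single API call; both identifiers below are placeholders.

    # Sketch (Python/boto3): scale a read-heavy workload with a Read Replica.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica-1",      # new replica (placeholder)
        SourceDBInstanceIdentifier="app-db-primary",  # existing primary (placeholder)
    )

Read traffic can then be directed at the replica's endpoint, while writes continue to go to the primary.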

Non-Relational Databases

  • Often called NoSQL databases
  • Trade off the query and transaction capabilities of relational databases for a more flexible data model
  • Utilize a variety of data models, including graphs, key-value pairs, and JSON documents
  • Amazon DynamoDB
  • Scalability: Automatically scales horizontally through data partitioning and replication (a sketch follows this list)
  • High Availability: Synchronously replicates data across three facilities in an AWS region to provide fault tolerance in case of a server failure or Availability Zone disruption
  • Anti-Patterns: If your schema cannot be de-normalized and requires joins or complex transactions, consider a relational database instead. Store large binary files (audio, video, and image) in Amazon S3 and only hold the metadata for the files in the database
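To illustrate the scalability point above, the sketch below creates a DynamoDB table in on-demand mode, leaving partitioning, replication, and capacity to the service; the table and key names are illustrative.

    # Sketch (Python/boto3): a DynamoDB table with no capacity guessing.
    import boto3

    dynamodb = boto3.client("dynamodb")
    dynamodb.create_table(
        TableName="products",
        KeySchema=[{"AttributeName": "product_id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "product_id",
                               "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",  # on-demand: no throughput to provision
    )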

Exam tip: In any given scenario, if you have to run complex transactions or use JOINs, you should use Amazon Aurora, Amazon RDS (e.g., MySQL), or another relational database. If not, use a non-relational database like Amazon DynamoDB.

Data Warehouse

  • A special type of relational database optimized for analysis and reporting of large amounts of data.
  • Used to combine transactional data from disparate sources, making them available for analysis and decision-making.
  • Running complex transactions and queries on the production database creates massive overhead and requires immense processing power; hence the need for data warehousing arises.
  • Amazon Redshift
  • Scalability: Amazon Redshift uses a combination of massively parallel processing (MPP), columnar data storage, and targeted data compression encoding to achieve efficient storage and optimum query performance. Performance can be increased by increasing the number of nodes in the data warehouse cluster.
  • High Availability: When production workloads are deployed in multi-node clusters, data written to a node is automatically replicated to other nodes within the cluster, and data is continuously backed up to Amazon S3. Amazon Redshift automatically re-replicates data from failed drives and replaces nodes when necessary.
  • Anti-Patterns: It is not meant to be used for online transaction processing (OLTP) functions, as Amazon Redshift is a SQL-based relational database management system (RDBMS). For high concurrency workload or a production database, consider using Amazon RDS or Amazon DynamoDB instead.

Search

  • A search service is used to index and search both structured data and free text.
  • Sophisticated search functionality typically outgrows the capabilities of relational or NoSQL databases; therefore, a dedicated search service is required.
  • AWS provides two services: Amazon CloudSearch and Amazon Elasticsearch Service (Amazon ES).
  • Amazon CloudSearch is a managed search service that requires little configuration and scales automatically, whereas Amazon ES offers an open-source API and more control over the configuration details.
  • Scalability: Both use data partitioning and replication to scale horizontally.
  • High Availability: Both services store data redundantly across Availability Zones.

Removing Single Points of Failure

A system needs to be highly available to withstand the failure of individual or multiple components (e.g., hard disks, servers, network links, etc.). You should build resiliency across multiple services as well as multiple Availability Zones to automate recovery and reduce disruption at every layer of your architecture.

Introducing Redundancy – Have multiple resources for the same task. Redundancy can be implemented in either standby or active mode. In standby mode, when a resource fails, functionality is recovered on a secondary resource. In active mode, requests are distributed across multiple redundant compute resources, so when one of them fails, the rest absorb a larger share of the workload.

Detect Failure– Detection of and reaction to failure should both be automated as much as possible. Configure health checks and mask failure by routing traffic to healthy endpoints using services like ELB and Amazon Route 53. Auto Scaling, the Amazon EC2 auto recovery feature, or services such as AWS OpsWorks and AWS Elastic Beanstalk can be configured to replace unhealthy nodes automatically (see the sketch below).
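As a small sketch of automated failure detection, the call below tightens the health check on a hypothetical Application Load Balancer target group so that unhealthy nodes are taken out of rotation quickly; the ARN and path are assumptions.

    # Sketch (Python/boto3): tune how quickly ELB detects and masks failure.
    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.modify_target_group(
        TargetGroupArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
        HealthCheckPath="/health",      # endpoint the load balancer probes
        HealthCheckIntervalSeconds=15,  # probe frequency
        UnhealthyThresholdCount=2,      # failed probes before traffic diverts
    )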

Durable Data Storage– Durable data storage is vital for data availability and integrity. Data replication can be achieved by introducing redundant copies of data. The three modes of replication that can be used are: asynchronous replication, synchronous replication, and Quorum-based replication.

  • Synchronous replication only acknowledges a transaction after it has been durably stored in both the primary location and its replicas.
  • Asynchronous replication decouples the primary node from its replicas at the expense of introducing replication lag.
  • Quorum-based replication combines synchronous and asynchronous replication to overcome the challenges of large-scale distributed database systems.

Automated Multi-Data Center Resilience– This is achieved by using the multiple Availability Zones offered by the AWS global infrastructure. Availability Zones are designed to be isolated from failures in the other Availability Zones. For example, a fleet of application servers distributed across multiple Availability Zones can be attached to the Elastic Load Balancing service (ELB); when health checks of the EC2 instances in a particular Availability Zone fail, ELB stops sending traffic to those nodes. Amazon RDS provides automatic failover support for DB instances using Multi-AZ deployments, while Amazon S3 and Amazon DynamoDB store data redundantly across multiple facilities.

Fault Isolation and Traditional Horizontal Scaling– Fault isolation can be attained through sharding, a method of grouping instances into groups called shards. Instead of spreading traffic from all customers across every node, each customer is assigned to a specific shard. The shuffle sharding technique additionally allows a client to try every endpoint in a set of shared resources until one succeeds.

Optimize for Cost

Reduce capital expenses by benefiting from the AWS economies of scale. Main principles of optimizing for cost include:

Right-Sizing– AWS offers a broad set of options for instance types. Selecting the right configurations, resource types and storage solutions that suit your workload requirements can reduce cost.

Elasticity– Implement Auto Scaling to scale horizontally up and down automatically, depending on your needs, to reduce cost. Automate turning off non-production workloads when they are not in use. Use AWS managed services wherever possible; they take capacity decisions off your hands as demand changes.

Take Advantage of the Variety of Purchasing Options– AWS provides flexible purchasing options with no long-term commitments, which can reduce the cost of your instances. Two ways to pay for Amazon EC2 instances are:

  • Reserved Capacity– Reserved Instances enable you to get a significantly discounted hourly rate, as opposed to On-Demand Instance pricing, by reserving computing capacity. Ideal for applications with predictable capacity requirements.
  • Spot Instances – Available at discounted prices compared to On-Demand pricing. Ideal for workloads that have flexible start and end times. Spot Instances allow you to bid on spare computing capacity: when your bid exceeds the current Spot market price, your instance is launched, and if the Spot market price later rises above your bid price, your instance is terminated automatically (see the sketch below).
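A sketch of the bidding model described above, using placeholders for the price, AMI, and instance type.

    # Sketch (Python/boto3): request Spot capacity with a maximum hourly price.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.request_spot_instances(
        SpotPrice="0.05",  # the most you are willing to pay per hour (placeholder)
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            "InstanceType": "t3.micro",
        },
    )

If the Spot price stays below this maximum, the instance runs; if it rises above it, AWS reclaims the capacity, which is why Spot suits interruption-tolerant workloads.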