
Amazon Web Services (AWS) is an evolving cloud computing platform that provides on-demand services to individuals, companies, and governments. It’s a secure cloud services platform, offering database storage, computing power, content delivery, and various other functionalities to help businesses scale and grow. AWS uses a pay-as-you-go pricing model with a free-tier option that gives you hands-on experience with a range of AWS services at no charge.

IaaS, PaaS, and SaaS are the three main service models of cloud computing; they describe how tasks and responsibilities are divided between Amazon AWS and its customers.

Types of Cloud Computing

Cloud computing services can be delivered through four types of deployment model:

  1. Public Cloud
  2. Community Cloud
  3. Private Cloud
  4. Hybrid Cloud

Public Cloud

The public cloud consists of computing services provided over the public internet by third-party providers, making them accessible to anyone who wants to use or purchase them. Services can be free or on-demand, with customers paying per usage for the cycles, storage, or bandwidth they consume.

Examples: Sun Cloud, AWS, Microsoft Azure

Community Cloud

A community cloud’s infrastructure can only be used by a particular group of customers from organizations with shared concerns. It may be owned, managed, and operated by one or more of the member organizations, a third party, or some combination of them.

Private Cloud

A private cloud is a cloud computing system in which IT services are provisioned over private IT infrastructure for the dedicated use of a single organization. The cloud infrastructure is operated solely for that organization and can be run on-site or off-site by the organization itself or a third party. The term private cloud is often used interchangeably with virtual private cloud (VPC). Technically speaking, a VPC is a private cloud implemented on a third-party cloud provider’s infrastructure, while an internal cloud runs on the organization’s own infrastructure.

Hybrid Cloud

A hybrid cloud is a computing environment that combines a public and a private cloud, allowing them to share data and applications. Organizations gain the flexibility and computing capacity of the public cloud for basic, non-sensitive computing tasks, while keeping business-critical applications and data safely behind a corporate firewall.

The Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in La Cañada Flintridge, California, United States, with a Pasadena mailing address.

Founded in the 1930s, JPL is currently owned by NASA and managed by the nearby California Institute of Technology (Caltech) for NASA. The laboratory’s primary function is the construction and operation of planetary robotic spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

NASA’s Jet Propulsion Laboratory (JPL) has developed the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot. As a multi-purpose vehicle, the ATHLETE has six limbs, each attached to a wheel, enabling it to travel across various types of terrain, ranging from smooth surfaces to rolling hills to ruggedly steep ground. The wheels can also be locked, allowing the limbs to serve as general-purpose legs with the wheels acting as feet. The ATHLETE robot can also be used for loading, unloading, and transporting cargo over long distances.

❝AWS resources completed the work in less than two hours on a cluster of 30 Cluster Compute Instances. This demonstrates a significant improvement over previous implementations.❞

Khawaja Shams
Senior Solution Architect, NASA/JPL

The Challenge

As part of the Desert Research and Technology Studies (D-RATS), NASA/JPL performs annual field tests on the ATHLETE robot in conjunction with robots from other NASA centers. While driving the robots, operators depend on high-resolution satellite images for guidance, positioning, and situational awareness. To streamline the processing of the satellite images, NASA/JPL engineers developed an application that takes advantage of the parallel nature of the workflow. NASA/JPL relies on Amazon Web Services (AWS) for this effort.

Why Amazon Web Services?

The application is built on Polyphony, a modular workflow orchestration framework designed to streamline the process of leveraging hundreds of nodes on Amazon Elastic Compute Cloud (Amazon EC2). Because Polyphony can also accommodate excess capacity on local machines and spare resources at supercomputing centers, it meshes well with the AWS Cloud. Most important, Polyphony enables all of these resources to work together toward a common goal. By using Amazon Simple Queue Service (Amazon SQS), NASA/JPL developers can deploy massive computations on Amazon EC2 by writing as little as a single class.
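The queue-worker pattern described above can be sketched in miniature, with Python’s standard-library queue standing in for Amazon SQS and local threads standing in for EC2 nodes. This is an illustration of the pattern only, not Polyphony’s actual code; `process_tile` and the task values are hypothetical.

```python
import queue
import threading

def process_tile(tile_id):
    # Placeholder for real work, e.g. reprojecting one satellite-image tile.
    return tile_id * tile_id

def worker(tasks, results):
    # Each worker repeatedly pulls a task from the shared queue and
    # processes it -- the same loop an EC2 node runs against an SQS queue.
    while True:
        try:
            tile_id = tasks.get_nowait()
        except queue.Empty:
            return
        results.put((tile_id, process_tile(tile_id)))
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
for tile_id in range(100):
    tasks.put(tile_id)

threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

processed = dict(results.queue)
print(len(processed))  # 100 tiles processed
```

Because every task is independent, adding more workers (or more EC2 nodes behind a real SQS queue) scales the throughput with essentially no coordination code.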

NASA/JPL had previously used Polyphony to validate the utility of cloud computing for processing hundreds of thousands of small images in an Amazon EC2 environment. NASA/JPL has since adopted the cluster compute environment for processing very large images, and recently processed a 3.2-gigapixel image to support ATHLETE robot operations in its 2010 D-RATS field test.
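A giga-pixel image parallelizes well precisely because it decomposes into many independent tiles. The back-of-envelope sketch below shows roughly how many work items a 3.2-gigapixel mosaic yields; the square aspect ratio and 1024-pixel tile size are illustrative assumptions, not NASA/JPL’s actual tiling scheme.

```python
import math

def tile_grid(width_px, height_px, tile_px):
    """Number of tiles along each axis when cutting the image into
    tile_px x tile_px squares (edge tiles may be partial)."""
    cols = math.ceil(width_px / tile_px)
    rows = math.ceil(height_px / tile_px)
    return cols, rows

# Assume the 3.2-gigapixel mosaic is roughly square: ~56,568 px per side.
side = math.isqrt(3_200_000_000)
cols, rows = tile_grid(side, side, 1024)
print(cols * rows)  # independent work items, one per tile
```

Thousands of independent tiles is exactly the shape of workload that a queue of EC2 workers chews through in parallel.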

The Benefits

In addition to its support for the ATHLETE robot, Polyphony has been delivered to the Mars Science Laboratory to serve as one of the primary data processing and delivery pipelines that process data downloaded from Mars. Shams explains that the application “allowed us to process nearly 200,000 Cassini images within a few hours under $200 on AWS.” Describing the lack of elasticity available internally before the switch to AWS, Shams says that “we were only able to use a single machine locally and spent more than 15 days on the same task.” The efficiency and cost savings offered by AWS have proven invaluable.

Thank you!