LLMpedia: The first transparent, open encyclopedia generated by LLMs

Amazon Simple Storage Service

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Amazon Web Services (Hop 4)
Expansion Funnel: Raw 59 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 59
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
Amazon Simple Storage Service
Name: Amazon Simple Storage Service
Developer: Amazon.com
Released: March 14, 2006
Genre: Cloud storage
License: Proprietary

Amazon Simple Storage Service (Amazon S3) is an object storage service offered by Amazon Web Services that provides scalability, data availability, security, and performance. The service allows customers to store and protect any amount of data for a range of use cases, from data lakes and websites to mobile applications and backup systems. It is designed to deliver 99.999999999% ("eleven nines") durability and stores data for millions of applications used by companies around the world.

Overview

Launched in March 2006, Amazon S3 became a foundational service for the modern cloud computing industry, enabling developers to build scalable applications without managing physical hardware. The service's design principles of simplicity, robustness, and scalability have influenced numerous other cloud storage platforms and architectures. It integrates deeply with the broader Amazon Web Services ecosystem, including Amazon EC2, AWS Lambda, and Amazon CloudFront, forming the backbone for countless internet-scale applications. Its global infrastructure supports major enterprises like Netflix, Airbnb, and NASA, handling exabytes of data across multiple geographic regions.

Features

Core capabilities include virtually unlimited storage, with individual objects up to 5 terabytes in size. It offers multiple storage classes such as S3 Standard, S3 Intelligent-Tiering, and S3 Glacier to optimize costs based on access patterns. Advanced features include versioning for object recovery, cross-region replication for disaster recovery, and detailed access control lists for security management. The service also provides robust event notifications that can trigger workflows in other services like AWS Step Functions or send alerts via Amazon SNS.
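The versioning behavior described above can be illustrated with a minimal in-memory sketch. This is not the S3 API itself; the `VersionedBucket` class, its method names, and the version-ID format are all illustrative assumptions, showing only the core idea that each write to a key creates a new version and the latest version is returned by default.

```python
import itertools

class VersionedBucket:
    """Minimal in-memory sketch of S3-style object versioning (illustrative only)."""

    def __init__(self):
        self._objects = {}            # key -> list of (version_id, data), oldest first
        self._ids = itertools.count(1)

    def put_object(self, key, data):
        # Every put creates a new version rather than overwriting the old data.
        version_id = f"v{next(self._ids)}"
        self._objects.setdefault(key, []).append((version_id, data))
        return version_id

    def get_object(self, key, version_id=None):
        versions = self._objects[key]
        if version_id is None:
            return versions[-1][1]    # no version requested: latest wins
        return dict(versions)[version_id]

bucket = VersionedBucket()
v1 = bucket.put_object("report.csv", b"draft")
v2 = bucket.put_object("report.csv", b"final")
assert bucket.get_object("report.csv") == b"final"                  # latest
assert bucket.get_object("report.csv", version_id=v1) == b"draft"   # recovered
```

This is why versioning enables "object recovery": an accidental overwrite or delete marker does not destroy earlier versions, which remain addressable by version ID.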

Architecture

The fundamental architecture is built around a simple key-value store where data is organized into buckets within a specified AWS Region. Each object consists of data, a key, and metadata, and is accessed via a REST API using HTTP or HTTPS protocols. The system employs a flat namespace with unique keys for massive scalability and uses erasure coding and data distribution across multiple facilities to ensure durability. For performance, it offers features like Transfer Acceleration for fast long-distance uploads and S3 Select for retrieving specific data from within objects using simple SQL statements.
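The flat namespace described above means S3 has no real directories; "folders" emerge only when a listing groups keys that share a prefix up to a delimiter. A small sketch of that grouping logic (the function name and return shape are assumptions for illustration, loosely modeled on the `Contents` / `CommonPrefixes` split in S3's ListObjects responses):

```python
def list_objects(keys, prefix="", delimiter="/"):
    """Sketch of S3-style listing over a flat key namespace.

    Keys under `prefix` that contain the delimiter are collapsed into
    "common prefixes" (pseudo-folders); the rest are returned directly.
    """
    contents, common_prefixes = [], set()
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Collapse everything past the first delimiter into one pseudo-folder.
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common_prefixes)

keys = ["index.html", "logs/2024/app.log", "logs/2025/app.log"]
assert list_objects(keys, prefix="logs/") == ([], ["logs/2024/", "logs/2025/"])
assert list_objects(keys) == (["index.html"], ["logs/"])
```

Because the store itself is flat, this grouping is purely a view computed at list time, which is part of what lets the namespace scale without hierarchical bookkeeping.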

Use cases

Primary applications include building highly available and durable data lakes for big data analytics using services like Amazon Athena and Amazon Redshift. It is extensively used for static website hosting, often behind a content delivery network like Amazon CloudFront. Enterprises leverage it for backup, archiving, and disaster recovery, often integrating with solutions from Veeam or Commvault. In media and entertainment, companies such as Disney and BBC use it to store and distribute vast libraries of video content globally.
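For the static website hosting use case, the bucket is given a website configuration naming an index and an error document. A minimal sketch of that configuration payload (the shape mirrors the `WebsiteConfiguration` structure accepted by S3's put-bucket-website operation; the document names are placeholders):

```python
# Hedged sketch: website configuration for an S3 bucket serving a static site.
# "index.html" and "error.html" are placeholder object keys.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},  # served for directory-style requests
    "ErrorDocument": {"Key": "error.html"},     # served on 4xx errors
}

assert website_config["IndexDocument"]["Suffix"] == "index.html"
```

In practice the site is then reached through the bucket's region-specific website endpoint, typically fronted by a CDN such as Amazon CloudFront as the section notes.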

Security and compliance

Security is managed through fine-grained identity and access management policies, bucket policies, and optional object-level encryption using keys from AWS Key Management Service. It supports multiple encryption options including Server-Side Encryption and client-side encryption. The service complies with numerous global standards, including PCI DSS, HIPAA, GDPR, and FedRAMP, making it suitable for regulated industries. Access logging via AWS CloudTrail and network security with Amazon VPC endpoints provide additional layers of control and auditability for sensitive workloads in finance and healthcare.
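Bucket policies like those mentioned above are JSON documents. A common hardening pattern is to deny any request not made over HTTPS using the `aws:SecureTransport` condition key; a sketch follows, with `example-bucket` as a placeholder name:

```python
import json

# Hedged sketch: a bucket policy denying all non-TLS access to a bucket.
# The bucket name and statement ID are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",     # the bucket itself
                "arn:aws:s3:::example-bucket/*",   # every object in it
            ],
            # Matches requests made over plain HTTP (no TLS).
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note the explicit `Deny` with `Principal: "*"`: in IAM evaluation an explicit deny overrides any allow, so this statement blocks plaintext access regardless of other grants.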

Pricing

The pricing model is based on storage volume, the number of requests, and data transfer out of the AWS Cloud. Costs vary by storage class, with lower rates for infrequently accessed data in S3 Standard-IA or archived data in S3 Glacier Deep Archive. There are no minimum fees or setup costs, and detailed billing is provided through the AWS Cost Management console. Many enterprises use tools like AWS Cost Explorer and implement S3 Lifecycle policies to automate data tiering and optimize spending, similar to strategies used with Microsoft Azure Blob Storage.
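The lifecycle-based tiering mentioned above is expressed as a rule set attached to the bucket. A sketch of one such configuration, shaped like the payload S3's lifecycle API accepts, is below; the prefix and the 30/90/365-day thresholds are illustrative assumptions, not recommendations:

```python
# Hedged sketch: a lifecycle configuration that tiers objects under "logs/"
# to cheaper storage classes over time and deletes them after a year.
# Prefix and day thresholds are placeholders chosen for illustration.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-down-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # archive
            ],
            "Expiration": {"Days": 365},                      # delete after a year
        }
    ]
}

# Sanity check: transitions must move forward in time.
days = [t["Days"] for t in lifecycle["Rules"][0]["Transitions"]]
assert days == sorted(days)
```

Applying such a rule once automates the tiering for every matching object, which is why lifecycle policies are a standard cost-optimization lever alongside tools like AWS Cost Explorer.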

History

The service was conceived and developed by a team at Amazon.com under Andy Jassy and launched publicly in March 2006. Its initial release revolutionized web application development by providing on-demand, pay-as-you-go storage. Major milestones include the 2010 introduction of versioning and the 2015 launch of cross-region replication. By 2013 the service already stored over two trillion objects, underscoring its massive scale. Its ongoing evolution continues to shape the strategies of competitors like Google Cloud Storage and IBM Cloud Object Storage.

Category:Amazon Web Services Category:Cloud storage Category:2006 software