Amazon Web Services has marked the 20th anniversary of its flagship cloud storage platform. Launched in 2006, Amazon S3 has evolved from a modest storage system into one of the largest data infrastructures in the world, now storing hundreds of exabytes of data.
The anniversary announcement also revealed new details about the platform’s massive global scale.
When Amazon S3 launched on March 14, 2006, it had relatively small capacity by today’s standards.
According to Sébastien Stormacq, principal developer advocate at Amazon Web Services, the service initially offered about one petabyte of storage.
The system operated across around 400 storage nodes, spread over 15 racks in three data centers, with 15 Gbps of total bandwidth.
Massive global scale today
Two decades later, the platform has grown exponentially.
AWS says Amazon S3 now stores more than 500 trillion objects and processes over 200 million requests per second globally.
The service operates across 123 Availability Zones in 39 AWS regions, storing hundreds of exabytes of data.
To illustrate the scale, AWS said stacking the tens of millions of S3 hard drives would stretch to the International Space Station and almost back to Earth.
One of S3’s biggest achievements
Stormacq highlighted the service’s long-term reliability as one of its most remarkable achievements.
He noted that applications written for S3 in 2006 still work today without any changes. Despite multiple infrastructure upgrades and storage technology changes over the years, AWS has maintained complete API backward compatibility.
That means data uploaded two decades ago is still accessible today even though the underlying systems have been continuously rebuilt and modernized.
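That stability is easiest to see in code. The sketch below, written against the AWS SDK for Python (boto3), uses the same basic PutObject and GetObject operations the service has exposed since launch; the bucket name, object key, and credential setup are placeholders for illustration only.

    import boto3

    # Bucket and key names are placeholders; credentials are assumed to be
    # configured in the environment (e.g. via the standard AWS config files).
    s3 = boto3.client("s3")

    # The same PutObject / GetObject calls that defined the original API
    # still work unchanged today.
    s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"Hello, S3")
    response = s3.get_object(Bucket="example-bucket", Key="hello.txt")
    print(response["Body"].read().decode("utf-8"))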
Industry standard for cloud storage
The S3 API has become widely adopted across the storage industry.
Many vendors now offer S3-compatible storage systems, using the same programming patterns and interface design first introduced by AWS.
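In practice, that compatibility usually means a standard S3 client can simply be pointed at a third-party endpoint and keep issuing the same requests. A minimal sketch, again using boto3, with a hypothetical endpoint URL and placeholder credentials:

    import boto3

    # The endpoint URL and credentials below are hypothetical; any storage
    # system that implements the S3 API could be substituted here.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.example.com",
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    # The request itself is identical to one sent to Amazon S3.
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])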
The platform also helped transform the way companies manage large-scale data. Early backup startups quickly adopted the service to create new storage tiers that were far cheaper than traditional on-premises infrastructure.
Beyond enterprise computing, Amazon S3 has also played a major role in powering modern digital platforms.
Streaming services like Netflix and Spotify have used the service to scale rapidly and deliver content to millions of users worldwide.
Their success encouraged other companies in the video and music industries to adopt similar cloud-based infrastructure.
Security issues and past outages
Despite its success, the service has not been without challenges.
In its early years, S3’s permission settings made it easy to leave resources publicly accessible unless access was explicitly restricted. This led to thousands of misconfigured storage buckets exposing sensitive data online.
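AWS later added account- and bucket-level controls to shut off public access entirely. As a rough illustration, a bucket owner can apply the S3 Block Public Access settings with a boto3 call like the one below; the bucket name is a placeholder.

    import boto3

    s3 = boto3.client("s3")

    # Enable all four Block Public Access settings on a bucket
    # (the bucket name is a placeholder).
    s3.put_public_access_block(
        Bucket="example-bucket",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )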
The service has also experienced outages. One of the most notable incidents occurred in 2017, when issues in AWS’s US-EAST-1 region caused widespread disruptions and temporarily knocked major websites offline.
AWS says the platform is designed for 99.999999999% durability, often referred to as “11 nines.”
Stormacq explained that a network of microservices continuously scans every byte of stored data across the system. These automated auditors detect potential issues and immediately trigger repair processes when signs of data degradation appear.
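To put the figure in perspective, the durability claim can be read as an expected annual loss rate. The back-of-the-envelope calculation below assumes, purely for illustration, a customer storing ten million objects:

    # "11 nines" read as an annual durability figure: the expected fraction of
    # objects lost per year is 1 - 0.99999999999 = 1e-11.
    objects_stored = 10_000_000            # assumption: ten million objects
    annual_loss_probability = 1 - 0.99999999999

    expected_losses_per_year = objects_stored * annual_loss_probability
    print(expected_losses_per_year)        # ~0.0001, i.e. roughly one object every 10,000 years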
AWS has also been modernizing the system’s internal architecture by rewriting performance-critical components in Rust, including parts of the data movement and disk storage layers.
Looking ahead, AWS plans to expand the role of Amazon S3 beyond traditional cloud storage.
The company envisions the platform becoming a universal data foundation for analytics and artificial intelligence workloads.
The goal is to allow organizations to store data once in S3 and use it directly across different systems without needing to move or duplicate it—reducing complexity and infrastructure costs.