
Amazon Redshift continues its price-performance leadership

Data is a strategic asset. Extracting timely value from data requires high-performance systems that can deliver performance at scale while keeping costs low. Amazon Redshift is the most popular cloud data warehouse, used by tens of thousands of customers to analyze exabytes of data every day. We continue to add new capabilities to improve the price-performance ratio for our customers as you bring more data into your Amazon Redshift environments.

This post goes into detail on the analytic workload trends we're seeing from the Amazon Redshift fleet's telemetry data, the new capabilities we have launched to improve Amazon Redshift's price-performance, and the results of the latest benchmarks derived from TPC-DS and TPC-H, which reinforce our leadership.

Data-driven performance optimization

We relentlessly focus on improving Amazon Redshift's price-performance so that you continue to see improvements in your real-world workloads. To this end, the Amazon Redshift team takes a data-driven approach to performance optimization. Werner Vogels discussed our methodology in Amazon Redshift and the art of performance optimization in the cloud, and we have continued to focus our efforts on using performance telemetry from our large customer base to drive the Amazon Redshift performance improvements that matter most to our customers.

At this point, you might ask: why does price-performance matter? One important aspect of a data warehouse is how it scales as your data grows. Will you pay more per TB as you add more data, or will your costs remain consistent and predictable? We work to make sure that Amazon Redshift delivers not only strong performance as your data grows, but also consistent price-performance.

Optimizing high-concurrency, low-latency workloads

One of the trends we have observed is that customers are increasingly building analytics applications that require high concurrency of low-latency queries. In the context of data warehousing, this can mean hundreds or even thousands of users running queries with response time SLAs of under 5 seconds.

A typical scenario is an Amazon Redshift-powered business intelligence dashboard that serves analytics to a very large number of analysts. For example, one of our customers processes foreign exchange rates and delivers insights based on this data to their users through an Amazon Redshift-powered dashboard. These users generate an average of 200 concurrent queries to Amazon Redshift, which can spike to 1,200 concurrent queries at the open and close of the market, with a P90 query SLA of 1.5 seconds. Amazon Redshift is able to meet this requirement, so this customer can meet their business SLAs and provide the best possible service to their users.

A specific metric we track is the proportion of runtime across all clusters that is spent on short-running queries (queries with runtime under 1 second). Over the past 12 months, we have seen a significant increase in short-query workloads in the Amazon Redshift fleet, as shown in the following chart.
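To make this metric concrete, here is a minimal sketch of how such a share could be computed from a set of query runtimes. The function name and the sample runtimes are illustrative assumptions, not fleet telemetry or a Redshift API.

```python
# Hypothetical sketch: estimate the share of total runtime spent on
# short-running queries (< 1 second), the metric described above.
# The runtimes below are made-up sample values.

def short_query_runtime_share(runtimes_s, threshold_s=1.0):
    """Fraction of total query runtime contributed by queries
    that finish in under `threshold_s` seconds."""
    total = sum(runtimes_s)
    short = sum(t for t in runtimes_s if t < threshold_s)
    return short / total if total else 0.0

runtimes = [0.2, 0.4, 0.8, 2.5, 6.1]  # seconds
print(round(short_query_runtime_share(runtimes), 3))  # 1.4 / 10.0 -> 0.14
```

In practice, a metric like this would be aggregated over a much larger window of query history rather than a handful of runtimes.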

As we began to look deeper into how Amazon Redshift ran these kinds of workloads, we discovered several opportunities to optimize performance and provide even better throughput on short queries:

  • We significantly reduced Amazon Redshift's query-planning overhead. Although this overhead isn't large, it can be a significant portion of the runtime of short queries.
  • We improved the performance of several core components for situations where many concurrent processes contend for the same resources. This further reduced our query overhead.
  • We made improvements that allow Amazon Redshift to more efficiently burst these short queries to concurrency scaling clusters to improve query parallelism.

To see where Amazon Redshift stood after making these engineering improvements, we ran an internal test using the Cloud Data Warehouse Benchmark derived from TPC-DS (see a later section of this post for more details on the benchmark, which is available on GitHub). To simulate a high-concurrency, low-latency workload, we used a small 10 GB dataset so that all queries ran in a few seconds or less. We also ran the same benchmark against several other cloud data warehouses. We didn't enable auto scaling features such as concurrency scaling on Amazon Redshift for this test because not all data warehouses support it. We used an ra3.4xlarge Amazon Redshift cluster, and sized all other warehouses to the closest matching price-equivalent configuration using on-demand pricing. Based on this configuration, we found that Amazon Redshift can deliver up to 8x better performance on analytics applications that predominantly require short queries with low latency and high concurrency, as shown in the following chart.
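The "closest matching price-equivalent configuration" step above can be sketched as a simple nearest-price lookup. The warehouse sizes and hourly prices below are invented for illustration; only the selection logic is the point.

```python
# Hedged sketch of price-equivalent sizing: given a target hourly price,
# pick the configuration whose published on-demand price is closest.
# The size/price table below is made up, not real vendor pricing.

def nearest_price_equivalent(target_usd_per_hour, configs):
    """Return the config name whose hourly price is closest to the target."""
    return min(configs, key=lambda name: abs(configs[name] - target_usd_per_hour))

other_warehouse = {"Small": 8.0, "Medium": 16.0, "Large": 32.0, "XLarge": 64.0}
print(nearest_price_equivalent(32.60, other_warehouse))  # Large
```

With a target of $32.60 per hour, the "Large" size ($32.00) is the nearest match, so that is the configuration the comparison would use.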

With Concurrency Scaling on Amazon Redshift, throughput can be seamlessly and automatically scaled to additional Amazon Redshift clusters as user concurrency grows. Based on our telemetry data, we increasingly see customers using Amazon Redshift to build such analytics applications.

This is just a small peek into the behind-the-scenes engineering improvements our team is continually making to help you improve performance and save costs using a data-driven approach.

New features improving price-performance

With the constantly evolving data landscape, customers want high-performance data warehouses that continue to launch new capabilities to deliver the best performance at scale while keeping costs low for all workloads and applications. We have continued to add features that improve Amazon Redshift's price-performance out of the box at no extra cost to you, allowing you to solve business problems at any scale. These features include best-in-class hardware through the AWS Nitro System, hardware acceleration with AQUA, automatic query rewriting so that queries run faster using materialized views, Automatic Table Optimization (ATO) for schema optimization, Automatic Workload Management (WLM) to provide dynamic concurrency and optimize resource utilization, short query acceleration, automated materialized views, vectorization and single instruction/multiple data (SIMD) processing, and much more. Amazon Redshift has evolved to become a self-learning, self-tuning data warehouse, abstracting away the performance management effort so you can focus on high-value activities like building analytics applications.

To validate the impact of the latest Amazon Redshift performance improvements, we ran price-performance benchmarks comparing Amazon Redshift with other cloud data warehouses. For these tests, we ran both a TPC-DS-derived benchmark and a TPC-H-derived benchmark using a 10-node ra3.4xlarge Amazon Redshift cluster. To run the tests on other data warehouses, we chose warehouse sizes that most closely matched the Amazon Redshift cluster in price ($32.60 per hour), using published on-demand pricing for all data warehouses. Because Amazon Redshift is an auto-tuning warehouse, all tests are "out of the box," meaning no manual tuning or special database configuration is applied: the clusters are launched and the benchmark is run. Price-performance is then calculated as cost per hour (USD) times the benchmark runtime in hours, which is equivalent to the cost to run the benchmark.

For both the TPC-DS-derived and TPC-H-derived tests, we find that Amazon Redshift consistently delivers the best price-performance. The following chart shows the results for the TPC-DS-derived benchmark.

The following chart shows the results for the TPC-H-derived benchmark.

Although these benchmarks reaffirm Amazon Redshift's price-performance leadership, we always encourage you to try Amazon Redshift with your own proof-of-concept workloads as the best way to see how Amazon Redshift can meet your data needs.

Find the best price-performance for your workloads

The benchmarks used in this post are derived from the industry-standard TPC-DS and TPC-H benchmarks, and have the following characteristics:

  • The schema and data are used unmodified from TPC-DS and TPC-H.
  • The queries are used unmodified from TPC-DS and TPC-H. TPC-approved query variants are used for a warehouse if the warehouse doesn't support the SQL dialect of the default TPC-DS or TPC-H query.
  • The test includes only the 99 TPC-DS and 22 TPC-H SELECT queries. It doesn't include maintenance and throughput steps.
  • Three power runs (single stream) were performed with query parameters generated using the default random seed of the TPC-DS and TPC-H kits.
  • The primary metric of total query runtime is used when calculating price-performance. The runtime is taken as the best of the three runs.
  • Price-performance is calculated as cost per hour (USD) times the benchmark runtime in hours, which is equivalent to the cost to run the benchmark. Published on-demand pricing is used for all data warehouses.
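The metric defined in the last two bullets can be written out as a small calculation. The hourly price matches the cluster cost quoted earlier in this post; the three run times are made-up sample values.

```python
# Minimal sketch of the price-performance metric described above:
# cost per hour (USD) times the best (lowest) total runtime in hours
# across the three power runs. Run times below are illustrative only.

def price_performance(usd_per_hour, run_times_hours):
    """Cost in USD to run the benchmark, using the best of the runs."""
    return usd_per_hour * min(run_times_hours)

runs = [1.20, 1.05, 1.10]   # total runtime of each power run, in hours
print(round(price_performance(32.60, runs), 2))  # 32.60 * 1.05 -> 34.23
```

Because the metric is simply the cost to run the benchmark, a warehouse with a higher hourly price can still win on price-performance if its runtime is proportionally lower.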

We call this benchmark the Cloud Data Warehouse Benchmark, and you can easily reproduce the preceding benchmark results using the scripts, queries, and data available on GitHub. It is derived from the TPC-DS and TPC-H benchmarks as described earlier, and as such is not comparable to published TPC-DS or TPC-H results, because the results of our tests don't comply with the specification.

Every workload has unique characteristics, so if you're just getting started, a proof of concept is the best way to understand how Amazon Redshift performs for your requirements. When running your own proof of concept, it's important to focus on the right metrics: query throughput (number of queries per hour) and price-performance. You can make a data-driven decision by running a proof of concept on your own or with assistance from AWS or a system integration and consulting partner.


Conclusion

This post discussed the analytic workload trends we're seeing from Amazon Redshift customers, the new capabilities we have launched to improve Amazon Redshift's price-performance, and the results of the latest benchmarks.

If you're an existing Amazon Redshift customer, connect with us for a free optimization session and briefing on the new features announced at AWS re:Invent 2021. To stay up to date with the latest developments in Amazon Redshift, follow the What's New in Amazon Redshift feed.

About the Authors

Stefan Gromoll is a Senior Performance Engineer with Amazon Redshift, where he is responsible for measuring and improving Redshift performance. In his spare time, he enjoys cooking, playing with his three boys, and chopping firewood.

Ravi Animi is a Senior Product Management leader in the Redshift team and manages several functional areas of the Amazon Redshift cloud data warehouse service, including performance, spatial analytics, streaming ingestion, and migration strategies. He has experience with relational databases, multi-dimensional databases, IoT technologies, storage and compute infrastructure services, and more recently as a startup founder using AI/deep learning, computer vision, and robotics.

Florian Wende is a Performance Engineer with Amazon Redshift.


