Performance Benchmark: Redis Cloud vs. ElastiCache vs. openredis vs. RedisGreen vs. Redis To Go
In most performance comparisons, Redis (an open source key-value cache/store) is treated as a caching-only solution, while other benchmarks focus on just a single provider. However, customers are interested in deeper utilization of Redis's built-in data types and server-side operations. In production, several workloads with different types of tasks may query your database simultaneously.
For this reason, we designed a scenario that evaluates Redis performance in more complicated conditions. It combines two different types of queries (both simple and complex) generated concurrently. We’ve just published the performance results here (latencies, throughput, etc.). In this blog post, you’ll find some of the main findings.
Redis-as-a-Service providers available
We compared the following Redis-as-a-Service offerings available on Amazon AWS:
- Redis Cloud Standard (by Redis Labs)
- Redis Cloud Cluster (by Redis Labs)
- ElastiCache (by Amazon)
- openredis (by Amakawa)
- RedisGreen (by Stovepipe Studios)
- Redis To Go (by Exceptional Cloud Services / Rackspace)
For this study, each vendor provided us with their best performing Redis-as-a-Service plan, using no more than a single AWS EC2 instance. All of the server instances were located in the same region as the benchmarking client.
Workload scenarios in use
Almost all the benchmarks we encountered were limited to CRUD operations (Create, Read, Update, and Delete), evaluating only basic Redis capabilities. To test its advanced functionality, we put six popular RaaS offerings through three very different scenarios.
The simple workload consisted of SET and GET operations in a 1:1 ratio and used different pipeline sizes (4 and 50).
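To make the simple workload concrete, here is a minimal sketch of how such a 1:1 SET/GET command stream could be chunked into pipeline-sized batches. The key/value naming is illustrative, and in a real run each batch would be sent to Redis through a client pipeline (e.g., redis-py's `pipeline()`); this sketch only builds the batches:

```python
def simple_workload(num_ops, pipeline_size):
    """Build batches of commands for a 1:1 SET/GET workload.

    Each batch holds `pipeline_size` commands that would be sent
    to Redis in a single pipelined round trip.
    """
    ops = []
    for i in range(num_ops // 2):
        ops.append(("SET", f"key:{i}", f"value:{i}"))  # write half
        ops.append(("GET", f"key:{i}"))                # read half
    # Chunk the command stream into pipeline-sized batches.
    return [ops[i:i + pipeline_size] for i in range(0, len(ops), pipeline_size)]

batches = simple_workload(num_ops=200, pipeline_size=4)
# 200 commands in batches of 4 -> 50 round trips instead of 200
print(len(batches))  # 50
```

Larger pipelines (50 vs. 4) amortize network round trips over more commands, which is why pipeline size matters so much for throughput in this scenario.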
The complex workload consisted of ZUNIONSTORE operations on four sorted sets. In this scenario, increasing the number of client threads increased the number of requests for Redis to process, evaluating the scalability of the RaaS solutions.
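ZUNIONSTORE is server-side work: Redis merges the input sorted sets itself, by default summing the scores of members that appear in more than one set. A minimal pure-Python model of that default SUM aggregation over four sets (the set contents here are illustrative, not from the benchmark):

```python
def zunionstore(*sorted_sets):
    """Model ZUNIONSTORE's default SUM aggregation: the score of
    each member is summed across all input sorted sets."""
    dest = {}
    for zset in sorted_sets:
        for member, score in zset.items():
            dest[member] = dest.get(member, 0.0) + score
    return dest

z1 = {"a": 1.0, "b": 2.0}
z2 = {"b": 3.0, "c": 1.0}
z3 = {"c": 2.0}
z4 = {"a": 4.0}
print(zunionstore(z1, z2, z3, z4))  # {'a': 5.0, 'b': 5.0, 'c': 3.0}
```

Because the union is computed inside the server, each ZUNIONSTORE request costs far more CPU than a SET or GET, which is what makes this workload a scalability stress test rather than a network test.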
The combined workload imitated a real-life use case, where Redis-as-a-Service providers were benchmarked under CRUD operations and server-side operations on built-in data types (running concurrently).
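The combined scenario can be sketched as two workers running concurrently against the same server. In this hedged, stdlib-only skeleton the workers merely count iterations; in the actual benchmark each loop body would issue a SET/GET pipeline or a ZUNIONSTORE through a Redis client:

```python
import threading

def run_combined(ops_per_worker=10_000):
    """Launch the simple and complex workloads concurrently,
    mimicking mixed production traffic. The workers here only
    count iterations; real ones would issue Redis commands."""
    counters = {"simple": 0, "complex": 0}
    lock = threading.Lock()

    def worker(name):
        done = 0
        while done < ops_per_worker:
            done += 1  # placeholder for one pipelined SET/GET batch or ZUNIONSTORE
        with lock:
            counters[name] = done

    threads = [threading.Thread(target=worker, args=(n,))
               for n in ("simple", "complex")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counters

print(run_combined())  # {'simple': 10000, 'complex': 10000}
```

Measuring per-worker throughput and latency separately under this kind of concurrency is what exposes how the heavy server-side operations starve the cheap ones.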
Reality: how parallel loads behave
The differences between the RaaS solutions were most clearly demonstrated in the combined scenario, when the simple and complex workloads were launched concurrently. In addition, as the number of threads increased, we observed some intriguing behavior in the simple and complex operations (compare the diagrams below).
Some of the insights based on the performance results of the combined workload:
While the throughput of complex operations either increased or stayed roughly the same, the performance of simple operations degraded as the number of threads grew. This behavior was observed across all RaaS providers.
Redis Cloud Cluster (by Redis Labs) outperformed the rest of the RaaS offerings in most use cases. This was the only solution that still scaled linearly in the combined scenario.
Taking into account how complex operations influence the performance of simple ones, Redis-as-a-Service solutions may behave quite differently under different types of load. This makes it necessary to evaluate each particular case according to its complexity, type of load, number of threads, etc.
For all 8 performance diagrams (and 3 tables with exact latencies and throughput data), check out the full version of this Redis-as-a-Service benchmark:
About the author: Vladimir Starostenkov is a Senior R&D Engineer at Altoros. He is focused on implementing complex software architectures, including data-intensive systems and Hadoop-driven apps. Having background in computer science, Vladimir is passionate about artificial intelligence and machine learning algorithms. His NoSQL and Hadoop studies were published in NetworkWorld, CIO.com, and other industry media.
Related study: “2014 NoSQL Tech Comparison: Couchbase vs. MongoDB vs. Cassandra”