Redis Performance – Does key length matter?

I’m currently building a project using Redis as a high-performance cache in a node.js application (using the excellent node_redis). My keys will be fairly large (between 512 bytes and 1 KB). The Redis documentation doesn’t specifically warn against keys of this size, but it still seems appropriate to run a benchmark and see how Redis reacts to large keys (and whether 1 KB is really a large key, or just par for the course).
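
For context, the kind of caching I have in mind looks roughly like this (a simplified sketch rather than the real project code; the key name and the one-hour expiry are just for illustration):

redis = require "redis"
client = redis.createClient()

# Cache a rendered fragment under a large key, expiring after an hour.
cacheFragment = (key, html) ->
  client.setex key, 3600, html

# Read it back; the callback receives null on a cache miss.
getFragment = (key, callback) ->
  client.get key, callback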

Test Script (source)

Basically, we insert 1000 records into Redis, each with a 10,000-character value. After the writes are all complete, we read each key back from Redis.

redis = require "redis"

# Build a random alphanumeric string of the given length.
randomString = (length) ->
  chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
  result = ""
  i = length
  while i > 0
    result += chars[Math.round(Math.random() * (chars.length - 1))]
    --i
  result

writeTest = (keyLength) ->
  console.log "1000 set statements for #{keyLength} character keys"
  keys = []
  for x in [1..1000]
    keys.push randomString(keyLength)
  startTime = new Date().getTime()
  for x in keys
    client.set x, randomString(10000)
  # quit() fires its callback only after every queued command completes,
  # so this measures the full batch of writes
  client.quit ->
    console.log "1000 keys inserted in #{new Date().getTime() - startTime} ms"
    readTest(keys)

readTest = (keys) ->
  client = redis.createClient()
  startTime = new Date().getTime()
  for x in keys
    client.get x
  client.quit ->
    console.log "1000 keys retrieved in #{new Date().getTime() - startTime} ms"

client = redis.createClient()

# Start from an empty database; 20000 is the key length for this run.
client.flushdb ->
  writeTest(20000)
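
One caveat: the 10,000-character values are built inside the timed loop, so the write numbers include some string-generation overhead (a point Andy raises in the comments below). A variation that pre-generates the values and stops the clock when the last SET reply arrives might look like this (a sketch only; the tables below came from the original script):

writeTestPrepared = (keyLength) ->
  keys = (randomString(keyLength) for x in [1..1000])
  # build the 10,000-character values before the clock starts
  values = (randomString(10000) for x in [1..1000])
  remaining = keys.length
  startTime = new Date().getTime()
  for key, i in keys
    client.set key, values[i], ->
      remaining -= 1
      # stop timing once the final SET reply arrives
      if remaining is 0
        console.log "1000 keys inserted in #{new Date().getTime() - startTime} ms"
        readTest(keys)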

This test was performed for key lengths of 10, 100, 500, 1,000, 2,500, 5,000, 7,500, 10,000, and 20,000 characters, with three runs of each to guard against fluke results. Without further ado, the results:

Write Performance (total ms for 1000 writes)

Key Length   Run 1   Run 2   Run 3
        10    1235    1216    1259
       100    1231    1242    1223
       500    1283    1240    1270
      1000    1277    1317    1345
      2500    1318    1279    1294
      5000    1376    1391    1386
      7500    1223    1204    1265
     10000    1220    1252    1235
     20000    2065    2014    2016

Read Performance (total ms for 1000 reads)

Key Length   Run 1   Run 2   Run 3
        10      43      41      51
       100      45      45      43
       500      60      54      58
      1000      69      73      79
      2500      97     101     102
      5000     113     114     110
      7500     134     133     136
     10000     147     156     151
     20000     244     234     241

Not surprisingly, times do increase as keys get longer. Write times stay essentially flat from 10 up through 10,000 characters, only jumping at 20,000, while read times climb steadily with key length. To put it in perspective:

  • Key length 10 – an average write takes 1.24ms, an average read takes 0.045ms
  • Key length 10,000 – an average write takes 1.24ms, an average read takes 0.15ms

Whether or not this is significant is really up to you; for my purposes, it seems like an insignificant difference. At the end of the day, Redis is a fast and flexible tool for caching data.


6 thoughts on “Redis Performance – Does key length matter?”

  1. bijoor says:

    Very useful! Thanks for sharing this!

  2. Paranormal says:

    Awesome. Good info. As for maximum key size, there is an interesting post here: https://groups.google.com/forum/#!topic/redis-db/HH4z-8mHNLM

  3. Gulch says:

    Useful information. Thank you!

  4. Andy says:

    Since you are generating the key_length random strings on the fly, you are influencing the results of your write test (not measuring just Redis)…

    That said, I’m going to try to use your code as a starting point with redis-python… Thanks

    • Andy says:

      My results are on-par with yours, though I have to say the generation of the keys/values in Python takes a considerable amount of time (enough to make me think that your scripting engine or hardware is much faster than mine) 🙂

      21.5920000076 seconds to prep 1000 10000-byte values
      44.7279999256 seconds to prep 1000 20000-byte keys

      Results:
      key_sizes, write_data, read_data = (
      [10, 100, 1000, 10000, 20000], # bytes
      [1023, 1007, 1006, 1104, 1565], # ms write
      [1184, 1143, 1076, 1336, 1773]) # ms read

      ^— those read times are terrible IMO

      I tried it again with the redis-py pipeline to do the whole read/whole write at the same time.

      Also, it’s worth noting that in MY test the client was on a different machine from the server (and the server is the Windows port of Redis)

      Using REDIS.pipeline():
      ([10, 100, 1000, 10000, 20000],
      [197, 196, 249, 528, 792],
      [2587, 2809, 2657, 2816, 3536])

      ^— GAH! What’s happening? Writes were faster, reads were SLOWER?

      Anyway, this might help someone, not sure. Looks like these numbers at least show that with increasing key size there is not a whole lot of difference in the read/write times.

      And something about my setup (over a corporate LAN) is not as awesome as it apparently can be 😦
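
For anyone who wants to try Andy’s pipelined read from node.js, the closest node_redis equivalent I know of is a MULTI/EXEC batch, which (like redis-py’s default pipeline) queues all 1000 GETs and sends them together as a transaction. A quick sketch, which I haven’t benchmarked:

pipelineReadTest = (keys) ->
  client = redis.createClient()
  startTime = new Date().getTime()
  multi = client.multi()
  # queue every GET client-side; nothing hits the wire until exec()
  multi.get key for key in keys
  multi.exec (err, replies) ->
    console.log "#{replies.length} keys retrieved in #{new Date().getTime() - startTime} ms"
    client.quit()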
