Golang is a useful programming language that can solve daily problems in an efficient way. It’s easy to learn, and doesn’t require writing a lot of code to perform well.

Let’s take a look at how Golang can help in a simple and practical case involving copying large amounts of Redis keys.

At some point it became necessary to split our Amazon ElastiCache store into two parts: one for storing cached data, and the other for storing users’ sessions.

Unfortunately, we previously had both on the same instance. We also didn’t want to interrupt long-lived sessions by resetting the storage.

Amazon ElastiCache is compatible with the Redis protocol, though with certain limitations. Redis supports the MIGRATE command, which atomically transfers keys from one instance to another.

Internally, MIGRATE works by executing DUMP on the source instance, recreating the key on the target with RESTORE, and then deleting it from the source with DEL. However, Amazon’s version didn’t support this command at the time.

Back then, my practical experience with Golang was limited. I’d only implemented projects for fun and was familiar with basic syntax and concepts like goroutines and channels. But I’d decided that was enough to make use of Golang’s strengths to solve the problem I was facing.

Let’s assume that Golang is fast enough to do the job. Keep in mind that Redis is mostly a single-threaded server from the point of view of command execution, and that it implements replication with no concurrency.

I’ve picked two base libraries for this challenge:

The interface is ready: it supports a “pattern” parameter to match keys and a “limit” parameter to cap the number of keys to copy. The source and destination addresses are required positional arguments.
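Such an interface can be sketched with the standard library’s flag package (the actual tool may use a dedicated CLI library; the names and defaults here are illustrative):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// parseArgs parses the hypothetical CLI described above: -pattern and
// -limit flags, plus required source and destination positional arguments.
func parseArgs(args []string) (pattern string, limit int, src, dst string, err error) {
	fs := flag.NewFlagSet("copier", flag.ContinueOnError)
	p := fs.String("pattern", "*", "glob pattern to match keys")
	l := fs.Int("limit", 0, "maximum number of keys to copy (0 = no limit)")
	if err = fs.Parse(args); err != nil {
		return
	}
	if fs.NArg() != 2 {
		err = fmt.Errorf("usage: copier [flags] <source> <destination>")
		return
	}
	return *p, *l, fs.Arg(0), fs.Arg(1), nil
}

func main() {
	pattern, limit, src, dst, err := parseArgs(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("copying keys matching %q (limit %d) from %s to %s\n",
		pattern, limit, src, dst)
}
```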

Radix supports creating a “scanner” structure that helps you iterate over keys:
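A minimal sketch of that iteration, assuming radix v3 (the address, pattern, and pool size are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/mediocregopher/radix/v3"
)

func main() {
	// Connect to the source instance (address is illustrative).
	src, err := radix.NewPool("tcp", "127.0.0.1:6379", 10)
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	// A radix Scanner wraps the SCAN command and iterates over
	// matching keys cursor by cursor.
	scanner := radix.NewScanner(src, radix.ScanOpts{
		Command: "SCAN",
		Pattern: "session:*", // illustrative pattern
		Count:   100,         // hint for how many keys each SCAN call returns
	})

	var key string
	for scanner.Next(&key) {
		fmt.Println(key)
	}
	if err := scanner.Close(); err != nil {
		log.Fatal(err)
	}
}
```

Scanner.Next advances the SCAN cursor transparently, so the loop body only ever sees one key at a time.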

The loop is now ready. What’s left is to read each key and restore it on the target. I joined the PTTL and DUMP commands in a pipeline to fetch a key’s time to live and serialized value in a single round trip.
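A sketch of that step, again assuming radix v3 (the helper name, addresses, and the REPLACE option are my own choices, not necessarily the article’s):

```go
package main

import (
	"log"

	"github.com/mediocregopher/radix/v3"
)

// copyKey reads a key's TTL and serialized value from src in one
// pipelined round trip, then recreates the key on dst with RESTORE.
func copyKey(src, dst radix.Client, key string) error {
	var ttl int64
	var value string

	// PTTL and DUMP are sent together in one pipeline to save a round trip.
	err := src.Do(radix.Pipeline(
		radix.Cmd(&ttl, "PTTL", key),
		radix.Cmd(&value, "DUMP", key),
	))
	if err != nil {
		return err
	}

	// PTTL returns a negative value for keys without an expiry,
	// while RESTORE expects 0 in that case.
	if ttl < 0 {
		ttl = 0
	}
	return dst.Do(radix.FlatCmd(nil, "RESTORE", key, ttl, value, "REPLACE"))
}

func main() {
	src, err := radix.NewPool("tcp", "127.0.0.1:6379", 10)
	if err != nil {
		log.Fatal(err)
	}
	dst, err := radix.NewPool("tcp", "127.0.0.1:6380", 10)
	if err != nil {
		log.Fatal(err)
	}
	if err := copyKey(src, dst, "session:example"); err != nil {
		log.Fatal(err)
	}
}
```

Unlike MIGRATE, this approach leaves the key on the source, which is exactly what you want when the old instance must keep serving traffic during the switchover.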

That’s already enough for the code to work, but adding some reporting logic would definitely improve the user experience.
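Reporting can be as simple as a counter that emits a status line every N keys. A minimal sketch (the interval and wording are arbitrary):

```go
package main

import "fmt"

// progress counts copied keys and reports at a fixed interval.
type progress struct {
	copied int
	every  int
}

// inc records one copied key and returns a status line whenever the
// configured interval is reached, or "" otherwise.
func (p *progress) inc() string {
	p.copied++
	if p.copied%p.every == 0 {
		return fmt.Sprintf("%d keys copied", p.copied)
	}
	return ""
}

func main() {
	p := &progress{every: 1000}
	for i := 0; i < 2500; i++ {
		if msg := p.inc(); msg != "" {
			fmt.Println(msg)
		}
	}
}
```

Calling inc once per restored key inside the scan loop is enough to get periodic feedback without flooding the terminal.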

The complete code can be found here:

But is it really that good?

Let’s run some benchmarks by quickly spawning two Redis instances locally with Docker and seeding the source with data (453,967 keys in total, of which only the subset matching a pattern is copied).