In-memory key-value store in C, Go and Python
Subtitle: Wow Go’s net library is fast
On paternity leave for my second child, I found myself writing an in-memory hashmap (a poor-man’s memcached), in Go, Python and C. I was wondering how hard it would be to replace memcached, if we wanted to do something unusual with our key-value store. I also wanted to compare the languages, and, well, I get bored easily!
The code is on github as Key-Value-Polyglot.
Each version implements enough of the get and set commands from the memcached protocol that we can test them with a memcached client.
If you write a version in a different language (see Spec at bottom), please send it to me, either directly or as a pull-request on github. I’ll add it to the repo.
The C and Python versions perform similarly. They are spending all their time in the recv system call.
The Go version is many times faster than the C and Python versions. It is the same speed as memcached. Internally, Go sets the socket to be non-blocking and uses epoll and futex to wait for data. Memcached does the same thing, via libevent.
The wonderful part is I had no idea Go did this. I wrote each version as simply as I could, and the Go version just turned out amazingly fast. You could definitely use epoll in C and Python, and get the same speed as the Go version. What I enjoyed is that Go did that for me.
To time the different versions on your machine run:

    time python test.py

That script requires Python 2 and pylibmc (pip install pylibmc or sudo apt-get install python-pylibmc).
Try test.py against memcached too.
The socket part was easy. Our languages are wrapping POSIX socket system calls. C maps the calls exactly, and Python maps the C calls very closely. Go does a bit more work for us, replacing socket-bind-listen-accept with just listen-accept.
Accepting multiple concurrent requests was straightforward too. Go has the magic go statement, Python has the threading library, and C has pthread_create. None of my implementations are thread safe (thanks timtadh on HN).
The tricky part is managing the data you get from the socket. Sockets are just streams of data, it’s up to the protocol to say when a message starts and ends. For the memcached protocol, we need to be able to read until \r\n, and also read a specific amount of bytes (which might include \r\n).
The C version took the longest to write, and is the least legible. Partly that is because my C was rusty, partly C is just quite verbose, but the biggest part is because I had to implement buffering around the socket ‘recv’ call.
The Python version took me a while to get right, and that’s mostly my fault. I knew Python wouldn’t make me write my own buffering code, and I spent too long playing with timeout and non-blocking setups. Once I realized I could treat the socket as a file (‘makefile’) things got easier.
Python 3's universal newline support kindly normalizes \r\n to \n, but we're disabling it here to preserve Python 2 compatibility.
The Go version was the easiest to write because I had already used the bufio package, which solves our stream buffering problem.
Read on for how to build and profile the different versions yourself.
Go version: You’ll need a recent weekly snapshot of Go (hg update weekly), or Go v1 if it is released by the time you read this. Build it with:

    go build memg.go

and run the memg executable it makes.
Python version: Uses Python 2.7 or Python 3, just run the script.
C version: Build with:

    gcc -std=gnu99 -g -W -Wall -pedantic -pthread -o memg memg.c

There should be no warnings.
To make profiling easier, each version accepts a --single command line parameter, so that it exits after a single connection.
Start it in single mode, under the profiler:

    python -m cProfile -o stats memg.py --single

Then run test.py in a different window.
View the output with pstats:

    $ python
    >>> import pstats
    >>> p = pstats.Stats("stats")
    >>> p.strip_dirs().sort_stats("cumulative").print_stats(20)
Go has a sampling profiler, but because the Go version is so fast we need more iterations to get a useful sample. Edit test.py and increase the iteration count to 10,000 or more.
Run memg under prof, with --single so that it exits after the test connection closes:

    go tool prof memg --single
See: prof command
You can get more detail using the runtime/pprof package and go tool pprof to examine the created file.
I couldn’t find a C profiler that would include I/O wait times (please let me know in the comments!), and as we’re heavily I/O bound, profiling without it doesn’t make much sense.
The most useful tool in this situation was strace. Run the following for a summary of the system calls issued, and run test.py in a different window. The output shows that it spends ~95% of the time in a recv call.
strace -c ./memg --single
If you still want to profile it, despite the lack of I/O wait information, start it in single mode under valgrind’s callgrind:

    valgrind --tool=callgrind ./memg --single

Then run test.py in a different window.
View the output with a callgrind viewer such as kcachegrind.
If you’re interested in trying it in a different language, or a different approach, here’s the short spec.
The key-value stores are an in-memory hash map. They listen on port 11211, and accept get and set commands, so that you can reproduce the following telnet interaction:
    telnet 127.0.0.1 11211
    set greeting 0 0 11      # User
    Hello World              # User, 11 bytes
    STORED                   # Server response
    get greeting             # User
    VALUE greeting 0 11      # Server response
    Hello World              # Server response
    END                      # Server response
You type the ‘set’, ‘Hello World’ and ‘get’ lines. The rest are replies. In the ‘set’ line, the ‘11’ is the length of the value (‘Hello World’) in bytes. The two zeros are ignored. All lines terminate with \r\n.
The repo has a test.py script which uses pylibmc. To get pylibmc:

    sudo pip install pylibmc

or

    sudo apt-get install python-pylibmc
If you make a version in a different language, please send it to me on github, or link it in the comments.