
Rediscovering the RSync Algorithm

  rslootma · 2012-02-14 10:47:24

A: Ok, you’re synchronizing this over the web;
and what do you use for the synchronization?

B: Oh, we implemented the rsync algorithm.
A: uhu. And what do you do with really big files?
B: The same.
A: And you also synchronise folders?
B: Yes.
A: And how do you do that?
B: We iterate over the folder, running the algorithm on every file and recursing into subfolders.
A: Can you try two things for me? First, a very large file; second, a large codebase. See if it holds up.

Introduction

First of all, I am an admirer of the (original) rsync algorithm. It was (I think) a first stab at file synchronization, and it was so good that people still implement it today.
But if you don’t understand its strengths and weaknesses, you might be in for a disappointment.

The Problem

You have 2 files, A’ and A, that are different but alike. They reside in 2 different locations connected through a network. You want to make A’ identical to A.

The simplest solution is to just copy A, but given the similarity between the two files, there has to be a better way.

Historical Interlude

Networks used to be notoriously bad in the early 90s. Everybody who was transferring archives over large distances instinctively knew about a critical download size.
If the archive was too large, it would take too long, yielding a near-100% chance that something would go wrong somewhere, resulting in an interrupted download. Even if the (up- or) download succeeded, chances were that a small part of the file got corrupted and you had to start over. The first two alleviations to this problem were checksums (to detect accidental corruption) and resumption (being able to restart a download at a certain offset).

RSync took care of interrupted downloads, and also provided a better solution when your file was corrupt. On top of that, it allowed low cost propagation of small changes, opening up a whole new range of applications. System administrators had a shiny new hammer.

The RSync Strategy

RSync needs just a single round trip. First it creates a signature of A’ and sends it over. At the other location it scans the local file, tries to find parts that appear in the signature, and constructs a recipe as a stream of instructions. It’s possible to derive the algorithm starting from a primitive version and improving it step by step.
Since that’s fun too, I’ll be doing exactly that here. While we’re playing, we’ll abstract away from IO, because it clouds the algorithmic view.
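
In terms of the three functions we’ll build below (file_signature, updates and apply), the whole round trip boils down to something like this (just a sketch; in reality the signature and the instruction stream travel over the network):

(* A sketch of the round trip; file_signature, updates and apply are
   defined further down. The signature is computed where A' lives,
   the delta where A lives, and A' is then patched into a copy of A. *)
let synchronise a' a b_size =
  let signature = file_signature a' b_size in
  let delta = updates signature b_size a in
  apply a' b_size delta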

Mark 0

Let’s attack this in pure caveman style. Making a signature means splitting the file into blocks of equal size (except maybe the last one). Iterating over the blocks, you calculate a digest for each and accumulate the digests together with block identifiers. Block identifiers are just their sequence numbers: the first block has id 0, the second block id 1, and so on.

let file_signature f b_size =
  let f_len = String.length f in
  let rec loop acc s i =
    if s = f_len
    then acc
    else
      let b_len = min b_size (f_len - s) in
      let b = String.sub f s b_len in
      let b_sig = block_signature b in
      let acc' = (b_sig,i) :: acc in
      loop acc' (s+b_len) (i+1)
  in
  loop [] 0 0

We have lots of choices for calculating a block signature. Let’s be lazy and pick Digest.string, which is the md5 checksum of the block.

let block_signature block = Digest.string block

To recreate the file you need to interpret the stream of instructions. But what would these instructions be?
Well, in this version, you can be told either to copy over a block or to write a char.

type instruction =
  | C  of char
  | B  of int

Ok, how do you combine the signature with the new file to generate a stream of instructions?
The first thing that comes to mind is to scan over the new file, starting at position s:

  • consider the block starting at s and try to find it in the signature.
  • if you find it, add a B j instruction, and jump a block forward.
  • if you miss it, add a C c instruction, and step forward 1 position.

Let’s do that:

let updates f0_sig b_size f1 =
  let f1_len = String.length f1 in
  let rec loop acc s =
    if s = f1_len
    then List.rev acc
    else
      let b_len = min b_size (f1_len - s) in
      let block = String.sub f1 s b_len in
      let u,step =
        try
          let d = block_signature block in
          let i = List.assoc d f0_sig in
          (B i), b_len
        with Not_found -> (C block.[0]), 1
      in
      let acc' = u :: acc in
      loop acc' (s + step)
  in
  loop [] 0

The last step in our synchronisation scheme is to rebuild the new file from the old file
together with the stream of instructions.

let apply old b_size ins =
  let old_len = String.length old in
  let buf = Buffer.create old_len in
  let add_block i =
    let s = i * b_size in
    let b_len = min b_size (old_len - s) in
    let block = String.sub old s b_len in
    Buffer.add_string buf block
  in
  let add_char c = Buffer.add_char buf c in
  let () = List.iter (function
    | B i -> add_block i
    | C c -> add_char c)
    ins
  in
  Buffer.contents buf

So it took 60 lines of code to have a first stab at a synchronisation algorithm.
Let’s try this on an example:

let bs = 5
let remote = "aaaaabbbbbcccccdddddeeeeefffffggggghhhhhiiiiijjjjjkkk"
let mine = "aaaaabXbbbcccccddddde012"
let mine_s = file_signature mine bs

let delta = updates mine_s bs remote
let mine2 = apply mine bs delta;;


val bs : int = 5
val remote : string = "aaaaabbbbbcccccdddddeeeeefffffggggghhhhhiiiiijjjjjkkk"
val mine : string = "aaaaabXbbbcccccddddde012"

val mine_s : (Digest.t * int) list =
  [("$\240\t\221\19200\222\199\2035\190|\222~#\n", 4);
   ("P\248M\175:m\253j\159 \201\248\239B\137B", 3);
   ("g\199b'k\206\208\158\228\22314\2137\209d\234", 2);
   ("\196\148\"\21926Lm\179V E=\201O\183,", 1);
   ("YO\128;8\nA9n\214=\2029P5B", 0)]

val delta : instruction list =
  [B 0; C 'b'; C 'b'; C 'b'; C 'b'; C 'b'; B 2; B 3; C 'e'; C 'e'; C 'e';
   C 'e'; C 'e'; C 'f'; C 'f'; C 'f'; C 'f'; C 'f'; C 'g'; C 'g'; C 'g';
   C 'g'; C 'g'; C 'h'; C 'h'; C 'h'; C 'h'; C 'h'; C 'i'; C 'i'; C 'i';
   C 'i'; C 'i'; C 'j'; C 'j'; C 'j'; C 'j'; C 'j'; C 'k'; C 'k'; C 'k']
val mine2 : string = "aaaaabbbbbcccccdddddeeeeefffffggggghhhhhiiiiijjjjjkkk"

Ok, it works, but how good is it?
Before I can answer this, first a note on block size. There are some ‘forces’ to be reckoned with:

  • the block size needs to be big compared to the block signature.
  • if the block size is big, you will find fewer matches between the signature and the new file, so you need to send more data back.
  • if the block size is small, you have lots of blocks, meaning your signature will be bigger
    and you need an efficient lookup.

The best block size will depend not only on the file size, but also on the actual changes.
How important is it really?
Well, let’s sync 2 images:

These 2 images are bitmaps of 5.5 MB each (stored as .bmp).
(I actually uploaded smaller versions as it seems useless to let your browser download more than 10MB for just 2 silly image samples)
I’ll sync ’em using different block sizes and see what gives.

let main () =
  let bs = int_of_string (Sys.argv.(1)) in
  let () = Printf.printf "bs=%i\n" bs in
  (* get_data reads a whole file into a string (helper not shown in the post) *)
  let remote = get_data "new_image.bmp" in
  let () = Printf.printf "remote: size=%i%!\n" (String.length remote) in
  let mine = get_data "old_image.bmp" in
  let mine_s = X.file_signature mine bs in
  let () = Printf.printf "mine_s: len=%i%!\n" (Array.length mine_s) in
  let delta = X.updates mine_s bs remote in
  let (nbs,ncs) = List.fold_left (fun (bs,cs) i ->
    match i with
      | X.B _ -> (bs+1,cs)
      | X.C _ -> (bs,cs+1)
  ) (0,0) delta
  in
  let mine2 = X.apply mine bs delta in
  let () = Printf.printf "mine2: size=%i%!\n" (String.length mine2) in
  Printf.printf "bs=%i;cs=%i\n" nbs ncs

block size   # block signatures   blocks    chars     time
      8192                  704      535    1384448    71s
      4096                 1407     1084    1323008    92s
      2048                 2813     2344     960512    92s
      1024                 5626     4995     646144   115s
       512                11251    10309     482304   172s
       256                22501    20972     391424   283s
       128                45001    42508     319104   537s

The first column is the block size. The second is the number of blocks in the file (and thus of block signatures), the third is the number of B instructions and the fourth is the number of C instructions.
The last column is the measured execution time on my laptop. Processing time is the biggest issue. OCaml is fast, but not fast enough to compensate for my brutal inefficiency. Imagine what this would do to a 4GB movie.

Mark 1

The problem is quickly revealed: List.assoc is not suited for long lists.
A better solution is to use an array, sort it on block signature, and do a binary search
(using a hashtable would be viable too; see the sketch below).
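
For reference, a hashtable-based lookup could look something like this (a sketch of the alternative, not the version used below):

(* Index the signature once; Hashtbl.find raises Not_found on a miss,
   just like List.assoc and the binary search do. *)
let build_index f0_sig =
  let h = Hashtbl.create (Array.length f0_sig) in
  Array.iter (fun (d, i) -> Hashtbl.add h d i) f0_sig;
  h

let lookup h d = Hashtbl.find h d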

let block_signature block = Digest.string block

let file_signature f b_size =
  let f_len = String.length f in
  let s = ref 0 in
  let n_blocks = (f_len + b_size - 1) / b_size in
  Array.init n_blocks
    (fun i ->
      let start = !s in
      let b_len = if start + b_size > f_len then f_len - start else b_size in
      let b = String.sub f start b_len in
      let b_sig = block_signature b in
      let () = s := start + b_len in
      (b_sig,i)
    )

type instruction =
  | C  of char
  | B  of int

let updates f0_sig b_size f1 =
  let my_cmp (s0,_i0) (s1,_i1) = String.compare s0 s1 in
  let () = Array.sort my_cmp f0_sig in
  let len = Array.length f0_sig in
  let lookup b =
    let rec find min max =
      let mid = (min + max) / 2 in
      let (ms,mi) = f0_sig.(mid) in
      if ms = b
      then mi
      else if min > max then raise Not_found
      else if b > ms then find (mid+1) max
      else find min (mid-1)
    in
    find 0 (len-1)
  in
  let f1_len = String.length f1 in
  let rec loop acc s =
    if s = f1_len
    then List.rev acc
    else
      let b_len = min b_size (f1_len - s) in
      let block = String.sub f1 s b_len in
      let u,step =
        try
          let d = block_signature block in
          let i = lookup d in
          (B i), b_len
        with Not_found -> (C block.[0]), 1
      in
      let acc' = u :: acc in
      loop acc' (s + step)
  in
  loop [] 0

let apply old b_size ins =
  let old_len = String.length old in
  let buf = Buffer.create old_len in
  let add_block i =
    let s = i * b_size in
    let b_len = min b_size (old_len - s) in
    let block = String.sub old s b_len in
    Buffer.add_string buf block
  in
  let add_char c = Buffer.add_char buf c in
  let () = List.iter (function
    | B i -> add_block i
    | C c -> add_char c)
    ins
  in
  Buffer.contents buf

block size   # block signatures   blocks    chars     time(s)
      8192                  704      535    1384448      41
      4096                 1407     1084    1323008      20
      2048                 2813     2344     960512      7.8
      1024                 5626     4995     646144      1.9
       512                11251    10309     482304      1.1
       256                22501    20972     391424      0.8
       128                45001    42508     319104      0.9

Wow, this is quite unexpected (but we’re not complaining). So what happened? Well, the lookup was so dominant that it was cloaking all other behaviour.
Now, with the lookup out of the way, other things can be observed. The problem now is that a ‘miss’ not only causes a C instruction to be emitted but, more importantly, causes another digest to be calculated. The fewer blocks are found, the higher the processing time.

Mark 2

The cost of a miss was solved by Andrew Tridgell by introducing a second, weaker hash function.
He used the Adler-32 checksum, which is a rolling checksum. ‘Rolling’ means that
adler32(buffer[a+1:b+1]) can be computed cheaply from adler32(buffer[a:b]) together with the bytes buffer[a] and buffer[b], which makes it suitable for our problem here. The OCaml standard library does not have an adler32 module, so I hacked one up.
It’s a naive implementation in that it follows the definition closely. In fact, most of the modulo operations could be avoided by doing some extra administration.
I didn’t bother.

module Adler32 = struct
  type t = {mutable a: int; mutable b: int; mutable c: int}

  let padler = 65521

  let make () = {a = 1; b = 0; c = 0}

  let from buf offset length =
    let too_far = offset + length in
    let rec loop a b i =
      if i = too_far
      then {a; b; c = length}
      else (* modulo can be lifted with a bit of math *)
        let a' = (a + Char.code(buf.[i])) mod padler in
        let b' = (b + a') mod padler in
        loop a' b' (i+1)
    in
    loop 1 0 offset

  let reset t = t.a <- 1; t.b <- 0; t.c <- 0

  let digest t = (t.b lsl 16) lor t.a

  let out t c1 =
    let x1 = Char.code c1 in
    t.a <- (t.a - x1) mod padler;
    t.b <- (t.b - t.c * x1 - 1) mod padler;
    t.c <- t.c - 1

  let rotate t c1 cn =
    let up x = if x >= 0 then x else x + padler in
    let x1 = Char.code c1 in
    let xn = Char.code cn in
    t.a <- up ((t.a - x1 + xn) mod padler);
    t.b <- let f = (t.c * x1) mod padler in
           let r = (t.b - f + t.a - 1) mod padler in
           up r

  let update t buf offset length =
    let too_far = offset + length in
    let rec loop i =
      if i = too_far then ()
      else
        let x = Char.code buf.[i] in
        let () = t.a <- (t.a + x) mod padler in
        let () = t.b <- (t.b + t.a) mod padler in
        let () = t.c <- t.c + 1 in
        loop (i + 1)
    in
    loop offset

  let string block =
    let t = from block 0 (String.length block) in
    digest t
end
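
To make the ‘rolling’ property concrete, here is a tiny sanity check (my own illustration, not from the original post): rotating the window one position yields the same digest as computing it from scratch.

(* Rolling the window "abcd" -> "bcde" should match a fresh computation. *)
let () =
  let s = "abcdefgh" in
  let t = Adler32.from s 0 4 in            (* adler32 of "abcd" *)
  let () = Adler32.rotate t s.[0] s.[4] in (* drop 'a', add 'e' *)
  assert (Adler32.digest t = Adler32.digest (Adler32.from s 1 4))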

Adler32 is much weaker than md5 and using it exposes you to collisions. So in fact it acts as a cheap test that might yield false positives. That’s the reason the rsync algorithm keeps both: if the adler32 of the buffer is found in the signature, there is a second check to see whether the md5s match. The fact that one sometimes needs to rotate the checksum and at other times needs to reinitialize it from a part of the buffer complicates the calculation of the updates a bit.
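
The signature now has to carry both hashes for every block. The corresponding file_signature isn’t shown here, but a minimal sketch along the lines of Mark 1 could look like this (the triple layout matches what the updates function below expects):

(* Each entry holds (weak hash, strong hash, block id); the updates
   function below sorts and searches on the weak hash. *)
let file_signature f b_size =
  let f_len = String.length f in
  let n_blocks = (f_len + b_size - 1) / b_size in
  Array.init n_blocks
    (fun i ->
      let start = i * b_size in
      let b_len = min b_size (f_len - start) in
      let b = String.sub f start b_len in
      (Adler32.string b, Digest.string b, i)
    )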

The updates code is shown below.

let updates f0_sig b_size f1 =
  let my_cmp (wh0,_sh0,_i0) (wh1,_sh1,_i1) = compare wh0 wh1 in
  let () = Array.sort my_cmp f0_sig in
  let len = Array.length f0_sig in
  let lookup w =
    let rec find min max =
      let mid = (min + max) / 2 in
      let (mw, ms, mi) = f0_sig.(mid) in
      if mw = w
      then (ms,mi)
      else if min > max then raise Not_found
      else if w > mw then find (mid+1) max
      else find min (mid-1)
    in
    find 0 (len-1)
  in
  let f1_len = String.length f1 in
  let weak = Adler32.from f1 0 b_size in
  let rec loop acc b e =
    if b = f1_len
    then List.rev acc
    else
      let wh = Adler32.digest weak in
      let step_1 () =
        let bc = f1.[b] in
        let a = C bc in
        let b' = b + 1 in
        if e + 1 < f1_len
        then
          let e' = e + 1 in
          let ec = f1.[e] in
          let () = Adler32.rotate weak bc ec in
          loop (a :: acc) b' e'
        else
          let e' = e in
          let () = Adler32.out weak bc in
          loop (a :: acc) b' e'
      in
      try
        let (s0,i0) = lookup wh in
        let sh = Digest.substring f1 b (e - b) in
        if s0 = sh
        then
          let b' = e in
          let e' = min f1_len (e + b_size) in
          let a = B i0 in
          let () = Adler32.reset weak in
          let () = Adler32.update weak f1 b' (e' - b') in
          loop (a :: acc) b' e'
        else step_1 ()
      with Not_found -> step_1 ()
  in
  loop [] 0 b_size

The code is indeed a bit messier, as we have more things to keep track of at the same time, but it’s still quite small. Let’s see how well it performs:

block size   # block signatures   blocks    chars     time(s)
      8192                  704      535    1384448      0.9
      4096                 1407     1084    1332008      0.9
      2048                 2813     2344     960512      0.8
      1024                 5626     4995     646114      0.6
       512                11251    10309     482304      0.6
       256                22501    20401     537600      0.7
       128                45001    42508     319104      0.7

This almost completely removes the cost of a miss, at least for things of this size. The choice of block size does, however, affect the amount of data you need to send over.
There are a lot of other things you could do here:

  • Block size heuristic: something like O(sqrt(file_size)) (see the sketch after this list)
  • Super-instructions: make instructions for consecutive Blocks, and for consecutive Chars
  • Emit function: don’t accumulate the updates, but emit them (using a callback function)
  • Smaller signatures: you could consider dropping some bytes from the strong hash
  • IO
  • Compress the update stream
  • …
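
As an illustration of the first bullet, a block-size heuristic could be as simple as this (my own sketch; the clamping constants are arbitrary, not from the original):

(* Pick a block size on the order of sqrt(file_size), clamped to a sane range. *)
let pick_block_size f_len =
  let bs = int_of_float (sqrt (float_of_int f_len)) in
  max 128 (min bs 65536)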

The last remaining problem is that some modifications are not handled well by the algorithm (for example, 1 byte changed in each block).
Maybe you could try a better algorithm.
There are lots of them out there, and they are worth checking out. (Before you know it you’ll be dabbling in Merkle trees or set reconciliation.)

Anyway, I have already exceeded my quota for this post, but I still want to say a little thing about the synchronisation of folders.

The Next Problem

You have 2 trees of files, A’ and A, that are different but alike. They reside in 2 different locations connected through a network. You want to make A’ identical to A.

What Not To Do

Don’t walk the folder and ‘rsync’ each file you encounter.
A small calculation will show you how bad it really is.
Suppose you have 20000 files, each 1KB. Suppose 1 rsync costs you about 0.1s
(reading the file, sending over the signature, building the stream of updates, applying them).
This costs you about 2000s, or more than half an hour. System administrators know better:
they would not hesitate: “tar the tree, sync the tars, and untar the synced tar”.
Suppose each of those actions takes 5s (an overestimate): you’re still synced in 15s.
Or maybe you can try unison. It’s OCaml too, you know.

have fun,

Romain.

Source: http://blog.incubaid.com/2012/02/14/rediscovering-the-rsync-algorithm/
