Open research/dev problem for SwiftSync
You can represent the unspent transaction output (UTXO) set as a set of bits over the ordered set of all transaction outputs (TXOs), signifying whether each output is spent or unspent. The majority of TXOs will be spent, resulting in many 0 bits and relatively few 1 bits (e.g. 00100001). The number of unspent outputs (1 bits) increases as we get closer to the tip. These predictable patterns are well-suited for compression.
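For concreteness, here is a minimal sketch of that raw representation, packing one spentness bit per TXO in global order (`PackBitfield` is a hypothetical name, not anything from the SwiftSync code):

```cpp
#include <cstdint>
#include <vector>

// Pack per-TXO flags into a bitfield, one bit per output in global
// TXO order (1 = unspent, 0 = spent), LSB-first within each byte.
std::vector<uint8_t> PackBitfield(const std::vector<bool>& unspent)
{
    std::vector<uint8_t> bytes((unspent.size() + 7) / 8, 0);
    for (size_t i = 0; i < unspent.size(); ++i) {
        if (unspent[i]) bytes[i / 8] |= uint8_t{1} << (i % 8);
    }
    return bytes;
}
```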
We currently use the TXO bitfield in SwiftSync inside the so-called "hints file" to speed up IBD, and we intend to implement this in Bitcoin Core, but the file's size makes distributing it harder. For the initial release we will likely forgo optimal encoding, but it would be nice to eventually ship it in an optimally compressed state.
Our most concise known method so far is essentially to count the number of 0s between each 1. For 00100001 that would be [2, 4]. This gets us a ~170MB file that goes down further to ~80MB using generic zip compression, which implies that our current encoding still has plenty of room for optimization.
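A sketch of that counting step, assuming the bitfield arrives as a `std::vector<bool>` (`ToGaps` is a hypothetical name); for 00100001 it returns {2, 4}:

```cpp
#include <cstdint>
#include <vector>

// Derive the gap encoding from the bitfield by counting the 0s
// that precede each 1.
std::vector<uint64_t> ToGaps(const std::vector<bool>& bits)
{
    std::vector<uint64_t> gaps;
    uint64_t zeros = 0;
    for (bool b : bits) {
        if (b) {
            gaps.push_back(zeros);
            zeros = 0;
        } else {
            ++zeros;
        }
    }
    return gaps; // trailing 0s after the last 1 are implied by the total size
}
```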
A possible area for optimization is the number of bits used to represent the number of 0s between each 1 (choose a width that fits most values, and handle the exceptions in a more expensive way). The number of bits can probably go down as we get closer to the end of the chain (fewer 0s between each 1), and as the distribution of 1s and 0s approaches 50% it may make sense to simply start encoding the bitfield as-is.
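One classic realization of this "cheap common case, expensive exceptions" idea is Golomb-Rice coding (the same family BIP 158 block filters use). The post doesn't prescribe it, so treat this as an illustrative sketch with a per-block parameter k: small gaps cost roughly k+1 bits, while rare large gaps remain encodable via a longer unary prefix.

```cpp
#include <cstdint>
#include <vector>

// Minimal LSB-first bit writer.
class BitWriter
{
    std::vector<uint8_t> m_bytes;
    int m_used = 8; // bits used in the last byte
public:
    void WriteBit(bool bit)
    {
        if (m_used == 8) { m_bytes.push_back(0); m_used = 0; }
        if (bit) m_bytes.back() |= uint8_t{1} << m_used;
        ++m_used;
    }
    void WriteBits(uint64_t value, int nbits)
    {
        for (int i = 0; i < nbits; ++i) WriteBit((value >> i) & 1);
    }
    const std::vector<uint8_t>& Bytes() const { return m_bytes; }
};

// Golomb-Rice: unary quotient (gap >> k), a 0 terminator, then the
// k-bit remainder. Assumes a small k chosen per block.
void RiceEncode(BitWriter& out, uint64_t gap, int k)
{
    for (uint64_t q = gap >> k; q > 0; --q) out.WriteBit(true);
    out.WriteBit(false);
    out.WriteBits(gap & ((uint64_t{1} << k) - 1), k);
}
```

A smaller k naturally suits the denser regions near the tip, and once the 1/0 distribution nears 50% the raw bitfield itself becomes the cheaper representation, matching the suggestion above.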
A basic functional implementation would consist of a compress and an unpack function. The former turns the bitfield into an efficiently encoded version; the latter reverses the operation. Code complexity does matter: try not to introduce complex code for minimal gain.
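A round-trip sketch of that API, using plain LEB128-style varints for the gaps rather than anything cleverer (`Compress`, `Unpack`, and the varint helpers are all hypothetical names):

```cpp
#include <cstdint>
#include <vector>

static void WriteVarInt(std::vector<uint8_t>& out, uint64_t v)
{
    while (v >= 0x80) { out.push_back(uint8_t(v) | 0x80); v >>= 7; }
    out.push_back(uint8_t(v));
}

static uint64_t ReadVarInt(const std::vector<uint8_t>& in, size_t& pos)
{
    uint64_t v = 0;
    for (int shift = 0;; shift += 7) {
        uint8_t b = in[pos++];
        v |= uint64_t(b & 0x7f) << shift;
        if (!(b & 0x80)) return v;
    }
}

// Store the total bit count first so Unpack can restore trailing 0s,
// then one varint gap per 1 bit.
std::vector<uint8_t> Compress(const std::vector<bool>& bits)
{
    std::vector<uint8_t> out;
    WriteVarInt(out, bits.size());
    uint64_t zeros = 0;
    for (bool b : bits) {
        if (b) { WriteVarInt(out, zeros); zeros = 0; } else { ++zeros; }
    }
    return out;
}

std::vector<bool> Unpack(const std::vector<uint8_t>& in)
{
    size_t pos = 0;
    std::vector<bool> bits(ReadVarInt(in, pos), false);
    size_t i = 0;
    while (pos < in.size()) {
        i += ReadVarInt(in, pos); // skip the 0s preceding this 1
        bits[i++] = true;
    }
    return bits;
}
```

For 00100001, Compress emits the varints 8, 2, 4, and Unpack reconstructs the original eight bits exactly.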
All of the above are just suggestions on how to tackle the problem. There may be better ways none of us have thought of, and it's plausible that more exploitable patterns exist (for instance, early blocks that are empty apart from a single unspent coinbase output). Hopefully some of you will find this an interesting problem to work on.
I believe you have the right idea here on encoding. Compatibility in data representation between file versions is not necessary, however, so the reserve bytes can be omitted. I have implemented the hybrid scheme on my branch, where you can see the diff here. The encoding first computes the size of each representation and encodes the smallest one for that block. The flag for which encoding was used is included in the block height field. This achieves a size of 161MB at block height 930,000. I encourage you to build the branch yourself and run the RPC if you are interested.

For you (or anyone else interested in this problem), there is a data structure called a Roaring bitmap which has been developed for efficient bitmap encodings. I would be interested to see if we can achieve better than 161MB using this technique. There is a C++ library that should be usable off-the-shelf, which could replace my current implementation. Note that we will want to preserve the properties of the current hints file if possible: it can be accessed concurrently and is indexed by block height.
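For anyone who wants to experiment, here is a sketch of what a per-block Roaring serialization could look like using CRoaring's C++ wrapper. The function names and the per-block layout are assumptions; the library calls (`add`, `runOptimize`, `getSizeInBytes`, `write`, `read`) come from CRoaring's `roaring.hh` header.

```cpp
#include <cstdint>
#include <vector>

#include "roaring.hh" // amalgamated C++ header from github.com/RoaringBitmap/CRoaring

// One bitmap per block, holding the positions of that block's unspent
// outputs, so the file can stay indexed by block height and each
// block's bitmap can be read independently (e.g. concurrently).
std::vector<char> SerializeBlockHints(const std::vector<uint32_t>& unspent_positions)
{
    roaring::Roaring bm;
    for (uint32_t pos : unspent_positions) bm.add(pos);
    bm.runOptimize(); // switch chunks to run containers where that is smaller
    std::vector<char> buf(bm.getSizeInBytes());
    bm.write(buf.data());
    return buf;
}

roaring::Roaring DeserializeBlockHints(const char* buf)
{
    return roaring::Roaring::read(buf);
}
```

Keeping per-block bitmaps length-prefixed in the file would preserve the height-indexed, concurrently accessible layout mentioned above.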
Also note that with a generic compression algorithm like zip, the file is reduced to 98MB, so this is a good baseline for what is possible, though it is certainly not a requirement for settling on a format. If possible I would like to strike a balance between size and ease of maintenance.