For this, the dataset is modified to add effects and modifiers.
Additionally, some bugs in dogma-engine resulted in wrong numbers, so we
bump that dependency while we're at it.
When data-files are not cached, some chunks are really small (like 36
bytes small). This could cause the buffer to be too small to decode the
next entry, causing a failure.
Solve this by always reading at least 2048 bytes into the next buffer.
This is a bit slower, as it has to copy all the chunks leading up to
2048 bytes into a new buffer.
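The accumulation described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation; the function name `fill_buffer` and the iterator-of-chunks shape are assumptions for the example.

```rust
/// Minimum number of bytes to accumulate before attempting to decode
/// the next entry (assumed threshold from the fix described above).
const MIN_READ: usize = 2048;

/// Copy incoming chunks into one buffer until it holds at least
/// MIN_READ bytes, or until the chunk stream is exhausted.
/// Small chunks (e.g. 36 bytes) are coalesced instead of being
/// handed to the decoder one by one.
fn fill_buffer<I: Iterator<Item = Vec<u8>>>(chunks: &mut I) -> Vec<u8> {
    let mut buffer = Vec::with_capacity(MIN_READ);
    while buffer.len() < MIN_READ {
        match chunks.next() {
            Some(chunk) => buffer.extend_from_slice(&chunk),
            None => break, // stream ended; return what we have
        }
    }
    buffer
}
```

The copying is the cost mentioned above: every small chunk is memcpy'd into the new buffer until the 2048-byte threshold is crossed.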