06-20-2014, 04:43 PM
This would make the server very CPU-heavy, and it's already quite CPU-heavy as it is.
The storage isn't only for the block types, but also for the lighting. I did some profiling just a few days ago and found that generating uses about 20 % of the CPU time and lighting uses about 60 %. By removing the storage for these, you'd be making the server generate and light chunks an order of magnitude more often.
If we went with this scheme, then for each chunk save we'd need to generate the original chunk data, compare it with the current contents, and save the differences. To load, we'd need to generate the original chunk data again, load the differences and apply them. That's two extra generator runs just to shave a little off the savefile size.
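Just to make the cost concrete, here's a rough standalone sketch of what that save/load path would look like. The names (GenerateChunk, DiffForSave, ApplyDiffOnLoad) and the flat 16x16x256 layout are made up for illustration, not the actual server code:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <map>

// Hypothetical flat chunk storage: 16 x 16 x 256 block types.
constexpr size_t NumBlocks = 16 * 16 * 256;
using BlockTypes = std::array<uint8_t, NumBlocks>;

// Stand-in for the real world generator; in reality this is the expensive step
// that would now have to run on every save AND every load.
static BlockTypes GenerateChunk(int a_ChunkX, int a_ChunkZ)
{
	BlockTypes blocks;
	blocks.fill(static_cast<uint8_t>((a_ChunkX + a_ChunkZ) & 0xff));
	return blocks;
}

// Saving: re-generate the pristine chunk, then store only the indices that differ.
static std::map<size_t, uint8_t> DiffForSave(int a_ChunkX, int a_ChunkZ, const BlockTypes & a_Current)
{
	BlockTypes pristine = GenerateChunk(a_ChunkX, a_ChunkZ);  // extra generator run #1
	std::map<size_t, uint8_t> diff;
	for (size_t i = 0; i < NumBlocks; ++i)
	{
		if (a_Current[i] != pristine[i])
		{
			diff[i] = a_Current[i];
		}
	}
	return diff;
}

// Loading: re-generate the pristine chunk again, then apply the stored differences.
static BlockTypes ApplyDiffOnLoad(int a_ChunkX, int a_ChunkZ, const std::map<size_t, uint8_t> & a_Diff)
{
	BlockTypes blocks = GenerateChunk(a_ChunkX, a_ChunkZ);  // extra generator run #2
	for (const auto & change : a_Diff)
	{
		blocks[change.first] = change.second;
	}
	return blocks;
}

int main()
{
	BlockTypes current = GenerateChunk(3, 5);
	current[100] = 42;  // a player changed one block
	auto diff = DiffForSave(3, 5, current);
	BlockTypes loaded = ApplyDiffOnLoad(3, 5, diff);
	std::printf("diff entries: %zu, round-trip ok: %d\n", diff.size(), loaded == current);
	return 0;
}
```

The diff itself is cheap; it's the two GenerateChunk() calls wrapped around it that make the scheme expensive.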
It gets even worse when the chunk needs lighting, because lighting requires the blocktypes of all chunks in the 3x3 neighborhood. So you need to load (and therefore generate) 9 chunks just to light one. Sure, when the neighboring chunks get loaded as well, the cost gets amortized so that the ratio of generating to lighting approaches 1:1, but typical loading patterns show that chunks aren't loaded in neighboring groups all that often.
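A minimal sketch of that dependency, with LoadOrGenerateBlockTypes and CalculateLighting as placeholder names for the real steps:

```cpp
#include <cstdio>

// Stand-ins for the real (expensive) steps; names are illustrative only.
static void LoadOrGenerateBlockTypes(int a_ChunkX, int a_ChunkZ)
{
	std::printf("need blocktypes for chunk [%d, %d]\n", a_ChunkX, a_ChunkZ);
}

static void CalculateLighting(int a_ChunkX, int a_ChunkZ)
{
	std::printf("lighting chunk [%d, %d]\n", a_ChunkX, a_ChunkZ);
}

// Lighting one chunk touches its entire 3x3 neighborhood of blocktypes,
// so under the proposed scheme up to 9 chunks may need re-generating first.
static void LightChunk(int a_ChunkX, int a_ChunkZ)
{
	for (int dz = -1; dz <= 1; ++dz)
	{
		for (int dx = -1; dx <= 1; ++dx)
		{
			LoadOrGenerateBlockTypes(a_ChunkX + dx, a_ChunkZ + dz);
		}
	}
	CalculateLighting(a_ChunkX, a_ChunkZ);
}

int main()
{
	LightChunk(0, 0);
	return 0;
}
```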
Also, consider that there's already compression at work while saving. You might not believe it, but the compression actually helps *a lot*. And there's no telling whether the differences would compress as well as the original data; I have a feeling they wouldn't.
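For what it's worth, here's a tiny standalone zlib toy (not the server's actual saving code) showing why full chunk data compresses so well: long runs of identical block types shrink to almost nothing, which a sparse diff representation wouldn't necessarily match:

```cpp
#include <cstdio>
#include <vector>
#include <zlib.h>

// Compress a buffer with zlib and return the compressed size (0 on failure).
static size_t CompressedSize(const std::vector<unsigned char> & a_Data)
{
	uLongf destLen = compressBound(static_cast<uLong>(a_Data.size()));
	std::vector<Bytef> dest(destLen);
	int res = compress(dest.data(), &destLen, a_Data.data(), static_cast<uLong>(a_Data.size()));
	if (res != Z_OK)
	{
		return 0;
	}
	return static_cast<size_t>(destLen);
}

int main()
{
	// A fake "chunk column": mostly air with a solid bottom quarter.
	std::vector<unsigned char> blockTypes(16 * 16 * 256, 0);  // air
	for (size_t i = 0; i < 16 * 16 * 64; ++i)
	{
		blockTypes[i] = 1;  // stone
	}
	std::printf("raw: %zu bytes, compressed: %zu bytes\n", blockTypes.size(), CompressedSize(blockTypes));
	return 0;
}
```

(Build with -lz.) A diff of scattered player edits has much less of that run-length regularity, which is why I doubt it would compress as well.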