08-30-2012, 10:39 PM
08-30-2012, 10:44 PM
Quote: if you'd like svn write access, contact FakeTruth. So next time you can commit fixes directly
It would take a whole eternity (probably two of them) for me to download the svn repo.

I'll probably add "pause/stop" functionality today...
08-30-2012, 10:53 PM
You could checkout just the Plugins subdirectory.
Currently (rev 806) downloading the entire repository takes a transfer of 2.6 MiB, which is not that bad compared to the nightly builds at 0.5 MiB each...
08-30-2012, 10:56 PM
Quote: Currently (rev 806) downloading the entire repository takes a transfer of 2.6 MiB
Hmmmm... I thought it would take about 1+ GiB because of the MSVS project stuff

08-30-2012, 11:41 PM
~700 MiB is needed if you want to download the MSVS 2008 Express ISO image, to install it in order to compile the code. But if you only want to edit the plugins, there's no need for that.
08-30-2012, 11:56 PM
Wheee! V6 here with "stop" function and realtime (sort of) web progress report!
Maybe one day I'll even add a "pause/resume" function
Cheers!
08-31-2012, 12:30 AM
We're out of sync on the versions; I'm already on v5 in SVN (so this one would make it v6)
08-31-2012, 01:05 AM
(08-30-2012, 10:56 PM)Taugeshtu Wrote: [ -> ]
Quote: Currently (rev 806) downloading the entire repository takes a transfer of 2.6 MiB
Hmmmm... I thought it would take about 1+ GiB because of the MSVS project stuff
Then I'm in, I guess!
(08-30-2012, 11:41 PM)xoft Wrote: [ -> ]~700 MiB is needed if you want to download the MSVS 2008 Express ISO image for installing, in order to compile the code. But if you only want to edit the plugins, no need for that.
Maybe he was talking about the .ncb files and such, which can become pretty big.

08-31-2012, 01:56 AM
V6 functionality committed as r807 
And yes, I'm surprised that MSVS doesn't make the svn folder HUUUUUUGE (Unity3d likes to pump everything up with cache files)

03-25-2013, 01:55 AM
I know this plugin is considered more or less finished, but still, I have to try this
Can I make a request for a change? 
Would it be possible to change the order of the chunks to one of the following patterns?
1, Spiral from the center to the outside
2, Spiral from the outside to the center
3, Zig-zag lines (one line from left to right, the next line from right to left)
4, Hilbert curve in the specified area (how to fit on non-2^N dimensions?)
Any one of these patterns should provide a significant increase in chunk generation speed.
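For illustration, patterns 1 and 3 can be sketched as coordinate generators like this (a minimal sketch in Python, not ChunkWorx's actual iteration code - the function names and rectangle convention are mine):

```python
def zigzag(width, height):
    """Pattern 3: left-to-right, then right-to-left on the next row."""
    order = []
    for z in range(height):
        xs = range(width) if z % 2 == 0 else range(width - 1, -1, -1)
        order.extend((x, z) for x in xs)
    return order

def spiral_out(width, height):
    """Pattern 1: square spiral from the center outward.

    Walks the classic 1,1,2,2,3,3,... leg pattern and simply skips
    cells that fall outside the requested rectangle, which also answers
    the non-square / non-2^N sizing question for this pattern."""
    x, z = width // 2, height // 2
    dx, dz = 1, 0
    leg = 1
    order = []
    while len(order) < width * height:
        for _ in range(2):                # two legs per leg length
            for _ in range(leg):
                if 0 <= x < width and 0 <= z < height:
                    order.append((x, z))
                x, z = x + dx, z + dz
            dx, dz = -dz, dx              # turn 90 degrees
        leg += 1
    return order
```

Note that every step of the zig-zag moves to a chunk adjacent to the previous one, which is exactly what keeps the generator's neighbor caches warm.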
Reasoning (will get a bit technical):
Some of the chunk generator components use knowledge of neighboring chunks. For example, the height generator looks through an 8x8 column area around the current column for biomes, and for each biome it encounters, it adds a bit of that biome's flavor to the height. So it needs a 3x3 area of chunks' biome data around the currently generated chunk. Another example: the tree structure generator looks at neighboring chunks' composition to see if a tree would actually fit. Now combine these two: to generate trees for a single chunk, composition (and thus height) must be generated for a 3x3 chunk area, which in turn means biomes must be generated for a 5x5 chunk area. Then the next chunk is populated with trees, requiring another 5x5 chunk area of biomes. BUT most of those have already been generated for the previous chunk!
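The dependency cascade can be checked with a bit of set arithmetic (a toy illustration only - the helper names are mine, not MCServer's):

```python
def chunks_within(cx, cz, radius):
    """All chunk coords within `radius` of (cx, cz), Chebyshev-wise."""
    return {(cx + dx, cz + dz)
            for dx in range(-radius, radius + 1)
            for dz in range(-radius, radius + 1)}

def biomes_needed_for_trees(cx, cz):
    """Trees need composition for the 3x3 around the chunk, and each of
    those compositions needs biomes for ITS 3x3 - union is a 5x5 area."""
    needed = set()
    for (nx, nz) in chunks_within(cx, cz, 1):   # composition: 3x3
        needed |= chunks_within(nx, nz, 1)      # biomes per composition: 3x3
    return needed

a = biomes_needed_for_trees(0, 0)
b = biomes_needed_for_trees(1, 0)    # the neighboring chunk
print(len(a), len(a & b))            # 25 chunks, of which 20 are shared
```

So 20 of the 25 biome chunks needed for one chunk are needed again for its neighbor - hence the cache described next.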
So, in order to keep things feasible, I added a cache to those biome and height generators. Once generated, an item stays in the cache until it's too old and gets pushed out by newer data.
Now ChunkWorx comes along, and it generates chunks row by row. Everything works fine as long as it's moving along one row - for each chunk its 5x5 biome neighborhood is calculated, but 4x5 of it is already in the cache - fast. Then ChunkWorx skips to the next row. Ouch, a new chunk completely foreign to the data so far. The cache misses everything and has to be refilled.
You can actually see this behavior on slower computers (or debug builds): while they generate one row of chunks, they go quite fast, but then the first chunk of the next row takes about 8 times longer to generate.
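The effect can be demonstrated with a toy LRU-cache simulation (purely illustrative: the capacity, grid size and eviction policy are made-up assumptions, not MCServer's actual cache):

```python
from collections import OrderedDict

def count_misses(order, capacity=30, radius=2):
    """Walk chunks in `order`; each chunk touches its 5x5 biome
    neighborhood in an LRU cache of `capacity` entries. Count misses."""
    cache = OrderedDict()
    misses = 0
    for (cx, cz) in order:
        for dz in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                key = (cx + dx, cz + dz)
                if key in cache:
                    cache.move_to_end(key)        # refresh LRU position
                else:
                    misses += 1
                    cache[key] = True
                    if len(cache) > capacity:
                        cache.popitem(last=False)  # evict least recent
    return misses

W = H = 8
row_major = [(x, z) for z in range(H) for x in range(W)]
zig_zag = [(x if z % 2 == 0 else W - 1 - x, z) for z in range(H) for x in range(W)]
print(count_misses(row_major), count_misses(zig_zag))
```

Row-major order pays a full cold refill at the start of every row, while the zig-zag order never jumps away from its warm neighborhood, so it misses noticeably less often.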
