Could I have some performance test data?
#21
(04-26-2015, 02:39 AM)xoft Wrote: Let me give you a quick explanation of Minecraft performance:
All Minecraft servers, regardless of their underlying technology, will need to accommodate chunks around each player in RAM. Let's find the upper bound of such RAM usage. Say you want to guarantee the minimum viewdistance of 4 chunks in each direction. This means each player will have up to (4 + 1 + 4) * (4 + 1 + 4) = 81 chunks loaded for them. In the worst case scenario, each player is in a different location and none of their loaded chunks overlap. For a thousand players, this means 81,000 chunks loaded in RAM and processing events. If you consider an average chunk being 1/3 used (has non-air blocks in 6 sections out of the 16 vertical sections), this means that the server needs to hold 81,000 * 6 = 486,000 chunk sections in memory. Each section contains block types, block metadata, skylight and blocklight values, which comes to 10,240 raw bytes. Your current RAM demands are thus 486,000 * 10,240 = 4,976,640,000 bytes, or some 4750 MiB of RAM. And that's just the absolute minimum of data; your players won't be happy with a viewdistance of 4. Doubling the viewdistance effectively quadruples the amount of data needed, so for a viewdistance of 8 you'd need more than 19 GiB of RAM for 1000 players. Next, my semi-qualified estimate is that the additional housekeeping data will take about 20% of the base RAM, so we're at 23 GiB of RAM. Still convinced this is viable?

I'm pretty sure you'll reach the CPU bounds earlier than the RAM bounds. However, there's no easy way to provide any kind of estimate. Let's try this: the server will have to "tick" mobs in all the loaded chunks. That means that each chunk will get scanned, mobs will drown / suffocate / burn / fall / pathfind. This means that all that chunk data we've calculated earlier will need to be read, at least 20 times per second, to provide reasonable gameplay. That means 23 GiB * 20 ticks / second = 460 GiB / second.
If we are to trust the RAM benchmarks here: http://www.memorybenchmark.net/read_unca...intel.html, we should assume that current technology can read up to 20 GiB per second - nowhere near the value we need!
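To make the arithmetic above easier to check, here is a small sketch that just redoes the same back-of-the-envelope numbers. It is not MCServer code; all the constants are the post's own assumptions, and the 20 GiB/s figure is only the rough read bandwidth taken from the linked benchmark page:

```cpp
// Back-of-the-envelope estimate mirroring the assumptions in the quoted post.
// Not MCServer code; all constants are the post's own numbers.
#include <cstdio>

int main()
{
    const int players         = 1000;
    const int viewDistance    = 4;                           // chunks in each direction
    const int sectionsUsed    = 6;                           // of 16 vertical sections per chunk
    const double sectionBytes = 4096 + 2048 + 2048 + 2048;   // types + meta + skylight + blocklight = 10,240

    double chunksPerPlayer = (2.0 * viewDistance + 1) * (2 * viewDistance + 1);   // 81; worst case, no overlap
    double baseBytes = players * chunksPerPlayer * sectionsUsed * sectionBytes;   // ~4.98e9 bytes, ~4750 MiB
    double dist8GiB  = 4.0 * baseBytes / (1024.0 * 1024.0 * 1024.0);              // doubling viewdistance ~quadruples
    double totalGiB  = 1.2 * dist8GiB;                                            // +20% housekeeping -> ~22-23 GiB

    double neededGiBps    = totalGiB * 20.0;   // read the whole working set 20 times a second
    const double ramGiBps = 20.0;              // rough read bandwidth from the linked benchmark

    printf("RAM: ~%.0f GiB; full per-tick scan: ~%.0f GiB/s needed vs ~%.0f GiB/s available\n",
           totalGiB, neededGiBps, ramGiBps);
    return 0;
}
```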

Many thanks for your explanation. If I have many machines to compute with at the same time, does this server support distributed computing?


I have used Lua for some projects; it's really easy to use. But what I mean is, if I change from Java, my boss won't give me the time to rewrite so many plugins, because not many plugins have already been written. Smile
#22
There have been a few attempts to implement a distributed minecraft server, all of those have failed at their startup phase - they usually implement some basic protocol support and then die off because the issues they face are far too large to overcome.
So, no, MCServer doesn't support distributing one world across multiple machines. However, we do support BungeeCord, which allows us to distribute *some* workload over multiple machines - each world can be run on a different machine.
#23
That was why I said that you would need to work with the server makers. MCServer has the potential to be able to use more than one machine. Not by distributing the world, but both world generation and lighting could be spread onto different machines if you could get them to compile separately. It would, however, be a lot of work to split them onto separate machines.
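Purely as an illustration of what that separation could look like (this is hypothetical; the class names below are made up and do not match MCServer's actual code), the generator only needs a narrow interface, so a remote implementation could in principle be swapped in behind it:

```cpp
// Hypothetical sketch: chunk generation hidden behind an interface so a remote
// implementation could stand in for the local one. Names are invented and do
// not match MCServer's real classes; the networking part is stubbed out.
#include <array>
#include <cstdint>
#include <cstdio>

struct ChunkData
{
    std::array<uint8_t, 16 * 16 * 256> blockTypes{};  // one block type byte per block
};

class IChunkGenerator
{
public:
    virtual ~IChunkGenerator() = default;
    virtual ChunkData Generate(int chunkX, int chunkZ) = 0;
};

// Runs in-process, as generation does today.
class LocalGenerator : public IChunkGenerator
{
public:
    ChunkData Generate(int chunkX, int chunkZ) override
    {
        ChunkData chunk;
        chunk.blockTypes.fill(1);  // pretend everything is stone
        return chunk;
    }
};

// Would forward the request to a generator process on another machine.
class RemoteGenerator : public IChunkGenerator
{
public:
    ChunkData Generate(int chunkX, int chunkZ) override
    {
        // serialize (chunkX, chunkZ), send it over the wire, wait for the chunk back...
        printf("would request chunk %d,%d from a worker machine\n", chunkX, chunkZ);
        return ChunkData{};
    }
};

int main()
{
    LocalGenerator gen;
    ChunkData chunk = gen.Generate(0, 0);
    printf("generated %zu blocks locally\n", chunk.blockTypes.size());
    return 0;
}
```

The lighting engine could sit behind a similar interface; the hard part, as said above, is untangling the code so each piece actually compiles and runs on its own.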

Also we could probably use the automatic teleportation technique used by some servers, but again that would be a new feature that would have to be added.
#24
Off-topic posts cleared.
#25
Also, where is the one million clients number coming from? There's no way the official server or any of its derivatives (Bukkit, CraftBukkit, etc.) can support that many clients in a single world, so if you're distributing them in some way then MCServer can probably use similar techniques. Otherwise it seems like an unrealistic number.
#26
(04-26-2015, 05:03 PM)xoft Wrote: There have been a few attempts to implement a distributed minecraft server, all of those have failed at their startup phase - they usually implement some basic protocol support and then die off because the issues they face are far too large to overcome.
So, no, MCServer doesn't support distributing one world across multiple machines. However, we do support BungeeCord, which allows us to distribute *some* workload over multiple machines - each world can be run on a different machine.

Thanks , I got it.

(04-27-2015, 03:41 AM)worktycho Wrote: Also, where is the one million clients number coming from? There's no way the official server or any of its derivatives (Bukkit, CraftBukkit, etc.) can support that many clients in a single world, so if you're distributing them in some way then MCServer can probably use similar techniques. Otherwise it seems like an unrealistic number.

It's a trade secret, so I can't tell you where the players come from; I'm looking at some existing servers to evaluate the situation. It is very common here for a game to have one million players online at the same time. If the game becomes popular, we can rewrite a brand new server to support such a big number of players.
Smile
#27
If you're willing to write a server from scratch, please consider using MCServer as a base for the game logic, and contributing the changes back to the community. I would certainly be interested in helping towards distributed server support in MCServer.
#28
(04-27-2015, 06:50 PM)worktycho Wrote: If you're willing to write a server from scratch, please consider using MCServer as a base for the game logic, and contributing the changes back to the community. I would certainly be interested in helping towards distributed server support in MCServer.

If I get this chance, I will. Smile
#29
I think a thousand players is well within the capabilities of modern hardware. I'd venture to say 23 GiB of memory is easy for a server, and given that single-core performance has stalled a bit, memory size (and core count) are the most easily scalable things right now. Also, when we get chunk palettes implemented, I'm sure memory usage can get way lower.
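For anyone who hasn't seen the idea, a chunk palette stores a small per-section list of the distinct block types that actually occur and then only a few bits of index per block. A rough sketch of the idea (not MCServer's actual implementation):

```cpp
// Rough illustration of palette-based section storage; not MCServer code.
// A section with at most 16 distinct block types only needs a 4-bit palette
// index per block instead of a full block type byte.
#include <cstdint>
#include <cstdio>
#include <vector>

struct PalettedSection
{
    std::vector<uint16_t> palette;  // distinct block types present in this section
    std::vector<uint8_t>  indices;  // one 4-bit index per block, packed two per byte

    // 16 * 16 * 16 = 4096 blocks -> 2048 bytes of packed indices
    PalettedSection() : indices(4096 / 2, 0) {}
};

int main()
{
    PalettedSection section;
    section.palette = {0 /*air*/, 1 /*stone*/, 3 /*dirt*/};

    size_t bytes = section.palette.size() * sizeof(uint16_t) + section.indices.size();
    printf("paletted section: ~%zu bytes vs 4096 bytes of raw block types\n", bytes);
    return 0;
}
```

Sections containing only a handful of block types, which is most of them, get much cheaper.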

Reading every single block in every chunk every tick might be too much of a worst case. We're updating 50 blocks per chunk every tick, and assuming there aren't absurd numbers of entities/redstone wire, I don't think an average of 1% of that 460 GiB/second memory throughput is an unreasonable estimate.
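As a rough picture of why a tick touches so little of the loaded data (assuming the 50 random block updates per chunk per tick stated above; that number is taken from the post, not measured):

```cpp
// Illustrative only: a tick visits a handful of random blocks per chunk rather
// than scanning all of them. The "50 per chunk per tick" figure comes from the
// post above, not from a measurement.
#include <cstdio>
#include <random>

int main()
{
    const int blocksPerChunk = 16 * 16 * 256;  // 65,536
    const int randomTicks    = 50;             // blocks updated per chunk per tick

    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> pick(0, blocksPerChunk - 1);
    for (int i = 0; i < randomTicks; ++i)
    {
        int blockIndex = pick(rng);  // this block's tick handler would run here
        (void)blockIndex;
    }

    printf("touched %d of %d blocks (%.3f%%) this tick\n",
           randomTicks, blocksPerChunk, 100.0 * randomTicks / blocksPerChunk);
    return 0;
}
```

Entities, redstone and lighting add more reads on top of that, which is where the 1% ballpark comes from rather than the roughly 0.08% of the random ticks alone.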

And all this talk of independent, non-overlapping chunks gives rise to another question. Given that the concept of chunking exists anyway, why bottleneck ticks through the World? It could be a massive boon to dispatch chunk ticks to a thread pool. Any sort of reads, even across chunks, are fine. Block writes can be redirected to some sort of overlay that gets applied after everything finishes. Client modifications are applied as normal before ticking begins, and updates are computed from the overlay (basically PendingSendBlocks) and sent back to clients after ticking ends.
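A minimal sketch of that dispatch idea (hypothetical, not how MCServer ticks worlds today; std::async stands in for a real thread pool and the types are toy stand-ins):

```cpp
// Hypothetical sketch: tick chunks in parallel, collect block writes in a
// per-chunk overlay, apply the overlays after all ticks finish. Not MCServer
// code; std::async is used instead of a proper thread pool for brevity.
#include <cstdio>
#include <future>
#include <map>
#include <vector>

struct Chunk
{
    int x, z;
    std::map<int, int> blocks;  // block index -> block type (toy model)
};

struct Overlay
{
    std::map<int, int> pendingWrites;  // writes produced while ticking, applied later
};

// Ticks one chunk: reads go straight to the chunk, writes go into the overlay.
Overlay TickChunk(const Chunk & chunk)
{
    Overlay overlay;
    // Toy "game logic": if block 0 is air (or missing), turn it into stone.
    auto it = chunk.blocks.find(0);
    if ((it == chunk.blocks.end()) || (it->second == 0))
    {
        overlay.pendingWrites[0] = 1;
    }
    return overlay;
}

int main()
{
    std::vector<Chunk> world = {{0, 0, {}}, {1, 0, {}}, {0, 1, {}}};

    // Dispatch every chunk's tick; no task writes to a chunk directly.
    std::vector<std::future<Overlay>> tasks;
    for (const Chunk & chunk : world)
    {
        tasks.push_back(std::async(std::launch::async, TickChunk, std::cref(chunk)));
    }

    // Apply the overlays once all ticks finish; this is also where the changes
    // would be queued for clients (basically filling PendingSendBlocks).
    for (size_t i = 0; i < tasks.size(); ++i)
    {
        Overlay overlay = tasks[i].get();
        for (const auto & write : overlay.pendingWrites)
        {
            world[i].blocks[write.first] = write.second;
        }
        printf("chunk %d,%d: %zu block change(s) queued for clients\n",
               world[i].x, world[i].z, overlay.pendingWrites.size());
    }
    return 0;
}
```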

In fact, if each chunk gets a lock we don't need any sort of overlay; we just have to hope that cross-chunk access is rare enough that locking overhead doesn't become an issue.