Could I have some performance test data? - Printable Version

+- Cuberite Forum (https://forum.cuberite.org)
+-- Forum: Cuberite (https://forum.cuberite.org/forum-4.html)
+--- Forum: Discussion (https://forum.cuberite.org/forum-5.html)
+--- Thread: Could I have some performance test data? (/thread-1888.html)
RE: Could I have some performance test data? - worktycho - 04-26-2015

I suspect the reason we don't have mods is that they would be quite difficult to make, due to the rapid development pace and the C++ codebase; it's easier to either write the feature as a plugin or get it integrated into the server core.


RE: Could I have some performance test data? - leozzyzheng - 04-26-2015

(04-26-2015, 02:27 AM)NiLSPACE Wrote: I haven't seen any mod developers (as in, writing the mod in the server code), but there are a few people writing plugins actively (including me ^^). Most of our plugins are pretty small though. I think the biggest plugin we currently have is either WorldEdit or Gallery.

Thanks for your detailed reply, I've got a lot of information from it. I will do some tests next week, though I'm afraid I can't persuade my boss to use this server. But it's very nice to find an MC server written in C++; I will keep a close eye on it, and when it becomes stable, maybe I will switch from Spigot.


RE: Could I have some performance test data? - NiLSPACE - 04-26-2015

Yes, that and probably because it's hard to have multiple mods at the same time. You'd have to replace the source files with the source files from the mod you're trying to install, and by doing that you could overwrite the source code of a different mod. Not to mention that you would have to recompile the server after that.


RE: Could I have some performance test data? - leozzyzheng - 04-26-2015

(04-26-2015, 02:34 AM)NiLSPACE Wrote: Yes, that and probably because it's hard to have multiple mods at the same time. You'd have to replace the source files with the source files from the mod you're trying to install, and by doing that you could overwrite the source code of a different mod. Not to mention that you would have to recompile the server after that.

That's really hard for developers. Thanks.


RE: Could I have some performance test data? - xoft - 04-26-2015

Let me give you a quick explanation of Minecraft performance: all Minecraft servers, regardless of their underlying technology, need to keep the chunks around each player in RAM. Let's find the upper bound of such RAM usage.

Say you want to guarantee a minimum viewdistance of 4 chunks in each direction. This means each player will have up to (4 + 1 + 4) * (4 + 1 + 4) = 81 chunks loaded for them. In the worst-case scenario, each player is in a different location and none of their loaded chunks overlap. For a thousand players, this means 81,000 chunks loaded in RAM and processing events. If you consider an average chunk to be 1/3 used (it has non-air blocks in 6 of its 16 vertical sections), the server needs to hold 81,000 * 6 = 486,000 chunk sections in memory. Each section contains block types, block metadata, skylight and blocklight values, which comes to 10,240 raw bytes. Your RAM demands are thus 486,000 * 10,240 = 4,976,640,000 bytes, or some 4750 MiB of RAM.

And that's just the absolute minimum of data; your players won't be happy with a viewdistance of 4. Doubling the viewdistance effectively quadruples the amount of data needed, so for a viewdistance of 8 you'd need more than 19 GiB of RAM for 1000 players. Next, my semi-qualified estimate is that the additional housekeeping data will take about 20% on top of the base RAM, so we're at 23 GiB of RAM. Still convinced this is viable?

I'm pretty sure you'll reach the CPU bounds earlier than the RAM bounds. However, there's no easy way to provide any kind of estimate.
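A minimal sketch of the RAM arithmetic above, using the figures assumed in this post (1000 players, viewdistance 4, 10,240 bytes per section, 1/3 chunk occupancy, ~20% housekeeping overhead). This is illustrative arithmetic only, not actual Cuberite code:

Code:
// Worst-case RAM estimate for chunk data, following the numbers quoted in the post above.
// All constants are assumptions taken from that post, not values read from Cuberite itself.
#include <cstdio>

int main()
{
    const long long numPlayers       = 1000;
    const long long viewDistance     = 4;                          // chunks in each direction
    const long long occupiedSections = 6;                          // ~1/3 of the 16 vertical sections contain non-air blocks
    const long long bytesPerSection  = 4096 + 2048 + 2048 + 2048;  // blocktypes + meta + skylight + blocklight = 10,240
    const double    housekeeping     = 1.2;                        // ~20% overhead on top of raw chunk data

    // Worst case: no two players share any loaded chunk.
    const long long chunksPerPlayer = (2 * viewDistance + 1) * (2 * viewDistance + 1);  // 81
    const long long totalChunks     = chunksPerPlayer * numPlayers;                     // 81,000
    const long long totalSections   = totalChunks * occupiedSections;                   // 486,000
    const double    totalBytes      = double(totalSections) * bytesPerSection;          // 4,976,640,000

    std::printf("raw chunk data:    %.0f MiB\n", totalBytes / (1024.0 * 1024.0));
    std::printf("with housekeeping: %.0f MiB\n", totalBytes * housekeeping / (1024.0 * 1024.0));
    return 0;
}

Doubling the viewdistance roughly quadruples chunksPerPlayer (17 * 17 = 289 instead of 81), which is where the ~19 GiB figure for viewdistance 8 comes from.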
Let's try this: the server has to "tick" mobs in all the loaded chunks. That means each chunk will get scanned, and mobs will drown / suffocate / burn / fall / pathfind. So all that chunk data we've calculated earlier needs to be read at least 20 times per second to provide reasonable gameplay. This means 23 GiB * 20 ticks / second = 460 GiB / second. If we are to trust the RAM benchmarks here: http://www.memorybenchmark.net/read_uncached_ddr3_intel.html , we should assume that current technology can read up to 20 GiB per second - nowhere near the value we need!


RE: Could I have some performance test data? - NiLSPACE - 04-26-2015

(04-26-2015, 02:35 AM)leozzyzheng Wrote: That's really hard for developers. Thanks.

That's why we have the Lua API. Whoa xoft, that's a lot of information :O

EDIT: Did you also take into account that not all players will need 81 chunks for themselves? Many of the players will overlap in the chunks they need.


RE: Could I have some performance test data? - xoft - 04-26-2015

Heh, it took me so long to write that reply that there are already two pages' worth of text catching up.


RE: Could I have some performance test data? - worktycho - 04-26-2015

Even if you take that into account, until we fix issue #1800 the server has to touch every chunk section individually, per player, every time a chunk needs to be updated significantly. So at 20 GiB/sec, ignoring everything else, just sending chunks to clients will cap out at <120,000 chunk downloads per second. And following xoft's estimates, at 20 GiB/sec with 1,000,000 players you'd need an average of >1000 players per chunk. That would cause far more chunk-sending events, so you would have issues with at least one of chunk updates or ticking, probably both. This is an issue that will affect any Minecraft server that does not support clustering.


Thousands or millions? - LogicParrot - 04-26-2015

I thought xoft's estimate was for a thousand players, not a million?


RE: Could I have some performance test data? - worktycho - 04-26-2015

The 1,000,000 number comes from leozzyzheng's post on the first page; I was pointing out that if you can handle ticking for 1,000 players on independent chunks, you can handle 1,000,000 with 1,000 players sharing each chunk. Although I'm not sure about the 20 GiB/s number as a maximum: with NUMA and multiple memory controllers you can load more in parallel. Looking at the specs of something like the Dell R920 (Dell's top-of-the-range server), it should theoretically be able to run up to 8 parallel DDR channels, so with a beefy enough server you might be able to go up to 160 GiB/s. But that's only 8 times as much data, and I have no idea how much the server would be able to take advantage of that sort of memory architecture.
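To make the bandwidth comparisons in the last few posts concrete, here is a similar sketch using the thread's assumed numbers (a 23 GiB working set, 20 ticks per second, 20 GiB/s single-node read bandwidth, and a hypothetical 8-channel machine at 160 GiB/s). The chunk-send cap here is just the read bandwidth divided by the size of a full 16-section chunk; none of these figures are measurements of Cuberite or of any specific hardware:

Code:
// Bandwidth arithmetic from the posts above; all inputs are the thread's
// rough assumptions, not benchmarks of Cuberite or of a real machine.
#include <cstdio>

int main()
{
    const double GiB             = 1024.0 * 1024.0 * 1024.0;
    const double workingSet      = 23.0 * GiB;             // xoft's 1000-player, viewdistance-8 estimate
    const double ticksPerSecond  = 20.0;
    const double singleBandwidth = 20.0 * GiB;             // DDR3 read estimate quoted in the thread
    const double numaBandwidth   = 8.0 * singleBandwidth;  // hypothetical 8 parallel DDR channels

    // Bandwidth needed to re-read the whole working set on every tick:
    std::printf("ticking needs %.0f GiB/s\n", workingSet * ticksPerSecond / GiB);  // 460

    // Upper bound on full-chunk sends per second, if each send re-reads the whole chunk:
    const double bytesPerFullChunk = 16.0 * 10240.0;  // 16 sections of 10,240 bytes each
    std::printf("chunk sends: ~%.0f/s at 20 GiB/s, ~%.0f/s at 160 GiB/s\n",
                singleBandwidth / bytesPerFullChunk, numaBandwidth / bytesPerFullChunk);
    return 0;
}

This simplistic division lands slightly above the <120,000 figure worktycho quotes, which presumably also accounts for some per-section overhead, but it is in the same ballpark and shows the same order-of-magnitude gap against the 460 GiB/s ticking requirement.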