Unfortunately it is, since it uses the same protocol. I expect the PoC crasher to seriously affect MCS as well. Thanks for bringing this up; we should have this fixed, too.
Oh well, reducing the payload a bit resulted in the server spitting out 110 MiB of output into the log and dying because of a false positive in the deadlock detection. So we are indeed vulnerable, although at least not to the unmodified, generally available attack.
So now the question is, how do we fix it in our codebase? Should we limit the amount of data in a single packet? Should we limit the amount of data being parsed in the NBT parser? What should the limits be? Or some different solution, perhaps?
Sadly, I know nothing, John Snow... er, I mean, I know nothing of how this works. I hope you all on the team find it's not too much hassle.
How about creating a time limit instead?
The server could abort processing a packet after 0.01 s (or 0.05 s) of work.
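If a wall-clock budget were the chosen route, it could look roughly like the sketch below. This is only an illustration: cParseBudget and the limit value are invented here and are not part of the MCS codebase.

```cpp
#include <chrono>
#include <stdexcept>

// Hypothetical helper: a per-packet time budget that the parser checks periodically.
class cParseBudget
{
public:
	explicit cParseBudget(std::chrono::milliseconds a_Limit):
		m_Deadline(std::chrono::steady_clock::now() + a_Limit)
	{
	}

	// The packet / NBT parsing loop would call this every few tags it reads:
	void Check(void) const
	{
		if (std::chrono::steady_clock::now() > m_Deadline)
		{
			throw std::runtime_error("Packet parsing exceeded its time budget");
		}
	}

private:
	std::chrono::steady_clock::time_point m_Deadline;
};
```

The caller would catch the exception and drop the offending client. One drawback is that a fixed wall-clock cutoff behaves differently on slow or heavily loaded servers, so the value would be hard to tune.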
I think we need to do two things to mitigate this attack. First, limit the size of uncompressed packets; this also helps against compression bombs. Then we need to limit the recursion depth of the NBT parser, because it is a recursive-descent parser and deeply nested data can cause a stack overflow. I think we've avoided a code-execution vulnerability because we don't create large stack-based buffers to store the data, but I still think this is a potentially very dangerous attack.
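For reference, here is a rough sketch of the two limits described above, with made-up names and thresholds; the real checks would live in MCS's protocol and NBT parsing code, and the constants would need tuning.

```cpp
#include <cstddef>

// Illustrative limits only; the actual values would have to be chosen carefully.
static const size_t MAX_UNCOMPRESSED_PACKET = 2 * 1024 * 1024;  // reject anything larger before inflating
static const int    MAX_NBT_DEPTH           = 16;               // legitimate data rarely nests this deep

// Simplified recursive-descent parse of a TAG_Compound, tracking nesting depth.
// Returns false on malformed or maliciously deep input instead of recursing further.
bool ParseCompound(const char * a_Data, size_t a_Size, size_t & a_Pos, int a_Depth)
{
	if (a_Depth > MAX_NBT_DEPTH)
	{
		return false;
	}
	while (a_Pos < a_Size)
	{
		char TagType = a_Data[a_Pos++];
		if (TagType == 0)  // TAG_End closes this compound
		{
			return true;
		}
		// ... read the tag's name and payload here ...
		if (TagType == 10)  // TAG_Compound: recurse with an incremented depth
		{
			if (!ParseCompound(a_Data, a_Size, a_Pos, a_Depth + 1))
			{
				return false;
			}
		}
	}
	return false;  // ran out of data before TAG_End
}
```

The size cap would be enforced before the packet data is even handed to the parser, so a compression bomb costs one size check instead of 110 MiB of log output and a crash.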