Visual Studio "optimization" WTF
#1
I was wondering why generation was slow. Then I looked at the code that VS is generating for the hottest function in our code, cNoise::IntNoise3D(). And I could only wonder: WTF??? Such code in a release build?
[Image: vs_optimization.png]
Aren't we doing something wrong if this is the "optimized" code? I've always thought compilers were better than this.
I wonder how gcc does on this function - any *nix geeks out there who could help dig out the asm listing of that function from gcc or gdb?
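For readers without the screenshot: the actual cNoise::IntNoise3D body isn't quoted in this thread, but integer-hash noise functions of the kind being discussed typically look like the following sketch (names and constants are illustrative, not copied from MCServer). The trailing constant products - 57, 57 * 57, 57 * 57 * 57 - are exactly what an optimizing compiler should fold into single immediates.

```cpp
#include <cstdint>

// Hypothetical sketch of an integer-hash 3D noise function in the shape
// the thread discusses; the real cNoise::IntNoise3D may differ.
double IntNoise3D(int x, int y, int z, int seed)
{
    // The constant products here (57 * 57, 57 * 57 * 57) should be folded
    // to the immediates 3249 and 185193 at compile time.
    std::uint32_t m = static_cast<std::uint32_t>(
        x + y * 57 + z * 57 * 57 + seed * 57 * 57 * 57);
    m = (m << 13) ^ m;
    // Scramble the bits and map the result into (-1, 1].
    m = (m * (m * m * 15731u + 789221u) + 1376312589u) & 0x7fffffffu;
    return 1.0 - static_cast<double>(m) / 1073741824.0;
}
```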
#2
I'm a total noob at assembly, but if you know how it works, you could simply inline some asm and optimize it yourself?

Also, maybe it'll optimize it properly if you put it in parentheses?
#3
I've got an even easier "fix": putting the 57 * 57 * 57 into parentheses made VS emit the optimized code :P
Even weirder - if I pull that function out into a standalone project for testing, the optimizer works wonders: for calls with constant arguments, it even replaced the calls with their constant results; in other cases, it inlined the function and optimized it much better. It seems something's wrong with our project :P
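A minimal illustration of that workaround (identifier names are hypothetical, not from MCServer): left-to-right evaluation makes `seed * 57 * 57 * 57` three separate multiplies unless the compiler reassociates, while the parenthesized form exposes `57 * 57 * 57` as a constant subexpression that folds to 185193 even with optimization off.

```cpp
// Both functions compute the same value; only the grouping differs.
// Ungrouped: ((seed * 57) * 57) * 57 -- three runtime multiplies
// unless the compiler reassociates them.
int HashUngrouped(int x, int y, int z, int seed)
{
    return x + y * 57 + z * 57 * 57 + seed * 57 * 57 * 57;
}

// Grouped: the constant products fold to 3249 and 185193 at parse
// time, leaving one multiply per term even at /Od.
int HashGrouped(int x, int y, int z, int seed)
{
    return x + y * 57 + z * (57 * 57) + seed * (57 * 57 * 57);
}
```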
#4
I tried with VC2010, and whaddya know, it produces the very same stupid code, doing three multiplications one right after another.

However, when I pulled only the cNoise sources and all the headers they depend on from MCServer into a clean new project and added a few testcases into a main() function, the code optimized nicely, with both 2008 and 2010. Weird.
#5
Maybe it's a problem with the specific settings for the cNoise.cpp file.
#6
I tried compiling Rev 180, which didn't have the specific settings - still no good. It's no use; there has never been a revision where that function was optimized in MCServer, so I can't find the culprit by walking through history. Pity.
#7
Finally, after all this time, I found out what the culprit was (kinda - I know how it manifests and how to work around it, but not the root cause). It was in the project settings, but something weird seems to be going on:
In Project options -> Configuration Properties -> C/C++ -> Optimization -> Optimization, the value read "Maximize Speed (/O2)", and it was not bold, indicating a default of some kind. However, when the compiler actually ran, it got the "Disabled (/Od)" option instead, because that is the true default. Dunno where the mixup came from, but when I changed the value explicitly to "Full Optimization (/Ox)", it suddenly generates fully optimized code. Yay! :)
Wow! Just... wow! The generator is now soooo fast!!! There's about a 6x speedup! I get more than 100 chunks generated per second, instead of the regular ~20 per second.
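For reference, the three IDE settings in play map to these cl.exe switches (the file name is just illustrative; this is a config sketch, not the project's actual build line):

```shell
cl /Od /c cNoise.cpp   # Disabled -- what the compiler was silently getting
cl /O2 /c cNoise.cpp   # Maximize Speed -- what the property page displayed
cl /Ox /c cNoise.cpp   # Full Optimization -- the explicit setting that fixed it
```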
#8
Nice :D
#9
Wow, 6 times faster!!! Nice :D
But does that mean it uses 6 times more CPU/RAM?
#10
No, it uses the same amount of everything (maybe even a bit less); it's just that the compiler finally does what it's supposed to do.