I'm having a problem that appears to be g++ related. Basically, g++ takes substantially more time to compile a program when it is split into multiple files than when it is a single monolithic file. In fact, if you cat the individual files together and compile the result, it runs much faster than if you list the individual files on the g++ command line. For example, with nine files it takes 1 minute 39 seconds to compile; when I cat them together it takes only 13 seconds. I've tried using strace, but it just gets stuck in cc1plus; even with the -f option I can't sort out what's causing the problem.
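For what it's worth, the strace invocation was along these lines (the timestamp flag and log file name are just illustrative):

# -f follows the child processes (cc1plus, as, ...) that the g++ driver
# spawns; -tt adds timestamps; -o writes the trace to a log file.
strace -f -tt -o g++-trace.log g++ -c test*.cpp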
I've isolated the problem. Here is how to reproduce it. I wrote a very simple program, like so:
void func_01(int i)
{
    int j;
    volatile int *jp;

    jp = &j;
    for (; i; i--)
        ++*jp;
}

void call_01(void)
{
    func_01(10000);
}

int main(int argc, char *argv[])
{
    call_01();
}
Then I replicated it 999 times, substituting increasing numbers into the function names and removing main from the copies (a sketch of the generation step follows the timings below). Then I built:
% time g++ -c test*.cpp

real    0m18.919s
user    0m10.208s
sys     0m5.595s

% cat test*.cpp > mon.cpp
% time g++ -c mon.cpp

real    0m0.824s
user    0m0.776s
sys     0m0.040s
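For concreteness, the copies can be generated with something along these lines (a rough sketch; the exact file names, count, and sed invocation are just for illustration):

# test001.cpp is the file shown above; each copy gets the functions
# renamed (_01 -> _002, _003, ...) and main() stripped off the end.
for i in $(seq -w 2 999); do
    sed -e "s/_01/_$i/g" -e '/int main/,$d' test001.cpp > "test$i.cpp"
done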
Because I intend to scale to hundreds of files much more complex than this, it's important to get the build time down. Can anyone help explain why this is happening, or offer a workaround less gross than cat-ing everything into one file? I think it has in part to do with the preprocessor and the savings from include guards, because if I include even one header the time difference grows dramatically (a factor of five in one case); but even without any includes, the monolithic file remains a factor of twenty faster.
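To reproduce that "one include" observation, it's enough to prepend a single standard header to every generated file, e.g. (the particular header is arbitrary; GNU sed syntax):

# Add one #include at the top of each test file before rebuilding.
sed -i '1i #include <vector>' test*.cpp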
The version of g++ is 4.4.2, but I checked the latest version, 8.2.0, and the problem exists there as well.