Why does GCC generate a faster program than Clang in this recursive Fibonacci code?

This is the code that I tested:

#include <iostream>
#include <chrono>
using namespace std;

#define CHRONO_NOW                  chrono::high_resolution_clock::now()
#define CHRONO_DURATION(first,last) chrono::duration_cast<chrono::duration<double>>(last-first).count()

int fib(int n) {
    if (n<2) return n;
    return fib(n-1) + fib(n-2);
}

int main() {
    auto t0 = CHRONO_NOW;
    cout << fib(45) << endl;
    cout << CHRONO_DURATION(t0, CHRONO_NOW) << endl;
    return 0;
}

Of course, there are much faster ways of calculating Fibonacci numbers, but this is a good little stress test that focuses on recursive function calls. There's nothing else to the code, other than the use of chrono for measuring time.
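(Just for reference, here's a minimal iterative version I could have used instead; it computes fib(45) essentially instantly, which is exactly why it's useless as a recursion stress test. The name fib_iter is mine, it's not part of the test program.)

// Iterative Fibonacci for comparison: O(n) additions instead of an
// exponential number of recursive calls, so it measures nothing interesting.
int fib_iter(int n) {
    int a = 0, b = 1;
    for (int i = 0; i < n; ++i) {
        int next = a + b;
        a = b;
        b = next;
    }
    return a;
}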

First I ran the test a couple of times in Xcode on OS X (so that's Clang), with -O3 optimization. It took about 9 seconds to run.

Then I compiled the same code with GCC (g++) on Ubuntu (again with -O3), and that version took only about 6.3 seconds! And I was running Ubuntu inside VirtualBox on my Mac, which could only hurt performance, if it had any effect at all.

So there you go:

  • Clang on OS X -> ~9 seconds
  • GCC on Ubuntu in VirtualBox -> ~6.3 seconds

I know these are completely different compilers that do things differently, but every benchmark I've seen comparing GCC and Clang shows a much smaller gap, and in some cases the difference goes the other way (Clang being faster).
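My only guess is that one of them inlines or partially unrolls the recursion more aggressively than the other. Purely to illustrate the kind of transformation I mean (this is my speculation, not what either compiler actually emits), a hand-expanded variant would look something like this:

// Hand-expanded fib: the fib(n-1) call is inlined once, so each call
// recurses on n-2 and n-3 instead of n-1 and n-2. This is only a sketch
// of the sort of rewrite an aggressive optimizer might perform.
int fib_expanded(int n) {
    if (n < 2) return n;      // fib(0)=0, fib(1)=1
    if (n < 4) return n - 1;  // fib(2)=1, fib(3)=2
    // fib(n) = fib(n-1) + fib(n-2) = (fib(n-2) + fib(n-3)) + fib(n-2)
    return fib_expanded(n - 2) + fib_expanded(n - 3) + fib_expanded(n - 2);
}

But that's just a guess on my part.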

So is there a logical explanation for why GCC beats Clang by such a wide margin in this particular example?

