This is a very simple experiment demonstrating the performance advantage of compiled languages over dynamic or interpreted languages for systems programming and other tasks that demand high efficiency.
A simple loop operation to sum up the numbers from 1 to 1,000,000,000 (one billion).
Clone the repo, then open a terminal and run the following commands.
$ g++ cplusplus.cpp -o cpp -O3
$ ./cpp
$ javac java.java
$ java Java
Mono is required on macOS/Linux only.
$ csc csharp.cs
$ mono csharp.exe
$ python python.py
$ ruby ruby.rb
$ go build go.go
$ ./go
To obtain the execution time, run the last command of each build with
time multiple times and average the results.
$ time java Java
...
real    0m0.395s
user    0m0.361s
sys     0m0.028s

$ /usr/bin/time -l java Java
...
22679552  maximum resident set size
(macOS mid 2015, 2.5 GHz Quad-Core Intel Core i7, 16 GB RAM)
| Language | Elapsed Time (seconds) | Memory (MB) |
Clearly, the statically compiled languages drastically outperformed the interpreted ones (with the exception of Node.js) in speed for this pure calculation. Additionally,
the C# result shows the outstanding optimization work in the compiler and the
CLR runtime, which perform nearly on par with the low-level languages for such an essential looping computation.
The two figures below compare the languages in terms of both speed (time) and memory (space).
PRs adding other languages or improving the existing ones are welcome on GitHub.