Test of sleep functions



ABSTRACT: I analyse how much you can trust the usleep() function on both Mac and Linux.

CODE REPOSITORY: On GitHub

LINUX MACHINE: Linux examples presented in this post were run on an Asus G752VY (mid-2016) laptop with Gentoo Linux (kernel: 4.4.39 Gentoo flavour).

MAC MACHINE: Mac examples were run on a 13″ MacBook Pro (late-2016) with macOS Sierra (kernel: Darwin Kernel 16.5.0).

>>> Shortcut: Go directly to the most interesting results <<<


Why am I writing this post?
Some time ago I needed to put my program to sleep for a short moment, around 100 microseconds. I found that the usleep() function puts a thread to sleep for a requested number of microseconds. Documentation for usleep is here.

Ok, so it is rather easy and logical – call usleep(1) and execution stops for a microsecond, call usleep(10) and execution stops for ten microseconds.
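In code it looks as simple as this (a minimal sketch, assuming a POSIX system):

#include <unistd.h>   /* usleep() is declared here on POSIX systems */

int main(void)
{
    usleep(1);    /* request a  1-microsecond sleep */
    usleep(10);   /* request a 10-microsecond sleep */
    return 0;
}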

But “logic was such a liar” (copyright by Max Payne).
I started to use usleep() and strange things started to happen…


Call usleep() one million times.

Fast start a.k.a. usleep(1):

I wrote a simple piece of code to test the usleep() function. You can find the file with the code on GitHub: Linux version, Mac version. This program calls usleep(1) one million times and measures the total time of these calls.
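The core of the test looks roughly like this (a sketch of the idea, assuming gettimeofday() for timing; the actual files in the repository may differ in details):

#include <stdio.h>
#include <sys/time.h>   /* gettimeofday() */
#include <unistd.h>     /* usleep()       */

int main(void)
{
    struct timeval start, stop;

    printf("Start: ...\n");
    gettimeofday(&start, NULL);

    for (int i = 0; i < 1000000; ++i)
        usleep(1);                    /* request a 1-microsecond sleep */

    gettimeofday(&stop, NULL);

    double elapsed = (stop.tv_sec  - start.tv_sec)
                   + (stop.tv_usec - start.tv_usec) / 1e6;
    printf("         finish! (Was it a second?)\n");
    printf("It was %.3f seconds.\n", elapsed);
    return 0;
}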

Go to sleep/linux or sleep/mac depending on the platform you are on.
Firstly, compile the code:

$ make us_test

and then run it:

$ ./us_test

Putting a thread to sleep for one microsecond one million times should give a total sleep time of around 1 second. So what is the result?
On my Mac the measured time is:

$ ./us_test
Start: …
         finish! (Was it a second?)
It was 6.538 seconds.

On my Linux Gentoo the measured time is:

$ ./us_test
Start: …
         finish! (Was it a second?)
It was 52.941 seconds.

If you run the code on a different machine the results may vary a bit, but the general conclusion is the same: calling usleep(1) one million times takes MUCH more time than a second.
The above results are horrible! The sleep time on Mac is ca. 6.5 times as long as expected, and on Linux it is ca. 53 (!) times as long.

Give usleep a bit more time – usleep(10):
Well, calling usleep(1) might seem unfair, since one microsecond is the smallest argument the function can be called with. So let us increase it to ten microseconds and call usleep(10). The file with the code which calls usleep(10) one million times is on GitHub: Linux version, Mac version.

To run the code, go to sleep/linux or sleep/mac depending on the platform you are on.
Firstly, compile the code:

$ make us10_test

and then run it:

$ ./us10_test

Putting a thread to sleep for ten microseconds one million times should give a total sleep time of around 10 seconds.
On my Mac the measured time is:

$ ./us10_test
Start: …
         finish! (Was it ten seconds?)
It was 15.914 seconds.

On my Linux Gentoo the measured time is:

$ ./us10_test
Start: …
         finish! (Was it ten seconds?)
It was 61.970 seconds.

Again, if you run this code on a different machine the results may vary a bit. The above results are still bad, but not as terrible as previously. It looks like increasing the argument to usleep() improved the sleep time accuracy a little.


Measure usleep() error for a number of different arguments.
Let us measure the error of usleep() for a number of different arguments.
I wrote (in C++) a Python module which calls usleep() with a requested argument one hundred times and measures the average sleep time. The module can be found on GitHub: Linux version, Mac version.
Additionally, I prepared a Python script (Linux version, Mac version) which tests the usleep() function using this module and plots the results.
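The measurement logic inside the module boils down to something like the following plain C sketch (not the actual C++/Python binding code; here the relative error is taken to be (measured − requested) / requested):

#include <sys/time.h>   /* gettimeofday() */
#include <unistd.h>     /* usleep()       */

/* Call usleep(us) 'repeats' times and return the average measured
 * sleep time in microseconds.                                      */
double average_sleep_us(useconds_t us, int repeats)
{
    struct timeval start, stop;

    gettimeofday(&start, NULL);
    for (int i = 0; i < repeats; ++i)
        usleep(us);
    gettimeofday(&stop, NULL);

    double total_us = (stop.tv_sec  - start.tv_sec) * 1e6
                    + (stop.tv_usec - start.tv_usec);
    return total_us / repeats;
}

/* Relative error between the measured and the requested sleep time. */
double relative_error(double measured_us, double requested_us)
{
    return (measured_us - requested_us) / requested_us;
}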


To run the code go to sleep/linux or sleep/mac depending on the platform you are on.
Firstly, compile the Python module:

$ make python_module

Now you are ready to run the script:

$ python3 run_usleep_test.py

Mac results.
The results from my Mac are below. The green line is the measured average sleep time, the blue line is the requested sleep time. The second figure shows the relative error between the measured and the requested sleep time (black line).
Figure: the measured average sleep time and the requested sleep time (Mac)
Figure: usleep() relative error (Mac)
Linux results.
The results I obtained on my laptop with Gentoo Linux are below. The green line is the measured average sleep time, the blue line is the requested sleep time. The next figure shows the relative error between the measured and the requested sleep time (red line).
Figure: the measured average sleep time and the requested sleep time (Linux)
Figure: usleep() relative error (Linux)


Linux vs Mac
Here the results get really interesting. The figure below combines the plots of the relative sleep time errors for Linux and Mac. Let us take a closer look at this figure.

Mac definitely has a better start: for small requested sleep times, the Mac error is not as ridiculous as the Linux error. But then, for requested sleep times > ~150 us, Linux starts being better than Mac. Furthermore, something strange happens on Mac, as the error even goes up a bit for a moment. Afterwards (sleep time > ~500 us), Mac's error keeps getting lower, but much more slowly than Linux's error.

For requested sleep times > 1 ms the error on Linux becomes acceptable (error < 0.1). For requested sleep times > 10 ms the error on Linux is marginal (error < 0.01). The sleep time error on Mac drops below 0.01 only for significantly higher requested sleep times. IN SUMMARY: IMHO, Linux puts threads to sleep better.
Figure: Linux vs Mac usleep() relative error


Conclusions
The conclusion is rather pessimistic, but not everything related to computing must be happy. Do not expect precise sleep times on the x86 platform, especially when asking for sleeps < 1 ms.
