This isn't explicitly related, but it's interesting, so I offer it up here.

When I read 40ms, it triggered a memory of tracking down a different 40ms latency bug a few years ago. I work on the Netflix app for set-top boxes, and a particular pay TV company had a box based on AOSP L. Testing discovered that after switching the app between foreground and background enough times, playback would start to stutter. The vendor doing the integration blamed Netflix: they showed that in the stutter case, the Netflix app was not feeding video data quickly enough for playback. They stopped their analysis at that point, since as far as they were concerned, they had found the issue and we had to fix the Netflix app.

I doubted the app was the issue, as it ran on millions of other devices without showing this behavior. I instrumented the code and measured 40ms of extra delay coming from the thread scheduler. The 40ms was there, and it was outside our app's context: I measured it between the return of the thread handler and the next time the handler was called. So I responded, to paraphrase: it's not us, it's you. Your Android scheduler is broken.

But the onus was on me to prove it by finding the bug. I read the Android code and learned that Android's thread scheduling is a userspace construct: the Android scheduler uses epoll() as a timer and calls your thread handler based on priority level. I thought epoll() performance isn't guaranteed; maybe something obscure had changed, and that change was adding an additional 40ms in this particular case. So I dove into the kernel, thinking the issue must be somewhere inside epoll().

Lucky for me, another engineer, working for a different vendor on the project, found the smoking gun in this patch in Android M (the next version). It was right there: an extra 40ms of timer slack explicitly (and mistakenly) applied when a thread is created while the app is in the background.

https://android.googlesource.com/platform/system/core/+/4cdce42%5E%21/#F0

Fix janky navbar ripples -- incorrect timerslack values
If a thread is created while the parent thread is "Background",
then the default timerslack value gets set to the current
timerslack value of the parent (40ms). The default value is
used when transitioning to "Foreground" -- so the effect is that
the timerslack value becomes 40ms regardless of foreground/background.
This does occur intermittently for systemui when creating its
render thread (pretty often on hammerhead and has been seen on
shamu). If this occurs, then some systemui animations like navbar
ripples can wait for up to 40ms to draw a frame when they intended
to wait 3ms -- jank.
This fix is to explicitly set the foreground timerslack to 50us.
A consequence of setting timerslack behind the process' back is
that any custom values for timerslack get lost whenever the thread
has transitioned between fg/bg.
--- a/libcutils/sched_policy.c
+++ b/libcutils/sched_policy.c
@@ -50,6 +50,7 @@
// timer slack value in nS enforced when the thread moves to background
#define TIMER_SLACK_BG 40000000
+#define TIMER_SLACK_FG 50000
static pthread_once_t the_once = PTHREAD_ONCE_INIT;
@@ -356,7 +357,8 @@
&param);
}
- prctl(PR_SET_TIMERSLACK_PID, policy == SP_BACKGROUND ? TIMER_SLACK_BG : 0, tid);
+ prctl(PR_SET_TIMERSLACK_PID,
+ policy == SP_BACKGROUND ? TIMER_SLACK_BG : TIMER_SLACK_FG, tid);
return 0;
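
If you want to feel the bug directly, here's a minimal sketch in C. It assumes a Linux machine and the mainline prctl(PR_SET_TIMERSLACK), which sets slack for the calling thread; the PR_SET_TIMERSLACK_PID used in the patch above is an Android kernel extension that targets another thread by tid. The program requests a 3ms sleep (the same interval the commit message mentions) under the new 50us foreground slack and then under the 40ms background slack:

/* slack_demo.c -- build with: gcc -O2 slack_demo.c -o slack_demo */
#include <stdio.h>
#include <sys/prctl.h>
#include <time.h>

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void sleep_3ms_and_report(const char *label)
{
    struct timespec req = { 0, 3 * 1000000 };  /* intend to wait 3ms */
    long long start = now_ns();
    nanosleep(&req, NULL);
    printf("%s wanted 3.00 ms, got %.2f ms\n",
           label, (now_ns() - start) / 1e6);
}

int main(void)
{
    /* 50us: the TIMER_SLACK_FG value this patch introduces */
    prctl(PR_SET_TIMERSLACK, 50000UL);
    sleep_3ms_and_report("fg (50us slack):");

    /* 40ms: TIMER_SLACK_BG, the value that leaked into the foreground */
    prctl(PR_SET_TIMERSLACK, 40000000UL);
    sleep_3ms_and_report("bg (40ms slack):");
    return 0;
}

As I understand timer slack, the second sleep may legally complete anywhere from 3ms up to roughly 43ms after it starts, depending on what other kernel timers are pending to coalesce with, and that variable extra delay is exactly the jank window the commit message describes.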