posted by CoolHand on Monday December 14 2015, @11:35PM
from the rock-out-with-your-clock-out dept.

Instructions to sleep for a second almost never result in precisely one second’s sleep. Bob Schmidt walks us through the mechanics of why.

Suppose you are walking down the hallway of your office, and a Summer Intern (SI) intercepts you and asks, “If I put a line of code in my program that simply reads sleep(10), how long will my program sleep?”

You look at the harried SI and reply, “It depends,” and you continue on your way.

The SI rushes to catch up with you, and asks, “It depends on what?”

And you answer, “That, too, depends,” as you continue walking.

At this point our young SI is frantic (and in immediate danger of going bald). “Stop talking in riddles, grey hair! I’m in real need of help here.”

Your stroll has taken you to the entrance of the break room, so you grab your interlocutor, duck inside, grab two cups of your favourite caffeinated beverage, and sit down.

“It depends,” you say, “on many things, so let’s start with first things first.”

First things first

To understand what’s going on ‘under the hood’ when a sleep() is executed, it helps to know a little about how CPUs work, and that means knowing something about CPU clocks, interrupts, and schedulers. The former two are hardware concepts; the latter is a software concept.

It's a decent peek under the hood for folks who usually treat such things as a black box.
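
Not from the article, but a minimal sketch of the kind of measurement it is describing: wrap a nominal one-second sleep in CLOCK_MONOTONIC timestamps and print how far the result drifts from exactly one second.

    /* Sketch (not from the article): measure how long sleep(1) really sleeps. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);  /* monotonic clock: immune to clock adjustments */
        sleep(1);                                /* ask for one second */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("asked for 1.000000 s, got %f s\n", elapsed);
        return 0;
    }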


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Interesting) by Anonymous Coward on Tuesday December 15 2015, @12:47AM

    by Anonymous Coward on Tuesday December 15 2015, @12:47AM (#276433)

    I remember once I ended up building a roll-your-own scheduler for reasons I won't go into. The algorithm resembled:

    ignore_t = nil                        // initialize
    while true                            // loop forever
          t = getSystemTime()
          t = roundToMinutes(t)
          if there are tasks matching minute t and t != ignore_t then
                run the tasks for t
                ignore_t = t              // current minute's tasks already completed
          end if
          sleep a few seconds             // so we don't burn CPU
    end while

    It worked fine for a few days, but then started failing. After intricate debugging I discovered that the CPU got bogged down with certain processes such that it never got a chance to test certain minutes. It seemed strange that a server would be so bogged down that it completely halted this process for more than a minute. It was probably caused by task(s) running at maximum priority. But those other projects were beyond my control; I was told to live with it as is. I solved it by adding more logic to catch "missed" tasks from older time slots (minutes), but it complicated a relatively clean algorithm.
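
    (For illustration, one way to express the "catch missed minutes" fix described above; this is a sketch rather than the poster's actual code, and run_tasks_for() is a hypothetical stand-in for whatever dispatches the work.)

        /* Sketch of the catch-up variant: walk forward through every minute
           not yet handled, so slots skipped while the CPU was bogged down
           still get their tasks run. run_tasks_for() is hypothetical. */
        #include <time.h>
        #include <unistd.h>

        extern void run_tasks_for(time_t minute);   /* hypothetical: minute = minutes since the epoch */

        void scheduler_loop(void)
        {
            time_t last_done = time(NULL) / 60;     /* last minute already processed */

            for (;;) {
                time_t now = time(NULL) / 60;       /* current whole minute */

                while (last_done < now) {           /* catch up on anything missed */
                    last_done++;
                    run_tasks_for(last_done);
                }

                sleep(5);                           /* don't spin the CPU */
            }
        }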

    • (Score: 1, Insightful) by Anonymous Coward on Tuesday December 15 2015, @01:33AM

      by Anonymous Coward on Tuesday December 15 2015, @01:33AM (#276458)

      You wrote a scheduler that ran in user-space?

      Of course it was unreliable.

      • (Score: 1, Touché) by Anonymous Coward on Tuesday December 15 2015, @02:12AM

        by Anonymous Coward on Tuesday December 15 2015, @02:12AM (#276481)

        Because:

        A. Nobody uses cron.
        B. Cron runs in the kernel.

        Which fallacious belief do you subscribe to?

        • (Score: 2) by ledow on Tuesday December 15 2015, @08:21AM

          by ledow (5567) on Tuesday December 15 2015, @08:21AM (#276568) Homepage

          There's a big difference between a program scheduler (e.g. cron) and a task scheduler to choose which task is next to run on the processor.

          • (Score: -1, Troll) by Anonymous Coward on Tuesday December 15 2015, @09:06AM

            by Anonymous Coward on Tuesday December 15 2015, @09:06AM (#276578)

            If you would kindly read the pseudocode at the top of this thread, you would see that it describes a cron-type scheduler that schedules tasks at whole-minute intervals and sleeps for entire seconds at a time. Or don't. Be as illiterate as you want to be, asshole.

      • (Score: 0) by Anonymous Coward on Tuesday December 15 2015, @07:12PM

        by Anonymous Coward on Tuesday December 15 2015, @07:12PM (#276761)

        They didn't want me messing at the OS level because I was a contractor. If you are given limited resources, you have to make do. If somebody wants to pay me to reinvent the wheel, I will. It's my obligation to point out to the customer that it's reinventing the wheel, but the decision to actually do it is theirs.

  • (Score: 2) by Appalbarry on Tuesday December 15 2015, @12:58AM

    by Appalbarry (66) on Tuesday December 15 2015, @12:58AM (#276439) Journal

    Reminds me of the good old days of dial-up Internet, where you added a utility that connected to an atomic clock somewhere to adjust your PC clock to "accurate" time.

    • (Score: 3, Informative) by Anonymous Coward on Tuesday December 15 2015, @01:37AM

      by Anonymous Coward on Tuesday December 15 2015, @01:37AM (#276459)

      Maybe you haven't noticed, but that's still what we do. It's called ntpd and/or ntpdate.

  • (Score: 2) by Rich on Tuesday December 15 2015, @01:02AM

    by Rich (945) on Tuesday December 15 2015, @01:02AM (#276441) Journal

    What the guy writes roughly describes the behaviour of a fringe OS from the days computers were built with a lot of the TTLs he mentions. Microware OS/9 comes to mind (though that relinquishes on "tsleep(1)", not 0). To confuse the summer intern a bit more, someone should now try to explain how Linux calculates CFS vruntime values on a modern NOHZ kernel when the wakeup triggers *g*

  • (Score: 0) by Anonymous Coward on Tuesday December 15 2015, @01:46AM

    by Anonymous Coward on Tuesday December 15 2015, @01:46AM (#276466)

    To find out how long your program was in sleep, remember to read the clock before and after sleeping. Then you will have another problem, when you discover that checking the clock is eating so much CPU time that your program isn't getting any work done. Wait, why is the kernel asking the hypervisor for the time? And why even ask the kernel? Isn't the time available to user space? Finally you say, fuck all this OS abstraction bullshit, give me RDTSC!
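
    (A rough sketch of the RDTSC route on x86 with GCC/Clang, using the __rdtsc() intrinsic; converting ticks to seconds needs the TSC frequency, which is deliberately left out here.)

        /* Sketch: time a sleep in raw TSC ticks (x86, GCC/Clang only). */
        #include <stdio.h>
        #include <unistd.h>
        #include <x86intrin.h>

        int main(void)
        {
            unsigned long long before = __rdtsc();  /* read the time-stamp counter */
            sleep(1);
            unsigned long long after = __rdtsc();

            printf("sleep(1) took %llu TSC ticks\n", after - before);
            return 0;
        }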

    • (Score: 2) by shortscreen on Tuesday December 15 2015, @05:09AM

      by shortscreen (2252) on Tuesday December 15 2015, @05:09AM (#276518) Journal

      Unfortunately, RDTSC has become somewhat complicated now on CPUs with multiple cores that run at variable clock rates.

      • (Score: 0) by Anonymous Coward on Tuesday December 15 2015, @07:06AM

        by Anonymous Coward on Tuesday December 15 2015, @07:06AM (#276560)

        No, now it's uncomplicated again with invariant TSC. If a particular processor doesn't have an invariant TSC, why then I just remember to disable power saving.
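
        (If memory serves, the invariant-TSC flag can be checked from user space via CPUID leaf 0x80000007, EDX bit 8; a quick GCC/Clang sketch, with the bit position being a recollection of the vendor docs rather than something stated in this thread.)

            /* Sketch: check the Invariant TSC flag, CPUID.80000007H:EDX[8] (GCC/Clang). */
            #include <stdio.h>
            #include <cpuid.h>

            int main(void)
            {
                unsigned int eax, ebx, ecx, edx;

                if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx) && (edx & (1u << 8)))
                    printf("invariant TSC present\n");
                else
                    printf("no invariant TSC (or leaf unsupported)\n");
                return 0;
            }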

        • (Score: 0) by Anonymous Coward on Wednesday December 16 2015, @03:11PM

          by Anonymous Coward on Wednesday December 16 2015, @03:11PM (#277117)

          And if there are multiple separate processors in the system?

    • (Score: 0) by Anonymous Coward on Tuesday December 15 2015, @06:03AM

      by Anonymous Coward on Tuesday December 15 2015, @06:03AM (#276540)
      Don't forget the other problem where someone/something adjusts the clock while the program is sleeping. Sometimes the clock can go backwards.

      That's why, where possible, you should use monotonic time and not "human/clock time" for stuff that isn't about "human/clock time". However, this is not always easily available, and I blame the OS and hardware bunch for the dismal state of things. It's not like there aren't enough transistors nowadays to make things better.

      For example, at one point they said "don't use RDTSC" (on some CPUs the TSC wasn't synced), then they said "don't use the clock" and recommended something like HPET if it's available, but back then it was not always available... So go figure.

  • (Score: 2) by FatPhil on Tuesday December 15 2015, @08:52AM

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Tuesday December 15 2015, @08:52AM (#276574) Homepage
    "In implementations I’ve seen, the parameter to sleep() – the number of milliseconds to sleep -"

    sleep(3) takes seconds. He's possibly thinking of something like the non-standard msleep(). Which is odd, as he didn't mention msleep() whilst mentioning usleep() and nanosleep() at the start of the article. So clearly he's confused. Confused people shouldn't write articles.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: -1, Troll) by Anonymous Coward on Tuesday December 15 2015, @09:13AM

      by Anonymous Coward on Tuesday December 15 2015, @09:13AM (#276580)

      For the purposes of this discussion I’m going to assume a generic version of sleep() that accepts a parameter representing time in milliseconds.

      Fat demented idiots shouldn't read articles.

    • (Score: 2) by TheRaven on Tuesday December 15 2015, @11:51AM

      by TheRaven (270) on Tuesday December 15 2015, @11:51AM (#276603) Journal
      He might be thinking of Sleep(), which is the Windows version of msleep().
      --
      sudo mod me up
      • (Score: 2) by FatPhil on Tuesday December 15 2015, @12:35PM

        by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Tuesday December 15 2015, @12:35PM (#276614) Homepage
        Thanks for the Windows insight I was lacking; that makes perfect sense. Linux's msleep() came immediately to mind, as I've spent many a year as a kernel programmer, but it's not a C library or POSIX thing.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by sjames on Tuesday December 15 2015, @06:37PM

      by sjames (2882) on Tuesday December 15 2015, @06:37PM (#276748) Journal

      It depends on the language and library you're using. Some sleep functions take a milliseconds parameter, which may or may not result in a call to libc's sleep function.
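
      (For example, a millisecond-granularity sleep is easy to build on POSIX nanosleep(); a sketch, with msleep_ms() being a made-up name rather than anything libc provides.)

          /* Sketch: a millisecond sleep built on nanosleep(); msleep_ms is a made-up name. */
          #include <time.h>

          static void msleep_ms(long ms)
          {
              struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
              nanosleep(&ts, NULL);   /* may return early if a signal arrives; ignored here */
          }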

  • (Score: 2) by darkfeline on Wednesday December 16 2015, @01:49AM

    by darkfeline (1030) on Wednesday December 16 2015, @01:49AM (#276930) Homepage

    First things first, it depends on the OS. It's certainly possible to write an OS that can guarantee X seconds of sleep within some margin of error.

    Also, this kind of thing I learned in my second CS class. I guess not everyone is lucky enough to study a competently built curriculum.

    --
    Join the SDF Public Access UNIX System today!
  • (Score: 2) by cafebabe on Saturday December 19 2015, @08:08AM

    by cafebabe (894) on Saturday December 19 2015, @08:08AM (#278513) Journal

    #include <stdkids_today_they_dont_know_theyre_born.h>

    usleep()? usleep()??? Back in the day, we had select(0, &dummy, &dummy, &dummy, timeval_struct) to specify a sub-second delay. If we were lucky, we had a wrapper around it. And we had to specify every unsigned char and every signed int just to get code through HP-UX's cruddy compiler. We also had to walk uphill, both ways, in five miles of snow, at the height of summer.

    --
    1702845791×2
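
    (For the curious, the select()-as-sub-second-sleep trick described above looks roughly like this; passing NULL fd sets works as well as zeroed dummies.)

        /* Sketch of the old select()-as-sleep trick for sub-second delays. */
        #include <sys/select.h>

        static void sleep_ms_select(long ms)
        {
            struct timeval tv;
            tv.tv_sec  = ms / 1000;
            tv.tv_usec = (ms % 1000) * 1000;
            select(0, NULL, NULL, NULL, &tv);   /* no fds watched: just wait out the timeout */
        }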