Scheduling

Optional readings for this topic from Operating Systems: Principles and Practice: Chapter 7 up through Section 7.2.

For starters, assume there is only one core.

Simple Scheduling Algorithms

First-in-first-out (FIFO) scheduling (a non-preemptive policy: a running thread is never interrupted):

  • Keep all of the ready threads in a single list called the ready queue.
  • When a thread becomes ready, add it to the back of the ready queue.
  • Run the first thread on the queue until it exits or blocks.

Problem: a thread can hold the core indefinitely; one long-running thread makes every other ready thread wait (poor response time).

Solution: limit the maximum amount of time that a thread can run without a context switch. This time is called a time slice.

Round robin scheduling: run a thread for one time slice, then return it to the back of the ready queue. Each thread gets an equal share of the cores.

  • Linux time slice: 4 ms

How do we decide whether a scheduling algorithm is good?

  • Minimize response time.
  • Use resources efficiently:
    • Full utilization: keep cores and disks busy
    • Low overhead: minimize context switches
  • Fairness (distribute CPU cycles equitably)

Is round-robin better than FIFO?
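A toy simulation suggests an answer (all job lengths are made up, and these functions are illustrative sketches, not a real dispatcher):

```python
# Compare FIFO and round-robin average completion times on the same
# set of jobs, all ready at time 0. Numbers are illustrative.

def fifo(burst_times):
    """Run each job to completion in arrival order; return completion times."""
    t, done = 0, []
    for burst in burst_times:
        t += burst
        done.append(t)
    return done

def round_robin(burst_times, slice_ms=4):
    """Give each job one time slice in turn until all jobs finish."""
    remaining = list(burst_times)
    done = [0] * len(burst_times)
    queue = list(range(len(burst_times)))
    t = 0
    while queue:
        i = queue.pop(0)
        run = min(slice_ms, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)        # not finished: back of the queue
        else:
            done[i] = t
    return done

jobs = [24, 4, 4]                  # one long job arrives first
print(fifo(jobs))                  # [24, 28, 32] -> average 28
print(round_robin(jobs))           # [32, 8, 12]  -> average ~17.3
```

Round robin lets the short jobs finish much sooner, at the cost of delaying the long job and paying for extra context switches; neither policy dominates the other in all cases.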

Optimal scheduling: SRPT (Shortest Remaining Processing Time)

  • Run the thread that will finish most quickly, and run it without interruption.
  • Another advantage of SRPT: improves overall resource utilization.
    • Suppose some jobs CPU-bound, some I/O-bound.
    • SRPT will give priority to I/O-bound jobs, which keeps the disks/network as busy as possible.
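Under the simplifying assumption that all jobs are ready at time 0 and their run lengths are known, SRPT reduces to shortest-job-first, which a short sketch can illustrate (numbers are made up):

```python
# SRPT with all jobs ready at t=0 and no new arrivals: simply run the
# shortest remaining job first. Completion times are returned per job.

def srpt(burst_times):
    """Run jobs in order of increasing length; return completion times."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    t, done = 0, [0] * len(burst_times)
    for i in order:
        t += burst_times[i]
        done[i] = t
    return done

jobs = [24, 4, 4]
print(srpt(jobs))    # [32, 4, 8] -> average ~14.7, better than FIFO or RR
```

The catch, of course, is that a real scheduler does not know each thread's remaining time in advance, which is what motivates the prediction idea below.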

Key idea: can use past performance to predict future performance.

  • Behavior tends to be consistent
  • If a process has been executing for a long time without blocking, it's likely to continue executing.
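One standard way to turn this idea into a concrete prediction is an exponentially weighted average of past CPU bursts (the classic textbook technique; the parameter names and values here are illustrative, not any particular kernel's):

```python
# Predict the next CPU burst from measured past bursts:
#   guess_new = alpha * last_burst + (1 - alpha) * guess_old
# Recent behavior counts most; old behavior fades geometrically.

def predict_next_burst(measured_bursts, alpha=0.5, initial_guess=10.0):
    """Return a prediction for the next CPU burst length."""
    guess = initial_guess
    for burst in measured_bursts:
        guess = alpha * burst + (1 - alpha) * guess
    return guess

print(predict_next_burst([6, 4, 6, 4]))   # 5.0: settles near the true average
```

With alpha = 0.5 each burst carries half the weight of the one after it, so a thread that suddenly turns CPU-bound loses its "short burst" prediction within a few bursts.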

Priority-Based Scheduling

Priorities: most real schedulers support a priority for each thread:

  • Always run the thread with highest priority.
  • In case of a tie, use round-robin among the highest-priority threads.
  • Use priorities to implement various scheduling policies (e.g. approximate SRPT).

Priority Queues:

  • One ready queue for each priority level.
  • Overall idea: threads that aren't using much CPU time stay in the higher-priority queues, threads that are CPU-bound migrate to lower-priority queues.
  • One possible approach:
    • After blocking, thread starts in highest priority queue
    • If a thread reaches the end of its time slice without blocking, it moves to the next lower queue.
    • Result: I/O-bound threads stay in the highest-priority queues, CPU-bound threads migrate to lower-priority queues
    • What are the problems with this approach?
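The queue-movement rule above can be sketched as follows (NUM_QUEUES and the event names are made up for illustration; queue 0 is the highest priority):

```python
# Multilevel feedback: where does a thread go next, given what it
# just did with the CPU?

NUM_QUEUES = 3   # illustrative; real systems use more levels

def next_queue(current_queue, event):
    """event is 'blocked' (gave up the CPU early, e.g. for I/O) or
    'expired' (used its whole time slice)."""
    if event == 'blocked':
        return 0                                    # back to the top queue
    return min(current_queue + 1, NUM_QUEUES - 1)   # demote one level

print(next_queue(0, 'expired'))   # 1: a CPU-bound thread drifts down
print(next_queue(2, 'blocked'))   # 0: an I/O-bound thread jumps back up
```

One problem this simple rule exposes: a thread can game it by sleeping briefly just before its slice expires, and a steady supply of high-priority threads can starve the bottom queue.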

4.4 BSD Scheduler:

  • Keep information about recent CPU usage for each thread
  • Give highest priority to thread that has used the least CPU time recently.
  • Interactive and I/O-bound threads will use little CPU time and remain at high priority.
  • CPU-bound threads will eventually get lower priority as they accumulate CPU time.
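A simplified sketch of the decayed-usage idea (the constants are illustrative; the real 4.4 BSD formulas also involve a nice value and per-tick accounting, but the decay factor shown is roughly the one the kernel uses):

```python
# Track recent CPU usage per thread; derive priority from it.
# Convention here: a LARGER priority number means a WORSE priority.

def decay_usage(recent_cpu, load_avg):
    """Applied once per second: old usage fades away. In 4.4 BSD the
    decay factor is roughly 2*load/(2*load + 1), so decay is gentler
    when the system is busy."""
    return (2 * load_avg) / (2 * load_avg + 1) * recent_cpu

def priority(base, recent_cpu):
    """Threads that used more CPU recently get a worse priority."""
    return base + recent_cpu / 4

cpu = 40.0
for _ in range(3):                    # three seconds of not running
    cpu = decay_usage(cpu, load_avg=1.0)
print(round(priority(50, cpu), 1))    # 53.0, down from 60.0 before decaying
```

The effect is exactly the behavior described above: a thread that stops using the CPU automatically climbs back toward high priority within a few seconds.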

Multiprocessor Scheduling

Simple approach:

  • Share the scheduling data structures among all of the cores
  • One dispatcher per core
  • Separate timer interrupts for each core
  • Run the k highest-priority threads on the k cores.
  • When a thread becomes runnable, see if its priority is higher than the lowest-priority thread currently running. If so, preempt that thread.
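The wakeup check in the last bullet might look like this (a sketch only; here a larger number means a higher priority, and the actual preemption mechanism, such as an inter-processor interrupt, is left as a comment):

```python
# When a thread becomes runnable, decide whether to preempt one of
# the k running threads. running_prios[c] is the priority of the
# thread currently on core c.

def maybe_preempt(new_thread_prio, running_prios):
    """Return the core to preempt, or None to leave everyone alone."""
    victim = min(range(len(running_prios)), key=lambda c: running_prios[c])
    if new_thread_prio > running_prios[victim]:
        return victim    # in a real kernel: send an IPI to this core
    return None          # new thread just waits in the ready queue

print(maybe_preempt(5, [3, 7, 2]))   # 2: core 2 runs the weakest thread
print(maybe_preempt(1, [3, 7, 2]))   # None: nobody is worth preempting
```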

Problems/issues for multiprocessors:

  • Contention:
    • With lots of cores, the system will bottleneck on the lock protecting the central ready queue.
    • Solution: separate ready queue per core; balance queues over time (work stealing).
  • Core affinity:
    • Once a thread has been running on a particular core it is expensive to move it to a different core (hardware caches will have to be reloaded).
    • Multiprocessor schedulers typically try to keep a thread on the same core as much as possible.
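Per-core queues with work stealing can be sketched as below (the thread ids and the "steal from the busiest core" victim choice are illustrative; real schedulers balance less eagerly, partly to preserve affinity):

```python
import collections

# Each core has its own ready queue. A core that runs dry steals a
# thread from the busiest other core's queue instead of idling.

def next_thread(core, queues):
    """queues: one deque of runnable thread ids per core."""
    if queues[core]:
        return queues[core].popleft()      # prefer the local queue (affinity)
    victim = max(range(len(queues)), key=lambda c: len(queues[c]))
    if queues[victim]:
        return queues[victim].pop()        # steal from the busiest core
    return None                            # nothing runnable anywhere

queues = [collections.deque(), collections.deque(['t1', 't2', 't3'])]
print(next_thread(0, queues))   # 't3': stolen from core 1's queue
```

Stealing from the tail of the victim's queue (pop rather than popleft) is a common choice: the threads at the tail have waited longest since last running, so they have the least to lose from switching cores.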

Conclusion

Scheduling algorithms should not affect the results produced by the system.

However, the algorithms do impact the system's efficiency and response time.

The best schemes are adaptive.

To be optimal, we'd have to predict the future.