Operating Systems 1

Tutorial 3

  1. If a semaphore with initial value i has had s SIGNAL operations performed on it, what inequality expresses the number of (completed) WAIT operations which may have been performed?
    Straight out of the notes, this one:
    counterValue = i + s - numberOfWAITs
    but...
    counterValue >= 0
    therefore...
    i + s - numberOfWAITs >= 0
    i + s >= numberOfWAITs
    or
    numberOfWAITs <= i + s
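    A quick way to see this invariant concretely, using Java's java.util.concurrent.Semaphore purely as an illustration (the numbers are arbitrary): with i = 2, 3 SIGNALs and 4 completed WAITs, the counter ends at 2 + 3 - 4 = 1, and a fifth WAIT would have to block.
    import java.util.concurrent.Semaphore;

    public class SemaphoreInvariant {
        public static void main(String[] args) throws InterruptedException {
            int i = 2;
            Semaphore sem = new Semaphore(i);              // counter starts at i
            for (int s = 0; s < 3; s++) sem.release();     // 3 SIGNALs
            for (int w = 0; w < 4; w++) sem.acquire();     // 4 WAITs, none of which block
            // counter = i + s - numberOfWAITs = 2 + 3 - 4 = 1, and it can never go below 0
            System.out.println("counter = " + sem.availablePermits());
        }
    }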
  2. A process performs a READ operation on a pipe, and blocks (because no data is available). What could cause it to unblock?
    Some other process doing a WRITE into the pipe would unblock it (the READer, that is, not the pipe :-). The READ would also return, with an end-of-file indication, if every process holding the write end of the pipe closed it.
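    As a minimal sketch of this in Java (java.io.PipedInputStream and PipedOutputStream stand in for an OS pipe here; the byte value and the sleep are arbitrary), the reader thread blocks inside read() until the main thread does a write:
    import java.io.*;

    public class PipeReadBlocks {
        public static void main(String[] args) throws Exception {
            PipedOutputStream writeEnd = new PipedOutputStream();
            PipedInputStream readEnd = new PipedInputStream(writeEnd);
            Thread reader = new Thread(() -> {
                try {
                    int b = readEnd.read();            // blocks: no data in the pipe yet
                    System.out.println("unblocked, read byte " + b);
                } catch (IOException e) { e.printStackTrace(); }
            });
            reader.start();
            Thread.sleep(500);                         // reader is now blocked in read()
            writeEnd.write(42);                        // this WRITE unblocks the reader
            reader.join();
        }
    }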
  3. Under what circumstances might a WRITE operation to a pipe cause the writing process to block?
    This will only happen if the pipe's internal buffer is full. The WRITEr will become unblocked once some other process has READ some bytes.
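    Purely as an illustration of the converse case, give the pipe a deliberately tiny internal buffer (16 bytes is an arbitrary choice), fill it, and the WRITE blocks until the reader takes a byte out:
    import java.io.*;

    public class PipeWriteBlocks {
        public static void main(String[] args) throws Exception {
            PipedOutputStream writeEnd = new PipedOutputStream();
            PipedInputStream readEnd = new PipedInputStream(writeEnd, 16);  // tiny buffer
            Thread writer = new Thread(() -> {
                try {
                    for (int i = 0; i < 17; i++) {
                        writeEnd.write(i);             // the 17th write blocks: buffer is full
                    }
                    System.out.println("writer finished");
                } catch (IOException e) { e.printStackTrace(); }
            });
            writer.start();
            Thread.sleep(500);                         // writer is now blocked
            readEnd.read();                            // READing a byte frees space and unblocks it
            writer.join();
        }
    }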
  4. What distinguishes a rendezvous, specifically, from a message SEND generally?
    With a rendezvous, messages are not queued.

    The SENDer must wait for some other task to RECEIVE; a RECEIVEr must wait for another task to SEND. (SEND and RECEIVE are synchronous with a rendezvous; asynchronous in a message-passing system which queues messages.)
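    Java's java.util.concurrent.SynchronousQueue behaves exactly this way (it holds no elements, so every put must meet a take), which makes it a handy illustration; the message text and the sleep are arbitrary:
    import java.util.concurrent.SynchronousQueue;

    public class RendezvousDemo {
        public static void main(String[] args) throws Exception {
            SynchronousQueue<String> channel = new SynchronousQueue<>();
            Thread sender = new Thread(() -> {
                try {
                    channel.put("hello");              // SEND: blocks until another task RECEIVEs
                    System.out.println("sender: rendezvous complete");
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            sender.start();
            Thread.sleep(500);                         // sender is now blocked in put()
            String msg = channel.take();               // RECEIVE: both tasks now proceed
            System.out.println("receiver: got " + msg);
            sender.join();
        }
    }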

  5. Process A is streaming media data (audio and video) from a server on a network. Process B is displaying it. Processes A and B share a buffer, to smooth out slight disturbances in the network transfer rate. Show how you might use semaphores to allow process A to put data into the buffer while process B takes data out of the buffer, without race conditions, and without holding up either process needlessly.
    This is the producer-consumer problem. Use 3 semaphores:
    const int BUFFER_CAPACITY = 100; // Say, for example.
    Semaphore emptySlots = new Semaphore(BUFFER_CAPACITY);
    Semaphore fullSlots = new Semaphore(0);
    Semaphore mutualExclusion = new Semaphore(1);
    The code for the producer is:
    void producer(){
        for(/*ever*/;;){
            Object item = produceItem();
            wait(emptySlots); // Wait for buffer space.
            wait(mutualExclusion); // Enter critical section.
            buffer.insertItem(item);
            signal(mutualExclusion); // Leave critical section.
            signal(fullSlots); // Signal the consumer.
        }
    }
    and for the consumer:
    void consumer(){
        for(/*ever*/;;){
            wait(fullSlots); // Wait for item in buffer.
            wait(mutualExclusion); // Enter critical section.
            Object item = buffer.removeItem();
            signal(mutualExclusion); // Leave critical section.
            signal(emptySlots); // Signal the producer.
            consumeItem(item);
        }
    }
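    If you want to run the same structure, one possible translation into Java uses java.util.concurrent.Semaphore, with acquire() playing the role of WAIT and release() the role of SIGNAL; the buffer type, the capacity and the dummy items are arbitrary choices for this sketch:
    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.Semaphore;

    public class ProducerConsumer {
        static final int BUFFER_CAPACITY = 100;
        static final Queue<Object> buffer = new ArrayDeque<>();
        static final Semaphore emptySlots = new Semaphore(BUFFER_CAPACITY);
        static final Semaphore fullSlots = new Semaphore(0);
        static final Semaphore mutualExclusion = new Semaphore(1);

        public static void main(String[] args) {
            new Thread(ProducerConsumer::producer).start();   // both run until interrupted
            new Thread(ProducerConsumer::consumer).start();
        }

        static void producer() {
            try {
                for (;;) {
                    Object item = new Object();        // stands in for produceItem()
                    emptySlots.acquire();              // WAIT for buffer space
                    mutualExclusion.acquire();         // enter critical section
                    buffer.add(item);
                    mutualExclusion.release();         // leave critical section
                    fullSlots.release();               // SIGNAL the consumer
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }

        static void consumer() {
            try {
                for (;;) {
                    fullSlots.acquire();               // WAIT for an item in the buffer
                    mutualExclusion.acquire();         // enter critical section
                    Object item = buffer.remove();
                    mutualExclusion.release();         // leave critical section
                    emptySlots.release();              // SIGNAL the producer
                    System.out.println("consumed " + item);   // stands in for consumeItem(item)
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }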
  6. Task 1 sends two signals to task 2 (using signals, not semaphores). The code of task 2 attempts to wait for two signals, but ends up blocking indefinitely. What has gone wrong? Would the same problem have occurred if the tasks were communicating with semaphores?
    What has gone wrong is that the OS has allocated only a single bit to record whether a signal was sent to the task. When task 1 sent two signals, the first signal set the bit; the second signal had no effect. Task 2 did not block when it performed its first wait (since the signal bit was set), but the fact that two signals were sent has been lost; the second attempt to wait for a signal caused task 2 to block indefinitely.

    No. This wouldn't happen with semaphores. The semaphore counts the number of signals which have been received, so none are lost.
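    The counting behaviour is easy to demonstrate (again with java.util.concurrent.Semaphore, purely as an illustration): two SIGNALs followed by two WAITs, and neither WAIT blocks, because both SIGNALs are remembered in the counter.
    import java.util.concurrent.Semaphore;

    public class SignalsNotLost {
        public static void main(String[] args) throws InterruptedException {
            Semaphore sem = new Semaphore(0);
            sem.release();     // task 1's first SIGNAL:  counter 0 -> 1
            sem.release();     // task 1's second SIGNAL: counter 1 -> 2
            sem.acquire();     // task 2's first WAIT:  counter 2 -> 1, does not block
            sem.acquire();     // task 2's second WAIT: counter 1 -> 0, does not block either
            System.out.println("both WAITs completed");
        }
    }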

  7. One solution to the Dining Philosophers Problem is to provide a single, global semaphore. When a philosopher wishes to eat, the philosopher does a WAIT on the semaphore, then a SIGNAL once finished eating. This way only one philosopher ever picks up any forks, and deadlock is prevented. What is wrong with this solution?
    What is wrong is that only one philosopher gets to eat at any one time. The solution is needlessly restrictive: two philosophers sitting on opposite sides of the table share no forks, so they could safely eat at the same time.
  8. Why is 'run until blocked' scheduling not a good scheduling policy in general?
    It is 'unfair': a single task can hog the CPU. If that task never blocks, the other tasks never get a chance to run at all, and some of them may well be more important.
  9. In a round-robin scheduling system, the task-switch overhead is 3ms. There are four tasks, with quantums of length 20ms, 23ms, 40ms & 100ms. Assuming that all the tasks are CPU bound (they never block), what is the efficiency of the system? What is the worst-case response time?
    Efficiency is...
    useful-time ÷ total-time
    = (20 + 23 + 40 + 100) ÷ (20 + 3 + 23 + 3 + 40 + 3 + 100 + 3)
    = 183 ÷ 195
    = 93.8% (to 1 decimal place)
    Worst-case response time: Assume the 20ms task has just completed its time-slice, so it must wait through four task switches and the other three quanta before it runs again:
    3 + 23 + 3 + 40 + 3 + 100 + 3
    = 175ms
    = about a sixth of a second
  10. In a priority-based scheduling system, give an example of a criterion used to determine internal priority. Give an example of a criterion used to determine external priority.
    [Requires wild guesswork. The students have been told what internal and external priorities are, but not many examples. Internal priorities are calculated (usually dynamically) by the scheduler. External priorities are determined explicitly by the creator of the process, either a human or parent process.]

    Criteria for internal priority:

    • Amount of last quantum used
    • Average % CPU usage so far
    • # open files

    Criteria for external priority:

    • Whether the process belongs to the user or to the OS
    • Importance of the user
    • Whether the process is a background process or not
    • How quickly the user wishes to see results
    • Its deadline (in a soft real-time system)