Traditionally, OSs were introduced to use the machine efficiently: to let many users share the same machine, and to load batch jobs faster than a human operator could. Nowadays it's more important to hide the hardware from application software, to let the user run multiple programs, and so on. (Make the user more efficient, less so the machine.)
Agree? PCs are 'personal computers', only one user.
Disagree? PCs are often shared in offices, in the home. Often networked, so other people and programs access the 'personal' computer. Also, the idea of protected processes is useful when running untrusted code (but it's not something I mentioned in the lectures).
Another 'matter of opinion' one, really. For: filesystem code is often part of the OS kernel. More compellingly: the OS provides the filesystem model. In UNIX, and other OSs with a unified filesystem namespace (all files, directories and devices in the same tree), the OS is responsible for integrating the filesystems into the same (virtual file system) tree.
Against: filesystem code need not be part of the OS kernel, and can run in user space (especially in more modern designs: Plan 9 makes this a principle, and even Linux can serve filesystems from user-space processes). Or consider NFS (the Network File System), where the filesystem is not even on the same computer, let alone built into the operating system.
Oh, lots of things: "Windows Explorer"; programs like "Word" and "Netscape"; DOS commands like "DIR" and "FORMAT"; applets and accessories, like calculators, or the program which sits in the background waiting for CDs to be inserted. Remember that each instance of, say, the calculator program is a different process.
(Because the process might be using the registers, and otherwise they'd get mangled...) So that the next time a process gets a go on the CPU, the state of the machine is exactly as it was when the process was preempted. It is important that each process can ignore the fact that the OS is switching rapidly between it and other processes.
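To make this concrete, here's a hedged sketch in C of the kind of per-process structure the saved state might live in (a 'process control block'; the field names are invented, and a real OS saves whatever its hardware actually requires):

    /* Hypothetical process control block (PCB). On a context switch
       the kernel copies the live machine registers into the outgoing
       process's PCB, then restores the incoming process's PCB into
       the registers, so each process resumes exactly where it left off. */
    struct pcb {
        unsigned long regs[16];   /* general-purpose registers */
        unsigned long pc;         /* program counter */
        unsigned long sp;         /* stack pointer */
        unsigned long psw;        /* processor status word (flags, mode) */
        int           state;      /* runnable, blocked, ... */
    };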
A virtual machine is... central to computer science... How to explain it?
It's a model of a machine which is presented to client software by lower-level software.
The 'virtual machine' presented to a process in a multitasking system is one in which it has the processor and a big lump of memory to itself, and in which there are no interrupts or suchlike hardware details.
It is useful because it allows a simpler model (than reality, which is messy) to be presented to higher-level software.
Advantages: modularisation of code (over a 'monolithic' design)---clear demarcation of responsibility. Better security: each layer only trusts the layer below it.
Disadvantages: still 'monolithic'---difficult to extend. System calls may be expensive if they drill down several layers.
The microkernel architecture moves all but the most fundamental services out of the kernel into user-level processes. With operating systems expected to perform more and more jobs, moving services into processes outside the kernel
- lets the kernel concentrate on its job---makes it smaller, more efficient, more reliable.
- allows the OS to be extended easily (just add new server processes).
- gives better security: server processes need not be an entangled mess of trusted code, and bugs in one process don't trash other processes.
File server (local or network).
VM server---handles mapping files and swapping out pages to disk.
Display server---like the X Window System.
Process server. (It's in the book; presumably it would handle process creation and termination.)
Various device drivers.
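To make the server idea concrete, here is a sketch of what a server's main loop might look like. The IPC calls receive() and send(), the message format, and the opcodes are all invented for illustration; real microkernels (Mach, L4, Minix, ...) each define their own message-passing primitives.

    enum op { OP_READ, OP_WRITE, OP_ERROR };

    struct message {
        int     sender;     /* process to reply to */
        enum op op;         /* requested operation */
        char    data[64];   /* request/reply payload */
    };

    /* Supplied by the (hypothetical) kernel: */
    #define ANY (-1)
    void receive(int from, struct message *m);   /* block for a message */
    void send(int to, struct message *m);        /* deliver a message   */

    /* An ordinary user-level process acting as a file server: */
    void file_server(void)
    {
        struct message m;
        for (;;) {
            receive(ANY, &m);            /* wait for the next request */
            switch (m.op) {
            case OP_READ:  /* ... read the file, fill m.data ... */ break;
            case OP_WRITE: /* ... write m.data out ...           */ break;
            default:       m.op = OP_ERROR; break;
            }
            send(m.sender, &m);          /* reply to the client */
        }
    }

Note that a bug here can only corrupt this server's own address space, which is exactly the security point above.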
Using user mode ensures that certain operations are not available to user-level programs. For example: switching off interrupts, messing with the memory management unit (which would allow access to other processes' memory), or turning the processor off.
It means the processor is switching between them rapidly, giving the illusion of concurrency.
The dispatcher loads the new process state into the machine registers and switches to user mode. (It resumes the chosen process after a task switch.)
The scheduler makes the executive decision about which process to run next (which process the dispatcher should dispatch).
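A hedged sketch of that division of labour in C (the names and process table layout are invented; the policy shown is plain round-robin):

    #define NPROC 8
    enum pstate { RUNNABLE, BLOCKED, DEAD };

    struct pcb { enum pstate state; /* saved registers etc. */ };
    struct pcb ptable[NPROC];

    /* Scheduler: pure policy --- decide which process runs next. */
    int schedule(int current)
    {
        for (int i = 1; i <= NPROC; i++) {
            int p = (current + i) % NPROC;    /* round-robin order */
            if (ptable[p].state == RUNNABLE)
                return p;
        }
        return current;                       /* nothing else to run */
    }

    /* Dispatcher: pure mechanism --- load the chosen process's saved
       state into the machine registers and drop back to user mode.
       (Hand-waving; the real thing is hardware-specific assembler.) */
    void dispatch(int next)
    {
        /* ... restore ptable[next]'s registers, switch to user mode ... */
    }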
- Certain system calls (esp. blocking system calls).
- At the end of an interrupt service routine.
also:
- When a process dies (completes its task).
Runnable processes are ready to take the processor at a microsecond's notice. They have something to do now.
Blocked processes are waiting for an event---for a resource to become available, for some data to be delivered, for a semaphore to become non-zero. They can't run even if the processor has nothing better to do.
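A sketch of how those two states drive things (names invented; a real kernel keeps blocked processes on per-event wait queues rather than scanning a table like this):

    #define NPROC 8
    enum pstate { RUNNABLE, BLOCKED };

    struct pcb { enum pstate state; int event; /* what we await */ };
    struct pcb ptable[NPROC];

    /* Called by a process that must wait, e.g. for data to arrive: */
    void block(struct pcb *p, int event)
    {
        p->state = BLOCKED;
        p->event = event;
        /* ... then call the scheduler to run someone else ... */
    }

    /* Called when the event happens, e.g. from an interrupt handler: */
    void wakeup(int event)
    {
        for (int i = 0; i < NPROC; i++)
            if (ptable[i].state == BLOCKED && ptable[i].event == event)
                ptable[i].state = RUNNABLE;  /* eligible for the CPU again */
    }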
A 'critical section' is a region of code which makes some non-atomic update to a data structure which may be accessed by other processes. It isn't a critical section if the code within is atomic. It isn't a critical section if there's only ever one process accessing/updating the structure/variable. ('Critical section' implies a possible race condition.) No, race conditions only occur when there are multiple concurrent processes.
(Race conditions should not occur in a cooperatively-multitasking system---except if the programmer is really careless---since the current process controls when the next task switch occurs...)
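To make the critical-section idea concrete, here's a minimal runnable demo using POSIX threads (threads rather than processes, but the race is identical): counter++ is a non-atomic read-modify-write, so without the mutex the two threads lose updates and the final count comes up short.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;                    /* the non-atomic update   */
            pthread_mutex_unlock(&lock);  /* leave critical section  */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 2000000 with the lock */
        return 0;
    }

(Compile with cc -pthread; delete the lock/unlock lines to watch the race happen.)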