No. Pages are mapped to arbitrary frames in physical memory (or on disk), so allocating memory does not require contiguous free space, and hence external fragmentation is not a problem. Paging does suffer from internal fragmentation, however: allocated ranges of memory must be rounded up to a whole number of pages, so space is wasted at the end of the last page.
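As a quick illustration of that rounding (a sketch only; the 4 KiB page size and the request size are assumed, not from the question):

```c
#include <stdio.h>

#define PAGE_SIZE 4096  /* assumed page size, for illustration only */

/* Round a requested allocation up to a whole number of pages. */
static unsigned long pages_needed(unsigned long bytes)
{
    return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
    unsigned long request   = 10000;                         /* bytes asked for */
    unsigned long allocated = pages_needed(request) * PAGE_SIZE;

    /* The difference is the internal fragmentation at the end of the last page. */
    printf("requested %lu, allocated %lu, wasted %lu bytes\n",
           request, allocated, allocated - request);
    return 0;
}
```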
Size of virtual memory = 2¹⁶
Size of a page = 2¹²
Number of pages = memory-size ÷ page-size
                = 2¹⁶ ÷ 2¹²
                = 2⁴ = 16
Maximum size of page table is 16 entries.

Number of page frames is 128 = 2⁷
Size of physical memory = page-size × number-of-frames
                        = 2¹² × 2⁷
                        = 2¹⁹ (= 512K)
19-bit physical address
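The same arithmetic, checked with a short C sketch (the bit widths and frame count come straight from the question):

```c
#include <stdio.h>

int main(void)
{
    unsigned long vaddr_bits = 16;    /* 2^16-byte virtual memory */
    unsigned long page_bits  = 12;    /* 2^12-byte pages          */
    unsigned long frames     = 128;   /* = 2^7 page frames        */

    unsigned long pages      = 1UL << (vaddr_bits - page_bits);
    unsigned long phys_bytes = frames << page_bits;

    printf("page-table entries: %lu\n", pages);            /* 16               */
    printf("physical memory:    %lu bytes\n", phys_bytes); /* 524288 = 512K    */
    return 0;
}
```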
<A, 6, 7, 16, 17, X, 37, 3C> (all numbers are hexadecimal and the X indicates an 'absent' page). Where do virtual addresses 0345₁₆ and 6FF0₁₆ map to in physical memory?
Virtual addresses:
    0345 = 0000 0011 0100 0101
    6FF0 = 0110 1111 1111 0000
Page numbers (top 4 bits):
    0000 = 0
    0110 = 6
Offsets (bottom 12 bits):
    0011 0100 0101
    1111 1111 0000
Pages map to frames:
    A₁₆  = 0001010
    37₁₆ = 0110111
Physical addresses:
    000 1010 0011 0100 0101 = 0A345
    011 0111 1111 1111 0000 = 37FF0
(Anyone who says that 6FF0₁₆ is in an 'absent' page has been counting page numbers from 1 instead of from 0.)
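A minimal C sketch of the same translation, using the page table from the question (0xFF is just a stand-in for the 'absent' entry X, and the unlisted entries are left unspecified):

```c
#include <stdio.h>

#define PAGE_SHIFT 12
#define ABSENT     0xFF   /* stand-in for the 'X' (absent) entry */

/* Page table from the question: <A, 6, 7, 16, 17, X, 37, 3C>. */
static const unsigned page_table[16] = {
    0x0A, 0x06, 0x07, 0x16, 0x17, ABSENT, 0x37, 0x3C
    /* remaining entries unspecified in the question */
};

/* Return the 19-bit physical address, or -1 on a page fault. */
static long translate(unsigned vaddr)
{
    unsigned page   = vaddr >> PAGE_SHIFT;               /* top 4 bits     */
    unsigned offset = vaddr & ((1u << PAGE_SHIFT) - 1);  /* bottom 12 bits */
    unsigned frame  = page_table[page];

    if (frame == ABSENT)
        return -1;                                        /* page fault */
    return ((long)frame << PAGE_SHIFT) | offset;
}

int main(void)
{
    printf("0x0345 -> 0x%05lX\n", (unsigned long)translate(0x0345)); /* 0x0A345 */
    printf("0x6FF0 -> 0x%05lX\n", (unsigned long)translate(0x6FF0)); /* 0x37FF0 */
    return 0;
}
```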
Set up several pages in each process's page table referring to the same physical frames. If process 1, for example, has memory which it does not wish to share, the frame numbers for those pages simply do not appear in process 2's page table.
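As a toy illustration (the frame numbers below are made up, not from any question), two per-process page tables can name the same frame for a shared page while a private page appears in only one of them:

```c
#include <stdio.h>

#define ABSENT 0xFFFFFFFFu

/* Toy per-process page tables; the frame numbers are invented.      */
/* Page 2 is shared: both tables name physical frame 0x37.           */
/* Page 3 is private to process 1: process 2 has no mapping for it.  */
static const unsigned proc1_page_table[4] = { 0x10, 0x11, 0x37, 0x20 };
static const unsigned proc2_page_table[4] = { 0x12, 0x13, 0x37, ABSENT };

int main(void)
{
    printf("page 2 shared?  %s\n",
           proc1_page_table[2] == proc2_page_table[2] ? "yes" : "no");
    printf("page 3 private? %s\n",
           proc2_page_table[3] == ABSENT ? "yes" : "no");
    return 0;
}
```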
The granularity hints let the MMU know whether it can use a single entry in the translation lookaside buffer (TLB) to represent several consecutive page-to-frame mappings, thus saving precious TLB entries.
The instruction 'faults', and the operating system gets control. (Typically what happens then is: the faulting thread becomes 'blocked', the OS schedules the page to be brought in from disk, and eventually, once the page has been read in, the OS restarts the faulting instruction.)
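Sketched in C with entirely hypothetical function names (this is not any real kernel's API), the sequence looks roughly like this:

```c
/* Hypothetical sketch of the fault path described above. */
typedef unsigned long vaddr_t;
typedef unsigned long frame_t;
struct thread;   /* opaque here */

void block_thread(struct thread *t);
void unblock_thread(struct thread *t);
void schedule_disk_read(vaddr_t addr, struct thread *t);
void map_page(struct thread *t, vaddr_t addr, frame_t frame);

/* Entered when an access to an unmapped page traps into the OS. */
void handle_page_fault(struct thread *t, vaddr_t faulting_addr)
{
    block_thread(t);                      /* the thread cannot run until the page arrives */
    schedule_disk_read(faulting_addr, t); /* ask the disk driver to bring the page in     */
}

/* Called from the disk-completion path once the page has been read in. */
void page_read_complete(struct thread *t, vaddr_t faulting_addr, frame_t frame)
{
    map_page(t, faulting_addr, frame);    /* install the page-to-frame mapping         */
    unblock_thread(t);                    /* the faulting instruction is then restarted */
}
```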
in registers:
- It's faster.
in memory:
- Page table can be bigger.
- Context switches are faster (only change pointer to table).
The disadvantages of the single-level page table are to do with it not scaling well:
- Large page tables require contiguous memory which is difficult to allocate.
- If the virtual address space is not 'full', big chunks of the page table may be wasted on unmapped memory.
- You can't swap out parts of a single-level page table.
With a multilevel page table, the page number part of the virtual address is split into two or more parts (each corresponding to a page table level). The first part of the page number is used as an offset into the process's top-level page table. Each top-level page table entry contains a pointer to one of the second-level page tables.
The second part of the page number is used as an index into the second-level table. Say there were only two levels; the second-level page table would point to the physical page frame to which the page is mapped. (If there were n levels, the indirection would continue until the nth-level page table, which would point to the physical frame.)
The offset part of the virtual address is used to specify an address within the physical frame, as with single-level paging.
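A minimal two-level lookup in C, assuming an illustrative 32-bit address split into 10 + 10 + 12 bits (these widths are not from the question):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative split: 10-bit top index, 10-bit second index, 12-bit offset. */
#define OFFSET_BITS 12
#define LEVEL_BITS  10
#define LEVEL_MASK  ((1u << LEVEL_BITS) - 1)
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)
#define ABSENT      UINT32_MAX

typedef struct {
    uint32_t *entries;                       /* page -> frame, or ABSENT        */
} level2_table;

typedef struct {
    level2_table *tables[1 << LEVEL_BITS];   /* top level: may contain NULLs    */
} level1_table;

/* Translate a 32-bit virtual address, or return ABSENT on a page fault. */
static uint32_t translate(const level1_table *top, uint32_t vaddr)
{
    uint32_t top_idx = (vaddr >> (OFFSET_BITS + LEVEL_BITS)) & LEVEL_MASK;
    uint32_t mid_idx = (vaddr >> OFFSET_BITS) & LEVEL_MASK;
    uint32_t offset  = vaddr & OFFSET_MASK;

    level2_table *second = top->tables[top_idx];
    if (second == NULL)                       /* whole second-level table absent */
        return ABSENT;

    uint32_t frame = second->entries[mid_idx];
    if (frame == ABSENT)                      /* individual page absent */
        return ABSENT;

    return (frame << OFFSET_BITS) | offset;   /* frame number followed by offset */
}

int main(void)
{
    static uint32_t frames[1 << LEVEL_BITS];
    static level2_table second = { frames };
    static level1_table top;

    /* Map virtual page 0x00001 (top index 0, second index 1) to frame 0x2A. */
    for (size_t i = 0; i < (1u << LEVEL_BITS); i++)
        frames[i] = ABSENT;
    frames[1] = 0x2A;
    top.tables[0] = &second;

    printf("0x00001ABC -> 0x%08X\n", (unsigned)translate(&top, 0x00001ABC)); /* 0x0002AABC */
    return 0;
}
```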
What is the memory access overhead on a TLB miss? ('Overhead' means the time taken in addition to the 70ns to eventually access data in physical memory.) What is the average overhead when using the TLB? What is the average total memory access time? (And how much slower is this than accessing physical memory directly?)
On a TLB miss, overhead is:
    12ns + (3 × 70ns) = 222ns
Average overhead is:
    (hit-overhead × hit-rate) + (miss-overhead × miss-rate)
    = (12ns × 0.94) + (222ns × 0.06)
    = 24.6ns
Average access-time is:
    70ns + 24.6ns = 94.6ns
How much slower is 94.6ns than 70ns?
    overhead ÷ 70ns
    = 24.6ns ÷ 70ns
    = 0.351 (35.1%)
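The same calculation as a quick C check (all numbers taken from the question):

```c
#include <stdio.h>

int main(void)
{
    double mem_access   = 70.0;   /* ns, one physical memory access */
    double tlb_lookup   = 12.0;   /* ns, TLB lookup                 */
    double hit_rate     = 0.94;
    double miss_penalty = tlb_lookup + 3 * mem_access;   /* 222 ns  */

    double avg_overhead = tlb_lookup * hit_rate
                        + miss_penalty * (1.0 - hit_rate);  /* 24.6 ns */
    double avg_access   = mem_access + avg_overhead;        /* 94.6 ns */

    printf("average overhead:    %.1f ns\n", avg_overhead);
    printf("average access time: %.1f ns\n", avg_access);
    printf("slowdown:            %.1f%%\n", 100.0 * avg_overhead / mem_access);
    return 0;
}
```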