
Question

Submit your homework solution via email by the end of the due date. The subject line will be 412-h09-Last, First, where you use your own name. E.g., my submission would have the subject line 412-h09-Morris-Gerald. The PDF will be named 412-h09-last-first.pdf (your names, please).

1. (a) True or false: virtual memory can allow a program that is larger than physical memory to run on a computer. (b) True or false: a virtual memory system divides physical memory into pages that can be allocated to different processes. (c) True or false: a page table is used in conjunction with the memory management unit to map a virtual address to a physical address. (d) True or false: a Translation Lookaside Buffer (TLB) is an on-chip cache to speed up address translation. (e) True or false: a TLB caches page table entries.

2. Suppose the virtual memory space of a certain computer is 16MB. Further suppose that the computer has 1MB of physical memory. Finally, suppose the virtual memory system uses 8KB pages. (a) How many pages does this system have? (b) How many page frames does the system have? (c) How many offset bits are there?

3. Suppose the virtual memory space of a certain computer is 16TB. Further suppose there is 4GB of physical memory and that the system uses 16KB pages. Finally, assume each page table entry requires 16 bytes. Clearly, if we use a traditional page table, then pages = virtual memory size / page size = 16TB / 16KB per page = 2^44 / 2^14 = 2^30 pages. Since each page table entry requires 16 bytes, we would need 16 bytes x 2^30 = 2^34 bytes = 16GB just for the page table! One solution to this "large memory" problem is to use a multi-level page table, and another solution is to use an inverted page table. Suppose we choose the inverted page table approach. How much memory is needed for the inverted page table (assume each entry is still 16 bytes as before)?

4. Assume a block size of 256 bytes, a clock rate of 1GHz, an L1 miss rate of 2%, and that main memory takes 100ns of overhead and then delivers 16 bytes per clock cycle. What is the AMAT?

5. True or false. (a) Memory hierarchy caches are based on the principle of locality of reference (temporal and spatial). (b) AMAT is based on three factors: hit time, miss rate, and miss penalty (or miss time). (c) A direct-mapped cache could also be referred to as 1-way set associative. (d) The LRU replacement policy can be reasonably approximated by random replacement, at least for large caches. (e) Write-back caches often use no-write-allocate on a write miss. (f) Virtual memory can be larger than physical memory. (g) The miss penalty for a cache miss is significantly higher than the miss penalty for a page fault. (h) A virtual memory page can be either in physical memory or on disk. (i) When the CPU references a virtual page that is on disk, a page fault occurs. (j) The CPU produces a physical address that is translated to a virtual address to access memory.

6. Assume the seek time for a given hard disk drive is 8ms, that the disk drive runs at 7200 RPM, and that the electronics can support a peak bandwidth of 58 MB/s. What is the average time it would take to fetch a 4KB block of data from the hard drive?
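
The arithmetic quoted in problem 3 can be checked with a short Python sketch; the 16 TB, 16 KB, and 16-byte figures are the ones given in the prompt.

# Check of the traditional-page-table arithmetic quoted above.
VIRTUAL_SIZE = 16 * 2**40    # 16 TB virtual address space
PAGE_SIZE = 16 * 2**10       # 16 KB pages
PTE_SIZE = 16                # 16 bytes per page table entry

pages = VIRTUAL_SIZE // PAGE_SIZE      # 2^44 / 2^14 = 2^30 pages
table_bytes = pages * PTE_SIZE         # 16 B * 2^30 = 2^34 B

print(f"pages      = 2^{pages.bit_length() - 1}")      # pages      = 2^30
print(f"page table = {table_bytes // 2**30} GB")        # page table = 16 GB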

Explanation / Answer

Answer:

Question 1

a) True

Explanation:

Virtual memory makes use of the hard disk in addition to physical RAM, whose size is limited. Whenever information currently on disk is needed in RAM, pages are swapped between RAM and the hard disk. In this way, programs larger than RAM can also run on the computer.

b) True

Explanation:

Every virtual memory implementation divides a virtual address space into blocks of contiguous virtual memory addresses, called pages, which are usually 4 KB in size.
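
For illustration only, here is a minimal Python sketch (with a made-up address) of how a virtual address is split into a page number and an offset under the 4 KB page size mentioned above.

PAGE_SIZE = 4 * 1024              # 4 KB pages, as above
OFFSET_BITS = 12                  # log2(4096)

vaddr = 0x00032A7C                # hypothetical virtual address
page_number = vaddr >> OFFSET_BITS
offset = vaddr & (PAGE_SIZE - 1)

print(hex(page_number), hex(offset))    # 0x32 0xa7c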

c) True

Explanation:

MMUs use an in-memory table of items called a "page table", containing one "page table entry" (PTE) per page, to map virtual page numbers to physical page numbers in main memory.
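
A minimal Python sketch of that lookup, assuming 4 KB pages and a made-up page table; a real MMU performs this translation in hardware.

# Toy page table: virtual page number -> physical frame number (values are made up).
PAGE_SIZE = 4 * 1024
OFFSET_BITS = 12
page_table = {0x00: 0x15, 0x01: 0x03, 0x32: 0x2B}

def translate(vaddr):
    vpn = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    pfn = page_table[vpn]            # a missing entry would mean a page fault
    return (pfn << OFFSET_BITS) | offset

print(hex(translate(0x00032A7C)))    # VPN 0x32 -> PFN 0x2B, prints 0x2ba7c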

d) False

Explanation:

A TLB is not an on-chip cache or CPU cache in the usual sense; instead, it is part of the memory management unit (MMU).

e) True

Explanation:

A translation lookaside buffer (TLB) has a fixed number of slots containing page table entries and segment table entries.
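
A minimal Python sketch of that idea, treating the TLB as a small fixed-size cache of page table entries; the slot count and mappings are made up, and the replacement policy here is simply LRU.

from collections import OrderedDict

TLB_SLOTS = 4                        # fixed number of slots (made-up size)
tlb = OrderedDict()                  # virtual page number -> cached page table entry
page_table = {vpn: vpn + 0x100 for vpn in range(32)}   # made-up page table

def lookup(vpn):
    if vpn in tlb:                   # TLB hit: no page-table walk needed
        tlb.move_to_end(vpn)
        return tlb[vpn]
    pte = page_table[vpn]            # TLB miss: walk the page table
    if len(tlb) == TLB_SLOTS:        # evict the least recently used entry
        tlb.popitem(last=False)
    tlb[vpn] = pte
    return pte

for vpn in [1, 2, 1, 5, 7, 9, 1]:
    print(vpn, hex(lookup(vpn)))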