Question
Please, I found the answer, but I need a step-by-step solution so that I can understand it.
The LRU replacement policy is based on the assumption that if address A1 was accessed less recently than address A2 in the past, then A2 will be accessed again before A1 in the future. Hence, A2 is given priority over A1. Discuss how this assumption fails to hold when a loop larger than the instruction cache is being continuously executed. For example, consider a fully associative 128-byte instruction cache with a 4-byte block (each block can hold one instruction). The cache uses an LRU replacement policy.
a. What is the asymptotic instruction miss rate for a 128-byte loop with a large number of iterations?
b. Repeat part (a) for loop sizes of 220 bytes and 280 bytes.
c. If the cache replacement policy is changed to most recently used (MRU) (replace the most recently accessed cache line), which of the three above cases (128-, 220-, or 280-byte loops) would benefit from this policy?
Explanation / Answer
a) A 128-byte loop fits exactly in the 128-byte cache: the cache's 32 blocks of 4 bytes each hold all 32 instructions of the loop. After the first iteration loads every block, each later iteration hits on every instruction, so the asymptotic miss rate is 0%.
b) 100% miss rate for both the 220-byte and the 280-byte loop. The cache is fully associative and uses LRU, so once the loop is larger than the cache, the block that will be needed soonest (the oldest block of the loop) is exactly the one LRU evicts. Every access therefore misses in the steady state, for both loop sizes.
c) The 220-byte and 280-byte loops benefit from MRU. By evicting the most recently used line on a miss, MRU keeps the first part of the loop resident across iterations, so most accesses hit even though the loop does not fit. The 128-byte loop already has a 0% miss rate under LRU, so MRU cannot improve it.
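You can check all three cases with a short simulation. This is a minimal sketch, not part of the original answer: the fully associative cache is modeled as a Python OrderedDict, and the `miss_rate` helper, its warm-up scheme, and the iteration count are my own illustrative choices.

```python
from collections import OrderedDict

def miss_rate(loop_bytes, cache_bytes=128, block_bytes=4,
              policy="LRU", iterations=1000):
    """Asymptotic miss rate of a straight-line loop of `loop_bytes` bytes
    on a fully associative cache, measured after a warm-up period."""
    num_blocks = cache_bytes // block_bytes
    cache = OrderedDict()          # keys ordered least -> most recently used
    hits = misses = 0
    warmup = iterations // 2       # skip the transient, measure steady state
    for it in range(iterations):
        for addr in range(0, loop_bytes, block_bytes):
            if addr in cache:
                cache.move_to_end(addr)        # this block is now the MRU
                if it >= warmup:
                    hits += 1
            else:
                if it >= warmup:
                    misses += 1
                if len(cache) == num_blocks:
                    # last=False pops the LRU entry, last=True pops the MRU entry
                    cache.popitem(last=(policy == "MRU"))
                cache[addr] = True
    return misses / (misses + hits)

for size in (128, 220, 280):
    for policy in ("LRU", "MRU"):
        print(f"{size}-byte loop, {policy}: {miss_rate(size, policy=policy):.0%}")
```

Under LRU this prints 0% for the 128-byte loop and 100% for the 220- and 280-byte loops; under MRU the two oversized loops miss far less than 100% of the time, which is exactly the benefit part (c) asks about.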