
Problem 6: Consider a computer system with 16 MB memory and 16 KB cache with 8-byte lines


Question

Problem 6: Consider a computer system with 16 MB memory and a 16 KB cache with 8-byte lines.

a) Find the size in bits of the byte offset, line/set, and tag in the case of direct mapping, 2-way set associative, and 4-way set associative.

b) For the memory reference of address 0x345678, show the values (in hexadecimal) of the byte offset, line/set, and tag in each of the three cases above.

c) Assuming one dirty bit and one valid bit per cache line, what is the number of overhead bytes compared to the usable cache memory size in each of the three cases above?

d) Fill the table below to show the values of V and D, the tag bits, whether there is a cache hit or miss, and the action taken for the following memory access sequence, for the two cases of direct mapping and 2-way set associative:

o Read sequence 0x345678, 0xFC167B, 0xD8967A. Followed by write sequence 0x21D678. Followed by read sequence 0xAE567F, 0x541679.

| Operation / Address | Set | Tag | V | D | H | Line 1 Action Taken | Line 2 Action Taken |
| Read 0x345678       |     |     |   |   |   |                     |                     |
| Read 0xFC167B       |     |     |   |   |   |                     |                     |
| Read 0xD8967A       |     |     |   |   |   |                     |                     |
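
For reference, the field widths asked for in parts (a) through (c) follow directly from the cache geometry: 16 MB of memory means a 24-bit address, an 8-byte line gives a 3-bit byte offset, the line/set index is log2(number of sets), and the tag takes the remaining bits; the per-line overhead is one valid bit, one dirty bit, and the tag. A minimal C sketch of that arithmetic (the helper `ilog2` and the variable names are my own, not part of the problem):

```c
#include <stdio.h>

/* Integer log2 for power-of-two values. */
static unsigned ilog2(unsigned x) {
    unsigned b = 0;
    while (x > 1) { x >>= 1; b++; }
    return b;
}

int main(void) {
    const unsigned addr_bits   = 24;               /* 16 MB memory -> 24-bit addresses */
    const unsigned cache_bytes = 16 * 1024;        /* 16 KB cache                      */
    const unsigned line_bytes  = 8;                /* 8-byte lines                     */
    const unsigned lines       = cache_bytes / line_bytes;   /* 2048 lines             */
    const unsigned ways[]      = {1, 2, 4};        /* direct-mapped, 2-way, 4-way      */
    const unsigned addr        = 0x345678;         /* reference address from part (b)  */

    for (int i = 0; i < 3; i++) {
        unsigned sets   = lines / ways[i];
        unsigned offset = ilog2(line_bytes);             /* byte-offset bits */
        unsigned index  = ilog2(sets);                   /* line/set bits    */
        unsigned tag    = addr_bits - index - offset;    /* tag bits         */

        /* Part (c): one valid bit + one dirty bit + tag bits of overhead per line. */
        unsigned overhead_bytes = lines * (2 + tag) / 8;

        printf("%u-way: offset=%u index=%u tag=%u bits, overhead=%u bytes\n",
               ways[i], offset, index, tag, overhead_bytes);
        printf("  0x%06X -> tag=0x%X set=0x%X offset=0x%X\n",
               addr,
               addr >> (offset + index),
               (addr >> offset) & ((1u << index) - 1),
               addr & ((1u << offset) - 1));
    }
    return 0;
}
```

For the direct-mapped case this gives a 3-bit offset, an 11-bit index, and a 10-bit tag; each doubling of associativity halves the number of sets, removing one index bit and adding one tag bit.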

Explanation / Answer

If instruction results are only passed through the register file (no forwarding/bypassing) and registers are only read just before EX, then your diagram looks correct. (You might consider the empty stages before IF for SUB and BNEZ to be stall cycles, since normally the next instruction's IF would immediately follow the IF stage of the previous instruction; on the other hand, that could be seen as cluttering the diagram.)

However, a 5-stage pipeline is usually enhanced to avoid most of the above stalls by forwarding results directly from the end of the EX stage (or the end of the M stage for loads) of the result-producing instruction to the beginning of the EX stage of the dependent instruction. (For store instructions, the value to be stored in memory may not be needed until the M or even the W stage, so a designer might consider adding forwarding for that case as well. With this simple pipeline, that would only matter for a few instruction sequences implementing a memory-to-memory move, since loads are the only instructions with a latency greater than one. For a wide superscalar, such forwarding could allow something like "ADD R3, R2, R1; SW R3, 0(R4);" to begin execution in the same cycle.)

With such an enhanced pipeline, the ADD has only one stall cycle (after ID), because the result is forwarded from the end of the M stage of "LW R2, 400(R4)" to the beginning of the ADD's EX.

|                 | 1  | 2  | 3  | 4  | 5 | 6  | 7  | 8  | 9  | 10 | 11 |
| LW R1, 0(R4)    | IF | ID | EX | M  | W |    |    |    |    |    |    |
| LW R2, 400(R4)  |    | IF | ID | EX | M | W  |    |    |    |    |    |
| ADD R3, R1, R2  |    |    | IF | ID | *  | EX | M  | W  |    |    |    |
| SW R3, 0(R4)    |    |    |    | IF | *  | ID | EX | M  | W  |    |    |
| SUB R4, R4, #4  |    |    |    |    | *  | IF | ID | EX | M  | W  |    |
| BNEZ R4, L1     |    |    |    |    |    |    | IF | ID | EX | M  | W  |
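
The schedule above can also be checked mechanically. Below is a minimal sketch (the instruction encoding, struct, and field names are my own, not from the original answer) of the scheduling rule it assumes: single-issue, in-order, with full forwarding, where an ALU result can be forwarded from the end of its EX stage but a load result only from the end of its M stage, so a load-use dependence costs exactly one stall cycle.

```c
#include <stdio.h>

#define NREGS 32

typedef struct {
    const char *text;
    int is_load;     /* 1 for LW                                        */
    int dst;         /* destination register, -1 if none                */
    int src[2];      /* registers read at the start of EX, -1 if unused */
} Insn;

int main(void) {
    Insn prog[] = {
        {"LW   R1, 0(R4)",   1,  1, {4, -1}},
        {"LW   R2, 400(R4)", 1,  2, {4, -1}},
        {"ADD  R3, R1, R2",  0,  3, {1,  2}},
        {"SW   R3, 0(R4)",   0, -1, {3,  4}},  /* store data could also be forwarded later, to M */
        {"SUB  R4, R4, #4",  0,  4, {4, -1}},
        {"BNEZ R4, L1",      0, -1, {4, -1}},
    };
    int n = (int)(sizeof prog / sizeof prog[0]);

    int ready[NREGS] = {0};   /* earliest cycle each register can feed EX       */
    int prev_ex = 2;          /* so the first instruction reaches EX in cycle 3 */

    for (int i = 0; i < n; i++) {
        int ex = prev_ex + 1;                       /* in-order, one EX per cycle */
        for (int s = 0; s < 2; s++)                 /* RAW hazards on EX sources  */
            if (prog[i].src[s] >= 0 && ready[prog[i].src[s]] > ex)
                ex = ready[prog[i].src[s]];

        int stalls = ex - (prev_ex + 1);
        if (prog[i].dst >= 0)                       /* when the result becomes forwardable */
            ready[prog[i].dst] = ex + (prog[i].is_load ? 2 : 1);

        printf("%-18s EX=%2d  M=%2d  W=%2d  stalls=%d\n",
               prog[i].text, ex, ex + 1, ex + 2, stalls);
        prev_ex = ex;
    }
    return 0;
}
```

Tracing this by hand gives exactly one stall (before the ADD's EX) and a final write-back in cycle 11, which matches the table.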

Such optimizations add complexity to the design, but avoiding unnecessary stalls can noticeably improve performance.