Saturday, June 23, 2018

A cache is a small memory that stores data currently being used by the processor. A memory block cannot be placed just anywhere in the cache; where it may go is restricted by the placement policy. In other words, the placement policy determines where a particular memory block can be placed when it enters the cache.

There are three different policies for placing a memory block in the cache: direct-mapped, fully associative, and set-associative.


Direct-Mapped Cache

In a direct-mapped cache structure, the cache is organized into multiple sets with one cache line per set. Based on its address, a memory block can occupy only one particular cache line. The cache can be framed as a column matrix (n * 1).

To place the block in cache

  • The set is determined by the index bits derived from the address of the memory block.
  • The memory block is placed in the identified set and the tag is stored in the tag field associated with the set.
  • If the cache line was previously occupied, the new data replaces the memory block currently in the line.

To search for words in cache

  • The set is identified by the index bits of the address.
  • The tag bits derived from the memory block address are compared with the tag bits associated with the set. If the tag matches, it is a cache hit and the cache block is returned to the processor. Otherwise, it is a cache miss and the block is fetched from lower memory (main memory, disk).
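The placement and lookup steps above can be sketched as a toy direct-mapped cache in Python. This is a minimal sketch, not a hardware model: `split_address` and `access` are hypothetical helper names, and the bit widths (2 offset bits, 6 index bits) are chosen to match the worked example later in this section.

```python
OFFSET_BITS = 2   # 4-byte blocks
INDEX_BITS = 6    # 64 sets, one cache line each

# Each entry holds the tag stored in that set's single line (None = invalid).
cache = [None] * (1 << INDEX_BITS)

def split_address(addr):
    """Split an address into (tag, index, offset) fields."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

def access(addr):
    """Return True on a cache hit; on a miss, fill the set's single line."""
    tag, index, _ = split_address(addr)
    if cache[index] == tag:
        return True        # tag matches: cache hit
    cache[index] = tag     # miss: fetch the block and replace the line
    return False

access(0x0000)   # miss: fills set 0
access(0x0000)   # hit
access(0x0100)   # miss: same set 0, different tag, so the line is replaced
```

Note that 0x0000 and 0x0100 conflict even though the cache is almost empty; that is exactly the conflict miss described under the disadvantages below.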

Advantages

  • This placement policy is power efficient because it avoids searching through all the cache lines.
  • The placement policy and replacement policy are simple.
  • It requires cheap hardware, because only one tag needs to be checked at a time.

Disadvantages

  • It has a lower cache hit rate, since only one cache line is available per set. Every time a new memory block is referenced to the same set, the cache line is replaced, which causes a conflict miss.

Example

Consider a main memory of 16 kilobytes, organized as 4-byte blocks, and a cache of 256 bytes with a block size of 4 bytes.

Because each cache block is 4 bytes, the total number of sets in the cache is 256/4, which equals 64 sets.

The address entering the cache is divided into bits for the offset, index, and tag.

The offset corresponds to the bits used to select the byte to be accessed from the cache line.

In the example, there are 2 offset bits, used to address the 4 bytes of a cache line.

The index corresponds to the bits used to determine the cache set.

In the example, there are 6 index bits, used to address the 64 sets of the cache.

The tag corresponds to the remaining bits.

In the example, there are 6 tag bits (14 - (6 + 2)), which are stored in the tag field to be matched against the address on a cache request.

Address 0x0000 (tag - 00_0000, index - 00_0000, offset - 00) maps to block 0 of memory and occupies set 0 of the cache.

Address 0x0004 (tag - 00_0000, index - 00_0001, offset - 00) maps to block 1 of memory and occupies set 1 of the cache.

Similarly, address 0x00FF (tag - 00_0000, index - 11_1111, offset - 11) maps to block 63 of memory and occupies set 63 of the cache.

Address 0x0100 (tag - 00_0001, index - 00_0000, offset - 00) maps to block 64 of memory and occupies set 0 of the cache.
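The four address splits above can be checked mechanically; `decompose` is a hypothetical helper using the bit widths from this example (2 offset bits, 6 index bits):

```python
def decompose(addr, offset_bits=2, index_bits=6):
    """Split a 14-bit address into (tag, index, offset, block number)."""
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    block = addr >> offset_bits   # memory block number
    return tag, index, offset, block

print(decompose(0x0000))   # (0, 0, 0, 0): block 0 -> set 0
print(decompose(0x0004))   # (0, 1, 0, 1): block 1 -> set 1
print(decompose(0x00FF))   # (0, 63, 3, 63): block 63 -> set 63
print(decompose(0x0100))   # (1, 0, 0, 64): block 64 -> set 0 again
```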


Fully Associative Cache

In a fully associative cache, the cache is organized as a single set containing multiple cache lines. A memory block can occupy any of the cache lines. The cache organization can be framed as a row matrix (1 * m).

To place the block in cache

  • A cache line is selected based on its valid bit. If the valid bit is 0, the new memory block can be placed in that cache line; otherwise, it must be placed in another cache line whose valid bit is 0.
  • If the cache is completely full, a block is evicted and the new memory block is placed in the freed cache line.
  • Which memory block is evicted from the cache is determined by the replacement policy.

To search for words in cache

  • The tag field of the memory address is compared with the tag bits associated with all cache lines. If it matches, the block is present in the cache and it is a cache hit. If it does not match, it is a cache miss and the block must be fetched from lower memory.
  • Based on the offset, a byte is selected and returned to the processor.
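These steps can be sketched as a toy fully associative cache in Python. This is an illustrative sketch: the names are made up, and FIFO eviction is an assumption standing in for whatever replacement policy is chosen.

```python
from collections import OrderedDict

NUM_LINES = 64    # 256-byte cache of 4-byte blocks, all in one set
OFFSET_BITS = 2

# tag -> cached block; insertion order doubles as FIFO eviction order.
lines = OrderedDict()

def access(addr):
    """Return True on a cache hit; on a miss, fill a line (FIFO eviction)."""
    tag = addr >> OFFSET_BITS          # no index bits: everything else is tag
    if tag in lines:                   # compare against every stored tag
        return True
    if len(lines) >= NUM_LINES:
        lines.popitem(last=False)      # cache full: evict the oldest line
    lines[tag] = "block %d" % tag
    return False
```

Unlike the direct-mapped case, 0x0000 and 0x0100 can reside in the cache at the same time, so there are no conflict misses, only capacity misses.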

Advantages

  • The fully associative cache structure provides the flexibility to place a memory block in any of the cache lines, and hence full utilization of the cache.
  • The placement policy provides a better cache hit rate.
  • It offers the flexibility to use a variety of replacement algorithms on a cache miss.

Disadvantages

  • The placement policy is slow, as it takes time to iterate through all the cache lines.
  • The placement policy is power hungry, because it must search the entire cache to find a block.
  • It is the most expensive of all methods, due to the high cost of the associative-comparison hardware.

Example

Consider a main memory of 16 kilobytes, organized as 4-byte blocks, and a cache of 256 bytes with a block size of 4 bytes.

Because each cache block is 4 bytes, the cache holds 256/4 = 64 cache lines, all in a single set.

The address entering the cache is divided into bits for the offset and tag.

The offset corresponds to the bits used to select the byte to be accessed from the cache line.

In the example, there are 2 offset bits, used to address the 4 bytes of a cache line, and the remaining bits form the tag.

In the example, there are 12 tag bits (14 - 2), which are stored in the tag field of the cache line to be matched against the address on a cache request.

Because any memory block can map to any cache line, a memory block can occupy any one of the cache lines, as chosen by the replacement policy.


Set Associative Cache

A set-associative cache is a trade-off between a direct-mapped cache and a fully associative cache.

A set-associative cache can be imagined as an (n * m) matrix. The cache is divided into 'n' sets and each set contains 'm' cache lines. A memory block is first mapped onto a set and then placed into any cache line of that set.

The range of caches from direct-mapped to fully associative is a continuum of levels of set associativity. (A direct-mapped cache is one-way set associative, and a fully associative cache with m cache lines is m-way set associative.)

Many processor caches in today's designs are either direct-mapped, two-way set associative, or four-way set associative.

To place the block in cache

  • The set is determined by the index bits derived from the address of the memory block.
  • The memory block is placed in the identified set and the tag is stored in the tag field associated with the set.
  • If the cache line is occupied, the new data replaces the cache block identified with the help of the replacement policy.

To search for words in cache

  • The set is determined by the index bits derived from the address of the memory block.
  • The tag bits are compared with the tags of all cache lines present in the selected set. If the tag matches, it is a cache hit; the corresponding byte is retrieved and sent to the processor. If the tag does not match, it is a cache miss and the block is fetched from lower memory.
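A minimal Python sketch of these steps for the 2-way set-associative case; the names are illustrative, and LRU eviction is an assumption standing in for the unspecified replacement policy:

```python
OFFSET_BITS = 2
INDEX_BITS = 5    # 32 sets
WAYS = 2          # 2 cache lines per set

# Each set is a list of up to WAYS tags, ordered least- to most-recently used.
sets = [[] for _ in range(1 << INDEX_BITS)]

def access(addr):
    """Return True on a cache hit; on a miss, fill a way (LRU eviction)."""
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    ways = sets[index]
    if tag in ways:            # compare only against the lines of this set
        ways.remove(tag)
        ways.append(tag)       # move to the most-recently-used position
        return True
    if len(ways) >= WAYS:
        ways.pop(0)            # set full: evict the least recently used line
    ways.append(tag)
    return False
```

With two ways, 0x0000 and 0x0100 (which share index 0) coexist; a third conflicting block is needed before an eviction occurs.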

Advantages

  • The placement policy is a trade-off between direct-mapped and fully associative caches.
  • It offers the flexibility to use replacement algorithms when a cache miss occurs.

Disadvantages

  • The placement policy does not use all the cache lines available in the cache effectively, so the cache can still suffer from conflict misses.

Example

Consider a main memory of 16 kilobytes, organized as 4-byte blocks, and a 2-way set-associative cache of 256 bytes with a block size of 4 bytes.

Because each cache block is 4 bytes, the cache holds 256/4 = 64 cache lines, organized into 32 sets of 2 lines each.

In the example, there are 2 offset bits, used to address the 4 bytes of a cache line; 5 index bits, used to address the 32 sets of the cache; and 7 tag bits (14 - (5 + 2)), which are stored in the tag field to be matched against the address on a cache request.

Address 0x0000 (tag - 000_0000, index - 0_0000, offset - 00) maps to block 0 of memory and occupies set 0 of the cache. The block occupies one of the cache lines of set 0, as determined by the replacement policy for the cache.

Address 0x0004 (tag - 000_0000, index - 0_0001, offset - 00) maps to block 1 of memory and occupies one of the cache lines of set 1 of the cache.

Similarly, address 0x00FF (tag - 000_0001, index - 1_1111, offset - 11) maps to block 63 of memory and occupies one of the cache lines of set 31 of the cache.

Address 0x0100 (tag - 000_0010, index - 0_0000, offset - 00) maps to block 64 of memory and occupies one of the cache lines of set 0 of the cache.
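The address splits in this 2-way example can be checked the same way; `decompose` is a hypothetical helper using 2 offset bits and 5 index bits:

```python
def decompose(addr, offset_bits=2, index_bits=5):
    """Split a 14-bit address into (tag, index, offset) fields."""
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

print(decompose(0x0000))   # (0, 0, 0): set 0
print(decompose(0x0004))   # (0, 1, 0): set 1
print(decompose(0x00FF))   # (1, 31, 3): set 31
print(decompose(0x0100))   # (2, 0, 0): set 0 again
```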


See also

  • Associativity
  • Cache replacement policy
  • Cache hierarchy
  • Cache write policy
  • Cache coloring


References

Source of the article: Wikipedia
