Slide #1.

Memory & I/O Systems
Chapter 8, Digital Design and Computer Architecture, 2nd Edition
David Money Harris and Sarah L. Harris


Slide #2.

Chapter 8 :: Topics
• Introduction
• Memory System Performance Analysis
• Caches
• Virtual Memory
• Memory-Mapped I/O
• Summary


Slide #3.

Introduction
• Computer performance depends on:
  – Processor performance
  – Memory system performance
[Figure: the memory interface, with the processor connected to memory by MemWrite, Address, WriteData, WE, and ReadData signals, clocked by CLK]


Slide #4.

Processor-Memory Gap
• In prior chapters, we assumed memory could be accessed in 1 clock cycle – but that hasn't been true since the 1980s


Slide #5.

Memory System Challenge
• Make memory system appear as fast as processor
• Use hierarchy of memories
• Ideal memory:
  – Fast
  – Cheap (inexpensive)
  – Large (capacity)
  But can only choose two!


Slide #6.

Memory Hierarchy
Hierarchy levels (increasing capacity, decreasing speed): Cache, Main Memory, Virtual Memory

Technology   Price / GB   Access Time (ns)   Bandwidth (GB/s)
SRAM         $10,000      1                  25+
DRAM         $10          10 - 50            10
SSD          $1           100,000            0.5
HDD          $0.1         10,000,000         0.1


Slide #7.

Locality
Exploit locality to make memory accesses fast
• Temporal Locality:
  – Locality in time
  – If data used recently, likely to use it again soon
  – How to exploit: keep recently accessed data in higher levels of memory hierarchy
• Spatial Locality:
  – Locality in space
  – If data used recently, likely to use nearby data soon
  – How to exploit: when accessing data, bring nearby data into higher levels of memory hierarchy too


Slide #8.

Memory Performance
• Hit: data found in that level of memory hierarchy
• Miss: data not found (must go to next level)
  Hit Rate = # hits / # memory accesses = 1 – Miss Rate
  Miss Rate = # misses / # memory accesses = 1 – Hit Rate
• Average memory access time (AMAT): average time for processor to access data
  AMAT = t_cache + MR_cache × (t_MM + MR_MM × t_VM)
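A minimal sketch in C of the AMAT formula above, assuming a cache backed by main memory and virtual memory (disk); the function name amat and the example numbers are illustrative, not from the slides.

    #include <stdio.h>

    /* AMAT = t_cache + MR_cache * (t_MM + MR_MM * t_VM) */
    double amat(double t_cache, double mr_cache,
                double t_mm,    double mr_mm,  double t_vm)
    {
        return t_cache + mr_cache * (t_mm + mr_mm * t_vm);
    }

    int main(void)
    {
        /* Illustrative numbers: 1-cycle cache, 100-cycle main memory,
         * 1,000,000-cycle disk, 5% cache miss rate, 0.01% page fault rate. */
        printf("AMAT = %.2f cycles\n", amat(1, 0.05, 100, 0.0001, 1000000));
        return 0;
    }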


Slide #9.

Memory Performance Example 1
• A program has 2,000 loads and stores
• 1,250 of these data values in cache
• Rest supplied by other levels of memory hierarchy
• What are the hit and miss rates for the cache?


Slide #10.

Memory Performance Example 1
• A program has 2,000 loads and stores
• 1,250 of these data values in cache
• Rest supplied by other levels of memory hierarchy
• What are the hit and miss rates for the cache?
  Hit Rate = 1250/2000 = 0.625
  Miss Rate = 750/2000 = 0.375 = 1 – Hit Rate


Slide #11.

Memory Performance Example 2
• Suppose processor has 2 levels of hierarchy: cache and main memory
• t_cache = 1 cycle, t_MM = 100 cycles
• What is the AMAT of the program from Example 1?


Slide #12.

Memory Performance Example 2
• Suppose processor has 2 levels of hierarchy: cache and main memory
• t_cache = 1 cycle, t_MM = 100 cycles
• What is the AMAT of the program from Example 1?
  AMAT = t_cache + MR_cache × t_MM = [1 + 0.375(100)] cycles = 38.5 cycles


Slide #13.

Gene Amdahl, 1922-
• Amdahl's Law: the effort spent increasing the performance of a subsystem is wasted unless the subsystem affects a large percentage of overall performance
• Co-founded 3 companies, including one called Amdahl Corporation in 1970


Slide #14.

Cache
• Highest level in memory hierarchy
• Fast (typically ~1 cycle access time)
• Ideally supplies most data to processor
• Usually holds most recently accessed data


Slide #15.

Cache Design Questions
• What data is held in the cache?
• How is data found?
• What data is replaced?
Focus on data loads, but stores follow same principles


Slide #16.

What data is held in the cache?
• Ideally, cache anticipates needed data and puts it in cache
• But impossible to predict future
• Use past to predict future – temporal and spatial locality:
  – Temporal locality: copy newly accessed data into cache
  – Spatial locality: copy neighboring data into cache too


Slide #17.

Cache Terminology
• Capacity (C): number of data bytes in cache
• Block size (b): bytes of data brought into cache at once
• Number of blocks (B = C/b): number of blocks in cache
• Degree of associativity (N): number of blocks in a set
• Number of sets (S = B/N): each memory address maps to exactly one cache set
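A minimal sketch in C showing how these parameters relate; the 32 KB capacity, 64-byte blocks, and 2-way associativity are illustrative assumptions, not from the slides.

    #include <stdio.h>

    int main(void)
    {
        unsigned capacity   = 32 * 1024; /* C: data bytes in the cache */
        unsigned block_size = 64;        /* b: bytes per block         */
        unsigned ways       = 2;         /* N: blocks per set          */

        unsigned num_blocks = capacity / block_size; /* B = C/b */
        unsigned num_sets   = num_blocks / ways;     /* S = B/N */

        printf("B = %u blocks, S = %u sets\n", num_blocks, num_sets);
        return 0;
    }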


Slide #18.

How is data found?
• Cache organized into S sets
• Each memory address maps to exactly one set
• Caches categorized by # of blocks in a set:
  – Direct mapped: 1 block per set
  – N-way set associative: N blocks per set
  – Fully associative: all cache blocks in 1 set
• Examine each organization for a cache with:
  – Capacity (C = 8 words)
  – Block size (b = 1 word)
  – So, number of blocks (B = 8)


Slide #19.

Example Cache Parameters
• C = 8 words (capacity)
• b = 1 word (block size)
• So, B = 8 (# of blocks)
Ridiculously small, but will illustrate organizations


Slide #20.

Direct Mapped Cache
[Figure: a 2^30-word main memory mapped onto a 2^3-word (8-set) direct mapped cache; each address maps to exactly one of sets 0 (000) through 7 (111), e.g. mem[0x00...00], mem[0x00...20], ..., mem[0xFF...E0] all map to set 0, and mem[0x00...04], mem[0x00...24], ..., mem[0xFF...E4] all map to set 1]


Slide #21.

Direct Mapped Cache Hardware
[Figure: the memory address splits into a 27-bit tag, a 3-bit set field, and a 2-bit byte offset (00); the set field indexes an 8-entry x (1+27+32)-bit SRAM holding a valid bit, tag, and data per block; the stored tag is compared with the address tag, and Hit is asserted when they match and the valid bit is set, with Data driven out]
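A minimal sketch in C of how this hardware splits a 32-bit address for the example cache (8 sets, 1-word blocks); the variable names and example address are illustrative.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 1-word blocks -> 2 byte-offset bits; 8 sets -> 3 set bits; 27 tag bits. */
        uint32_t addr = 0x0000000C;                /* example address */

        uint32_t byte_offset =  addr       & 0x3;  /* bits [1:0]  */
        uint32_t set         = (addr >> 2) & 0x7;  /* bits [4:2]  */
        uint32_t tag         =  addr >> 5;         /* bits [31:5] */

        printf("addr 0x%08X -> tag 0x%07X, set %u, byte offset %u\n",
               addr, tag, set, byte_offset);
        return 0;
    }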


Slide #22.

Direct Mapped Cache Performance
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0xC($0)
      lw   $t3, 0x8($0)
      addi $t0, $t0, -1
      j    loop
done:
[Figure: address 0x4 splits into tag 00...00, set 001, byte offset 00; after the loop, sets 1, 2, and 3 are valid and hold mem[0x00...04], mem[0x00...08], and mem[0x00...0C]; all other sets are invalid]
Miss Rate = ?


Slide #23.

Direct Mapped Cache Performance
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0xC($0)
      lw   $t3, 0x8($0)
      addi $t0, $t0, -1
      j    loop
done:
[Figure: sets 1, 2, and 3 are valid and hold mem[0x00...04], mem[0x00...08], and mem[0x00...0C]; all other sets are invalid]
Miss Rate = 3/15 = 20%
Temporal locality; compulsory misses


Slide #24.

Direct Mapped Cache: Conflict
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0x24($0)
      addi $t0, $t0, -1
      j    loop
done:
[Figure: address 0x24 splits into tag 00...01, set 001, byte offset 00; both 0x4 and 0x24 map to set 1 and repeatedly evict each other, while all other sets stay invalid]
Miss Rate = ?


Slide #25.

Direct Mapped Cache: Conflict
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0x24($0)
      addi $t0, $t0, -1
      j    loop
done:
[Figure: set 1 alternately holds mem[0x00...04] and mem[0x00...24]; all other sets are invalid]
Miss Rate = 10/10 = 100%
Conflict misses


Slide #26.

N-Way Set Associative Cache
[Figure: a 2-way set associative cache; the memory address splits into a 28-bit tag, a 2-bit set field, and a byte offset (00); each set holds two ways, each with a valid bit, 28-bit tag, and 32-bit data; both stored tags are compared with the address tag in parallel, Hit1 and Hit0 select the matching way's data through a multiplexer, and Hit = Hit1 OR Hit0]


Slide #27.

N-Way Set Associative Performance
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0x24($0)
      addi $t0, $t0, -1
      j    loop
done:
Miss Rate = ?
[Figure: a 2-way set associative cache with 4 sets, all entries initially invalid]


Slide #28.

N-Way Set Associative Performance
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0x24($0)
      addi $t0, $t0, -1
      j    loop
done:
Miss Rate = 2/10 = 20%
Associativity reduces conflict misses
[Figure: set 1 holds mem[0x00...24] (tag 00...10) in way 1 and mem[0x00...04] (tag 00...00) in way 0; other sets are invalid]


Slide #29.

Fully Associative Cache
[Figure: a single set of 8 ways, each with a valid bit, tag, and data]
• Reduces conflict misses
• Expensive to build


Slide #30.

Spatial Locality?
• Increase block size:
  – Block size, b = 4 words
  – C = 8 words
  – Direct mapped (1 block per set)
  – Number of blocks, B = 2 (C/b = 8/4 = 2)
[Figure: the memory address splits into a 27-bit tag, a 1-bit set field, a 2-bit block offset, and a 2-bit byte offset (00); each of the two sets holds a valid bit, a 27-bit tag, and four 32-bit words, and the block offset (00, 01, 10, 11) selects one word through a multiplexer]


Slide #31.

Cache with Larger Block Size
[Figure: same organization as the previous slide: 27-bit tag, 1-bit set field, 2-bit block offset, 2-bit byte offset; two sets, each holding a valid bit, a tag, and a 4-word block, with the block offset selecting the word on a hit]
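A minimal sketch in C of the address decomposition for this larger-block cache (b = 4 words, B = 2); the variable names and example address are illustrative.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 4-word (16-byte) blocks and 2 sets -> 2 byte-offset bits,
         * 2 block-offset bits, 1 set bit, 27 tag bits. */
        uint32_t addr = 0x0000000C;                 /* example address */

        uint32_t byte_offset  =  addr       & 0x3;  /* bits [1:0]  */
        uint32_t block_offset = (addr >> 2) & 0x3;  /* bits [3:2]  */
        uint32_t set          = (addr >> 4) & 0x1;  /* bit  [4]    */
        uint32_t tag          =  addr >> 5;         /* bits [31:5] */

        printf("addr 0x%08X -> tag 0x%07X, set %u, block offset %u, byte offset %u\n",
               addr, tag, set, block_offset, byte_offset);
        return 0;
    }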


Slide #32.

Direct Mapped Cache Performance
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0xC($0)
      lw   $t3, 0x8($0)
      addi $t0, $t0, -1
      j    loop
done:
Miss Rate = ?


Slide #33.

Direct Mapped Cache Performance
# MIPS assembly code
      addi $t0, $0, 5
loop: beq  $t0, $0, done
      lw   $t1, 0x4($0)
      lw   $t2, 0xC($0)
      lw   $t3, 0x8($0)
      addi $t0, $t0, -1
      j    loop
done:
Miss Rate = 1/15 = 6.67%
Larger blocks reduce compulsory misses through spatial locality
[Figure: address 0x0000000C splits into tag 00...00, set 0, block offset 11, byte offset 00; set 0 is valid and holds the block mem[0x00...0C], mem[0x00...08], mem[0x00...04], mem[0x00...00]]


Slide #34.

Cache Organization Recap
• Capacity: C
• Block size: b
• Number of blocks in cache: B = C/b
• Number of blocks in a set: N
• Number of sets: S = B/N

Organization              Number of Ways (N)   Number of Sets (S = B/N)
Direct Mapped             1                    B
N-Way Set Associative     1 < N < B            B/N
Fully Associative         B                    1


Slide #35.

Capacity Misses
• Cache is too small to hold all data of interest at once
• If cache full: program accesses data X & evicts data Y
• Capacity miss when access Y again
• How to choose Y to minimize chance of needing it again?
• Least recently used (LRU) replacement: the least recently used block in a set is evicted


Slide #36.

Types of Misses
• Compulsory: first time data accessed
• Capacity: cache too small to hold all data of interest
• Conflict: data of interest maps to same location in cache
Miss penalty: time it takes to retrieve a block from lower level of hierarchy


Slide #37.

LRU Replacement
# MIPS assembly
lw $t0, 0x04($0)
lw $t1, 0x24($0)
lw $t2, 0x54($0)
[Figure: a 2-way set associative cache with 4 sets (00 through 11), a use bit U per set, and all entries initially invalid]


Slide #38.

LRU Replacement
# MIPS assembly
lw $t0, 0x04($0)
lw $t1, 0x24($0)
lw $t2, 0x54($0)
[Figure (a): after the first two loads, set 1 holds mem[0x00...04] (tag 00...000) in way 0 and mem[0x00...24] (tag 00...010) in way 1, with U = 0]
[Figure (b): the third load, from 0x54, also maps to set 1; way 0 is the least recently used, so mem[0x00...04] is evicted and replaced by mem[0x00...54] (tag 00...101), and U is set to 1]
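A minimal sketch in C of the LRU policy for this 2-way, 4-set cache, replaying the three loads above; the structures and names are illustrative, not the book's code.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define SETS 4
    #define WAYS 2

    typedef struct { int valid; uint32_t tag; } Way;
    typedef struct { Way way[WAYS]; int lru; } Set;   /* lru = way to evict next */

    static Set cache[SETS];

    /* Access one word; returns 1 on hit, 0 on miss (block is then filled). */
    static int access_word(uint32_t addr)
    {
        uint32_t set = (addr >> 2) & (SETS - 1);      /* 2-bit set field      */
        uint32_t tag =  addr >> 4;                    /* remaining bits = tag */
        Set *s = &cache[set];

        for (int w = 0; w < WAYS; w++) {
            if (s->way[w].valid && s->way[w].tag == tag) {
                s->lru = 1 - w;                       /* other way is now LRU */
                return 1;
            }
        }
        int victim = s->lru;                          /* evict least recently used */
        s->way[victim].valid = 1;
        s->way[victim].tag   = tag;
        s->lru = 1 - victim;
        return 0;
    }

    int main(void)
    {
        uint32_t loads[] = { 0x04, 0x24, 0x54 };
        memset(cache, 0, sizeof cache);
        for (int i = 0; i < 3; i++)
            printf("lw 0x%02X: %s\n", loads[i],
                   access_word(loads[i]) ? "hit" : "miss");
        return 0;
    }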


Slide #39.

Cache Summary
• What data is held in the cache?
  – Recently used data (temporal locality)
  – Nearby data (spatial locality)
• How is data found?
  – Set is determined by address of data
  – Word within block also determined by address
  – In associative caches, data could be in one of several ways
• What data is replaced?
  – Least-recently used way in the set


Slide #40.

Miss Rate Trends
• Bigger caches reduce capacity misses
• Greater associativity reduces conflict misses
Adapted from Patterson & Hennessy, Computer Architecture: A Quantitative Approach, 2011


Slide #41.

Miss Rate Trends
• Bigger blocks reduce compulsory misses
• Bigger blocks increase conflict misses


Slide #42.

Multilevel Caches
• Larger caches have lower miss rates, longer access times
• Expand memory hierarchy to multiple levels of caches
• Level 1: small and fast (e.g. 16 KB, 1 cycle)
• Level 2: larger and slower (e.g. 256 KB, 2-6 cycles)
• Most modern PCs have L1, L2, and L3 cache
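A minimal sketch in C of how AMAT extends to two cache levels; the timing and miss-rate numbers are illustrative assumptions, not from the slides.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative parameters: 1-cycle L1, 4-cycle L2, 100-cycle
         * main memory, 5% L1 miss rate, 20% L2 miss rate. */
        double t_l1 = 1, t_l2 = 4, t_mm = 100;
        double mr_l1 = 0.05, mr_l2 = 0.20;

        /* AMAT = t_L1 + MR_L1 * (t_L2 + MR_L2 * t_MM) */
        double amat = t_l1 + mr_l1 * (t_l2 + mr_l2 * t_mm);
        printf("AMAT = %.2f cycles\n", amat);
        return 0;
    }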


Slide #43.

Intel Pentium III Die
[Figure: die photograph of the Intel Pentium III]


Slide #44.

Virtual Memory
• Gives the illusion of bigger memory
• Main memory (DRAM) acts as cache for hard disk


Slide #45.

Memory Hierarchy
Hierarchy levels (increasing capacity, decreasing speed): Cache, Main Memory, Virtual Memory

Technology   Price / GB   Access Time (ns)   Bandwidth (GB/s)
SRAM         $10,000      1                  25+
DRAM         $10          10 - 50            10
SSD          $1           100,000            0.5
HDD          $0.1         10,000,000         0.1

• Physical Memory: DRAM (Main Memory)
• Virtual Memory: Hard drive
  – Slow, Large, Cheap


Slide #46.

Hard Disk
[Figure: hard disk showing the magnetic disks (platters) and the read/write head]
• Takes milliseconds to seek correct location on disk


Slide #47.

Virtual Memory
• Virtual addresses
  – Programs use virtual addresses
  – Entire virtual address space stored on a hard drive
  – Subset of virtual address data in DRAM
  – CPU translates virtual addresses into physical addresses (DRAM addresses)
  – Data not in DRAM fetched from hard drive
• Memory Protection
  – Each program has own virtual to physical mapping
  – Two programs can use same virtual address for different data
  – Programs don't need to be aware others are running
  – One program (or virus) can't corrupt memory used by another


Slide #48.

Cache/Virtual Memory Analogues
Cache          Virtual Memory
Block          Page
Block Size     Page Size
Block Offset   Page Offset
Miss           Page Fault
Tag            Virtual Page Number

Physical memory acts as cache for virtual memory


Slide #49.

Virtual Memory Definitions
• Page size: amount of memory transferred from hard disk to DRAM at once
• Address translation: determining physical address from virtual address
• Page table: lookup table used to translate virtual addresses to physical addresses


Slide #50.

Virtual & Physical Addresses
• Most accesses hit in physical memory
• But programs have the large capacity of virtual memory


Slide #51.

Address Translation
[Figure: translating a virtual address into a physical address]


Slide #52.

Virtual Memory Example
• System:
  – Virtual memory size: 2 GB = 2^31 bytes
  – Physical memory size: 128 MB = 2^27 bytes
  – Page size: 4 KB = 2^12 bytes


Slide #53.

Virtual Memory Example
• System:
  – Virtual memory size: 2 GB = 2^31 bytes
  – Physical memory size: 128 MB = 2^27 bytes
  – Page size: 4 KB = 2^12 bytes
• Organization:
  – Virtual address: 31 bits
  – Physical address: 27 bits
  – Page offset: 12 bits
  – # Virtual pages = 2^31/2^12 = 2^19 (VPN = 19 bits)
  – # Physical pages = 2^27/2^12 = 2^15 (PPN = 15 bits)
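A minimal sketch in C that derives the organization numbers above from the system parameters; the variable names are illustrative.

    #include <stdio.h>

    int main(void)
    {
        int va_bits   = 31;  /* 2 GB virtual memory    */
        int pa_bits   = 27;  /* 128 MB physical memory */
        int page_bits = 12;  /* 4 KB pages             */

        int vpn_bits = va_bits - page_bits;  /* 19-bit virtual page number  */
        int ppn_bits = pa_bits - page_bits;  /* 15-bit physical page number */

        printf("VPN = %d bits (%ld virtual pages)\n",  vpn_bits, 1L << vpn_bits);
        printf("PPN = %d bits (%ld physical pages)\n", ppn_bits, 1L << ppn_bits);
        return 0;
    }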


Slide #54.

Virtual Memory Example
• 19-bit virtual page numbers
• 15-bit physical page numbers


Slide #55.

Virtual Memory Example
What is the physical address of virtual address 0x247C?


Slide #56.

Virtual Memory Example
What is the physical address of virtual address 0x247C?
– VPN = 0x2
– VPN 0x2 maps to PPN 0x7FFF
– 12-bit page offset: 0x47C
– Physical address = 0x7FFF47C


Slide #57.

How to perform translation?
• Page table
  – Entry for each virtual page
  – Entry fields:
    • Valid bit: 1 if page in physical memory
    • Physical page number: where the page is located


Slide #58.

Page Table Example
[Figure: virtual address 0x0000247C splits into virtual page number 0x00002 (19 bits) and page offset 0x47C (12 bits); the VPN is the index into the page table, whose valid entries hold physical page numbers such as 0x0000, 0x7FFE, 0x0001, and 0x7FFF; VPN 0x00002 hits and maps to PPN 0x7FFF (15 bits), giving physical address 0x7FFF47C]


Slide #59.

Page Table Example 1
What is the physical address of virtual address 0x5F20?
[Figure: the same page table as before, with valid entries holding physical page numbers 0x0000, 0x7FFE, 0x0001, and 0x7FFF]


Slide #60.

Page Table Example 1
What is the physical address of virtual address 0x5F20?
– VPN = 5
– Entry 5 in page table maps VPN 5 to physical page 1
– Physical address: 0x1F20
[Figure: virtual address 0x00005F20 splits into VPN 0x00005 and page offset 0xF20; the page table entry for VPN 5 is valid with PPN 0x0001, giving physical address 0x1F20]
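A minimal sketch in C of a page-table lookup for this example (19-bit VPN, 12-bit offset, 15-bit PPN); the table below reproduces only the two valid entries used in the slides, and the function and type names are illustrative.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_BITS 12

    typedef struct { int valid; uint32_t ppn; } PTE;

    static PTE page_table[8];   /* only the low entries used in this example */

    /* Translate a virtual address; returns 1 on success, 0 on page fault. */
    static int translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr >> PAGE_BITS;
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);

        if (!page_table[vpn].valid)
            return 0;                                  /* page fault */
        *paddr = (page_table[vpn].ppn << PAGE_BITS) | offset;
        return 1;
    }

    int main(void)
    {
        /* Entries from the slide's page table. */
        page_table[2] = (PTE){1, 0x7FFF};
        page_table[5] = (PTE){1, 0x0001};

        uint32_t pa;
        if (translate(0x5F20, &pa))
            printf("VA 0x5F20 -> PA 0x%X\n", pa);      /* expect 0x1F20 */
        else
            printf("VA 0x5F20 -> page fault\n");
        return 0;
    }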


Slide #61.

Page Table Example 2
What is the physical address of virtual address 0x73E0?
[Figure: virtual address 0x000073E0 splits into VPN 0x00007 and page offset 0x3E0; the page table is the same as before]


Slide #62.

Page Table Example 2
What is the physical address of virtual address 0x73E0?
– VPN = 7
– Entry 7 is invalid
– Virtual page must be paged into physical memory from disk


Slide #63.

Page Table Challenges
• Page table is large
  – usually located in physical memory
• Load/store requires 2 main memory accesses:
  – one for translation (page table read)
  – one to access data (after translation)
• Cuts memory performance in half
  – Unless we get clever…


Slide #64.

Translation Lookaside Buffer (TLB)
• Small cache of most recent translations
• Reduces # of memory accesses for most loads/stores from 2 to 1


Slide #65.

TLB
• Page table accesses: high temporal locality
  – Large page size, so consecutive loads/stores likely to access same page
• TLB
  – Small: accessed in < 1 cycle
  – Typically 16 - 512 entries
  – Fully associative
  – > 99% hit rates typical
  – Reduces # of memory accesses for most loads/stores from 2 to 1


Slide #66.

Example 2-Entry TLB
[Figure: virtual address 0x0000247C splits into VPN 0x00002 (19 bits) and page offset 0x47C (12 bits); the VPN is compared against both TLB entries in parallel; entry 1 holds VPN 0x7FFFD with PPN 0x0000, and entry 0 holds VPN 0x00002 with PPN 0x7FFF; entry 0 hits, so the TLB supplies PPN 0x7FFF (15 bits) and the physical address is 0x7FFF47C]
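A minimal sketch in C of this 2-entry, fully associative TLB lookup; the struct and function names are illustrative, and the two entries reproduce the slide's contents.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_BITS   12
    #define TLB_ENTRIES 2

    typedef struct { int valid; uint32_t vpn; uint32_t ppn; } TLBEntry;

    /* The two translations shown in the slide. */
    static TLBEntry tlb[TLB_ENTRIES] = {
        { 1, 0x00002, 0x7FFF },   /* entry 0 */
        { 1, 0x7FFFD, 0x0000 },   /* entry 1 */
    };

    /* Software model of the parallel tag comparison: returns 1 on a TLB
     * hit and fills in the physical address. */
    static int tlb_lookup(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr >> PAGE_BITS;
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);

        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *paddr = (tlb[i].ppn << PAGE_BITS) | offset;
                return 1;
            }
        }
        return 0;   /* TLB miss: fall back to the page table */
    }

    int main(void)
    {
        uint32_t pa;
        if (tlb_lookup(0x247C, &pa))
            printf("VA 0x247C -> PA 0x%X (TLB hit)\n", pa);  /* expect 0x7FFF47C */
        return 0;
    }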


Slide #67.

Memory Protection
• Multiple processes (programs) run at once
• Each process has its own page table
• Each process can use entire virtual address space
• A process can only access physical pages mapped in its own page table


Slide #68.

Virtual Memory Summary
• Virtual memory increases capacity
• A subset of virtual pages in physical memory
• Page table maps virtual pages to physical pages – address translation
• A TLB speeds up address translation
• Different page tables for different programs provide memory protection


Slide #69.

Memory-Mapped I/O
• Processor accesses I/O devices (keyboards, monitors, printers, etc.) just like memory
• Each I/O device assigned one or more addresses
• When that address is detected, data is read from or written to the I/O device instead of memory
• A portion of the address space is dedicated to I/O devices


Slide #70.

Memory-Mapped I/O Hardware
• Address Decoder:
  – Looks at address to determine which device/memory communicates with the processor
• I/O Registers:
  – Hold values written to the I/O devices
• ReadData Multiplexer:
  – Selects between memory and I/O devices as source of data sent to the processor


Slide #71.

The Memory Interface
[Figure: processor connected to memory by MemWrite, Address, WriteData, WE, and ReadData signals, clocked by CLK]


Slide #72.

Memory-Mapped I/O Hardware
[Figure: the address decoder examines Address and generates WEM (memory write enable), WE1 and WE2 (I/O register write enables), and RDsel1:0; WriteData feeds the memory and the two I/O device registers; a multiplexer controlled by RDsel1:0 selects ReadData from memory (00), I/O Device 1 (01), or I/O Device 2 (10)]


Slide #73.

Memory-Mapped I/O Code
• Suppose I/O Device 1 is assigned the address 0xFFFFFFF4
  – Write the value 42 to I/O Device 1
  – Read value from I/O Device 1 and place in $t3
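The next two slides do this in MIPS assembly; a minimal C sketch of the same idea, assuming the device register is simply a 32-bit word at 0xFFFFFFF4 on a system where that address really is a device register (the macro name is illustrative).

    #include <stdint.h>

    /* Memory-mapped register of I/O Device 1 (address from the slide).
     * volatile prevents the compiler from optimizing the accesses away. */
    #define IO_DEVICE1 (*(volatile uint32_t *)0xFFFFFFF4u)

    int main(void)
    {
        IO_DEVICE1 = 42;                /* write the value 42 to the device */
        uint32_t value = IO_DEVICE1;    /* read the device register back    */
        return (int)value;
    }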


Slide #74.

Memory-Mapped I/O Code
• Write the value 42 to I/O Device 1 (0xFFFFFFF4)
  addi $t0, $0, 42
  sw   $t0, 0xFFF4($0)
[Figure: the address decoder detects address 0xFFFFFFF4 and asserts WE1 = 1, so WriteData (42) is loaded into the I/O Device 1 register]


Slide #75.

Memory-Mapped I/O Code
• Read the value from I/O Device 1 and place in $t3
  lw $t3, 0xFFF4($0)
[Figure: the address decoder detects address 0xFFFFFFF4 and sets RDsel1:0 = 01, so the ReadData multiplexer routes the I/O Device 1 register to the processor]


Slide #76.

Input/Output (I/O) Systems
• Embedded I/O Systems
  – Toasters, LEDs, etc.
• PC I/O Systems


Slide #77.

Embedded I/O Systems
• Example microcontroller: PIC32
  – 32-bit MIPS processor
  – low-level peripherals include:
    • serial ports
    • timers
    • A/D converters


Slide #78.

Digital I/O
// C Code
#include <p32xxxx.h>     // PIC32 device header (assumed; name elided on the slide)

int main(void) {
  int switches;
  TRISD = 0xFF00;                  // RD[7:0] outputs, RD[11:8] inputs
  while (1) {
    switches = (PORTD >> 8) & 0xF; // read & mask switches, RD[11:8]
    PORTD = switches;              // display on LEDs
  }
}


Slide #79.

Serial I/O
• Example serial protocols
  – SPI: Serial Peripheral Interface
  – UART: Universal Asynchronous Receiver/Transmitter
  – Also: I2C, USB, Ethernet, etc.


Slide #80.

SPI: Serial Peripheral Interface
• Master initiates communication to slave by sending pulses on SCK
• Master sends SDO (Serial Data Out) to slave, msb first
• Slave may send data (SDI) to master, msb first


Slide #81.

UART: Universal Asynchronous Rx/Tx
• Configuration:
  – start bit (0), 7-8 data bits, parity bit (optional), 1+ stop bits (1)
  – data rate: 300, 1200, 2400, 9600, … 115200 baud
• Line idles HIGH (1)
• Common configuration:
  – 8 data bits, no parity, 1 stop bit, 9600 baud
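A minimal sketch in C of the framing arithmetic for the common 8N1 configuration above; the variable names are illustrative.

    #include <stdio.h>

    int main(void)
    {
        /* Common configuration: 8 data bits, no parity, 1 stop bit, 9600 baud. */
        int baud       = 9600;           /* signaling rate in bits per second */
        int frame_bits = 1 + 8 + 0 + 1;  /* start + data + parity + stop      */

        double bit_time_us   = 1e6 / baud;                 /* ~104.2 us per bit */
        double bytes_per_sec = (double)baud / frame_bits;  /* ~960 bytes/s      */

        printf("bit time = %.1f us, throughput = %.0f bytes/s\n",
               bit_time_us, bytes_per_sec);
        return 0;
    }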


Slide #82.

Timers
// Create specified ms/us of delay using built-in timer
#include <p32xxxx.h>     // PIC32 device header (assumed; name elided on the slide)

void delaymicros(int micros) {
  if (micros > 1000) {          // avoid timer overflow
    delaymicros(1000);
    delaymicros(micros-1000);
  }
  else if (micros > 6) {
    TMR1 = 0;                   // reset timer to 0
    T1CONbits.ON = 1;           // turn timer on
    PR1 = (micros-6)*20;        // 20 clocks per microsecond
                                // Function has overhead of ~6 us
    IFS0bits.T1IF = 0;          // clear overflow flag
    while (!IFS0bits.T1IF);     // wait until overflow flag set
  }
}

void delaymillis(int millis) {
  while (millis--) delaymicros(1000); // repeatedly delay 1 ms until done
}


Slide #83.

Analog I/O
• Needed to interface with outside world
• Analog input: analog-to-digital (A/D) conversion
  – Often included in microcontroller
  – N-bit: converts analog input in the range Vref- to Vref+ into a digital value from 0 to 2^N - 1
• Analog output:
  – Digital-to-analog (D/A) conversion
    • Typically need external chip (e.g., AD558 or LTC1257)
    • N-bit: converts digital value from 0 to 2^N - 1 into an analog output in the range Vref- to Vref+
  – Pulse-width modulation
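A minimal sketch in C of the N-bit conversions described above; the 10-bit resolution and the 0 V to 3.3 V reference range are illustrative assumptions, not from the slides.

    #include <stdio.h>

    #define N        10      /* converter resolution in bits (assumed) */
    #define VREF_LO  0.0     /* Vref- (assumed)                        */
    #define VREF_HI  3.3     /* Vref+ (assumed)                        */

    /* A/D: map a voltage in [Vref-, Vref+] to a code in [0, 2^N - 1]. */
    static int adc_code(double volts)
    {
        return (int)((volts - VREF_LO) / (VREF_HI - VREF_LO) * ((1 << N) - 1) + 0.5);
    }

    /* D/A: map a code in [0, 2^N - 1] back to a voltage in [Vref-, Vref+]. */
    static double dac_volts(int code)
    {
        return VREF_LO + (double)code / ((1 << N) - 1) * (VREF_HI - VREF_LO);
    }

    int main(void)
    {
        int code = adc_code(1.65);
        printf("1.65 V -> code %d -> %.3f V\n", code, dac_volts(code));
        return 0;
    }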


Slide #84.

Pulse-Width Modulation (PWM)
• Average value proportional to duty cycle
• Add low-pass filter on output to deliver average value
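A minimal sketch in C of the duty-cycle relationship above; the 3.3 V high level, 0 V low level, and 25% duty cycle are illustrative assumptions.

    #include <stdio.h>

    int main(void)
    {
        double v_high = 3.3;   /* output voltage when the pulse is high (assumed) */
        double v_low  = 0.0;   /* output voltage when the pulse is low (assumed)  */
        double duty   = 0.25;  /* fraction of the period spent high (assumed)     */

        /* Average value seen after low-pass filtering the PWM output. */
        double v_avg = duty * v_high + (1.0 - duty) * v_low;
        printf("duty = %.0f%% -> average = %.3f V\n", duty * 100, v_avg);
        return 0;
    }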


Slide #85.

Other Microcontroller Peripherals
• Examples
  – Character LCD
  – VGA monitor
  – Bluetooth wireless
  – Motors


Slide #86.

Personal Computer (PC) I/O Systems
• USB: Universal Serial Bus
  – USB 1.0 released in 1996
  – standardized cables/software for peripherals
• PCI/PCIe: Peripheral Component Interconnect / PCI Express
  – developed by Intel, widespread around 1994
  – 32-bit parallel bus
  – used for expansion cards (e.g., sound cards, video cards, etc.)
• DDR: double-data rate memory


Slide #87.

Personal Computer (PC) I/O Systems
• TCP/IP: Transmission Control Protocol and Internet Protocol
  – physical connection: Ethernet cable or Wi-Fi
• SATA: hard drive interface
• Input/Output (sensors, actuators, microcontrollers, etc.)
  – Data Acquisition Systems (DAQs)
  – USB Links