04EI The different derivatives of the C166 family

High-integration:
* 16 MByte address space, 2/4 KByte RAM, 32 CAPCOM channels, 4 PWM channels, 2 serial interfaces, 5 timers, extensive I/O; chip selects simplify system expansion
* C167CS: 11 KB RAM, 256 KB Flash, 2 CAN modules, 24 ADC channels, RTC and power management, PLL
* C167CR/SR: 4 KB RAM, 32 KB ROM, PLL, CAN (CR only)
* C167 / C167S: 2 KB RAM, PLL

General purpose (balanced peripherals for a wide range of applications):
* 1/2 KB RAM, ROM/Flash/OTP, CAPCOM, PWM, serial interfaces, timers, 10-bit/8-bit ADC, full bus support or multiplexed bus only
* C165: 2 KB RAM, 3 V, P-MQFP-100 / P-TQFP-100
* 8xC166: 1 KB RAM, 32 KB ROM or 32 KB Flash, P-MQFP-100
* C164CI: 2 KB RAM, 64 KB OTP/ROM/Flash, Full-CAN 2.0B, power management/RTC, motor-control peripherals, P-MQFP-80

Low-cost:
* Different RAM sizes, 16 MB address range, 3/5 16-bit timers, SSP and SSC serial interfaces, reduced chip selects, wide external bus support, 3 V options, 25 MHz option
* C163: 1 KB RAM, SSP, 3 V, reduced peripherals, P-TQFP-100
* C161RI: 3 KB RAM, power management/RTC, I2C interface, CAPCOM, ADC, 2 USARTs, CAN/J1850, P-MQFP-80
* C161xx: larger RAM, larger Flash, power management/RTC, I2C interface, 16 MHz CPU, 4 MB address space
* C161V/K/O: 1-2 KB RAM, ADC

ES - Version 2.0, 27.09.08, page 6
View full slide show




04EI The different derivatives of the C166 family (since 1993)

High-integration:
* 16 MByte address space, 2/4 KByte RAM, 32 CAPCOM channels, 4 PWM channels, 2 serial interfaces, 5 timers, extensive I/O; chip selects make extending the system easier
* C167CS: 11 KB RAM, 256 KB Flash, 2 CAN modules, 24 ADC channels, RTC and power management, PLL
* C167CR/SR: 4 KB RAM, 32 KB ROM, PLL, CAN (CR only)
* C167 / C167S: 2 KB RAM, PLL

General purpose (balanced peripherals for a great number of applications):
* 1/2 KB RAM, ROM/Flash/OTP, CAPCOM, PWM, serial interfaces, timers, 10-bit/8-bit ADC, full bus support or multiplexed bus only
* C165: 2 KB RAM, 3 V, P-MQFP-100 / P-TQFP-100
* 8xC166: 1 KB RAM, 32 KB ROM or 32 KB Flash, P-MQFP-100
* C164CI: 2 KB RAM, 64 KB OTP/ROM/Flash, Full-CAN 2.0B, power management/RTC, motor-control peripherals, P-MQFP-80

Low-cost:
* Different RAM sizes, 16 MB address range, 3/5 16-bit timers, SSP and SSC serial interfaces, reduced chip selects, wide external bus support, 3 V options, 25 MHz option
* C163: 1 KB RAM, SSP, 3 V, reduced peripherals, P-TQFP-100
* C161RI: 3 KB RAM, power management/RTC, I2C interface, CAPCOM, ADC, 2 USARTs, CAN/J1850, P-MQFP-80
* C161xx: larger RAM, larger Flash, power management/RTC, I2C interface, 16 MHz CPU, 4 MB address space
* C161V/K/O: 1-2 KB RAM, ADC

ES - Version 2.0, 12.08.2013, page 7
View full slide show




VGG-16 layer-by-layer activation memory and parameter counts:

Layer            Activation memory       Parameters
Input            224*224*3   = 150K      0
3x3 conv, 64     224*224*64  = 3.2M      (3*3*3)*64    = 1,728
3x3 conv, 64     224*224*64  = 3.2M      (3*3*64)*64   = 36,864
Pool             112*112*64  = 800K      0
3x3 conv, 128    112*112*128 = 1.6M      (3*3*64)*128  = 73,728
3x3 conv, 128    112*112*128 = 1.6M      (3*3*128)*128 = 147,456
Pool             56*56*128   = 400K      0
3x3 conv, 256    56*56*256   = 800K      (3*3*128)*256 = 294,912
3x3 conv, 256    56*56*256   = 800K      (3*3*256)*256 = 589,824
3x3 conv, 256    56*56*256   = 800K      (3*3*256)*256 = 589,824
Pool             28*28*256   = 200K      0
3x3 conv, 512    28*28*512   = 400K      (3*3*256)*512 = 1,179,648
3x3 conv, 512    28*28*512   = 400K      (3*3*512)*512 = 2,359,296
3x3 conv, 512    28*28*512   = 400K      (3*3*512)*512 = 2,359,296
Pool             14*14*512   = 100K      0
3x3 conv, 512    14*14*512   = 100K      (3*3*512)*512 = 2,359,296
3x3 conv, 512    14*14*512   = 100K      (3*3*512)*512 = 2,359,296
3x3 conv, 512    14*14*512   = 100K      (3*3*512)*512 = 2,359,296
Pool             7*7*512     = 25K       0
FC 4096          4096                    7*7*512*4096  = 102,760,448
FC 4096          4096                    4096*4096     = 16,777,216
FC 1000          1000                    4096*1000     = 4,096,000
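The per-layer counts above can be recomputed from the architecture alone. A short Python sketch (not from the slide; weight counts only, biases ignored):

```python
# Recompute VGG-16 weight counts from its layer list. Each conv layer uses a
# 3x3 kernel mapping c input channels to `layer` output channels; each pool
# halves the spatial size and has no weights. Biases are ignored here.
cfg = [64, 64, "pool", 128, 128, "pool", 256, 256, 256, "pool",
       512, 512, 512, "pool", 512, 512, 512, "pool"]

h = w = 224   # input spatial size
c = 3         # input channels
total_params = 0
for layer in cfg:
    if layer == "pool":
        h //= 2
        w //= 2                         # 2x2 max pool, stride 2
    else:
        total_params += 3 * 3 * c * layer
        c = layer

# Fully connected layers: 7*7*512 -> 4096 -> 4096 -> 1000
for n_in, n_out in [(h * w * c, 4096), (4096, 4096), (4096, 1000)]:
    total_params += n_in * n_out

print(total_params)   # prints 138344128, the sum of the params column
```

Summing the params column in the table gives the same 138,344,128 weights, the usual "~138M parameters" quoted for VGG-16.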
View full slide show








Reclaiming Memory

- If ballooning is not sufficient to reclaim memory, or host free memory drops towards the hard threshold, the hypervisor starts to use swapping in addition to ballooning. During swapping, memory compression is activated as well. With host swapping and memory compression, the hypervisor should be able to quickly reclaim memory and bring the host memory state back to the soft state.
- In the rare case where host free memory drops below the low threshold, the hypervisor continues to reclaim memory through swapping and memory compression, and additionally blocks the execution of all virtual machines that consume more memory than their target memory allocations.
- In certain scenarios, host memory reclamation happens regardless of the current host free memory state. For example, even if host free memory is in the high state, memory reclamation is still mandatory when a virtual machine's memory usage exceeds its specified memory limit. If this happens, the hypervisor will employ ballooning and, if necessary, swapping and memory compression to reclaim memory from the virtual machine until the virtual machine's host memory usage falls back to its specified limit.
View full slide show




Universality of the RAM (cont.)

- RAMs can simulate FSMs.
- Another "universality" result: RAMs can execute RAM programs. Since the RAM's components (the CPU and bounded random-access memory) are themselves FSMs, a RAM can simulate any other RAM.
- Two "flavors" of RAM for executing RAM programs:
  - The RAM program is stored in registers specially allocated to it (loaded onto the CPU).
  - The RAM program is stored in registers of the random-access memory (the RASP model).
- For later discussion (if time permits).
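As a toy illustration of the second flavor (the RASP model), here is a hypothetical minimal machine in Python whose program lives in the same random-access memory as its data; the instruction set and encoding are invented for this sketch, not taken from the slides:

```python
# Toy RASP-style machine: the program occupies low memory cells and data
# lives in the same memory, so the machine "executes a RAM program" stored
# in its own random-access memory.
def run_rasp(memory, pc=0):
    # Instructions are (opcode, operand) pairs flattened into memory.
    while True:
        op, arg = memory[pc], memory[pc + 1]
        if op == "HALT":
            return memory
        if op == "INC":        # increment the cell at address arg
            memory[arg] += 1
        elif op == "JMP":      # jump to address arg
            pc = arg
            continue
        pc += 2

# Program occupies cells 0-5; the data cell is at address 6.
mem = ["INC", 6, "INC", 6, "HALT", 0, 0]
result = run_rasp(mem)
print(result[6])   # 2
```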
View full slide show




Cache

• Pronounced "cash," a special high-speed storage mechanism.
• Two types of caching are commonly used in PCs: memory caching and disk caching. A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory.
• Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.
• Some memory caches are built into the architecture of microprocessors; Intel Core 2 processors have 2-4 MB caches. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory (often also located on the CPU die), called Level 2 (L2) caches, which sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM, but they are much larger. Some CPUs even have Level 3 caches. (Note that most latest-generation CPUs build L2 caches on the CPU die, running at the same clock rate as the CPU.)
• Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk. Today's hard drives often have 8-16 MB memory caches.
• When data is found in the cache, it is called a cache hit (versus a cache miss), and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
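The caching principle described above (programs reusing the same data over and over) can be seen in a toy least-recently-used cache. This is an illustrative sketch, not code from the slide; the LRU policy is just one of the replacement strategies the last bullet alludes to:

```python
from collections import OrderedDict

# A minimal LRU cache that tracks its own hit rate. Repeated accesses to a
# small working set mostly hit, which is why caching pays off.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def access(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)         # mark as most recently used
        else:
            self.misses += 1
            self.data[key] = True
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=4)
# A loop that repeatedly touches a three-item working set: only the first
# pass misses, every later access hits.
for _ in range(100):
    for key in ("a", "b", "c"):
        cache.access(key)
print(cache.hits / (cache.hits + cache.misses))  # 0.99
```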
View full slide show




Memory: How do program instructions transfer in and out of RAM?

Step 1. When you start the computer, certain operating system files are loaded into RAM from the hard disk. The operating system displays the user interface on the screen.
Step 2. When you start a Web browser, the program's instructions are loaded into RAM from the hard disk. The Web browser window is displayed on the screen.
Step 3. When you start a paint program, the program's instructions are loaded into RAM from the hard disk. The paint program, along with the Web browser and certain operating system instructions, is in RAM. The paint program window is displayed on the screen.
Step 4. When you quit a program, such as the Web browser, its program instructions are removed from RAM. The Web browser is no longer displayed on the screen.

p. 198 Fig. 4-17
View full slide show




Memory: How do program instructions transfer in and out of RAM?

Step 1. When you start the computer, certain operating system files are loaded into RAM from the hard disk. The operating system displays the user interface on the screen.
Step 2. When you start a Web browser, the program's instructions are loaded into RAM from the hard disk. The Web browser window is displayed on the screen.
Step 3. When you start a word processing program, the program's instructions are loaded into RAM from the hard disk. The word processing program, along with the Web browser and certain operating system instructions, is in RAM. The word processing program window is displayed on the screen.
Step 4. When you quit a program, such as the Web browser, its program instructions are removed from RAM. The Web browser is no longer displayed on the screen.

p. 143 Fig. 4-12
View full slide show




Chapter 1: Teaching Analogies

When teaching the difference between hard drive and RAM memory, compare an office space and a computer. The working person is like a CPU, the desk area is similar to RAM memory, a file cabinet is similar to the hard drive, and files stored on the hard drive compare with printed documents stored in the file cabinet.
• The larger the desk area, the greater the number of documents that can be opened on it at the same time. If the desk is not large enough, the person (the CPU) must close a file and properly store it inside the file cabinet before searching for and opening a new one. This process takes time.

The different types of memory that a computer uses, in order of fastest to slowest, are as follows:
• memory inside the CPU - L1 cache
• memory in the processor housing - L2 cache
• memory on the motherboard - RAM
• hard drive space that is used as memory - virtual memory

An analogy is similar to getting a drink of water:
(1) Having a glass of water sitting on your desk is similar to having L1 cache.
(2) Having to go refill the glass from a faucet is similar to having L2 cache.
(3) Having to get bottled water from a drink machine is similar to having RAM.
(4) Having to go to a store and buy bottled water is similar to having hard drive storage that is used as RAM.

Presentation_ID © 2008 Cisco Systems, Inc. All rights reserved. Cisco Confidential
View full slide show




III. RAM and ROM

Blocks of RAM (random-access memory) or ROM (read-only memory) will sometimes be needed as part of our designs. A RAM or ROM memory cell will typically be implemented with simpler circuitry than a register cell. Memories are addressed in fixed units (e.g., bytes or words), rather than one bit at a time. For both RAM and ROM, if the memory contains N units, then ceil(log2(N)) address lines are needed. In addition, RAM needs a READ/WRITE line to choose which operation is to be done.

[Block diagram: an M-bit input bus and a log2(N)-bit address bus feed the RAM or ROM block (N locations, each M bits wide); a Read/Write line (RAM only) selects the operation; an M-bit output bus carries the result.]
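The address-line rule can be checked with a few lines of Python (an illustrative sketch, not part of the slides):

```python
from math import ceil, log2

# Number of address lines needed for a memory of n_units addressable
# locations: ceil(log2(n_units)), since k lines distinguish 2**k addresses.
def address_lines(n_units: int) -> int:
    return ceil(log2(n_units))

print(address_lines(32 * 1024))  # 15 lines for a 32K memory
print(address_lines(65536))      # 16 lines for a 64K memory
```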
View full slide show








DBs and DWs are merging as in-memory DBs: SAP® In-Memory Computing, Enabling Real-Time Computing

SAP® In-Memory enables real-time computing by bringing together online transaction processing, OLTP (the database), and online analytical processing, OLAP (the data warehouse). Combining advances in hardware technology with SAP In-Memory Computing empowers business, from shop floor to boardroom, by giving real-time business processes instantaneous access to data, eliminating today's information lag for your business.

In-memory computing is already under way. The question isn't if this revolution will impact businesses but when and how. In-memory computing won't be introduced because a company can afford the technology; it will be introduced because a business cannot afford to allow its competitors to adopt it first. Here is a sample of what in-memory computing can do for you:
• Enable mixed workloads of analytics, operations, and performance management in a single software landscape.
• Support smarter business decisions by providing increased visibility of very large volumes of business information.
• Enable users to react to business events more quickly through real-time analysis and reporting of operational data.
• Deliver innovative real-time analysis and reporting.
• Streamline the IT landscape and reduce total cost of ownership.

Product managers will still look at inventory and point-of-sale data, but in the future they will also notice, for example, when customers broadcast dissatisfaction with a product over Twitter, or be alerted to a negative product review released online that highlights unpleasant product features requiring immediate action. From the other side, small businesses running real-time inventory reports will be able to announce to their Facebook and Twitter communities that a high-demand product is available, how to order, and where to pick it up.

Bad movies have been able to enjoy a great opening weekend before crashing the second weekend, when negative word-of-mouth feedback cools enthusiasm. That week-long grace period is about to disappear for silver-screen flops. Consumer feedback won't take a week, a day, or an hour: the very second showing of a movie could suffer a noticeable falloff in attendance due to consumer criticism piped instantaneously through the new technologies. It will no longer be good enough to have weekend numbers ready for executives on Monday morning. Executives will run their own reports on revenue, tweet their reviews, and by Monday morning have acted on their decisions.

The final example is from the utilities industry. The most expensive energy a utility provides is energy to meet unexpected demand during peak periods of consumption. If the company could analyze trends in power consumption based on real-time meter reads, it could offer customers, in real time, extra-low rates for the week or month if they reduce their consumption during the following few hours. This advantage will become much more dramatic when we switch to electric cars; predictably, those cars are recharged the minute the owners return home from work.

In manufacturing enterprises, in-memory computing technology will connect the shop floor to the boardroom, and the shop-floor associate will have instant access to the same data as the board [[shop floor = daily transaction processing; boardroom = executive data mining]]. The shop floor will then see the results of its actions reflected immediately in the relevant Key Performance Indicators (KPIs).

Hardware: blade servers and multicore CPUs, with memory capacities measured in terabytes. Software: an in-memory database with highly compressible row/column storage designed to maximize in-memory computing technology. SAP BusinessObjects Event Insight software is key: in what used to be called exception reporting, the software deals with huge amounts of real-time data to determine immediate and appropriate action for a real-time situation. [[Both row and column storage! They convert to column-wise storage only for long-lived, high-value data?]]

Parallel processing takes place in the database layer rather than in the application layer, as it does in the client-server architecture. Total cost is 30% lower than traditional RDBMSs due to:
• Leaner hardware and less system capacity required, as mixed workloads of analytics, operations, and performance management run in a single system, which also reduces redundant data storage. [[Back to a single DB rather than a DB for transaction processing and a DW for boardroom decision support.]]
• Less extract-transform-load (ETL) between systems and fewer prebuilt reports, reducing the support required to run the software.

Report runtimes improve by up to 1,000 times, with compression rates of up to 10 times; performance improvements are expected to be even higher in SAP applications natively developed for in-memory databases. Initial results: a reduction of computing time from hours to seconds. However, in-memory computing will not eliminate the need for data warehousing. Real-time reporting will solve old challenges and create new opportunities, but new challenges will arise.

SAP HANA 1.0 software supports real-time database access to data from the SAP applications that support OLTP. Formerly, operational reporting functionality was transferred from OLTP applications to a data warehouse; with in-memory computing technology, this functionality is integrated back into the transaction system. Adopting in-memory computing results in an uncluttered architecture based on a few, tightly aligned core systems enabled by service-oriented architecture (SOA) to provide harmonized, valid metadata and master data across business processes. Some of the most salient shifts and trends in future enterprise architectures will be:
• A shift to BI self-service applications such as data exploration, instead of static report solutions.
• Central metadata and master-data repositories that define the data architecture, allowing data stewards to work across all business units and all platforms.

Real-time in-memory computing technology will cause a decline in Structured Query Language (SQL) satellite databases. Those databases might still be required as flexible, ad hoc, more business-oriented, less IT-static tools, but their offline status will be a disadvantage and will delay data updates. Some might argue that satellite systems with in-memory computing technology will take over from satellite SQL databases. SAP Business Explorer tools that use in-memory computing technology represent a paradigm shift: instead of waiting for IT to work through a long queue of support tickets to create new reports, business users can explore large data sets and define reports on the fly.
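As one illustration of why the "highly compressible" column storage mentioned above compresses so well, consider run-length encoding a single column of repetitive business data. This sketch is purely illustrative and is not SAP code; real column stores use more sophisticated schemes (dictionary encoding, bit packing) on the same principle:

```python
# Run-length encode a sequence into (value, count) pairs. A sorted or
# low-cardinality column collapses to a handful of runs, while row-wise
# interleaving of many fields destroys these runs.
def run_length_encode(values):
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1][1] += 1
        else:
            encoded.append([v, 1])
    return encoded

# Hypothetical "region" column of 10,000 order rows.
region_column = ["EMEA"] * 5000 + ["APAC"] * 3000 + ["AMER"] * 2000
encoded = run_length_encode(region_column)
print(encoded)                                  # [['EMEA', 5000], ['APAC', 3000], ['AMER', 2000]]
print(len(region_column), "values ->", len(encoded), "runs")
```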
View full slide show




68HC11 with the Handy Board Hardware: Microprocessor and Memory

• Address Bus: 15 wires controlled by the microprocessor to select a particular location in memory for read/write. On the Handy Board, the memory chip is 32K RAM, so 15 wires (2^15 = 32768) are needed to uniquely specify a memory address.
• Data Bus: 8 wires used to pass data between the microprocessor and memory, 1 byte at a time. When data is written to memory, the microprocessor drives the wires; when data is read from memory, the memory drives them.
• Read/Write Control Line: 1 wire driven by the microprocessor to control the function of the memory: +5 V for a memory read operation, 0 V for a memory write operation.
• Memory Enable Control Line: 1 wire (the E clock) connects to the enable circuitry of the memory. When the memory is enabled, it performs a read or write, as determined by the R/W line.

Computer = microprocessor (executes instructions) + memory (stores instructions and other data)

Copyright Prentice Hall, 2001
View full slide show




Virtual Memory

- Every CPU family today uses virtual memory, in which disk pretends to be a bigger RAM.
- Virtual memory capability can't be turned off (though you can turn off the ability to swap to disk).
- RAM is split up into pages, typically 4 KB each.
- Each page is either in RAM or out on disk.
- To keep track of the pages, a page table notes whether each page is in RAM, where it is in RAM (that is, physical address and virtual address are different), and some other information.
- So, a 4 GB physical RAM would need over a million page table entries, and a 32 GB physical RAM as on Schooner would need over 8M page table entries (32 GB / 4 KB).

Supercomputing in Plain English: Multicore, Tue Apr 3 2018
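The page-table arithmetic in the last bullet can be sketched in a few lines, assuming 4 KB pages and one page-table entry per physical page (an illustrative sketch, not from the slides):

```python
# One page-table entry per 4 KB page of physical RAM.
PAGE_SIZE = 4 * 1024

def page_table_entries(ram_bytes: int) -> int:
    return ram_bytes // PAGE_SIZE

GB = 1024 ** 3
print(page_table_entries(4 * GB))    # 1048576 entries (over a million) for 4 GB
print(page_table_entries(32 * GB))   # 8388608 entries for 32 GB
```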
View full slide show




Virtual Memory

- Every CPU family today uses virtual memory, in which disk pretends to be a bigger RAM.
- Virtual memory capability can't be turned off (though you can turn off the ability to swap to disk).
- RAM is split up into pages, typically 4 KB each.
- Each page is either in RAM or out on disk.
- To keep track of the pages, a page table notes whether each page is in RAM, where it is in RAM (that is, physical address and virtual address are different), and some other information.
- So, a 4 GB physical RAM would need over a million page table entries, and a 32 GB physical RAM as on Boomer would need over 8M page table entries (32 GB / 4 KB).

Supercomputing in Plain English: Multicore, Tue March 12 2013
View full slide show








Reclaiming Memory

- ESX maintains four host free memory states: high, soft, hard, and low, which are reflected by four thresholds: 6%, 4%, 2%, and 1% of host memory respectively.
- Whether ballooning or swapping (which activates memory compression) is used to reclaim host memory is largely determined by the current host free memory state. In the high state, the aggregate virtual machine guest memory usage is smaller than the host memory size. Whether or not host memory is overcommitted, the hypervisor will not reclaim memory through ballooning or swapping.
- If host free memory drops towards the soft threshold, the hypervisor starts to reclaim memory using ballooning. Ballooning happens before free memory actually reaches the soft threshold because it takes time for the balloon driver to allocate and pin guest physical memory. Usually, the balloon driver is able to reclaim memory in a timely fashion so that host free memory stays above the soft threshold.
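One plausible way to read the thresholds above as code (an illustrative sketch, not VMware's implementation; the exact state boundaries and action mapping are assumptions drawn from the surrounding text):

```python
# Map the host free-memory fraction to the reclamation technique the text
# associates with each state. Thresholds: high 6%, soft 4%, hard 2%, low 1%.
def reclamation_action(free_fraction: float) -> str:
    if free_fraction >= 0.06:   # high state: no reclamation needed
        return "no reclamation"
    if free_fraction >= 0.04:   # approaching/at soft: balloon driver works
        return "ballooning"
    if free_fraction >= 0.02:   # towards hard: swapping joins in
        return "ballooning + swapping + memory compression"
    # below the hard threshold; under 1% the hypervisor additionally
    # blocks VMs consuming more than their target allocations
    return "swapping + memory compression + block overconsuming VMs"

print(reclamation_action(0.10))   # no reclamation
print(reclamation_action(0.05))   # ballooning
print(reclamation_action(0.03))   # ballooning + swapping + memory compression
```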
View full slide show




Random Access Memory (RAM) Technology

Why do computer designers need to know about RAM technology?
• Processor performance is usually limited by memory bandwidth.
• As IC densities increase, lots of memory will fit on the processor chip, so on-chip memory can be tailored to specific needs: instruction cache, data cache, write buffer.

What makes RAM different from a bunch of flip-flops?
• Density: RAM is much denser.

CPE 442 memory.16 Introduction To Computer Architecture
View full slide show