RAID Structure
• RAID – multiple disk drives provide reliability via redundancy
• Redundancy increases the mean time to data loss (an individual disk's mean time to failure is unchanged)
• Frequently combined with nonvolatile RAM (NVRAM) to cache writes to the RAID array; this write-back cache is protected from data loss during power failures
• RAID is arranged into six different levels
(Silberschatz, Galvin, and Gagne, Operating System Concepts with Java – 8th Edition)




RAID Structure
• RAID – redundant array of inexpensive (independent) disks
• Multiple disk drives provide reliability via redundancy
• Redundancy increases the mean time to data loss
• Mean time to repair – exposure time during which another failure could cause data loss
• Mean time to data loss is based on the above factors
• If mirrored disks fail independently, consider a disk with a 100,000-hour mean time to failure and a 10-hour mean time to repair:
   mean time to data loss is 100,000² / (2 ∗ 10) = 500 ∗ 10⁶ hours, or about 57,000 years! (worked out in the sketch below)
• Frequently combined with NVRAM to improve write performance
• Several improvements in disk-use techniques involve the use of multiple disks working cooperatively
(Silberschatz, Galvin, and Gagne, Operating System Concepts Essentials – 2nd Edition)
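A minimal worked version of the mirrored-pair calculation above, using the slide's hypothetical figures (100,000-hour MTTF, 10-hour MTTR) and assuming independent failures:

```python
# Mean time to data loss for a mirrored pair, assuming independent failures.
# The figures are the slide's hypothetical example values, not measurements.
mttf_hours = 100_000   # mean time to failure of one disk
mttr_hours = 10        # mean time to repair (window in which the mirror is exposed)

# Data is lost only if the second disk fails while the first is still being repaired.
mttdl_hours = mttf_hours ** 2 / (2 * mttr_hours)

print(f"{mttdl_hours:.3g} hours")                      # 5e+08 hours
print(f"about {mttdl_hours / (24 * 365):,.0f} years")  # about 57,078 years
```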




RAID (Cont.)
• Disk striping uses a group of disks as one storage unit (see the sketch after this list)
• RAID is arranged into six different levels
• RAID schemes improve performance and improve the reliability of the storage system by storing redundant data
   • Mirroring or shadowing (RAID 1) keeps a duplicate of each disk
   • Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provide high performance and high reliability
   • Block-interleaved parity (RAID 4, 5, 6) uses much less redundancy
• RAID within a storage array can still fail if the array fails, so automatic replication of the data between arrays is common
• Frequently, a small number of hot-spare disks are left unallocated; they automatically replace a failed disk and have data rebuilt onto them
(Silberschatz, Galvin, and Gagne, Operating System Concepts Essentials – 2nd Edition)
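A minimal sketch of the striping idea in the first bullet above; the plain round-robin mapping is an illustrative simplification, not the layout of any particular controller:

```python
# Minimal sketch of block-level striping (a RAID 0 layout), assuming a plain
# round-robin mapping; real controllers also deal with chunk sizes and alignment.
def stripe_location(logical_block: int, num_disks: int) -> tuple[int, int]:
    """Return (disk index, block offset on that disk) for a logical block."""
    return logical_block % num_disks, logical_block // num_disks

# With 4 disks, logical blocks 0..7 land on disks 0,1,2,3,0,1,2,3.
for lb in range(8):
    print(lb, stripe_location(lb, num_disks=4))
```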




Redundant Array of Independent Disks (RAID) – data organization on multiple disks
• RAID 0: Multiple disks for higher data rate; no redundancy
• RAID 1: Mirrored disks
• RAID 2: Error-correcting code
• RAID 3: Bit- or byte-level striping with a parity/checksum disk
   (parity relation: A ⊕ B ⊕ C ⊕ D ⊕ P = 0, so a lost disk is recovered as, e.g., B = A ⊕ C ⊕ D ⊕ P; see the sketch below)
• RAID 4: Parity/checksum applied to sectors, not bits or bytes
• RAID 5: Parity/checksum distributed across several disks
• RAID 6: Parity and a second check distributed across several disks
[Fig. 19.5: RAID levels 0–6, with a simplified view of data organization across data, mirror, parity, and spare disks.]
(Parhami, Computer Architecture, Memory System Design)
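A minimal sketch of the parity relation above (A ⊕ B ⊕ C ⊕ D ⊕ P = 0); the block contents are illustrative:

```python
# Minimal sketch of XOR parity, assuming four equally sized data blocks
# and one parity block.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

a, b, c, d = b"AAAA", b"BBBB", b"CCCC", b"DDDD"
p = xor_blocks(a, b, c, d)          # parity written to the parity disk

# If the disk holding B is lost, rebuild its contents from the survivors:
rebuilt_b = xor_blocks(a, c, d, p)
assert rebuilt_b == b
```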








RAPTOR Syntax and Semantics – Arrays
Array variable – Array variables are used to store many values (of the same type) without having to have many variable names. Instead of many variable names, a count-controlled loop is used to access (index) the individual elements (values) of an array variable. RAPTOR has one- and two-dimensional arrays of numbers. A one-dimensional array can be thought of as a sequence (or a list). A two-dimensional array can be thought of as a table (grid or matrix).
To create an array variable in RAPTOR, simply use it like an array variable, i.e., with an index: Score[1], Values[x], Matrix[3,4], etc. All array variables are indexed starting with 1 and go up to the largest index used so far; RAPTOR array variables grow in size as needed. The assignment statement GPAs[24] ← 4.0 assigns the value 4.0 to the 24th element of the array GPAs. If the array variable GPAs had not been used before, the other 23 elements of the GPAs array are initialized to 0 at the same time.
Array variables in action – Arrays and count-controlled loop statements were made for each other. Notice in each example the connection between the loop control variable and the array index, how the Length_Of function can be used in the count-controlled loop test, and that each example is a count-controlled loop with an Initialize, Test, Execute, and Modify part (I.T.E.M.).
[Flowchart examples shown in the original slide: assigning values to an array variable; reading values into an array variable; writing out an array variable's values; computing the total and average of an array variable's values; initializing the elements of a two-dimensional array (which requires two nested loops); finding the largest value in an array; and finding the INDEX of the largest value in an array. Python equivalents of two of these appear below.]
The initialization of previous elements to 0 happens only when the array variable is created. Successive assignment statements to the GPAs variable affect only the individual element listed. For example, the successive assignments GPAs[20] ← 1.7 and GPAs[11] ← 3.2 place the value 1.7 into the 20th position of the array and the value 3.2 into the 11th position of the array.
An array variable name, like GPAs, refers to ALL elements of the array. Adding an index (position) to the array variable enables you to refer to any specific element of the array variable. Two-dimensional arrays work similarly, i.e., Table[7,2] refers to the element in the 7th row and 2nd column. Individual elements of an array can be used exactly like any other variable, e.g., the array element GPAs[5] can be used anywhere the number variable X can be used.
The Length_Of function can be used to determine (and return) the number of elements associated with a particular array variable. For example, after all of the above, Length_Of(GPAs) is 24.
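Rough Python equivalents of two of the flowchart examples described above (the total/average loop and the find-the-index-of-the-largest-value loop), assuming a Python list in place of a RAPTOR array; note that RAPTOR indexes from 1 while Python indexes from 0, and len() plays the role of Length_Of:

```python
# Sample data standing in for the GPAs array; values are illustrative.
gpas = [3.1, 2.5, 4.0, 3.7, 1.7]

# Computing the total and average of an array variable's values
total = 0.0
for index in range(len(gpas)):
    total += gpas[index]
average = total / len(gpas)

# Finding the INDEX of the largest value
highest_index = 0
for index in range(len(gpas)):
    if gpas[index] >= gpas[highest_index]:
        highest_index = index

print(f"Average GPA: {average:.2f}")
print(f"The highest GPA is {gpas[highest_index]} at position {highest_index + 1}")
```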




Cache
• Pronounced "cash," a cache is a special high-speed storage mechanism.
• Two types of caching are commonly used in PCs: memory caching and disk caching. A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory.
• Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.
• Some memory caches are built into the architecture of microprocessors. Intel Core 2 processors have 2–4 MB caches. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory (often also located on the CPU die), called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM, but they are much larger. Some CPUs even have Level 3 caches. (Note that most recent-generation CPUs build L2 caches on the CPU die, running at the same clock rate as the CPU.)
• Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk. Today's hard drives often have 8–16 MB memory caches. (A minimal disk-cache sketch follows below.)
• When data is found in the cache, it is called a cache hit (versus a cache miss), and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
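A minimal sketch of the disk-caching idea described above: keep recently read blocks in main memory, count hits and misses, and evict the least recently used block when full. The hypothetical read_from_disk callback stands in for the real device access, and real caches also handle writes and more refined replacement policies:

```python
from collections import OrderedDict

class DiskCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # block number -> cached data
        self.hits = self.misses = 0

    def read(self, block_no: int, read_from_disk) -> bytes:
        if block_no in self.blocks:          # cache hit
            self.hits += 1
            self.blocks.move_to_end(block_no)            # mark most recently used
            return self.blocks[block_no]
        self.misses += 1                     # cache miss: go to the disk
        data = read_from_disk(block_no)
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        return data

cache = DiskCache(capacity_blocks=2)
for b in [1, 2, 1, 3, 1]:
    cache.read(b, read_from_disk=lambda n: bytes(512))
print(f"hit rate = {cache.hits / (cache.hits + cache.misses):.0%}")  # 40%
```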




RAID (Cont.)
• Several improvements in disk-use techniques involve the use of multiple disks working cooperatively
• Disk striping uses a group of disks as one storage unit
• RAID schemes improve performance and improve the reliability of the storage system by storing redundant data
   • Mirroring or shadowing (RAID 1) keeps a duplicate of each disk
   • Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provide high performance and high reliability
   • Block-interleaved parity (RAID 4, 5, 6) uses much less redundancy
• RAID within a storage array can still fail if the array fails, so automatic replication of the data between arrays is common
• Frequently, a small number of hot-spare disks are left unallocated; they automatically replace a failed disk and have data rebuilt onto them
(Silberschatz, Galvin, and Gagne, Operating System Concepts Essentials – 8th Edition)








Array Lesson 1 Outline
1. Array Lesson 1 Outline
2. Mean of a List of Numbers
3. Mean: Declarations
4. Mean: Greeting, Input
5. Mean: Calculation
6. Mean: Output
7. Mean: Compile, Run
8. Mean: 5 Input Values
9. Mean: 7 Input Values
10. Mean: One Line Different
11. Mean: Compile, Run for 5
12. Mean: Compile, Run for 7
13. Scalars #1
14. Scalars #2
15. Another Scalar Example
16. A Similar Program, with Multiplication
17. A Similar Program, with a Twist
18. Arrays
19. Array Element Properties
20. Array Properties #1
21. Array Properties #2
22. Array Properties #3
23. Array Properties #4
24. Array Properties #5
25. Array Indices #1
26. Array Indices #2
27. Multidimensional Arrays & 1D Arrays
28. Array Declarations #1
29. Array Declarations #2
30. Array Declarations #3
31. Assigning a Value to an Array Element
32. Array Element Assignment Example
33. Getting Array Element Value with scanf
34. Array Element scanf Example #1
35. Array Element scanf Example #2
36. for Loops for Tasks on Arrays #1
37. for Loops for Tasks on Arrays #2
38. Another for/Array Example #1
39. Another for/Array Example #2
40. Another for/Array Example #3
41. Don't Need to Use Entire Declared Length
(Array Lesson 1, CS1313, Spring 2019)




Stable-Storage Implementation
• The write-ahead log scheme requires stable storage
• Stable storage means data is never lost (due to failure, etc.)
• To implement stable storage:
   • Replicate information on more than one nonvolatile storage medium with independent failure modes
   • Update information in a controlled manner to ensure that we can recover the stable data after any failure during data transfer or recovery (see the sketch below)
• A disk write has one of three outcomes:
   1. Successful completion – the data were written correctly on disk
   2. Partial failure – a failure occurred in the midst of the transfer, so only some of the sectors were written with the new data, and the sector being written during the failure may have been corrupted
   3. Total failure – the failure occurred before the disk write started, so the previous data values on the disk remain intact
(Silberschatz, Galvin, and Gagne, Operating System Concepts Essentials – 2nd Edition)
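A minimal sketch of the controlled-update idea above: write the same block to two devices with independent failure modes, one at a time, and verify each copy before moving on. The replica paths and the read-back check are illustrative assumptions, not a production implementation:

```python
import os

REPLICAS = ["/mnt/diskA/block.dat", "/mnt/diskB/block.dat"]  # hypothetical paths

def stable_write(data: bytes) -> None:
    for path in REPLICAS:                # update one copy at a time
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())         # force the copy onto the device
        with open(path, "rb") as f:      # read back to detect a partial failure
            if f.read() != data:
                raise IOError(f"write to {path} did not complete correctly")
```

Because the second copy is touched only after the first has been written and verified, at least one copy always holds either the old or the new value, which is what allows recovery after a failure during the transfer.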




HDFS (Hadoop Distributed File System) is a distributed file system for commodity hardware. Its differences from other distributed file systems are few but significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is part of Apache Hadoop Core (http://hadoop.apache.org/core/).
2.1. Hardware Failure: Hardware failure is the norm. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are many components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.
2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not general-purpose applications that typically run on general-purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements not needed for applications that are targeted for HDFS. POSIX semantics in a few key areas have been traded to increase data throughput rates.
2.3. Large Data Sets: Applications on HDFS have large data sets, typically gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It provides high aggregate data bandwidth and scales to hundreds of nodes in a single cluster. It supports roughly 10 million files in a single instance.
2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A MapReduce application or a web crawler application fits perfectly with this model. There is a plan to support appending writes to files in the future (write once, read many, at the file level).
2.5. "Moving Computation is Cheaper than Moving Data": A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located than to move the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.
2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.
3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks stored in a set of DataNodes.
The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode. The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run the GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely the case. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.
4. The File System Namespace: HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features. The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.
5. Data Replication: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
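A minimal sketch of the block/replica bookkeeping described above: a file is split into fixed-size blocks and each block is assigned to `replication` DataNodes. The node names and the round-robin placement are illustrative, not HDFS's actual policy (the rack-aware policy is sketched after section 5 below):

```python
from itertools import cycle

def plan_blocks(file_size: int, block_size: int, datanodes: list[str],
                replication: int = 3) -> list[dict]:
    """Return a per-block list of the DataNodes chosen to hold its replicas."""
    nodes = cycle(datanodes)
    num_blocks = (file_size + block_size - 1) // block_size   # last block may be short
    plan = []
    for block_id in range(num_blocks):
        replicas = [next(nodes) for _ in range(replication)]
        plan.append({"block": block_id, "replicas": replicas})
    return plan

# Example: a 300 MB file with 128 MB blocks on four hypothetical DataNodes.
print(plan_blocks(300 * 2**20, 128 * 2**20, ["dn1", "dn2", "dn3", "dn4"]))
```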




5.1. Replica Placement: The First Baby Steps: The placement of replicas is critical to HDFS reliability and performance. Optimizing replica placement distinguishes HDFS from most other distributed file systems. This is a feature that needs lots of tuning and experience. The purpose of a rack-aware replica placement policy is to improve data reliability, availability, and network bandwidth utilization. The current implementation of the replica placement policy is a first effort in this direction. The short-term goals of implementing this policy are to validate it on production systems, learn more about its behavior, and build a foundation to test and research more sophisticated policies. Large HDFS instances run on a cluster of computers that commonly spreads across many racks. Communication between two nodes in different racks has to go through switches. In most cases, network bandwidth between machines in the same rack is greater than network bandwidth between machines in different racks. The NameNode determines the rack id each DataNode belongs to via the process outlined in Rack Awareness. A simple but non-optimal policy is to place replicas on unique racks. This prevents losing data when an entire rack fails and allows use of bandwidth from multiple racks when reading data. This policy evenly distributes replicas in the cluster, which makes it easy to balance load on component failure. However, this policy increases the cost of writes because a write needs to transfer blocks to multiple racks. For the common case, when the replication factor is three, HDFS's placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack (see the placement sketch below). This policy cuts the inter-rack write traffic, which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data, since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not distribute evenly across the racks: one third of the replicas are on one node, two thirds of the replicas are on one rack, and the remaining third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance. The current, default replica placement policy described here is a work in progress.
5.2. Replica Selection: To minimize global bandwidth consumption and read latency, HDFS tries to satisfy a read request from a replica that is closest to the reader. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If an HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.
5.3. Safemode: On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode.
After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas, and replicates those blocks to other DataNodes.
Bigger Picture: Hadoop vs. Other Systems
• Computing model – Distributed databases: notion of a transaction as the unit of work; ACID properties, concurrency control. Hadoop: notion of a job as the unit of work; no concurrency control.
• Data model – Distributed databases: structured data with a known schema; read/write mode. Hadoop: any data, any format; read-only mode.
• Cost model – Distributed databases: expensive servers. Hadoop: cheap commodity machines.
• Fault tolerance – Distributed databases: failures are rare, recovery mechanisms. Hadoop: failures are common (clusters of ~1000s of machines), simple and efficient fault tolerance.
• Key characteristics – Distributed databases: efficiency, optimizations, fine-tuning. Hadoop: scalability, flexibility, fault tolerance.
Cloud Computing: a compute model where any compute infrastructure can run on the cloud; hardware and software are provided as remote services; elastic: grows/shrinks based on the user's demand. Example: Amazon EC2.
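A minimal sketch of the default placement policy described in section 5.1 (one replica on a node in the writer's rack, a second on a different node in the same rack, a third on a node in a different rack). The cluster map, node names, and random choices are illustrative assumptions, not HDFS code:

```python
import random

def place_replicas(cluster: dict[str, list[str]], local_rack: str) -> list[str]:
    """cluster maps rack id -> list of DataNode names on that rack."""
    first, second = random.sample(cluster[local_rack], 2)       # two nodes in the local rack
    remote_rack = random.choice([r for r in cluster if r != local_rack])
    third = random.choice(cluster[remote_rack])                 # one node in a remote rack
    return [first, second, third]

cluster = {"rack1": ["dn1", "dn2", "dn3"], "rack2": ["dn4", "dn5"]}
print(place_replicas(cluster, local_rack="rack1"))
```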




RAID Technology (contd.)
• Different RAID organizations were defined based on different combinations of two factors: the granularity of data interleaving (striping) and the pattern used to compute redundant information.
• RAID level 0 has no redundant data and hence has the best write performance, at the risk of data loss.
• RAID level 1 uses mirrored disks.
• RAID level 2 uses memory-style redundancy by using Hamming codes, which contain parity bits for distinct overlapping subsets of components. Level 2 includes both error detection and correction.
• RAID level 3 uses a single parity disk, relying on the disk controller to figure out which disk has failed.
• RAID levels 4 and 5 use block-level data striping, with level 5 distributing data and parity information across all disks.
• RAID level 6 applies the so-called P + Q redundancy scheme, using Reed-Solomon codes to protect against up to two disk failures with just two redundant disks.
(Copyright © 2007 Ramez Elmasri and Shamkant B. Navathe)




Consistency Points (2 of 2)
• WAFL uses NVRAM (NV = non-volatile):
   – NVRAM is DRAM with batteries to avoid losing data during an unexpected power-off (some servers now use just solid-state or hybrid storage)
   – NFS requests are logged to NVRAM
   – Upon an unclean shutdown, re-apply the logged NFS requests to the last consistency point (see the sketch below)
   – Upon a clean shutdown, create a consistency point and turn off NVRAM until needed (to save power/batteries)
• Note: a typical FS uses NVRAM as a metadata write cache instead of just for logs
   – This uses more NVRAM space (WAFL logs are smaller)
      • Ex: "rename" needs 32 KB; WAFL needs 150 bytes
      • Ex: an 8 KB write needs 3 blocks (data, inode, indirect pointer); WAFL needs 1 block (data) plus 120 bytes for the log
   – Slower response time for a typical FS than for WAFL (although WAFL may be a bit slower upon restart)
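A minimal sketch of the "log the request, replay after an unclean shutdown" idea above. The request format, the in-memory dictionaries, and apply() are illustrative assumptions; WAFL's real NVRAM log records NFS operations, not this simplified form:

```python
nvram_log: list[dict] = []          # stands in for the battery-backed log

def apply(request: dict, filesystem: dict) -> None:
    if request["op"] == "write":
        filesystem[request["path"]] = request["data"]

def log_and_apply(request: dict, filesystem: dict) -> None:
    nvram_log.append(request)       # logged before the request is acknowledged
    apply(request, filesystem)

def recover(fs_at_last_consistency_point: dict) -> dict:
    # After an unclean shutdown: re-apply every logged request to the last
    # consistency point instead of repairing the file system.
    fs = dict(fs_at_last_consistency_point)
    for request in nvram_log:
        apply(request, fs)
    return fs
```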




Inserting Bubbles in a Pipeline
[Figure: cycle-by-cycle pipeline diagrams (instruction cache, register file, ALU, data cache, register file stages) for instructions 1–5, showing a read-after-write dependency on register $8. Without data forwarding, three bubbles are needed to resolve the dependency; two bubbles suffice if we assume a register can be updated and read from in the same cycle.]
(Parhami, Computer Architecture, Data Path and Control)




Producer/Consumer Synchronized Circular Buffer
[Sample program output: a producer writes the values 1–10 into a 5-cell circular buffer while a consumer reads them. The trace shows the buffer contents and the write and read indices after each operation, "BUFFER FULL ... WAITING TO PRODUCE" messages when the producer must wait, and a final "BUFFER EMPTY" after the consumer retrieves values totaling 55. A sketch of such a buffer follows below.]
Ref: http://userhome.brooklyn.cuny.edu/irudowdky/OperatingSystems.htm & Silberschatz, Gagne, & Galvin, Operating Systems Concepts, 7th ed., Wiley (ch. 1–3)
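A minimal sketch of a synchronized circular buffer like the one whose trace is summarized above: one producer writes 1..10, one consumer reads them, and each side waits when the buffer is full or empty. The buffer size and values mirror the trace; the original example is in Java, this sketch uses Python threading:

```python
import threading

class CircularBuffer:
    def __init__(self, size: int = 5):
        self.cells = [None] * size
        self.write = self.read = self.count = 0
        self.cond = threading.Condition()   # one lock/condition pair for simplicity

    def put(self, value):
        with self.cond:
            while self.count == len(self.cells):   # buffer full: producer waits
                self.cond.wait()
            self.cells[self.write] = value
            self.write = (self.write + 1) % len(self.cells)
            self.count += 1
            self.cond.notify_all()

    def get(self):
        with self.cond:
            while self.count == 0:                 # buffer empty: consumer waits
                self.cond.wait()
            value = self.cells[self.read]
            self.read = (self.read + 1) % len(self.cells)
            self.count -= 1
            self.cond.notify_all()
            return value

buffer = CircularBuffer()
consumed = []

producer = threading.Thread(target=lambda: [buffer.put(i) for i in range(1, 11)])
consumer = threading.Thread(target=lambda: [consumed.append(buffer.get()) for _ in range(10)])
producer.start(); consumer.start()
producer.join(); consumer.join()
print("ConsumeInteger retrieved values totaling:", sum(consumed))   # 55
```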