Disk Management
 Low-level formatting, or physical formatting — dividing a disk into sectors that the disk controller can read and write
   Each sector can hold header information, plus data, plus error-correcting code (ECC)
   Usually 512 bytes of data, but the size can be selectable
 To use a disk to hold files, the operating system still needs to record its own data structures on the disk
   Partition the disk into one or more groups of cylinders, each treated as a logical disk
   Logical formatting, or "making a file system"
 To increase efficiency, most file systems group blocks into clusters
   Disk I/O is done in blocks
   File I/O is done in clusters
 Boot block initializes the system
   The bootstrap is stored in ROM
   The bootstrap loader program is stored in the boot blocks of the boot partition
 Methods such as sector sparing are used to handle bad blocks
Operating System Concepts Essentials – 8th Edition 11.28 Silberschatz, Galvin and Gagne ©2011




Disk Management
 Low-level formatting, or physical formatting — dividing a disk into sectors that the disk controller can read and write
 To use a disk to hold files, the operating system still needs to record its own data structures on the disk
   Partition the disk into one or more groups of cylinders
   Logical formatting or "making a file system"
 To increase efficiency, most file systems group blocks into clusters
   Disk I/O is done in blocks
   File I/O is done in clusters
 Boot block initializes the system
   The bootstrap is stored in ROM
   Bootstrap loader program
 Methods such as sector sparing are used to handle bad blocks
Operating System Concepts with Java – 8th Edition 12.18 Silberschatz, Galvin and Gagne ©2009




24 Functions are not Methods (the slide fills the frame with the repeated phrase "Methods are not Functions")




HDFS (Hadoop Distributed File System) is a distributed file system for commodity hardware. Differences from other distributed file systems are few but significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is part of Apache Hadoop Core: http://hadoop.apache.org/core/

2.1. Hardware Failure: Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are many components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.

2.3. Large Data Sets: Applications on HDFS have large data sets, typically gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It provides high aggregate data bandwidth and scales to hundreds of nodes in a single cluster. It supports ~10 million files in a single instance.

2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending writes to files in the future [write once, read many at the file level].

2.5. "Moving Computation is Cheaper than Moving Data": A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks, which are stored in a set of DataNodes.
The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode. The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run the GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely the case. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.

4. The File System Namespace: HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features. The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.

5. Data Replication: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
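As a concrete illustration of the client side of this architecture, the following sketch uses the Hadoop FileSystem API to create a file with an explicit per-file replication factor, read it back, and then change the replication factor. The NameNode address, paths, and replication values here are assumptions made for the example, not values taken from the text above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of an HDFS client interaction (hypothetical cluster address and paths).
public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; in a real deployment this usually comes from core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:9000");

        FileSystem fs = FileSystem.get(conf);

        // Create a file with a per-file replication factor of 3 (replication is configurable per file).
        Path file = new Path("/user/demo/example.txt");
        try (FSDataOutputStream out = fs.create(file, (short) 3)) {
            out.writeUTF("write-once data");
        }

        // Read the file back; the NameNode supplies only metadata (block locations),
        // while block data is streamed to and from DataNodes.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }

        // The replication factor can also be changed after creation.
        fs.setReplication(file, (short) 2);
    }
}
```

Note that this matches the point above that user data never flows through the NameNode: the client only asks it for namespace operations and block locations.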




Disk Management
 Low-level formatting, or physical formatting — dividing a disk into sectors that the disk controller can read and write
   Each sector can hold header information, plus data, plus error-correcting code (ECC)
   Usually 512 bytes of data, but the size can be selectable
 To use a disk to hold files, the operating system still needs to record its own data structures on the disk
   Partition the disk into one or more groups of cylinders, each treated as a logical disk
   Logical formatting, or "making a file system"
 To increase efficiency, most file systems group blocks into clusters
   Disk I/O is done in blocks
   File I/O is done in clusters
Operating System Concepts Essentials – 2nd Edition 9.29 Silberschatz, Galvin and Gagne ©2013
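To make the block/cluster grouping above concrete, here is a minimal sketch that maps a byte offset within a file to its block and cluster indices. The sector, block, and cluster sizes are illustrative assumptions chosen for the example, not values fixed by the slide.

```java
// Illustrative sketch: mapping a file byte offset to block and cluster indices.
// The sizes below are assumptions for the example, not values from the slide.
public class BlockClusterMapping {
    static final int SECTOR_SIZE = 512;              // bytes per sector (commonly 512, but selectable)
    static final int BLOCK_SIZE = 8 * SECTOR_SIZE;   // a 4 KB block = 8 sectors
    static final int BLOCKS_PER_CLUSTER = 8;         // the file system groups blocks into clusters
    static final int CLUSTER_SIZE = BLOCK_SIZE * BLOCKS_PER_CLUSTER;

    public static void main(String[] args) {
        long offset = 1_000_000L;                    // arbitrary byte offset within a file

        long block = offset / BLOCK_SIZE;            // disk I/O is done in blocks
        long cluster = offset / CLUSTER_SIZE;        // file I/O is done in clusters
        long blockWithinCluster = block % BLOCKS_PER_CLUSTER;

        System.out.printf("offset %d -> cluster %d, block %d (block %d within its cluster)%n",
                offset, cluster, block, blockWithinCluster);
    }
}
```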




6. The Persistence of File System Metadata: The HDFS namespace is stored by the NameNode. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to file system metadata. For example, creating a new file in HDFS causes the NameNode to insert a record into the EditLog indicating this. Similarly, changing the replication factor of a file causes a new record to be inserted into the EditLog. The NameNode uses a file in its local host OS file system to store the EditLog. The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode's local file system too. The NameNode keeps an image of the entire file system namespace and file Blockmap in memory. This key metadata item is designed to be compact, such that a NameNode with 4 GB of RAM is plenty to support a huge number of files and directories. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint. In the current implementation, a checkpoint only occurs when the NameNode starts up. Work is in progress to support periodic checkpointing in the near future. The DataNode stores HDFS data in files in its local file system. The DataNode has no knowledge about HDFS files. It stores each block of HDFS data in a separate file in its local file system. The DataNode does not create all files in the same directory. Instead, it uses a heuristic to determine the optimal number of files per directory and creates subdirectories appropriately. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory. When a DataNode starts up, it scans through its local file system, generates a list of all HDFS data blocks that correspond to each of these local files and sends this report to the NameNode: this is the Blockreport.

7. The Communication Protocols: All HDFS communication protocols are layered on top of the TCP/IP protocol. A client establishes a connection to a configurable TCP port on the NameNode machine. It talks the ClientProtocol with the NameNode. The DataNodes talk to the NameNode using the DataNode Protocol. A Remote Procedure Call (RPC) abstraction wraps both the Client Protocol and the DataNode Protocol. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.

8. Robustness: The primary objective of HDFS is to store data reliably even in the presence of failures. The three common types of failures are NameNode failures, DataNode failures and network partitions.

8.1. Data Disk Failure, Heartbeats and Re-Replication: Each DataNode sends a Heartbeat message to the NameNode periodically. A network partition can cause a subset of DataNodes to lose connectivity with the NameNode. The NameNode detects this condition by the absence of a Heartbeat message. The NameNode marks DataNodes without recent Heartbeats as dead and does not forward any new I/O requests to them. Any data that was registered to a dead DataNode is not available to HDFS any more.
DataNode death may cause the replication factor of some blocks to fall below their specified value. The NameNode constantly tracks which blocks need to be replicated and initiates replication whenever necessary. The necessity for re-replication may arise for many reasons: a DataNode may become unavailable, a replica may become corrupted, a hard disk on a DataNode may fail, or the replication factor of a file may be increased.

8.2. Cluster Rebalancing: The HDFS architecture is compatible with data rebalancing schemes. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold. In the event of a sudden high demand for a particular file, a scheme might dynamically create additional replicas and rebalance other data in the cluster. These types of data rebalancing schemes are not yet implemented.

8.3. Data Integrity: It is possible that a block of data fetched from a DataNode arrives corrupted. This corruption can occur because of faults in a storage device, network faults, or buggy software. The HDFS client software implements checksum checking on the contents of HDFS files. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents, it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, then the client can opt to retrieve that block from another DataNode that has a replica of that block.
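The client-side integrity check described in 8.3 amounts to comparing a checksum recorded at write time with one recomputed over the bytes actually received. The sketch below illustrates the idea with a whole-block CRC32; real HDFS checksums fixed-size chunks within each block and keeps them in per-block metadata, so treat this as a simplified model rather than the actual implementation.

```java
import java.util.zip.CRC32;

// Simplified sketch of client-side block integrity checking, in the spirit of HDFS's
// checksum verification. Real HDFS checksums chunks within a block; this example
// checksums a whole block for illustration only.
public class BlockChecksumSketch {

    // Compute a CRC32 checksum over a block's bytes (as the writer would at creation time).
    static long checksum(byte[] block) {
        CRC32 crc = new CRC32();
        crc.update(block, 0, block.length);
        return crc.getValue();
    }

    // Verify a block fetched from a DataNode against the checksum recorded at write time.
    // If this returns false, the client can retry the read from another replica.
    static boolean verify(byte[] fetchedBlock, long storedChecksum) {
        return checksum(fetchedBlock) == storedChecksum;
    }

    public static void main(String[] args) {
        byte[] block = "example block contents".getBytes();
        long stored = checksum(block);          // computed when the file was written

        // Later, after fetching the block from a DataNode:
        System.out.println("intact replica verifies: " + verify(block, stored));

        block[0] ^= 0x01;                       // simulate corruption in transit or on disk
        System.out.println("corrupted replica verifies: " + verify(block, stored));
    }
}
```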




Disk Management (Cont.)
 Raw disk access for apps that want to do their own block management and keep the OS out of the way (databases, for example)
 Boot block initializes the system
   The bootstrap is stored in ROM
   The bootstrap loader program is stored in the boot blocks of the boot partition
 Methods such as sector sparing are used to handle bad blocks
Operating System Concepts Essentials – 2nd Edition 9.30 Silberschatz, Galvin and Gagne ©2013




8.4. Metadata Disk Failure: The FsImage and EditLog are central data structures. A corruption of these files can cause the HDFS instance to be non-functional. For this reason, the NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog. Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously. This synchronous updating of multiple copies of the FsImage and EditLog may degrade the rate of namespace transactions per second that a NameNode can support. However, this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use. The NameNode machine is a single point of failure for an HDFS cluster. If the NameNode machine fails, manual intervention is necessary. Currently, automatic restart and failover of the NameNode software to another machine is not supported.

8.5. Snapshots: Snapshots support storing a copy of data at a particular instant of time. One usage of the snapshot feature may be to roll back a corrupted HDFS instance to a previously known good point in time. HDFS does not currently support snapshots but will in a future release.

9. Data Organization

9.1. Data Blocks: HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 64 MB. Thus, an HDFS file is chopped up into 64 MB chunks, and if possible, each chunk will reside on a different DataNode.

9.2. Staging: A client request to create a file does not reach the NameNode immediately. In fact, initially the HDFS client caches the file data into a temporary local file. Application writes are transparently redirected to this temporary local file. When the local file accumulates data worth over one HDFS block size, the client contacts the NameNode. The NameNode inserts the file name into the file system hierarchy and allocates a data block for it. The NameNode responds to the client request with the identity of the DataNode and the destination data block. Then the client flushes the block of data from the local temporary file to the specified DataNode. When a file is closed, the remaining un-flushed data in the temporary local file is transferred to the DataNode. The client then tells the NameNode that the file is closed. At this point, the NameNode commits the file creation operation into a persistent store. If the NameNode dies before the file is closed, the file is lost. The above approach has been adopted after careful consideration of target applications that run on HDFS. These applications need streaming writes to files. If a client writes to a remote file directly without any client-side buffering, the network speed and the congestion in the network impact throughput considerably. This approach is not without precedent. Earlier distributed file systems, e.g. AFS, have used client-side caching to improve performance. A POSIX requirement has been relaxed to achieve higher performance of data uploads.

9.3. Replication Pipelining: When a client is writing data to an HDFS file, its data is first written to a local file as explained in the previous section.
Suppose the HDFS file has a replication factor of three. When the local file accumulates a full block of user data, the client retrieves a list of DataNodes from the NameNode. This list contains the DataNodes that will host a replica of that block. The client then flushes the data block to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB), writes each portion to its local repository and transfers that portion to the second DataNode in the list. The second DataNode, in turn starts receiving each portion of the data block, writes that portion to its repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the data to its local repository. Thus, a DataNode can be receiving data from the previous one in the pipeline and at the same time forwarding data to the next one in the pipeline. Thus, the data is pipelined from one DataNode to the next.
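The pipelining just described can be pictured as each node writing a portion locally and immediately forwarding it downstream. The toy model below does exactly that in-process; the class names, portion handling, and node list are assumptions made for illustration, and this is not the real DataNode transfer protocol.

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.List;

// Toy model of replication pipelining: each "DataNode" stores a portion locally,
// then forwards it to the next node in the pipeline. Not the real DataNode protocol.
public class PipelineSketch {
    static final int PORTION_SIZE = 4 * 1024;   // data is forwarded in small portions (4 KB)

    static class DataNode {
        final String name;
        final ByteArrayOutputStream localRepository = new ByteArrayOutputStream();
        DataNode(String name) { this.name = name; }

        // Receive one portion: write it locally, then flush it to the next node (if any).
        void receive(byte[] portion, List<DataNode> downstream) {
            localRepository.write(portion, 0, portion.length);
            if (!downstream.isEmpty()) {
                downstream.get(0).receive(portion, downstream.subList(1, downstream.size()));
            }
        }
    }

    public static void main(String[] args) {
        // Replication factor 3: the NameNode would supply this list of DataNodes.
        List<DataNode> pipeline = List.of(new DataNode("dn1"), new DataNode("dn2"), new DataNode("dn3"));

        byte[] block = new byte[64 * 1024];      // one small, illustrative block of user data
        Arrays.fill(block, (byte) 'x');

        // The client flushes the block to the first DataNode portion by portion.
        for (int off = 0; off < block.length; off += PORTION_SIZE) {
            int len = Math.min(PORTION_SIZE, block.length - off);
            byte[] portion = Arrays.copyOfRange(block, off, off + len);
            pipeline.get(0).receive(portion, pipeline.subList(1, pipeline.size()));
        }

        for (DataNode dn : pipeline) {
            System.out.println(dn.name + " stored " + dn.localRepository.size() + " bytes");
        }
    }
}
```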




Producer/Consumer with a Synchronized Circular Buffer — sample output trace. A producer writes the values 1 through 10 into a five-cell circular buffer while a consumer reads them; each trace line reports the action, the cell used, and the current write and read indices (for example, "Produced 2 into cell 1, write 2, read 0" and "Consumed 1 from cell 0, write 2, read 1"). Whenever the producer gets a full buffer ahead of the consumer it blocks with "BUFFER FULL, WAITING TO PRODUCE n"; after ProduceInteger finishes producing values and terminates, the consumer drains the remaining cells, reports "BUFFER EMPTY", and finishes with "ConsumeInteger retrieved values totaling: 55" before terminating. Ref: http://userhome.brooklyn.cuny.edu/irudowdky/OperatingSystems.htm & Silberschatz, Gagne, & Galvin, Operating Systems Concepts, 7th ed., Wiley (ch. 1–3).
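For reference, a minimal synchronized circular buffer of the kind the trace above exercises can be written with wait/notify as follows. The class and method names are assumptions; this is a sketch in the spirit of the slide's ProduceInteger/ConsumeInteger demo, not the original program.

```java
// Minimal sketch of a synchronized circular buffer with one producer and one consumer.
// Class and method names are illustrative; this is not the original slide's program.
public class CircularBufferSketch {
    private final int[] buffer = new int[5];   // five cells, as in the trace above
    private int writeIndex = 0, readIndex = 0, count = 0;

    public synchronized void put(int value) throws InterruptedException {
        while (count == buffer.length) {        // BUFFER FULL: wait for the consumer
            wait();
        }
        buffer[writeIndex] = value;
        writeIndex = (writeIndex + 1) % buffer.length;
        count++;
        notifyAll();
    }

    public synchronized int get() throws InterruptedException {
        while (count == 0) {                    // BUFFER EMPTY: wait for the producer
            wait();
        }
        int value = buffer[readIndex];
        readIndex = (readIndex + 1) % buffer.length;
        count--;
        notifyAll();
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        CircularBufferSketch buf = new CircularBufferSketch();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 10; i++) buf.put(i);            // produce 1..10
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                int total = 0;
                for (int i = 0; i < 10; i++) total += buf.get();
                System.out.println("retrieved values totaling: " + total);   // 55
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```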




Blocks on Blocks on Blocks! Dynamically Rearranging Synteny Blocks in Comparative Genomes — Nick Egan's final project presentation for BIO 131, Intro to Computational Biology, taught by Anna Ritz.




HDFS (Hadoop Distributed File System) is a distributed file system for commodity hardware. Differences from other distributed file systems are few but significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is part of Apache Hadoop Core: http://hadoop.apache.org/core/

2.1. Hardware Failure: Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are many components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.

2.3. Large Data Sets: Applications on HDFS have large data sets, typically gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It provides high aggregate data bandwidth and scales to hundreds of nodes in a single cluster. It supports ~10 million files in a single instance.

2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending writes to files in the future [write once, read many at the file level].

2.5. "Moving Computation is Cheaper than Moving Data": A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks, which are stored in a set of DataNodes.
The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.




Not All 20 Point Fonts Are Equal — rows of "Can You Read" sample text rendered at 20, 16, 14, and 12 points. My students tell me that they like the readability of the Arial font. I never use fonts smaller than 20 point for lecture.




Disk Structure
• Cylinder: the set of tracks that all the heads are currently located at
• Track: a ring on a disk where data can be written
• Sector: the smallest transfer unit of data, accessed as part of a block
• Cluster: a group of sectors the operating system treats as a unit
• Organization choices
  – Sector mapping (one-dimensional array of logical blocks)
    • Sector 0 is the first sector of track 0 on the outermost cylinder
    • Subsequent sectors map through tracks, then through cylinders, in an outer-to-inner direction
  – Sector counts and density
    • Fixed sectors per track: varying densities
    • Varied sectors per track: outer tracks have more sectors; constant density
  – Bad block management
    • Sector sparing: replace bad sectors with spares in the same cylinder
    • Sector slipping: copy all sectors down to the next spare




System Boot
 When power is initialized on the system, execution starts at a fixed memory location
   Firmware ROM is used to hold the initial boot code
 The operating system must be made available to hardware so hardware can start it
   A small piece of code – the bootstrap loader, stored in ROM or EEPROM – locates the kernel, loads it into memory, and starts it
   Sometimes a two-step process: a boot block at a fixed location is loaded by ROM code, which then loads the bootstrap loader from disk
   A common bootstrap loader, GRUB, allows selection of the kernel from multiple disks, versions, and kernel options
   The kernel loads and the system is then running
Operating System Concepts – 9th Edition 2.46 Silberschatz, Galvin and Gagne




File system implementation
 Physical disks
   Divided into one or more "partitions" (logical, separate disks)
   Each partition can have its own file system
   Sector 0 = MBR (master boot record)
     List of partitions (starts and ends)
     Indicates the boot partition
   Every partition has a boot block (although it may be empty)
 Boot steps:
   1. Boot code in the MBR executes
   2. It reads in the boot block code of the boot partition and executes it
   3. The boot block code boots the OS code in the partition
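To make the MBR layout concrete, the sketch below reads sector 0 from a disk image and prints the four primary partition-table entries, assuming the classic layout (446 bytes of boot code, four 16-byte partition entries, 0x55 0xAA signature). The image file name is a placeholder assumption; point it at an image file rather than a live device.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: parse the classic MBR partition table from sector 0 of a disk image.
// Layout assumed: 446 bytes of boot code, four 16-byte partition entries, 0x55 0xAA signature.
public class MbrSketch {
    public static void main(String[] args) throws IOException {
        byte[] sector0 = new byte[512];
        try (RandomAccessFile disk = new RandomAccessFile("disk.img", "r")) {   // hypothetical image
            disk.readFully(sector0);
        }

        boolean validSignature = (sector0[510] & 0xFF) == 0x55 && (sector0[511] & 0xFF) == 0xAA;
        System.out.println("boot signature valid: " + validSignature);

        ByteBuffer mbr = ByteBuffer.wrap(sector0).order(ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i < 4; i++) {
            int base = 446 + 16 * i;                              // partition table starts at offset 446
            boolean bootable = (sector0[base] & 0xFF) == 0x80;    // 0x80 marks the boot partition
            int type = sector0[base + 4] & 0xFF;                  // partition type byte
            long startLba = mbr.getInt(base + 8) & 0xFFFFFFFFL;   // first sector (LBA): partition start
            long length   = mbr.getInt(base + 12) & 0xFFFFFFFFL;  // partition length in sectors
            System.out.printf("partition %d: bootable=%b type=0x%02X start=%d sectors=%d%n",
                    i + 1, bootable, type, startLba, length);
        }
    }
}
```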




System Boot
1. Reset event: the program counter (PC) is set to the address of the boot loader in the system BIOS.
2. The instructions in the boot loader:
   a. Execute diagnostics
   b. Load a boot block from a fixed disk location
3. The boot block then loads the entire operating system into memory.
4. The operating system initializes itself and begins to execute.
Firmware Notes
• Definition: firmware is a set of instructions programmed persistently on a read-only memory (ROM) device.
• Early ROM chips had to be physically changed to update the boot loader.
• Erasable Programmable Read-Only Memory (EPROM) – invented at Intel, erased using UV light, patented in 1972.
• Electrically Erasable PROM (EEPROM) – able to erase and rewrite ROM electrically.




Disk Structure
 Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer
   Low-level formatting creates logical blocks on the physical media
 The 1-dimensional array of logical blocks is mapped into the sectors of the disk sequentially
   Sector 0 is the first sector of the first track on the outermost cylinder
   Mapping proceeds in order through that track, then the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost
   Logical-to-physical address translation should be easy
     Except for bad sectors
     Except for a non-constant number of sectors per track via constant angular velocity
Operating System Concepts Essentials – 2nd Edition 9.11 Silberschatz, Galvin and Gagne ©2013
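In the idealized textbook case of a fixed geometry, the sequential mapping above reduces to a simple conversion between a logical block address and a (cylinder, head, sector) triple. The geometry constants below are assumptions for illustration; real drives vary sectors per track and remap bad sectors, which is exactly why the slide notes the translation is not always easy.

```java
// Textbook-style LBA <-> CHS conversion, assuming a fixed geometry
// (constant sectors per track and heads per cylinder; no bad-sector remapping).
public class LbaChsSketch {
    static final int SECTORS_PER_TRACK = 63;    // illustrative geometry values
    static final int HEADS_PER_CYLINDER = 16;   // (tracks per cylinder)

    // Map a logical block address to (cylinder, head, sector-within-track).
    static int[] toChs(long lba) {
        int sector = (int) (lba % SECTORS_PER_TRACK);
        long track = lba / SECTORS_PER_TRACK;
        int head = (int) (track % HEADS_PER_CYLINDER);
        int cylinder = (int) (track / HEADS_PER_CYLINDER);
        return new int[] { cylinder, head, sector };
    }

    // Inverse mapping: outermost cylinder first, then track by track within a cylinder.
    static long toLba(int cylinder, int head, int sector) {
        return ((long) cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + sector;
    }

    public static void main(String[] args) {
        long lba = 123_456;
        int[] chs = toChs(lba);
        System.out.printf("LBA %d -> cylinder %d, head %d, sector %d%n", lba, chs[0], chs[1], chs[2]);
        System.out.println("round trip: " + toLba(chs[0], chs[1], chs[2]));
    }
}
```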




Block Execution: Detail — flow diagram showing observing blocks moving from the input queue through execution (Preamble, Observing Block, "Post-amble"), with ready/ok/failed states, producing Measurement Sets that are delivered to the archive. EVLA Data Processing PDR, July 18–19, 2002, Boyd Waters, slide 13.