Disk Scheduling
• The operating system is responsible for using hardware efficiently — for the disk drives, this means having fast access time and high disk bandwidth
• Access time has two major components
 – Seek time is the time for the disk to move the heads to the cylinder containing the desired sector
 – Rotational latency is the additional time waiting for the disk to rotate the desired sector to the disk head
• Minimize seek time
• Seek time ≈ seek distance
• Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer
Operating System Concepts with Java – 8th Edition, 12.8, Silberschatz, Galvin and Gagne ©2009




Basic Storage System Concepts
• The OS presents one of two virtual storage abstractions
 – Raw device: an array of data blocks (a partition)
 – File system: the OS schedules interleaved application requests
• Goal: optimize disk access time and throughput
• Access time (seek time + rotational latency)
 – Seek time: moving the heads to the cylinder with the data
 – Rotational latency: rotating the desired sector under the disk head
 – Transfer rate: data flow speed between drive and computer
 – The operating system attempts to minimize seek time; hardware optimizes rotational latency
• Definitions
 – Throughput: bytes transferred per unit time
 – Disk bandwidth: total bytes transferred / (time from first request to time of last transfer)
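The two definitions above translate directly into a short calculation. The sketch below is illustrative only; the numeric values are hypothetical, not taken from any slide.

```python
# Illustrative calculation of access time and disk bandwidth as defined above.
# All numeric inputs are hypothetical example values.

def access_time_ms(seek_ms: float, rotational_latency_ms: float) -> float:
    """Access time = seek time + rotational latency."""
    return seek_ms + rotational_latency_ms

def disk_bandwidth_mb_s(total_bytes: int,
                        first_request_s: float,
                        last_transfer_done_s: float) -> float:
    """Total bytes transferred / time from first request to last completion."""
    return total_bytes / (last_transfer_done_s - first_request_s) / 1e6

print(access_time_ms(4.0, 3.0))                     # 7.0 (ms)
print(disk_bandwidth_mb_s(100_000_000, 0.0, 2.0))   # 50.0 (MB/s)
```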




Disk Scheduling
• The operating system is responsible for using hardware efficiently — for the disk drives, this means having fast access time and high disk bandwidth
• Minimize seek time
• Seek time ≈ seek distance
• Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer
Operating System Concepts Essentials – 8th Edition, 11.16, Silberschatz, Galvin and Gagne ©2011




Disk Scheduling
• The operating system is responsible for using hardware efficiently — for the disk drives, this means having fast access time and high disk bandwidth
• Minimize seek time
• Seek time ≈ seek distance
• Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer
Operating System Concepts Essentials – 2nd Edition, 9.17, Silberschatz, Galvin and Gagne ©2013




Overview of Mass Storage Structure
• Magnetic disks provide the bulk of secondary storage of modern computers
 – Drives rotate at 60 to 200 times per second
 – Transfer rate is the rate at which data flow between drive and computer
 – Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency)
 – Head crash results from the disk head making contact with the disk surface: that's bad
• Disks can be removable
• Drive attached to computer via I/O bus
 – Busses vary, including EIDE, ATA, SATA, USB, Fibre Channel, SCSI
 – Host controller in computer uses bus to talk to disk controller built into drive or storage array
Operating System Concepts with Java – 8th Edition, 12.4, Silberschatz, Galvin and Gagne ©2009




Overview of Mass Storage Structure
• Magnetic disks provide the bulk of secondary storage of modern computers
 – Drives rotate at 60 to 250 times per second
 – Transfer rate is the rate at which data flow between drive and computer
 – Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency)
 – Head crash results from the disk head making contact with the disk surface: that's bad
• Disks can be removable
• Drive attached to computer via I/O bus
 – Busses vary, including EIDE, ATA, SATA, USB, Fibre Channel, SCSI, SAS, Firewire
 – Host controller in computer uses bus to talk to disk controller built into drive or storage array
Operating System Concepts Essentials – 8th Edition, 11.4, Silberschatz, Galvin and Gagne ©2011




Overview of Mass Storage Structure
• Magnetic disks provide the bulk of secondary storage of modern computers
 – Drives rotate at 60 to 250 times per second
 – Transfer rate is the rate at which data flow between drive and computer
 – Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency)
 – Head crash results from the disk head making contact with the disk surface: that's bad
• Disks can be removable
• Drive attached to computer via I/O bus
 – Busses vary, including EIDE, ATA, SATA, USB, Fibre Channel, SCSI, SAS, Firewire
 – Host controller in computer uses bus to talk to disk controller built into drive or storage array
Operating System Concepts Essentials – 2nd Edition, 9.4, Silberschatz, Galvin and Gagne ©2013




“Software Defined Networking” approach to open it
[Figure: services such as a load balancer (LB), IP routing, and a firewall (FW) run on a Network Operating System, which controls many boxes of specialized packet forwarding hardware, in contrast to each device bundling its own services and operating system.]




HDFS (Hadoop Distributed File System) is a distributed file system for commodity hardware. Its differences from other distributed file systems are few but significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is part of Apache Hadoop Core: http://hadoop.apache.org/core/
2.1. Hardware Failure: Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are many components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.
2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not general-purpose applications that typically run on general-purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.
2.3. Large Data Sets: Applications on HDFS have large data sets, typically gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It provides high aggregate data bandwidth and scales to hundreds of nodes in a single cluster. It supports ~10 million files in a single instance.
2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed.
This assumption simplifies data coherency issues and enables high-throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending writes to files in the future [write once, read many at the file level].
2.5. “Moving Computation is Cheaper than Moving Data”: A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. Doing so minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located than to move the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.
2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.
3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients.
The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode. The NameNode and DataNode are pieces of software designed to run on commodity machines, typically running a GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely the case. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.
4. The File System Namespace: HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features. The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.
5. Data Replication: HDFS is designed to reliably store very large files across machines in a large cluster.
It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
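The bookkeeping described above (splitting a file into fixed-size blocks and mapping each block to a set of DataNodes according to a replication factor) can be sketched as a toy model. The class, method names, and placement policy below are invented for illustration and are not the real HDFS API; real HDFS uses rack-aware replica placement.

```python
# Toy model of the NameNode's block-to-DataNode mapping (illustrative only).
import itertools

class ToyNameNode:
    def __init__(self, datanodes, block_size=128 * 1024 * 1024, replication=3):
        self.datanodes = list(datanodes)
        self.block_size = block_size
        self.replication = replication
        self.block_map = {}                  # (filename, block_index) -> [datanode, ...]
        self._rr = itertools.cycle(self.datanodes)  # naive round-robin placement

    def create_file(self, name, size_bytes):
        """Split the file into blocks; assign each block `replication` DataNodes."""
        n_blocks = -(-size_bytes // self.block_size)   # ceiling division
        for i in range(n_blocks):
            self.block_map[(name, i)] = [next(self._rr)
                                         for _ in range(self.replication)]
        return n_blocks

nn = ToyNameNode(["dn1", "dn2", "dn3", "dn4"], replication=3)
blocks = nn.create_file("logs.txt", 300 * 1024 * 1024)  # 300 MB file
print(blocks)   # 3 blocks: two full 128 MB blocks plus a smaller final block
```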




Disk Access Time
Average time to access a specific sector approximated by:
 Taccess = Tavg seek + Tavg rotation + Tavg transfer
Seek time (Tavg seek)
 • Time to position heads over cylinder containing target sector
 • Typical Tavg seek = 3-5 ms
Rotational latency (Tavg rotation)
 • Time waiting for first bit of target sector to pass under r/w head
 • Tavg rotation = 1/2 × 1/RPM × 60 sec/1 min
 • e.g., 3 ms for a 10,000 RPM disk
Transfer time (Tavg transfer)
 • Time to read the bits in the target sector
 • Tavg transfer = 1/RPM × 1/(avg # sectors/track) × 60 sec/1 min
 • e.g., 0.006 ms for a 10,000 RPM disk with 1,000 sectors/track
 • given 512-byte sectors, ~85 MB/s data transfer rate
15-213, F'08
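The slide's worked example for a 10,000 RPM disk can be reproduced directly from the formulas above:

```python
# Reproducing the slide's example: 10,000 RPM disk, 1,000 sectors/track,
# 512-byte sectors.
RPM = 10_000
sectors_per_track = 1_000
sector_bytes = 512

# Half a rotation, converted from minutes to seconds.
t_avg_rotation_s = 0.5 * (1 / RPM) * 60
# Time for one sector to pass under the head.
t_avg_transfer_s = (1 / RPM) * (1 / sectors_per_track) * 60

print(t_avg_rotation_s * 1000)               # 3.0 (ms)
print(t_avg_transfer_s * 1000)               # 0.006 (ms)
print(sector_bytes / t_avg_transfer_s / 1e6) # ~85.3 (MB/s)
```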




Current Internet: Closed to Innovations in the Infrastructure
[Figure: many closed devices, each bundling its own services, operating system, and specialized packet forwarding hardware.]




Rotational Latency Time
The rotational latency is the time required for the appropriate sector to rotate to the position of the I/O head. It depends only on the speed at which the spindle rotates and the angle through which the track must rotate to reach the I/O head. Given the following:
 R = the rotational speed of the spindle (in rotations per second)
 θ = the number of radians through which the track must rotate
then the rotational latency (in milliseconds) is:
 Latency(θ) = (θ / 2π) × (1000 / R)
On average, the latency time will be the time required for the platter to complete 1/2 of a full rotation.
Computer Science Dept Va Tech, January 2004, Data Structures & File Management, ©2000-2004 McQuain WD
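The latency formula above can be expressed directly in code (θ in radians, R in rotations per second, result in milliseconds). The 7,200 RPM drive used as input is an illustrative value, not from the slide:

```python
import math

def rotational_latency_ms(theta_rad: float, r_rps: float) -> float:
    """Latency(theta) = (theta / 2*pi) * (1000 / R) milliseconds."""
    return (theta_rad / (2 * math.pi)) * (1000.0 / r_rps)

# Average case: half a rotation. A 7,200 RPM disk spins at R = 120 rot/sec.
print(rotational_latency_ms(math.pi, 120))      # ~4.17 ms
# Worst case: a full rotation.
print(rotational_latency_ms(2 * math.pi, 120))  # ~8.33 ms
```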




Magnetic Disk Characteristics
[Diagram: platter, track, sector, cylinder, head]
° Cylinder: all the tracks under the heads at a given point on all surfaces
° Read/write data is a three-stage process:
 • Seek time: position the arm over the proper track
 • Rotational latency: wait for the desired sector to rotate under the read/write head
 • Transfer time: transfer a block of bits (sector) under the read/write head
° Average seek time as reported by the industry:
 • Typically in the range of 12 ms to 20 ms
 • (Sum of the time for all possible seeks) / (total # of possible seeks)
° Due to locality of disk reference, actual average seek time may:
 • Only be 25% to 33% of the advertised number
CPE 442 io.15 Introduction To Computer Architecture
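The industry-average formula on the slide can be illustrated by averaging seek distance (to which seek time is roughly proportional) over all possible cylinder pairs. For N cylinders this average comes out near N/3, which helps explain why locality-friendly workloads see only a fraction of the advertised figure. The cylinder count below is a hypothetical example.

```python
# Average seek distance over every (start, target) cylinder pair, i.e. the
# slide's (sum of all possible seeks) / (total # of possible seeks), applied
# to distances instead of times.
def average_seek_distance(n_cylinders: int) -> float:
    total = sum(abs(i - j)
                for i in range(n_cylinders)
                for j in range(n_cylinders))
    return total / (n_cylinders * n_cylinders)

print(average_seek_distance(100))   # 33.33 cylinders: about N/3
```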




SIMPLE NETWORK MANAGEMENT PROTOCOL
SNMP uses ASN.1 to format communications between managers and agents. This particular SNMP message is a request for the sysDescr data item, decoded byte by byte:
 30 29: Type 48 (Sequence), length 41 bytes
 02 01 00: Type 2 (Integer), length 1 byte, Version 0
 04 06 70 75 62 6C 69 63: Type 4 (String), length 6 bytes, Value "public"
 A0 1C: get-request, length 28 bytes
 02 04 05 69 AE 56: Type 2 (Integer), length 4 bytes, Request ID
 02 01 00: Type 2 (Integer), length 1 byte, Status
 02 01 00: Type 2 (Integer), length 1 byte, Error Index
 30 0E: Type 48 (Sequence), length 14 bytes
 30 0C: Type 48 (Sequence), length 12 bytes
 06 08 2B 06 01 02 01 01 01 00: Object ID, length 8 bytes, data item sysDescr (numeric object identifier 43 . 6 . 1 . 2 . 1 . 1 . 1 . 0)
 05 00: Null, length 0 bytes
CS Chapter 9 Page 6
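A minimal sketch of the type-length-value parsing involved, applied to SNMP get-request bytes like those shown on the slide. It handles short-form BER lengths only, and the message constant is a reconstruction for illustration:

```python
# Parse BER type-length-value triples from an SNMP get-request.
MSG = bytes.fromhex(
    "3029"                   # sequence, 41 bytes
    "020100"                 # version: integer 0
    "04067075626c6963"       # community string "public"
    "a01c"                   # get-request PDU, 28 bytes
    "02040569ae56"           # request ID
    "020100" "020100"        # error status, error index
    "300e" "300c"            # varbind list, varbind
    "06082b06010201010100"   # OID for sysDescr (1.3.6.1.2.1.1.1.0)
    "0500"                   # null value
)

def parse_tlv(data: bytes, offset: int = 0):
    """Return (type, value_bytes, next_offset) for one short-form TLV."""
    t = data[offset]
    length = data[offset + 1]
    value = data[offset + 2: offset + 2 + length]
    return t, value, offset + 2 + length

t, body, _ = parse_tlv(MSG)           # outer sequence
assert t == 0x30 and len(body) == 41
t, version, off = parse_tlv(body)     # version field
t, community, off = parse_tlv(body, off)
print(community.decode())             # public
```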




The New Stack – Process Is the Next Platform
[Figure: three eras of the stack. Mainframe: a custom application with its own data management on one OS and database, reached by punch card or terminal (10's of users). Client Server: separate applications, each on its own OS and database, reached by PC or the Internet (1000's of users). SOA: composites assembled from many applications, operating systems, and databases, driven by request forms and reached from any device (millions of users).]
Magal and Word! Essentials of Business Processes and Information Systems | © 2009




Storage Definitions and Notation Review
The basic unit of computer storage is the bit. A bit can contain one of two values, 0 and 1. All other storage in a computer is based on collections of bits. Given enough bits, it is amazing how many things a computer can represent: numbers, letters, images, movies, sounds, documents, and programs, to name a few. A byte is 8 bits, and on most computers it is the smallest convenient chunk of storage. For example, most computers don't have an instruction to move a bit but do have one to move a byte. A less common term is word, which is a given computer architecture's native unit of data. A word is made up of one or more bytes. For example, a computer that has 64-bit registers and 64-bit memory addressing typically has 64-bit (8-byte) words. A computer executes many operations in its native word size rather than a byte at a time.
Computer storage, along with most computer throughput, is generally measured and manipulated in bytes and collections of bytes:
 • A kilobyte, or KB, is 1,024 bytes
 • A megabyte, or MB, is 1,024² bytes
 • A gigabyte, or GB, is 1,024³ bytes
 • A terabyte, or TB, is 1,024⁴ bytes
 • A petabyte, or PB, is 1,024⁵ bytes
Computer manufacturers often round off these numbers and say that a megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking measurements are an exception to this general rule; they are given in bits (because networks move data a bit at a time).
Operating System Concepts – 9th Edition, 1.17, Silberschatz, Galvin and Gagne ©2013
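The powers-of-1024 definitions above, and the bits-versus-bytes distinction for networking, check out in a few lines of code:

```python
# Powers-of-1024 storage units as defined above.
KB = 1024
MB = 1024 ** 2
GB = 1024 ** 3
TB = 1024 ** 4
PB = 1024 ** 5

print(MB)            # 1048576
# The gap behind the manufacturers' "a gigabyte is 1 billion bytes" rounding:
print(GB - 10**9)    # 73741824 bytes
# Networking counts bits, so a 1 gigabit/s link moves 10**9 / 8 bytes/s:
print(10**9 // 8)    # 125000000 bytes per second
```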