Application I/O Interface

• I/O system calls encapsulate device behaviors in generic classes
• Device-driver layer hides differences among I/O controllers from the kernel
• New devices talking already-implemented protocols need no extra work
• Each OS has its own I/O subsystem structures and device driver frameworks
• Devices vary in many dimensions:
  - Character-stream or block
  - Sequential or random-access
  - Synchronous or asynchronous (or both)
  - Sharable or dedicated
  - Speed of operation
  - Read-write, read-only, or write-only

Operating System Concepts – 9th Edition, Silberschatz, Galvin and Gagne ©2013












Graph kernel: Kernel methods are popular in data mining and have broad applications. Informally, a kernel function can be thought of as defining a positive definite matrix that measures the similarity between each pair of input data points. In the current study, a graph kernel method, namely the shortest-path kernel developed by Borgwardt and Kriegel, is used to compute the similarities between graphs.

The first step of the shortest-path kernel is to transform the original graphs into shortest-path graphs. A shortest-path graph has the same nodes as its original graph, and between each pair of nodes there is an edge labeled with the shortest distance between the two nodes in the original graph. In the current study, this edge label is referred to as the weight of the edge. The transformation can be done with any algorithm that solves the all-pairs shortest paths problem; here, the Floyd-Warshall algorithm was used.

Let G1 and G2 be two original graphs. They are transformed into shortest-path graphs S1(V1, E1) and S2(V2, E2), where V1 and V2 are the sets of nodes in S1 and S2, and E1 and E2 are the sets of edges in S1 and S2. A kernel function is then used to calculate the similarity between G1 and G2 by comparing all pairs of edges between S1 and S2:

K(G1, G2) = \sum_{e1 \in E1} \sum_{e2 \in E2} k_edge(e1, e2)

where k_edge( ) is a kernel function for comparing two edges (including the node labels and the edge weight). Let e1 be the edge between nodes v1 and w1, and e2 be the edge between nodes v2 and w2. Then k_edge(e1, e2) is computed from two simpler kernels: k_node( ), which compares the labels of two nodes, and k_weight( ), which compares the weights of two edges. These two functions are defined as in Borgwardt et al. (2005), where labels(v) returns the vector of attributes associated with node v and weight(e) returns the weight of edge e. Note that k_node( ) is a Gaussian kernel function; its width parameter was set to 72 by trying values between 32 and 128 in increments of 2. k_weight( ) is a Brownian bridge kernel that assigns the highest value to edges that are identical in length; its constant c was set to 2, as in Borgwardt et al. (2005).

Classification and cross-validation: When the shortest-path graph kernel is used to compute similarities between graphs, the results are affected by the sizes of the graphs. Consider the case in which graph G is compared with graphs Gx and Gy separately using the graph kernel. If Gx has more nodes than Gy, then |Ex| > |Ey|, where Ex and Ey are the sets of edges in the shortest-path graphs of Gx and Gy. The summation in K(G, Gx) therefore includes more terms than the summation in K(G, Gy), and each term (i.e., k_edge( )) inside the summation is non-negative. The consequence is that K(G, Gx) > K(G, Gy) does not necessarily indicate that Gx is more similar to G than Gy is; instead, it may be an artifact of Gx having more nodes than Gy. To overcome this problem, a voting strategy is developed for predicting whether a graph (or a patch) is an interface patch:

Algorithm Voting_Strategy(G)
Input: graph G
Output: G is an interface patch or a non-interface patch
Let T be the set of proteins in the training set
Let v be the number of votes given to "G is an interface patch"
v = 0
While (T is not empty) {
    Take one protein (P) out of T
    Let Gint and Gnon-int be the interface and non-interface patches from P
    If K(G, Gint) > K(G, Gnon-int), then increase v by 1
}
If v > |T| / 2, then G is an interface patch
Else G is a non-interface patch

Using this strategy, when K(G, Gint) is compared with K(G, Gnon-int), Gint and Gnon-int are guaranteed to have an identical number of nodes, since they are the interface and non-interface patches extracted from the same protein (see section 2.4 for details). Each time K(G, Gint) > K(G, Gnon-int) is true, one vote is given to "G is an interface patch". In the end, G is predicted to be an interface patch if "G is an interface patch" receives more than half of the total votes, i.e., v > |T| / 2. Leave-one-out cross-validation was performed at the protein level. In one round of the experiment, the interface patch and non-interface patch of a
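The following is a minimal Java sketch of how the pieces above fit together. It is illustrative only: the class and field names are invented, the edge kernel is written in the product form commonly attributed to Borgwardt et al. (2005), and the exact placement of the Gaussian width parameter (72) is an assumption.

import java.util.List;

public class ShortestPathKernelSketch {

    static final double SIGMA = 72.0; // Gaussian width from the text (its exact use below is an assumption)
    static final double C = 2.0;      // Brownian bridge constant, as in the text

    // Floyd-Warshall: adjacency matrix (Double.POSITIVE_INFINITY = no edge) -> all-pairs shortest distances.
    static double[][] floydWarshall(double[][] adj) {
        int n = adj.length;
        double[][] d = new double[n][];
        for (int i = 0; i < n; i++) d[i] = adj[i].clone();
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (d[i][k] + d[k][j] < d[i][j]) d[i][j] = d[i][k] + d[k][j];
        return d;
    }

    // k_node: Gaussian kernel on node attribute vectors.
    static double kNode(double[] a, double[] b) {
        double sq = 0;
        for (int i = 0; i < a.length; i++) { double t = a[i] - b[i]; sq += t * t; }
        return Math.exp(-sq / SIGMA);
    }

    // k_weight: Brownian bridge kernel, largest when the two edge lengths are identical.
    static double kWeight(double w1, double w2) {
        return Math.max(0.0, C - Math.abs(w1 - w2));
    }

    // k_edge: compare edge (v1,w1) of one shortest-path graph with edge (v2,w2) of the other
    // (only one of the two possible endpoint matchings is shown, for brevity).
    static double kEdge(double[] v1, double[] w1, double d1, double[] v2, double[] w2, double d2) {
        return kNode(v1, v2) * kWeight(d1, d2) * kNode(w1, w2);
    }

    // K(G1,G2): sum k_edge over all pairs of edges of the two (complete) shortest-path graphs.
    static double graphKernel(double[][] labels1, double[][] sp1, double[][] labels2, double[][] sp2) {
        double sum = 0;
        for (int i = 0; i < sp1.length; i++)
            for (int j = i + 1; j < sp1.length; j++)
                for (int p = 0; p < sp2.length; p++)
                    for (int q = p + 1; q < sp2.length; q++)
                        sum += kEdge(labels1[i], labels1[j], sp1[i][j], labels2[p], labels2[q], sp2[p][q]);
        return sum;
    }

    // Voting strategy: one vote per training protein; a majority predicts "interface patch".
    static boolean isInterfacePatch(Patch g, List<TrainingProtein> trainingSet) {
        int votes = 0;
        for (TrainingProtein p : trainingSet)
            if (graphKernel(g.labels, g.shortestPaths, p.interfacePatch.labels, p.interfacePatch.shortestPaths)
                    > graphKernel(g.labels, g.shortestPaths, p.nonInterfacePatch.labels, p.nonInterfacePatch.shortestPaths))
                votes++;
        return votes > trainingSet.size() / 2.0;
    }

    // Illustrative containers: node attribute vectors plus the all-pairs shortest-distance matrix.
    static class Patch { double[][] labels; double[][] shortestPaths; }
    static class TrainingProtein { Patch interfacePatch; Patch nonInterfacePatch; }
}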




Block Execution: Detail

[Figure: observing blocks, each wrapped in a preamble and a "post-amble", move from the input queue through execution; as blocks complete ("ready", "ok"), their results are written to Measurement Sets and delivered to the archive.]

EVLA Data Processing PDR, July 18-19, 2002 -- Boyd Waters, slide 13




LED Driver Addressing

[Figure: twenty LED drivers. The MODE, SIN, SCLK, BLANK, and GSCLK lines are common to the MCU; a 5:32 decoder, driven by MCU address lines A4-A0, selects 20 addresses (0-19) routed to the drivers' XLAT pins.]




HDFS (Hadoop Distributed File System) is a distributed file system for commodity hardware. Its differences from other distributed file systems are few but significant: HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware; it provides high-throughput access to application data and is suitable for applications that have large data sets; and it relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is part of Apache Hadoop Core (http://hadoop.apache.org/core/).

2.1. Hardware Failure: Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. With this many components, each with a non-trivial probability of failure, some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not the general-purpose applications that typically run on general-purpose file systems. HDFS is designed more for batch processing than for interactive use; the emphasis is on high throughput of data access rather than low latency. POSIX imposes many hard requirements that are not needed for applications targeted at HDFS, and POSIX semantics in a few key areas have been traded away to increase data throughput rates.

2.3. Large Data Sets: Applications on HDFS have large data sets, typically gigabytes to terabytes in size. HDFS is therefore tuned to support large files. It provides high aggregate data bandwidth, scales to hundreds of nodes in a single cluster, and supports on the order of ten million files in a single instance.

2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A MapReduce application or a web-crawler application fits this model perfectly. There is a plan to support appending writes to files in the future (write-once-read-many at the file level).

2.5. "Moving Computation is Cheaper than Moving Data": A computation requested by an application is much more efficient if it is executed near the data it operates on, especially when the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located than to move the data to where the application is running; HDFS provides interfaces for applications to move themselves closer to the data.

2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks, and these blocks are stored in a set of DataNodes.

The NameNode executes file system namespace operations such as opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients; they also perform block creation, deletion, and replication upon instruction from the NameNode. The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run the GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Use of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software; each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely the case. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata, and the system is designed so that user data never flows through the NameNode.

4. The File System Namespace: HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems: one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions, and it does not support hard links or soft links; however, the HDFS architecture does not preclude implementing these features. The NameNode maintains the file system namespace, and any change to the namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS; the number of copies of a file is called the replication factor of that file, and this information is stored by the NameNode.

5. Data Replication: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file; the replication factor can be specified at file creation time and changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly; a Blockreport contains a list of all blocks on a DataNode.
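As a concrete illustration of the write-once-read-many model described above, the sketch below uses the standard Hadoop FileSystem Java API; the NameNode address and the file path are made up for the example.

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteOnceReadMany {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative NameNode address; real deployments normally pick this up from core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/data/crawl/part-00000");

        // Write once: the client streams block data directly to DataNodes;
        // the NameNode handles only the namespace and block-placement metadata.
        try (FSDataOutputStream out = fs.create(path)) {
            out.write("one record per line\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read many: once closed, the file is immutable and can be streamed by any number of readers.
        try (FSDataInputStream in = fs.open(path)) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                System.out.write(buf, 0, n);
            }
        }
        System.out.flush();
    }
}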




Architecture

Our overall architectural strategy is based on our technical principles. It addresses protection at various levels within the same system, from the application layer down to the OS, Virtual Machine Monitor (VMM), and hardware. It is our contention that single-layer solutions typically improve one layer by pushing performance and security limitations to other layers, with questionable overall results. Key aspects of our strategy are the following:

• Layered ATI Architecture. Conceptually, we organize the (untrusted) system software into layers, and for each layer we propose the separation of a small, verifiable abstract trusted interface (ATI) from the functionality-rich remaining untrusted software components at that layer. This multi-layer separation can be seen as a generalization of the MILS (multiple independent levels of security) model, where the implementation of the ATI at each layer is analogous to the single-layer MILS separation kernel. We prefix a layer's ATI with a two-letter acronym: HW-ATI for the hardware layer, VM-ATI for the virtual machine and hypervisor layer, OS-ATI for the operating system layer, MW-ATI for the middleware layer, DB-ATI for the database management system layer, and AP-ATI if an application is exporting a trusted application programmer interface. This is an application of the Isolation and Independence Principles.

• Natural and Artificial Diversity. The ATI architecture does not mandate a uniquely defined ATI at each layer. There may be multiple, overlapping ATIs at any layer. Furthermore, concrete diverse implementations may exist for the same or different ATIs (e.g., Intel TXT and AMD SVM at the hardware layer). One of the main design goals of the ATI architecture is the active incorporation of diverse designs and implementations at each layer. For diverse implementations, we will use artificial diversity tools (e.g., compiler tools for creating different program representations as discussed in [31]). For diverse designs, we will rely on the natural diversity of different system components (e.g., variants of Unix and possibly Windows at the OS layer). This natural diversity offers protection beyond the implementation-level protection of artificial diversity tools.

[Figure 2: The Abstract Trusted Interface (ATI) Architecture. Each layer pairs a trusted interface with untrusted components: HW-ATI (Intel TXT, AMD SVM, secure co-processors) / HW-Untrusted; VM-ATI (Xen, VMware) / VMM-Untrusted; OS-ATI (seL4 microkernel) / OS-Untrusted; MW-ATI (secure comm.) / Middleware-Untrusted; DB-ATI (co-DBMS) / DBMS-Untrusted; AP-ATI / Application-Untrusted.]

• Composition of ATIs. The main constraint on the implementation of an ATI is that it may only use the facilities provided by ATIs of lower layers. This is indicated by the downward arrows on the right side of Figure 2. The advantage of this restriction is that, by utilizing only trusted components, the implementation of an ATI at a higher layer remains trusted. The requirement is that the ATIs at each layer must have specific functionality, such that the resulting ATI implementation can be formally verified. This follows from the Isolation Principle. The ATI architecture is flexible and dynamically reconfigurable. As new software or hardware components are verified to be secure at each layer, that layer's ATI may be expanded by incorporating the new components. Conversely, if a new vulnerability is discovered in some implementation of an ATI, the affected components (and the higher-level components that use that implementation) should be excluded from the trusted side. Until the problem is fixed, the system falls back to reduced ATI functionality that remains trusted. In real-time computing, when a precise computation is at risk of missing a deadline, alternative modules implemented with reduced computational precision, in an approach called Imprecise Computation [18], can still complete a simplified task within the time constraint. Analogously, we will explore alternative designs that use different underlying ATIs for a given critical functionality (perhaps more limited than the original one). When a full-functionality application is affected by attacks or newly discovered vulnerabilities, these alternative designs and implementations based on different ATIs provide secure critical functionality.
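The composition constraint can be made concrete with a toy example (not part of the proposal itself): a higher-layer ATI implementation whose only dependency is the ATI of the layer below, so the type structure enforces the downward-only arrows of Figure 2. All names here are invented.

/** Toy illustration of ATI composition: the OS-layer trusted interface is
 *  implemented strictly in terms of the hardware-layer trusted interface,
 *  never in terms of untrusted OS components. */
public class AtiLayeringSketch {

    // HW-ATI: e.g., a sealed-storage primitive exposed by trusted hardware.
    interface HwAti {
        byte[] sealToPlatform(byte[] secret);
    }

    // OS-ATI: a small, verifiable kernel service.
    interface OsAti {
        byte[] storeCredential(byte[] credential);
    }

    // The OS-ATI implementation sees only a lower-layer ATI.
    static final class VerifiedOsAti implements OsAti {
        private final HwAti hw;

        VerifiedOsAti(HwAti hw) {
            this.hw = hw;
        }

        @Override
        public byte[] storeCredential(byte[] credential) {
            // Remains trusted because it composes only trusted, lower-layer functionality.
            return hw.sealToPlatform(credential);
        }
    }

    // Untrusted OS components live elsewhere and are simply not reachable from
    // VerifiedOsAti: the downward-only dependency is enforced by construction.
}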




Producer Consumer Synchronized Circular Buffer (sample output; each line of the original trace also shows the contents of the five-cell buffer, with -1 marking an empty cell):

Produced 1 into cell 0   (write 1, read 0)
Produced 2 into cell 1   (write 2, read 0)
Consumed 1 from cell 0   (write 2, read 1)
Produced 3 into cell 2   (write 3, read 1)
Produced 4 into cell 3   (write 4, read 1)
Produced 5 into cell 4   (write 0, read 1)
Produced 6 into cell 0   (write 1, read 1)   BUFFER FULL -- waiting to produce 7
Consumed 2 from cell 1   (write 1, read 2)
Produced 7 into cell 1   (write 2, read 2)   BUFFER FULL -- waiting to produce 8
Consumed 3 from cell 2   (write 2, read 3)
Produced 8 into cell 2   (write 3, read 3)   BUFFER FULL -- waiting to produce 9
Consumed 4 from cell 3   (write 3, read 4)
Produced 9 into cell 3   (write 4, read 4)   BUFFER FULL -- waiting to produce 10
Consumed 5 from cell 4   (write 4, read 0)
Produced 10 into cell 4  (write 0, read 0)   BUFFER FULL
ProduceInteger finished producing values -- terminating ProduceInteger
Consumed 6 from cell 0   (write 0, read 1)
Consumed 7 from cell 1   (write 0, read 2)
Consumed 8 from cell 2   (write 0, read 3)
Consumed 9 from cell 3   (write 0, read 4)
Consumed 10 from cell 4  (write 0, read 0)   BUFFER EMPTY
ConsumeInteger retrieved values totaling: 55 -- terminating ConsumeInteger

Ref: http://userhome.brooklyn.cuny.edu/irudowdky/OperatingSystems.htm & Silberschatz, Gagne, & Galvin, Operating Systems Concepts, 7th ed, Wiley (ch 1-3)
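A trace like this can be produced by a buffer along the following lines. This is a minimal wait/notify sketch, not the code from the referenced course page, and it omits the per-operation printing shown above.

public class SynchronizedCircularBuffer {
    private final int[] buffer = new int[5];   // -1 marks an empty cell in the trace
    private int writeIndex = 0, readIndex = 0, occupied = 0;

    public synchronized void produce(int value) throws InterruptedException {
        while (occupied == buffer.length) {     // BUFFER FULL: wait for the consumer
            wait();
        }
        buffer[writeIndex] = value;
        writeIndex = (writeIndex + 1) % buffer.length;
        occupied++;
        notifyAll();                            // wake a waiting consumer
    }

    public synchronized int consume() throws InterruptedException {
        while (occupied == 0) {                 // BUFFER EMPTY: wait for the producer
            wait();
        }
        int value = buffer[readIndex];
        readIndex = (readIndex + 1) % buffer.length;
        occupied--;
        notifyAll();                            // wake a waiting producer
        return value;
    }

    public static void main(String[] args) {
        SynchronizedCircularBuffer buf = new SynchronizedCircularBuffer();
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 10; i++) buf.produce(i); } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                int total = 0;
                for (int i = 0; i < 10; i++) total += buf.consume();
                System.out.println("Retrieved values totaling: " + total);  // 55, as in the trace
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        consumer.start();
    }
}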




Transparent Scalability

• Hardware is free to assign blocks to any processor at any time
• A kernel scales across any number of parallel processors

[Figure: the same kernel grid of Blocks 0-7 runs on a two-SM device as four successive pairs of blocks and on a four-SM device as two successive groups of four; each block can execute in any order relative to the other blocks.]

CUDA Tools and Threads – Slide 69
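Since the point here is scheduling freedom rather than CUDA syntax, the following is a small Java analogy (not CUDA code): the same eight independent "blocks" are handed to a pool of two workers and then to a pool of four, and they complete in whatever order the runtime chooses.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TransparentScalabilityAnalogy {
    static void runKernelGrid(int numBlocks, int numProcessors) throws InterruptedException {
        ExecutorService device = Executors.newFixedThreadPool(numProcessors);
        for (int block = 0; block < numBlocks; block++) {
            final int b = block;
            // Each "block" is independent, so it may run on any worker, in any order.
            device.submit(() ->
                System.out.println("Block " + b + " ran on " + Thread.currentThread().getName()));
        }
        device.shutdown();
        device.awaitTermination(1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws InterruptedException {
        runKernelGrid(8, 2);   // the same eight blocks on a "2-SM device"...
        runKernelGrid(8, 4);   // ...and on a "4-SM device"; only throughput changes
    }
}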








Character Devices

• A device driver which does not offer random access to fixed blocks of data
• A character device driver must register a set of functions which implement the driver's various file I/O operations
• The kernel performs almost no preprocessing of a file read or write request to a character device, but simply passes the request on to the device
• The main exception to this rule is the special subset of character device drivers which implement terminal devices, for which the kernel maintains a standard interface

Operating System Concepts – 8th Edition, Silberschatz, Galvin and Gagne ©2009




Write-back State Machine-III

• State machine for CPU requests for each cache block and for bus requests for each cache block

[Figure: write-back snooping protocol with three cache-block states.
Invalid: a CPU read places a read miss on the bus and moves the block to Shared (read/only); a CPU write places a write miss on the bus and moves it to Exclusive (read/write).
Shared: CPU read hits stay in Shared; a CPU read miss places a read miss on the bus; a CPU write places a write miss on the bus and moves the block to Exclusive; a write miss observed on the bus for this block invalidates it.
Exclusive: CPU read hits and write hits stay in Exclusive; a CPU read miss writes the block back and places a read miss on the bus; a CPU write miss writes the block back and places a write miss on the bus; a read miss on the bus for this block forces a write-back (aborting the memory access) and moves the block to Shared; a write miss on the bus for this block forces a write-back (aborting the memory access) and invalidates it.]
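The transitions above can be expressed compactly in code. The sketch below models a single cache block in Java, with the bus reduced to a printout; the event names follow the slide, and everything else is illustrative.

public class WriteBackCoherenceSketch {

    enum State { INVALID, SHARED, EXCLUSIVE }

    private State state = State.INVALID;

    // ---- CPU-side requests -------------------------------------------------
    void cpuRead() {
        if (state == State.INVALID) {
            bus("place read miss");
            state = State.SHARED;
        }
        // SHARED or EXCLUSIVE: read hit, no bus traffic, no state change
    }

    void cpuWrite() {
        if (state != State.EXCLUSIVE) {
            bus("place write miss");        // obtain ownership of the block
            state = State.EXCLUSIVE;
        }
        // EXCLUSIVE: write hit
    }

    // ---- Requests observed on the bus for this block -----------------------
    void busReadMiss() {
        if (state == State.EXCLUSIVE) {
            bus("write back block, abort memory access");
            state = State.SHARED;           // another cache will read the block
        }
    }

    void busWriteMiss() {
        if (state == State.EXCLUSIVE) {
            bus("write back block, abort memory access");
        }
        state = State.INVALID;              // another cache takes ownership
    }

    private void bus(String action) {
        System.out.println(action + " (state was " + state + ")");
    }
}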




















A REAL LIFE EXAMPLE

• 200 application servers accessing a cluster of 4 DBs
• Driver upgrade more complex than database upgrade
• Online upgrades?

[Figure: Internet-facing application servers, each with its own DB driver, connecting to the four-database cluster.]
















High-level subsystems

[Figure: layered I/O system architecture. An application calls the I/O System Interface (open/close, read/write, get/put, send/receive, io_control). Below it, device-independent software provides block-oriented device management (used by the file system and virtual memory management), stream-oriented device management, and network communication software, exposed through a block device interface and a stream device interface. Device-dependent software consists of the individual drivers (hard disk driver, CD-ROM driver, keyboard driver, printer driver, network driver). Across the SW/HW interface, the hardware controls are the corresponding controllers (hard disk, CD-ROM, keyboard, printer, network interface), which operate the devices themselves (the low-level subsystems).]
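To make the layering concrete, here is a toy Java sketch (all names invented) of a device-independent block/stream interface with a device-dependent driver behind it.

public class IoLayeringSketch {

    // Block device interface exposed by the device-independent layer.
    interface BlockDevice {
        byte[] readBlock(long blockNumber);
        void writeBlock(long blockNumber, byte[] data);
    }

    // Stream device interface (get/put/io_control in the figure).
    interface StreamDevice {
        int get();
        void put(int b);
        void ioControl(int request, long arg);
    }

    // A device-dependent driver sits behind the device-independent interface.
    static final class HardDiskDriver implements BlockDevice {
        private static final int BLOCK_SIZE = 512;

        @Override public byte[] readBlock(long blockNumber) {
            // A real driver would program the hard disk controller; here we return zeros.
            return new byte[BLOCK_SIZE];
        }

        @Override public void writeBlock(long blockNumber, byte[] data) {
            // A real driver would issue the write to the controller.
        }
    }

    // The file system (block-oriented device management) sees only BlockDevice.
    static byte[] readFileBlock(BlockDevice dev, long blockNumber) {
        return dev.readBlock(blockNumber);
    }
}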








C:\UMBC\331\java> java envSnoop
-- listing properties --
java.specification.name=Java Platform API Specification
awt.toolkit=sun.awt.windows.WToolkit
java.version=1.2
java.awt.graphicsenv=sun.awt.Win32GraphicsEnvironment
user.timezone=America/New_York
java.specification.version=1.2
java.vm.vendor=Sun Microsystems Inc.
user.home=C:\WINDOWS
java.vm.specification.version=1.0
os.arch=x86
java.awt.fonts=
java.vendor.url=http://java.sun.com/
user.region=US
file.encoding.pkg=sun.io
java.home=C:\JDK1.2\JRE
java.class.path=C:\Program Files\PhotoDeluxe 2.0\Adob...
line.separator=
java.ext.dirs=C:\JDK1.2\JRE\lib\ext
java.io.tmpdir=C:\WINDOWS\TEMP\
os.name=Windows 95
java.vendor=Sun Microsystems Inc.
java.awt.printerjob=sun.awt.windows.WPrinterJob
java.library.path=C:\JDK1.2\BIN;.;C:\WINDOWS\SYSTEM;C:\...
java.vm.specification.vendor=Sun Microsystems Inc.
sun.io.unicode.encoding=UnicodeLittle
file.encoding=Cp1252
java.specification.vendor=Sun Microsystems Inc.
user.language=en
user.name=nicholas
java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport...
java.vm.name=Classic VM
java.class.version=46.0
java.vm.specification.name=Java Virtual Machine Specification
sun.boot.library.path=C:\JDK1.2\JRE\bin
os.version=4.10
java.vm.version=1.2
java.vm.info=build JDK-1.2-V, native threads, symcjit
java.compiler=symcjit
path.separator=;
file.separator=\
user.dir=C:\UMBC\331\java
sun.boot.class.path=C:\JDK1.2\JRE\lib\rt.jar;C:\JDK1.2\JR...
C:\UMBC\331\java>
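The envSnoop source is not shown in the listing above, but a minimal program that produces this kind of output is sketched below; Properties.list() prints the "-- listing properties --" header and truncates long values with "...", which matches the transcript.

import java.util.Properties;

public class EnvSnoop {
    public static void main(String[] args) {
        // Grab the JVM's system properties and dump them to standard output.
        Properties props = System.getProperties();
        props.list(System.out); // prints "-- listing properties --" followed by key=value pairs
    }
}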




Overview

• I/O management is a major component of operating system design and operation
  - Important aspect of computer operation
  - I/O devices vary greatly
  - Various methods to control them
  - Performance management
  - New types of devices appear frequently
• Ports, busses, and device controllers connect to various devices
• Device drivers encapsulate device details
  - Present a uniform device-access interface to the I/O subsystem

Operating System Concepts – 9th Edition, Silberschatz, Galvin and Gagne ©2013