Kernel I/O Subsystem
• Caching - faster memory or device holding a copy of data
  - Always just a copy
  - Key to performance
  - Sometimes combined with buffering
• Spooling - hold output for a device
  - Used when a device can serve only one request at a time, e.g., printing
• Device reservation - provides exclusive access to a device
  - System calls for allocation and de-allocation
  - Watch out for deadlock
(Silberschatz, Galvin and Gagne, Operating System Concepts, 9th Edition ©2013)




Additional OS I/O Subsystem Services
• Caching - fast memory access to recently accessed data
  - Always just a copy
  - Significant performance impact
• Spooling - hold output for a device
  - Used when a device can serve only one request at a time, e.g., printing
• Exclusive device reservation
  - System calls for allocation and de-allocation
  - Deadlocks are possible
• Scheduling - ordering the I/O requests in the per-device queue
  - Some OSs try fairness
• Buffering - store data in OS buffers during transfers (a minimal sketch follows below)
  - To cope with device speed or transfer-size mismatches
  - To maintain "copy semantics" with dirty buffers
• Fault processing - recovery through retry operations, error logs
• Miscellaneous - pipes, FIFOs, packet handling, streams, queues, mailboxes
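To make the buffering and copy-semantics bullets concrete, here is a minimal Java sketch (my illustration, not from the slides; all names are hypothetical): a write() call copies the caller's bytes into an internal queue so the caller may immediately reuse its buffer, while a separate drain loop plays the role of a device that serves one request at a time.

```java
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Hypothetical sketch of OS-style output buffering with copy semantics. */
public class BufferedDevice {
    private final BlockingQueue<byte[]> pending = new ArrayBlockingQueue<>(64);

    /** Copies the caller's data, so the caller may reuse buf immediately. */
    public void write(byte[] buf, int len) throws InterruptedException {
        pending.put(Arrays.copyOf(buf, len)); // copy semantics: snapshot now
    }

    /** Drain loop standing in for the device driver (one request at a time). */
    public void drainForever() throws InterruptedException {
        while (true) {
            byte[] block = pending.take();             // block until work arrives
            System.out.write(block, 0, block.length);  // "device" = stdout here
            System.out.flush();
        }
    }
}
```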




DBs and DWs are merging as in-memory DBs: SAP® In-Memory Computing Enabling Real-Time Computing

SAP® In-Memory Computing enables real-time computing by bringing together online transaction processing, OLTP (the database), and online analytical processing, OLAP (the data warehouse). Combining advances in hardware technology with SAP In-Memory Computing empowers business – from shop floor to boardroom – by giving real-time business processes instantaneous access to data, eliminating today's information lag for your business.

In-memory computing is already under way. The question isn't if this revolution will impact businesses but when and how. In-memory computing won't be introduced because a company can afford the technology; it will be because a business cannot afford to allow its competitors to adopt it first. Here is a sample of what in-memory computing can do:
• Enable mixed workloads of analytics, operations, and performance management in a single software landscape.
• Support smarter business decisions by providing increased visibility of very large volumes of business information.
• Enable users to react to business events more quickly through real-time analysis and reporting of operational data.
• Deliver innovative real-time analysis and reporting.
• Streamline the IT landscape and reduce total cost of ownership.

Product managers will still look at inventory and point-of-sale data, but in the future they will also be told, for example, when customers broadcast dissatisfaction with a product over Twitter, or be alerted to a negative product review released online that highlights unpleasant product features requiring immediate action. From the other side, small businesses running real-time inventory reports will be able to announce to their Facebook and Twitter communities that a high-demand product is available, how to order, and where to pick it up.

Bad movies have been able to enjoy a great opening weekend before crashing in the second weekend, when negative word-of-mouth feedback cools enthusiasm. That week-long grace period is about to disappear for silver-screen flops. Consumer feedback won't take a week, a day, or an hour: the very second showing of a movie could suffer a noticeable falloff in attendance due to consumer criticism piped instantaneously through the new technologies. It will no longer be good enough to have weekend numbers ready for executives on Monday morning; executives will run their own reports on revenue, tweet their reviews, and by Monday morning have acted on their decisions.

The final example is from the utilities industry. The most expensive energy a utility provides is energy to meet unexpected demand during peak periods of consumption. If the company could analyze trends in power consumption based on real-time meter reads, it could offer customers – in real time – extra-low rates for the week or month if they reduce their consumption during the following few hours. This advantage will become much more dramatic when we switch to electric cars; predictably, those cars are recharged the minute the owners return home from work.

In manufacturing enterprises, in-memory computing technology will connect the shop floor to the boardroom, and the shop-floor associate will have instant access to the same data as the board [[shop floor = daily transaction processing; boardroom = executive data mining]]. The shop floor will then see the results of their actions reflected immediately in the relevant Key Performance Indicators (KPIs).

Hardware: blade servers and multicore CPUs, with memory capacities measured in terabytes.
Software: an in-memory database with highly compressible row/column storage designed to maximize in-memory computing technology. SAP BusinessObjects Event Insight software is key: in what used to be called exception reporting, the software deals with huge amounts of real-time data to determine immediate and appropriate action for a real-time situation. [[Both row and column storage! They convert to column-wise storage only for long-lived, high-value data?]]

Parallel processing takes place in the database layer rather than in the application layer, as it does in the client-server architecture. Total cost is 30% lower than traditional RDBMSs due to:
• Leaner hardware and less system capacity required, as mixed workloads of analytics, operations, and performance management run in a single system, which also reduces redundant data storage. [[Back to a single DB rather than a DB for transaction processing and a DW for boardroom decision support.]]
• Less extract-transform-load (ETL) between systems and fewer prebuilt reports, reducing the support required to run the software.

Report runtime improvements of up to 1,000 times and compression rates of up to 10 times are reported, with performance improvements expected to be even higher in SAP apps natively developed for in-memory DBs. Initial results: a reduction of computing time from hours to seconds. However, in-memory computing will not eliminate the need for data warehousing. Real-time reporting will solve old challenges and create new opportunities, but new challenges will arise.

SAP HANA 1.0 software supports real-time database access to data from the SAP apps that support OLTP. Formerly, operational reporting functionality was transferred from OLTP applications to a data warehouse; with in-memory computing technology, this functionality is integrated back into the transaction system. Adopting in-memory computing results in an uncluttered architecture based on a few tightly aligned core systems, enabled by service-oriented architecture (SOA) to provide harmonized, valid metadata and master data across business processes. Some of the most salient shifts and trends in future enterprise architectures will be:
• A shift to BI self-service apps like data exploration, instead of static report solutions.
• Central metadata and master-data repositories that define the data architecture, allowing data stewards to work across all business units and all platforms.

Real-time in-memory computing technology will cause a decline in Structured Query Language (SQL) satellite databases. The purpose of those databases as flexible, ad hoc, more business-oriented, less IT-static tools might still be required, but their offline status will be a disadvantage and will delay data updates. Some might argue that satellite systems with in-memory computing technology will take over from satellite SQL DBs. SAP Business Explorer tools that use in-memory computing technology represent a paradigm shift: instead of waiting for IT to work on a long queue of support tickets to create new reports, business users can explore large data sets and define reports on the fly.
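The claim that column storage is highly compressible can be made concrete with a small sketch (mine, not SAP's): a column often holds long runs of identical values, so run-length encoding shrinks it dramatically.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy run-length encoder illustrating why column stores compress well. */
public class RunLengthColumn {
    record Run(String value, int count) {}

    static List<Run> encode(List<String> column) {
        List<Run> runs = new ArrayList<>();
        for (String v : column) {
            int last = runs.size() - 1;
            if (last >= 0 && runs.get(last).value().equals(v)) {
                runs.set(last, new Run(v, runs.get(last).count() + 1));
            } else {
                runs.add(new Run(v, 1));
            }
        }
        return runs;
    }

    public static void main(String[] args) {
        // A "region" column with heavy repetition compresses to three runs.
        List<String> col = List.of("EU", "EU", "EU", "US", "US", "APJ");
        System.out.println(encode(col)); // [Run[value=EU, count=3], ...]
    }
}
```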




Cache
• Pronounced "cash": a special high-speed storage mechanism.
• Two types of caching are commonly used in PCs: memory caching and disk caching. A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory.
• Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.
• Some memory caches are built into the architecture of microprocessors; Intel Core 2 processors have 2-4 MB caches. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory (often also located on the CPU die), called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM, but they are much larger. Some CPUs even have Level 3 (L3) caches. (Note that most latest-generation CPUs build L2 caches on the CPU die, running at the same clock rate as the CPU.)
• Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk. Today's hard drives often have 8-16 MB memory caches.
• When data is found in the cache, it is called a cache hit (versus a cache miss), and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science; a small sketch of one classic strategy follows.
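One classic replacement strategy is least-recently-used (LRU). Here is a minimal Java sketch using only the standard library (the capacity and key/value types are illustrative, not from the slide):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal LRU cache: evicts the least-recently-accessed entry when full. */
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: get() refreshes recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict on overflow
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // touch 1, so 2 is now least recently used
        cache.put(3, "c"); // evicts key 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```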




HDFS (Hadoop Distributed File System) is a distributed file system for commodity hardware. Its differences from other distributed file systems are few but significant: HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is part of Apache Hadoop Core (http://hadoop.apache.org/core/).

2.1. Hardware Failure: Hardware failure is the norm. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are many components and each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not general-purpose applications that typically run on general-purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements not needed for applications that are targeted for HDFS; POSIX semantics in a few key areas has been traded to increase data throughput rates.

2.3. Large Data Sets: Applications on HDFS have large data sets, typically gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It provides high aggregate data bandwidth and scales to hundreds of nodes in a single cluster. It supports ~10 million files in a single instance.

2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A MapReduce application or a web crawler application fits perfectly with this model. There is a plan to support appending writes to files in the future. [write once, read many, at the file level]

2.5. "Moving Computation is Cheaper than Moving Data": A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge: it minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located than to move the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks stored in a set of DataNodes.
The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.

The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run the GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software; each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely the case. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.

4. The File System Namespace: HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems: one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions. HDFS does not support hard links or soft links; however, the HDFS architecture does not preclude implementing these features. The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.

5. Data Replication: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file; the replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
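To make the write-once-read-many model and per-file replication factor concrete, here is a hedged sketch of client code against the standard Hadoop Java API (the path and replication factor are illustrative; details vary by Hadoop version):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads core-site.xml etc.
        FileSystem fs = FileSystem.get(conf);     // NameNode located via config
        Path p = new Path("/data/example.txt");   // hypothetical path

        // Write once: create the file, write, close. One writer at a time.
        try (FSDataOutputStream out = fs.create(p)) {
            out.writeUTF("hello HDFS");
        }
        fs.setReplication(p, (short) 3);          // per-file replication factor

        // Read many: any number of readers may stream the closed file.
        try (FSDataInputStream in = fs.open(p)) {
            System.out.println(in.readUTF());
        }
    }
}
```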




Array Lesson 2 Outline
1. Array Lesson 2 Outline
2. Reading Array Values Using for Loop #1
3. Reading Array Values Using for Loop #2
4. for Loop: Like Many Statements #1
5. for Loop: Like Many Statements #2
6. for Loop: Like Many Statements #3
7. Reading Array on One Line of Input #1
8. Reading Array on One Line of Input #2
9. Reading Array on One Line of Input #3
10. Aside: Why Named Constants Are Good
11. Named Constants as Loop Bounds #1
12. Named Constants as Loop Bounds #2
13. Computing with Arrays #1
14. Computing with Arrays #2
15. Computing with Arrays #3
16. Computing with Arrays #4
17. Computing with Arrays #5
18. Static Memory Allocation
19. Static Memory Allocation Example #1
20. Static Memory Allocation Example #2
21. Static Sometimes Not Good Enough #1
22. Static Sometimes Not Good Enough #2
23. Static Sometimes Not Good Enough #3
24. Static Sometimes Not Good Enough #4
25. Static Memory Allocation Can Be Wasteful
26. Dynamic Memory Allocation #1
27. Dynamic Memory Allocation #2
28. Dynamic Memory Allocation #3
29. Dynamic Memory Allocation #4
30. Dynamic Memory Deallocation
31. Dynamic Memory Allocation Example #1
32. Dynamic Memory Allocation Example #2
33. Dynamic Memory Allocation Example #3
34. Exercise: mean #1
35. Exercise: mean #2
Array Lesson 2, CS1313 Spring 2019




The New Stack – Process Is the Next Platform
[Figure: evolution of the enterprise stack. Mainframe era: punch card or terminal clients, a custom application with OS and database on the mainframe (10's of users). Client-server era: PC or Internet clients, applications and databases spread across client and server OSs (1000's of users). SOA era: any device, composite applications spanning many applications, OSs, and DBs (millions of users).]
Magal and Word | Essentials of Business Processes and Information Systems | © 2009




Graph kernel: Kernel methods are a popular approach with broad applications in data mining. Put simply, a kernel function can be considered as a positive definite matrix that measures the similarities between each pair of input data. In the current study, a graph kernel method, namely the shortest-path kernel developed by Borgwardt and Kriegel, is used to compute the similarities between graphs.

The first step of the shortest-path kernel is to transform original graphs into shortest-path graphs. A shortest-path graph has the same nodes as its original graph, and between each pair of nodes there is an edge labeled with the shortest distance between the two nodes in the original graph. In the current study, the edge label will be referred to as the weight of the edge. This transformation can be done using any algorithm that solves the all-pairs-shortest-paths problem; here, the Floyd-Warshall algorithm was used.

Let G1 and G2 be two original graphs. They are transformed into shortest-path graphs S1(V1, E1) and S2(V2, E2), where V1 and V2 are the sets of nodes in S1 and S2, and E1 and E2 are the sets of edges in S1 and S2. Then a kernel function calculates the similarity between G1 and G2 by comparing all pairs of edges between S1 and S2:

K(G_1, G_2) = \sum_{e_1 \in E_1} \sum_{e_2 \in E_2} k_{edge}(e_1, e_2)

where k_edge( ) is a kernel function for comparing two edges (including the node labels and the edge weight). Let e1 be the edge between nodes v1 and w1, and e2 be the edge between nodes v2 and w2. Then

k_{edge}(e_1, e_2) = k_{node}(v_1, v_2) \cdot k_{weight}(e_1, e_2) \cdot k_{node}(w_1, w_2)

where k_node( ) is a kernel function for comparing the labels of two nodes, and k_weight( ) is a kernel function for comparing the weights of two edges. These two functions are defined as in Borgwardt et al. (2005):

k_{node}(v_1, v_2) = \exp\left( -\frac{\| labels(v_1) - labels(v_2) \|^2}{2\sigma^2} \right)

where labels(v) returns the vector of attributes associated with node v. Note that k_node( ) is a Gaussian kernel function; \sigma was set to 72 by trying different values between 32 and 128 with increments of 2.

k_{weight}(e_1, e_2) = \max(0,\; c - |weight(e_1) - weight(e_2)|)

where weight(e) returns the weight of edge e. k_weight( ) is a Brownian bridge kernel that assigns the highest value to edges that are identical in length. The constant c was set to 2 as in Borgwardt et al. (2005).

Classification and cross-validation: When the shortest-path graph kernel is used to compute similarities between graphs, the results are affected by the sizes of the graphs. Consider the case where graph G is compared with graphs Gx and Gy separately using the graph kernel. If Gx has more nodes than Gy does, then |Ex| > |Ey|, where Ex and Ey are the sets of edges in the shortest-path graphs of Gx and Gy. Therefore, the double summation in K(G, Gx) includes more terms than the summation in K(G, Gy) does, and each term (i.e., k_edge( )) inside the summation has a non-negative value. The consequence is that K(G, Gx) > K(G, Gy) may not necessarily indicate that Gx is more similar to G than Gy is; instead, it could be an artifact of the fact that Gx has more nodes than Gy. To overcome this problem, a voting strategy is developed for predicting whether a graph (or a patch) is an interface patch:

Algorithm Voting_Strategy(G)
Input: graph G
Output: G is an interface patch or a non-interface patch
  Let T be the set of proteins in the training set
  Let v be the number of votes given to "G is an interface patch"
  v = 0
  While (T is not empty) {
    Take one protein (P) out of T
    Let Gint and Gnon-int be the interface and non-interface patches from P
    If K(G, Gint) > K(G, Gnon-int), then increase v by 1
  }
  If v > |T|/2, then G is an interface patch
  Else G is a non-interface patch

Using this strategy, when K(G, Gint) is compared with K(G, Gnon-int), Gint and Gnon-int are guaranteed to have an identical number of nodes, since they are the interface and non-interface patches extracted from the same protein (see section 2.4 for details). Each time K(G, Gint) > K(G, Gnon-int) is true, one vote is given to "G is an interface patch". In the end, G is predicted to be an interface patch if "G is an interface patch" gets more than half of the total votes, i.e., v > |T|/2. Leave-one-out cross-validation was performed at the protein level. In one round of the experiment, the interface patch and non-interface patch of a
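The pipeline described above can be sketched compactly in Java under simplifying assumptions (scalar node labels instead of attribute vectors; this is my illustration, not the authors' code): Floyd-Warshall produces the shortest-path distances, and the kernel sums k_edge over all pairs of shortest-path edges.

```java
/** Sketch of a shortest-path graph kernel, assuming scalar node labels. */
public class ShortestPathKernel {
    static final double SIGMA = 72.0; // Gaussian bandwidth chosen in the study
    static final double C = 2.0;      // Brownian bridge constant

    /** Floyd-Warshall all-pairs shortest paths; w[i][j] is the edge weight,
     *  Double.POSITIVE_INFINITY marks "no edge", w[i][i] must be 0. */
    static double[][] floydWarshall(double[][] w) {
        int n = w.length;
        double[][] d = new double[n][n];
        for (int i = 0; i < n; i++) d[i] = w[i].clone();
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (d[i][k] + d[k][j] < d[i][j])
                        d[i][j] = d[i][k] + d[k][j];
        return d;
    }

    static double kNode(double a, double b) {      // Gaussian node kernel
        double diff = a - b;
        return Math.exp(-diff * diff / (2 * SIGMA * SIGMA));
    }

    static double kWeight(double w1, double w2) {  // Brownian bridge kernel
        return Math.max(0.0, C - Math.abs(w1 - w2));
    }

    /** K(G1,G2): sum k_edge over all pairs of shortest-path edges. */
    static double kernel(double[][] d1, double[] lab1,
                         double[][] d2, double[] lab2) {
        double sum = 0.0;
        for (int v1 = 0; v1 < d1.length; v1++)
            for (int w1 = v1 + 1; w1 < d1.length; w1++)
                for (int v2 = 0; v2 < d2.length; v2++)
                    for (int w2 = v2 + 1; w2 < d2.length; w2++)
                        sum += kNode(lab1[v1], lab2[v2])
                             * kWeight(d1[v1][w1], d2[v2][w2])
                             * kNode(lab1[w1], lab2[w2]);
        return sum;
    }

    public static void main(String[] args) {
        double inf = Double.POSITIVE_INFINITY;
        double[][] g1 = {{0, 1, inf}, {1, 0, 2}, {inf, 2, 0}};
        double[][] g2 = {{0, 2}, {2, 0}};
        System.out.println(kernel(floydWarshall(g1), new double[]{0.1, 0.5, 0.9},
                                  floydWarshall(g2), new double[]{0.2, 0.8}));
    }
}
```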




Describing Subsystem Dependencies - Subsystem Dependencies on a Subsystem
[Diagram: two subsystems, "Client Support" and "Server Support"; the client depends on the server's interface, labeled "Flexible Server".]
• When a subsystem uses some behavior of an element contained by another subsystem or package, a dependency on the external element is needed.
• If the element on which the subsystem depends is within a different subsystem, the dependency should be on that subsystem's interface, not on the element within the subsystem, since we are denied entry to the subsystem.
• We know the advantages of such a design...
• It also gives the designer total freedom in designing the internal behavior of the subsystem.
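As a hedged Java illustration of this rule (all names are hypothetical): the client subsystem depends only on a published interface, so the server subsystem's internals can change freely.

```java
// Published interface of the "Server Support" subsystem: the only element
// that other subsystems are allowed to depend on.
interface ServerSupport {
    String fetchRecord(int id);
}

// Internal to the server subsystem; clients never reference this class,
// so its design may change without breaking them.
class CachedServerSupport implements ServerSupport {
    @Override
    public String fetchRecord(int id) {
        return "record-" + id; // stand-in for cache lookup + remote fetch
    }
}

// The "Client Support" subsystem depends on the interface alone.
class ClientSupport {
    private final ServerSupport server;

    ClientSupport(ServerSupport server) {
        this.server = server; // any implementation may be substituted
    }

    String describe(int id) {
        return "Fetched: " + server.fetchRecord(id);
    }
}
```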




C:\UMBC\331\java>java envSnoop
-- listing properties --
java.specification.name=Java Platform API Specification
awt.toolkit=sun.awt.windows.WToolkit
java.version=1.2
java.awt.graphicsenv=sun.awt.Win32GraphicsEnvironment
user.timezone=America/New_York
java.specification.version=1.2
java.vm.vendor=Sun Microsystems Inc.
user.home=C:\WINDOWS
java.vm.specification.version=1.0
os.arch=x86
java.awt.fonts=
java.vendor.url=http://java.sun.com/
user.region=US
file.encoding.pkg=sun.io
java.home=C:\JDK1.2\JRE
java.class.path=C:\Program Files\PhotoDeluxe 2.0\Adob...
line.separator=
java.ext.dirs=C:\JDK1.2\JRE\lib\ext
java.io.tmpdir=C:\WINDOWS\TEMP\
os.name=Windows 95
java.vendor=Sun Microsystems Inc.
java.awt.printerjob=sun.awt.windows.WPrinterJob
java.library.path=C:\JDK1.2\BIN;.;C:\WINDOWS\SYSTEM;C:\...
java.vm.specification.vendor=Sun Microsystems Inc.
sun.io.unicode.encoding=UnicodeLittle
file.encoding=Cp1252
java.specification.vendor=Sun Microsystems Inc.
user.language=en
user.name=nicholas
java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport...
java.vm.name=Classic VM
java.class.version=46.0
java.vm.specification.name=Java Virtual Machine Specification
sun.boot.library.path=C:\JDK1.2\JRE\bin
os.version=4.10
java.vm.version=1.2
java.vm.info=build JDK-1.2-V, native threads, symcjit
java.compiler=symcjit
path.separator=;
file.separator=\
user.dir=C:\UMBC\331\java
sun.boot.class.path=C:\JDK1.2\JRE\lib\rt.jar;C:\JDK1.2\JR...
C:\UMBC\331\java>
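The listing above is what a tiny property snooper prints. The slide does not show the source of envSnoop, but a minimal reconstruction would be:

```java
import java.util.Properties;

/** Prints every system property, producing a listing like the one above. */
public class envSnoop {
    public static void main(String[] args) {
        Properties props = System.getProperties();
        props.list(System.out); // prints the "-- listing properties --" header
    }
}
```

Properties.list() truncates long values to 40 characters, which explains the trailing "..." on several lines of the listing.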




5.6.3 Canceling a Printing Request: The cancel Command
• The cancel command cancels requests for print jobs made with the lp command.
• To use the cancel command, you specify either the ID of the print job, which is provided by lp, or the printer name. The following command sequences illustrate the use of the cancel command:

$ lp myfirst [Return] . . . . . . . . . . Print myfirst on the default printer.
request id lp1-6889 (1 file)
$ cancel lp1-6889 [Return] . . . . . . Cancel the specified printing request.
request "lp1-6889" canceled
$ cancel lp1 [Return] . . . . . . . . . Cancel the request currently printing on the printer lp1.
request "lp1-6889" canceled

1. Specifying the print request ID cancels the print job even if it is currently printing.
2. Specifying only the printer name cancels just the request that is currently printing on the specified printer; your other print jobs in the queue will still be printed.
3. In both cases, the printer is freed to print the next job request.

(Amir Afzal, UNIX Unbounded, 5th Edition ©2008, Chapter 5: Introduction to the UNIX File System)




Effective Memory Access Time in Two-Level Paging
• Multilevel paging does not affect performance drastically. For a two-level page table, converting a logical address into a physical one may take 3 memory accesses (one for the outer page table, one for the inner page table that holds the page entry, and one for the actual frame). Caching (TLBs) helps, and performance remains reasonable.
• With TLB hit ratio h, TLB search time sa, and memory access time ma:
  Effective Memory Access Time = h (sa + ma) + (1 - h)(sa + 3ma)
                               = h sa + h ma + sa + 3ma - h sa - 3h ma
                               = sa + (3 - 2h) ma
• If the hit ratio h = 98%, sa = 20 ns, and ma = 100 ns:
  Effective Memory Access Time = 20 + (3 - 2(0.98)) × 100 = 20 + 1.04 × 100 = 124 ns, or a 24% slowdown.
(Silberschatz, Galvin and Gagne, Operating System Concepts with Java, 8th Edition ©2009)
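A one-method Java check of the simplified formula (variable names mirror the slide):

```java
/** Effective memory access time for two-level paging with a TLB. */
public class Emat {
    // h = TLB hit ratio, sa = TLB search time (ns), ma = memory access (ns)
    static double emat(double h, double sa, double ma) {
        return sa + (3 - 2 * h) * ma; // simplified form derived above
    }

    public static void main(String[] args) {
        System.out.println(emat(0.98, 20, 100)); // ~124.0 ns
    }
}
```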




Memory-Mapped File Technique for all I/O
• Some OSes use memory-mapped files for standard I/O.
• A process can explicitly request memory mapping of a file via the mmap() system call; the file is then mapped into the process address space.
• For standard I/O (open(), read(), write(), close()), the file is mmap-ed anyway, but into the kernel address space. The process still does read() and write(), which copy data to and from kernel space and user space.
• This uses the efficient memory-management subsystem and avoids the need for a separate buffering subsystem.
• COW (copy-on-write) can be used for read/write non-shared pages.
• Memory-mapped files can also be used for shared memory (although again via separate system calls).
(Silberschatz, Galvin and Gagne, Operating System Concepts Essentials, 8th Edition ©2011)
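In Java, the same facility is exposed through NIO. A minimal sketch (the file name and mapping size are illustrative):

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Maps a file into the process address space and accesses it as memory. */
public class MmapDemo {
    public static void main(String[] args) throws Exception {
        Path path = Path.of("demo.bin"); // illustrative file name
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // Map 4 KB of the file; stores reach the file without write() calls.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(0, 42);                 // write through the mapping
            System.out.println(buf.getInt(0)); // read back: 42
        }
    }
}
```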