Pakistani stuttering families: PKST 072
[Pedigree chart spanning generations I–X, with individual names shown; genotyped individuals are marked on the chart.]




Closed Area Checklist - Closed Area Interview Guide

___ Is the media in the area marked properly? (classified, unclassified, and system software)
___ Are both classified and unclassified computer equipment affixed with a label indicating their level of processing?
___ Review the visitor log. Pay close attention to each visitor's company name.
   • Did someone visit from an HVAC service? If so, ask the area custodian what they did. Did they put a hole in the wall or make a change affecting the area's integrity or the 147? If so, is it greater than 96 square inches?
   • Did someone visit from Xerox? If so, what did they do while they were there? Did they install a new copy machine with a hard drive? Was it connected to the classified AIS?
   • Did someone visit from a computer service vendor? If so, what did they do? Did they bring diagnostic equipment with them? If so, did they connect it to the AIS?
   • Did any visitors have "keyboard" access? If so, was it authorized?
   • Dispose of visitor logs from before the last DSS audit.
___ Does the 147 note "open storage" of AIS?
___ AIS TEAM MEMBER: Dispose of system paperwork from before the last DSS audit (unless it is still relevant).
___ AIS TEAM MEMBER: Look around. Is there any new hardware connected to the AIS? If so, what is it? Does it have memory?
___ AIS TEAM MEMBER: Check the AIS system access list. Are all individuals still active employees? Balance the list against an active-employee listing, and bring a list of recently terminated employees with you. Are all individuals on the system access list also on the Closed Area access list? If not, why not? Review the Closed Area access list: do you see anyone who recently terminated? If so, request that they be taken off the Closed Area access list. Were they on the system access list? If so, has their account been disabled? Balance all the lists against each other (a scripted cross-check like the sketch following this checklist can help). Has everyone on the system access list taken the required CBEs? (Verify.)
___ Are there Security posters in the area?
___ Are the fax machines in the area marked "for unclassified use only"?
___ Are the shredders marked "for unclassified use only"?
___ AIS TEAM MEMBER: Do the classified printers have a sign reading "Output must be treated as classified until reviewed…"?
___ Are the recycle bins labeled "for unclassified use only"?
___ Are the supplies in the area sufficient? (CD labels, classification labels, coversheets, etc.)
___ Does the area have a "marking guide" poster?
___ Does the area have an updated Security points-of-contact poster?
___ AIS TEAM MEMBER: Before going to audit the system, read about what the system is used for and what it does. This will generate questions and help you understand what goes on in the area.
___ AIS TEAM MEMBER: Have a user walk you through the steps they follow when they create classified data. What do they print out? Is it classified? If it is not classified, do they verify that? How do they know what is classified? (Do they refer to the program security classification guide? Do they know where the guide is located?) Where do they put the classified material when it is completed? Go look at their safe. Are things marked properly? Ask whether the data in the safe is for a current contract; if not, explain the requirements for retention approval (see NISPOM 5-701). Where does the data or hardware go from there? Is it sent to a customer? What is our relationship with the organization they send it to? Do we have DD Forms 254 in place to/from that organization?
___ What is the classification of what they are working on? Is the system approved up to that level?
___ Do they support IR&D activities? If so, explain how IR&D documents must be marked "IR&D Document," etc., in accordance with the NISPOM (11-304).
___ Are the above-the-ceiling checks being conducted on the required schedule? Look at the records. Dispose of records from before the last DSS audit.
___ AIS TEAM MEMBER: Review the Trusted Download logs. Ask people where the removed media is currently located (stored on a computer, CD, printout) and which method they used for the transfer. DSS is focusing on interviews with employees and may well ask them to actually demonstrate a trusted download. Ask the employee to walk through the steps with you to prepare them for the audit.
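For the list-balancing step above, the mechanical comparison is easy to script. The sketch below is hypothetical: the file names and the one-identifier-per-line format are assumptions, and it is an aid to the review, not a DSS-prescribed procedure.

    def load_ids(path):
        # Read one employee identifier per line, ignoring blank lines.
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    # Hypothetical input files, one ID per line.
    system_access = load_ids("system_access_list.txt")
    area_access   = load_ids("closed_area_access_list.txt")
    active        = load_ids("active_employees.txt")

    # On the system access list but not on the Closed Area access list: why not?
    print("On system list, not on area list:", sorted(system_access - area_access))

    # Anyone no longer active should come off the area list,
    # and their system account should be disabled.
    print("Terminated, still on system list:", sorted(system_access - active))
    print("Terminated, still on area list:  ", sorted(area_access - active))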




Three Releases

Your project will be completed in three releases, with some additional follow-up afterwards.

Release 1: Initial code & documentation
• Specifications required: week of 8/28
• Completed release due: week of 9/18

Release 2: More thorough code & documentation
• Specifications required: week of 9/18
• Completed release due: week of 10/16

Release 3: Complete code & documentation
• Specifications required: week of 10/16
• Completed release due: week of 11/13

Release Follow-Up
• Peer reviews: week of 11/27
• Faculty presentation: week of 12/4
• Post-mortem discussion: week of 12/11




A time plot can be used to compare two or more data sets covering the same time period.

1918 influenza epidemic

Week   # Cases   # Deaths
 1         36         0
 2        531         0
 3       4233       130
 4       8682       552
 5       7164       738
 6       2229       414
 7        600       198
 8        164        90
 9         57        56
10        722        50
11       1517        71
12       1828       137
13       1539       178
14       2416       194
15       3148       290
16       3465       310
17       1440       149

[Time plots: weekly incidence of # Cases (axis 0 to 10,000) and # Deaths (axis 0 to 800) over weeks 1 through 17.]

The pattern over time for the number of flu diagnoses closely resembles that for the number of deaths from the flu, indicating that about 8% to 10% of the people diagnosed that year died shortly afterward from complications of the flu.
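As a quick arithmetic check of that 8%-to-10% figure, a minimal Python sketch summing the two columns (data transcribed from the table above) gives roughly 8.9%:

    cases  = [36, 531, 4233, 8682, 7164, 2229, 600, 164, 57,
              722, 1517, 1828, 1539, 2416, 3148, 3465, 1440]
    deaths = [0, 0, 130, 552, 738, 414, 198, 90, 56,
              50, 71, 137, 178, 194, 290, 310, 149]

    # Overall death-to-diagnosis ratio across the 17 weeks.
    ratio = sum(deaths) / sum(cases)
    print(f"{sum(deaths)} deaths / {sum(cases)} cases = {ratio:.1%}")
    # Prints: 3557 deaths / 39771 cases = 8.9%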




Non Point Source Projects - Clean Water Act Section 319
NYS Environmental Facilities Corporation, September 28, 2010

VII-A Agricultural Cropland
VII-B Agricultural Animals
VII-C Silviculture
VII-D Urban, excluding decentralized systems; Green Infrastructure
VII-E Ground Water, unknown source
VII-F Marinas
VII-G Resource Extraction
VII-H Brownfields
VII-I Storage Tanks
VII-J Sanitary Landfills
VII-K Hydromodification
VII-L Individual/Decentralized Systems

[Pictured examples: Regional Digester/Bioenergy Facility; Porous Pavement]




CWA Section 319 Projects – Non Point Source (NPS)
NYS Environmental Facilities Corporation, November 17, 2010

VII-A Agricultural Cropland
VII-B Agricultural Animals
VII-C Silviculture
VII-D Urban, excluding decentralized systems
VII-E Ground Water, unknown source
VII-F Marinas
VII-G Resource Extraction
VII-H Brownfields
VII-I Storage Tanks
VII-J Sanitary Landfills
VII-K Hydromodification
VII-L Individual/Decentralized Systems




HDFS (the Hadoop Distributed File System) is a distributed file system designed to run on commodity hardware. Its differences from other distributed file systems are few but significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is now part of Apache Hadoop Core (http://hadoop.apache.org/core/).

2.1. Hardware Failure: Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are many components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is therefore a core architectural goal of HDFS.

2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not general-purpose applications that typically run on general-purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications targeted for HDFS, so POSIX semantics in a few key areas have been traded away to increase data throughput rates.

2.3. Large Data Sets: Applications that run on HDFS have large data sets, typically gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It provides high aggregate data bandwidth, scales to hundreds of nodes in a single cluster, and supports roughly ten million files in a single instance.

2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file, once created, written, and closed, need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A Map/Reduce application or a web crawler application fits this model perfectly (a command-line illustration follows section 2.6). There is a plan to support appending writes to files in the future, keeping write-once-read-many semantics at the file level.

2.5. "Moving Computation is Cheaper than Moving Data": A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge, because it minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located than to move the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates the widespread adoption of HDFS as a platform of choice for a large set of applications.
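The access-model assumptions in 2.2 and 2.4 can be illustrated with the standard "hdfs dfs" shell. Below is a minimal sketch in Python driving that CLI; it assumes a configured Hadoop client on the PATH, and the directory and file names are illustrative only, not taken from this document.

    import subprocess

    def hdfs(*args):
        # Run an "hdfs dfs" subcommand and raise if it fails.
        subprocess.run(["hdfs", "dfs", *args], check=True)

    # Write once: create a directory and load a data file into it.
    hdfs("-mkdir", "-p", "/data/crawl")        # illustrative path
    hdfs("-put", "pages.seq", "/data/crawl/")  # illustrative local file

    # Read many: the file can now be streamed by any number of readers.
    hdfs("-cat", "/data/crawl/pages.seq")

    # Note there is no edit-in-place step: under the write-once-read-many
    # model, changing the data means writing a new file.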
3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks, and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories; it also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients, and they also perform block creation, deletion, and replication upon instruction from the NameNode.

The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run a GNU/Linux operating system. HDFS is built using the Java language; any machine that supports Java can run the NameNode or DataNode software. Use of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software; each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely the case. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system: the NameNode is the arbitrator and repository for all HDFS metadata, and the system is designed in such a way that user data never flows through the NameNode.

4. The File System Namespace: HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems: one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions, and it does not support hard links or soft links; however, the HDFS architecture does not preclude implementing these features. The NameNode maintains the file system namespace, and any change to the namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that HDFS should maintain. The number of copies of a file is called the replication factor of that file, and this information is stored by the NameNode.

5. Data Replication: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file: an application can specify the number of replicas of a file at creation time, and the factor can be changed later (see the sketch after this section). Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly; a Blockreport contains a list of all the blocks on a DataNode.
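Continuing the illustrative sketch above, the per-file replication factor described in section 5 can be changed after file creation with the standard "hdfs dfs -setrep" command. Again, a configured client is assumed and the path is carried over from the earlier sketch for illustration only.

    import subprocess

    # Raise this file's replication factor to 3; "-w" waits until the
    # NameNode reports that the target replication has been reached.
    subprocess.run(["hdfs", "dfs", "-setrep", "-w", "3", "/data/crawl/pages.seq"],
                   check=True)

    # The second column of "-ls" output shows the file's current
    # replication factor.
    subprocess.run(["hdfs", "dfs", "-ls", "/data/crawl/pages.seq"], check=True)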