Achieving Certification: Information Technology, Project Management, Human Resources (Entry, Intermediate, Advanced levels). Diagram is for illustrative purposes only and does not reflect all requirements for certification, or our complete offering in training.

PHR (Professional in Human Resources) validates the skills necessary for working in Human Resources Management in large and small enterprises; minimum 1-4 years experience, depending on level of education.

SPHR (Senior Professional in Human Resources) validates the skills necessary for working in Human Resources Management in large and small enterprises; minimum 5-7 years experience, depending on level of education.

CAPM (Certified Associate in Project Management) requires at least 1,500 hours of project experience OR 23 hours of project management education by the time candidates sit for the exam (our program meets this requirement).

PMP (Project Management Professional) requires 5 years of project management experience, with 7,500 hours leading projects and 35 hours of project management education, OR a 4-year degree and 3 years of project management experience, with 4,500 hours leading projects and 35 hours of project management education.

CCENT (Cisco Certified Entry Network Technician) validates the ability to install, operate, and troubleshoot a small enterprise branch network, including basic network security; first step towards CCNA.

CCNA (Cisco Certified Network Associate) validates the ability to install, configure, operate, and troubleshoot medium-size routed and switched networks; recommended 1-3 years professional experience.

CompTIA A+ certification is the starting point for a career in IT. The exams cover maintenance of PCs, mobile devices, laptops, operating systems, and printers; the certification consists of two exams, with no professional experience requirements.

Network+ certification validates entry-level knowledge of network technologies, installation and configuration, media and topologies, management, and security; recommended A+ certification and 9 months networking experience.

Server+ certification validates knowledge in system hardware, software, storage, IT environment best practices, disaster recovery, and troubleshooting; recommended A+ certification and 18 months professional experience.

Security+ certification validates entry-level knowledge of Information Security practices; recommended Network+ certification and two years of technical networking experience, with an emphasis on security.

Six Sigma Green Belt (SSGB) certification validates knowledge and professional experience of data collection and analysis for Six Sigma projects; requires 3 years professional experience.

SSCP (Systems Security Certified Practitioner) validates knowledge and professional experience in Information Security practices; minimum 1 year experience.

CISSP (Certified Information Systems Security Professional) validates knowledge and professional experience in Information Security practices; minimum 5 years experience.

(Complete listing at vets.syr.edu/vctp)




A Mock Student "Learning" Schedule incorporating the Study Cycle: Your Practice Schedule

Time     | Mon     | Tue       | Wed     | Thu       | Fri     | Sat     | Sun
6:00 AM  | Wake up | Wake up   | Wake up | Wake up   | Wake up | Wake up | Wake up
7:00 AM  |         |           |         |           |         | Vol     | Church
8:00 AM  | English |           |         |           |         | Vol     | Church
9:00 AM  | Math    | Biology   | Math    |           |         |         |
10:00 AM | Review  | Biology   | Review  |           |         |         |
11:00 AM | English | Review    | English | Math      | Biology | Review  | Review
12:00 PM | Lunch   | Lunch     | Lunch   | Lunch     | Lunch   | Lunch   | Church
1:00 PM  | History | Chemistry | History | Chemistry | History | Study   | Lunch
2:00 PM  | Lab 1   | Chemistry | Lab 2   | Chemistry | Review  | Study   | Study
3:00 PM  | Lab 1   | Review    | Lab 2   | Review    | Study   | Study   | Study
4:00 PM  | Lab 1   |           | Lab 2   |           | Study   | Study   | Study
5:00 PM  | Review  |           | Review  |           | Study   | Fun     | Study
6:00 PM  | Dinner  | Dinner    | Dinner  | Dinner    | Dinner  | Fun     | Dinner
7:00 PM  | Study   | Study     | Study   | Study     | Fun     | Fun     | House
8:00 PM  | Study   | Study     | Study   | Study     | Fun     | Fun     | House
9:00 PM  | Study   | Study     | Study   | Study     | Fun     | Fun     | House
10:00 PM | Preview | Preview   | Preview | Preview   | Fun     | Fun     | Preview
11:00 PM | Sleep   | Sleep     | Sleep   | Sleep     | Sleep   | Sleep   | Sleep




2. Count the "F's" in the following: "FINISHED FILES ARE THE RESULT OF YEARS OF SCIENTIFIC STUDY COMBINED WITH THE EXPERIENCE OF YEARS."
A. 3   B. 4   C. 5   D. 6

Answer: D. 6. FINISHED FILES ARE THE RESULT OF YEARS OF SCIENTIFIC STUDY COMBINED WITH THE EXPERIENCE OF YEARS. Because we often "speak" the words in our minds as we read them, we don't "hear" the "F's" in words like "OF", because they don't "sound" like "F's".




fig02_11.cpp output:

First run:
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 2
Enter result (1 = pass, 2 = fail): 2
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 2
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 2
Passed 6
Failed 4

Second run:
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 2
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Enter result (1 = pass, 2 = fail): 1
Passed 9
Failed 1
Raise tuition
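The two runs above come from a counter-controlled loop that reads ten results, tallies passes and failures, and recommends raising tuition when more than eight students pass. As a sketch of that logic (the original figure is C++ source from the Deitel text; this reconstruction is written in Java to match the other examples in this document and is not the book's code):

import java.util.Scanner;

public class ExamResults {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int passes = 0;    // number of passing results seen so far
        int failures = 0;  // number of failing results seen so far

        // Counter-controlled repetition: process exactly 10 results.
        for (int student = 1; student <= 10; student++) {
            System.out.print("Enter result (1 = pass, 2 = fail): ");
            int result = input.nextInt();
            if (result == 1) {
                passes++;
            } else {
                failures++;
            }
        }

        System.out.println("Passed " + passes);
        System.out.println("Failed " + failures);

        // Policy from the example: more than 8 passes triggers the message,
        // which is why only the second run above prints it.
        if (passes > 8) {
            System.out.println("Raise tuition");
        }
    }
}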




DEN 219/229 Reflection Journal Rubric (Grading Criteria)

Clarity
- Reflective Student: Language is clear and expressive. The reader can create a mental picture of the situation being described. Abstract concepts are explained accurately. Explanation makes sense to an uninformed reader.
- Aware Student: Student's language is clear and expressive, with minor, infrequent lapses in clarity and accuracy.
- Reflective Novice: There are frequent lapses in clarity and accuracy.
- Below Expectations: Language is unclear and confusing throughout. Concepts are either not discussed or are presented inaccurately.

Relevance (the learning experience is relevant and meaningful to the student)
- Reflective Student: The learning experience being reflected upon is relevant and meaningful to student and course learning goals.
- Aware Student: The learning experience being reflected upon is relevant and meaningful to student and course learning goals.
- Reflective Novice: Student makes attempts to demonstrate relevance, but the relevance is unclear to the reader.
- Below Expectations: Most of the reflection is irrelevant to student and/or course learning goals.

Interconnections (the reflection demonstrates connections between the experience and material from other courses, past experience, and/or personal goals)
- Reflective Student: The reflection demonstrates connections between the experience and material from other courses, past experience, and/or personal goals.
- Aware Student: The reflection demonstrates connections between the experience and material from other courses, but lacks relevance and depth.
- Reflective Novice: There is little to no attempt to demonstrate connections between the learning experience and previous personal and/or learning experiences.
- Below Expectations: No attempt to demonstrate connections to previous learning or experience.

Analysis (the reflection moves beyond simple description of the experience)
- Reflective Student: The reflection moves beyond simple description of the experience to an analysis of how the experience contributed to student understanding of self, others, and/or course concepts.
- Aware Student: The reflection demonstrates student attempts to analyze the experience, but the analysis lacks depth.
- Reflective Novice: Student makes attempts at applying the learning experience to understanding of self, others, and/or course concepts, but fails to demonstrate depth of analysis.
- Below Expectations: Reflection does not move beyond description of the learning experience(s).

Self-criticism (ability of the student to question their own biases, stereotypes, preconceptions, and/or assumptions)
- Reflective Student: The reflection demonstrates ability of the student to question their own biases, stereotypes, preconceptions, and/or assumptions, and to define new modes of thinking as a result.
- Aware Student: The reflection demonstrates ability of the student to question their own biases, stereotypes, and preconceptions.
- Reflective Novice: There is some attempt at self-criticism, but the self-reflection fails to demonstrate a new awareness of personal biases, etc.
- Below Expectations: No attempt at self-criticism.

Adapted from University of Iowa, Office of Service Learning.




Sample Report by a Forensics Expert

These files all contained the message: "I'd like to offer you some material from my company in exchange for a position in your company." – [email protected] These files grabbed my attention, so I made sure to record the access times (all last accessed on 3/9/04 around 11:38 AM). I bookmarked the four files by selecting them and right-clicking, then choosing Bookmark Files. I created a new folder called TMP Files (ACME), and the four files were imported there for further consideration later.

I next looked through the Boeing results, but they were mostly HTML files that Pat Smith must have been visiting. The bulk of the hits came from Raytheon. They were a mix of web files, including data and content. The web files came from the Raytheon website, where the company's "about" and "contact" pages had been visited. Also mixed in were a few e-mails to [email protected] I selected a few files, which I saved to bookmarks in the DBX Files (Raytheon) folder. Two e-mails in particular stood out because they contained information that seemed to relate to this case. The locations of these files are listed below.




HDFS (Hadoop Distributed File System) is a distributed file system designed to run on commodity hardware. Its differences from other distributed file systems are few but significant: HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project and is now part of Apache Hadoop Core (http://hadoop.apache.org/core/).

2.1. Hardware Failure: Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. With that many components, each having a non-trivial probability of failure, some component of HDFS is always non-functional. Detection of faults and quick, automatic recovery from them is therefore a core architectural goal of HDFS.

2.2. Streaming Data Access: Applications that run on HDFS need streaming access to their data sets. They are not the general-purpose applications that typically run on general-purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications targeted at HDFS, so POSIX semantics in a few key areas have been traded away to increase data throughput rates.

2.3. Large Data Sets: Applications on HDFS have large data sets, typically gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It provides high aggregate data bandwidth, scales to hundreds of nodes in a single cluster, and supports on the order of ten million files in a single instance.

2.4. Simple Coherency Model: HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A Map/Reduce application or a web crawler application fits this model perfectly. There is a plan to support appending writes to files in the future.

2.5. "Moving Computation is Cheaper than Moving Data": A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the data set is huge, because it minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located than to move the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

2.6. Portability Across Heterogeneous Hardware and Software Platforms: HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

3. NameNode and DataNodes: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks, and these blocks are stored in a set of DataNodes.
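From a client's point of view, namespace operations go to the NameNode while file data streams to and from DataNodes; Hadoop's Java FileSystem API hides this split. As a minimal sketch of the write-once-read-many usage pattern described in 2.4 (the cluster URI and file path below are hypothetical placeholders, not values from this document):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteOnce {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder URI; the NameNode behind it answers namespace requests.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        Path file = new Path("/data/events.log");  // hypothetical path
        // create() allocates blocks via the NameNode; the bytes themselves
        // stream to DataNodes and never flow through the NameNode.
        try (FSDataOutputStream out = fs.create(file)) {
            out.writeBytes("written once, read many times\n");
        }
        // Once closed, the file is read (possibly by many concurrent
        // readers) rather than edited in place.
    }
}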
The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes serve read and write requests from the file system's clients and perform block creation, deletion, and replication upon instruction from the NameNode.

The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run the GNU/Linux operating system. HDFS is built using the Java language; any machine that supports Java can run the NameNode or DataNode software. Use of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software; each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely the case. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.

4. The File System Namespace: HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems: one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions, and it does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features. The NameNode maintains the file system namespace; any change to the namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file, and this information is stored by the NameNode.

5. Data Replication: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file: an application can specify the number of replicas of a file at creation time and change it later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
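Because the block size and replication factor are per-file settings, a client can pick them when a file is created and raise or lower the replication factor afterwards; the NameNode records the setting and schedules re-replication in the background. A sketch using the standard FileSystem API (the path, replication factors, and block size here are illustrative assumptions, not values from this document):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/data/important.dat");  // hypothetical path

        // Choose a replication factor (3) and block size (128 MB) at creation time.
        try (FSDataOutputStream out =
                 fs.create(file, true, 4096, (short) 3, 128L * 1024 * 1024)) {
            out.writeBytes("replicated payload\n");
        }

        // Later, ask the NameNode to keep 5 replicas of this file's blocks;
        // the extra copies are made asynchronously on the DataNodes.
        boolean accepted = fs.setReplication(file, (short) 5);
        System.out.println("Replication change accepted: " + accepted);
    }
}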