Order of Review Summary
• Discussion order is based on the average of the preliminary impact scores from the assigned reviewers (sketched below)
• Final scores of discussed applications may differ from preliminary scores, as recalibration happens dynamically during discussion
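As a minimal sketch (hypothetical data and application IDs, not an official NIH tool), ordering applications by the average of the assigned reviewers' preliminary impact scores might look like this:

```python
from statistics import mean

# Hypothetical preliminary impact scores from three assigned reviewers.
# NIH impact scores run from 1 (best) to 9 (worst), so a lower average
# indicates a stronger application.
preliminary = {
    "APP-001": [2, 3, 2],
    "APP-002": [5, 4, 6],
    "APP-003": [1, 2, 2],
}

# Discussion order: ascending mean score, i.e., strongest applications first.
for app in sorted(preliminary, key=lambda a: mean(preliminary[a])):
    print(app, round(mean(preliminary[app]), 2))
```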

The NIH Peer Review Process: Types of Reviewers
• Regular reviewers – permanent and temporary
  – Preliminary impact/priority scores, criterion scores, written critiques
  – Final impact/priority scores
• Other Contributing Reviewers (“mail” reviewers)
  – Written critiques, criterion scores, preliminary impact/priority scores
  – Cannot submit final impact/priority scores

Understanding the Equity Summary Score Methodology

3. Calculate – The normalized analysts' recommendations and the accuracy weightings are combined to create a single score. For the largest 1,500 stocks by market capitalization, these scores are then force-ranked against all the other scores to create a standardized Equity Summary Score on a scale of 0.1 to 10.0 for the 1,500 stocks (see the sketch at the end of this section). This means that there will be a uniform distribution of scores provided by the model, thereby assisting investors in evaluating the largest stocks (in terms of capitalization), which typically make up the majority of individual investors' portfolios. Finally, smaller-cap stocks are then slotted into this distribution without a force ranking, and may not exhibit the same balanced distribution.

The Equity Summary Score and associated sentiment ratings by StarMine are:
• 0.1 to 1.0 – very bearish
• 1.1 to 3.0 – bearish
• 3.1 to 7.0 – neutral
• 7.1 to 9.0 – bullish
• 9.1 to 10.0 – very bullish

Other Important Model Factors:
• An Equity Summary Score is only provided for stocks with ratings from four or more independent research providers.
• New research providers are ramped in slowly by StarMine to avoid rapid fluctuations in Equity Summary Scores. Independent research providers that are removed from Fidelity.com will similarly be ramped out slowly to avoid rapid fluctuations.

Notes on Using the Equity Summary Score:
• The Equity Summary Score and sentiment ratings are ratings of relative, not absolute, forecasted performance. The StarMine model anticipates that the highest-rated stocks, those labeled “Very Bullish,” may as a group outperform lower-rated groups of stocks. In a rising market, most stocks may experience price increases, and in a declining market, most stocks may experience price declines.
• Proper diversification within a portfolio is critical to the effective use of the Equity Summary Score. Individual company performance is subject to a broad range of factors that cannot be adequately captured in any rating system.
• Larger differences in Equity Summary Scores may lead to differences in future performance. The sentiment rating labels should only be used for quick categorization: an 8.9 Bullish is closer to a 9.1 Very Bullish than to a 7.1 Bullish.
• For a customer holding a stock with a lower Equity Summary Score, there are many important considerations (for example, taxes) that may be much more important than the Score.
• The Equity Summary Score by StarMine does not predict future performance of underlying stocks. The model has only been in production since August 2009, and therefore no assumptions should be made about how it will perform in differing market conditions.

How has the Equity Summary Score performed?
Transparency is a core value at Fidelity, which is why StarMine provides Fidelity with a monthly view of the historical aggregate performance of the Equity Summary Score across all covered stocks. You can use this to gain insight into the performance and composition of the Equity Summary Score. In addition, the individual stock price performance during each period of Equity Summary Score sentiment can be viewed on the symbol-specific Analyst Opinions History and Performance pages.
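The force-ranking step can be illustrated with a small sketch. The actual StarMine computation is proprietary, so the mapping below (1-based rank scaled onto 0.1–10.0, rounded to one decimal) is only an assumed stand-in; the sentiment cutoffs follow the ranges listed above:

```python
def force_rank(raw_scores):
    """Map raw combined scores onto a uniform 0.1-10.0 scale by rank
    (illustrative; the real model force-ranks the largest 1,500 stocks)."""
    n = len(raw_scores)
    ranked = sorted(raw_scores.items(), key=lambda kv: kv[1])
    # 1-based rank i maps to 10*i/n, rounded to one decimal and floored
    # at 0.1 so every ranked stock receives a positive score.
    return {sym: max(0.1, round(10.0 * i / n, 1))
            for i, (sym, _) in enumerate(ranked, start=1)}

def sentiment(score):
    """Sentiment label for an Equity Summary Score, per the ranges above."""
    if score <= 1.0: return "very bearish"
    if score <= 3.0: return "bearish"
    if score <= 7.0: return "neutral"
    if score <= 9.0: return "bullish"
    return "very bullish"

# Hypothetical raw scores for four ticker symbols.
for sym, s in force_rank({"AAA": 0.8, "BBB": -0.2, "CCC": 1.7, "DDD": 0.1}).items():
    print(sym, s, sentiment(s))
```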

Traditional* Review Meeting Process
• Upper half of applications discussed:
  – Reviewers are guided by specific review criteria
  – Protections for humans, vertebrate animals, and the environment (biohazards) may affect the final score
  – Assigned reviewers recommend scores for each application in the upper half; all members not in conflict vote their conscience (the outlier score policy pertains)
  – Other considerations not affecting the final score are discussed (e.g., budget, foreign applicants, resource sharing plans)
• Lower half of applications not discussed, not assigned an overall score

* Aspects of this process will change in May 2009 (http://enhancing-peer-review.nih.gov)

How Reviewers are Selected
• Three or more reviewers are selected for a proposal
• Types of reviewers recruited:
  – Reviewers with specific content expertise
  – Reviewers with general science or education expertise
• Sources of reviewers:
  – PO's knowledge of the research area
  – References listed in the proposal
  – Recent professional society programs
  – Computer searches of S&E journals
  – Former reviewers
  – Reviewer recommendations in the proposal/email
• Reviewers volunteer to POs!

Pre-Meeting Review Process
• Appropriate reviewers recruited by the SRO; minimum of 3 “interactive” reviewers per application
• Conflicts of interest identified
• Applications made available to reviewers ~6 weeks prior to the meeting
• Critiques and preliminary scores posted by assigned reviewers on the NIH web site at least 2-3 days prior to the meeting
• Critiques and preliminary scores (excluding conflicts) available to the review group prior to the meeting

Visualize ImPACT and its causes along the journey to sustainability
Paul E. Waggoner and Jesse H. Ausubel
The Connecticut Agricultural Experiment Station, New Haven, and The Rockefeller University, New York. October 2002

Sustainable production and consumption respond to the need for a better life with minimum impact on the environment. For the foreseeable future the response will play out against a background of slowing but still persistent population growth. So our task is visualizing consumers and producers opposing growing population and income to slow or stop the growth of impact.

Envision a scroll unrolling from an instrument in the sky above the U.S., dynamically recording the annual percentage change of a national impact. The unrolling scroll could, for example, record the annual percentage change in national greenhouse-gas emissions or fertilizer and cropland use. Its needle would ink a line to the right if impact accelerated and to the left if it slowed. By exerting leverage along the beam of a balance, four actors—parents, workers, consumers and producers—move the needle right to more and left to less impact. Another instrument with four pens could trace on another scroll the actors' changing leverages, recording their movement left and right along the beam and thus their quantitative contributions to changing impact. Superimposing the leverages that decrease impact over those that increase it would make their net, and thus the changing national impact, visible on a final scroll of countered leverage. ImPACT* provides the means for visualizing the journey to sustainability, quantitatively.

Identify the four actors of ImPACT by heart, moneybag, gift-wrapped box and tools. Color black the team of two actors who will likely increase impact. Color green the team of two who may oppose them and decrease impact.
• P = population,
• A = income, as GDP per person,
• C = consumers' behavior, as product use per GDP, and
• T = producers' efficiency, as impact per product.
Sometimes C is called intensity of use, A affluence, and T technology. We call decreasing C dematerialization. (The sketch below works through the identity's arithmetic.)

* Waggoner, P.E., and Ausubel, J.H. 2002. A framework for sustainability science: A renovated IPAT identity. Proc. National Acad. Sci. (US) 99:7860-7865. Online at http://phe.rockefeller.edu/ImPACT/supp1.ppt. See the same file regarding consumers' behavior and producers' efficiency moving in consistent patterns.
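Because ImPACT is a multiplicative identity, each actor's leverage can be read as its annual percentage (logarithmic) change, and the four changes sum exactly to the change in impact. A small sketch with made-up numbers:

```python
import math

# ImPACT identity: Impact = P * A * C * T
#   P = population, A = GDP per person,
#   C = product use per GDP, T = impact per unit of product.
# The values below are hypothetical, for illustration only.
year1 = {"P": 280e6, "A": 35_000.0, "C": 2.0e-4, "T": 0.50}
year2 = {"P": 283e6, "A": 36_000.0, "C": 1.9e-4, "T": 0.48}

def impact(factors):
    return factors["P"] * factors["A"] * factors["C"] * factors["T"]

# Log-changes of the factors sum exactly to the log-change of impact,
# so leverages to the left (negative) offset leverages to the right.
for k in "PACT":
    print(k, f"{100 * math.log(year2[k] / year1[k]):+.2f}%")
print("Impact", f"{100 * math.log(impact(year2) / impact(year1)):+.2f}%")
```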

Work Breakdown Schedule

Aaron – 145 hrs
  1.1 Review General Air Foil Theory – 6 hrs
  1.2 Preliminary Senior Design Coordinator Meetings – 2 hrs
  1.3 Examine Existing RC Planes – 12 hrs
  1.4.1 Select Competition Class – 10 hrs
  1.4.2 Review Selected Class Requirements – 10 hrs
  2.1.1 Review Existing Wing Designs – 8 hrs
  2.1.2 Select Basic Wing Layout – 9 hrs
  2.1.3 Theoretical Design of Wing – 18 hrs
  2.1.4 Computer Aided Wing Analysis – 14 hrs
  2.1.5 Physical Modeling – 10 hrs
  3.1 Combine Wing, Propeller, Fuselage Models – 2 hrs
  3.2 Wind Tunnel Testing – 2 hrs
  3.3 Analyze Results – 1 hr
  4.1.1 Project Proposal – 4 hrs
  4.1.2 Semester Report – 14 hrs
  4.2.1 Project Proposal – 4 hrs
  4.2.2 Semester Report – 19 hrs

Matt – 148 hrs
  1.1 Review General Air Foil Theory – 6 hrs
  1.2 Preliminary Senior Design Coordinator Meetings – 2 hrs
  1.3 Examine Existing RC Planes – 12 hrs
  1.4.1 Select Competition Class – 10 hrs
  1.4.2 Review Selected Class Requirements – 10 hrs
  2.2.1 Existing Technology Review – 9 hrs
  2.2.2 Theoretical Propeller Design – 9 hrs
  2.2.3 Computer Aided Propeller Analysis – 9 hrs
  2.2.4 Physical Modeling – 9 hrs
  2.3.1 Aerodynamic Review – 6 hrs
  2.3.2 Theoretical Fuselage Design – 8 hrs
  2.3.3 Computer Aided Fuselage Design – 6 hrs
  2.3.4 Physical Modeling – 4 hrs
  3.1 Combine Wing, Propeller, Fuselage Models – 2 hrs
  3.2 Wind Tunnel Testing – 2 hrs
  3.3 Analyze Results – 1 hr
  4.1.1 Project Proposal – 4 hrs
  4.1.2 Semester Report – 14 hrs
  4.2.1 Project Proposal – 6 hrs
  4.2.2 Semester Report – 19 hrs

Brett – 148 hrs
  1.1 Review General Air Foil Theory – 6 hrs
  1.2 Preliminary Senior Design Coordinator Meetings – 2 hrs
  1.3 Examine Existing RC Planes – 12 hrs
  1.4.1 Select Competition Class – 10 hrs
  1.4.2 Review Selected Class Requirements – 10 hrs
  2.2.1 Existing Technology Review – 9 hrs
  2.2.2 Theoretical Propeller Design – 9 hrs
  2.2.3 Computer Aided Propeller Analysis – 9 hrs
  2.2.4 Physical Modeling – 9 hrs
  2.3.1 Aerodynamic Review – 6 hrs
  2.3.2 Theoretical Fuselage Design – 8 hrs
  2.3.3 Computer Aided Fuselage Design – 6 hrs
  2.3.4 Physical Modeling – 4 hrs
  3.1 Combine Wing, Propeller, Fuselage Models – 2 hrs
  3.2 Wind Tunnel Testing – 2 hrs
  3.3 Analyze Results – 1 hr
  4.1.1 Project Proposal – 4 hrs
  4.1.2 Semester Report – 14 hrs
  4.2.1 Project Proposal – 6 hrs
  4.2.2 Semester Report – 19 hrs

Tzvee – 145 hrs
  1.1 Review General Air Foil Theory – 6 hrs
  1.2 Preliminary Senior Design Coordinator Meetings – 2 hrs
  1.3 Examine Existing RC Planes – 12 hrs
  1.4.1 Select Competition Class – 10 hrs
  1.4.2 Review Selected Class Requirements – 10 hrs
  1.5 Establish Requirements Matrix – 4 hrs
  2.1.1 Review Existing Wing Designs – 8 hrs
  2.1.2 Select Basic Wing Layout – 9 hrs
  2.1.3 Theoretical Design of Wing – 18 hrs
  2.1.4 Computer Aided Wing Analysis – 14 hrs
  2.1.5 Physical Modeling – 10 hrs
  3.1 Combine Wing, Propeller, Fuselage Models – 2 hrs
  3.2 Wind Tunnel Testing – 2 hrs
  3.3 Analyze Results – 1 hr
  4.1.1 Project Proposal – 4 hrs
  4.1.2 Semester Report – 14 hrs
  4.2.1 Project Proposal – 0 hrs
  4.2.2 Semester Report – 19 hrs
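As a quick consistency check (a sketch, with the hour figures transcribed from the breakdown above), re-totaling each member's tasks reproduces the stated totals:

```python
# Task hours per member, transcribed in order from the breakdown above.
hours = {
    "Aaron": [6, 2, 12, 10, 10, 8, 9, 18, 14, 10, 2, 2, 1, 4, 14, 4, 19],
    "Matt":  [6, 2, 12, 10, 10, 9, 9, 9, 9, 6, 8, 6, 4, 2, 2, 1, 4, 14, 6, 19],
    "Brett": [6, 2, 12, 10, 10, 9, 9, 9, 9, 6, 8, 6, 4, 2, 2, 1, 4, 14, 6, 19],
    "Tzvee": [6, 2, 12, 10, 10, 4, 8, 9, 18, 14, 10, 2, 2, 1, 4, 14, 0, 19],
}

for name, task_hours in hours.items():
    print(f"{name}: {sum(task_hours)} hrs")
# Expected: Aaron 145, Matt 148, Brett 148, Tzvee 145
```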

Frequency Distribution

Example: Marada Inn
Guests staying at Marada Inn were asked to rate the quality of their accommodations as excellent, above average, average, below average, or poor. The ratings provided by a sample of 20 guests are:

Above Average   Below Average   Average         Above Average   Above Average
Above Average   Below Average   Above Average   Below Average   Poor
Average         Poor            Above Average   Above Average   Excellent
Average         Above Average   Average         Above Average   Average

At the Review Meeting
• Reviewer 1 (Primary) introduces the application and presents the critique
• Reviewers 2 and 3 highlight additional issues and areas that significantly impact scores
• All section members join the discussion
• Summary by the Chair
• Assigned reviewers provide final scores, setting the range
• All section members provide final scores privately; if voting out of range, rationales are given (see the sketch below)
• Non-scoreable issues discussed: budget, data sharing plan, foreign applications, etc.
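A minimal sketch (hypothetical scores) of the out-of-range rule: the assigned reviewers' final scores set the range, and any member voting outside it is expected to state a rationale:

```python
# Hypothetical final scores from the assigned reviewers; these set the range.
assigned_scores = [2, 3, 3]
low, high = min(assigned_scores), max(assigned_scores)

# Hypothetical private votes from the rest of the section.
panel_votes = {"member_a": 2, "member_b": 3, "member_c": 5}
for member, vote in panel_votes.items():
    if not low <= vote <= high:
        print(f"{member} voted {vote}, outside range {low}-{high}: rationale required")
```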

The NIH Peer Review Process: SRG Meeting Procedures
• Discussion format
  – Members with conflicts excused
  – Initial levels of enthusiasm stated (assigned reviewers and discussants)
  – Primary reviewer explains project, strengths, weaknesses
  – Other assigned reviewers and discussants follow
  – Open discussion (full panel)
  – Levels of enthusiasm (assigned reviewers) re-stated
  – Individual SRG members vote
  – Other review considerations discussed (budget)

The NIH Peer Review Process: Pre-Meeting SRG Procedures
• SRO
  – Performs administrative review of applications
  – Recruits reviewers, arranges for meeting date and site
  – Assigns 3 SRG members to each application
  – Makes applications available to reviewers
    • Internet Assisted Review (IAR) site or on CDs
    • Usually about six weeks before the SRG meeting
  – Instructs reviewers in review procedures
  – Monitors posting of initial scores and critiques in IAR
Documents for reviewers are available at: http://grants.nih.gov/grants/peer/reviewer_guidelines.htm#general_guidelines

Vector Mechanics for Engineers: Dynamics (Seventh Edition) – Impact
• Impact: a collision between two bodies that occurs during a small time interval and during which the bodies exert large forces on each other.
• Line of impact: the common normal to the surfaces in contact during the impact.
• Central impact: an impact in which the mass centers of the two bodies lie on the line of impact; otherwise, it is an eccentric impact.
• Direct impact: an impact in which the velocities of the two bodies are directed along the line of impact (worked relations sketched below).
• Oblique impact: an impact in which one or both of the bodies move along a line other than the line of impact.
[Figures: direct central impact; oblique central impact]
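For direct central impact, the standard textbook relations (conservation of momentum plus a coefficient of restitution e) determine the post-impact velocities. These formulas are not on the slide itself, so take this as a supplementary sketch:

```python
def direct_central_impact(ma, va, mb, vb, e):
    """Post-impact velocities for direct central impact.

    Uses conservation of momentum, ma*va + mb*vb = ma*va2 + mb*vb2,
    and the coefficient of restitution, e = (vb2 - va2) / (va - vb),
    with all velocities measured along the line of impact and 0 <= e <= 1.
    """
    p = ma * va + mb * vb  # total momentum, conserved through the impact
    va2 = (p - mb * e * (va - vb)) / (ma + mb)
    vb2 = (p + ma * e * (va - vb)) / (ma + mb)
    return va2, vb2

# Equal masses, perfectly elastic (e = 1): the bodies exchange velocities.
print(direct_central_impact(1.0, 2.0, 1.0, 0.0, 1.0))  # (0.0, 2.0)
```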

Post-Meeting Review Process
• Scores are provided to investigators within 3 working days
• Summary Statements for discussed and scored applications include the Resume & Summary of Discussion, (largely unedited) critiques, and other recommendations (e.g., budget)
• Summary Statements for lower-half (Not Discussed) applications include (largely unedited) critiques and review criterion scores but no overall impact scores
• All Summary Statements are made available within 30 days of the meeting (10 days for new investigators' R01s)

Usage-Based Reading (UBR) vs. Checklist-Based Reading (CBR)
Results: UBR is significantly more efficient and effective than CBR
• UBR finds more crucial and important faults per time unit
• UBR finds a larger share of faults
• UBR reviewers spent an average of 6.5 minutes less in preparation and 4 minutes less in inspection
• UBR reviewers found twice as many crucial faults per hour as CBR reviewers
• UBR reviewers identified an average of 21% more faults than CBR reviewers
• CBR discovered 63% more unimportant faults
Which means… CBR wastes effort searching for unimportant issues

Understanding the Equity Summary Score Methodology (continued)

1. Equity Summary Scorecard Summary: A Total Return by Sentiment chart shows how a theoretical portfolio of stocks in each of the five sentiments performed within the selected time period. For example, the bright green bar represents the performance of all the Very Bullish stocks. Provided for comparison are the performance of the First Call Consensus Recommendation of Strong Buy, the average of all stocks with an Equity Summary Score, and the S&P 500 Total Return Index.

2. Performance by Sector and Market Cap: Fidelity customers have access to more in-depth analysis of the Equity Summary Score universe and performance. The Total Return by Sector chart provides the historical performance of a theoretical portfolio of Very Bullish stocks in each sector over the time period selected. For comparison, the average performance by sector of all stocks with an Equity Summary Score during the time period is also provided. The Total Return by Market Cap chart shows the historical performance by market capitalization for stocks with an Equity Summary Score of Very Bullish as compared to typical market benchmarks, as well as the averages for the largest 500 stocks, the next smaller 400 stocks, and the next smaller 600 stocks by market capitalization. The last table is the Equity Summary Score universe distribution for the reporting month by market capitalization and score.

Important Information on Monthly Performance Calculations by StarMine
• The set of covered stocks and ratings is established as of the second-to-last trading day of a given month. For a stock to be included in the scorecard calculations, it must have an Equity Summary Score as of the second-to-last trading day of the month. The positions are assumed to be entered into on the last trading day of the month and, if necessary, exited on the last trading day of the next month.
• The scorecard calculations use the closing price as of the last trading day of the month. The calculations assume StarMine exits old positions and enters new ones at the same time, at closing prices, on the last trading day of a given month. The calculations assume 100% investment at all times.
• The 1-Year Total Return by Market Cap table breakpoints for the largest 500 stocks (large cap), the next 400 (mid cap), and the next 600 (small cap) are also established as of the end of trading on the second-to-last trading day of a given month.
• The calculation of performance assumes an equal-dollar-weighted portfolio of stocks, i.e., the theoretical investment allocated to each stock is the same.
• Performance in a given month for a given stock is calculated as the ending price less the starting price, divided by the starting price, where the starting price is the closing price as of the last trading day of the prior month (see the sketch below). Prices incorporate any necessary adjustments for dividends and corporate actions (e.g., splits or spinoffs).
• The performance of a given tier of rated stocks is calculated by adding up the performance of all stocks within that tier, then dividing by the total number of stocks in the tier.
• The process for the next month begins again by looking at Equity Summary Scores as of the second-to-last trading day of the new month, placing stocks into their given tiers, and starting the process all over again.
• It is important to note that the “theoretical” portfolio rebalancing process that StarMine performs between the end of one month and the beginning of the next is, for the purposes of the scorecard, a cost-free process. This means that no commissions or other transaction costs (e.g., bid/ask spreads) are included in the calculations.
• If a customer attempted to track portfolios of stocks similar to those included in the scorecard, their returns would likely differ due to transaction costs as well as different purchase and sale prices received when buying or selling stocks.
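A short sketch of the scorecard arithmetic described above (hypothetical prices; dividend and split adjustments assumed already applied):

```python
def monthly_return(start_price, end_price):
    # (ending price - starting price) / starting price, where the starting
    # price is the closing price on the last trading day of the prior month.
    return (end_price - start_price) / start_price

def tier_performance(returns):
    # Equal-dollar-weighted tier: the simple average of member returns.
    return sum(returns) / len(returns)

# Hypothetical "Very Bullish" tier with two stocks.
very_bullish = [monthly_return(50.0, 53.0), monthly_return(20.0, 19.5)]
print(f"Very Bullish tier: {tier_performance(very_bullish):+.2%}")  # +1.75%
```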

Frequency Distribution: Categorical Data

Example: Marada Inn customer ratings

Below Average   Above Average   Above Average   Average         Above Average
Average         Above Average   Average         Above Average   Below Average
Poor            Excellent       Above Average   Average         Above Average
Above Average   Below Average   Poor            Above Average   Average

Count the ratings in each category.
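A minimal sketch of the tally, applying Python's Counter to the 20 ratings listed above:

```python
from collections import Counter

# The 20 ratings, transcribed in order from the example above.
ratings = [
    "Below Average", "Above Average", "Above Average", "Average",
    "Above Average", "Average", "Above Average", "Average",
    "Above Average", "Below Average", "Poor", "Excellent",
    "Above Average", "Average", "Above Average", "Above Average",
    "Below Average", "Poor", "Above Average", "Average",
]

counts = Counter(ratings)
for category in ["Excellent", "Above Average", "Average", "Below Average", "Poor"]:
    print(f"{category:>13}: {counts[category]:2d} ({counts[category] / len(ratings):.0%})")
# Excellent 1, Above Average 9, Average 5, Below Average 3, Poor 2
```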