Functions are not Methods




Six Sigma Improvement Model
1. Define: Determine the current process characteristics critical to customer satisfaction and identify any gaps.
2. Measure: Quantify the work the process does that affects the gap.
3. Analyze: Use data on the measures to perform process analysis.
4. Improve: Modify or redesign existing methods to meet the new performance objectives.
5. Control: Monitor the process to make sure high performance levels are maintained.
© 2007 Pearson Education




"Gap Hill Climbing": mathematical analysis One way to increase the size of the functional gaps is to hill climb the standard deviation of the functional, F (hoping that a "rotation" of d toward a higher STDev would increase the likelihood that gaps would be larger ( more dispersion allows for more and/or larger gaps). This is very general. We are more interested in growing the one particular gap of interest (largest gap or largest thinning). To do this we can do as follows: F-slices are hyperplanes (assuming F=dotd) so it would makes sense to try to "re-orient" d so that the gap grows. Instead of taking the "improved" p and q to be the means of the entire n-dimensional half-spaces which is cut by the gap (or thinning), take as p and q to be the means of the F-slice (n-1)-dimensional hyperplanes defining the gap or thinning. This is easy since our method produces the pTree mask of each F-slice ordered by increasing F-value (in fact it is the sequence of F-values and the sequence of counts of points that give us those value that we use to find large gaps in the first place.). The d2-gap is much larger than the d1=gap. It is still not the optimal gap though. Would it be better to use a weighted mean (weighted by the distance from the gap - that is weighted by the d-barrel radius (from the center of the gap) on which each point lies?) In this example it seems to make for a larger gap, but what weightings should be used? (e.g., 1/radius2) (zero weighting after the first gap is identical to the previous). Also we really want to identify the Support vector pair of the gap (the pair, one from one side and the other from the other side which are closest together) as p and q (in this case, 9 and a but we were just lucky to draw our vector through them.) We could check the d-barrel radius of just these gap slice pairs and select the closest pair as p and q??? 0 1 2 3 4 5 6 7 8 9 a b c d e f 1 0 2 3 4 5 6 7 8 =p 9 d 2-gap d2 d 1-g ap j d k c e m n r f s o g p h i d1 l q f e d c b a 9 8 7 6 5 4 3 2 1 0 0 1 2 3 4 5 6 7 8 9 a b c d e f 1 2 3 4 5p 6 7 8 9d 1-g ap d 2-gap a q=b d2 f e d c b a 9 8 7 6 5 4 3 2 1 0 a b d d1 j k qc e q f




OPERATIONS: STATEMENT OF GOALS FOR 2014-2015

Conference Services: Meet or exceed budget revenue goal of $360k. Based on what we have learned from our data, during FY'15 we plan to concentrate on securing repeat customers (particularly larger groups) and increasing sales/marketing initiatives in the Greater DC area by establishing new contacts and reaching out to new groups that better fit our facility profile and mission. We will limit camps and emphasize corporate and educational conferences (targeting women 13 to 18) to better serve our long-term fiscal needs and our mission.

Trinity Center: Meet or exceed budget revenue goal of $566k. Based on an analysis of the data, we recognize that average total membership volume in FY'15 must increase from the low 400s to an approximate target of 530, and we have a plan to meet that goal. Efforts have already begun to increase revenue in areas that experienced shortfalls in FY'14, specifically special events, field rental, class revenue, and basketball court revenue.

Dining Services: Improve customer satisfaction. The data clearly indicates the need to maintain a consistent and much higher level of customer satisfaction with campus dining. In FY'15 this will be accomplished by improving menu selections, improving food offerings (such as using more in-season local produce), increasing special theme meals, and improving communication with students. Progress will be measured by two Sodexo customer surveys to be conducted in FY'15.

Bookstore: Slow the trend of declining textbook unit sales from -12% to -6%. The data clearly shows a declining trend in on-campus textbook sales. To address the issue, Barnes & Noble will better communicate to students its cost-saving formats, online purchasing options, and competitive pricing as alternatives to other available textbook purchasing options. We will investigate implementing the Barnes & Noble Freshman Connection program, a custom informational email campaign supported through social media and on-campus orientation sessions. The campaign seeks to educate students about their options for textbook savings (used, rental, and digital) and to answer questions about the textbook buying process, something that is new to a majority of first-time college students. Where Barnes & Noble College has deployed this messaging at similar-sized institutions, it has generated additional revenue of between $14,000 and $44,000 annually.

Facilities Services: Improve work order response. After analyzing two years of work order system data, next year's work order goals will focus on completing non-emergency work orders within 24 hours (instead of the current 36) and continuing to complete moving requests within 3 days. A specific effort will be made to work with the residential life staff to reduce their preventative maintenance work order requests by 20%; Main Hall work orders will be reduced by 10%. Work-order-related customer satisfaction survey information is now collected and quantified by Aramark management. The customer survey response rate of 3% is an area that needs significant improvement.

Academic Center Construction Project: Meet schedule milestones. The data reflects that we have maintained a realistic but aggressive construction schedule. During the summer of 2014 we will continue to work with DC and local utilities on their permitting processes. Excavation is to begin in the fall of 2014. By the summer of 2015 we anticipate that the exterior building structure will be complete and the façade and roof work finalized.




"Gap Hill Climbing": mathematical analysis One way to increase the size of the functional gaps is to hill climb the standard deviation of the functional, F (hoping that a "rotation" of d toward a higher STDev would increase the likelihood that gaps would be larger ( more dispersion allows for more and/or larger gaps). We can also try to grow one particular gap or thinning using support pairs as follows: F-slices are hyperplanes (assuming F=dotd) so it would makes sense to try to "re-orient" d so that the gap grows. Instead of taking the "improved" p and q to be the means of the entire n-dimensional half-spaces which is cut by the gap (or thinning), take as p and q to be the means of the F-slice (n-1)-dimensional hyperplanes defining the gap or thinning. This is easy since our method produces the pTree mask of each F-slice ordered by increasing F-value (in fact it is the sequence of F-values and the sequence of counts of points that give us those value that we use to find large gaps in the first place.). The d2-gap is much larger than the d1=gap. It is still not the optimal gap though. Would it be better to use a weighted mean (weighted by the distance from the gap - that is weighted by the d-barrel radius (from the center of the gap) on which each point lies?) In this example it seems to make for a larger gap, but what weightings should be used? (e.g., 1/radius2) (zero weighting after the first gap is identical to the previous). Also we really want to identify the Support vector pair of the gap (the pair, one from one side and the other from the other side which are closest together) as p and q (in this case, 9 and a but we were just lucky to draw our vector through them.) We could check the d-barrel radius of just these gap slice pairs and select the closest pair as p and q??? 0 1 2 3 4 5 6 7 8 9 a b c d e f 1 0 2 3 4 5 6 7 8 =p 9 d 2-gap d2 d 1-g ap j d k c e m n r f s o g p h i d1 l q f e d c b a 9 8 7 6 5 4 3 2 1 0 0 1 2 3 4 5 6 7 8 9 a b c d e f 1 2 3 4 5p 6 7 8 9d 1-g ap d 2-gap a q=b d2 f e d c b a 9 8 7 6 5 4 3 2 1 0 a b d d1 j k qc e q f
View full slide show




"Gap Hill Climbing": mathematical analysis 1. To increase gap size, we hill climb the standard deviation of the functional, F (hoping that a "rotation" of d toward a higher StDev would increase the likelihood that gaps would be larger since more dispersion allows for more and/or larger gaps. This is very heuristic but it works. 2. We are more interested in growing the largest gap(s) of interest ( or largest thinning). To do this we could do: F-slices are hyperplanes (assuming F=dotd) so it would makes sense to try to "re-orient" d so that the gap grows. Instead of taking the "improved" p and q to be the means of the entire n-dimensional half-spaces which is cut by the gap (or thinning), take as p and q to be the means of the F-slice (n-1)-dimensional hyperplanes defining the gap or thinning. This is easy since our method produces the pTree mask of each F-slice ordered by increasing F-value (in fact it is the sequence of F-values and the sequence of counts of points that give us those value that we use to find large gaps in the first place.). The d2-gap is much larger than the d1=gap. It is still not the optimal gap though. Would it be better to use a weighted mean (weighted by the distance from the gap - that is weighted by the d-barrel radius (from the center of the gap) on which each point lies?) In this example it seems to make for a larger gap, but what weightings should be used? (e.g., 1/radius2) (zero weighting after the first gap is identical to the previous). Also we really want to identify the Support vector pair of the gap (the pair, one from one side and the other from the other side which are closest together) as p and q (in this case, 9 and a but we were just lucky to draw our vector through them.) We could check the d-barrel radius of just these gap slice pairs and select the closest pair as p and q??? 0 1 2 3 4 5 6 7 8 9 a b c d e f 1 0 2 3 4 5 6 7 8 =p 9 d 2-gap d2 d 1-g ap j d k c e m n r f s o g p h i d1 l q f e d c b a 9 8 7 6 5 4 3 2 1 0 0 1 2 3 4 5 6 7 8 9 a b c d e f 1 2 3 4 5p 6 7 8 9d 1-g ap d 2-gap a q=b d2 f e d c b a 9 8 7 6 5 4 3 2 1 0 a b d d1 j k qc e q f
View full slide show




VOMmean w F=(DPP-MN)/4 Concrete4150(C, W, FA, Ag) 0 1 1 1 5 1 6 1 7 1 8 4 med=14 9 1 10 1 11 2 12 1 13 5 14 1 15 3 med=18 16 3 17 4 18 1 19 3 20 9 21 4 22 3 23 7 24 2 med=40 25 4 26 8 27 7 28 7 med=56 29 10 30 3 31 1 32 3 33 6 med=61 34 4 35 5 37 2 38 2 40 1 42 3 43 1 44 1 45 1 46 4 ______ CLUS 4 gap=7 49 1 56 1 [52,74) 0L 7M 0H CLUS_3 58 1 61 1 65 1 66 1 69 1 ______ gap=6 71 1 77 1 [74,90) 0L 4M 0H CLUS_2 80 1 83 1 ________ gap=14 86 1[0.90) 43L 46 M 55H 100 1 [90,113) 0L 6M 0H CLUS_1 103 1 105 1 108 2 112 1 _____________At this level, FinalClus1={17M} 0 errors C1 C2 C3 C4 med=10 med=9 med=17 med=21 med=23 med=34 med=33 med=57 med=62 med=71 med=71 med=86 CLUS 4 (F=(DPP-MN)/2, Fgap2 _______ 0L 0M 3H CLUS 4.4.1 gap=7 0 3 =0 0L 0M 4H CLUS 4.4.2 gap=2 7 4 =7 9 1 [8,14] 1L 5M 22H CLUS 4.4.3 1L+5M err H 10 12 11 8 gap=3 12 7 ______ 0L 0M 4H CLUS 4.3.1 gap=3 15 4 =15 18 10 0L 0M 10H CLUS 4.3.2 gap=3 21 3 =18 22 7 ______ 23 2 [20,24) 0L 10M 2H CLUS 4.7.2 gap=2 25 2 [24,30) 10L 0M 0H CLUS_4.7.1 26 3 27 1 28 2 gap=2 29 1 31 3 CLUS 4.2.1 gap=2 32 1 [30,33] 0L 4M 0H Avg=32.3 34 2 0L 2M 0H CLUS 4.2.2 gap=6 40 4 =34 ______ 0L 4M 0H CLUS_4.2.3 gap=7 47 3 =40 52 1 0L 3M 0H CLUS_4.2.4 gap=5 53 3 =47 54 3 55 4 56 2 57 3 ______ gap=2 58 1 [50,59) 12L 1M 4H CLUS 4.8.1 L60 2 8L 0M 0H CLUS_4.8.2 61 2 [59,63) gap=2 62 4 ______ =64 2L 0M 2H CLUS 4.6.1 gap=3 64 4 [66,70) 10L 0M 0H CLUS 4.6.2 67 2 gap=3 68 1 71 7 ______ gap=7 72 3 [70,79) 10L 0M 0H CLUS_4.5 79 5 5L 0M 0H CLUS_4.1.1 gap=6 85 1 =79 87 2 [74,90) 2L 0M 1H CLUS_4.1 1 Merr in L Median=0 Avg=0 Median=7 Avg=7 Median=11 Avg=10.7 Median=15 Avg=15 Median=18 Avg=18 Median=22 Avg=22 2H errs in L Median=26 Avg=26 Median=31 Median=34 Avg=34 Median=40 Avg=40 Median=47 Avt=47 Accuracy=90% Median=55 Avg=55 1M+4H errs in Median=61.5 Avg=61.3 Median=64 Avg=64 2 H errs in L Median=67 Avg=67.3 Median=71 Avg=71.7 Median=79 Avg=79 Median=87 Avg=86.3 Suppose we know (or want) 3 clusters, Low, Medium and High Strength. Then we find Suppose we know that we want 3 strength clusters, Low, Medium and High. We can use an antichain that gives us exactly 3 subclusters two ways, one show in brown and the other in purple Which would we choose? The brown seems to give slightly more uniform subcluster sizes. Brown error count: Low (bottom) 11, Medium (middle) 0, High (top) 26, so 96/133=72% accurate. The Purple error count: Low 2, Medium 22, High 35, so 74/133=56% accurate. What about agglomerating using single link agglomeration (minimum pairwise distance? Agglomerate (build dendogram) by iteratively gluing together clusters with min Median separation. Should I have normalize the rounds? Should I have used the same Fdivisor and made sure the range of values was the same in 2nd round as it was in the 1st round (on CLUS 4)? Can I normalize after the fact, I by multiplying 1st round values by 100/88=1.76? Agglomerate the 1st round clusters and then independently agglomerate 2nd round clusters? CONCRETE