Process Capability Index, Cpk

The process capability index, Cpk, measures the potential for a process to generate defective outputs relative to either the upper or lower specification:

    Cpk = Minimum of [ (x̄ - Lower specification) / 3σ , (Upper specification - x̄) / 3σ ]

where x̄ is the process average and σ is the process standard deviation. We take the minimum of the two ratios because it gives the worst-case situation.




Jagged arrays

Jagged arrays are implemented as arrays of arrays: each row has its own descriptor recording the index type, index lower bound, index upper bound, and the address of the row's elements, so the rows may have different lengths. [Figure: five row descriptors, each holding index type, index lower bound, index upper bound, and element address; the illustrated rows have lengths 4, 3, 7, 4, and 5.]
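As a concrete illustration, here is a minimal C sketch of a jagged array built as an array of row pointers, with each row allocated to its own length. The row lengths (4, 3, 7, 4, 5) follow the figure; all variable names are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Row lengths matching the figure's illustration */
    int lengths[] = {4, 3, 7, 4, 5};
    int nrows = 5;

    /* A jagged array is an array of pointers, each to a row of its own length */
    int **jagged = malloc(nrows * sizeof *jagged);
    for (int r = 0; r < nrows; r++) {
        jagged[r] = malloc(lengths[r] * sizeof *jagged[r]);
        for (int c = 0; c < lengths[r]; c++)
            jagged[r][c] = r * 10 + c;   /* arbitrary fill values */
    }

    /* Each row is traversed independently using its own length */
    for (int r = 0; r < nrows; r++) {
        for (int c = 0; c < lengths[r]; c++)
            printf("%d ", jagged[r][c]);
        printf("\n");
        free(jagged[r]);
    }
    free(jagged);
    return 0;
}
```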




Simple and Compound Events

Example #4: A box contains a certain number of computer parts, a few of which are defective. Two parts are selected at random from this box and inspected to determine whether they are good or defective. List all the outcomes included in each of the following events, and indicate which are simple and which are compound events.

a) At least one part is good.
b) Exactly one part is defective.
c) The first part is good and the second is defective.
d) At most one part is good.

Solution: Let D = a defective part and G = a good part. The experiment has the following outcomes:

DD = both parts are defective
DG = the 1st part is defective and the 2nd is good
GG = both parts are good
GD = the 1st part is good and the 2nd is defective

a) At least one part is good = {DG, GG, GD} (compound event)
b) Exactly one part is defective = {DG, GD} (compound event)
c) The 1st part is good and the 2nd is defective = {GD} (simple event)
d) At most one part is good = {DD, DG, GD} (compound event)
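For readers who like to check such enumerations mechanically, here is a small C sketch (our own construction, not from the original slide) that walks the four outcomes and tests membership in each event:

```c
#include <stdio.h>

int main(void)
{
    /* The sample space: first letter = 1st part, second letter = 2nd part */
    const char *outcomes[] = {"DD", "DG", "GG", "GD"};

    for (int i = 0; i < 4; i++) {
        const char *o = outcomes[i];
        int goods = (o[0] == 'G') + (o[1] == 'G');
        int defects = 2 - goods;

        printf("%s:", o);
        if (goods >= 1)                 printf(" [at least one good]");
        if (defects == 1)               printf(" [exactly one defective]");
        if (o[0] == 'G' && o[1] == 'D') printf(" [first good, second defective]");
        if (goods <= 1)                 printf(" [at most one good]");
        printf("\n");
    }
    return 0;
}
```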




RAPTOR Syntax and Semantics - Arrays

Array variables are used to store many values (of the same type) without needing many variable names. Instead of separate names, a count-controlled loop is used to access (index) the individual elements (values) of an array variable. RAPTOR has one- and two-dimensional arrays of numbers. A one-dimensional array can be thought of as a sequence (or a list); a two-dimensional array can be thought of as a table (grid or matrix).

To create an array variable in RAPTOR, simply use it like an array variable, i.e. with an index: Score[1], Values[x], Matrix[3,4], etc. All array variables are indexed starting with 1 and go up to the largest index used so far; RAPTOR array variables grow in size as needed.

The assignment statement

    GPAs[24] ← 4.0

assigns the value 4.0 to the 24th element of the array GPAs. If the array variable GPAs had not been used before, the other 23 elements of the GPAs array are initialized to 0 at the same time, so GPAs holds 0 in positions 1 through 23 and 4.0 in position 24.

This initialization of previous elements to 0 happens only when the array variable is created. Successive assignment statements to the GPAs variable affect only the individual element listed. For example, the successive assignments

    GPAs[20] ← 1.7
    GPAs[11] ← 3.2

would place the value 1.7 into the 20th position of the array and the value 3.2 into the 11th position, leaving position 24 at 4.0 and all other positions at 0.

An array variable name, like GPAs, refers to ALL elements of the array. Adding an index (position) to the array variable enables you to refer to any specific element of the array variable. Two-dimensional arrays work similarly, i.e. Table[7,2] refers to the element in the 7th row and 2nd column. Individual elements of an array can be used exactly like any other variable; e.g. the array element GPAs[5] can be used anywhere the number variable X can be used.

The Length_Of function can be used to determine (and return) the number of elements associated with a particular array variable. For example, after all the assignments above, Length_Of(GPAs) is 24.

Array variables in action: arrays and count-controlled loop statements were made for each other. In each example below, notice the connection between the loop control variable and the array index, notice how the Length_Of function can be used in the count-controlled loop test, and notice that each loop has an Initialize, Test, Execute, and Modify part (I.T.E.M.). (A C translation of two of these loops follows the examples.)

Assigning values to an array variable:
    Index ← 1
    Loop
        GPAs[Index] ← 4.0
        Exit when Index >= 24
        Index ← Index + 1

Reading values into an array variable:
    Index ← 1
    Loop
        PUT "Enter the GPA of student " + Index + ": "
        GET GPAs[Index]
        Exit when Index >= 24
        Index ← Index + 1

Writing out an array variable's values:
    Index ← 1
    Loop
        PUT "The value of the array at position " + Index + " is " + GPAs[Index]
        Exit when Index >= Length_Of(GPAs)
        Index ← Index + 1

Computing the total and average of an array variable's values:
    Total ← 0
    Index ← 1
    Loop
        Total ← Total + GPAs[Index]
        Exit when Index >= Length_Of(GPAs)
        Index ← Index + 1
    Average ← Total / Length_Of(GPAs)

Finding the largest value of all the values in an array variable:
    Highest_GPA ← GPAs[1]
    Index ← 1
    Loop
        If GPAs[Index] > Highest_GPA then Highest_GPA ← GPAs[Index]
        Exit when Index >= Length_Of(GPAs)
        Index ← Index + 1
    PUT "The highest GPA is " + Highest_GPA

Finding the INDEX of the largest value of all the values in an array variable:
    Highest_GPA_Index ← 1
    Index ← 1
    Loop
        If GPAs[Index] >= GPAs[Highest_GPA_Index] then Highest_GPA_Index ← Index
        Exit when Index >= Length_Of(GPAs)
        Index ← Index + 1
    PUT "The highest GPA is " + GPAs[Highest_GPA_Index] + " it is at position " + Highest_GPA_Index

Initializing the elements of a two-dimensional array (a two-dimensional array requires two loops):
    Row ← 1
    Loop
        Column ← 1
        Loop
            Matrix[Row, Column] ← 1
            Exit when Column >= 20
            Column ← Column + 1
        Exit when Row >= 20
        Row ← Row + 1
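Since RAPTOR is a teaching language, it may help to see two of the loops above in a conventional language. Below is a rough C translation under two stated assumptions: C arrays are 0-based (RAPTOR's are 1-based), and C has no Length_Of, so the length is tracked in a constant.

```c
#include <stdio.h>

#define NUM_STUDENTS 24   /* stands in for RAPTOR's Length_Of(GPAs) */

int main(void)
{
    double gpas[NUM_STUDENTS] = {0};  /* all elements start at 0, as in RAPTOR */
    double total = 0.0;

    gpas[23] = 4.0;   /* RAPTOR's GPAs[24]: the last element (0-based index 23) */

    /* Total and average (RAPTOR's Total/Average loop) */
    for (int i = 0; i < NUM_STUDENTS; i++)
        total += gpas[i];
    double average = total / NUM_STUDENTS;

    /* Find the index of the largest value */
    int highest = 0;
    for (int i = 1; i < NUM_STUDENTS; i++)
        if (gpas[i] > gpas[highest])
            highest = i;

    printf("Average GPA: %.2f\n", average);
    printf("Highest GPA %.2f at position %d\n", gpas[highest], highest + 1);
    return 0;
}
```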




Intensive Care Lab: Assessing Process Capability (Example 6.5)

Upper specification = 30.0 minutes
Lower specification = 20.0 minutes
Average service x̄ = 26.2 minutes
σ = 1.35 minutes

Process Capability Index:

    Cpk = Minimum of [ (x̄ - Lower specification) / 3σ , (Upper specification - x̄) / 3σ ]
        = Minimum of [ (26.2 - 20.0) / 3(1.35) , (30.0 - 26.2) / 3(1.35) ]
        = Minimum of [ 1.53 , 0.94 ]
        = 0.94




Intensive Care Lab: Assessing Process Capability (Example 6.5, continued)

Process Capability Ratio:

    Cp = (Upper specification - Lower specification) / 6σ = (30 - 20) / 6(1.35) = 1.23

This does not meet the 4σ target (Cp = 1.33).

Before Process Modification:
    Upper specification = 30.0 minutes, Lower specification = 20.0 minutes
    Average service x̄ = 26.2 minutes, σ = 1.35 minutes
    Cpk = 0.94, Cp = 1.23

After Process Modification:
    Upper specification = 30.0 minutes, Lower specification = 20.0 minutes
    Average service x̄ = 26.1 minutes, σ = 1.2 minutes
    Cpk = 1.08, Cp = 1.39
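The before/after numbers can be reproduced with a few lines of C. This sketch (the variable and function names are our own) implements the Cp and Cpk formulas from the slides above:

```c
#include <stdio.h>

/* Process capability ratio: (USL - LSL) / 6*sigma */
static double cp(double usl, double lsl, double sigma)
{
    return (usl - lsl) / (6.0 * sigma);
}

/* Process capability index: the worse (smaller) of the two one-sided ratios */
static double cpk(double usl, double lsl, double mean, double sigma)
{
    double lower = (mean - lsl) / (3.0 * sigma);
    double upper = (usl - mean) / (3.0 * sigma);
    return lower < upper ? lower : upper;
}

int main(void)
{
    /* Example 6.5: before and after the process modification */
    printf("Before: Cp = %.2f, Cpk = %.2f\n",
           cp(30.0, 20.0, 1.35), cpk(30.0, 20.0, 26.2, 1.35));
    printf("After:  Cp = %.2f, Cpk = %.2f\n",
           cp(30.0, 20.0, 1.2),  cpk(30.0, 20.0, 26.1, 1.2));
    return 0;
}
```

Compiled and run, this prints Cp = 1.23 and Cpk = 0.94 before the modification, and Cp = 1.39 and Cpk = 1.08 after, matching the slide.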




Slightly Less Simple Loops (C)

/* These loops assume <math.h> for pow, sqrt, cos, exp, and log,
   and integer-typed arrays for the % (remainder) loop. */

for (index = 0; index < length; index++) {
    dst[index] = pow(src1[index], src2[index]);
}
for (index = 0; index < length; index++) {
    dst[index] = src1[index] % src2[index];
}
for (index = 0; index < length; index++) {
    dst[index] = sqrt(src[index]);
}
for (index = 0; index < length; index++) {
    dst[index] = cos(src[index]);
}
for (index = 0; index < length; index++) {
    dst[index] = exp(src[index]);
}
for (index = 0; index < length; index++) {
    dst[index] = log(src[index]);
}




Terms

• Risk Analysis
  • The process of evaluating a decision in the face of uncertainty by quantifying the likelihood and magnitude of undesirable outcomes.
• What-if Analysis
  • Considering alternative values for a random variable and computing the model output.
  • We will set up a range of values for each random variable.
  • A trial-and-error approach to learning about the range of possible outputs for a model.
• Base-case scenario
  • Determining outputs assuming the most likely values for the random variables of the model.
• Worst-case scenario
  • Determining outputs assuming the worst values that can be expected for the random variables of the model.
• Best-case scenario
  • Determining outputs assuming the best values that can be expected for the random variables of the model.
• With the base, best, and worst case scenarios, the what-if analysis yields a range of values for the decision maker.
• When we run a simulation we can look at frequency distributions, histograms, and relative frequency distributions that help us see the full picture of the range of possible values and the relative frequency of each value. In this way simulation provides the decision maker with a more complete picture in the face of uncertainty than a simple what-if analysis with best, base, and worst case scenarios (a minimal Monte Carlo sketch follows below).
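To make the contrast with scenario analysis concrete, here is a minimal Monte Carlo sketch in C. The profit model, its parameter values, and the demand range are hypothetical; the point is that sampling the random variable many times yields a relative frequency distribution of outputs rather than three isolated scenarios.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const int TRIALS = 100000;
    const double PRICE = 10.0, UNIT_COST = 6.0, FIXED_COST = 1500.0;
    int bins[10] = {0};   /* frequency distribution of profit */

    srand((unsigned)time(NULL));
    for (int t = 0; t < TRIALS; t++) {
        /* demand uniform between a worst case (300) and a best case (900) */
        double demand = 300.0 + 600.0 * rand() / RAND_MAX;
        double profit = (PRICE - UNIT_COST) * demand - FIXED_COST;

        /* profit lies in [-300, 2100]; map it into 10 bins of width 240 */
        int b = (int)((profit + 300.0) / 240.0);
        if (b >= 0 && b < 10)
            bins[b]++;
    }
    for (int b = 0; b < 10; b++)
        printf("[%6.0f, %6.0f): relative freq %.3f\n",
               -300.0 + 240.0 * b, -300.0 + 240.0 * (b + 1),
               (double)bins[b] / TRIALS);
    return 0;
}
```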




Merge Sort Analysis

Worst-case time complexity for applying the merge function to a size-k subarray: M(k) = 18k - 7.

template <class Etype>
void merge(Etype source[], Etype dest[], int lower, int middle, int upper)
{
   int s1 = lower;                          // 1 TU
   int s2 = middle + 1;                     // 1 TU
   int d = lower;                           // 1 TU
   do
   {
      if (source[s1] < source[s2])          // If block: 14 TU
      {
         dest[d] = source[s1];
         s1++;
      }
      else
      {
         dest[d] = source[s2];
         s2++;
      }
      d++;                                  // 1 TU
   } while ((s1 <= middle) &&               // k-m iter. @ 3 TU
            (s2 <= upper));
   if (s1 > middle)                         // 1 TU
      do                                    // remaining iter. @ 6 TU
      {
         dest[d] = source[s2];
         s2++;
         d++;
      } while (s2 <= upper);
   else
      do                                    // m iter. @ 6 TU
      {
         dest[d] = source[s1];
         s1++;
         d++;
      } while (s1 <= middle);
}

Time complexity for applying the order function to a size-k subarray: R(k), where R(1) = 1 and R(k) = 5 + M(k) + 2R(k/2) = 18k - 2 + 2R(k/2). This recurrence relation yields R(k) = 18k log k - k + 2.

template <class Etype>
void order(Etype source[], Etype dest[], int lower, int upper)
{
   int middle;
   if (lower != upper)                           // 1 TU
   {
      middle = (lower + upper) / 2;              // 3 TU
      order(dest, source, lower, middle);        // R(k/2) TU
      order(dest, source, middle + 1, upper);    // R(k/2)+1 TU
      merge(source, dest, lower, middle, upper); // M(k) TU
   }
}

Time complexity for applying the mergeSort function to a size-n array: T(n) = 8n + 1 + R(n) = 18n log n + 7n + 3.

template <class Etype>
void mergeSort(Etype A[], const int n)
{
   Etype Acopy[n+1];                 // 1 TU
   for (int k = 1; k <= n; k++)      // n iter. @ 2 TU
      Acopy[k] = A[k];               // 6 TU
   order(Acopy, A, 1, n);            // R(n) TU
}

While this O(n log n) time complexity is favorable, the requirement of a duplicate array is detrimental to the Merge Sort algorithm, possibly making it less popular than certain alternative choices.




Worst-case Corner Model

• Worst-case four corner model
  o Conventionally, process variability is modeled on the basis of the worst-case four corners.

Corners for analog applications:
• For modeling worst-case speed: the slow NMOS, slow PMOS (SS) corner
• For modeling worst-case power: the fast NMOS, fast PMOS (FF) corner

Corners for digital applications:
• For modeling worst-case 1: the fast NMOS, slow PMOS (FS) corner
• For modeling worst-case 0: the slow NMOS, fast PMOS (SF) corner
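One way to make the mapping explicit in code is a small lookup table. This C sketch simply encodes the four corners listed above; the structure and names are illustrative, not part of any particular tool's API.

```c
#include <stdio.h>

/* Illustrative lookup table for the four worst-case corners described above */
typedef struct {
    const char *corner;  /* NMOS/PMOS speed combination */
    const char *models;  /* what the corner is used to model */
} CornerEntry;

int main(void)
{
    const CornerEntry corners[] = {
        {"SS (slow NMOS, slow PMOS)", "worst-case speed (analog)"},
        {"FF (fast NMOS, fast PMOS)", "worst-case power (analog)"},
        {"FS (fast NMOS, slow PMOS)", "worst-case 1 (digital)"},
        {"SF (slow NMOS, fast PMOS)", "worst-case 0 (digital)"},
    };
    for (size_t i = 0; i < sizeof corners / sizeof corners[0]; i++)
        printf("%-28s -> %s\n", corners[i].corner, corners[i].models);
    return 0;
}
```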




Flashlight example problem

Start state: [closed(case), closed(top), inside(batteries), defective(batteries), ok(light), unbroken(case)]
Goal conditions: [ok(batteries), ok(light), closed(case), closed(top)]

Plan (each operator's postconditions satisfy the next operator's preconditions):
disassemble_case → turn_over_case → replace_batteries → assemble_case

State after disassemble_case: [open(case), closed(top), inside(batteries), defective(batteries), ok(light), unbroken(case)]
State after turn_over_case: [outside(batteries), open(case), closed(top), defective(batteries), ok(light), unbroken(case)]
State after replace_batteries: [ok(batteries), inside(batteries), open(case), closed(top), ok(light), unbroken(case)]
State after assemble_case: [closed(case), ok(batteries), inside(batteries), closed(top), ok(light), unbroken(case)]
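The plan can be replayed in a STRIPS-like style, with fluents as bit flags and operators as precondition/delete/add masks. Only the operator names and the start/goal states come from the slide; the exact precondition and effect lists below are simplified assumptions for illustration.

```c
#include <stdio.h>

/* Fluents as bit flags */
enum {
    CLOSED_CASE   = 1 << 0, CLOSED_TOP   = 1 << 1, INSIDE_BATT  = 1 << 2,
    DEFECT_BATT   = 1 << 3, OK_LIGHT     = 1 << 4, UNBROKEN_CASE = 1 << 5,
    OPEN_CASE     = 1 << 6, OUTSIDE_BATT = 1 << 7, OK_BATT       = 1 << 8,
};

typedef struct { const char *name; unsigned pre, del, add; } Op;

/* Apply an operator: check preconditions, then remove delete-list
   fluents and insert add-list fluents */
static unsigned apply(unsigned state, const Op *op)
{
    if ((state & op->pre) != op->pre) {
        printf("precondition of %s not met\n", op->name);
        return state;
    }
    return (state & ~op->del) | op->add;
}

int main(void)
{
    unsigned state = CLOSED_CASE | CLOSED_TOP | INSIDE_BATT |
                     DEFECT_BATT | OK_LIGHT | UNBROKEN_CASE;
    const Op plan[] = {
        {"disassemble_case",  CLOSED_CASE, CLOSED_CASE, OPEN_CASE},
        {"turn_over_case",    OPEN_CASE | INSIDE_BATT, INSIDE_BATT, OUTSIDE_BATT},
        {"replace_batteries", OPEN_CASE | OUTSIDE_BATT,
                              OUTSIDE_BATT | DEFECT_BATT, INSIDE_BATT | OK_BATT},
        {"assemble_case",     OPEN_CASE, OPEN_CASE, CLOSED_CASE},
    };
    unsigned goal = OK_BATT | OK_LIGHT | CLOSED_CASE | CLOSED_TOP;

    for (int i = 0; i < 4; i++)
        state = apply(state, &plan[i]);
    printf("goal %s\n", (state & goal) == goal ? "achieved" : "not achieved");
    return 0;
}
```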




Seismic attribute-assisted interpretation of incised valley fill geometries: A case study of the Anadarko Basin Red Fork interval
Yoscel Suarez*, Chesapeake Energy and The University of Oklahoma, USA
Kurt J. Marfurt, The University of Oklahoma, USA
Mark Falk, Chesapeake Energy, USA
Al Warner, Chesapeake Energy, USA

Seismic Attribute Generation

Edge Detection

Coherence
According to Chopra and Marfurt (2007), coherence is a measure of similarity between waveforms or traces. Peyton et al. (1998) showed the value of this edge-detection attribute for identifying channel boundaries at the Red Fork level. Figure 11 shows the results of the modern coherence algorithm and the interpretation. The modern coherence algorithm is slightly superior: it shows additional features (blue arrows) and enhances the edge of Phase II (pink arrow). It also shows that the current outlines of Phase II could be modified in the encircled areas.
Figure 11. Modern coherence horizon slice at the Red Fork level.

Energy Weighted Coherent Amplitude Gradients
Chopra and Marfurt (2007), using a wedge model, demonstrate that waveform difference detection algorithms are insensitive to waveform changes below tuning frequencies. In this study the energy ratio coherence, defined as the coherent energy normalized by the total energy of the traces within the calculation window, and the Sobel coherence, a measure of relative changes in amplitude, were used. Figure 12 shows a horizon slice of the energy ratio coherence and the Sobel coherence at the Red Fork level. The results from these two energy-weighted routines are very similar to the coherence attribute; however, the level of detail of the coherence algorithm is greater in the encircled areas. Even though both algorithms show similar features, the Sobel coherence seems to be more affected by the acquisition footprint than the energy ratio coherence.
Figure 12. Other modern edge-detector attributes: a) Sobel coherence. b) Energy ratio coherence.

Curvature
Although successful in delineating channels in Mesozoic rocks in Alberta, Canada (Chopra and Marfurt, 2008), volumetric curvature does not provide images of additional interpretational value for this study. While the Red Fork channel boundaries can be delineated using this attribute (Figure 13), the results shown by the coherence and spectral decomposition are superior. In this situation the acquisition footprint negatively impacts the lateral resolution quality of the attribute.
Figure 13. Volumetric curvature at the Red Fork level. Blue arrows indicate channel edges.

Relative Acoustic Impedance
The Relative Acoustic Impedance (RAI) is a simplified inversion. This attribute is widely used for lithology discrimination and as a thickness-variation indicator. Since the RAI enhances impedance-contrast boundaries, it may help delimit different facies within an incised valley-fill complex. Figure 15 shows the better delineation of the different valley-fill episodes. The impedance amplitude variations within the system may be correlated to sand/shale ratios; higher values of RAI seem to be related to sandier intervals (black arrow).
Figure 15. Relative Acoustic Impedance (RAI) at the Red Fork level.

Spectral Decomposition
Matching pursuit spectral decomposition was used to generate individual frequency volumes as well as peak amplitude and peak frequency datasets. Castagna et al. (2003) discuss the value of matching pursuit spectral decomposition and how different "tuning frequencies" can be associated with different reservoir properties such as fluid content, thickness, and/or lithology. Figure 14 shows a matching pursuit 36 Hz spectral component at the Red Fork level. The level of detail using matching pursuit spectral decomposition is superior to that provided by the DFT.
Figure 14. 36 Hz matching pursuit spectral decomposition. Note the enhanced level of detail offered by the matching pursuit spectral decomposition. a) without geological interpretation, b) with geological interpretation.

Seismic Attribute Blending

Peak Frequency and Peak Amplitude Displays
Liu and Marfurt (2007) show that by combining the peak frequency and peak amplitude volumes extracted from the spectral decomposition analysis, the interpreter can identify highly tuned intervals. Low peak frequency values correlate with thicker intervals, and high peak frequencies with thinner features. Figures 16(a) and 16(b) show the peak frequency and peak amplitude volumes, respectively. Figure 16(c) shows the combination of both displays, which simplifies the interpretation of multiple volumes of data. Figure 16(d) shows the blended image with the geological interpretation overlain. This combination of attributes gives a better definition of the phase boundaries, especially Phase II in the NW corner of the survey, between the two valley branches. The changes in facies within Phase V are evident at the southernmost green arrow. The differentiation between Phase III and Phase V is sharper (northernmost green arrow). Outside of the incised valley system the lithology relationship with frequency is still unclear. The dashed orange lines show the proposed changes to the Phase II outline.
Figure 16. Peak frequency and peak amplitude analysis at the Red Fork level. (a) Peak frequency volume; red corresponds to higher frequencies. (b) Peak amplitude volume; white corresponds to higher peak amplitude values. (c) Peak frequency and peak amplitude blended volume; the co-rendered image shows valley-fill boundaries. (d) Co-rendered image with interpretation.

Amplitude Variability: Semblance of the Relative Acoustic Impedance
Chopra and Marfurt (2007) define semblance as "the ratio of the energy of the average trace to the average energy of all the traces along a specified dip." Since RAI has sharper facies boundaries, the semblance computed from RAI should be crisper than the semblance computed from the conventional seismic. Figure 17 shows the value of combining these attributes. Outside of the channel complex the lithology relationship with frequency is still unclear (red arrow). The yellow arrow points to a potential fluvial channel outside of the incised valley system. The dashed orange lines show the proposed changes to the Phase II outline.
Figure 17. a) The semblance of the RAI and b) the RAI and RAI-semblance blended image. The combination of both attributes helps delineate Relative Acoustic Impedance boundaries.
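The quoted semblance definition can be sketched directly in code. The following C example is our own construction (dip alignment is omitted and the trace values are invented); it computes the energy of the average trace divided by the average energy of all traces in a window, which yields values near 1 for similar traces and near 0 for dissimilar ones.

```c
#include <stdio.h>

#define NTRACES  3
#define NSAMPLES 4

/* Semblance: energy of the average trace / average energy of all traces */
static double semblance(const double traces[NTRACES][NSAMPLES])
{
    double avg_energy = 0.0;     /* average energy of all traces */
    double energy_of_avg = 0.0;  /* energy of the average trace */

    for (int t = 0; t < NSAMPLES; t++) {
        double mean = 0.0;
        for (int i = 0; i < NTRACES; i++) {
            mean += traces[i][t];
            avg_energy += traces[i][t] * traces[i][t];
        }
        mean /= NTRACES;
        energy_of_avg += mean * mean;
    }
    avg_energy /= NTRACES;
    return avg_energy > 0.0 ? energy_of_avg / avg_energy : 0.0;
}

int main(void)
{
    const double similar[NTRACES][NSAMPLES] = {
        {1.0, 2.0, 1.0, 0.0}, {1.1, 2.1, 0.9, 0.1}, {0.9, 1.9, 1.1, -0.1}};
    const double dissimilar[NTRACES][NSAMPLES] = {
        {1.0, 2.0, 1.0, 0.0}, {-1.0, 0.5, -2.0, 1.0}, {0.0, -1.5, 2.0, -1.0}};

    printf("semblance (similar traces):    %.3f\n", semblance(similar));
    printf("semblance (dissimilar traces): %.3f\n", semblance(dissimilar));
    return 0;
}
```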
Conclusions

This study has identified correlations between attribute expressions of Red Fork channels that can be applied to underexploited exploration areas in the Mid-continent, and to fluvial-deltaic channels in Paleozoic rocks in general. Regarding the key questions discussed at the beginning of this paper, we learned that the coherence and energy-weighted attributes help improve the resolution of subtle features like small channels and channel levees. They also help differentiate the cutbank from the gradational inner bank. It is also evident from this study that, even though there have been some improvements in the coherence routines, the differences between current algorithms and the ones applied by Peyton et al. in 1998 are minimal.

Additionally, detailed channel geomorphology and lithology discrimination were possible by introducing the spectral decomposition and relative acoustic impedance attributes into the analysis. On one hand, the use of spectral decomposition helped define different facies within the channel system and increased the resolution of channel boundaries. On the other hand, the variations in the RAI values were found to correlate with lithology infill; for instance, higher values of RAI show a direct relationship to shalier intervals within the channel complex.

One of the key findings of this study is the great value that blended images of attributes bring to the interpreter. Such technology was not available ten years ago. But today, by combining multiple attributes, fluvial facies delineation is possible when co-rendering edge detection attributes with lithology indicators. It is important to note that the signal-to-noise ratio of the data is a key factor that determines the resolution and quality of the seismic attribute response. In this study, curvature did not provide images of additional interpretational value; these unsatisfactory results may be related to acquisition footprint contamination. Therefore, footprint removal methods will be applied in an attempt to enhance the signal-to-noise ratio.

Acknowledgments

We thank Chesapeake Energy for their support in this research effort. We give special thanks to Larry Lunardi, Carroll Shearer, Mike Horn, Mike Lovell and Travis Wilson for their valuable contribution and feedback. And to my closest friends Carlos Santacruz and Luisa Aurrecoechea for cheering me up at all times.




Some Simple Loops (C)

for (index = 0; index < length; index++) {
    dst[index] = src1[index] + src2[index];
}
for (index = 0; index < length; index++) {
    dst[index] = src1[index] - src2[index];
}
for (index = 0; index < length; index++) {
    dst[index] = src1[index] * src2[index];
}
for (index = 0; index < length; index++) {
    dst[index] = src1[index] / src2[index];
}
/* Reduction: assumes sum was initialized to 0 before the loop. */
for (index = 0; index < length; index++) {
    sum = sum + src[index];
}