Replay QoE measurement

Old way: QoE = Server + Network
Modern way: QoE = Servers + Network + Browser

Browsers are smart:
- Parallelism on multiple connections
- JavaScript execution can trigger additional queries
- Rendering introduces delays in resource access
- Caching and pre-fetching

HTTP replay cannot approximate real Web browser access to resources.

[Figure: waterfall of a Wikipedia page load. After GET /wiki/page, the browser (1) analyzes the page and issues ~28 parallel GETs for CSS/JS resources (combined.min.css, jquery-ui.css, main-ltr.css, shared.css, wikibits.js, jquery.min.js, mwsuggest.js, etc.); (2)-(3) rendering and JavaScript execution trigger further GETs (Navigation.js, EditToolbar.js, MediaWikiCommon.css, etc.); (4) rendering triggers GETs for images (page-base.png, bullet-icon.png, wiki.png, etc.). Per-resource times range from 0.06 s to 1.19 s; total network time 3.86 s, total rendering time 2.21 s.]

mBenchLab – [email protected] — Browsers matter for QoE?
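The gap between serial HTTP replay and a browser's parallel fetching can be illustrated with a toy model. This is a sketch, not a measurement tool: the per-resource durations are a handful of the illustrative timings above, and the fixed six-connection pool is an assumption about browser behavior.

```python
# Toy model: serial HTTP replay vs. a browser fetching over a fixed pool
# of parallel connections (illustrative sketch, assumed 6 connections).

def serial_time(durations):
    """An HTTP replayer fetches one resource at a time."""
    return sum(durations)

def parallel_time(durations, connections=6):
    """Greedy model of a browser connection pool: each resource is
    assigned to the connection that frees up first."""
    free_at = [0.0] * connections
    for d in durations:
        i = free_at.index(min(free_at))  # earliest-free connection
        free_at[i] += d
    return max(free_at)

fetches = [0.25, 0.25, 0.06, 1.02, 0.67, 0.90, 1.19, 0.14]  # seconds
print(serial_time(fetches))    # ~4.48 s fetched one by one
print(parallel_time(fetches))  # ~1.25 s, bounded by the busiest connection
```

The point of the sketch is structural: total time for a replayer grows with the sum of all transfers, while a browser's total approaches the longest per-connection backlog, which is why replay cannot approximate browser QoE.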
Microprocessor-Memory Communication (contd.): a more detailed behavioral description
(Shantanu Dutt, UIC)

architecture behav_detailed of cpu_mem is
  signal address : integer;
  signal data_in, data_out : word;
  signal data_read, data_write, mem_ready : std_logic := '0';
begin

  CMI: process is  -- CPU-Memory Interface module
    variable AR, PC : integer;
    variable data_reg, instr_reg : word;
  begin
    wait until read = '1' or write = '1';
    if read = '1' then
      AR := PC; wait for 1 ns;          -- 1 ns register access time
      address <= AR after 2 ns;         -- 2 ns propagation delay
      data_read <= '1' after 2 ns;      -- 2 ns prop. delay; note: simultaneous with address
      wait until mem_ready = '1';
      instr_reg := data_out; wait for 1 ns;
      IR <= instr_reg after 1 ns;
      data_read <= '0' after 2 ns;
      wait until mem_ready = '0';
      PC := PC + 2; wait for 1 ns;
    elsif write = '1' then
      data_reg := DR; AR := ARin;
      wait for 1 ns;                    -- 1 ns reg. access (both happening in parallel)
      address <= AR after 2 ns;
      data_in <= data_reg after 2 ns;
      data_write <= '1' after 2 ns;
      ......
    end if;
  end process CMI;

  Memory: process is
    type data_type is array (0 to 63) of word;
    variable store : data_type;
    variable temp_reg : word;
    variable addr : integer;
  begin
    wait until data_read = '1' or data_write = '1';
    if data_read = '1' then             -- next: 1 ns reg. & RAM access
      addr := address; temp_reg := store(addr/2); wait for 2 ns;
      data_out <= temp_reg after 2 ns;  -- RAM r/w time is 1 ns; prop. time = 2 ns
      mem_ready <= '1' after 2 ns;
      wait until data_read = '0';
      mem_ready <= '0' after 2 ns;
    elsif data_write = '1' then
      addr := address; store(addr/2) := data_in; wait for 2 ns;
      ......
    end if;
  end process Memory;

end architecture behav_detailed;

[Figure: CPU-Mem block diagram. The CPU-Mem Interface sits between the CPU registers (PC, IR, DR, ARin; legend: dr = data_reg, ir = instr_reg) and Memory, connected by the address, data_in, data_out, data_read, data_write, and mem_ready lines. Red arrows numbered 1-8 show the sequence of operations (for a number j, j' denotes the delayed version of the corresponding signal); relate this sequence to the sequence of operations described in the VHDL code. Delays marked: 1 ns register (PC) access, 2 ns propagation delay on address/data/control signals, RAM access delay.]

- Multi-process description describing fully responsive handshaking between the CPU-Mem Interface (CMI) and Memory modules.
- Most data/address/control-signal propagation delays and storage access times are accounted for. Delay parameters used: one-way communication time with memory = 2 ns, RAM/register read/write time = 1 ns.
- Note: a safer form of wait until X = '1' is if X /= '1' then wait until X = '1'; for use when it is not known for sure that X will not already be '1' at the point where we want to wait for X to become '1'. Similarly for waiting for X to become '0'.
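The fully responsive (four-phase) handshake in the VHDL above can be sketched in plain Python. This is an illustrative simulation, not part of the original code: the nanosecond delays are abstracted away and only the ordering of signal events is modeled; the function and trace strings are invented for the example.

```python
# Illustrative sketch of the four-phase read handshake between a
# CPU-memory interface (CMI) and memory. Each side advances only after
# observing the other side's signal change, as in the VHDL processes.

def read_transaction(store, pc):
    """Simulate one memory read; return the word read and the event trace."""
    trace = []

    # Phase 1 - CMI: drive address from PC and assert data_read
    address = pc
    trace.append("CMI: address driven, data_read <= '1'")

    # Phase 2 - Memory: sees data_read = '1', fetches word, asserts mem_ready
    data_out = store[address // 2]   # word-addressed store, as in the VHDL
    trace.append("MEM: data_out driven, mem_ready <= '1'")

    # Phase 3 - CMI: sees mem_ready = '1', latches data, deasserts data_read
    instr_reg = data_out
    trace.append("CMI: data latched, data_read <= '0'")

    # Phase 4 - Memory: sees data_read = '0', deasserts mem_ready
    trace.append("MEM: mem_ready <= '0'")

    return instr_reg, trace

word, trace = read_transaction({0: "LOAD", 1: "ADD"}, pc=2)
print(word)   # the word at store[1]
```

The design point the sketch captures is that neither side relies on fixed timing: each phase begins only when the previous signal transition has been observed, which is what makes the handshake robust to varying delays.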
Comparison with other methods

Recently, Tjong and Zhou (2007) developed a neural network method for predicting DNA-binding sites. In their method, for each surface residue, the PSSM and solvent accessibilities of the residue and its 14 neighbors were used as input to a neural network in the form of vectors. In their publication, Tjong and Zhou showed that their method achieved better performance than other previously published methods. In the current study, the 13 test proteins were obtained from the study of Tjong and Zhou. Thus, we can compare the method proposed in the current study with Tjong and Zhou's neural network method using these 13 proteins.

Figure 1. Tradeoff between coverage and accuracy

In their publication, Tjong and Zhou also used coverage and accuracy to evaluate the predictions. However, they defined accuracy using a loosened criterion of "true positive", such that if a predicted interface residue is within the four nearest neighbors of an actual interface residue, it is counted as a true positive. Here, in the comparison of the two methods, the strict definition of true positive is used, i.e., a predicted interface residue is counted as a true positive only when it is a true interface residue. The original data were obtained from Table 1 of Tjong and Zhou (2007), and the accuracy for the neural network method was recalculated using this strict definition (Table 3). The coverage of the neural network method was taken directly from Tjong and Zhou (2007). For each protein, Tjong and Zhou's method reported one coverage and one accuracy. In contrast, the method proposed in this study allows users to trade off between coverage and accuracy based on their actual needs. For the purpose of comparison, for each test protein, top-ranking patches are included in the set of predicted interface residues one by one in decreasing order of rank, until the coverage is the same as or higher than the coverage that the neural network method achieved on that protein.
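Under the strict criterion described above, coverage and accuracy reduce to simple set operations. The following is a minimal sketch (a hypothetical helper, not the authors' code), where residues are identified by arbitrary hashable IDs:

```python
# Strict evaluation criterion (sketch): a predicted residue counts as a
# true positive only if it is itself an actual interface residue.

def coverage_and_accuracy(predicted, actual):
    """predicted, actual: sets of residue identifiers.

    coverage = fraction of actual interface residues that were predicted;
    accuracy = fraction of predicted residues that are actual interface residues.
    """
    tp = len(predicted & actual)  # strict true positives: exact matches only
    coverage = tp / len(actual) if actual else 0.0
    accuracy = tp / len(predicted) if predicted else 0.0
    return coverage, accuracy

cov, acc = coverage_and_accuracy({1, 2, 3, 10}, {1, 2, 3, 4, 5, 6})
print(cov, acc)   # 0.5 0.75
```

A loosened criterion like Tjong and Zhou's would instead count a prediction as correct if any of a residue's four nearest neighbors is an interface residue, which inflates accuracy relative to the strict version computed here.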
Then the coverage and accuracy of the two methods are compared. On a test protein, method A is better than method B if accuracy(A) > accuracy(B) and coverage(A) ≥ coverage(B). Table 3 shows that the graph kernel method proposed in this study achieves better results than the neural network method on 7 proteins (in bold font in Table 3). On 4 proteins (shown in gray shading in Table 3), the neural network method is better than the graph kernel method. On the remaining 2 proteins (in italic font in Table 3), no conclusion can be drawn, because the two conditions, accuracy(A) > accuracy(B) and coverage(A) ≥ coverage(B), never become true at the same time: when coverage(graph kernel) > coverage(neural network), we have accuracy(graph kernel) < accuracy(neural network), and when coverage(graph kernel) < coverage(neural network), we have accuracy(graph kernel) > accuracy(neural network). Note that the coverage of the graph kernel method increases in a discontinuous fashion as more patches are used to predict DNA-binding sites. On these two proteins, we were not able to reach a point where the two methods have identical coverage. Given this situation, we consider that the two methods tie on these 2 proteins. Thus, these comparisons show that the graph kernel method achieves better results than the neural network method on 7 of the 13 proteins (shown in bold font in Table 3), and on the remaining 2 proteins (shown in italic font in Table 3) the graph kernel method ties with the neural network method. When averaged over the 13 proteins, the coverage and accuracy for the graph kernel method are 59% and 64%, respectively. It is worth pointing out that, in the current study, the predictions are made using protein structures that are not bound with DNA. In contrast, the data we obtained from Tjong and Zhou's study were obtained using protein structures bound with DNA. In their study, Tjong and Zhou showed that when unbound structures were used, the average coverage decreased by 6.3% and the average accuracy by 4.7% for the 14 proteins (but the data for each protein were not shown).
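The matched-coverage comparison procedure can be sketched as follows. This is an illustrative reimplementation under assumed data structures (ranked patches as sets of residue IDs), not the authors' code, and the toy inputs in the usage line are invented:

```python
# Sketch of the comparison procedure: include top-ranking patches one by
# one until the graph kernel's coverage reaches the neural network's
# reported coverage, then compare accuracy at that point.

def compare_at_matched_coverage(ranked_patches, actual, nn_coverage, nn_accuracy):
    """ranked_patches: list of sets of residue IDs, best patch first.
    actual: set of true interface residues. Returns a verdict string."""
    predicted = set()
    coverage = 0.0
    for patch in ranked_patches:
        predicted |= patch
        coverage = len(predicted & actual) / len(actual)
        if coverage >= nn_coverage:   # coverage jumps discontinuously per patch
            break
    accuracy = len(predicted & actual) / len(predicted) if predicted else 0.0

    if coverage >= nn_coverage and accuracy > nn_accuracy:
        return "graph kernel better"
    if coverage <= nn_coverage and accuracy < nn_accuracy:
        return "neural network better"
    return "tie"  # the two winning conditions never hold simultaneously

verdict = compare_at_matched_coverage(
    [{1, 2}, {3, 4}, {9}], actual={1, 2, 3, 4, 5},
    nn_coverage=0.6, nn_accuracy=0.7)
print(verdict)   # "graph kernel better" on this toy input
```

Because coverage can only jump in patch-sized increments, exactly matching the neural network's coverage is sometimes impossible, which is the situation the "tie" branch captures.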
// determine the average of an arbitrary number of grades
public void DetermineClassAverage()
{
   int total; // sum of grades
   int gradeCounter; // number of grades entered
   int grade; // grade value
   double average; // number with decimal point for average

   // initialization phase
   total = 0; // initialize total
   gradeCounter = 0; // initialize loop counter

   // processing phase
   // prompt for and read a grade from the user
   // (reading before the loop avoids an infinite loop)
   Console.Write( "Enter grade or -1 to quit: " );
   grade = Convert.ToInt32( Console.ReadLine() );

   // loop until sentinel value is read from the user
   while ( grade != -1 )
   {
      total = total + grade; // add grade to total
      gradeCounter = gradeCounter + 1; // increment counter

      // prompt for and read the next grade from the user
      Console.Write( "Enter grade or -1 to quit: " );
      grade = Convert.ToInt32( Console.ReadLine() );
   } // end while

   // termination phase
   // if the user entered at least one grade... (avoids division by 0)
   if ( gradeCounter != 0 )
   {
      // calculate the average of all the grades entered
      average = ( double ) total / gradeCounter;

      // display the total and average (with two digits of precision)
      Console.WriteLine( "\nTotal of the {0} grades entered is {1}",
         gradeCounter, total );
      Console.WriteLine( "Class average is {0:F}", average );
   } // end if
   else // no grades were entered, so output error message
      Console.WriteLine( "No grades were entered" );
} // end method DetermineClassAverage
}
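The same sentinel-controlled algorithm can be sketched in Python, refactored to take an iterable of inputs so it is testable without console I/O. This is an illustrative translation, not part of the original listing:

```python
# Sentinel-controlled class average (illustrative translation of the C#
# listing): read the first value before the loop to avoid an infinite
# loop, and check the counter to avoid division by zero.

def determine_class_average(inputs):
    total = 0
    grade_counter = 0
    values = iter(inputs)

    grade = next(values)        # priming read, before the loop
    while grade != -1:          # -1 is the sentinel value
        total += grade
        grade_counter += 1
        grade = next(values)    # read the next grade

    if grade_counter != 0:      # avoid division by zero
        return total / grade_counter
    return None                 # no grades were entered

print(determine_class_average([97, 88, 72, -1]))   # 85.666...
print(determine_class_average([-1]))               # None
```

The two hazards flagged in the C# slide appear in the same places here: omitting the priming read would leave `grade` undefined before the loop test, and dividing without the counter check would raise `ZeroDivisionError` when only the sentinel is entered.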
Sat, Sep 08   (#23) Florida *         College Station     2:30 p.m.
Sat, Sep 15   SMU                     Dallas, Texas       2:30 p.m.
Sat, Sep 22   South Carolina State    College Station     TBA
Sat, Sep 29   Arkansas *              College Station     TBA
Sat, Oct 06   Ole Miss *              Oxford, Miss.       TBA
Sat, Oct 13   Louisiana Tech          Shreveport, La.     TBA
Sat, Oct 20   LSU *                   College Station     TBA
Sat, Oct 27   Auburn *                Auburn, Ala.        TBA
Sat, Nov 03   Mississippi State *     Starkville, Miss.   TBA
Sat, Nov 10   Alabama *               Tuscaloosa, Ala.    TBA
Sat, Nov 17   Sam Houston State       College Station     TBA
Sat, Nov 24   Missouri *              College Station     TBA
Sat, Dec 01   SEC Championship        Atlanta, Ga.        3:00 p.m.