[Extraction residue of a course-schedule table: Linked/Writing sections (ENG 071/091/101, MAT 001, HUM 100/101, PSY 100/101/122/123, EDU 100/101, SOC 101, COM 103) and IDS 101 sections 01-63, each listing day, time, room, and instructor (mostly TBA). The column structure was lost in extraction and cannot be reliably reconstructed.]




Replay    QoE measurement  Old way: QoE = Server + Network  Modern way: QoE = Servers + Network + Browser Browsers are smart  Parallelism on multiple connections  JavaScript execution can trigger additional queries  Rendering introduces delays in resource access  Caching and pre-fetching HTTP replay cannot approximate real Web browser access to resources 0.25s 0.25s 0.06s 1.02s 0.67s 0.90s 1.19s 0.14s 0.97s 1.13s 0.70s 0.28s 0.27s 0.12s 3.86s 1.88s Total network time GET /wiki/page 1 Analyze page GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET GET combined.min.css jquery-ui.css main-ltr.css commonPrint.css shared.css flaggedrevs.css Common.css wikibits.js jquery.min.js ajax.js mwsuggest.js plugins...js Print.css Vector.css raw&gen=css ClickTracking.js Vector...js js&useskin WikiTable.css CommonsTicker.css flaggedrevs.js Infobox.css Messagebox.css Hoverbox.css Autocount.css toc.css Multilingual.css mediawiki_88x31.png 2 Rendering + JavaScript GET GET GET GET GET GET GET GET GET ExtraTools.js Navigation.js NavigationTabs.js Displaytitle.js RandomBook.js Edittools.js EditToolbar.js BookSearch.js MediaWikiCommon.css 3 Rendering + JavaScript GET GET GET GET GET GET GET GET GET GET GET 4 GET GET GET GET GET GET page-base.png page-fade.png border.png 1.png external-link.png bullet-icon.png user-icon.png tab-break.png tab-current.png tab-normal-fade.png search-fade.png Rendering search-ltr.png arrow-down.png wiki.png portal-break.png portal-break.png arrow-right.png generate page send files send files mBenchLab – [email protected] BROWSERS MATTER FOR QOE? send files send files + 2.21s total rendering time 6




Microprocessor-Memory Communication (contd): a more detailed behavioral description
(Shantanu Dutt, UIC)

architecture behav_detailed of cpu_mem is
  signal address : integer;
  signal data_in, data_out : word;
  signal data_read, data_write, mem_ready : std_logic := '0';
begin

  CMI: process is  -- CPU-Memory Interface module
    variable AR, PC : integer;
    variable data_reg, instr_reg : word;
  begin
    wait until read = '1' or write = '1';
    if read = '1' then
      AR := PC; wait for 1 ns;       -- 1 ns register access time
      address <= AR after 2 ns;      -- 2 ns propagation delay
      data_read <= '1' after 2 ns;   -- 2 ns prop. delay; note: simultaneous with address
      wait until mem_ready = '1';
      instr_reg := data_out; wait for 1 ns;
      IR <= instr_reg after 1 ns;
      data_read <= '0' after 2 ns;
      wait until mem_ready = '0';
      PC := PC + 2; wait for 1 ns;
    elsif write = '1' then
      data_reg := DR; AR := ARin;
      wait for 1 ns;                 -- 1 ns register access (both happening in parallel)
      address <= AR after 2 ns;
      data_in <= data_reg after 2 ns;
      data_write <= '1' after 2 ns;
      ...
    end if;
  end process CMI;

  Memory: process is
    type data_type is array (0 to 63) of word;
    variable store : data_type;
    variable temp_reg : word;
    variable addr : integer;
  begin
    wait until data_read = '1' or data_write = '1';
    if data_read = '1' then          -- next: 1 ns register and RAM access
      addr := address; temp_reg := store(addr/2); wait for 2 ns;
      data_out <= temp_reg after 2 ns;   -- RAM r/w time is 1 ns; prop. time = 2 ns
      mem_ready <= '1' after 2 ns;
      wait until data_read = '0';
      mem_ready <= '0' after 2 ns;
    elsif data_write = '1' then
      addr := address; store(addr/2) := data_in; wait for 2 ns;
      ...
    end if;
  end process Memory;

end architecture behav_detailed;

Notes:
- Multi-process description describing fully responsive handshaking between the CPU-Memory interface (CMI) and Memory modules.
- Most data/address/control-signal propagation delays and storage access times are accounted for. Delay parameters used: 1-way communication time with memory = 2 ns, RAM/register r/w time = 1 ns.
- [Accompanying block diagram (CPU with PC, AR, DR, IR registers; Memory): red numbered arrows show the sequence of operations (or of the corresponding signals); for a number j, j' denotes the delayed version of the corresponding signal. Relate this sequence to the sequence of operations described in the VHDL code.]
- Note: a safer form of "wait until X = '1'" is "if X /= '1' then wait until X = '1';" when it is not known for sure that X will not already be '1' at the point where we want to wait for X being '1'. Similarly for waiting for X to be '0'.
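To see why the handshake is "fully responsive", here is a language-agnostic sketch in Python (not VHDL, and not the slide's code) modeling just the read handshake with two coroutines: each side advances only after observing the other side's signal change, so the sequence completes correctly regardless of the two modules' relative speeds. All names mirror the VHDL signals; the scheduler is an invented stand-in for the VHDL simulator.

```python
# Sketch of the fully responsive 4-phase read handshake between CMI and Memory.
# Each `yield` models a VHDL `wait until` on the other side's signal.
def cpu(sig, log):
    sig["data_read"] = 1; log.append("CMI: data_read<=1")
    yield                                   # wait until mem_ready = '1'
    assert sig["mem_ready"] == 1
    log.append("CMI: latch data_out into instr_reg")
    sig["data_read"] = 0; log.append("CMI: data_read<=0")
    yield                                   # wait until mem_ready = '0'
    assert sig["mem_ready"] == 0

def memory(sig, log):
    yield                                   # wait until data_read = '1'
    assert sig["data_read"] == 1
    log.append("MEM: drive data_out, mem_ready<=1")
    sig["mem_ready"] = 1
    yield                                   # wait until data_read = '0'
    assert sig["data_read"] == 0
    sig["mem_ready"] = 0; log.append("MEM: mem_ready<=0")

sig = {"data_read": 0, "mem_ready": 0}
log = []
c, m = cpu(sig, log), memory(sig, log)
next(m)            # Memory blocks, waiting for data_read
next(c)            # CMI asserts data_read, waits for mem_ready
next(m)            # Memory answers with mem_ready=1, waits for release
next(c)            # CMI latches data, drops data_read
for p in (m, c):   # both handshake halves run to completion
    try:
        next(p)
    except StopIteration:
        pass
```

Because each phase waits on the other side's previous phase, neither module can race ahead: Memory never drops mem_ready before CMI has dropped data_read, which is exactly what the two `wait until` pairs in the VHDL enforce.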




Comparison with other methods

Recently, Tjong and Zhou (2007) developed a neural network method for predicting DNA-binding sites. In their method, for each surface residue, the PSSM and solvent accessibilities of the residue and its 14 neighbors were used as input to a neural network in the form of vectors. In their publication, Tjong and Zhou showed that their method achieved better performance than other previously published methods. In the current study, the 13 test proteins were obtained from the study of Tjong and Zhou. Thus, we can compare the method proposed in the current study with Tjong and Zhou's neural network method on these 13 proteins.

Figure 1. Tradeoff between coverage and accuracy

In their publication, Tjong and Zhou also used coverage and accuracy to evaluate predictions. However, they defined accuracy using a loosened criterion of "true positive": if a predicted interface residue is within the four nearest neighbors of an actual interface residue, it is counted as a true positive. Here, in the comparison of the two methods, the strict definition of true positive is used, i.e., a predicted interface residue is counted as a true positive only when it is a true interface residue. The original data were obtained from Table 1 of Tjong and Zhou (2007), and the accuracy of the neural network method was recalculated using this strict definition (Table 3). The coverage of the neural network method was taken directly from Tjong and Zhou (2007). For each protein, Tjong and Zhou's method reports one coverage and one accuracy. In contrast, the method proposed in this study allows users to trade off coverage against accuracy based on their actual needs. For the purpose of comparison, for each test protein, top-ranking patches are added to the set of predicted interface residues one by one, in decreasing order of rank, until the coverage equals or exceeds the coverage that the neural network method achieved on that protein.
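The strict evaluation criterion described above can be sketched in a few lines of Python (a hedged illustration, not the authors' code; residue identifiers are invented): a predicted residue counts as a true positive only if it is itself an actual interface residue, with no nearest-neighbor relaxation.

```python
# Coverage and accuracy under the strict true-positive definition:
# TP = predicted residues that ARE actual interface residues.
def coverage_accuracy(predicted, actual):
    """predicted, actual: sets of residue identifiers."""
    tp = len(predicted & actual)
    coverage = tp / len(actual) if actual else 0.0        # fraction of true sites found
    accuracy = tp / len(predicted) if predicted else 0.0  # fraction of predictions correct
    return coverage, accuracy

# Toy example: 10 actual interface residues, 8 predictions, 6 of them correct.
actual = set(range(10))
predicted = {0, 1, 2, 3, 4, 5, 97, 98}
cov, acc = coverage_accuracy(predicted, actual)
# cov = 6/10, acc = 6/8
```

Under Tjong and Zhou's loosened criterion, residues 97 and 98 might still count as true positives if they lie within four neighbors of an actual site, which is why accuracies had to be recalculated before comparison.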
Then the coverage and accuracy of the two methods are compared. On a test protein, method A is better than method B if accuracy(A) > accuracy(B) and coverage(A) ≥ coverage(B). Table 3 shows that the graph kernel method proposed in this study achieves better results than the neural network method on 7 proteins (in bold font in Table 3). On 4 proteins (shown in gray shading in Table 3), the neural network method is better than the graph kernel method. On the remaining 2 proteins (in italic font in Table 3), no conclusion can be drawn, because the two conditions, accuracy(A) > accuracy(B) and coverage(A) ≥ coverage(B), never hold at the same time: when coverage(graph kernel) > coverage(neural network), we have accuracy(graph kernel) < accuracy(neural network), and vice versa. Note that the coverage of the graph kernel method increases in a discontinuous fashion as more patches are used to predict DNA-binding sites. On these two proteins, we were not able to reach a point where the two methods have identical coverage. Given this situation, we consider the two methods to tie on these 2 proteins. Thus, these comparisons show that the graph kernel method achieves better results than the neural network method on 7 of the 13 proteins (shown in bold font in Table 3), and on the other 2 proteins (shown in italic font in Table 3) the graph kernel method ties with the neural network method. When averaged over the 13 proteins, the coverage and accuracy of the graph kernel method are 59% and 64%, respectively. It is worth pointing out that, in the current study, the predictions are made using protein structures that are unbound with DNA. In contrast, the data we obtained from Tjong and Zhou's study were obtained using protein structures bound with DNA. In their study, Tjong and Zhou showed that when unbound structures were used, the average coverage decreased by 6.3% and the average accuracy by 4.7% for the 14 proteins (the data for each individual protein were not shown).
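The comparison protocol above can be summarized as a short Python sketch (a hedged illustration only; the patch sets, residue numbers, and thresholds are invented, and this is not the authors' implementation): patches are accumulated in rank order until the neural network's coverage is matched or exceeded, and a winner is declared only when one method strictly dominates the other.

```python
# Sketch of the patch-accumulation comparison: add top-ranking patches
# until coverage >= the neural network's coverage on the same protein,
# then compare accuracies at that operating point.
def compare_at_coverage(ranked_patches, actual, nn_coverage, nn_accuracy):
    predicted = set()
    for patch in ranked_patches:          # decreasing order of rank
        predicted |= set(patch)
        tp = len(predicted & actual)
        cov = tp / len(actual)
        if cov >= nn_coverage:            # matched/exceeded NN coverage
            acc = tp / len(predicted)
            if acc > nn_accuracy:
                return "graph kernel better"
            if acc < nn_accuracy:
                return "neural network better"
            return "tie"
    return "tie"  # coverage never reached the NN's level: treated as a tie

# Toy protein: 10 actual interface residues, 3 ranked patches,
# NN reference point at coverage 0.6 and accuracy 0.70 (invented numbers).
actual = set(range(1, 11))
patches = [[1, 2, 3], [4, 5, 99], [6, 7, 8]]
result = compare_at_coverage(patches, actual, nn_coverage=0.6, nn_accuracy=0.70)
```

Because whole patches are added at once, coverage jumps discontinuously, which is exactly why, on two proteins, no operating point with identical coverage could be reached.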




// determine the average of an arbitrary number of grades
public void DetermineClassAverage()
{
   int total;         // sum of grades
   int gradeCounter;  // number of grades entered
   int grade;         // grade value
   double average;    // number with decimal point for average

   // initialization phase
   total = 0;         // initialize total
   gradeCounter = 0;  // initialize loop counter

   // processing phase
   // prompt for and read a grade from the user
   // (reading before the loop avoids an infinite loop)
   Console.Write( "Enter grade or -1 to quit: " );
   grade = Convert.ToInt32( Console.ReadLine() );

   // loop until sentinel value is read from the user
   while ( grade != -1 )
   {
      total = total + grade;            // add grade to total
      gradeCounter = gradeCounter + 1;  // increment counter

      // prompt for and read the next grade from the user
      Console.Write( "Enter grade or -1 to quit: " );
      grade = Convert.ToInt32( Console.ReadLine() );
   } // end while

   // termination phase
   // if the user entered at least one grade... (avoids division by 0)
   if ( gradeCounter != 0 )
   {
      // calculate the average of all the grades entered
      average = ( double ) total / gradeCounter;

      // display the total and average (with two digits of precision)
      Console.WriteLine( "\nTotal of the {0} grades entered is {1}",
         gradeCounter, total );
      Console.WriteLine( "Class average is {0:F}", average );
   } // end if
   else // no grades were entered, so output error message
      Console.WriteLine( "No grades were entered" );
} // end method DetermineClassAverage




Date         Opponent               Location           Time
Sat, Sep 08  (#23) Florida *        College Station    2:30 p.m.
Sat, Sep 15  SMU                    Dallas, Texas      2:30 p.m.
Sat, Sep 22  South Carolina State   College Station    TBA
Sat, Sep 29  Arkansas *             College Station    TBA
Sat, Oct 06  Ole Miss *             Oxford, Miss.      TBA
Sat, Oct 13  Louisiana Tech         Shreveport, La.    TBA
Sat, Oct 20  LSU *                  College Station    TBA
Sat, Oct 27  Auburn *               Auburn, Ala.       TBA
Sat, Nov 03  Mississippi State *    Starkville, Miss.  TBA
Sat, Nov 10  Alabama *              Tuscaloosa, Ala.   TBA
Sat, Nov 17  Sam Houston State      College Station    TBA
Sat, Nov 24  Missouri *             College Station    TBA
Sat, Dec 01  SEC Championship       Atlanta, Ga.       3:00 p.m.