Payroll Processing

| Pay ID | Pay Period Dates | Check Date | Banner Interface Date | Encumbrance Release Date | NHIDIST Load Dates | Comments |
|---|---|---|---|---|---|---|
| MO7 | 6/1/2014 – 6/30/2014 | 7/01/2014 | 7/01/2014 | 6/30/2014 | 06/30/2014 | FY2015 |
| MS7 | 5/19/2014 – 6/18/2014 | 7/01/2014 | 7/01/2014 | 6/30/2014 | 06/27/2014 | Prepaid Insurance FY2015 |
| SP13 | 6/02/2014 – 6/18/2014 | 7/01/2014 | 7/01/2014 | 6/30/2014 | 06/27/2014 | Prepaid Insurance FY2015 |
| SP14 | 6/19/2014 – 7/1/2014 | 7/15/2014 | 7/15/2014 | 6/18/2014 | 06/13/2014 | FY2015 |
| BW13 | 5/25/2014 – 6/07/2014 | 6/18/2014 | 6/18/2014 | | 07/11/2014 | FY2015 |
| BW14 | 6/08/2014 – 6/21/2014 | 7/02/2014 | 7/02/2014 | 06/30/2014 | 06/30/2014 | Prepaid Insurance FY2015 |
| BW15 | 6/22/2014 – 7/05/2014 | 7/16/2014 | 7/16/2014 | | 07/11/2014 | FY2015 |
| SPECIAL | | 7/15/2014 | 7/15/2014 | | 07/11/2014 | FY2015 |




Why Deep Learning? Different Classifiers on the MNIST Database

| Type | Classifier | Distortion | Preprocessing | Error rate (%) |
|---|---|---|---|---|
| Linear classifier | Pairwise linear classifier | None | Deskewing | 7.6 [9] |
| K-Nearest Neighbors | K-NN with non-linear deformation (P2DHMDM) | None | Shiftable edges | 0.52 [18] |
| Boosted Stumps | Product of stumps on Haar features | None | Haar features | 0.87 [19] |
| Non-linear classifier | 40 PCA + quadratic classifier | None | None | 3.3 [9] |
| Support vector machine | Virtual SVM, deg-9 poly, 2-pixel jittered | None | Deskewing | 0.56 [20] |
| Neural network | 2-layer 784-800-10 | None | None | 1.6 [21] |
| Neural network | 2-layer 784-800-10 | Elastic distortions | None | 0.7 [21] |
| Deep neural network | 6-layer 784-2500-2000-1500-1000-500-10 | Elastic distortions | None | 0.35 [22] |
| Convolutional neural network | 6-layer 784-40-80-500-1000-2000-10 | None | Expansion of the training data | 0.31 [15] |
| Convolutional neural network | 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.27 [16] |
| Convolutional neural network | Committee of 35 CNNs, 1-20-P-40-P-150-10 | Elastic distortions | Width normalizations | 0.23 [8] |
| Convolutional neural network | Committee of 5 CNNs, 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.21 [17] |

Source: https://en.wikipedia.org/wiki/MNIST_database
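For a concrete sense of what the "2-layer 784-800-10" rows denote, here is a minimal sketch of that architecture. PyTorch is an assumption (the table does not prescribe a framework), and the training details behind the quoted error rates (elastic distortions, optimizer, epochs) are omitted:

```python
# Minimal sketch of the "2-layer 784-800-10" network from the table.
# PyTorch is assumed; training setup and distortions are not shown.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),          # 28x28 MNIST image -> 784-dimensional vector
    nn.Linear(784, 800),   # hidden layer with 800 units
    nn.ReLU(),
    nn.Linear(800, 10),    # one output per digit class
)

logits = model(torch.randn(1, 1, 28, 28))  # dummy input in MNIST shape
print(logits.shape)                        # torch.Size([1, 10])
```

The whole model is two weight matrices, and the table's point is that even this tiny network beats the linear classifier (1.6% vs. 7.6% error), while the deeper and convolutional rows push the error rate far lower still.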




Comparison with other methods

Recently, Tjong and Zhou (2007) developed a neural network method for predicting DNA-binding sites. In their method, for each surface residue, the PSSM and solvent accessibilities of the residue and its 14 neighbors were used as input to a neural network in the form of vectors. In their publication, Tjong and Zhou showed that their method achieved better performance than other previously published methods. In the current study, the 13 test proteins were obtained from the study of Tjong and Zhou. Thus, we can compare the method proposed in the current study with Tjong and Zhou's neural network method on these 13 proteins.

[Figure 1. Tradeoff between coverage and accuracy]

In their publication, Tjong and Zhou also used coverage and accuracy to evaluate predictions. However, they defined accuracy using a loosened criterion of "true positive": if a predicted interface residue is within the four nearest neighbors of an actual interface residue, it is counted as a true positive. Here, in the comparison of the two methods, the strict definition of true positive is used, i.e., a predicted interface residue is counted as a true positive only when it is a true interface residue. The original data were obtained from Table 1 of Tjong and Zhou (2007); the accuracy of the neural network method was recalculated using this strict definition (Table 3). The coverage of the neural network method was taken directly from Tjong and Zhou (2007).

For each protein, Tjong and Zhou's method reported one coverage and one accuracy. In contrast, the method proposed in this study allows users to trade off coverage against accuracy based on their actual needs. For the purpose of comparison, for each test protein, top-ranking patches are added to the set of predicted interface residues one by one, in decreasing order of rank, until the coverage is the same as or higher than the coverage that the neural network method achieved on that protein. Then the coverage and accuracy of the two methods are compared. On a test protein, method A is better than method B if accuracy(A) > accuracy(B) and coverage(A) ≥ coverage(B).

Table 3 shows that the graph kernel method proposed in this study achieves better results than the neural network method on 7 proteins (in bold font in Table 3). On 4 proteins (shown in gray shading in Table 3), the neural network method is better than the graph kernel method. On the remaining 2 proteins (in italic font in Table 3), no conclusion can be drawn because the two conditions, accuracy(A) > accuracy(B) and coverage(A) ≥ coverage(B), never hold at the same time: when coverage(graph kernel) > coverage(neural network), we have accuracy(graph kernel) < accuracy(neural network), and vice versa. Note that the coverage of the graph kernel method increases in a discontinuous fashion as more patches are used to predict DNA-binding sites; on these two proteins, we were not able to reach a point where the two methods have identical coverage. Given this situation, we consider the two methods to tie on these 2 proteins. Thus, these comparisons show that the graph kernel method achieves better results than the neural network method on 7 of the 13 proteins (bold in Table 3) and ties with it on the other 2 proteins (italic in Table 3). When averaged over the 13 proteins, the coverage and accuracy of the graph kernel method are 59% and 64%, respectively.
It is worth pointing out that, in the current study, the predictions are made using protein structures that are not bound with DNA. In contrast, the data we obtained from Tjong and Zhou's study were derived from protein structures bound with DNA. In their study, Tjong and Zhou showed that when unbound structures were used, the average coverage decreased by 6.3% and the average accuracy by 4.7% for the 14 proteins (the per-protein data were not shown).
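The matched-coverage comparison described above is easy to state procedurally. The following is a hedged sketch, not code from either study; `patches`, `interface_residues`, and `nn_coverage` are hypothetical placeholders for the ranked surface patches, the true interface residues, and Tjong and Zhou's reported coverage:

```python
# Sketch of the comparison procedure: add top-ranked patches to the
# predicted set until coverage reaches the neural network method's
# coverage on the same protein, then report strict coverage/accuracy.

def coverage_accuracy(predicted, interface_residues):
    tp = len(predicted & interface_residues)     # strict true positives
    coverage = tp / len(interface_residues)      # fraction of true sites found
    accuracy = tp / len(predicted) if predicted else 0.0
    return coverage, accuracy

def compare_at_matched_coverage(patches, interface_residues, nn_coverage):
    """patches: list of residue sets, ranked best-first (hypothetical input)."""
    predicted = set()
    for patch in patches:                        # include patches one by one
        predicted |= patch
        cov, acc = coverage_accuracy(predicted, interface_residues)
        if cov >= nn_coverage:                   # matched or exceeded NN coverage
            return cov, acc
    # coverage may jump discontinuously and never equal nn_coverage exactly,
    # which is the tie situation discussed in the text
    return coverage_accuracy(predicted, interface_residues)
```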




The progress!
• Some top performers:
  – [PCA] What makes a patch distinct [Margolin et al., CVPR 13]
  – [SF] Saliency filters [Perazzi et al., CVPR 12]: F-measure 0.84
  – [GC]/[GC-seg] Global contrast-based salient region detection [Cheng et al., CVPR 11]: F-measure 0.75
  – [FT] Frequency-tuned salient region detection [Achanta et al., CVPR 09]: F-measure 0.65

Image from [Perazzi et al., CVPR 2012]
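For reference, the F-measures quoted above come from the precision and recall of a binarized saliency map against a ground-truth mask. A minimal sketch follows; the threshold and the β² = 0.3 weighting are assumptions, not values taken from the slide (β² = 0.3 is the weighting commonly used in this benchmark literature, e.g. by the FT paper):

```python
# Sketch of the F-measure used in saliency benchmarks: precision and
# recall of a thresholded saliency map vs. a binary ground-truth mask,
# combined with beta^2 = 0.3 (assumed, standard in this literature).
import numpy as np

def f_measure(saliency, ground_truth, threshold=0.5, beta2=0.3):
    pred = saliency >= threshold                 # binarize the saliency map
    gt = ground_truth.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # correctly flagged pixels
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```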




Travel Card Cycles, Dates & Deadlines

| Monthly Statement | Billing Cycle Name | Cycle Dates | Modification Cut Off Date | Statements Available for Printing if Approved | Statements & Receipts Due to Travel Office |
|---|---|---|---|---|---|
| August | August | 7/16 – 8/15 | 8/22/2014 | 8/23/2014 | 8/29/2014 |
| September | September | 8/16 – 9/15 | 9/24/2014 | 9/25/2014 | 9/30/2014 |
| October | October | 9/16 – 10/15 | 10/24/2014 | 10/25/2014 | 10/30/2014 |
| November | November | 10/16 – 11/15 | 11/24/2014 | 11/25/2014 | 12/1/2014 |
| December | December | 11/16 – 12/15 | 12/19/2014 | 1/2/2015 | 1/6/2015 |
| January | January | 12/16 – 1/15 | 1/24/2015 | 1/25/2015 | 1/30/2015 |
| February | February | 1/16 – 2/15 | 2/25/2015 | 2/26/2015 | 3/3/2015 |
| March | March | 2/16 – 3/15 | 3/24/2015 | 3/25/2015 | 3/31/2015 |
| April | April | 3/16 – 4/15 | 4/24/2015 | 4/25/2015 | 4/30/2015 |

All statements and receipts are due to the Travel Office by 4:30 PM on the above dates. Cards will be frozen until …
Mail to: Sandy Gladden, Travel Office




Start a video conference
Start an ad-hoc video conference to discuss a subject that requires immediate attention.
1. Select multiple contacts by holding down the Ctrl key and clicking their names.
2. Right-click the selection, and click Start a Video Call.
3. When you start a video call, you automatically use Lync computer audio.
4. Use the video controls to manage the conference.

Invite other people to a video call
1. In the conversation window, pause on the people button, and click Invite More People.
2. Select the invitees from the Add People window, and click Add.
3. Your new invitees receive a request to join your call.

Add video to an IM conversation
1. Pause on the camera button and check your preview.
2. Adjust your camera if needed, and click Start My Video.
3. To stop sharing your video, click Stop My Video.

Answer a video call
When someone calls you, an alert pops up on your screen. To answer the call, click anywhere in the picture area. Click Ignore to reject the call and send it to voice mail. Click Options to take other actions:
• Send the call to Voice Mail.
• Redirect the call to your Mobile or Home phone.
• Reply by IM instead of video.
• Answer With Audio Only if you don't want to share your video.
• Set to Do not Disturb to reject the call and avoid other calls.

TIP: Click End Video to stop sharing your video with others AND end their video feeds to you.




The Digital Video Broadcasting Standard

Coding chain: 187 data bytes + 1 sync byte → RS encoder (204,188,8) → convolutional interleaver → convolutional encoder → AWGN noise → Viterbi decoder → convolutional deinterleaver → RS decoder → decoded data

Shortened Reed-Solomon outer code
• RS(204,188,8): 188 data bytes, 204 coded bytes
• t = 8 byte error-correcting capability

Convolutional interleaver
• Depth of 12

Convolutional inner code
• Constraint length K = 7 (Odenwalder code)
• Rates r = 1/2, 2/3, 3/4, 5/6, and 7/8
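Below is a minimal sketch of the depth-12 convolutional (Forney-style) interleaver that sits between the two codes. The 17-byte cell size is an assumption chosen so that 12 × 17 = 204 bytes, i.e. one RS codeword per interleaver cycle; the slide itself specifies only the depth:

```python
# Sketch of a Forney convolutional interleaver with depth 12, as used
# between DVB's RS(204,188) outer code and the inner convolutional code.
# The 17-byte cell size is an assumption (12 * 17 = 204), not from the slide.

I_DEPTH = 12   # interleaver depth, from the slide
M_CELL = 17    # bytes per delay cell (assumed)

class ConvolutionalInterleaver:
    def __init__(self, depth=I_DEPTH, cell=M_CELL):
        # branch j delays a byte by j * cell positions; branch 0 has no delay
        self.fifos = [[0] * (j * cell) for j in range(depth)]
        self.branch = 0
        self.depth = depth

    def push(self, byte):
        fifo = self.fifos[self.branch]
        self.branch = (self.branch + 1) % self.depth  # rotate through branches
        if not fifo:                # branch 0 passes the byte straight through
            return byte
        fifo.append(byte)           # other branches act as FIFO delay lines
        return fifo.pop(0)

inter = ConvolutionalInterleaver()
out = [inter.push(b & 0xFF) for b in range(204)]  # one RS codeword's worth
```

The deinterleaver applies the mirrored delays (branch j delays by (depth − 1 − j) · cell), so every byte sees the same total delay while channel burst errors get spread across many RS codewords, keeping each codeword within the t = 8 correction capability.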




Compressing Video with AVS Video Editor
• To compress a video using AVS, first start a project.
• Next, upload all the pictures you want into the project and proceed.
• You should be able to change the frame rate at which the pictures are displayed.
• Once the pictures are compiled, produce the video.
• If you are not using pictures, just insert the video you are working with.
• When given options for producing the video, mute any audio track linked to the video and decrease the bitrate.
• These two changes drastically decrease the size of the video file.
• After that, insert the video you just created into a new project and produce another video.
• Choose a file type that generally has lower quality to decrease the size even more.
• Decrease the bitrate and mute the audio again to shrink the file further.
• Experiment with other options as well; the right settings depend entirely on the quality you want and what needs to stay in the video.
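The two size-reducing steps this list leans on (muting the audio and lowering the video bitrate) can also be expressed outside the AVS GUI. Here is a sketch using ffmpeg driven from Python; ffmpeg itself, the 500k bitrate, and the file names are assumptions for illustration, not part of the AVS workflow:

```python
# The same two size-reducing steps (mute audio, lower video bitrate)
# expressed with ffmpeg instead of the AVS GUI. Assumes ffmpeg is
# installed; input.mp4 / output.mp4 are placeholder names.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",   # source video (placeholder name)
    "-an",               # drop the audio track entirely ("mute")
    "-b:v", "500k",      # cap the video bitrate; lower means a smaller file
    "output.mp4",
], check=True)
```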




MUTUAL FUNDS: NEURAL NETWORKS versus REGRESSION ANALYSIS
• For neural networks to be successful, they must outperform methods currently used in the marketplace.
• Mutual funds are basically ……………………………………. Mutual funds have become a major force on Wall Street over the past few years. They function much like an individual security, and their prices should reflect all public information. Relationships between …………………………………………………. are very hard to forecast. For years, regression analysis has been a popular tool investors have used to forecast …………………… of mutual funds.
• Investors know that neural networks might be able to pinpoint these relationships better than older methods.
• Predictions of Net Asset Value made using 15 economic variables as inputs showed that neural networks were 40% better as forecasting tools: …………………………… (the difference between actual and forecasted NAV) was ………. for neural nets, compared to ……. for regression.
• An important reason for the superior performance of neural networks is their ………….. They were able to look at all aspects of the relationships, whereas regression analysis was ………………………………………………
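As an illustration only, the comparison described above can be prototyped in a few lines of scikit-learn: fit a linear regression and a small neural network on the same 15 inputs and compare mean absolute error. Everything below (the synthetic data, the network size) is assumed; it is not the study's data and does not reproduce the 40% figure:

```python
# Illustrative sketch: forecast a fund's NAV from 15 economic variables
# with (a) linear regression and (b) a small neural network, then compare
# mean absolute error. The data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 15))                       # 15 economic inputs
nav = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] \
      + 0.1 * rng.normal(size=1000)                   # nonlinear toy "NAV"

X_tr, X_te, y_tr, y_te = train_test_split(X, nav, random_state=0)

linreg = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("regression MAE:", mean_absolute_error(y_te, linreg.predict(X_te)))
print("neural net MAE:", mean_absolute_error(y_te, mlp.predict(X_te)))
```

The design point mirrors the slide's claim: a linear model can only fit a weighted sum of the inputs, while the network can capture interactions between variables, which is where the forecasting gap would come from.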