City Housing Trust Funds
Berkeley, California: Housing Trust Fund
Cupertino, California: Affordable Housing Fund
Los Angeles, California: Housing Trust Fund
Menlo Park, California: Below Market Rate Housing Reserve
Morgan Hill, California: Senior Housing Trust Fund
Palo Alto, California: The Housing Reserve
Sacramento, California: Housing Trust Fund
San Diego, California: Housing Trust Fund
San Francisco, California: Office Affordable Housing Production Program; Hotel Tax Fund; and Bond Housing Program
Santa Monica, California: Citywide Housing Trust Fund
West Hollywood, California: Affordable Housing Trust Fund
Aspen, Colorado: Housing Day Care Fund
Boulder, Colorado: Community Housing Assistance Program and Affordable Housing Fund
Denver, Colorado: Skyline Housing Fund
Longmont, Colorado: Affordable Housing Fund
Telluride, Colorado: Housing Trust Fund
Tallahassee, Florida: Housing Trust Fund
Chicago, Illinois: Low Income Housing Trust Fund
Bloomington, Indiana: Housing Trust Fund
Fort Wayne, Indiana: Central City Housing Trust Fund
Indianapolis, Indiana: Housing Trust Fund
Lawrence, Kansas: Housing Trust Fund
Boston, Massachusetts: Neighborhood Housing Trust
Cambridge, Massachusetts: Housing Trust Fund
Ann Arbor, Michigan: Housing Trust Fund
St. Paul, Minnesota: STAR Program
St. Louis, Missouri: Housing Trust Fund
New Jersey: 142 COAH-approved developer fee programs
Santa Fe, New Mexico: Community Housing Trust
Greensboro, North Carolina: VM Nussbaum Housing Partnership Fund
Columbus/Franklin County: Affordable Housing Trust Fund
Toledo, Ohio: Housing Fund
Portland, Oregon: Housing Investment Fund
Charleston, South Carolina: Housing Trust Fund
Knoxville, Tennessee: Housing Trust Fund
Nashville, Tennessee: Nashville Housing Fund, Inc.
Austin, Texas: Housing Trust Fund
San Antonio, Texas: Housing Trust
Salt Lake City, Utah: Housing Trust Fund
Burlington, Vermont: Housing Trust Fund
Alexandria, Virginia: Housing Trust Fund
Manassas, Virginia: Manassas Housing Trust Fund, Inc.
Bainbridge Island, Washington: Housing Trust Fund
Seattle, Washington: Housing Assistance Funds
Washington, D.C.: Housing Production Trust Fund




Reform: History
• The gap is caused by housing segregation, location, and local funding
• De jure segregation (up to 1954): laws required separation of black and white children across schools
• Brown v. Board of Education: the Court overturned the separate-but-equal precedent, requiring integration
• De facto segregation: schools continued to be segregated even in the absence of segregation laws




VOMmean w F=(DPP-MN)/4 Concrete4150(C, W, FA, Ag) 0 1 1 1 5 1 6 1 7 1 8 4 med=14 9 1 10 1 11 2 12 1 13 5 14 1 15 3 med=18 16 3 17 4 18 1 19 3 20 9 21 4 22 3 23 7 24 2 med=40 25 4 26 8 27 7 28 7 med=56 29 10 30 3 31 1 32 3 33 6 med=61 34 4 35 5 37 2 38 2 40 1 42 3 43 1 44 1 45 1 46 4 ______ CLUS 4 gap=7 49 1 56 1 [52,74) 0L 7M 0H CLUS_3 58 1 61 1 65 1 66 1 69 1 ______ gap=6 71 1 77 1 [74,90) 0L 4M 0H CLUS_2 80 1 83 1 ________ gap=14 86 1[0.90) 43L 46 M 55H 100 1 [90,113) 0L 6M 0H CLUS_1 103 1 105 1 108 2 112 1 _____________At this level, FinalClus1={17M} 0 errors C1 C2 C3 C4 med=10 med=9 med=17 med=21 med=23 med=34 med=33 med=57 med=62 med=71 med=71 med=86 CLUS 4 (F=(DPP-MN)/2, Fgap2 _______ 0L 0M 3H CLUS 4.4.1 gap=7 0 3 =0 0L 0M 4H CLUS 4.4.2 gap=2 7 4 =7 9 1 [8,14] 1L 5M 22H CLUS 4.4.3 1L+5M err H 10 12 11 8 gap=3 12 7 ______ 0L 0M 4H CLUS 4.3.1 gap=3 15 4 =15 18 10 0L 0M 10H CLUS 4.3.2 gap=3 21 3 =18 22 7 ______ 23 2 [20,24) 0L 10M 2H CLUS 4.7.2 gap=2 25 2 [24,30) 10L 0M 0H CLUS_4.7.1 26 3 27 1 28 2 gap=2 29 1 31 3 CLUS 4.2.1 gap=2 32 1 [30,33] 0L 4M 0H Avg=32.3 34 2 0L 2M 0H CLUS 4.2.2 gap=6 40 4 =34 ______ 0L 4M 0H CLUS_4.2.3 gap=7 47 3 =40 52 1 0L 3M 0H CLUS_4.2.4 gap=5 53 3 =47 54 3 55 4 56 2 57 3 ______ gap=2 58 1 [50,59) 12L 1M 4H CLUS 4.8.1 L60 2 8L 0M 0H CLUS_4.8.2 61 2 [59,63) gap=2 62 4 ______ =64 2L 0M 2H CLUS 4.6.1 gap=3 64 4 [66,70) 10L 0M 0H CLUS 4.6.2 67 2 gap=3 68 1 71 7 ______ gap=7 72 3 [70,79) 10L 0M 0H CLUS_4.5 79 5 5L 0M 0H CLUS_4.1.1 gap=6 85 1 =79 87 2 [74,90) 2L 0M 1H CLUS_4.1 1 Merr in L Median=0 Avg=0 Median=7 Avg=7 Median=11 Avg=10.7 Median=15 Avg=15 Median=18 Avg=18 Median=22 Avg=22 2H errs in L Median=26 Avg=26 Median=31 Median=34 Avg=34 Median=40 Avg=40 Median=47 Avt=47 Accuracy=90% Median=55 Avg=55 1M+4H errs in Median=61.5 Avg=61.3 Median=64 Avg=64 2 H errs in L Median=67 Avg=67.3 Median=71 Avg=71.7 Median=79 Avg=79 Median=87 Avg=86.3 Suppose we know (or want) 3 clusters, Low, Medium and High Strength. 
Suppose we know that we want 3 strength clusters: Low, Medium, and High. We can use an antichain that gives us exactly 3 subclusters in two ways, one shown in brown and the other in purple. Which would we choose? The brown seems to give slightly more uniform subcluster sizes. Brown error count: Low (bottom) 11, Medium (middle) 0, High (top) 26, so 96/133 = 72% accurate. Purple error count: Low 2, Medium 22, High 35, so 74/133 = 56% accurate. What about agglomerating using single-link agglomeration (minimum pairwise distance)? Agglomerate (build the dendrogram) by iteratively gluing together the clusters with minimum median separation. Should I have normalized the rounds? Should I have used the same F divisor and made sure the range of values was the same in the 2nd round as it was in the 1st round (on CLUS 4)? Can I normalize after the fact by multiplying the 1st-round values by 100/88 ≈ 1.14? Or agglomerate the 1st-round clusters and then independently agglomerate the 2nd-round clusters? CONCRETE
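The single-link agglomeration mentioned above can be sketched as follows. This is a minimal illustration, not the authors' pTree implementation: clusters are represented simply as lists of 1-D F-values, and the two clusters with minimum single-link (minimum pairwise) separation are merged repeatedly until the desired count remains.

```python
# Sketch (hypothetical helper names): single-link agglomeration of 1-D clusters.
# Single-link distance between two clusters is the minimum pairwise |a - b|.

def single_link_dist(c1, c2):
    return min(abs(a - b) for a in c1 for b in c2)

def agglomerate(clusters, target_k):
    """Iteratively merge the two closest clusters until target_k remain."""
    clusters = [list(c) for c in clusters]
    while len(clusters) > target_k:
        # find the pair of clusters with minimum single-link separation
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: single_link_dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters.pop(j)  # glue j onto i
    return clusters

# Toy example: four 1-D clusters of F-values; the first two are closest
clus = [[0, 7, 11], [15, 18, 22], [55, 61], [79, 87]]
merged = agglomerate(clus, 3)
```

Using median separation instead of minimum pairwise distance, as the slide suggests, only changes `single_link_dist`.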




"Gap Hill Climbing": mathematical analysis 1. To increase gap size, we hill-climb the standard deviation of the functional F, hoping that a "rotation" of d toward a higher StDev would increase the likelihood of larger gaps, since more dispersion allows for more and/or larger gaps. This is very heuristic, but it works. 2. We are more interested in growing the largest gap(s) of interest (or the largest thinning). To do this we could proceed as follows: F-slices are hyperplanes (assuming F = dot(d)), so it makes sense to try to "re-orient" d so that the gap grows. Instead of taking the "improved" p and q to be the means of the entire n-dimensional half-spaces cut by the gap (or thinning), take p and q to be the means of the F-slice (n-1)-dimensional hyperplanes defining the gap or thinning. This is easy, since our method produces the pTree mask of each F-slice ordered by increasing F-value (in fact, it is the sequence of F-values and the sequence of counts of points giving those values that we use to find large gaps in the first place). The d2-gap is much larger than the d1-gap. It is still not the optimal gap, though. Would it be better to use a weighted mean, weighted by the distance from the gap, that is, by the d-barrel radius (from the center of the gap) on which each point lies? In this example it seems to make for a larger gap, but what weightings should be used (e.g., 1/radius^2)? (Zero weighting after the first gap is identical to the previous.) Also, we really want to identify the support-vector pair of the gap (the pair, one from each side, that are closest together) as p and q (in this case 9 and a, but we were just lucky to draw our vector through them). We could check the d-barrel radius of just these gap-slice pairs and select the closest pair as p and q???
[Figure: two panels of hex-labeled points on a 0-f grid. The left panel shows the original direction d1 and its d1-gap; the right panel shows the re-oriented direction d2 through p and q = b, whose d2-gap is much larger than the d1-gap.]
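One re-orientation step of the gap hill climbing described above can be sketched as follows. This is a simplified illustration under assumptions, not the authors' pTree code: F is taken as the dot product with d, the largest gap in the sorted F-values is found, and p and q are taken as the two boundary points of that gap (a real F-slice may contain several points, whose mean would be used instead).

```python
import numpy as np

def grow_gap(X, d):
    """One gap-growing step: re-orient d along q - p, where p and q
    bound the largest gap in the projected values F = X . d."""
    d = d / np.linalg.norm(d)
    F = X @ d                          # functional values for every point
    order = np.argsort(F)
    gaps = np.diff(F[order])
    g = int(np.argmax(gaps))           # index of the largest gap
    p, q = X[order[g]], X[order[g + 1]]  # points bounding the gap
    return (q - p) / np.linalg.norm(q - p)

# Toy example: two well-separated pairs of 2-D points
X = np.array([[0.0, 0.0], [1.0, 0.5], [5.0, 4.0], [6.0, 4.5]])
d2 = grow_gap(X, np.array([1.0, 0.0]))  # new direction points across the gap
```

Iterating `grow_gap` (projecting onto the new d and repeating) is one way to hill-climb toward a larger gap, though, as the slide notes, it need not reach the optimal gap.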




Gametes formed by segregation and independent assortment of alleles

F2 generation: each F1 parent (male and female) forms gametes BS, Bs, bS, bs, each with probability 1/4.

                      Gametes from F1 male
                BS            Bs            bS            bs
Gametes  BS  BBSS          BBSs          BbSS          BbSs
from         Black, short  Black, short  Black, short  Black, short
F1       Bs  BBSs          BBss          BbSs          Bbss
female       Black, short  Black, long   Black, short  Black, long
         bS  BbSS          BbSs          bbSS          bbSs
             Black, short  Black, short  Brown, short  Brown, short
         bs  BbSs          Bbss          bbSs          bbss
             Black, short  Black, long   Brown, short  Brown, long

(cont'd next slide)
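The Punnett square above can be generated by enumeration. This is an illustrative sketch (the `phenotype` helper is hypothetical): combining each of the four male gametes with each of the four female gametes gives the 16 F2 cells, and tallying phenotypes recovers the classic 9:3:3:1 ratio.

```python
from itertools import product

def phenotype(genotype):
    """Coat colour: B (black) dominates b (brown).
    Hair length: S (short) dominates s (long)."""
    coat = "Black" if "B" in genotype[:2] else "Brown"
    hair = "short" if "S" in genotype[2:] else "long"
    return coat, hair

gametes = ["BS", "Bs", "bS", "bs"]   # each formed with probability 1/4
counts = {}
for g1, g2 in product(gametes, gametes):
    # combine alleles locus by locus: coat-colour pair, then hair-length pair
    geno = g1[0] + g2[0] + g1[1] + g2[1]
    counts[phenotype(geno)] = counts.get(phenotype(geno), 0) + 1

# counts holds the 9 : 3 : 3 : 1 phenotype ratio of the F2 generation
```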




3. Rough pTrees A pTree is defined by a Tuple Set Predicate (T/F on every set of tuples). E.g., for bit-slices, "roughly pure1" might have the predicate "≥ x% 1-bits", 0 < x ≤ 100.
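A "roughly pure1" predicate of this kind can be sketched directly. This is a minimal illustration under the stated definition (the function name and list-of-bits representation are assumptions, not the paper's API): the predicate holds for a tuple set exactly when at least x% of its bits are 1.

```python
def roughly_pure1(bits, x):
    """Tuple Set Predicate sketch: True iff at least x% of the bits
    in the given tuple set (a sequence of 0/1 values) are 1."""
    if not bits:
        return False
    return 100.0 * sum(bits) / len(bits) >= x

assert roughly_pure1([1, 1, 1, 0], 75)       # 75% 1-bits meets a 75% threshold
assert not roughly_pure1([1, 0, 0, 0], 50)   # only 25% 1-bits
```

An ordinary "pure1" pTree is the special case x = 100.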