RE: [TMIP] Model of Model outputs ?

kkockelm@mail.u...

I think several people have done this over the years, JAB, since it makes great sense (if you have results from 50+ distinct runs). For example, in a 2002 Annals of Regional Science (ARS) paper (pre-print at http://www.caee.utexas.edu/prof/kockelman/public_html/ARS01ErrorPropagat...), Yong Zhao & I illustrated how 100 runs led to solid prediction of important outputs (like key-link flow values and subregional VMT & VHT, showing R2 = .77 & .95 in the Fig. 7 regression results). We were doing a lot of sensitivity testing, so it was on a subnet of just 25 zones and 818 links. Most recently (among things I've seen), I believe someone at CamSys did this for CA's HSR model results (i.e., sensitivity testing of the importance of different input & parameter assumptions, which was the actual focus of our ARS paper).

Related to this, if you're interested in more papers on the uncertainty implications of inputs (including model parameters), please visit www.caee.utexas.edu/prof/Kockelman. I think we have 10 papers there (ranging from AADT uncertainty to toll-road-demand variation, land use forecasting, and new methods for sensitivity testing of complex model results).

Moreover, co-developers and I would love to see you & others using the (open-source) Project Evaluation Toolkit (PET, at http://www.ce.utexas.edu/prof/kockelman/PET_Website/homepage.htm), which performs such simulations as a special sub-module, as described in http://www.ce.utexas.edu/prof/kockelman/public_html/TRB12ToolkitSensitiv.... PET also does demand forecasting for networks up to 300 links (with multiple modes, user classes, times of day, etc.), while delivering loads of holistically developed project (or policy) performance metrics. A special module performs knapsack budgeting across all sorts of competing projects. User documentation also explains how MPOs & their consultants can take large-region (tens of thousands of links & zones) scenario outputs and create the same holistically based performance metrics within the Toolkit.
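For readers new to the idea, the meta-modeling approach described above can be sketched in a few lines: run the full travel model many times with sampled inputs, then fit a cheap regression "model of the model" to the resulting input/output pairs. The sketch below uses a synthetic stand-in for the full model's VMT output; the input names (trip-generation rate, capacity factor) and coefficients are illustrative assumptions, not the actual ARS paper setup.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 100  # e.g., 100 full model runs, as in the ARS paper

# Hypothetical sampled model inputs: a trip-generation rate multiplier
# and a network capacity factor, each drawn across plausible ranges.
trip_rate = rng.uniform(0.8, 1.2, n_runs)
cap_factor = rng.uniform(0.9, 1.1, n_runs)

# Synthetic stand-in for the full model's regional VMT output
# (in practice this column comes from the actual model runs).
vmt = 1e6 * (0.7 * trip_rate - 0.3 * cap_factor) + rng.normal(0, 2e4, n_runs)

# Fit the meta-model by ordinary least squares:
#   vmt ~ b0 + b1 * trip_rate + b2 * cap_factor
X = np.column_stack([np.ones(n_runs), trip_rate, cap_factor])
beta, *_ = np.linalg.lstsq(X, vmt, rcond=None)

# R^2 indicates how well the cheap surrogate tracks the full model,
# analogous to the R2 values reported in the ARS paper's Fig. 7.
resid = vmt - X @ beta
r2 = 1 - resid.var() / vmt.var()
print(f"R^2 = {r2:.3f}")
```

Once fit, the surrogate can be evaluated in microseconds for sketch-planning or sensitivity questions that would otherwise require a full model run each time.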
Hope you find what you're looking for! Kara

==========================================
Dr. Kara Kockelman, PhD, PE
E.P. Schoch Professor in Engineering
Department of Civil, Architectural & Environmental Engineering
The University of Texas at Austin
301 E. Dean Keeton St., Stop C1761, Office: 6.9 ECJ
Austin, TX 78712-1112
512-471-0210 (FAX: 512-475-8744)
kkockelm@mail.utexas.edu
http://www.caee.utexas.edu/prof/kockelman
==========================================

From: jabunch.work=gmail.com@mg.tmip.org [mailto:jabunch.work=gmail.com@mg.tmip.org] On Behalf Of jabunch
Sent: Monday, May 01, 2017 11:59 AM
To: TMIP
Subject: [TMIP] Model of Model outputs ?

Does anyone recall a presentation, paper, or webinar that described developing a meta-model by analyzing the outputs of travel forecasting runs? The simplified model would be used for quick-response sketch planning or sensitivity analysis. I swear I've seen something on this recently (the last year or so) but now can't find it on the web or in my files.

JAB

--
Full post: https://tmip.org/content/model-model-outputs
Manage my subscriptions: https://tmip.org/mailinglist
Stop emails for this post: https://tmip.org/mailinglist/unsubscribe/3196

jabunch

Thanks, everyone, for your quick replies. I will track down these leads. We were thinking about applying the technique for some high-level scenario planning.

JAB