4 Book Positioning and Related Literature
Probabilistic modelling for design, decision-support, risk assessment or uncertainty analysis has a long history. Pioneering projects in the 1980s relied on rather simplified mathematical representations of the systems, such as closed-form physics or simplified system reliability, combined with a basic probabilistic treatment such as purely expert-based distributions or deterministic consequence modelling. Meanwhile, quality-control enhancement and innovation programs in design and process engineering started with highly simplified statistical protocols or purely expertise-based tools. Thanks to the rapid development of computing resources, probabilistic approaches have gradually incorporated more detailed physical-numerical models. Such complex modelling calls for a finer calibration process drawing on heterogeneous data sets or expertise. The large CPU-time requirement is all the more demanding since emerging regulatory specifications are intended to adequately predict rare probabilities or tail uncertainties. Conversely, the critical issues of parsimony, and of controlling the risk that over-parameterised, uncontrollably complex models exaggerate the confidence that may be placed in their predictions, become all the more important. This gives rise to new challenges lying at the frontier between statistical modelling, physics, scientific computing and risk analysis.
Indeed, there seems to be insufficient co-operation between ...