4 Book Positioning and Related Literature

Probabilistic modelling for design, decision support, risk assessment or uncertainty analysis has a long history. Pioneering projects in the 1980s relied on rather simplified mathematical representations of the systems, such as closed-form physics or simplified system reliability, combined with a basic probabilistic treatment such as purely expert-based distributions or deterministic consequence modelling. Meanwhile, quality control enhancement and innovation programs in design and process engineering started with highly simplified statistical protocols or purely expertise-based tools. Thanks to the rapid development of computing resources, probabilistic approaches have gradually incorporated more detailed physical-numerical models. Such complex modelling calls for a finer calibration process drawing on heterogeneous data sets or expertise. The large CPU time requirement is all the more demanding since emerging regulatory specifications require the adequate prediction of rare probabilities or tail uncertainties. Conversely, the critical issues of parsimony and of controlling the risk of over-parameterisation become all the more important, since uncontrollably complex models can exaggerate the level of confidence that may be placed in their predictions. This gives rise to new challenges lying at the frontier between statistical modelling, physics, scientific computing and risk analysis.

Indeed, there seems to be insufficient co-operation between ...
