Chapter 11. Risk Management
A significant barrier to deploying autonomous vehicles (AVs) on a massive scale is safety assurance.
Majid Khonji et al. (2019)
Having better prediction raises the value of judgment. After all, it doesn’t help to know the likelihood of rain if you don’t know how much you like staying dry or how much you hate carrying an umbrella.
Ajay Agrawal et al. (2018)
Vectorized backtesting in general enables one to judge the economic potential of a prediction-based algorithmic trading strategy on an as-is basis (that is, in its pure form). However, most AI agents applied in practice have more components than just the prediction model. For example, the AI of autonomous vehicles (AVs) does not come standalone but rather with a large number of rules and heuristics that restrict which actions the AI takes or can take. In the context of AVs, this primarily relates to managing risks, such as those resulting from collisions or crashes.
In a financial context, AI agents or trading bots are also generally not deployed as-is. Rather, a number of standard risk measures are typically applied, such as (trailing) stop loss orders or take profit orders. The reasoning is clear: when placing directional bets in financial markets, excessively large losses are to be avoided. Similarly, when a certain profit level is reached, the success is to be protected by closing out the position early. How such risk measures are handled is, more often than not, a matter of human judgment, supported probably by ...
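To make the idea concrete, the following minimal sketch shows, in a vectorized NumPy style, at which bar a long position would be closed out by a fixed stop loss or take profit level. The function name, parameter names, and threshold values are illustrative assumptions, not the chapter's own implementation.

```python
import numpy as np

def apply_sl_tp(prices, entry, sl=0.05, tp=0.10):
    """Illustrative sketch (assumed names/thresholds): find the first
    bar at which a long position entered at `entry` breaches either
    the stop loss (sl) or take profit (tp) level.

    Returns the exit bar index and the price at that bar; if neither
    level is hit, the position is closed at the last bar.
    """
    prices = np.asarray(prices, dtype=float)
    returns = prices / entry - 1.0          # vectorized P&L vs. entry
    hit = (returns <= -sl) | (returns >= tp)  # SL or TP breached?
    idx = int(np.argmax(hit)) if hit.any() else len(prices) - 1
    return idx, prices[idx]
```

For example, with an entry at 100, a 5% stop loss, and a 10% take profit, the path `[100, 102, 106, 111, 95]` would be closed out at the fourth bar (price 111), where the take profit triggers before the later drawdown can reach the stop loss. In a full backtest, such a rule-based overlay restricts the actions implied by the prediction model, in the same spirit as the rules and heuristics surrounding an AV's AI.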