Chapter 6
Data Risk Measurement
Like models, data is both useful and, to the extent that it is used, risky. As discussed extensively in Chapter 2, the main causes of data risk, and its mitigation, are reflected in the firm's overall data and information processing infrastructure. Where that design supports the controlled acquisition of data, a unidirectional flow of data from the core out to multiple end users, and efficiently targeted data quality control points, overall data risk will be low. For firms whose infrastructures lack these features, the risks will likely be higher and may be extremely difficult to assess with any rigor. For firms with scattered systems, multiple points of entry, and only downstream data quality mechanisms, proactive data risk management will be difficult, and risk measurements may have to be gleaned from the frequency of observed errors and from the intensity and scope of ongoing reconciliation processes. At most institutions, organized data quality programs are in their infancy and are often seriously constrained by deficiencies in system configuration. Again, the observations of regulators are apropos:
3.14. Data dictionary: Firms had started to create data dictionaries, which was seen as a good approach to understanding and classifying data. However, few firms could evidence the effectiveness of existing procedures to ensure the timely maintenance and consistent use of the data dictionary across the firm.
3.15. Data ...