Fed Still Struggling to Quantify Banks’ Risk
By John Heltman
A top Federal Reserve official said the central bank is still grappling with how to quantify certain types of operational risk at the largest banks and identify reliable controls to manage them.

David Lynch, assistant director for quantitative risk management at the Fed, told a conference Wednesday that for some kinds of risk — particularly those whose incidents are rare or unpredictable — the agency lacks agreed-upon methods for counting incidents and assigning costs to them. Without those basic metrics, it is very difficult to develop useful models that banks and supervisors can rely on to identify and hedge those risks.

Operational risk is a catch-all term for costs associated with human error, legal liabilities, and natural or manmade disasters, and can include anything from cybersecurity threats to regulatory fines. The Fed has required banks to hold capital against operational risk for decades, but most recently updated its rules in December 2013 to incorporate standards adopted as part of the Basel III accords. (The operational risk changes in Basel III were identical to those in Basel II, but the Fed never got around to implementing the Basel II accords.)

Karen Shaw Petrou, managing partner at Federal Financial Analytics, said that capital "floor" will give the Fed and other global regulators even less wiggle room to let banks identify and install controls to reduce that capital burden. That may be problematic for two reasons: first, it will eliminate banks' incentive to look for ways to control their operational risk, and second, it will limit regulators' ability to manage operational risks in more creative and effective ways.

"Capitalizing operational risk does very little to ensure operational resilience, in contrast to backup systems and all the other resources," Petrou said. "There really actually is an incentive to take more risk since you're going to have to hold the same amount of capital regardless."