The New Robber Barons


  The rating agencies have turned their names to mud, and Congress and the SEC do not seem to have the expertise even to identify the issues correctly. When they adopted arbitrary rating labels as benchmarks, the BIS, the Fed, and the SEC enabled junk science. Investors have no alternative to this flawed system other than doing their own fundamental credit analysis.

  II. NOT MERE OPINIONS: PART OF REGULATORY AND MARKET FRAMEWORK

  The rating agencies assert that they issue mere opinions, but the NRSRO designation gives ratings the appearance of being issued from a position of authority. Regulators and investors rely on this pseudo-authority. When named as defendants in legal disputes, rating agencies hid behind the shield of a journalist-like privilege that keeps their notes confidential. Rating agencies claimed they only issue opinions, however negligently they adhered to any reasonable professional standard when formulating those opinions. Dodd-Frank takes steps to increase transparency, but 1) it has not yet been implemented, 2) the approaches to several key identified problems have yet to be defined, and 3) the Act altogether missed some of the key issues. This will be addressed in more detail in Section VIII.

  Bank regulators and insurance regulators have enacted capital rules for banks based on ratings. Many investment funds and investors have charters requiring them to only buy products that have been rated by one or more of the top three rating agencies. Since there is no independent standard to define the meaning of a rating, the rating agencies have unintentionally been given the power to change regulatory requirements. This is an extraordinary result given that the rating agencies are private companies.

  Although they shouldn’t, many investors rely on the rating and the coupon when buying structured financial products. For many investors a high coupon did not indicate high risk. Some naïve and misguided investors viewed the extra compensation as the privilege of those fortunate enough to have a net worth high enough to be offered complex products. These investors did not think they were taking on extraordinary credit risk and extraordinary “financial engineering” when they bought products with the “AAA” rating. They thought they were buying sound, albeit less liquid, products, and they viewed the extra coupon as compensation for less transparency and less liquidity.

  Money market funds and pension funds rely on ratings. Pension funds are required to buy investments rated investment grade. The SEC is proposing that mutual funds should not rely on ratings, but the SEC is missing a piece. The SEC should not allow an investment below a previously required rating. For example, if an investor relied on an “AAA” rating before and it did not work out, that should not mean the investor may ignore the requirement and invest in something with a lower rating. Rather, the investor should still be required to buy investments carrying an “AAA” rating and should be required to determine that the value of the investment lives up to the rating.

  Ratings are relied upon as if they are based on reliable and reproducible methods, but they are not, especially when it comes to structured products.

  A. Ratings Cartel: Moody’s, S&P, and Fitch

  In 2003, I formally explained flaws with rated structured products in a book, Collateralized Debt Obligations & Structured Finance. I discussed these issues with the rating agencies and in various forums. There were already serious problems with inflated “AAA” ratings in securitizations that had inherent structural flaws, problems with supposedly investment grade rated collateral, and conflicts of interest that held investors’ capital hostage to the self-interest of “managers” and investment banks. Those conflicts of interest often resulted in substantial principal losses to investors, and the risk was not captured in the ratings. The fact that all three top rating agencies (the only raters relevant to the issue) failed seemed more than a coincidence.

  Instead of taking measures to address these flaws, the rating agencies ramped up their flawed structured products ratings business.

  This tactic was temporarily successful because the top rating agencies act as a quasi-cartel, and their fees magically converged. They have each participated in overrating “AAA” structured products backed by dodgy loans that took substantial principal losses. As I will explain later, by the end of 2006 it was clear that it wasn’t merely a question of misguided technique; the rating agencies’ integrity was questionable, and “AAA” ratings were meaningless.

  Moody’s and S&P (Fitch was the exception) presented a fairly united front in defending their methods up to the September 2008 financial meltdown. Fitch, which also participated in overrating CDOs, seemed more responsive in downgrading structured products, but none of the three has meaningfully addressed the serious problems I discuss next.

  III. RATING AGENCIES LACK REASONABLE STANDARDS: DODD-FRANK’S MISSES

  The Dodd-Frank Act has not yet been implemented. Even when implemented, it will miss problems it should have addressed. This portion of the report addresses what can go wrong and all that has already gone wrong with the rating agencies. Dodd-Frank identified some, but not all, of these problems. Subsequent sections illustrate these problems and the consequences they caused and will cause again. Section VIII explains what Dodd-Frank got partially correct and what needs to be corrected to plug the gaping holes in the legislation and restore credibility to the rating agencies and the alternative banking system.

  A. Rating Scales Are Self-Defined: Benchmarks and Methodologies Change Arbitrarily

  Rating agencies define their own risk scales. (APPENDIX VII.) In other words, there is no independent, authoritative, or consistent definition of ratings benchmarks. Methodologies and models change at will. Global banking regulators and insurance company regulators have defined regulations and capital requirements that refer to these insubstantial ratings. An unintended consequence of the changeable ratings is that rating agencies, which were not elected and are not part of government, have the ability to legislate in effect by changing the meaning of their ratings.

  Moody’s awards a rating based on its estimate of expected loss, a single piece of information, and assigns a rating ranging from the safest (least expected loss) to the riskiest (highest expected loss): Aaa, Aa1, Aa2, Aa3, A1, A2, A3, Baa1, Baa2, Baa3, Ba1, Ba2, Ba3, B1, B2, B3, Caa1, Caa2, Caa3, Ca, C. Baa3 and above is considered investment grade, and anything below is considered speculative grade. Standard & Poor’s awards ratings based on default probabilities and labels products AAA, AA+, AA, AA–, and so on. Fitch uses the same labels as S&P. As with Moody’s, BBB– and above is considered investment grade and anything below is considered speculative grade. I’ll use “AAA” to denote the highest rating, but will specifically name Moody’s, which uses the “Aaa” notation, when I am making a point specific to Moody’s.
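  The two scales and their investment-grade floors can be sketched as ordered lists. This is a minimal illustration, not any agency’s code; only the rating labels and the floors (Baa3, BBB–) come from the text above.

```python
# Moody's long-term scale, ordered from least to most expected loss.
MOODYS_SCALE = [
    "Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3",
    "Baa1", "Baa2", "Baa3",          # Baa3 is the investment-grade floor
    "Ba1", "Ba2", "Ba3", "B1", "B2", "B3",
    "Caa1", "Caa2", "Caa3", "Ca", "C",
]

# S&P and Fitch use the same ordering with different labels.
SP_SCALE = [
    "AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
    "BBB+", "BBB", "BBB-",           # BBB- is the investment-grade floor
    "BB+", "BB", "BB-", "B+", "B", "B-",
    "CCC+", "CCC", "CCC-", "CC", "C",
]

def is_investment_grade(rating: str) -> bool:
    """True if the label sits at or above its scale's investment-grade floor."""
    if rating in MOODYS_SCALE:
        return MOODYS_SCALE.index(rating) <= MOODYS_SCALE.index("Baa3")
    if rating in SP_SCALE:
        return SP_SCALE.index(rating) <= SP_SCALE.index("BBB-")
    raise ValueError(f"unknown rating: {rating}")

print(is_investment_grade("Baa3"))  # True
print(is_investment_grade("BB+"))   # False
```

  Note what the sketch cannot do: nothing in either list says what a label means in terms of risk, because each agency defines, and redefines, that for itself.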

  “Super Senior” tranches are what I called “the greatest triumph of illusion in twentieth century finance,” in my 2003 book on CDOs. I asked where the regulators were, since this fantasy would lead to bitter disillusionment for investors. Supposedly, former “AAAs” were subordinated to this tranche. In other words, the “AAA” tranche became the “first loss” tranche for the so-called super senior. There was no standard definition for this tranche, and when I specifically asked Moody’s for the definition in 2002, it abdicated responsibility for coming up with one. I pointed out the former “AAA” was not the same as the new “AAA” that had become the “first loss” for the “super senior.” Yet the label was identical. One market definition for “super senior” was that the probability of loss was 10⁻⁶, meaning there was a one in a million chance of the investor taking the loss. Nothing could have been further from the truth. Some investors in late-vintage “super senior” CDOs (2006-2007) lost all but a small single-digit percentage of their initial investment. (APPENDIX VIII.)

  Even the BIS bought into the “super senior” nonsense and awarded lower regulatory capital requirements for it. In 2002, I spoke to the head of market risk at the Chicago Fed about the problem, and he dismissed it, saying efficient markets meant the problem would be resolved in 18 months. How did that work out? In 2003, I wrote Jaime Caruana, then head of the BIS, but received no response.

  At the beginning of 2005 I wrote an article for Risk Professional rating the rating agencies. Moody’s claims its ratings reflect expected loss, a mathematical concept based on the probability of default and loss severity. Moody’s occasionally changes the expected loss that corresponds to a given rating. S&P emphasizes probability of default, and occasionally changes the probability of default level that corresponds to its ratings. Fitch’s model produced unreliable results, since Fitch kept changing its correlation matrix. Structurers using Fitch’s model dubbed it the “Fitch Random Ratings Model.”
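  The distinction between the agencies’ benchmarks matters because expected loss collapses two numbers, default probability and loss severity, into one. A minimal sketch with purely illustrative figures (none of these numbers come from any agency):

```python
# Expected loss as the text describes Moody's concept: probability of
# default times loss severity (loss given default). Illustrative only.
def expected_loss(prob_default: float, loss_severity: float) -> float:
    return prob_default * loss_severity

# Two bonds can share an expected loss yet carry very different risk:
# a 1% chance of losing everything vs. a 10% chance of losing 10%.
a = expected_loss(0.01, 1.00)
b = expected_loss(0.10, 0.10)
print(round(a, 6), round(b, 6))  # both round to 0.01
```

  Two instruments with identical expected loss can have very different default probabilities, which is one reason an expected-loss scale (Moody’s) and a default-probability scale (S&P) cannot be cleanly mapped onto each other, and why shifting the number behind a label quietly changes what the label means.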

  There is tremendous moral hazard built into this system. The rating agencies enjoy extraordinary power. There is enormous incentive to fudge over serious mistakes. If an investment grade rated securitized asset shows increasing default rates in the asset portfolio, a rating agency might be tempted to change the definition of a rating rather than confront the inaccuracy of the rating, given that regulators and markets make decisions that are ratings-driven. Moreover, in order to gain market share, rating agencies may change their definitions and methodologies to win business from banks.

  Since many money managers cannot buy bonds that are not rated investment grade, and since some are required to sell bonds that fall below investment grade, ratings have a huge impact. This is why when Moody’s admitted that impairment rates show no difference in performance between CDO tranches with a junk rating of BB– and an investment grade rating of BBB, it should have been headline financial news in mainstream financial newspapers, but it wasn’t. That problem was later swamped by misrated “AAA” structured products that were in reality non-investment grade junk.

  It would seem logical that the rating agencies’ ratings could be mapped onto one another, but they cannot be. One would think that rating agencies would at least be internally consistent, but that isn’t necessarily so, especially when it comes to securitizations.

  B. Rating Agencies’ Excuses for Heterogeneous, Multi-year Failures

  The rating agencies’ problems run deep. In late 2003, the Financial Times took rating agencies to task for misrating debt issued by scandal-ridden Parmalat, Enron and WorldCom. Fitch protested that “credit ratings bring greater transparency.” Standard & Poor’s retorted that “rating agencies are not auditors or investigators and are not empowered or able to unearth fraud.”

  I responded to rating agencies’ protests. There was ample evidence that investors were misled if they believed rating agencies provided greater transparency for structured financial products. Investors relying on ratings to indicate structured products’ performance were consistently disappointed in a variety of securitizations. S&P provided a timely example after it downgraded Hollywood Funding’s deals backed by movie receipts from “AAA” (the highest credit rating possible meant to be safe from loss of principal) to BB (a noninvestment grade, i.e., junk rating, meaning principal is at risk and the investment is not suitable for investors expecting reasonable safety).

  Bond insurers raised fraud as a defense against payment, and S&P had thought payment was unconditional. In other words, S&P had a fundamental misunderstanding about the primary character of the risk. If S&P read the documents at all, then it lacked the competence and expertise to understand the meaning of the documents.

  By this time, the rating agencies had morphed into a cartel of sorts. They competed for market share and raced each other’s “standards” to the bottom. This wasn’t merely a failure of the rating agencies, however; it was also a clear failure of regulators, since a loud alarm was already blaring from a series of similar rating “mishaps.” Rating agency failures were manifest in the rating of securitizations of manufactured housing loans, metals receivables, furniture receivables, subprime mortgage loans, and more.

  When rating agencies make mistakes in securitizations backed by debt, the losses tend to be permanent and unfixable. The sole source of income is the portfolio of assets. When they repeatedly fail to understand the risk of the underlying assets—as they have done over several years for a variety of securitizations—they blow the entire job.

  IV. MISRATING COMMERCIAL FINANCIAL SERVICES’ SECURITIZATIONS 1995-1998

  The Commercial Financial Services’ (CFS) debacle provides a stunning early example of the rating agencies’ incompetence. Rating agencies downgraded around $2 billion in securitizations backed by charged-off credit card receivables managed by Commercial Financial Services from investment grade to junk overnight.

  Court cases involving alleged fraud at Commercial Financial Services were litigated for years in the U.S. courts. Facts were reported in the mainstream financial press and became public knowledge, yet regulators took no steps to correct problems with the rating agencies.

  The proceedings of the public criminal trial of CFS’s former head, Bill Bartmann (United States of America v. William R. Bartmann), which ended in his acquittal, give insight into the flawed processes of the rating agencies, their cozy relationships with securitization professionals, and the failure of their regulators. Yet all involved were free to make the same mistakes in future asset-backed securitizations, particularly in the subprime mortgage market.

  All of the top three rating agencies, Moody’s, Standard & Poor’s, and Fitch, overrated CFS’s asset-backed securitizations. (Duff & Phelps Credit Rating Co. rated CFS’s transactions and is now a part of Fitch. Fitch IBCA also rated CFS’s securitizations.) Only Moody’s declined to rate CFS’s later transactions, but it did not withdraw its ratings on deals it had already rated. All three rating agencies gave investment grade ratings to securitizations that merited a junk rating.

  A. Underwriter Hired Analysts Who Rated the Deals

  Underwriters hire employees of the rating agencies, and this can be a conflict of interest for rating agency employees. In this example, I focus on S&P. Chase Securities, lead underwriter for CFS, hired analysts from Standard & Poor’s familiar with its methodology to work on CFS’s securitizations. One analyst in particular was hired for his “expertise,” yet he later testified that he did not use his statistics training in his work. He had special knowledge of S&P’s approach to evaluating CFS’s securitizations, and he testified that he either authored the S&P credit memo or had close involvement with S&P’s credit memo that was passed on to Chase Securities.

  The collateral for the securitizations, charged-off credit card receivables, consisted of illiquid assets with no publicly available prices. The only transactions were private and involved CFS and a handful of other participants. The ultimate value of the assets was projected based on an untested model and CFS’s representations of its ability to collect cash flows in the form of lump sum settlements or payments on “performing” loans in static pools of loans.

  B. Rating Agencies Relied on Biased Proxy Data and Ignored Statistical Principles

  S&P relied on data supplied by CFS, even though CFS had little experience with charged-off credit card receivables and claimed to have developed a proprietary model that was untested over time. Since a long period of historical data on charged-off credit card receivables was not available, CFS used data on unsecured consumer loans on which it had collected. Testimony by the Chase employee who formerly worked at S&P and testimony by an S&P employee revealed that CFS’s data did not include loans on which CFS was unsuccessful in achieving collections.

  Among the weaknesses of this analysis were two key problems with the data:

  1 Unsecured consumer loans were only a proxy for charged-off credit card receivables. The loan types had similarities but also important differences, and the correlation between their payoff behaviors was unknown; and

  2 S&P did not verify how the loans were chosen as data or whether they had been chosen randomly. This was not a representative sample of what might happen with charged-off credit card receivables, since it was reasonable to expect that CFS would not be able to collect on some of those loans. By including only loans on which collections occurred, the data was biased.
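  The effect of the second problem, survivorship bias, is easy to see with a toy sample. The recovery rates below are hypothetical, not CFS data; the point is only that dropping the failed collections inflates the estimate.

```python
import statistics

# Hypothetical pool: recovery rates (fraction of face value collected)
# on ten charged-off loans. The zeros are loans where collection failed.
recoveries = [0.0, 0.0, 0.0, 0.0, 0.30, 0.35, 0.40, 0.45, 0.50, 0.50]

# An unbiased estimate uses every loan, successful or not.
true_mean = statistics.mean(recoveries)

# A sample that drops the failures, as CFS's data effectively did,
# makes the pool look far more collectible than it is.
survivors_only = [r for r in recoveries if r > 0]
biased_mean = statistics.mean(survivors_only)

print(round(true_mean, 3), round(biased_mean, 3))  # 0.25 vs. 0.417
```

  Any model calibrated to the survivors-only numbers will systematically overstate collections, which is exactly the flaw the ratings passed through to investors.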

  C. Rating Agencies Ignored Statistical Principles and Red Flags for Three Years

  In his testimony, the former S&P employee said that he did not use his statistics training in his work regarding CFS, even though he was hired into Chase Securities’ research area with a dotted-line reporting relationship to the Managing Director and Group Head of Asset Backed Origination and was asked to draft a memo on this topic for his new boss.

  S&P and Chase Securities became comfortable with CFS’s model not because it was accurate, not because its results correlated with actual charged-off credit card collection data, not even because it correlated with the unsecured consumer loan data provided by CFS, but because it consistently produced results that made the loans used as collateral in the securities appear even better than CFS’s biased unsecured consumer loan collection data suggested.

  D. Rating Agencies Ignored an Audit and “A Classic Situation for Fraud”

  Yet nothing verified the accuracy of the model’s results. S&P and Chase Securities did not verify the utility of the data CFS provided and did not make sure the data was unbiased. A later audit report revealed that CFS’s representations of the loans’ collectability were grossly inaccurate, yet the report was ignored.