After a summer hiatus, the AI and the Municipal Bond Market series resumes. Prior articles covered the rise of AI Technology, Data, Economic Drivers, and Pricing. This piece raises difficult questions about the relevance of credit analysis in an increasingly AI- and data-driven market. Upcoming articles will cover Trading Platforms and Algorithmic Trading.
With the rapid evolution and influence of AI technology in the municipal bond market, is municipal credit relevant?
No. But with modifications…a firm maybe.
The municipal bond market has traditionally been viewed as a credit market. Municipalities and governmental authorities or agencies borrowing in this market by issuing bonds are rated and ranked by their creditworthiness—the ability and willingness to repay these debts.
Credit rating agencies such as Moody’s Ratings and S&P Global Ratings—known as Nationally Recognized Statistical Rating Organizations (NRSROs)—are paid by bond issuers to rate these bonds. For example, S&P Global ratings range from “AAA”, the best quality, to “D”, which indicates the bonds are in default. There are also interim ratings, such as “AA-” or “A+”, to designate gradations in credit quality.
Because these ratings influence investors and help determine the price of a bond, issuers are keen to be assigned the best ratings possible.
Here’s why. On July 31, 2024, the IHS Markit Municipal Bond AAA Curve 20-year maturity yielded 3.39%. The S&P Municipal Bond High Yield Index, composed of lower rated (“BBB” or nonrated) bonds, yielded 5.56% for the same date and maturity.
That 2.17% difference, commonly referred to as the “credit spread”, means millions in annual debt service to a borrower. On a $100 million bond issue, the high grade borrower pays $3.39 million in annual interest. The lower rated borrower pays $5.56 million.
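The arithmetic behind that spread cost is worth making concrete. A minimal sketch using the figures from the example above (simple annual interest on par, ignoring amortization and issuance costs):

```python
def annual_interest(par: float, yield_pct: float) -> float:
    """Annual debt service on a par amount at a given yield (simple interest)."""
    return par * yield_pct / 100

par = 100_000_000  # $100 million issue, as in the example above
high_grade = annual_interest(par, 3.39)  # AAA curve, 20-year, 7/31/2024
high_yield = annual_interest(par, 5.56)  # S&P High Yield index, same date/maturity
spread_cost = high_yield - high_grade

print(f"High grade:  ${high_grade:,.0f}")          # $3,390,000
print(f"High yield:  ${high_yield:,.0f}")          # $5,560,000
print(f"Spread cost: ${spread_cost:,.0f} per year")  # $2,170,000
```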
Ouch.
The issuer’s pain is the bondholder’s gain. Investors are constantly on the prowl to find the highest yielding bonds for the lowest risk. That’s why municipal bond credit analysts spend so much time poring over an issuer’s financial statements and other information offering insights into a borrower’s credit strength.
But in a world of AI, are ratings and credit analysis relevant anymore?
Predicting bond default rates with neural networks and machine learning
In his MIT GOV/LAB article Using AI to Finance the Things That Matter, Luke Jordan noted that AI machine learning solutions are well suited to the municipal bond market’s varied problems. Armed with ample data—“clean standardized data”—from a data set of over 4 million bonds financing over 440,000 projects, he trained a neural network to predict whether a municipal bond would default. Because defaults are rare, the problem was challenging. The model had to “look for a needle in a haystack,” he noted.
In his 2021 publication, Bond Default Prediction with Text Embeddings, Undersampling and Deep Learning, he details the rigorous methodology he developed for the model.
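Undersampling, one of the techniques named in that paper’s title, is the standard remedy for a needle-in-a-haystack class imbalance: keep every rare default and train on only a random sample of the plentiful non-defaults. A minimal sketch on toy data, illustrating the general technique rather than Mr. Jordan’s actual pipeline:

```python
import random

def undersample(features, labels, ratio=1.0, seed=0):
    """Balance a rare-default dataset: keep every default (label 1) and a
    random subset of non-defaults (label 0). ratio = non-defaults per default."""
    rng = random.Random(seed)
    defaults = [i for i, y in enumerate(labels) if y == 1]
    non_defaults = [i for i, y in enumerate(labels) if y == 0]
    n_keep = min(len(non_defaults), int(len(defaults) * ratio))
    keep = defaults + rng.sample(non_defaults, n_keep)
    rng.shuffle(keep)
    return [features[i] for i in keep], [labels[i] for i in keep]

# Toy data: 1,000 bonds, only 5 defaults -- the "needle in a haystack"
labels = [1] * 5 + [0] * 995
features = [[float(i)] for i in range(1000)]
X, y = undersample(features, labels, ratio=1.0)
print(sum(y), len(y))  # 5 defaults out of 10 bonds in the balanced set
```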
No black box here; this one is emphatically transparent.
For the data, he built a 230-dimensional feature vector for each bond. If you’re not up on the latest AI lingo, a feature vector is, essentially, a bundle of numbers for each bond, where each number encodes a category or a piece of quantitative information. That information may be a bond’s reference data (e.g., coupon, maturity) as well as unstructured data, such as the issuer, type of project, and other “text embeddings.”
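To make the idea concrete, here is a hypothetical sketch of a (much smaller) feature vector combining numeric reference data with one-hot encoded categories. The sector names and fields are invented, and the learned text embeddings in the actual 230-dimensional vector are omitted:

```python
# Invented sector vocabulary for illustration only
SECTORS = ["general_obligation", "water_sewer", "higher_ed", "hospital"]

def bond_features(coupon: float, years_to_maturity: float, sector: str) -> list[float]:
    """Bundle numeric reference data and a one-hot sector encoding
    into a single flat vector of numbers."""
    one_hot = [1.0 if sector == s else 0.0 for s in SECTORS]
    return [coupon, years_to_maturity] + one_hot

vec = bond_features(coupon=4.25, years_to_maturity=20.0, sector="water_sewer")
print(vec)  # [4.25, 20.0, 0.0, 1.0, 0.0, 0.0]
```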
The results were quite striking. The trained model correctly identified “90% of the bonds that would default at the time the bond was issued,” he noted in his MIT GOV/LAB article.
More striking is that he did not use ratings. He did not use financial statement information.
No Balance Sheets. No Statement of Net Position. No Income Statements. No Statement of Revenues. No Cash Flows.
Not one iota of all that traditional financial information the municipal bond market crunches to assess credit risk.
While these results could be read as AI demonstrating that fundamental credit analysis is immaterial, Mr. Jordan was careful to add that it isn’t possible to say categorically that the model shows financials don’t matter to default risk. After all, one data point in the model is credit spread, which, he suggests, could be argued to serve as a summary proxy for financial analysis.
But he didn’t back away entirely. If credit spread at issuance does give some insight into financials, it’s weak, and likely subjective, he observed. Credit spreads weren’t found to be among the model’s most important determinant factors. What’s more, he continued, while the model didn’t prove financials don’t matter, it did indicate they’re not as important as usually considered when assessing credit risk.
Considering the model didn’t use financial data, it’s hard to see exactly what importance they provided at all.
It’s easy to imagine the municipal bond participants who have relied on these time-worn metrics for decades to be in high dudgeon, fulminating and at the ready to read that report with a self-serving analytic scalpel to do some slicing and dicing of their own.
But before trying to slice and dice the AI methodology, it would be better to consider why AI was able to skip the financials. There are five factors.
What Are The Odds?
Factor One. Look at the default rate in high grade municipal bonds. The Moody’s US Municipal Bond Defaults and Recoveries, 1970-2022 report notes (page 9) that over that span, the average cumulative five-year default rate of the highest grade general government municipal bond issuers—those rated Aa or Aaa on the Moody’s rating scale—was 0.01%. Go out over a 10-year period and the rate is 0.02%. Now stay 10 years out and go down one rating grade to include bond issues rated single-A. The aggregate cumulative default rate was a barely perceptible 0.07%.
Even if you look at high grade (single-A to Aaa rated) municipal utility issuers such as electric utilities, tolls, and water/sewer systems, the cumulative default rate is 0.03% over the 5-year period and 0.09% over the 10-year period. Competitive enterprises, which include higher education, hospitals, and housing, have a cumulative default rate of 0.09% over the 5-year period and 0.23% over the 10-year period.
Let’s translate this into investor-speak. If you have a buy-and-hold portfolio of high-grade municipal bonds maturing in 10 years or less, the likelihood of a bond in your portfolio defaulting is only somewhat worse than your chance of being hit by lightning.
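The lightning comparison can be roughed out directly. Assuming issuer defaults are independent (a simplification), the chance that at least one bond in a portfolio of n bonds defaults is 1 - (1 - p)^n:

```python
def p_any_default(per_bond_rate: float, n_bonds: int) -> float:
    """Probability at least one bond defaults, assuming independent issuers."""
    return 1 - (1 - per_bond_rate) ** n_bonds

# Moody's 10-year cumulative default rate for A-to-Aaa general governments: 0.07%
p = p_any_default(0.0007, 50)  # a hypothetical 50-bond high grade portfolio
print(f"{p:.2%}")  # ~3.4% over a full decade, across all 50 holdings
```

Even a diversified 50-bond portfolio, held for ten years, carries only a single-digit chance of seeing one default, let alone a loss after recoveries.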
Take this one step further. If the default risk is so low, then how exactly should rating upgrades or downgrades be valued in the market? If your bond is rated “Aa3” by Moody’s and is downgraded to A1, does it—or should it—matter? Isn’t this just counting how many angels can dance on the head of a pin?
All this builds to the final question: what, exactly, justifies any of the credit spreads in the high grade municipal bond market?
Probability. Or Probability Not.
Factor Two. There is a certain irony that Nationally Recognized Statistical Rating Organizations seem to be lacking in the ‘statistical’ department when it comes to municipal bond ratings. Moody’s 2007 report The U.S. Municipal Bond Rating Scale: Mapping to the Global Rating Scale And Assigning Global Scale Ratings to Municipal Obligations notes that “Unlike Moody’s global scale ratings, which measure ‘expected loss’ (default probability times loss given default), Moody’s long-term municipal ratings measure the intrinsic ability and willingness of an entity to pay its debt service.” Despite their 2010 recalibration of the U.S. municipal sector ratings, this remains true today.
Similarly, in its 2022 Guide to Credit Rating Essentials, S&P Global notes “Credit ratings are not absolute measures of default probability.” It continues, “Since there are future events and developments that cannot be foreseen, the assignment of credit ratings is not an exact science. For this reason, S&P Global Ratings opinions are not intended as guarantees of credit quality or as exact measures of the probability that a particular issuer or particular debt issue will default.” In another more recent report, the rating agency states “Our credit ratings are forward-looking opinions about an issuer’s relative creditworthiness. They assess the relative likelihood of whether an issuer may repay its debts on time and in full.” Not a specific probability or even a range of probabilities in sight.
[For the record, I asked both Moody’s and S&P Global to comment on their respective positions. Neither responded.]
For all the carefully vetted municipal bond rating methodologies of the NRSROs, ratings are neither deterministic nor probabilistic. They seem detached from a clear, defined taxonomy quantitatively linked to actual risk of default. Lacking this, other than emanations and penumbras, how exactly are bond prices supposed to translate NRSRO ratings into basis points?
No Comparison
Factor Three. The market lacks an overall standardized disclosure taxonomy for municipal bond issuers. Unlike their corporate counterparts, who disclose in the global standard of Extensible Business Reporting Language (XBRL), neither general obligation nor revenue bond borrowers have such consistency.
And without a standard set of metrics and definitions, making comparisons between borrowers can be difficult. In the municipal bond market, a rose is a rose is a rose is an onion. Apologies to Gertrude Stein.
No wonder this market is so often described as opaque.
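Here is what the lack of a shared taxonomy means in practice: the same concept surfaces under different labels from different issuers, and anyone comparing them has to maintain the crosswalk by hand. A hypothetical sketch, with all field names invented:

```python
# Two issuers reporting the same concepts under different labels.
# The mapping below is exactly the kind of hand-built crosswalk a
# standardized taxonomy (like XBRL for corporates) would make unnecessary.
FIELD_MAP = {
    "issuer_a": {"Unrestricted Fund Balance": "unrestricted_balance",
                 "Total Governmental Revenues": "total_revenue"},
    "issuer_b": {"Available General Fund Balance": "unrestricted_balance",
                 "Gross Revenues, All Funds": "total_revenue"},
}

def normalize(issuer: str, report: dict) -> dict:
    """Translate an issuer's idiosyncratic labels into a common schema."""
    mapping = FIELD_MAP[issuer]
    return {mapping[k]: v for k, v in report.items() if k in mapping}

a = normalize("issuer_a", {"Unrestricted Fund Balance": 12.5,
                           "Total Governmental Revenues": 240.0})
b = normalize("issuer_b", {"Available General Fund Balance": 8.1,
                           "Gross Revenues, All Funds": 310.0})
print(a["unrestricted_balance"], b["unrestricted_balance"])  # 12.5 8.1
```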
Granted, with default rates counted to the right of the decimal point, there’s likely some wiggle room in those comparisons without risk of catastrophic error. Additionally, it’s not completely a free-for-all. State and local governments prepare an extensive, detailed Annual Comprehensive Financial Report (ACFR), which includes 10-year trends in finances, economics, demographics, and operations.
Bristling with this information, these reports commonly weigh in at two hundred pages or more. The financial statements included are prepared according to standards established by the Governmental Accounting Standards Board (GASB). Similarly, revenue bond issuers and borrowers generally adhere to Generally Accepted Accounting Principles in their financial reporting.
All well and good if the data in the annual financial reports and ACFRs were disclosed digitally in structured, machine-readable form. But of the 50,000 or more bond issuers reporting, you can count on one hand the ones that do. Which introduces the fourth issue.
Pixel Dust
Factor Four. This is where it gets patently bizarre. Nearly every governmental entity, from hamlets to states, manages its finances in digital form, whether in an Excel spreadsheet or OpenGov’s robust cloud-based software. Yet in disclosure filings, this data is reconstituted into decades-old PDF technology. A PDF is, in effect, a picture: everything in it is pixels. It’s unstructured, not digital, not readily machine readable. Numbers and text are trapped like insects in amber.
The result? Municipal bond investors are compelled to unscramble the pixelated PDF disclosure egg to get data sort-of back to its digital form. It’s an absurdity that even Samuel Beckett and Eugene Ionesco combined couldn’t make up.
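What that unscrambling looks like in practice: even after the text layer is pulled from a PDF (scanned filings need OCR first), the line items still have to be scraped back into structured data. A simplified sketch on an invented report fragment:

```python
import re

# Sketch only: real disclosure PDFs require OCR or table-extraction tooling.
# Assume the text layer has already been extracted as a plain string.
page_text = """
Total Revenues ............ $ 1,234,567
Total Expenditures ........ $ 1,100,250
"""

def scrape_line_items(text: str) -> dict[str, int]:
    """Recover labeled dollar amounts from dot-leader report lines."""
    pattern = re.compile(r"^([A-Za-z ]+?)\s*\.+\s*\$\s*([\d,]+)\s*$", re.MULTILINE)
    return {label.strip(): int(amount.replace(",", ""))
            for label, amount in pattern.findall(text)}

items = scrape_line_items(page_text)
print(items)  # {'Total Revenues': 1234567, 'Total Expenditures': 1100250}
```

Every issuer formats these lines differently, so each filing can demand its own fragile variant of this pattern, which is precisely the absurdity.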
If you’re using AI machine learning or deep learning models (the latter a subset of the former) to parse data in the municipal bond market and you’ve already determined financials are not particularly salient in pricing or risk analysis, why bother at all jumping through these hoops to get them?
Bad Timing
Factor Five. In an excellent ongoing series of reports from the University of Illinois-Chicago and Merritt Research Services, Credit Rating and Geography: Examining the Timeliness of Municipal Bond Audits, authors Deborah Carroll and Richard Ciccarone document the numerous problems of lagging disclosure in grim detail. The median audit time across all municipal bond sectors has increased nearly 10.5% over the last decade or so, from 152 days in 2011 to 168 days in 2022. That marks a 12-year high, despite repeated transparency concerns raised by municipal bond investors and government watchdog groups alike.
Summary Judgement
Let’s summarize. Nearly undetectable default rates, ratings unattached to quantifiable risk probabilities, no standardized disclosure taxonomy, reporting in cumbersome non-machine-readable formats, and audits running nearly six months past the close of the fiscal year.
Is it any wonder AI streams by traditional credit metrics like a river flowing around a rock? The rock stays stuck. The river moves on.
In the next piece on AI and the Municipal Bond Market: Can AI Make Credit Relevant?