The Risky Game of Credit Underwriting

Credit underwriting decisions are a cornerstone of any economy. Made wisely, they can assist entrepreneurship, promote economic growth, and generally ensure that capital is allocated to its highest and best use. Poor credit underwriting decisions, on the other hand, can damage an industry or the economy as a whole. Recent troubles in the U.S. economy are directly tied to lenders' poor decisions to extend credit to prospective homeowners who had little money and provided little information about their financial strength, all in an over-inflated housing environment. Recent failures of banks such as IndyMac are partly tied to poor credit underwriting decisions and over-leveraging. Banks' failure to consider the full range of construction risk is leaving many of them high and dry amid the recent spate of construction business failures, with many more to come. The five consecutive years of recent losses in the surety industry were directly related to poor credit underwriting decisions. With all of these losses, you have to wonder what is going wrong. The answer is twofold: an unusually high tolerance for risk and credit decisions based upon insufficient data.


In the case of mortgages that went bad, the ability to package and resell loans fostered an anything-goes atmosphere, and many risk management practices were thrown out the window. Many loans were granted based on simple applications that provided minimal financial information. The fallout of this lending environment is showcased on Mortgage Lender Implode-o-Meter. In the case of IndyMac, a large portfolio of non-performing Alt-A loans, sometimes called "liar loans," plus risky construction and land development lending, left the bank with very little cushion in a falling housing market. Other banks hit by losses relied on financial data alone, failing to consider all the risks of lending to high-risk industries such as construction and auto dealerships.

For instance, how many lenders gave adequate consideration to potential increases in fuel costs and their effect on buying habits? Just ask a Hummer dealership with vehicles anchored to the lot in concrete how well-prepared it was. For that matter, Mike Shedlock makes a convincing argument that the entire U.S. auto industry was caught off guard by changing consumer sentiment; it's well worth reading. The fallout is that we could very easily see the implosion of a U.S. Big 3 automaker, and more bank failures are almost certain to follow. Bank Implode-o-Meter provides a sobering play-by-play.

Predictably, after experiencing losses and watching other banks fail, banks are pulling back: the New York Times reports that Worried Banks Sharply Reduce Business Loans. Equally predictable, however, is that the reduction in business loans comes too late to significantly reduce losses, and will actually do more economic harm than good at an aggregate level.

Guarantors (Sureties)

In the case of sureties, dependency on old financial data, insufficient attention to all risk factors, and generally loose credit set up the industry for a fall. The industry was ill-prepared for any shock to the system, and 9/11 proved that, exacerbating losses that would not reverse until 2005.

Chart: Surety Industry Losses (data from the Surety Association of America)

Riding a wave of construction spending and bubbling real estate valuations, the surety industry posted good results for 2005-2007. However, we doubt that trend will continue past year-end 2008 into 2009; the industry remains exposed to economic shocks:

In the past quarter, many contractors have cut volume significantly, some in half! Imagine trying to carry the same overhead on 50% of your previous sales. Most companies don't know how to manage that kind of falloff, and many won't, ending in failures that sureties will have to cover. Chubb recently reported a large surety loss of $75 million and a surety combined ratio of "128.4% due to one large loss." While many sureties "appear" to have learned their lessons from the loose practices of the early part of the decade by implementing tighter underwriting guidelines, we still expect to see more of these "surprise" losses. Why? Because, for the most part, sureties still aren't using all the underwriting data they should be, and many contractors are only now starting to feel the brunt of a severely tightening construction environment.
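For readers unfamiliar with the metric: a combined ratio is simply incurred losses plus underwriting expenses divided by earned premium, so anything above 100% means the book lost money on underwriting. A minimal sketch (the dollar figures below are hypothetical, not Chubb's actual results):

```python
def combined_ratio(incurred_losses, underwriting_expenses, earned_premium):
    """Combined ratio as a percentage: above 100% means an underwriting loss."""
    return (incurred_losses + underwriting_expenses) / earned_premium * 100

# Hypothetical book: $75M of losses and $40M of expenses on $90M of earned
# premium -- the insurer paid out far more than it took in.
print(round(combined_ratio(75e6, 40e6, 90e6), 1))  # -> 127.8
```

Note that the ratio ignores investment income, which is why insurers can sometimes remain profitable overall with a combined ratio modestly above 100%.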

So what does all this mean? In simple terms, neither banks, sureties, nor those receiving credit are accounting for all potential risks; consequently, their decisions are made without due consideration of all the risk factors present. Although it is typical to see a high number of business failures in risky industries during downturns, if we want improved underwriting practices (and a better-functioning economy), losses shouldn't be accepted with an "oh, well" mentality.

The Underwriting Deficiency Problem

So how could some of these losses be prevented? It is inevitable that there will be truly unexpected, "Black Swan" events that lead to unpredictable losses; no one can foresee them completely. That said, financial companies suffer more predictable, common losses every day because of flaws in the underwriting process. These flaws generally fall into two categories: an unusually high tolerance for risk and poor underwriting data. Addressing these issues can lead to better, more predictable results:

1) Unusually High Tolerance for Risk

This problem can never be fixed completely, but a good first step is an integrated risk management program with rigid policies and procedures that are strictly adhered to when granting credit. Unfortunately, most financial institutions already have strict underwriting guidelines in place; they simply are not followed on a consistent basis. When markets soften and competition increases, it's all too common for underwriters to "bend" guidelines to win business. Not bending guidelines, in fact, becomes very difficult, because pressure mounts from management, and sometimes Wall Street, to maintain consistent revenue growth. This leads companies to tolerate a higher level of risk than they should. Ultimately, this is a management or psychological issue. The more pressing issue is the lack of useful, truly predictive data:

2) Incomplete or Misleading Underwriting Information

A common complaint we hear from underwriters and credit providers is that they just don't have a good sense of what goes on behind the four walls of their potential clients. No matter how much financial information they receive, there's always the lingering uncertainty of how valid the data really is. At the end of the day, financial data is only as good as the inputs used to produce it. How is an underwriter supposed to know whether the inputs are correct? More importantly, how are they supposed to know whether strong financials are due to a company playing Russian Roulette with risk, as discussed in the "Success Paradox," or due to a company being truly well-run? Basically, credit decisions are only as good as the information they are based upon. If accurate or actionable information is unavailable, underwriting decisions will suffer, especially when underwriters have to guess how predictive the data really is. There are three primary causes of poor underwriting data: insufficiency, inaccuracy, and obsolescence:

a) Insufficiency is a lack of data, and it is usually a poor excuse for underwriting failures. Plenty of management training programs teach "if you are missing something, go get it." Some underwriters do; a lot don't. For optimum results, all data that can help predict the likelihood of a loss should be considered. Unfortunately, sometimes data is simply not available at all, or not available in an easily usable form.

b) Inaccuracy is caused by (1) fraudulent data or (2) poorly compiled data, and is a constant concern in every company. The old adage “garbage in – garbage out” really applies here. If the data is not accurate, then underwriting decisions will be impacted. Most inaccurate data in business stems from poor accounting procedures.

c) Obsolescence refers to data that is old and can be misleading. In most industries, data less than one year old is typically good enough for credit decisions. However, in risky, volatile industries, cash moves through companies very quickly; in these industries, data even a few months old often leads to poor credit decisions. That is why creditors should always want the most recent information available.

So how can these problems be solved?

Finding a Fix

Until now, almost all underwriting decisions have been based primarily on quantitative data, that is, financial information. However, financial information alone has clearly not served to adequately stem the losses that have occurred. To fix the problem of data insufficiency, more data would obviously help, and that is why qualitative data about an individual or company can greatly assist in underwriting decisions. In the case of mortgage lending, would losses have been reduced if consideration had been given to the risk of deflated home values, the risk of concentrated loan types, or the risk of lending to those with a single source of income? The answer is yes, and all of that is qualitative data. In the case of construction lending, would losses have been reduced if banks had considered the risk of a contractor's heavy revenue concentration in a single type of work, or the risk that developers would be unable to pay contractors if houses did not sell? The answer is again yes, and that too is qualitative data.

So why isn't qualitative data considered to a greater degree? There simply has been no standardized means of collecting qualitative risk data, and even when it was collected, there has been no way to compile and understand it. To further compound the problem, qualitative data is typically subjective and difficult to quantify. Consequently, although qualitative risk data can fill a number of credit underwriting deficiencies, it simply hasn't been available in a useful form, so it hasn't been relied upon for credit decisions. It just didn't fit into the tried-and-true world of numerical reporting… until now.

Recent Catalysts for Change to Underwriting Practices

Although computers and the Internet have played a role in the development of qualitative information for use in credit decisions, the real catalyst has been the emergence of Enterprise Risk Management (ERM), which has come onto the shores of corporate America in a strong wave. Ever since the Enron fiasco, the Arthur Andersen debacle, and the legislative response of Sarbanes-Oxley, attention has turned more than ever toward examination of business practices and metrics to assure that company records are handled under strict procedures and guidelines. The immediate response to Sarbanes-Oxley was the development of corporate governance systems designed not only to incorporate procedures for handling company records, but also to install business controls. However, these systems initially failed to consider all business practices that could impact profitability. In addition, they typically looked at practices in a binary, on-or-off manner: either practices were in place or they weren't. ERM, on the other hand, considers all business practices that can impact profitability and examines them on a graduated scale, i.e., not in place, poorly in place, functional but needing improvement, acceptably in place, etc. As such, it provides a more holistic view of the entire corporate framework and the inherent risk to enterprise objectives. Recently, S&P and other rating agencies have begun to review how well companies utilize Enterprise Risk Management as part of their credit rating procedures. However, the rating agencies' reviews seem to be focused more on whether an ERM system exists in a company and less on actually assessing the value of the controls in place.

A true enterprise risk assessment determines whether a company has the necessary systems and controls in place to maintain profitability, whether its accounting procedures are strong enough to produce reliable financial information, and whether it is exposed to undue risk as a result of overlooked exposures. Data from the risk assessment is analyzed to reveal profitability risk and potential increases in failure risk, with the added benefit of validating the quality of the financial information a company generates. In addition, it is usually available within a week of the assessment, a much shorter time frame than financial reports. By providing additional underwriting information and analyzing the quality of accounting practices in near real time, enterprise risk data addresses the three causes of poor underwriting data: insufficiency, inaccuracy, and obsolescence. With this data, a credit-granting decision can be made without that unknown hanging over the underwriter. Just what goes on behind those four walls no longer needs to be a mystery. Now it can be known.

A Long Awaited Solution

To be truly effective as an underwriting tool, there must be a standardized system for gathering and analyzing enterprise risk data. In addition, because the risk factors in each type of business vary considerably, any system that attempts to collect qualitative risk data and quantify it must be designed specifically for an industry. For example, a risk factor in the restaurant industry would be the presence of rodents, which is not really a concern in the construction industry. MyRiskControl utilizes the patent-pending DGR Risk Analysis System, which is licensed from Druml Group, Inc. The DGR Risk Analysis System provides a standardized means for assessing and analyzing qualitative risk in a business enterprise. The source data required to produce a MyRiskControl Report comes from a standardized risk assessment, which can be performed either by a Certified Professional Assessor of Enterprise Risk or by a company's own personnel. Although both approaches can produce accurate reports, company personnel may have inherent biases or lack a full understanding of all the risk factors. Certified Professional Assessors of Enterprise Risk are trained to be fully knowledgeable about each risk factor to ensure the greatest accuracy and neutrality of the resulting risk analysis report. In turn, the risk analysis report scores and rates the overall severity of risk present in the assessed company. MyRiskControl Reports are often provided to banks, sureties, and other creditors or guarantors trying to obtain a more complete picture of their prospects. As a result, credit underwriters no longer have to rely solely on financial data, but can get a much clearer view of the current and projected health of their applicants to avoid undue risks. Now when choosing whether to grant credit, underwriters no longer have to play their own high-stakes game of Russian Roulette.
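The general idea of turning graduated, industry-specific qualitative assessments into a comparable number can be sketched in a few lines. To be clear, the DGR Risk Analysis System is proprietary and its actual factors, scale, and scoring method are not public; every name, weight, and scale below is invented purely to illustrate the concept.

```python
# Hypothetical sketch: quantify qualitative risk factors for one industry.
# Each factor is scored on the graduated scale the article describes:
# 0 = not in place, 1 = poorly in place, 2 = functional but needs
# improvement, 3 = acceptably in place.

# Invented construction-industry factors; weights (summing to 1.0) reflect
# an assumed relative impact on profitability.
CONSTRUCTION_FACTORS = {
    "project_cost_controls":   0.30,
    "accounting_procedures":   0.25,
    "revenue_diversification": 0.25,
    "backlog_management":      0.20,
}

def risk_score(assessment, weights):
    """Weighted average of factor scores, rescaled to 0-100 (100 = lowest risk)."""
    total = sum(weights[factor] * assessment[factor] for factor in weights)
    return total / 3 * 100  # 3 is the maximum per-factor score

# Example assessment of one (fictional) contractor.
example = {
    "project_cost_controls":   2,
    "accounting_procedures":   3,
    "revenue_diversification": 1,
    "backlog_management":      2,
}
print(round(risk_score(example, CONSTRUCTION_FACTORS), 1))  # -> 66.7
```

A separate weight table per industry captures the article's point that the same system cannot score a restaurant and a contractor against identical factors.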
