Friday 12 July 2013

Conventional Ratings: Why The Abysmal Performance?




Why are ratings so unreliable? In a recent article, which appeared on 19 July 2012 on Thomson Reuters News & Insight, one reads:

"This is how the role of the credit rating agencies was described by the Financial Crisis Enquiry Commission in January 2011: 

The rating agencies were essential to the smooth functioning of the mortgage-backed securities market.  Issuers needed them to approve the structure of their deals; banks needed their ratings to determine the amount of capital to hold; repo markets needed their ratings to determine loan terms; some investors could buy only securities with a triple-A rating; and the rating agencies’ judgment was baked into collateral agreements and other financial contracts.1 

The performance of the credit rating agencies as essential participants in this market has been abysmal.  In September 2011 Moody’s reported that new “impairments,” that is, non-payments, of principal and interest obligations owed to investors through these structured-finance products soared from only 109 in 2006 to 2,153 in 2007 to 12,719 in 2008 and peaked at 14,242 in 2009, but with still more than 8,000 new impairments in 2010. Thus, more than 37,000 discrete investment products defaulted in that time period.
..........

Substantial evidence has suggested that this epidemic of ratings “errors” was not the product of mere negligence, but rather was the direct and foreseeable consequence of the credit rating agencies’ business models and their largely undisclosed economic partnerships with the issuers that paid them for their investment-grade ratings."


Setting aside incompetence, conflicts of interest, the "special relationships", etc., we wish to concentrate on what is probably the most fundamental reason for the "epidemic of rating errors": the underlying flawed mathematical approach. Yes, the mathematics behind conventional risk rating is flawed not only from a purely mathematical and philosophical perspective; it also opens the door to numerous means of manipulating the results. There are lies, damn lies and statistics. The tools offered by statistics - extremely dangerous in the wrong hands - are the main enabler. In particular, let us look at the concept of correlation, the most fundamental quantity in anything that has to do with risk, ratings, VaR, and their assessment and management.


A correlation measures how two parameters are related to each other as they vary together. Let us see a few significant cases:


A strong linear correlation: R² = 0.83





The above situation is quite frequent in textbooks or in computer-generated examples. In reality, this is what you encounter most often:







The problem becomes nasty when you run into situations such as this, in which R² = 0, but which, evidently, convey plenty of information:






The evident paradox is that you have a clear structure and yet statistics tells you that the two parameters are independent. A clear lie in the face of evidence.

And what about cases like this one?









It is easy to draw a straight line passing through two clusters and call it a "trend". But in the case above it is not a trend we see but a bifurcation. A totally different behavior. Totally different physics. Two clusters point to a bifurcation, N clusters could point to N-1 bifurcations... certainly not to a trend.

And finally, how would one treat similar cases?







The data is evidently structured but the correlation is 0. How do you deal with such situations?

The key issue is this: when looking at portfolios composed of thousands of securities, or at other multi-dimensional data sets in which hundreds or thousands of variables are present, correlations are computed blindly, without actually looking at the data (the scatter plots). Who would? There are hundreds of thousands of correlations involved when dealing with large data sets. So one closes an eye and simply throws straight lines on top of the data. Spurious trends are taken for real ones, while significant trends are discarded simply because they do not fit a linear model. What survives this overly "democratic" filtering goes on to the next step, to create more damage. Consider, for example, Modern Portfolio Theory (MPT), developed by Markowitz, which hinges on the covariance matrix. Covariance is, of course, intimately related to correlation, and the flaw propagates deeply and quickly. Think of all the other places where you plug in a covariance, a standard deviation or a linear model. Think of how many decisions are made by blindly trusting these concepts. We must stop bending reality to suit our tools. We have become slaves of our own tools.
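To make the point concrete, here is a minimal sketch in Python (with invented data) showing how a plainly structured data set yields a correlation of essentially zero, and how that number would then flow, unquestioned, into a covariance matrix:

    import numpy as np

    rng = np.random.default_rng(0)

    # A plainly structured relationship: a symmetric parabola plus mild noise.
    x = np.linspace(-1.0, 1.0, 500)
    y = x**2 + rng.normal(0.0, 0.02, size=x.size)

    # The linear correlation is essentially zero...
    r = np.corrcoef(x, y)[0, 1]
    print(f"Pearson r = {r:+.3f}  (R^2 = {r**2:.3f})")

    # ...yet a quadratic fit recovers almost all of the variance.
    y_hat = np.polyval(np.polyfit(x, y, 2), x)
    print(f"R^2 of quadratic fit = {1 - ((y - y_hat)**2).sum() / ((y - y.mean())**2).sum():.3f}")

    # Fed blindly into a covariance matrix (the input to Markowitz-style optimization),
    # this pair simply looks 'unrelated': the off-diagonal term is near zero.
    print(np.cov(x, y))

The off-diagonal covariance term is exactly what a Markowitz-style optimizer consumes; the structure that is obvious in a scatter plot never enters the calculation.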



www.rate-a-business.com



Thursday 11 July 2013

The Present and Future of the Concept and Value of Ratings



“Rating agencies? Do not speak evil of the dead” (Corriere della Sera, 15/1/2009). This is an extreme synthesis of the widespread opinion and of the doubts cast on rating agencies and on the value of the concept of rating. The subprime crisis, Enron, Lehman Brothers and Parmalat are just a few eloquent examples of how a business or a financial product may collapse shortly after being awarded a very high investment-grade rating. But are these isolated cases, which expose the weaknesses of rating processes, or just the tip of an iceberg? Our intention is to provide a critical overview of the concept of rating and to question its conceptual validity and relevance. We believe that the increasing levels of global uncertainty and inter-dependency – complexity, in other words – as well as the socio-economic context of our times, will place under pressure and scrutiny not only the concept of rating, but all conventional and established risk management and Business Intelligence techniques. Unquestionably, the concepts of risk and risk rating lie at the very heart of our troubled global economy. The question is to what extent ratings contributed to the trouble and what, as an alternative, can be done to improve or replace them altogether.

The concept of rating is extremely attractive to investors and decision-makers in that it removes the burden of having to go over the books of a given company before the decision is made to invest in it or not. This job is delegated to specialized analysts whose work culminates in the rating. The rating, therefore, is an instrument of synthesis, and it is precisely this characteristic that has made it so widespread. However, contrary to popular belief, a rating is not, by definition, a recommendation to buy or sell. Nor does it constitute any form of guarantee as to the credibility of a given company. A rating is, by definition, merely an estimate of the probability of insolvency or default over a certain period of time. In order to assign a rating, rating agencies utilize statistical information and statistical models which are used in sophisticated Monte Carlo simulations. In other words, information on the past history of a company is taken into account (typically, ratings are assigned to corporations which have at least five years of certified balance sheets) under the tacit assumption that this information is sufficient to hint at what the future of the company will look like. In a “smooth” economy, characterized by prolonged periods of tranquility and stability, this makes sense. However, in a turbulent, unstable and globalized economy, in which the “future is under construction” every single day, this is highly questionable.

The process of rating, whether applied to a corporation or to a structured financial product, is a highly subjective one. The two main sources of this subjectivity are the analyst and the mathematical models employed to compute the probability of insolvency. It is in the mathematical component of the process that we identify the main weakness of the concept of a rating. A mathematical model always requires a series of assumptions or hypotheses in order to make its formulation possible. In practice this means that certain “portions of reality” have to be sacrificed. Certain phenomena are so complicated to model that one simply neglects them, hoping that their impact will be negligible. As already mentioned, this approach may work well in situations dominated by long periods of continuity. In a highly unstable economy, in which sudden discontinuities are around every corner, such an approach is doomed to failure. In fact, the usage of models under such circumstances constitutes an additional layer of uncertainty, with the inevitable result of increasing the overall risk exposure. But there is more. In an attempt to capture the discontinuous and chaotic economy, models have become more complex and, therefore, even more questionable.

In the recent past we have observed the proliferation of exotic and elaborate computer models. However, a complicated computer model requires a tremendous validation effort which, in many cases, simply cannot be performed. The reason is quite simple. Every corporation is unique. Every economic crisis is unique. No statistics can capture this fact, no matter how elaborate. Moreover, the complexity and depth of recently devised financial products has greatly surpassed the capacity of any model to fully embrace their intricate and highly stochastic dynamics, creating a fatal spill-over effect into the so-called real economy. We therefore stress that the usage of models constitutes a significant source of uncertainty which is superimposed on the turbulence of the global economy, only to be further amplified by the subjectivity of the analyst. In a discontinuous and “fast” economy no modeling technique can reliably provide credible estimates of the probability of insolvency, not even over short periods of time. It is precisely because of this fundamental fact that the concept and value of rating become highly questionable.

For most companies today, survival is going to be their measure of success. For example, AA and BB ratings indicate, respectively, a probability of insolvency of 1.45% and 24.57% within a period of 15 years. What is astonishing is not just the precision with which the probability is indicated, but the time span embraced by the rating. The degree of resolution – the number of rating classes – is unjustified in a turbulent economy. The fact that a rating agency can place a corporation in one of 25 or more classes suggests that there is sufficient “precision in the economy” and in the models to justify such resolution. Clearly, the economy is not that precise. Table 1 indicates the rating classes as defined by the three major rating agencies (the number in parentheses indicates the number of classes).

Table 1. Rating classes as defined by the three major rating agencies.

It is precisely this attempt to search for precision in a highly uncertain and unpredictable environment that casts many doubts on the concept, relevance and value of a rating. We basically maintain that the whole concept is flawed. We see the flaw in the fact that the process of assigning a rating is essentially attempting to do something “wrong” (compute the probability of insolvency) but in a very precise manner.

An interesting parallel may be drawn between corporate ratings and car crash ratings. Just like corporations, cars are also rated: for crashworthiness. Just as with rating agencies, there exist different organizations that are certified to issue car crash ratings. A car crash rating is expressed by a number of stars – 1 to 5 – 5 being the highest. A car with a 5-star rating is claimed to be safe. Where is the problem? A crash rating is obtained in a test lab, in clinical conditions. What happens on the road is a different story. A crash rating tells you what happens under very precise but unrealistic conditions. In reality, a car will collide with another car traveling at an unknown speed, of unknown mass, at an unknown angle, and not with a fixed flat cement wall at 55 kph and at 90 degrees. So a crash rating attempts to convey something about the future that it cannot possibly capture. Just as with corporate ratings, computer crash simulations use state-of-the-art stochastic models, are computationally very intensive and attempt to provide precise answers for unknown future scenarios.

In summary, rating is an instrument which, directly or indirectly, synthesizes and quantifies the uncertainty surrounding a certain corporation or financial product as it interacts with its ecosystem. This is not only extremely difficult but, most importantly, it misses the fundamental characteristic of our economy, namely its rapidly increasing complexity. This complexity, which today may actually be measured, contributes to a faster and more turbulent ecosystem with which companies must contend in their struggle to remain in the marketplace. This new scenario suggests new concepts that go beyond the concept of rating. Corporate complexity, as will be shown, occupies a central position.
Complexity, when referred to a corporation, can become a competitive advantage provided it is managed. However, we identify excessive complexity as the main source of risk for a business process. In particular, high complexity implies high fragility, hence vulnerability. In other words, excessively complex corporations are exposed to the Strategic Risk of not surviving in their respective marketplaces. Evidently, high fragility increases the probability of default or insolvency. The concept is expressed synthetically via the following equation:
C_corporation × U_ecosystem = Fragility     (1)

The significance of this simple equation is very clear: the fragility of a corporation is proportional to its complexity and to the uncertainty of the marketplace in which it operates. Complexity amplifies the effects of uncertainty and vice versa. In practice, what the equation states is that a highly complex business can survive in an ecosystem of low uncertainty or, conversely, if the environment is turbulent, the business will have to be less complex in order to yield an acceptable amount of fragility (risk). High complexity implies the capacity to deliver surprises. In scientific terms, high complexity manifests itself in a multitude of possible modes of behavior the system can express and, most importantly, in the capacity of the system to spontaneously switch from one such mode to another without early warning. The fragility in the above equation is proportional to the strategic risk of being “expelled” from the marketplace.
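A tiny numerical illustration of the amplification effect (the numbers and units below are invented; only the proportionality matters):

    def fragility(complexity: float, uncertainty: float) -> float:
        """Fragility grows with both corporate complexity and ecosystem uncertainty."""
        return complexity * uncertainty

    # Same turbulent market (uncertainty 0.8), two hypothetical businesses:
    print(fragility(complexity=2.0, uncertainty=0.8))   # simpler business  -> 1.6
    print(fragility(complexity=6.0, uncertainty=0.8))   # complex business  -> 4.8

    # The same complex business in a calm market (uncertainty 0.2):
    print(fragility(complexity=6.0, uncertainty=0.2))   # -> 1.2, back near the simpler firm's level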

The weakness of the concept of rating is rooted in the fact that it focuses exclusively on the uncertainty aspects of a corporation without taking complexity into account. Clearly, as equation 1 shows, uncertainty is only part of the picture. In more stable markets, neglecting complexity did not have major consequences. In a less turbulent economy, business fragility is indeed proportional to uncertainty. When turbulence becomes the salient characteristic of an economy, complexity must necessarily be included in the picture as it plays the role of an amplification factor. In fact, the mentioned turbulence is a direct consequence of high complexity (the capacity to surprise and switch modes).

The constant growth of complexity of our global economy has recently been quantified in an analysis of the GDP evolution of the World’s major economies. It is interesting to note how in the period 2004-2007 the global economy doubled its complexity and how half of this increment took place in 2007 alone, see Figure 1. Similarly, one may observe how the complexity of the US economy also grew, albeit in the presence of strong oscillations, denoting an inherently non-stationary environment.


Figure 1. Evolution of complexity (second curve from bottom) of the World and US economies over the period 2004-2007.
The evolution of complexity, and in particular its rapid changes, acts as a crisis precursor. Recent studies of the complexity of Lehman Brothers, Goldman Sachs and Washington Mutual, as well as of the US housing market, have shown how in all these cases a rapid increase in complexity took place at least a year before news of their difficulties became public knowledge.

Today it is possible to rationally and objectively measure the complexity of a corporation, a market, a financial product or any business process. Complexity constitutes an intrinsic and holistic property of a generic dynamic system, just like, for example, energy. Evidently, high complexity implies high management effort. It also implies the capacity to deliver surprises. This is why humans prefer to stay away from highly complex situations – they are very difficult to comprehend and manage. This is why, all things being equal, the best solution is the simplest one that works. But there is more. Every system possesses a so-called critical complexity – a sort of physiological limit, which represents the maximum amount of complexity a given system may sustain. In the proximity of its critical complexity, a business process becomes fragile and exposed, hence unsustainable. It is evident that the distance from critical complexity is a measure of the state of health, or robustness. In 2005 Ontonix developed rational and objective measures of complexity and critical complexity, publishing templates for a quick on-line evaluation of both of these fundamental properties of a business process. The templates are based on financial highlights and standard balance-sheet entries.


The fundamental characteristic of the process of complexity quantification is that it doesn’t make use of any mathematical modeling technique (stochastic, regression, neural networks, statistics, etc.). The method is, in fact, a so-called model-free technique. This allows us to overcome the fundamental limitation of any model which, inevitably, involves simplifications and hypotheses that are rarely verified. As a consequence, the method is objective. Data are analyzed as they are, without making any assumptions as to their Gaussianity or continuity and without any pre-filtering or pre-conditioning. As a result, no further uncertainty, which would contaminate the result, is added. At this point it becomes evident how complexity can occupy a central position in a new approach to the problem of rating. It is in fact sufficient to collect the necessary financial data, compute the complexity and the corresponding critical value, and determine the state of health of the underlying business as follows:


State of health of corporation = Critical complexity - Current complexity

The closer a business is to its critical complexity, the more vulnerable it is. Simple and intuitive. A paramount property of this approach is that, unlike a rating, it does not have a probabilistic connotation. In other words, no mention is made of the future state of the corporation. All that is indicated is the current state of health; no prediction is advanced. The underlying concept is: a healthy organization can better cope with the uncertainties of its evolving ecosystem. The stratification of the state of health (or fragility) is performed on five levels: Very Low, Low, Medium, High and Very High. In highly turbulent environments, attempting to define more classes of risk is of little relevance. It is impossible to squeeze precision out of inherently imprecise systems.
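A minimal sketch of the idea, assuming arbitrary complexity units and an invented mapping onto the five classes (the actual measures and calibration are Ontonix's own):

    def health_rating(current_complexity: float, critical_complexity: float) -> tuple[float, str]:
        """State of health as the relative distance from critical complexity.

        The percentage scale and the mapping onto the five classes are invented here
        for illustration; they are not the actual Ontonix calibration.
        """
        margin = max(0.0, critical_complexity - current_complexity) / critical_complexity
        levels = ["Very Low", "Low", "Medium", "High", "Very High"]
        return round(100 * margin, 1), levels[min(int(margin * 5), 4)]

    # Hypothetical readings (arbitrary complexity units):
    print(health_rating(current_complexity=1.5, critical_complexity=14.0))   # far from critical  -> 'Very High'
    print(health_rating(current_complexity=12.8, critical_complexity=14.0))  # close to critical  -> 'Very Low'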

Let us illustrate the above concepts with an example of a publicly traded company. The computation of the state of health (rating) has been performed using fundamental financial parameters (see blue nodes in the graph in Figure 2) as well as certain macro-economic indicators (red nodes) which represent, albeit in a crude manner, the ecosystem of the corporation. Figure 2 illustrates the so-called Complexity & Risk Map of the corporation as determined using the on-line rating system developed by Ontonix (www.ontonix.com). The parameters of the business are arranged along the diagonal of the graph, while significant relationships between these parameters are represented by the connectors located away from the diagonal. Needless to say, the said relationships are determined by a specific model-free algorithm, not by analysts. The health rating of the company is “Very High” and corresponds, in numerical terms, to 89%. The map also indicates the so-called hubs, or dominant variables, shown as nodes of intense red and blue color.



Figure 2. Complexity & Risk Map of a corporation, indicating the corresponding health rating (business robustness).


The conventional concept of rating – intended as a synthetic reflection of the probability of insolvency – has shown its inherent limitations in a global, turbulent and fast economy. The proof lies in the crippled economy. The usage of mathematical models, as well as the subjectivity of the rating process, adds a further layer of uncertainty which is invisible to the eyes of investors and managers. Under rapidly changing conditions and in the presence of high complexity, the concept of probability of default is irrelevant. Instead, a more significant rating mechanism may be established based on the instantaneous state of health of a corporation, that is, its capacity to face and counter the uncertainties of its marketplace. In other words, the (strategic) risk of not being able to survive in one’s marketplace is inversely proportional to the mentioned state of health. The capacity of a corporation to survive in its marketplace does not only depend on how turbulent the marketplace is but, most importantly, on how complex the corporation is. This statement assumes more importance in a highly turbulent economic climate. If corporations do not start to proactively control their own complexity, they will quickly contribute to increasing even more the turbulence of the global marketplace, making survival even more difficult. Complexity, therefore, is not just the basis of a new rating mechanism; it establishes the foundations of a superior and holistic form of Business Intelligence.





Tuesday 9 July 2013

Optimal Does NOT Mean Best


In the second half of the twentieth century it became very popular to seek optimal solutions to a broad spectrum of problems: portfolios, engineering systems, strategies, traffic systems, distribution channels, networks, policies, etc. But have you ever wondered whether optimal really means best? Well, it does not. Optimality is not the most convenient state in which to function. The reason?
Optimal solutions are inherently fragile

Anything that is optimal is, by definition, fragile, hence vulnerable. This is the price one pays for excessive specialization or extreme organization.  Let us see why.

There are very few things that are stationary. In fact, we live in a quickly evolving environment in which there is little time for equilibrium and in which irreversible and dissipative mechanisms, together with chaos and randomness, not to mention extreme events (the so-called Black Swans), produce a sequence of unique events in which only fundamental patterns can be distinguished and in which the search for repeatable details is futile. This simple fact clashes frontally with the concept of optimality, which hinges on precision and details. Sure, one can identify sweet spots in a multi-dimensional design space. From a mathematical perspective many things are possible. However, the dynamic non-equilibrium character of Nature guarantees that the conditions for which a given system has been optimized soon cease to exist. The pursuit of perfection is, therefore, an attempt to ignore the ways of Nature, and Nature taxes such efforts in proportion to the magnitude of the intended crime.

The above is true not only in the global economy. In the biosphere it is also risky to be optimal, precisely because ecosystems are dynamic and there is little time to enjoy optimality. As Edward O. Wilson stated in one of his wonderful books, "excessive specialization is a tender trap of evolutionary opportunism." Nature very rarely tolerates optimal designs. In fact, natural systems are, in the majority of cases, fit for their function, not optimal.

But there is more. High complexity compounds the dangers of optimality. As a system becomes more complex, approaching its own critical complexity, it possesses an increasingly large number of so-called modes of behaviour (or attractors). Because these modes of behaviour are often very close to each other, tiny perturbations are sufficient for a given system to suddenly transition from one mode of behaviour to another. These sudden mode transitions become more frequent as complexity approaches its upper limit. This is why humans intuitively try to avoid highly complex situations - they are unmanageable precisely because of the mentioned unexpected mode transitions. In layman's terms, high complexity reflects a system's capacity to deliver surprises. This is why, when speaking of a highly complex system, a good design is not an optimal one but one that is fit and resilient. In other words:

Attempting to construct optimal solutions in the face of high complexity increases the cost of failure.

The following question arises at this point. Knowing that an optimal system is fragile, why not design systems to be sub-optimal in the first place? Why not settle for a little less performance, gaining in robustness and resilience? Why this obsession with being perfect? Why push a system into a very tight corner of its design space, out of which it pops at the snap of a finger? Why do people pursue optimal solutions knowing that an optimal system, precisely because it is optimal, can only get worse, never better? As the ancient Romans claimed, even the gods are powerless against stupidity.

But how do you get a solution that is fit and not optimal (= fragile)? More than a decade ago we came up with a very simple algorithm called SDI (Stochastic Design Improvement), which is described here and which establishes the following new paradigm in system design:
  1. Specify an initial (nominal) design (or solution) to a given problem.
  2. Specify acceptable target behaviour of the system, i.e. an improved design with tolerable (but not optimal) performance.
  3. Run SDI - it is an iterative procedure which produces multiple solutions in the vicinity of the target behaviour. All have very similar performance.
  4. Measure the complexity of each solution. Select the least complex one as the final solution. This is because the least complex solution is the least fragile.
The above philosophy is superior to conventional approaches to design, strategy and decision-making because it is tailored to highly uncertain, interconnected and turbulent environments, in which fitness counts much more than ephemeral perfection.
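For illustration, here is a minimal sketch of the four steps above; the objective function, the tolerances and the complexity proxy are all placeholders, not the actual SDI algorithm or the Ontonix complexity metric:

    import numpy as np

    rng = np.random.default_rng(42)

    def performance(x: np.ndarray) -> float:
        """Placeholder system response; in practice a simulation or a business model."""
        return float(x[0]**2 + 0.5 * x[1]**2 + np.sin(3 * x[0]) * x[1])

    def complexity_proxy(x: np.ndarray, n: int = 200) -> float:
        """Crude stand-in for a complexity measure: how strongly the output couples to
        each input in a small neighbourhood of the design (sum of |correlations|)."""
        pts = x + rng.normal(0.0, 0.05, size=(n, x.size))
        out = np.array([performance(p) for p in pts])
        corr = np.corrcoef(np.column_stack([pts, out]), rowvar=False)
        return float(np.sum(np.abs(corr[:-1, -1])))

    # 1. nominal design and 2. acceptable (not optimal) target behaviour
    nominal, target, tol = np.array([1.0, -0.5]), 0.8, 0.1

    # 3. scatter candidate designs and keep those close to the target behaviour
    candidates = [nominal + rng.normal(0.0, 0.4, size=2) for _ in range(3000)]
    feasible = [x for x in candidates if abs(performance(x) - target) < tol]

    # 4. among the near-equivalent designs, pick the least complex (least fragile) one
    best = min(feasible, key=complexity_proxy)
    print(best, performance(best))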
Our economy (and not only our economy) is fragile because everything we do is focused on maximizing something (profits, performance, success) while minimizing something else (risk, time, investment, R&D) at the same time. This leads to strains within the system. Everything is stretched to the limit (or as far as physics will allow). This is exactly what one should not do when facing turbulence. The focus should, instead, be on:
  • Solutions that are fit, not optimal.
  • Simplifying business models and strategies.
  • Accepting compromises, not seeking perfection. Improve, don't optimize.
Corruptio optimi pessima!








Making Predictions Based on Murphy's Laws


The origin of the well known Murphy's Laws may be traced to Edwards Air Force Base in 1949. A few of the most popular of these laws:
  • If anything can go wrong, it will
  • If there is a possibility of several things going wrong, the one that will cause the most damage will be the one to go wrong
  • If you perceive that there are four possible ways in which something can go wrong, and circumvent these, then a fifth way, unprepared for, will promptly develop
  • Left to themselves, things tend to go from bad to worse
  • Everything goes wrong all at once.
  • Nothing ever gets built on schedule or within budget.
  • Nothing is as easy as it looks.
  • Everything takes longer than you think. 
  • It is simple to make something complex, and complex to make it simple.

Murphy's Laws may sound funny, but most of us will agree that they reflect reality more faithfully than mere anecdotes would. Because of this, one may see behind Murphy's Laws the hand of Nature. Consequently, we may attempt to come up with a "scientific interpretation" of these laws. There are thousands of Murphy's Laws and we will not get into the details of any single one of them. However, we can state that they essentially point in the following direction:

Things tend to become more complex and not simpler.

In other words, Murphy's Laws state that, when given a chance, complexity will go up rather than go down.  In effect, when we say that a "situation is bad" or has "gotten worse" we often imply that it has become more complex. Highly complex situations are difficult to assess and to manage and frequently spawn unexpected behavior and this is why humans prefer to avoid them. In other words, Murphy's Laws are saying just that.

Consider evolution in our biosphere. Organisms spontaneously tend to reach forms of higher complexity. This guarantees more functionality and helps them survive better. This is why apes have evolved into hominids and not towards the amoeba.




Evolution proceeds in the direction of increasing complexity, but so do the economy, civilizations, societies, manufactured products, traffic systems, telecommunications, etc. Even the ancient Greeks knew that each generation leaves more chaos behind than it has found. Increasing complexity seems to be the leitmotif, one which we have been able to appreciate in the case of thousands of corporations and systems whose complexity trend we have actually measured.

What does this mean? And how can this fact be used to make forecasts? Making (business) decisions requires:

  • some sort of model or scheme
  • a mechanism to produce future scenarios, options or alternatives
  • a means of selecting options based on cost, returns, risk, etc.
Now, what has complexity got to do with all this? Very simple. Suppose that your model has enabled you to generate a set of possible future scenarios (economic, geopolitical, etc.). With all things being equal (or, at least, with many things being equal), if you don't know on which option to bet, go for the most complex one. In other words:
In the presence of multiple feasible and equivalent options, the most complex one represents the best bet.
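For illustration only, a sketch of how this heuristic might be applied; the scenarios and the complexity proxy below are invented stand-ins, not the actual complexity measure:

    import numpy as np

    rng = np.random.default_rng(7)

    def complexity_score(scenario: np.ndarray) -> float:
        """Toy proxy: variability of each variable plus the strength of their couplings.
        A stand-in for a proper complexity measure, used only to illustrate the heuristic."""
        spread = float(np.log(scenario.std(axis=0) + 1e-9).sum())
        coupling = float(np.abs(np.triu(np.corrcoef(scenario, rowvar=False), k=1)).sum())
        return spread + coupling

    # Three hypothetical, equally plausible future scenarios (rows = periods, cols = indicators).
    scenarios = {
        "A": rng.normal(0.0, 1.0, size=(60, 5)),
        "B": rng.normal(0.0, 1.5, size=(60, 5)),
        "C": rng.normal(0.0, 0.5, size=(60, 5)),
    }

    # The heuristic: when the options are otherwise equivalent, bet on the most complex one.
    print(max(scenarios, key=lambda name: complexity_score(scenarios[name])))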

This sounds a bit like the opposite of the popular (albeit slightly erroneous) interpretation of Ockham's razor (known as the law of parsimony or law of economy) and of a principle that we all tend to observe in life:

When given the choice, the simpler solution is the preferable one.

This is why an experienced engineer will seek a simple solution to a design problem. This is also why a portfolio of lower complexity will exhibit a better degree of risk diversification than a more complex one. If you stimulate complexity, it will quickly get in your way - in fact, for complexity to grow all you need to do is nothing!

So, in summary, base your business decisions on the fact that, given a chance, the complexity of your business environment will most probably increase, not the other way around. In a non-stationary, non-linear regime (like our global economy), making predictions by extrapolating past history (statistics) is very dangerous, precisely because conventional forecasting techniques don't take complexity into account. And rapidly rising complexity happens to be the hallmark of our turbulent times. This is true of markets as well as of corporations themselves.



www.ontonix.com



The World is Becoming Not Only More Complex But Also Significantly More Uncertain


The financial crisis is just one of many examples which have recently shown why an excess of complexity is a condition to be avoided. An excess of complexity can generate unexpected situations which often require painful and drastic actions if one is to regain control. The required effort and time are, in addition, considerable. Furthermore, due to the turbulent times in which we live, the consequences are greater than they would be in a stable environment. The added value of having a system that is able to warn, in time, about the possible consequences of excessive complexity is quite obvious.

Anyone who wants to survive and evolve in the current turbulent era must include complexity among their metrics and must consider it a component of any decision process. It is therefore mandatory to move from a perception or impression of complexity to a quantification of complexity based on objective measures.

For example, everyone is aware of the fact that the World is becoming increasingly complex. But how large is this growth? Everyone has an opinion. But opinion is one thing, scientific facts are another.

Is it possible to measure the complexity of the World? Of course it is. In the present study we have utilized 1200+ factors available from the World Bank (http://data.worldbank.org/).

Based on these data, the complexity of the World (see figure 1) has doubled over the past 25 years (1986-2010), with a compound annual growth rate (CAGR) of 2.8% (see figure 2).

But the growth in complexity is mainly due to a significant increase in uncertainty (entropy). Indeed, uncertainty has increased by 350% (see figure 1) over the past 25 years and its CAGR is equal to 5.3% (see figure 3), roughly double the complexity CAGR.
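As a quick check of the arithmetic (reading "doubled" as a factor of 2 and the 350% figure as a factor of roughly 3.5 over the 25-year window):

    def cagr(growth_factor: float, years: int) -> float:
        """Compound annual growth rate implied by an overall growth factor over `years`."""
        return growth_factor ** (1.0 / years) - 1.0

    print(f"{cagr(2.0, 25):.1%}")   # complexity doubled over 25 years   -> ~2.8% per year
    print(f"{cagr(3.5, 25):.1%}")   # uncertainty up ~3.5x over 25 years -> ~5.1% per year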

These figures show that the World as a system is not able to absorb the excess of uncertainty by creating new structure. This massive uncertainty might not be sustainable for long, and therefore there might be the need to eliminate it through painful measures (or natural events, which invariably make life simpler, will do so for us) or through important new innovations which have an impact on the structure with which the world evolves.










Traditional Ratings, Traditional Risk Management: Is it Over?

"...the effective management of risk is one of the core strengths that has made Lehman Brothers so successful" (from the image above, October 28, 2008).


Ever since the financial meltdown, rating agencies have been under fire. And so has risk management. Setting aside considerations of a political nature, it is evident that risk ratings and risk management must undergo a very substantial overhaul. Times have changed and so must the techniques and methods that help decision-makers do their job.

Let's take a closer look at a central concept in Risk Management, Risk Rating and VaR analysis - that of the probability of failure. Imagine a computer chip manufacturer who produces, say, 1 million chips a month. Suppose also that on average 5 chips per month have a defect and don't make it through quality control. Evidently, the probability of manufacturing a chip with a defect is 5/1,000,000. However, even in highly controlled environments such as those in silicon chip factories, it is impossible to state a priori which particular chip will have a defect. All you can do is spot the defects after quality control. Therefore, when it comes to a single chip, the mentioned probability is irrelevant. You can only verify a failure after the fact; you cannot predict it. If chip number X is going to be faulty, it will be faulty no matter what probability you dream up. For all practical purposes, when it comes to the fate of a single chip, that probability is totally irrelevant.
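A minimal simulation of the chip example (in Python, with the same invented numbers) makes the point: the aggregate defect rate is recovered almost exactly, yet nothing in it says which chip will fail.

    import numpy as np

    rng = np.random.default_rng(1)

    n_chips, p_defect = 1_000_000, 5 / 1_000_000   # one month of production, as in the text

    # Each chip independently carries the same tiny defect probability.
    defective = rng.random(n_chips) < p_defect

    print(int(defective.sum()))          # close to 5: the aggregate statistic is reliable
    print(np.flatnonzero(defective))     # but WHICH chips fail changes on every run;
                                         # the per-chip probability predicts none of them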

Take now a slightly different case, that of a single public company. Investors want to know if buying its stocks is a bad or a good idea. They look at its rating and they decide. Ratings, as we know, measure the Probability of Default - the PoD. And here comes the problem:

If, based on a production of millions of (for example) chips, it is impossible to say which chip will be faulty, how can anyone issue a PoD-type statement for ONE corporation?

What's more, chips are produced under highly controlled conditions, while corporations compete in highly turbulent, non-stationary and chaotic markets, driven by subjective and irrational decisions and sentiments. What, then, is the relevance of a PoD? Days before Lehman Brothers defaulted, it had a very high investment-grade rating. This corresponded to a very low PoD. However, had there been millions of Lehman Brothers banks (like chips), that PoD might have been pretty close to the true value of a Probability of Default (like the 5/1,000,000 in the above example). But there has been only ONE such bank!

A market is a dynamic, non-stationary conglomerate of thousands of corporations which compete - which means that they are NOT independent - and in which every company is unique. While silicon chips are independent, corporations are not. Therefore, the problem is really very, very nasty. Ultimately, what this means is that a PoD makes little sense, even though some people pay for it while others invest their money based on its value. There must be an alternative.

When faced with an uncertain and turbulent future, risk managers must change their approach pretty radically. The facts are as follows:
  • Complexity (of the business model) X Uncertainty (of the markets) = Fragility (of the outcome)
  • Because the uncertainty of the global economy is on the rise (see certain laws of physics to understand why this must be so) whenever possible a simpler business is better than a complex business
  • Simpler systems cope better with extreme and sudden events than highly complex ones - they can adapt faster and are less fragile
  • Our increasingly turbulent and uncertain economy will trigger more sudden and extreme events.

This means that:

  • Risk management becomes complexity management - you keep an eye on your complexity (on a monthly or quarterly basis) and you try to keep it low (just like cholesterol or sugar levels in your blood).
  • Complexity management becomes part of the corporate strategy. Unlike risk management - some companies today don't even have a risk management department - complexity management (when operated using our technology) is simple, natural and easily embraces an entire organization.

Quantitative complexity management is a modern and natural alternative to traditional risk management - an innovative tool for the challenges of the 21st century. Basically:

1. instead of building stochastic models (which will always be surrogates of reality) and running them using, for example, Monte Carlo Simulation, to produce PoDs with 6 decimals

2. you constantly keep an eye on your complexity - it is easy to measure it using data which reflects the performance of your business - and watch out for any fluctuations. Such fluctuations are phenomenal pre-crisis alarms.
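A sketch of step 2 under stated assumptions: business data are summarized by a rolling complexity proxy (here a simple count of strong correlations, not the Ontonix measure) and a jump in that proxy raises the alarm.

    import numpy as np

    def coupling_count(window: np.ndarray, threshold: float = 0.5) -> int:
        """Toy complexity proxy: the number of strong pairwise correlations in a data window."""
        corr = np.corrcoef(window, rowvar=False)
        return int((np.abs(np.triu(corr, k=1)) > threshold).sum())

    def complexity_alarms(data: np.ndarray, window: int = 12, jump: int = 3):
        """Yield (period, proxy value) whenever the rolling proxy jumps by more than `jump` links."""
        previous = None
        for t in range(window, data.shape[0] + 1):
            current = coupling_count(data[t - window:t])
            if previous is not None and current - previous > jump:
                yield t, current
            previous = current

    # `data` would be a table of periodic business indicators (rows = months or quarters,
    # columns = variables); any alarm is a cue to investigate and, if needed, simplify
    # before the fluctuation turns into a crisis.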

So, the new approach to uncertainty management is NOT about measuring Probabilities of Default (or failure) but about acting quickly on pre-crisis alarms and keeping the complexity of your business low (just as you would with your cholesterol). In order to better face extreme and unexpected events you need resilience, not A's or B's. Remember, we're no longer on a lake with a breeze but in the middle of a stormy ocean.



Read more about it here.

Get our ebook "A New Theory of Risk and Rating" on Amazon.



www.ontonix.com



What is Critical Complexity?



What is critical complexity? Let's start with the basics. Information can be arranged into sets of rules. When these rules are related to each other they may be organized into graphs or maps, such as the one illustrated below.



Each of the red dots represents a relationship between two parameters (nodes). An example of a rule (from the above map, see top):


"if UNEMPLOYMENT increases then NEW HOUSE CONSTRUCTION decreases".

This is an example of a fuzzy rule - no numbers, just a global trend. Rules can be more or less fuzzy, as in the example below, where two rules are represented by scatter plots - collections of data samples (pairs) which ultimately produce the rule.




On the left you can see a fairly crisp rule, while on the right a fuzzier one. You can of course increase the fuzziness of a rule until it becomes so fuzzy that it no longer provides any useful information. Suppose, in fact, that in the above map all the rules are made so fuzzy that they are all about to break up. At that point, compute the complexity of the system - the value you obtain is the critical complexity. This is what OntoSpace allows you to do.

What makes information fuzzy and less precise is noise and, in general, uncertainty. A great way to illustrate the concept is to analyze, for example, a simple phrase such as this one:

This is an example of a simple phrase which is used to illustrate the concept of critical complexity.

Let’s introduce a few spelling mistakes:

Thos is a n exrmple of a simpcle phrqse whih  I s us ed to illuxtrate the concyept of critizal com plexiuy.

Let us introduce more errors – with some imagination the phrase is still readable (especially if you happen to know the original phrase):

Tais xs a n exreple  zf a sempcle phrqee waih I s vs ed eo illuxtkate the concyevt of crstrzal ctm plexihuy.

And even more:

Taiq xs a n exrepye  zf d semicle pcrqee raih I s vs ed eo ilnuxtkare the cmncyevt tf crstrzaf ctm plsxihuy.

This last phrase is unreadable. All of the original information has been lost. We could say that the phrase before this last one was critically complex - adding a small dose of uncertainty (spelling mistakes) would destroy its structure. Systems which are on the verge of losing their structure simply because one sprinkles a little bit of noise or uncertainty on top of them are fragile - they collapse with no early warning. This is precisely why, in the case of very large or critical systems and infrastructures, such as multi-national corporations, markets, systems of banks or telecommunication and traffic networks, it is paramount to know how complex they are and how close to their own critical complexity they happen to function. If you know how complex your business is, and how far from criticality it finds itself, you have a great early-warning system.
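The spelling-mistake experiment is easy to reproduce; in the sketch below the "readability" score is simply the fraction of words left intact, a crude stand-in for a real structure measure:

    import random
    import string

    random.seed(3)

    PHRASE = ("This is an example of a simple phrase which is used "
              "to illustrate the concept of critical complexity.")

    def corrupt(text: str, rate: float) -> str:
        """Replace each letter with a random one with probability `rate` (spaces are preserved)."""
        return "".join(random.choice(string.ascii_lowercase)
                       if c.isalpha() and random.random() < rate else c
                       for c in text)

    def intact_fraction(original: str, noisy: str) -> float:
        """Crude readability proxy: the fraction of words still spelled exactly right."""
        return sum(a == b for a, b in zip(original.split(), noisy.split())) / len(original.split())

    for rate in (0.0, 0.1, 0.25, 0.5):
        noisy = corrupt(PHRASE, rate)
        print(f"noise {rate:.2f}  intact {intact_fraction(PHRASE, noisy):.0%}  {noisy}")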

Today it is easy to know if a business functions close to its own critical complexity and if it is fragile. You may do this on-line here.