Wednesday 17 July 2013

Are Complex Businesses More Fragile?



Just before the Global Financial Crisis hit, there were a number of high-profile businesses making very high profits. Then, seemingly in the blink of an eye, these businesses had failed – somehow their wheels had fallen off.

How could companies that were making so much money one minute be ruined in the next? Clearly they were very fragile in some way.

If you remember the stories in the press, many of these companies were either selling complex products or had complex structures, or both.

So, does complexity in business contribute to fragility in the face of surprising and extreme events such as economic changes and natural disasters?

Could it be that there is a necessary level of robustness or resilience in a business that gives it a good chance of overcoming the fallout from these sorts of events? If so, how would you measure the level of robustness or fragility of your business?

A fairly simple way is to ask your people how robust or fragile things are in the way they deal with your customers every day. They’ll be getting feedback from your customers about any problems or opportunities to improve, and they’ll also be hearing the reactions of your staff to that customer feedback. This is a structured, qualitative approach, but over 15 years we’ve found it very useful for discovering how well your systems hang together when they come under stress.

Another way is to calculate (or estimate) the net profit that comes from each customer over a period of time, say three months or even a year. Analysis of most businesses reveals that the top 20-30% of customers deliver 120-150% of the profits, while the bottom 20-30% of customers actually take away all of this extra profit. We’ve noticed over the years that the causes of this severe loss of profit at the bottom of the range lie mainly in the complications of dealing with difficult customers, in a product range that is too broad and therefore costly to manage, or in internal systems that are too complex for people to do good work easily. So analyzing which customers (or customer groups) are being serviced at a loss will help you locate where your business is fragile.
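As a minimal sketch of this check (in Python; the file name and the columns `customer` and `net_profit` are hypothetical), you can sort customers by profit and see what the top and bottom slices contribute:

```python
# Minimal sketch of the customer-profitability check described above.
# Assumes a hypothetical CSV with columns "customer" and "net_profit"
# (net profit per customer over the chosen period, e.g. a quarter or a year).
import pandas as pd

profits = pd.read_csv("customer_profit.csv")
profits = profits.sort_values("net_profit", ascending=False).reset_index(drop=True)

total = profits["net_profit"].sum()
slice_size = max(1, int(0.25 * len(profits)))          # top/bottom 25% of customers

top_share = 100 * profits.head(slice_size)["net_profit"].sum() / total
bottom_share = 100 * profits.tail(slice_size)["net_profit"].sum() / total

print(f"Top 25% of customers contribute {top_share:.0f}% of total profit")
print(f"Bottom 25% of customers contribute {bottom_share:.0f}% of total profit")
```

A strongly negative bottom share is the profit leak described above, and a good place to start looking for unnecessary complexity.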

A third, simple way to check whether complexity is making it hard for people in a business to do good work is to track the trend in the ratio of Total Revenue to Total Wages paid. If everything is working well and employees are getting better at what they do through better training, systems, amenities and marketing, then as each year goes by the total sales output per unit of wages paid should increase slightly. If this ratio is flat or decreasing, it means that something is getting in the way of good work. That something is generally unnecessary complexity within the business.
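As a rough illustration of the trend check (the yearly figures below are made up), the calculation is just a ratio per year:

```python
# Hypothetical yearly figures: is sales output per unit of wages rising or falling?
revenue = {2010: 4.10e6, 2011: 4.35e6, 2012: 4.50e6}   # total revenue per year
wages   = {2010: 1.00e6, 2011: 1.12e6, 2012: 1.21e6}   # total wages paid per year

ratios = {year: revenue[year] / wages[year] for year in revenue}
for year in sorted(ratios):
    print(year, round(ratios[year], 2))

if ratios[max(ratios)] > ratios[min(ratios)]:
    print("Ratio is rising - output per wage dollar is improving")
else:
    print("Ratio is flat or falling - something is getting in the way of good work")
```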

A fourth way is to calculate the Complexity Factor developed by John Mariotti and described in his book The Complexity Crisis (Adams Media 2008). Its inputs include the number of products, customers, market segments, suppliers and staff, as well as the number of legal entities and main sites of the business. From these it calculates a factor that describes the level of complexity and hence the level of danger to the business. Mariotti’s formula reflects the added risk, complexity and fragility caused by a proliferation of legal entities (doing business in many countries) and by entering many different market segments in the search for revenue growth.
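Mariotti’s exact formula is given in his book; purely to illustrate the kind of inputs it combines - and emphatically not as his published formula - a sketch might gather and aggregate them like this:

```python
# Illustrative only: the inputs named above, combined into a single indicative
# score. This is NOT Mariotti's published Complexity Factor - see
# "The Complexity Crisis" (Adams Media 2008) for the actual calculation.
inputs = {
    "products": 1200,          # hypothetical counts
    "customers": 850,
    "market_segments": 14,
    "suppliers": 310,
    "staff": 420,
    "legal_entities": 9,
    "main_sites": 6,
}
annual_revenue = 250e6         # hypothetical, used only to normalise the score

score = 1.0
for count in inputs.values():
    score *= count             # proliferation of any input drives the score up
score /= annual_revenue

print(f"Indicative complexity score: {score:,.1f}")
```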

Finally, the European company Ontonix Complexity Management has developed an elegant way to compare the relationships between different financial, production and economic variables that describe the performance of a business.

The input to this analysis is a spreadsheet of the values of variables such as sales, profits, product numbers, customer numbers, various costs, assets and liabilities, stock levels, employee numbers and so on. They don’t have to be the same variables for each business, nor in any particular order – they just have to be complete and placed into rows of a spreadsheet, where each row represents a different historical period such as a day, week, month, quarter or year.

This Ontonix Self-Rating analysis allows you to calculate the level of complexity in your business based on the data you have entered. Naturally, the more data you put into the spreadsheet (e.g. the longer the history of the variables), the more accurate your complexity measurement becomes.
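For example, the input spreadsheet might be laid out as in the sketch below (variable names and figures are hypothetical): one row per period, one column per variable.

```python
# Sketch of the input layout described above: one row per historical period,
# one column per business variable. Names and numbers are hypothetical.
import pandas as pd

history = pd.DataFrame(
    {
        "sales":          [1.20e6, 1.25e6, 1.18e6, 1.31e6],
        "net_profit":     [0.11e6, 0.13e6, 0.09e6, 0.14e6],
        "customer_count": [410, 422, 415, 440],
        "product_count":  [56, 58, 61, 61],
        "stock_level":    [0.30e6, 0.28e6, 0.33e6, 0.29e6],
        "employee_count": [38, 38, 40, 41],
    },
    index=pd.period_range("2012Q3", periods=4, freq="Q"),   # quarterly history
)

history.to_csv("self_rating_input.csv")   # the file you would upload for rating
```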

The complexity level is then translated to a Robustness Rating for the business. This is done via a Star Rating – 5 Stars is very robust while 1 Star means it’s very fragile and may react badly to an external or internal change event or crisis.

The service then delivers a report that shows how much each of your business variables contributes to the overall complexity of your organization. Some complexity is necessary in any business, but too much complexity is a bad omen.

The diagram below comes from a report for a public financial institution. You’ll see that the majority of the complexity of this business resides in a number of variables at the top of the diagram.

 


This is a huge benefit because it guides you to where to start changing your business to make it more robust, profitable and sustainable. And this is where the Simpler Business Institute comes in – we have more than 20 years’ experience in removing unnecessary complexity and simplifying businesses to increase performance, profits and sustainability.

The Simpler Business Institute has worked with both Ontonix and John Mariotti to offer a world-first service of measuring, diagnosing and treating costly complexity in business. Ontonix provides the ability to measure the complexity of any system (eg a business). John Mariotti can interpret the impact on a business of complexity as defined by his Complexity Factor. The Simpler Business Institute gives guidance on how to treat the profit-sucking complexity.

For a relatively small cost, the Ontonix self-rating diagnosis offers huge value: it lets you understand your business in a way you never have before.

To access the Self-Rating service, go to www.simplerbusiness.com and click on the Rate-a-Business logo on the top right. You’ll then access the service, download a free white paper and example reports and be on the way to knowing how robust or fragile your business really is in the face of the types of change we are seeing in business these days.

Applying the Ontonix techniques to businesses in the same sector, we start to see benchmark data supporting the argument that the higher the complexity, the lower the profitability. We’ve thought this for years and now we’re getting the proof.

Complexity analysis is part of the business education and coaching services offered by the Simpler Business Institute, so take the opportunity soon to attend one of our seminars or request some help by contacting us directly at info@simplerbusiness.com

Simply does it!
Posted by Ian Dover - Simpler Business Institute.

Is Your Company Prepared for Extreme Events and Shocks?





Is your company prepared to face extreme events and shocks?

Does it hide fragilities of which you are unaware?  

Is it prepared to face the risks of a turbulent economy?

Is it a one-star business, and hence a candidate for default, or is it resilient and stable?

To which of the classes below does your business belong?

Has it crossed the black line?







Remember, sound financial performance is no guarantee. Many triple-A rated companies have collapsed. Because in a turbulent economy it is the excessive complexity of a business that makes it fragile, it is important to check its resilience on a monthly or quarterly basis. Nowadays things happen very quickly, and conventional risk assessment and risk management techniques are no longer applicable. In measuring the exposure of your business it is important to employ techniques that have been designed specifically to take turbulence into account.

Measure the resilience of your business on-line (click on the image).





www.ontonix.com


Sunday 14 July 2013

Measuring Improvement, Benchmarking and Images


 
Recently revealed analysis of NASA's Lunar Reconnaissance Orbiter (LRO) data has delivered images of unprecedented quality. The images of the Moon's terrain made possible by the Lunar Orbiter Laser Altimeter (LOLA) instrument onboard the LRO have been compared to similar images obtained using the Unified Lunar Control Network, based on the best data available at the time, including imagery from the Clementine, Apollo, Mariner 10, and Galileo missions as well as Earth-based observations. Two images, one from 2005 (left) and one from 2010 (right), are compared and analyzed using OntoSpace. The idea is to measure their complexity and entropy.



 

The idea, however, is to measure the "distance" between these two images. How is this done? Quite simply: complexity measures the total amount of structured information, expressed in bits. Based on the complexities of the two images (177.80 in 2005 and 203.66 in 2010) we can measure the increase in information to be around 14%. You can also see that, since the entropy of the 2010 image is lower than that of the 2005 image (down to 23411.02 from 26304.82), the image is also sharper.


However, the interesting experiment is to actually compare the topologies of the two Complexity Maps (see the picture above), as this will yield the true amount of "total difference" between the information conveyed by the two images from the perspective of structure. For this purpose we use OntoBench, our complexity-based benchmarking tool. We normally use it to compare bank branches, financial products or corporations. But since an image may be transformed into a rectangular array, and since OntoBench compares two rectangular arrays of equal size, the tool may also be used to analyze pairs of images. The result is as follows:




Complexity Data Set 1 (Moon_2005_001): 177.80
Complexity Data Set 2 (Moon_2010_001): 203.66


Data Set Similarity: 95.76 %

In other words, since the similarity between the two images is 95.76%, the global increase in information obtained is 100 - 95.76 = 4.24%. This result does not contradict the previously mentioned 14%. Because the 2010 image is sharper it reveals smaller details. However, these details are quite small and they do not alter the topology of the global pattern which, in both cases, is essentially the same.
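The two percentages come from different calculations, as this small arithmetic check shows:

```python
# The two figures quoted above come from two different calculations.
c_2005, c_2010 = 177.80, 203.66       # image complexities, in bits
similarity = 95.76                    # OntoBench data-set similarity, in %

increase_in_information = 100 * (c_2010 - c_2005) / c_2005   # the "around 14%"
global_structural_difference = 100 - similarity              # the 4.24%

print(f"Increase in structured information: {increase_in_information:.1f}%")
print(f"Global structural difference:       {global_structural_difference:.2f}%")
```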



www.ontonix.com



You Can Rate Anything, Not Just Corporations


The concept of rating has been known to the general public since the onset of the global economic crisis that erupted in 2008. Ratings are essentially opinions about publicly listed companies in which one may wish to invest. Rating agencies are the institutions that issue such opinions. However, because the three major rating agencies are said to have been "enablers of the financial crisis", and because the performance of ratings has been abysmal, they have earned a bad reputation - various governments, such as those of Italy, Australia and the USA, to give three examples, are suing them - and the concept of rating today carries a generally negative connotation.

But what does a rating really denote? A rating attempts to measure the Probability of Default (PoD) of a business. The idea is simple. You purchase the obligations of a company which needs to raise cash. The company promises to return your money with interest. The rating is meant to measure the probability of not getting the investment back. It's as simple as that. However, ratings have shown abysmal performance in the past few years - one reason why people trust them less - not to mention that they may easily be manipulated by those who calculate them. In fact, rating agencies defend themselves by claiming that their ratings are opinions, not science (given the huge amounts of capital that revolve around ratings, they should be science, not opinions!).


But the biggest problem with ratings originates from their very heart. The key to a rating is probability, one of the least understood and most elusive concepts in mathematics and physics. A probability (of a future event) has no physical meaning when applied to a single corporation or an individual. It is one thing to state that out of, say, 1000 corporations from a certain market segment, 5 have defaulted over a period of three years; it is another to claim that corporation X has a Probability of Default of, say, 0.025%. Such a statement is meaningless. It has no foundation in science because our science doesn't allow us to make predictions, and ratings are futile attempts at making predictions - a sophisticated form of circle-squaring. It cannot be done. Besides, ratings were conceived in a totally different economic context. Today the world is turbulent, and the economy is globalized and dominated by shocks and uncertainty. One cannot use laminar flow models to simulate turbulent flow in fluids, for example; specific turbulence models need to be used in such cases.


What we claim in this short article is that the concept of rating should be overhauled, starting right at the base, at the very roots of the entire construct. What we need, in particular, is to depart from the concept of Probability of Default and move on to something different, something a bit more rational and scientific. One such quantity is resilience, the ability to resist shocks and impacts - and shocks are the hallmark of a turbulent economy. While resilience is a property of systems, a PoD is not. Resilience is a physical property and it can be measured. Initially developed by mechanical engineers to characterize materials, the concept of resilience may easily be extended to a wide variety of systems: corporations, banks, markets, countries, portfolios, societies, traffic systems, the human body.




The idea, therefore, is to rate systems (corporations) based on their resilience, not on their Probability of Default.

All that is needed to measure the resilience of a system is a set of observable outputs which it produces. In the case of a corporation it can be quarterly Financial Statements, such as Cash Flow. In the case of air traffic one may use the output of an airport radar which scans airspace at a certain frequency. Data, in other words. It all hinges on data. We need data to make decisions and the quality of our decisions depends on the quality of the data itself.


If we use data that is unreliable or that has fragile structure, our decisions will be equally unreliable and fragile.

So, the idea is to actually rate the data that represents a given system. This opens infinite possibilities. With a measure of resilience we can attach a quality tag to each piece of data we use to make decisions, to manage systems, to run corporations, to drive an economy.

Resilience is measured on a scale from 0% to 100%. A low value reflects a system that is unable to survive turbulence, that is for all practical purposes unstable, and that can easily deliver unpleasant behavior - unexpectedly. In other words,


resilience allows us to extend the concept of rating to all sorts of systems.


Rating a system, therefore, is equivalent to rating the resilience of the data which it produces and which we use to manage and control it. Many of the systems that we are confronted with in our daily activities generate data which is collected according to specific norms and protocols. We have compiled a number of Analysis Templates for the following systems/businesses/data:

  • Small Medium Enterprise
  • Corporation
  • Financial Institution Ratios
  • Retail Bank
  • Retail Bank Branch
  • Balance Sheet
  • Cash Flow
  • Income Statement
  • Industry Ratios
  • Common Stock
  • Real Estate Residential Sector
  • Real Estate Office Sector
  • Real Estate Hotel Sector
  • Country
  • Country Financial Risk

For example, the Analysis Template necessary to obtain a resilience rating of a Small/Medium Enterprise looks like this:





More templates may be found at our resilience rating portal which allows users to analyze and rate any kind of business.

Resilience is related intimately to structure. Structure reflects information, interdependency and functionality. Data which is structured conveys more information than data which is chaotic. An example of structure is shown here. Simply move the mouse pointer over the map nodes and links. The example in question illustrates the structure of the financials of a Small/Medium Enterprise.

An engineering example is shown here where we rate a power generation plant.


The bottom line is that resilience rating can be extended to cover not just corporations but also generic systems. Most importantly, however, the concept may be applied to actually rating the quality of our decisions. Decisions based on data having fragile structure increase risks which are compounded by the increasing turbulence and complexity of our economy.







www.ontonix.com





How To Transform a Set of Curves Into One





Suppose you are monitoring a system, say a human brain, a chemical plant, an asset portfolio or a traffic system. Suppose there are hundreds of parameters that you are monitoring. How do you get an idea of how things are going globally? Which parameter do you look at? How do you "add them up"? How can you blend all the information into one parameter that conveys an idea of the situation? The plot above shows an example section of an electroencephalogram (EEG), containing only a small number of channels (electrodes). Clearly, analysing a few curves at the same time is feasible, even via simple visual inspection, but when it comes to hundreds or thousands of channels this is not possible, regardless of the experience of the observer.

One way to map (transform) multiple channels of data onto one scalar function is via complexity. Complexity is a scalar function obtained from a sampled vector x(t) of N channels. The function is computed as C = f (T; E), where T is the Topology of the corresponding System Map (see examples of such maps for an EEG, or an ECG) and E is entropy. Given that entropy is measured in bits, C is also measured in bits, and represents the total amount of structured information within the N-channel data set.

If the N channels of data are each sampled at a certain frequency within a moving window of a certain width, the result is a time-dependent data-complexity function C(t). The process is fast and may be performed in real time using OntoNet™, our Quantitative Complexity Management engine, as illustrated in the scheme below (the blue arrow indicates the direction of time flow).







What the C(t) function represents is:

1. Total amount of structured information contained in the data window at time t. This includes all channel interactions.

2. An idea of overall variability of data. The higher the value of C(t) the more each channel varies in conjunction with other channels. This points to a general "increase of activity" within the system.

In addition, C(t) can be tracked to detect:

1. Imminent "traumas" within a given system. In general, traumas are preceded by sudden increases in C(t).

2. Anomalies, phase transitions, situations which are generally "invisible" to conventional statistical methods.

Evidently, similar analysis can be performed off-line, as well as in real-time.
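As a toy illustration of the moving-window idea - and not OntoNet's actual algorithm - the sketch below slides a window over an N-channel signal and, in each window, combines a crude structure term (the number of strongly related channel pairs) with an entropy term to produce a C(t)-like series:

```python
# Toy illustration of a moving-window complexity series C(t).
# This is NOT OntoNet's algorithm: the "structure" term here is simply the
# number of strongly related channel pairs, and the entropy term is the mean
# Shannon entropy (in bits) of the channel histograms inside the window.
import numpy as np

def window_complexity(window: np.ndarray, bins: int = 16) -> float:
    """window has shape (samples, channels)."""
    corr = np.abs(np.corrcoef(window, rowvar=False))
    n_channels = corr.shape[0]
    strong_links = (corr > 0.5).sum() - n_channels      # off-diagonal strong entries
    entropies = []
    for channel in window.T:
        counts, _ = np.histogram(channel, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        entropies.append(-(p * np.log2(p)).sum())
    return float(strong_links * np.mean(entropies))     # crude C = f(structure, entropy)

def complexity_series(x: np.ndarray, width: int, step: int) -> np.ndarray:
    """x has shape (samples, channels); returns C evaluated on a sliding window."""
    return np.array([window_complexity(x[i:i + width])
                     for i in range(0, len(x) - width, step)])

# Example: 8 synthetic channels, 5000 samples (random walks standing in for an EEG).
rng = np.random.default_rng(0)
channels = rng.standard_normal((5000, 8)).cumsum(axis=0)
c_t = complexity_series(channels, width=500, step=100)
print(np.round(c_t[:5], 2))
```

A sudden jump in such a series is the kind of "increase of activity" or precursor signal described above.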


Today, as we quickly generate massive amounts of data, techniques such as the one described above can help significantly in extracting useful and actionable information.
 
 
 
 
 
 

Saturday 13 July 2013

Comparing Apples With Oranges - Introducing the Concept of Relative Complexity



Can you compare the complexity of a large company to that of a small one? The answer to this question is fairly simple once you distinguish the following two situations:

1. You analyze both companies using exactly the same business parameters.

2. You analyze both companies using similar but not identical business parameters.

In order to analyze a business (a company, a bank, etc.) one uses financial statements such as a Balance Sheet, Cash Flow statement, Income Statement or Ratios. Now, because no two companies conduct exactly the same kind of business, the Balance Sheets, for example, will not always contain exactly the same entries. Therefore, from a rigorous and scientific point of view, such situations would not be comparable. Consequently, directly comparing the complexity values of two companies would not make much sense. Imagine comparing the cholesterol level of a baby to that of an adult.

Unless we're in situation 1 - imagine for example  two branches of the same bank, which are monitored using the same parameters - we will need another means of comparing complexities of business, or portfolios. This is why we have introduced the concept of relative complexity. It is based on the current, lower and upper bounds of complexity and is computed as follows:


C_rel = (C_cr - C) / (C_cr - C_min) × 100%


where C_cr, C_min and C are, respectively, the critical (upper) complexity bound, the lower complexity bound and the current value of complexity.
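In code, the computation is a one-liner; the bounds used below are hypothetical:

```python
# Relative complexity, exactly as in the formula above.
def relative_complexity(c: float, c_min: float, c_cr: float) -> float:
    """C_rel = (C_cr - C) / (C_cr - C_min) x 100%."""
    return (c_cr - c) / (c_cr - c_min) * 100.0

# Hypothetical values: current complexity 18.0, lower bound 10.0, critical bound 24.0.
print(f"Relative complexity: {relative_complexity(18.0, 10.0, 24.0):.0f}%")   # 43%
```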


As an example, let us compare the relative complexities of Goldman Sachs, Citi Bank and Apple - click on the Business Structure Maps to see which business parameters have been used in each case to compute business complexity.



Goldman Sachs.



Relative complexity = 41%           




Citi Bank.




Relative complexity =  24%          



Apple Inc.



Relative Complexity =  31%                 



If you navigate the various Business Structure Maps you will notice that in each of the three cases different business parameters have been used. Comparing the raw complexity values directly would therefore have been meaningless and misleading. Relative complexity, on the other hand, allows us to rank business complexity even when the businesses belong to different categories, markets or market segments, and, most importantly, independently of size. What this means is that (business) size does not matter, and that an SME can be relatively much more complex than a huge multinational corporation.

But the question is, so what? What is the big deal? Why would anyone want to know the relative complexity of a business? The answer is quite simple. Why would anyone want to measure their level of cholesterol? There are many reasons why, but today, in a turbulent economy, a measure of relative complexity is of immense business value.

A highly complex business is, generally:

  • Difficult to manage, to understand
  • Difficult to adapt to turbulence
  • Able to produce surprising behaviour
  • Exposed
  • Fragile
  • Less profitable
  • Less predictable - it is difficult to make credible forecasts

Still want a complex business?




Friday 12 July 2013

Rating the Rating Agencies - We've Rated Moody's.



Moody's is the largest of the Big Three rating agencies. It employs 4500 people worldwide and reported revenue of $2 billion in 2010. Since rating agencies have been under heavy fire since the start of the financial meltdown - in January 2011 the Financial Crisis Inquiry Commission claimed that "The three credit rating agencies were key enablers of the financial meltdown" - we have decided to actually rate one of them. We have chosen Moody's because it is the largest.

However, in rating Moody's we have not rated its financial performance or its capacity to honor its financial obligations or its Probability of Default. In other words, we have not performed a conventional rating which, as we claim, is not relevant in a turbulent economy. What is more relevant in turbulent times is resilience - the capacity of a business to withstand and survive sudden and extreme events. In fact, our ratings actually measure the resilience of a business based on the structure of its financials.

For the analysis we have used our on-line self-rating system. Anybody can use this system to rate any company.

We have used information from Moody's Investor Relations page, available here. Anyone who wishes to verify the results of our rating can do so by simply downloading the financial information and processing it with the self-rating system mentioned above. The process, in other words, is fully transparent.

Since it is not the scope of this short blog post to provide a thorough and detailed analysis, we will illustrate only the results based on the Balance Sheet data. We have, however, also analyzed the Consolidated Income and Cash Flow statements.

The following Balance Sheet entries have been used:


  • Cash and cash equivalents
  • Short term investments
  • Accounts receivable  net of allowances of  
  • Deferred tax assets  net
  • Other current assets
  • Total current assets
  • Property and equipment  net
  • Prepaid pension costs
  • Computer Software  Net                                     
  • Goodwill
  • Intangible assets  net
  • Deferred tax assets  net
  • Other assets
  • Total assets
  • Notes payable
  • Accounts payable and accrued liabilities
  • Commercial paper
  • Revolving credit facility
  • Current portion of long term debt
  • Bank borrowings
  • Deferred revenue
  • Total current liabilities
  • Non current portion of deferred revenue
  • Long term debt
  • Notes payable
  • Deferred tax liabilities  net
  • Unrecognized tax benefits
  • Accrued Income Taxes                                       
  • Other Accrued and Current Liabilities                      
  • Unearned Subscription Income                               
  • Other liabilities
  • Total liabilities
  • PENSION AND POSTRETIREMENT BENEFITS                        
  • Shareholders' deficit: Preferred stock  par value  
  • Shareholders' deficit: Series common stock  par value   
  • Shareholders' deficit: Common stock  par value  
  • Capital surplus
  • Accumulated deficit
  • Retained earnings
  • Treasury stock  at cost       shares of common stock at December 31
  • Accumulated other comprehensive loss
  • Cumulative translation adjustment
  • Total Moody's shareholders' deficit
  • Noncontrolling interests
  • Minimum Pension Liability                                  
  • Total shareholders' deficit
  • Total liabilities and shareholders' deficit

The corresponding Business Structure Map, which may be examined interactively, is shown below.





As the name suggests, the map represents the structure of the business as reflected, in this case, by its Balance Sheet. In the map one may identify dependencies between the various Balance Sheet entries. An intricate and inter-connected map points to a business that is difficult to manage and to understand. Information on how to interpret such maps may be found here.

If the structure of this map is resilient then also the business is resilient. But let us see how resilient the structure of Moody's business really is:



On a scale of one to five stars, a two-star rating is obtained. This is because the business is highly complex - 18.01 - which is quite close to the maximum sustainable complexity of 21.95. This means that the business cannot become much more complex than it already is today and, if it does, it will be unmanageable. In other words, the business is not well prepared to face sudden and extreme events as it is approaching high levels of fragility. Furthermore, since the business is very close to reaching its maximum sustainable complexity threshold, with the current business model Moody's cannot grow much more.
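The headroom implied by the two figures quoted above is simple arithmetic (the mapping from complexity to stars is Ontonix's own):

```python
# Simple arithmetic on the two figures quoted above: how much headroom is left
# before the business reaches its maximum sustainable (critical) complexity?
current_complexity = 18.01
critical_complexity = 21.95

margin = critical_complexity - current_complexity
margin_pct = 100 * margin / critical_complexity
print(f"Remaining headroom: {margin:.2f} ({margin_pct:.0f}% of critical complexity)")
```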

Using Moody's rating scale, two stars corresponds to A3.

When a business functions in the proximity of its critical complexity (think of your cholesterol being close to the limit suggested by your cardiologist) it is important to know what is making the business complex. This information is portrayed by the Corporate Complexity Profile. The Complexity Profile of Moody's is illustrated below:



The entries at the top of the chart are those responsible for the high complexity, and hence for the low resilience, of the business. Values are expressed in percentage terms. The fact that numerous entries make similar contributions (6-8%) points to a situation that is quite intricate and difficult to modify.

The above result poses the question: shouldn't raters have high ratings? Isn't someone who has the power to judge others supposed to give a good example? Would you trust a cardiologist who smokes while he examines your ECG?

How Resilient Are US Markets As A System?




In recent months attention, as well as speculative attacks, has been focused on the EU. But how resilient are US markets? We have analysed the S&P, DJIA and NASDAQ markets separately and as a single system. Here are our findings.


S&P Complexity and Robustness:


 
NASDAQ Complexity and Robustness:

 
DJIA Complexity and Robustness:

 
It is evident that, considered as isolated systems, the three indices appear healthy, with robustness ranging from 70 to 85%. However, this is a false perception. The point is that these markets are not isolated. They may embrace different types of corporations but, in reality, all companies interact, directly or indirectly, forming a single system of systems. In fact, as an integrated analysis shows, the robustness of the combined system is this:



 
Robustness falls to 68% and the Resilience Rating to a mere two stars. Highly fragile. One must always be careful. Numbers don't always tell the full story.

The corresponding Business Structure Map, which reflects the interaction between the markets (blue is NASDAQ), is illustrated below:




It is interesting to note that while the complexities of the single markets are each approximately 7, that of the combined markets is over 27 instead of an expected value of approximately 21. This is because the complexity of each component cannot simply be added to that of the others to produce the overall measure of complexity. In other words, the difficulty in understanding the dynamics of the system of markets is greater than the sum of the complexities of the single markets. What is also non-intuitive (intuition fails quite often, doesn't it?) is that two robust four-star systems, when combined with a three-star system, lead to a two-star, not-so-robust system. So much for linear thinking.
 
 
 
 
 

Conventional Ratings: Why The Abysmal Performance?




Why are ratings so unreliable? In a recent article, which appeared on 19 July 2012 on Thomson Reuters News & Insight, one reads:

"This is how the role of the credit rating agencies was described by the Financial Crisis Enquiry Commission in January 2011: 

The rating agencies were essential to the smooth functioning of the mortgage-backed securities market.  Issuers needed them to approve the structure of their deals; banks needed their ratings to determine the amount of capital to hold; repo markets needed their ratings to determine loan terms; some investors could buy only securities with a triple-A rating; and the rating agencies’ judgment was baked into collateral agreements and other financial contracts.1 

The performance of the credit rating agencies as essential participants in this market has been abysmal.  In September 2011 Moody’s reported that new “impairments,” that is, non-payments, of principal and interest obligations owed to investors through these structured-finance products soared from only 109 in 2006 to 2,153 in 2007 to 12,719 in 2008 and peaked at 14,242 in 2009, but with still more than 8,000 new impairments in 2010. Thus, more than 37,000 discrete investment products defaulted in that time period.
..........

Substantial evidence has suggested that this epidemic of ratings “errors” was not the product of mere negligence, but rather was the direct and foreseeable consequence of the credit rating agencies’ business models and their largely undisclosed economic partnerships with the issuers that paid them for their investment-grade ratings."


Setting aside incompetence, conflicts of interest, the "special relationships", etc., we wish to concentrate on what is probably the most fundamental reason for the "epidemic of rating errors" - the underlying flawed mathematical approach. Yes, the mathematics behind conventional risk rating is flawed not only from a purely mathematical and philosophical perspective; it also opens the door to numerous means of manipulating the results. There are lies, damn lies and statistics. The tools offered by statistics - extremely dangerous in the wrong hands - are the main enabler. In particular, let's look at the concept of correlation, the most fundamental quantity in anything that has to do with risk, ratings, VaR, and their assessment and management.


A correlation measures how two parameters are related to each other as they vary together. Let us see a few significant cases:


A strong linear correlation: R² = 0.83





The above situation is quite frequent in textbooks and in computer-generated examples. In reality, this is what you encounter most often:







The problem becomes nasty when you run into situations such as this, in which R² = 0, but which, evidently, convey plenty of information:






The evident paradox is that you have a clear structure and yet the statistic tells you the two parameters are unrelated. A clear lie in the face of the evidence.
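The point is easy to reproduce. The sketch below builds two clearly structured data sets - a noisy parabola and points on a circle - whose Pearson correlation is nonetheless essentially zero:

```python
# Structured data with (near-)zero linear correlation - the paradox above.
import numpy as np

rng = np.random.default_rng(42)

# Case 1: a noisy parabola. y is almost completely determined by x, yet r is ~0.
x = np.linspace(-1.0, 1.0, 1000)
y = x ** 2 + rng.normal(0.0, 0.02, x.size)
r_parabola = np.corrcoef(x, y)[0, 1]

# Case 2: points on a circle - again clear structure, r is ~0.
theta = rng.uniform(0.0, 2.0 * np.pi, 1000)
r_circle = np.corrcoef(np.cos(theta), np.sin(theta))[0, 1]

print(f"Parabola: r = {r_parabola:+.3f}, R^2 = {r_parabola ** 2:.3f}")
print(f"Circle:   r = {r_circle:+.3f}, R^2 = {r_circle ** 2:.3f}")
```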

And what about cases like this one?









It is easy to draw a straight line passing through two clusters and call it a "trend". But in the case above it is not a trend we see but a bifurcation. A totally different behavior. Totally different physics. Two clusters point to a bifurcation, N clusters could point to N-1 bifurcations... certainly not to a trend.

And finally, how would one treat similar cases?







The data is evidently structured but the correlation is 0. How do you deal with such situations?

The key issue is this: when looking at portfolios composed of thousands of securities, or at other multi-dimensional data sets in which hundreds or thousands of variables are present, correlations are computed blindly, without actually looking at the data (the scatter plots). Who would? There are hundreds of thousands of correlations involved when dealing with large data sets. So one turns a blind eye and simply throws straight lines on top of the data. Some false trends are captured as if they were real, while significant trends are discarded just because they don't fit a linear model. What survives this overly "democratic" filtering goes on to the next step, to create more damage. Think, for example, of Modern Portfolio Theory (MPT), developed by Markowitz, which hinges on the covariance matrix. Covariance is of course intimately related to correlation, and the flaw propagates deeply and quickly. Think of all the other places where you plug in a covariance, a standard deviation or a linear model. Think of how many decisions are made by blindly trusting these concepts. We must stop bending reality to suit our tools. We have become slaves of our own tools.



www.rate-a-business.com



Thursday 11 July 2013

The Present and Future of the Concept and Value of Ratings



“Rating agencies? Do not speak evil of the dead” (Corriere della Sera, 15/1/2009). This is an extreme synthesis of the widespread opinion and of the doubts cast on rating agencies and on the value of the concept of rating. The subprime crisis, Enron, Lehman Brothers and Parmalat are just a few eloquent examples of how a business or a financial product may collapse shortly after being awarded a very high investment-grade rating. But are these isolated cases, which expose the weaknesses of rating processes, or just the tip of an iceberg? Our intention is to provide a critical overview of the concept of rating and to question its conceptual validity and relevance. We believe that the increasing levels of global uncertainty and inter-dependency – complexity, in other words – as well as the socio-economic context of our times, will place under pressure and scrutiny not only the concept of rating, but all conventional and established risk management and Business Intelligence techniques. Unquestionably, the concepts of risk and risk rating lie at the very heart of our troubled global economy. The question is to what extent ratings contributed to the trouble and what, as an alternative, can be done to improve or replace them altogether.

The concept of rating is extremely attractive to investors and decision-makers in that it removes the burden of having to go over the books of a given company before deciding whether or not to invest in it. This job is delegated to specialized analysts whose work culminates in the rating. The rating, therefore, is an instrument of synthesis, and it is precisely this characteristic that has made it so widespread. However, contrary to popular belief, a rating is not, by definition, a recommendation to buy or sell. Nor does it constitute any form of guarantee as to the credibility of a given company. A rating is, by definition, merely an estimate of the probability of insolvency or default over a certain period of time. In order to assign a rating, rating agencies utilize statistical information and statistical models which are used in sophisticated Monte Carlo simulations. In other words, information on the past history of a company is taken into account (typically ratings are assigned to corporations which have at least five years of certified balance sheets) under the tacit assumption that this information is sufficient to hint at what the future of the company will look like. In a “smooth” economy, characterized by prolonged periods of tranquility and stability, this makes sense. However, in a turbulent, unstable and globalized economy, in which the “future is under construction” every single day, this is highly questionable.

The process of rating, whether applied to a corporation or to a structured financial product, is a highly subjective one. The two main sources of this subjectivity are the analyst and the mathematical models employed to compute the probability of insolvency. It is in the mathematical component of the process that we identify the main weakness of the concept of a rating. A mathematical model always requires a series of assumptions or hypotheses in order to make its formulation possible. In practice this means that certain “portions of reality” have to be sacrificed. Certain phenomena are so complicated to model that one simply neglects them, hoping that their impact will be negligible. As already mentioned, this approach may work well in situations dominated by long periods of continuity. In a highly unstable economy, in which sudden discontinuities are around every corner, such an approach is doomed to failure. In fact, the use of models under such circumstances constitutes an additional layer of uncertainty, with the inevitable result of increasing the overall risk exposure. But there is more. In an attempt to capture the discontinuous and chaotic economy, models have become more complex and, therefore, even more questionable.

In the recent past we have observed the proliferation of exotic and elaborate computer models. However, a complicated computer model requires a tremendous validation effort which, in many cases, simply cannot be performed. The reason is quite simple. Every corporation is unique. Every economic crisis is unique. No statistics can capture this fact, no matter how elaborate. Moreover, the complexity and depth of recently devised financial products has greatly surpassed the capacity of any model to fully embrace their intricate and highly stochastic dynamics, creating a fatal spill-over effect into the so-called real economy. We therefore stress strongly that the use of models constitutes a significant source of uncertainty which is superimposed on the turbulence of the global economy, only to be further amplified by the subjectivity of the analyst. In a discontinuous and “fast” economy no modeling technique can reliably provide credible estimates of the probability of insolvency, not even over short periods of time. It is precisely because of this fundamental fact that the concept and value of rating become highly questionable.

For most companies today, survival is going to be their measure of success. Consider that AA and BB ratings indicate, respectively, probabilities of insolvency of 1.45% and 24.57% within a period of 15 years. What is astonishing is not just the precision with which the probability is indicated, but the time span embraced by the rating. The degree of resolution – the number of rating classes – is unjustified in a turbulent economy. The fact that a rating agency can place a corporation in one of 25 or more classes implies that there is sufficient “precision in the economy”, and in the models, to justify such resolution. Clearly, the economy is not that precise. Table 1 indicates the rating classes defined by the three major rating agencies (the number in parentheses indicates the number of classes).

 Table 1.

It is precisely this attempt to seek precision in a highly uncertain and unpredictable environment that casts many doubts on the concept, relevance and value of a rating. We basically maintain that the whole concept is flawed. We see the flaw in the fact that the process of assigning a rating is essentially attempting to do something “wrong” (compute the probability of insolvency) but in a very precise manner.

An interesting parallel may be drawn between ratings and car crash tests. Just like corporations, cars are also rated: for crashworthiness. Just as in the case of rating agencies, there exist different organizations that are certified to issue car crash ratings. A car crash rating is expressed by a number of stars – 1 to 5, with 5 being the highest. A car with a 5-star rating is claimed to be safe. Where is the problem? A crash rating is obtained in a test lab, in clinical conditions. What happens on the road is a different story. A crash rating tells you what happens under very precise but unrealistic conditions. In reality, a car will collide with another car traveling at an unknown speed, of unknown mass and at an unknown angle, not with a fixed flat cement wall at 55 kph and at 90 degrees. So a crash rating attempts to convey something about the future that it cannot possibly capture. Just as in the case of corporate ratings, computer crash simulations use state-of-the-art stochastic models, are computationally very intensive and attempt to provide precise answers for unknown future scenarios.

In summary, a rating is an instrument which, directly or indirectly, synthesizes and quantifies the uncertainty surrounding a certain corporation or financial product as it interacts with its ecosystem. This is not only extremely difficult but, most importantly, it misses the fundamental characteristic of our economy, namely its rapidly increasing complexity. This complexity, which today can actually be measured, contributes to a faster and more turbulent ecosystem with which companies must contend in their struggle to remain in the marketplace. This new scenario calls for new concepts that go beyond the concept of rating. Corporate complexity, as will be shown, occupies a central position.
Complexity, when referred to a corporation, can become a competitive advantage provided it is managed. However, we identify excessive complexity as the main source of risk for a business process. In particular, high complexity implies high fragility, hence vulnerability. In other words, excessively complex corporations are exposed to the Strategic Risk of not surviving in their respective marketplaces. Evidently, high fragility increases the probability of default or insolvency. The concept is expressed synthetically via the following equation:
C_corporation × U_ecosystem = Fragility     (1)

The significance of this simple equation is very clear: the fragility of a corporation is proportional to its complexity and to the uncertainty of the marketplace in which it operates. Complexity amplifies the effects of uncertainty and vice versa. In practice, what the equation states is that a highly complex business can survive in an ecosystem of low uncertainty or, conversely, that if the environment is turbulent, the business will have to be less complex in order to yield an acceptable amount of fragility (risk). High complexity implies the capacity to deliver surprises. In scientific terms, high complexity manifests itself in a multitude of possible modes of behavior the system can express and, most importantly, in the capacity of the system to spontaneously switch from one such mode to another without early warning. The fragility in the above equation is proportional to the strategic risk of being “expelled” from the marketplace.

The weakness of the concept of rating is rooted in the fact that it focuses exclusively on the uncertainty aspects of a corporation without taking complexity into account. Clearly, as equation 1 shows, uncertainty is only part of the picture. In more stable markets, neglecting complexity did not have major consequences. In a less turbulent economy, business fragility is indeed proportional to uncertainty. When turbulence becomes the salient characteristic of an economy, complexity must necessarily be included in the picture as it plays the role of an amplification factor. In fact, the mentioned turbulence is a direct consequence of high complexity (the capacity to surprise and switch modes).

The constant growth of the complexity of our global economy has recently been quantified in an analysis of the GDP evolution of the world’s major economies. It is interesting to note that in the period 2004-2007 the global economy doubled its complexity and that half of this increase took place in 2007 alone (see Figure 1). Similarly, one may observe how the complexity of the US economy also grew, albeit with strong oscillations, denoting an inherently non-stationary environment.


Figure 1. Evolution of complexity (second curve from bottom) of the World and US economies over the period 2004-2007.
The evolution of complexity, and in particular its rapid changes, act as crisis precursors. Recent studies of the complexities of Lehman Brothers, Goldman Sachs and Washington Mutual, as well as of the US housing market, have shown how in all these cases a rapid increase in complexity took place at least a year before their difficulties became public knowledge.

Today it is possible to rationally and objectively measure the complexity of a corporation, a market, a financial product or any business process. Complexity constitutes an intrinsic and holistic property of a generic dynamic system, just like, for example, energy. Evidently, high complexity implies high management effort. It also implies the capacity to deliver surprises. This is why humans prefer to stay away from highly complex situations – they are very difficult to comprehend and manage. This is why, all things being equal, the best solution is the simplest one that works. But there is more. Every system possesses a so-called critical complexity – a sort of physiological limit which represents the maximum amount of complexity a given system may sustain. In the proximity of its critical complexity, a business process becomes fragile and exposed, hence unsustainable. It is evident that the distance from critical complexity is a measure of the state of health, or robustness. In 2005 Ontonix developed rational and objective measures of complexity and critical complexity, publishing templates for a quick on-line evaluation of both of these fundamental properties of a business process. The templates are based on financial highlights and standard balance sheet entries.


The fundamental characteristic of the process of complexity quantification is that it doesn’t make use of any mathematical modeling technique (stochastic, regression, neural networks, statistics, etc.). The method is, in fact, a so-called model-free technique. This allows us to overcome the fundamental limitation of any model which, inevitably, involves simplifications and hypotheses that are rarely verified. As a consequence, the method is objective. Data is analyzed as is, without any assumptions as to Gaussianity or continuity and without any pre-filtering or pre-conditioning. As a result, no further uncertainty, which would contaminate the result, is added. At this point it becomes evident how complexity can occupy a central position in a new approach to the problem of rating. It is in fact sufficient to collect the necessary financial data, compute the complexity and the corresponding critical value, and determine the state of health of the underlying business as follows:


State of health of corporation  = Critical complexity - current complexity 

The closer a business is to its critical complexity, the more vulnerable it is. Simple and intuitive. A paramount property of this approach is that, unlike a rating, it does not have a probabilistic connotation. In other words, no mention is made of the future state of the corporation. All that is indicated is the current state of health; no prediction is advanced. The underlying concept is: a healthy organization can better cope with the uncertainties of its evolving ecosystem. The state of health (or fragility) is stratified into five levels: Very Low, Low, Medium, High and Very High. In highly turbulent environments, attempting to define more classes of risk is of little relevance. It is impossible to squeeze precision out of inherently imprecise systems.

Let us illustrate the above concepts with the example of a publicly traded company. The computation of the state of health (rating) has been performed using fundamental financial parameters (see the blue nodes in the graph in Figure 2) as well as certain macro-economic indicators (red nodes) which represent, albeit in a crude manner, the ecosystem of the corporation. Figure 2 illustrates the so-called Complexity & Risk Map of the corporation as determined using the on-line rating system developed by Ontonix (www.ontonix.com). The parameters of the business are arranged along the diagonal of the graph, while significant relationships between these parameters are represented by the connectors located away from the diagonal. Needless to say, these relationships are determined by a specific model-free algorithm, not by analysts. The health rating of the company is “Very High” and corresponds, in numerical terms, to 89%. The map also indicates the so-called hubs, or dominant variables, shown as nodes of intense red and blue color.



Figure 2. Complexity & Risk Map of a corporation, indicating the corresponding health rating (business robustness).


The conventional concept of rating – understood as a synthetic reflection of the probability of insolvency – has shown its inherent limitations in a global, turbulent and fast economy. The proof lies in the crippled economy. The use of mathematical models, as well as the subjectivity of the rating process, adds a further layer of uncertainty which is invisible to the eyes of investors and managers. Under rapidly changing conditions and in the presence of high complexity, the concept of probability of default is irrelevant. Instead, a more significant rating mechanism may be established based on the instantaneous state of health of a corporation, that is, its capacity to face and counter the uncertainties of its marketplace. In other words, the (strategic) risk of not being able to survive in one’s marketplace is inversely related to the mentioned state of health. The capacity of a corporation to survive in its marketplace depends not only on how turbulent the marketplace is but, most importantly, on how complex the corporation is. This statement assumes even more importance in a highly turbulent economic climate. If corporations do not start to proactively control their own complexity, they will quickly contribute to increasing even further the turbulence of the global marketplace, making survival even more difficult. Complexity, therefore, is not just the basis of a new rating mechanism; it establishes the foundations of a superior and holistic form of Business Intelligence.





Tuesday 9 July 2013

Optimal Does NOT Mean Best


In the second half of the twentieth century it became very popular to seek optimal solutions to a broad spectrum of problems: portfolios, engineering systems, strategies, traffic systems, distribution channels, networks, policies, etc. But have you ever wondered whether optimal really means best? Well, it does not. Optimality is not the most convenient state in which to function. The reason?
Optimal solutions are inherently fragile

Anything that is optimal is, by definition, fragile, hence vulnerable. This is the price one pays for excessive specialization or extreme organization.  Let us see why.

There are very few things that are stationary. In fact, we live in a quickly evolving environment in which there is little time for equilibrium, and in which irreversible and dissipative mechanisms - together with chaos, randomness and extreme events (the so-called Black Swans) - produce a sequence of unique events in which only fundamental patterns can be distinguished and in which the search for repeatable details is futile. This simple fact clashes head-on with the concept of optimality, which hinges on precision and detail. Sure, one can identify sweet spots in a multi-dimensional design space. From a mathematical perspective many things are possible. However, the dynamic non-equilibrium character of Nature guarantees that the conditions for which a given system has been optimized soon cease to exist. The pursuit of perfection is, therefore, an attempt to ignore the ways of Nature, and Nature taxes such efforts in proportion to the magnitude of the intended crime.

The above is true not only in the global economy. In the biosphere it is also risky to be optimal, precisely because ecosystems are dynamic and there is little time to enjoy optimality. As Edward O. Wilson stated in one of his wonderful books, "excessive specialization is a tender trap of evolutionary opportunism." Nature very rarely tolerates optimal designs. In fact, natural systems are, in the majority of cases, fit for function, not optimal.

But there is more. High complexity compounds the dangers of optimality. As a system becomes more complex, approaching its own critical complexity, it possesses an increasingly large number of so-called modes of behaviour (or attractors). Because these modes of behaviour are often very close to each other, tiny perturbations are sufficient for a given system to suddenly transition from one mode of behaviour to another. These sudden mode transitions become more frequent as complexity approaches its upper limit. This is why humans intuitively try to avoid highly complex situations - they are unmanageable precisely because of these unexpected mode transitions. In layman's terms, high complexity reflects a system's capacity to deliver surprises. This is why, when speaking of a highly complex system, a good design is not an optimal one but one that is fit and resilient. In other words:

Attempting to construct optimal solutions in the face of high complexity increases the cost of failure.

The following question arises at this point: knowing that an optimal system is fragile, why not design systems to be sub-optimal in the first place? Why not settle for a little less performance, gaining in robustness and resilience? Why this obsession with being perfect? Why push a system into a very tight corner of its design space, out of which it pops at the snap of a finger? Why do people pursue optimal solutions knowing that an optimal system, precisely because it is optimal, can only get worse, never better? As the ancient Romans claimed, even the Gods are powerless against stupidity.

But how do you get a solution that is fit, not optimal (i.e. fragile)? More than a decade ago we came up with a very simple algorithm called SDI (Stochastic Design Improvement), which is described here and which establishes the following new paradigm in system design (a toy sketch of the loop follows the list):
  1. Specify an initial (nominal) design (or solution) to a given problem.
  2. Specify the acceptable target behaviour of the system, i.e. an improved design with tolerable (but not optimal) performance.
  3. Run SDI - an iterative procedure which produces multiple solutions in the vicinity of the target behaviour. All have very similar performance.
  4. Measure the complexity of each solution and select the least complex one as the final solution, because the least complex solution is the least fragile.
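The toy sketch referenced above (not the actual SDI implementation) generates candidates near the target behaviour, keeps those that meet it, and picks the one with the lowest complexity proxy; every name and number in it is hypothetical:

```python
# Toy sketch of the SDI loop described above - not the actual SDI implementation.
# Perturb a nominal design, keep candidates whose performance is close to the
# target, then pick the least "complex" one (here a crude proxy: the spread of
# the design variables). All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def performance(design: np.ndarray) -> float:
    """Hypothetical system response we want to bring near the target."""
    return float(np.sum(design ** 2))

def complexity_proxy(design: np.ndarray) -> float:
    """Crude stand-in for a complexity measure: variability of the design."""
    return float(np.std(design))

nominal = np.array([1.0, 2.0, 3.0])      # step 1: initial (nominal) design
target, tolerance = 10.0, 0.5            # step 2: tolerable (not optimal) performance

# Step 3: generate candidates in the vicinity of the target behaviour.
candidates = [nominal + rng.normal(0.0, 0.5, nominal.size) for _ in range(2000)]
feasible = [c for c in candidates if abs(performance(c) - target) < tolerance]

# Step 4: all feasible candidates perform similarly; choose the least complex one.
best = min(feasible, key=complexity_proxy)
print("Fit (least complex) design:", np.round(best, 3),
      "performance:", round(performance(best), 2))
```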
The above philosophy is superior to conventional approaches to design, strategy and decision-making because it is tailored to highly uncertain, interconnected and turbulent environments, in which fitness counts much more than ephemeral perfection.
Our economy (and not only the economy) is fragile because everything we do is focused on maximizing something (profits, performance, success) while at the same time minimizing something else (risk, time, investment, R&D). This leads to strains within the system. Everything is stretched to the limit (or as far as physics will allow). This is exactly what one should not do when facing turbulence. The focus should, instead, be on:
  • Solutions that are fit, not optimal.
  • Simplifying business models and strategies.
  • Accepting compromises, not seeking perfection. Improve, don't optimize.
Corruptio optimi pessima!