Wednesday 17 July 2013

Is Your Company Prepared for Extreme Events and Shocks?





Is your company prepared to face extreme events and shocks?

Does it hide fragilities of which you are unaware?  

Is it prepared to face the risks of a turbulent economy?

Is it a one-star business, and hence a candidate for default, or is it resilient and stable?

To which of the classes below does your business belong?

Has it crossed the black line?







Remember, sound financial performance is no guarantee: many triple-A-rated companies have collapsed. Because in a turbulent economy it is the excessive complexity of a business that makes it fragile, it is important to check its resilience on a monthly or quarterly basis. Nowadays things happen very quickly, and conventional risk assessment and risk management techniques are no longer applicable. In measuring the exposure of your business it is important to employ techniques that have been architected specifically to take turbulence into account.

Measure the resilience of your business on-line (click on the image).





www.ontonix.com


Sunday 14 July 2013

Measuring Improvement, Benchmarking and Images


 
Recently released analysis of NASA's Lunar Reconnaissance Orbiter (LRO) data has delivered images of unprecedented quality. The images of the Moon's terrain made possible by the Lunar Orbiter Laser Altimeter (LOLA) instrument onboard the LRO have been compared to similar images obtained using the Unified Lunar Control Network, which was based on the best data available at the time, including imagery from the Clementine, Apollo, Mariner 10, and Galileo missions as well as Earth-based observations. Two images, one from 2005 (left) and one from 2010 (right), are compared and analyzed using OntoSpace. The idea is to measure their complexity and entropy.
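Both OntoSpace and OntoBench (used below) operate on rectangular numeric arrays of equal size, so each image must first be converted into such an array. Here is a minimal preprocessing sketch in Python, assuming the Pillow library is available; the file names are placeholders, not the actual data files:

    import numpy as np
    from PIL import Image

    def to_array(path, size=(512, 512)):
        # Grayscale image -> rectangular numeric array of a fixed, common size.
        return np.asarray(Image.open(path).convert("L").resize(size), dtype=float)

    # Placeholder file names for the 2005 and 2010 lunar images.
    img_2005 = to_array("moon_2005.png")
    img_2010 = to_array("moon_2010.png")
    assert img_2005.shape == img_2010.shape  # equal-size arrays, ready to compare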



 

The real goal, however, is to measure the "distance" between these two images. How is this done? Quite simply. Complexity measures the total amount of structured information and is expressed in bits. Based on the complexities of the two images (177.80 bits in 2005 and 203.66 bits in 2010) we can measure the increase in information to be around 14%. You can also see that, since the entropy of the 2010 image is lower than that of the 2005 image (down to 23411.02 from 26304.82), the image is also sharper.


The truly interesting experiment, however, is to compare the topologies of the two Complexity Maps (see the picture above), as this yields the true amount of "total difference" between the information conveyed by the two images from the perspective of structure. For this purpose we use OntoBench, our complexity-based benchmarking tool. We normally use it to compare bank branches, financial products or corporations. However, since an image may be transformed into a rectangular array, and since OntoBench compares two rectangular arrays of equal size, the tool may also be used to analyze pairs of images. The result is as follows:




Complexity Data Set 1 (Moon_2005_001): 177.80
Complexity Data Set 2 (Moon_2010_001): 203.66


Data Set Similarity: 95.76 %

In other words, since the similarity between the two images is 95.76%, the global increase in information obtained is 100 - 95.76 = 4.24%. This result does not contradict the previously mentioned 14%. Because the 2010 image is sharper, it reveals smaller details. However, these details are quite small and do not alter the topology of the global pattern, which in both cases is essentially the same.
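For completeness, both percentages follow directly from the numbers reported above; a trivial check in Python:

    # Sanity check of the two figures quoted above.
    c_2005, c_2010 = 177.80, 203.66              # complexities, in bits
    print(f"{(c_2010 - c_2005) / c_2005:.1%}")   # ~14.5%: local information increase
    print(f"{100 - 95.76:.2f}%")                 # 4.24%: global structural difference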



www.ontonix.com



You Can Rate Anything, Not Just Corporations


The concept of rating has been known to the general public since the onset of the global economic crisis that erupted in 2008. Ratings are essentially opinions about publicly listed companies in which one may wish to invest, and rating agencies are the institutions that issue such opinions. However, because the three major rating agencies are said to have been "enablers of the financial crisis", and because the performance of ratings has been abysmal, they have earned a bad reputation - various governments, such as those of Italy, Australia and the USA, to give three examples, are suing them - and the concept of rating today carries a generally negative connotation.

But what does a rating really denote? A rating attempts to measure the Probability of Default (PoD) of a business. The idea is simple. You purchase the obligations of a company which needs to raise cash, and the company promises to return your money with interest. The rating is supposed to measure the probability of not getting the investment back. It's as simple as that. However, ratings have shown abysmal performance in the past few years - one reason why people trust them less - not to mention that they may easily be manipulated by those who calculate them. In fact, rating agencies defend themselves by claiming that their ratings are opinions, not science (given the huge amounts of capital that revolve around ratings, they should be science, not opinions!).


But the biggest problem with ratings originates from their very heart. The key to a rating is probability, one of the least understood and most elusive concepts in mathematics and physics. A probability (of a future event) has no physical meaning when applied to a single corporation or an individual. It is one thing to state that out of, say, 1000 corporations from a certain market segment, 5 have defaulted over a period of three years; it is quite another to claim that corporation X has a Probability of Default of, say, 0.025%. Such a statement is meaningless. It has no foundation in science, because our science does not allow us to make predictions - and ratings are futile attempts at making predictions, a sophisticated form of circle-squaring. It cannot be done. Besides, ratings were conceived in a totally different economic context. Today the world is turbulent, and the economy is globalized and dominated by shocks and uncertainty. One cannot use laminar flow models to simulate turbulent flow in fluids, for example; specific turbulence models must be used in such cases.


What we claim in this short article is that the concept of rating should be overhauled, starting right at the base, at the very roots of the entire construct. What we need, in particular, is to depart from the concept of Probability of Default and move on to something different, something a bit more rational and scientific. One such quantity is resilience, the ability to resist shocks and impacts - and shocks are the hallmark of a turbulent economy. While resilience is a property of systems, a PoD is not. Resilience is a physical property and it can be measured. Initially developed by mechanical engineers to characterize materials, the concept of resilience may easily be extended to a wide variety of systems: corporations, banks, markets, countries, portfolios, societies, traffic systems, even the human body.




The idea, therefore, is to rate systems (corporations) based on their resilience, not on their Probability of Default.

All that is needed to measure the resilience of a system is a set of observable outputs which it produces. In the case of a corporation these can be quarterly financial statements, such as Cash Flow. In the case of air traffic one may use the output of an airport radar which scans the airspace at a certain frequency. Data, in other words. It all hinges on data. We need data to make decisions, and the quality of our decisions depends on the quality of the data itself.


If we use data that is unreliable or that has fragile structure, our decisions will be equally unreliable and fragile.

So, the idea is to actually rate the data that represents a given system. This opens infinite possibilities. With a measure of resilience we can attach a quality tag to each piece of data we use to make decisions, to manage systems, to run corporations, to drive an economy.

Resilience is measured on a scale from 0% to 100%. A low value reflects a system which is unable to survive turbulence, which is, for all practical purposes, unstable, and which can easily deliver unpleasant behavior. Unexpectedly. In other words,


resilience allows us to extend the concept of rating to all sorts of systems.
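How the actual metric is computed is proprietary to OntoNet™, but the data-driven idea can be illustrated with a deliberately simple stand-in: score how stable the coupling structure of a system's data remains when the observations are resampled. A minimal sketch in Python (the scoring rule below is ours, for illustration only, and is not Ontonix's algorithm):

    import numpy as np

    def structure_map(data, threshold=0.5):
        # Crude "map": which pairs of observables are significantly coupled.
        corr = np.corrcoef(data, rowvar=False)
        return np.abs(corr) > threshold

    def resilience_score(data, n_boot=200, seed=0):
        # Toy score in [0, 100]: how stable the coupling structure remains
        # when the observations are resampled (bootstrap).
        rng = np.random.default_rng(seed)
        base = structure_map(data)
        n = len(data)
        agreement = [
            (structure_map(data[rng.integers(0, n, size=n)]) == base).mean()
            for _ in range(n_boot)
        ]
        return 100 * float(np.mean(agreement))

    # Example: 40 quarterly observations of 6 hypothetical financial parameters.
    rng = np.random.default_rng(1)
    financials = rng.normal(size=(40, 6))
    print(f"Toy resilience: {resilience_score(financials):.0f}%")

Any set of observables arranged as a rows-by-columns array can be fed to such a function, which is precisely what makes a data-based rating so general.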


Rating a system, therefore, is equivalent to rating the resilience of the data which it produces and which we use to manage and control it. Many of the systems that we are confronted with in our daily activities generate data which is collected according to specific norms and protocols. We have compiled a number of Analysis Templates for the following systems/businesses/data:

  • Small Medium Enterprise
  • Corporation
  • Financial Institution Ratios
  • Retail Bank
  • Retail Bank Branch
  • Balance Sheet
  • Cash Flow
  • Income Statement
  • Industry Ratios
  • Common Stock
  • Real Estate Residential Sector
  • Real Estate Office Sector
  • Real Estate Hotel Sector
  • Country
  • Country Financial Risk

For example, the Analysis Template necessary to obtain a resilience rating of a Small/Medium Enterprise looks like this:





More templates may be found at our resilience rating portal which allows users to analyze and rate any kind of business.

Resilience is intimately related to structure. Structure reflects information, interdependency and functionality. Data which is structured conveys more information than data which is chaotic. An example of structure is shown here: simply move the mouse pointer over the map nodes and links. The example in question illustrates the structure of the financials of a Small/Medium Enterprise.

An engineering example is shown here where we rate a power generation plant.


The bottom line is that resilience rating can be extended to cover not just corporations but also generic systems. Most importantly, however, the concept may be applied to rating the quality of our decisions. Decisions based on data with a fragile structure increase risks, which are compounded by the growing turbulence and complexity of our economy.







www.ontonix.com





How To Transform a Set of Curves Into One





Suppose you are monitoring a system - say a human brain, a chemical plant, an asset portfolio, or a traffic system - and that there are hundreds of parameters to keep track of. How do you get an idea of how things are going globally? Which parameter do you look at? How do you "add them up"? How can you blend all the information into one parameter that conveys an idea of the situation? The plot above shows a section of an electroencephalogram (EEG) containing only a small number of channels (electrodes). Clearly, analysing a few curves at the same time is feasible, even via simple visual inspection, but when it comes to hundreds or thousands of channels this is no longer possible, regardless of the experience of the observer.

One way to map (transform) multiple channels of data onto one scalar function is via complexity. Complexity is a scalar function obtained from a sampled vector x(t) of N channels. The function is computed as C = f(T; E), where T is the topology of the corresponding System Map (see examples of such maps for an EEG, or an ECG) and E is entropy. Given that entropy is measured in bits, C is also measured in bits, and represents the total amount of structured information within the N-channel data set.

If the N channels of data are each sampled at a certain frequency within a moving window of a certain width, the result is a time-dependent function of data complexity, C(t). The process is fast and may be performed in real-time using OntoNet™, our Quantitative Complexity Management engine, as illustrated in the scheme below (the blue arrow indicates the direction of time flow).







What the C(t) function represents is:

1. Total amount of structured information contained in the data window at time t. This includes all channel interactions.

2. An idea of overall variability of data. The higher the value of C(t) the more each channel varies in conjunction with other channels. This points to a general "increase of activity" within the system.

In addition, C(t) can be tracked to detect:

1. Imminent "traumas" within a given system. In general, traumas are preceded by sudden increases in C(t).

2. Anomalies, phase transitions, situations which are generally "invisible" to conventional statistical methods.

Evidently, similar analysis can be performed off-line, as well as in real-time.
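To make the mechanics concrete, here is a minimal sketch of the moving-window computation in Python. OntoNet™'s complexity metric is proprietary, so the per-window score below is a toy stand-in (total channel entropy scaled by the mean inter-channel coupling); the windowing logic is the point:

    import numpy as np

    def channel_entropy(x, bins=16):
        # Shannon entropy of one channel, in bits, via histogram binning.
        counts, _ = np.histogram(x, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def window_complexity(window):
        # Toy C for one (samples x channels) window: total channel entropy
        # scaled by the mean absolute inter-channel correlation ("structure").
        entropies = [channel_entropy(window[:, j]) for j in range(window.shape[1])]
        corr = np.corrcoef(window, rowvar=False)
        coupling = np.abs(corr[np.triu_indices_from(corr, k=1)]).mean()
        return sum(entropies) * coupling

    def complexity_curve(data, width=256, step=64):
        # Slide a window along the data and emit the scalar function C(t).
        return np.array([
            window_complexity(data[i:i + width])
            for i in range(0, len(data) - width + 1, step)
        ])

    # Example: 8 synthetic channels, 10,000 samples (a stand-in for an EEG).
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 8)).cumsum(axis=0)
    C_t = complexity_curve(data)
    print(C_t.shape, C_t[:3])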


Today, as we generate massive amounts of data ever more quickly, techniques such as the one described above can help significantly in extracting useful and actionable information.
 
 
 
 
 
 

Saturday 13 July 2013

Comparing Apples With Oranges - Introducing the Concept of Relative Complexity



Can you compare the complexity of a large company to that of a small one? The answer to this question is fairly simple once you distinguish the following two situations:

1. You analyze both companies using exactly the same business parameters.

2. You analyze both companies using similar but not identical business parameters.

In order to analyze a business (a company, a bank, etc.) one uses financial statements such as a Balance Sheet, Cash Flow statement, Income Statement or Ratios. Now, because no two companies conduct exactly the same kind of business, their Balance Sheets, for example, will not always contain exactly the same entries. Therefore, from a rigorous and scientific point of view, such situations are not comparable, and directly comparing the complexity values of two companies would not make much sense. Imagine comparing the cholesterol level of a baby to that of an adult.

Unless we are in situation 1 - imagine, for example, two branches of the same bank which are monitored using the same parameters - we need another means of comparing the complexities of businesses or portfolios. This is why we have introduced the concept of relative complexity. It is based on the current value of complexity and on its lower and upper (critical) bounds, and is computed as follows:


CRel = (Ccr - C) / (Ccr - Cmin) × 100%


where Ccr, Cmin and C denote, respectively, the critical (upper) complexity bound, the lower complexity bound, and the current value of complexity.
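The formula translates directly into code; a one-function sketch (the numbers in the example are purely hypothetical, not those of the companies below):

    def relative_complexity(c, c_min, c_cr):
        # Relative complexity in %, per the formula above:
        # 100% at the lower bound Cmin, 0% at the critical bound Ccr.
        return (c_cr - c) / (c_cr - c_min) * 100

    # Hypothetical values, for illustration only:
    print(f"{relative_complexity(c=18.0, c_min=10.0, c_cr=22.0):.0f}%")  # 33%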


As an example, let us compare the relative complexities of Goldman Sachs, Citi Bank and Apple - click on the Business Structure Maps to see which business parameters have been used in each case to compute business complexity.



Goldman Sachs.



Relative complexity = 41%           




Citi Bank.




Relative complexity = 24%



Apple Inc.



Relative complexity = 31%



If you navigate the various Business Structure Maps you will notice that in each of the three cases different business parameters have been used in the analysis. Comparing the raw complexities would therefore have been meaningless and misleading. Relative complexity, on the other hand, allows us to rank businesses by complexity even if they belong to different categories, markets or market segments and, most importantly, independently of size. What this means is that (business) size does not matter, and that an SME can be relatively much more complex than a huge multinational corporation.

But the question is: so what? What is the big deal? Why would anyone want to know the relative complexity of a business? The answer is quite simple. Why would anyone want to measure their cholesterol level? There are many reasons, but today, in a turbulent economy, a measure of relative complexity is of immense business value.

A highly complex business is, generally:

  • Difficult to manage, to understand
  • Difficult to adapt to turbulence
  • Able to produce surprising behaviour
  • Exposed
  • Fragile
  • Less profitable
  • Less predictable - it is difficult to make credible forecasts

Still want a complex business?




Friday 12 July 2013

Rating the Rating Agencies - We've Rated Moody's.



Moody's is the largest of the Big Three rating agencies. It employs 4,500 people worldwide and reported revenue of $2 billion in 2010. Since rating agencies have been under heavy fire since the start of the financial meltdown - in January 2011 the Financial Crisis Inquiry Commission claimed that "the three credit rating agencies were key enablers of the financial meltdown" - we have decided to actually rate one of them, choosing Moody's precisely because it is the largest.

However, in rating Moody's we have not rated its financial performance, its capacity to honor its financial obligations, or its Probability of Default. In other words, we have not performed a conventional rating which, as we claim, is not relevant in a turbulent economy. What is more relevant in turbulent times is resilience - the capacity of a business to withstand and survive sudden and extreme events. In fact, our ratings measure the resilience of a business based on the structure of its financials.

For the analysis we have used our on-line self-rating system. Anybody can use this system to rate any company.

We have used information from Moody's Investor Relations page, available here. Anyone wishing to verify the results of our rating may do so by simply downloading the financial information and processing it with the aforementioned self-rating system. The process, in other words, is fully transparent.

Since a thorough and detailed analysis is beyond the scope of this short blog post, we will illustrate only the results based on the Balance Sheet data. We have, however, also analyzed the Consolidated Income and Cash Flow statements.

The following Balance Sheet entries have been used:


  • Cash and cash equivalents
  • Short term investments
  • Accounts receivable  net of allowances of  
  • Deferred tax assets  net
  • Other current assets
  • Total current assets
  • Property and equipment  net
  • Prepaid pension costs
  • Computer Software  Net                                     
  • Goodwill
  • Intangible assets  net
  • Deferred tax assets  net
  • Other assets
  • Total assets
  • Notes payable
  • Accounts payable and accrued liabilities
  • Commercial paper
  • Revolving credit facility
  • Current portion of long term debt
  • Bank borrowings
  • Deferred revenue
  • Total current liabilities
  • Non current portion of deferred revenue
  • Long term debt
  • Notes payable
  • Deferred tax liabilities  net
  • Unrecognized tax benefits
  • Accrued Income Taxes                                       
  • Other Accrued and Current Liabilities                      
  • Unearned Subscription Income                               
  • Other liabilities
  • Total liabilities
  • PENSION AND POSTRETIREMENT BENEFITS                        
  • Shareholders' deficit: Preferred stock  par value  
  • Shareholders' deficit: Series common stock  par value   
  • Shareholders' deficit: Common stock  par value  
  • Capital surplus
  • Accumulated deficit
  • Retained earnings
  • Treasury stock  at cost       shares of common stock at December 31
  • Accumulated other comprehensive loss
  • Cumulative translation adjustment
  • Total Moody's shareholders' deficit
  • Noncontrolling interests
  • Minimum Pension Liability                                  
  • Total shareholders' deficit
  • Total liabilities and shareholders' deficit

The corresponding Business Structure Map, which may be examined interactively, is shown below.





As the name suggests, the map represents the structure of the business as reflected, in this case, by its Balance Sheet. In the map one may identify dependencies between the various Balance Sheet entries. An intricate and inter-connected map points to a business that is difficult to manage and to understand. Information on how to interpret such maps may be found here.

If the structure of this map is resilient, then so is the business. But let us see how resilient the structure of Moody's business really is:



On a scale of one to five stars, Moody's obtains a two-star rating. This is because the business is highly complex - 18.01 - quite close to its maximum sustainable complexity of 21.95. This means that the business cannot become much more complex than it already is; if it does, it will become unmanageable. In other words, the business is not well prepared to face sudden and extreme events, as it is approaching high levels of fragility. Furthermore, since the business is very close to its maximum sustainable complexity threshold, with its current business model Moody's cannot grow much more.
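To put the two figures in perspective, a trivial computation shows just how little headroom is left:

    # Headroom between Moody's current and critical complexity (values above).
    c_current, c_critical = 18.01, 21.95
    print(f"{c_current / c_critical:.0%} of critical complexity")  # 82%
    print(f"{c_critical - c_current:.2f} units of headroom")       # 3.94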

Using Moody's rating scale, two stars corresponds to A3.

When a business functions in the proximity of its critical complexity (think of your cholesterol being close to the limit suggested by your cardiologist) it is important to know what is making the business complex. This information is portrayed by the Corporate Complexity Profile. The Complexity Profile of Moody's is illustrated below:



The entries at the top of the chart are those responsible for the high complexity, and hence for the low resilience, of the business. Values are expressed in percentage terms. The fact that numerous entries make similar contributions (6-8%) points to a situation that is quite intricate and difficult to modify.

The above result poses the question: shouldn't raters have high ratings? Isn't someone who has the power to judge others supposed to give a good example? Would you trust a cardiologist who smokes while he examines your ECG?

How Resilient Are US Markets As A System?




In the past months, attention - as well as speculative attacks - has been focused on the EU. But how resilient are US markets? We have analysed the S&P, DJIA and NASDAQ markets separately and as a single system. Here are our findings.


S&P Complexity and Robustness:


 
NASDAQ Complexity and Robustness:

 
DJIA Complexity and Robustness:

 
It is evident that, considered as isolated systems, the three indices appear healthy, with robustness ranging from 70% to 85%. However, this is a false perception. The point is that these markets are not isolated. They may embrace different types of corporations but, in reality, all companies interact, directly or indirectly, forming a single system of systems. In fact, as an integrated analysis shows, the robustness of the combined system is this:



 
Robustness falls to 68%, and the Resilience Rating to a mere two stars. Highly fragile. One must always be careful: numbers don't always tell the full story.

The corresponding Business Structure Map, which reflects the interaction between the markets (blue is NASDAQ), is illustrated below:




It is interesting to note that while the complexities of the single markets are each approximately 7, that of the combined markets is over 27, instead of the expected value of approximately 21. This is because the complexity of each component cannot simply be added to that of the others to produce the overall measure of complexity. In other words, the difficulty of understanding the dynamics of the system of markets is greater than the sum of the complexities of the single markets. What is also non-intuitive (intuition fails quite often, doesn't it?) is that two robust four-star systems, when combined with a three-star system, lead to a two-star, not-so-robust system. So much for linear thinking.
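In arithmetic terms, using the approximate figures above, the excess attributable to inter-market interactions is considerable:

    # Complexity is super-additive: the combined system exceeds the sum of parts.
    single = [7.0, 7.0, 7.0]         # approximate complexities of S&P, NASDAQ, DJIA
    combined = 27.0                  # complexity of the three markets as one system
    print(sum(single))               # 21.0: the naive "expected" value
    print(combined - sum(single))    # ~6: excess due to inter-market interactions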