Monday 8 July 2013

Can increasingly complex systems be engineered without taking complexity into account?





The trend toward ever richer and more articulated product offerings is putting pressure on manufacturing companies like never before. As a result, the complexity of modern products and of the associated manufacturing processes is rapidly increasing. High complexity, as we know, is a prelude to vulnerability. In all spheres of social life, excessive complexity leads to inherently fragile situations. Humans perceive this intuitively and try to stay away from highly complex situations. But can complexity be taken into account in the design and manufacturing of products? The answer is affirmative. Recently developed technology, which allows engineers to actually measure the complexity of a given design or product, makes it possible to use complexity as a design attribute. A product may today be conceived and designed with complexity in mind from day one. Not only stresses, frequencies or fatigue life but also complexity can become a design target for engineers. Evidently, if CAE is to cope with the inevitable increase of product complexity, complexity must somehow enter the design loop. As mentioned, today this is possible. Before going into the details of how this may be done, let us first take a look at the philosophy underlying a “Complexity-Based CAE” paradigm. Strangely enough, the principles of this innovative approach to CAE were established in the 14th century by William of Ockham when he announced his law of parsimony - "Entia non sunt multiplicanda praeter necessitatem" - which boils down to the more familiar "All other things being equal, the simplest solution is the best." The key, of course, is measuring simplicity (or complexity). Today, we may phrase this fundamental principle in slightly different terms:

Complexity X Uncertainty = Fragility

This is a more elaborate version of Ockham’s principle (known as Ockham’s razor), which may be read as follows: the level of fragility of a given system is the product of the complexity of that system and of the uncertainty of the environment in which it operates. In other words, in an environment with a given level of uncertainty or “turbulence” (sea, atmosphere, stock market, etc.) a more complex system/product will be more fragile and therefore more vulnerable. Evidently, in the case of a system having a given level of complexity, if the uncertainty of its environment is increased, this too leads to an increase in fragility. We could articulate this simple concept further by stating that:

C_design X (U_manufacturing + U_environment) = F

In the above equation we explicitly indicate that the imperfections inherent in the manufacturing and assembly process introduce uncertainty, which may be added to that of the environment. What this means is simple: more audacious (highly complex) products require more stringent manufacturing tolerances in order to survive in an uncertain environment. Conversely, if one is willing to decrease the complexity of a product, then a less sophisticated and less expensive manufacturing process may be used if the same level of fragility is sought. It goes without saying that concepts such as fragility and vulnerability are intimately related to robustness. High fragility = low robustness. In other words, for a given level of uncertainty in a certain operational environment, the robustness of a given system or product is inversely proportional to its complexity. As mentioned, excessive complexity is a source of risk, also in engineering.

Now that we understand why measuring complexity may open new and exciting possibilities in CAE and CAD, let us take a closer look at what complexity is and how it can be incorporated in the engineering process by becoming a fundamental design attribute. In order to expose the nature of complexity, an important semantic clarification is due at this point: the difference between complex and complicated. A complicated system, such as a mechanical wristwatch, is indeed formed of numerous components – in some cases as many as one thousand – which are linked to each other, but, at the same time, the system is deterministic in nature. It cannot behave in an uncertain manner and is therefore easy to manage. It is very complicated but has extremely low complexity. Complexity, on the other hand, implies the capacity to deliver surprises. This is why humans intuitively don’t like to find themselves in highly complex situations. In fact, highly complex systems can behave in a myriad of ways (called modes) and have the nasty habit of spontaneously switching mode, for example from nominal to failure. If the complexity in question is high, not only does the number of failure modes increase, but the effort necessary to cause catastrophic failure decreases in proportion.

Highly complicated products do not necessarily have to be highly complex. It is also true that high complexity does not necessarily imply very many interconnected components. In fact, a system with very few components can be extremely difficult to understand and control. And this brings us to our definition of complexity. Complexity is a function of two fundamental components:
  • Structure. This is reflected via the topology of the information flow between the components in a system. Typically, this is represented via a Process Map or a graph in which the components are the nodes (vertices) of the graph, connected via links (see example below).
  • Entropy. This is a fundamental quantity which measures the amount of uncertainty of the interactions between the components of the system.

Figure 1. Process Map of a CFD model of a power plant. Nodes are aligned along the diagonal of the map and significant relationships between them are indicated via blue connectors.


Obtaining a process map is simple. Two alternatives exist.
  • Run a Monte Carlo Simulation with a numerical (e.g. FEM) model, producing a rectangular array in which the columns represent the variables (nodes of the map) and the rows correspond to different stochastic realizations of these variables.
  • Collect sensor readings from a physical time-dependent system, building a similar rectangular array, in which the realizations of the variables are obtained by sampling the sensor channels at a specific frequency.
Once such arrays are available, they may be processed by OntoNet™ which directly produces the maps. A Process Map, together with its topology, reflects the functionality of a given system. Functionality, in fact, is determined by the way the system transmits information from inputs to outputs and also between the various outputs. In a properly functioning system at steady-state, the corresponding Process Map is stable and does not change with time. Evidently, if the system in question is deliberately driven into other modes of functioning – for example from nominal to maintenance – the map will change accordingly.
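
The data-preparation step, and the general idea of extracting a map of significant relationships, can be sketched in a few lines of Python. The snippet below is only an illustrative proxy: it stacks Monte Carlo realizations (or sampled sensor channels) into the rectangular array described above and links variable pairs whose mutual dependence exceeds a threshold, here approximated with a simple correlation measure. Neither the threshold value nor the use of correlation reflects OntoNet™’s actual (proprietary) algorithm.

```python
import numpy as np

def build_data_array(realizations):
    """Stack Monte Carlo realizations (or sampled sensor channels) into a
    rectangular array: rows = realizations, columns = variables."""
    return np.vstack(realizations)

def process_map_proxy(data, threshold=0.7):
    """Return a boolean adjacency matrix linking variable pairs whose absolute
    correlation exceeds `threshold` (illustrative proxy, not OntoNet's method)."""
    corr = np.corrcoef(data, rowvar=False)   # variable-to-variable correlation
    adjacency = np.abs(corr) > threshold     # keep only significant relationships
    np.fill_diagonal(adjacency, False)       # no self-links
    return adjacency

# Example: 500 stochastic realizations of 6 variables from a Monte Carlo run
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 6))
x[:, 1] = 0.9 * x[:, 0] + 0.1 * rng.normal(size=500)   # one strongly coupled pair
adj = process_map_proxy(x)
print(adj.astype(int))
```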

A key concept is that of the hub. Hubs are nodes in the map which possess the highest degree (number of connections to other nodes). Hubs may be regarded as critical variables in a given system, since their loss causes massive topological damage to a Process Map and therefore loss of functionality. Loss of a hub means one is on the path to failure. In ecosystems, hubs of the food chain are known as keystone species. Often, keystone species are innocent insects or even single-cell animals. Wipe one out and the whole ecosystem may collapse. Clearly, single-hub ecosystems are more vulnerable than multi-hub ones. However, no matter how many hubs a system has, it is fundamental to know them. The same concept applies to engineering, of course. In a highly sophisticated system, very often even the experienced engineer who has designed it does not know all the hubs. One reason why this is the case is that CAE still lacks so-called systems thinking, and models are built and analyzed in “stagnant compartments” in a single-discipline setting. It is only when a holistic approach is adopted, sacrificing detail for breadth, that one can establish the hubs of a given system in a meaningful manner. In effect, the closer you look the less you see!
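
Once an adjacency matrix for a Process Map is available (for example the illustrative `adj` array from the previous sketch), identifying hubs amounts to counting and ranking node degrees, as in the following sketch; the variable names are hypothetical.

```python
def node_degrees(adjacency, names=None):
    """Count the connections of each node and rank them in decreasing
    order; the top entries are the hubs of the Process Map."""
    degrees = adjacency.sum(axis=1)
    names = names or [f"v{i}" for i in range(len(degrees))]
    return sorted(zip(names, degrees), key=lambda p: p[1], reverse=True)

for name, degree in node_degrees(adj):
    print(name, int(degree))
```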

Robustness has always been a concern of engineers. But can complexity be used to define and measure robustness? There exist many “definitions” of robustness. None of them is universally accepted. Most of these definitions talk of insensitivity to external disturbances. It is often claimed that low scatter in performance reflects high robustness and vice-versa. But scatter really reflects quality, not robustness. Besides, such “definitions” do not allow engineers to actually measure the overall robustness of a given design. Complexity, on the other hand, not only allows us to establish a new and holistic definition of robustness, but it also makes it possible to actually measure it, providing a single number which reflects “the global state of health” of the system in question. We define robustness as the ability of a system to maintain functionality. How do you measure this? In order to explain this new concept it is necessary to introduce the concept of critical complexity. Critical complexity is the maximum amount of complexity that a system is able to sustain before it starts to break down. Every system possesses such a limit. At critical complexity, systems become fragile and their corresponding Process Maps start to break up. The critical complexity threshold is determined by OntoSpace™ together with the current value of complexity. The global robustness of a system may therefore be expressed as the distance that separates its current complexity from the corresponding critical complexity. In other words, R = (C_cr – C)/C_cr, where C is the system complexity and C_cr the critical complexity. With this definition in mind it now becomes clear why Ockham’s rule so strongly favours simpler solutions! A simpler solution is farther from its corresponding criticality threshold than a more complex one – it is intrinsically more robust.
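
Once OntoSpace™ has produced the current complexity C and the critical complexity C_cr, the robustness measure above is a one-liner; the numbers in the sketch below are made up purely for illustration.

```python
def robustness(c, c_cr):
    """Topological robustness: relative distance of the current complexity
    from the critical complexity threshold, R = (C_cr - C) / C_cr."""
    return (c_cr - c) / c_cr

# Hypothetical values: the simpler design sits farther from criticality
print(robustness(5.4, 12.0))   # ~0.55 -> more robust
print(robustness(8.5, 12.0))   # ~0.29 -> less robust
```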

The new complexity-based definition of robustness may also be called topological robustness, as it quantifies the “resilience” of the system’s Process Map in the face of external and internal perturbations (noise). However, the Process Map itself carries additional fundamental information that provides further mechanisms to assess robustness in a more profound way. It is obvious that a multi-hub system is more robust – the topology of its Process Map is more resilient, its functionality is more protected – than a system depending on a small number of hubs. A simple way to quantify this concept is to establish the degree of each node in the Process Map – this is done by simply counting the connections stemming from each node – and to plot the degrees in increasing order. This is known as the connectivity histogram. A spiky plot, known also as a Zipfian distribution, points to a fragile system, while a flatter one reflects a less vulnerable Process Map topology.

The density of a Process Map is also a significant parameter. Maps with very low density (below 5-10%) point to systems with very little redundancy, i.e. with very little fail-safe capability. Highly dense maps, on the other hand, reflect situations in which it will be very difficult to make modifications to the system’s performance, precisely because of the high connectivity. In such cases, introducing a change at one node will immediately impact other nodes. Such systems are “stiff” in that reaching acceptable compromises is generally very difficult and often the only alternative is re-design.
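
Both the connectivity histogram and the map density can be computed directly from an adjacency matrix, as in the sketch below (again reusing the illustrative `adj` proxy). Density is taken here as the ratio of actual links to the maximum possible number of links; the exact normalization used by the tools is an assumption.

```python
def connectivity_histogram(adjacency):
    """Node degrees sorted in increasing order: a spiky (Zipf-like) profile
    points to a fragile, hub-dependent topology; a flatter one is more robust."""
    return sorted(int(d) for d in adjacency.sum(axis=1))

def map_density(adjacency):
    """Ratio of actual links to the maximum possible number of links."""
    n = adjacency.shape[0]
    links = int(adjacency.sum()) // 2     # undirected map: each link is counted twice
    return links / (n * (n - 1) / 2)

print(connectivity_histogram(adj))
print(f"density = {map_density(adj):.1%}")
```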

And how about measuring the credibility of models? Models are only models. Remember how many assumptions one must make to write a partial differential equation (PDE) describing the vibrations of a beam? The beam is long and slender, the constraints are perfect, the displacements are small, shear effects are neglected, rotational inertia is neglected, the material is homogeneous, the material is elastic, sections remain plane, loads are applied far from constraints, etc., etc. How much physics has been lost in the process? 5%? 10%? But that’s not all. The PDE must be discretized using finite difference or finite element schemes. Again, the process implies an inevitable loss of physical content. If that were not enough, very often, because of high CPU consumption, models are projected onto so-called response surfaces. Needless to say, this too removes physics. At the end of the day we are left with a numerical artefact which, if one is lucky (and has plenty of grey hair), captures correctly 80-90% of the real thing. Many questions may arise at this point. For instance, one could ask how relevant an optimization exercise is if, after exposing such numerical constructs to a plethora of algorithms, it delivers an improvement in performance of, say, 5%. This and other similar questions bring us to a fundamental and probably most neglected aspect of digital simulation – that of model credibility and model validation. Knowing how much one can trust a digital model is of paramount importance:
  • Models are supposed to be cheaper than the real thing - physical tests are expensive.
  • Some things just cannot be tested (e.g. spacecraft in orbit).
  • If a model is supposed to replace a physical test but one cannot quantify how credible the model is (80%, 90% or maybe 50%) how can any claims or decisions based on that model be taken seriously?
  • You have a model with one million elements and you are seriously considering mesh refinement in order to get “more precise answers”, but you cannot quantify the level of trust of your model. How significant is the result of the mesh refinement?
  • You use a computer model to deliver an optimal design but you don't know the level of trust of the model. It could very well be 70% or 60%. Or less. You then build the real thing. Are you sure it is really optimal?
But is it possible to actually measure the level of credibility of a computer model? The answer is affirmative. Based on complexity technology, a single physical test and a single simulation are sufficient to quantify the level of trust of a given computer model, provided the phenomenon in question is time-dependent. The process of measuring the quality of the model is simple:
  • Run a test and collect results (outputs) in a set of points (sensors). Arrange them in a matrix.
  • Run the computer simulation, extracting results in the same points and with the same frequency. Arrange them in a matrix.
  • Measure the complexity of both data sets. You will obtain a Process Map and the associated complexity for each case, C_t and C_m (test and model, respectively).
The following scenarios are possible:
  • The values of complexity for the two data sets are similar: your model is good and credible.
  • The test results prove to be more complex than simulation results: your model misses physics or is based on wrong assumptions.
  • The simulation results prove to be more complex than the physical test results: your model probably generates noise.
But clearly there is more. Complexity is equivalent to structured information. It is not just a number. If the complexities of the test and simulation results are equal (or very similar) one has satisfied only the necessary condition of model validity. A stronger sufficient condition requires in addition the following to hold:
  • The topologies of the two Process Maps are identical.
  • The hubs of the maps are the same.
  • The densities of the maps (i.e. ratio of links to nodes) are the same.
  • The entropy content in both cases is the same.
The measure of model credibility, or level of trust, may now be quantified as:

MC = abs[ (C_test - C_model)/C_test ]

Figure 2 illustrates the Process Maps obtained from a crash test (left) and a simulation (right). The simulation model has a complexity of 6.53, while the physical test has a complexity of 8.55. This leads to a difference of approximately 23%. In other words, we may conclude that, according to the weak condition, the model captures approximately 77% of what the test has to offer. Moreover, the Process Maps are far from being similar. Evidently, the model still requires a substantial amount of work.
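
Plugging the Figure 2 values into the credibility index defined above reproduces the difference quoted in the text; a minimal sketch:

```python
def model_credibility(c_test, c_model):
    """Level of trust based on the relative complexity difference between
    the physical test and the simulation (weak, necessary condition only)."""
    return abs((c_test - c_model) / c_test)

mc = model_credibility(c_test=8.55, c_model=6.53)
print(f"difference = {mc:.1%}, captured = {1 - mc:.1%}")   # ~23.6% and ~76.4%
```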

But there is more - the same index may be used to "measure the difference" between two models in which:
  • The FE meshes have different densities (a fine mesh and a coarse mesh are built for the same problem).
  • One model is linear, the other is non-linear (one is not sure if a linear model is suitable for a given problem).
  • One model is run on 1 CPU and then on 4 CPUs (it is known that with explicit models this often leads to different results).

 Figure 2. Process Maps obtained for a physical car crash-test (left) and for a simulation (right).

And what about complexity and CAD? It is evident to every engineer that a simpler solution to a given problem is almost always:
  • Easier to design
  • Easier to assemble/manufacture
  • Easier to service/repair
  • Intrinsically more robust

The idea behind complexity-based CAD is simple: design a system that is as simple as possible but which fulfils functional requirements and constraints. Now that complexity may be measured in a rational manner, it can become a specific design objective and target and we may put the “Complexity X Uncertainty = Fragility” philosophy into practice. One way to proceed is as follows:
  • Establish a nominal parametric model of a system (see example in Figure 3, illustrating a pedestrian bridge)
  • Generate a family of topologically feasible solutions using Monte Carlo Simulation (MCS) to randomly perturb all the dimensions and features of the model.
  • Generate a mesh for each Monte Carlo realization.
  • Run an FE solver to obtain stresses and natural frequencies.
  • Process the MCS with OntoSpace™.
  • Define constraints (e.g. dimensions) and performance objectives (e.g. frequencies, mass).
  • Obtain a set of solutions which satisfy both the constraints as well as the performance objectives.
  • Obtain the complexity for each solution.
  • Select the solution with the lowest complexity.

The above process may be automated using a commercial CAD system with meshing capability, a multi-run environment which supports Monte Carlo simulation, and an FE solver. In the case of our bridge example, Figure 4 illustrates two solutions possessing very similar mass, natural frequencies, stresses and robustness, but dramatically different values of complexity. The solution on the right has a complexity of 8.5, while the one on the left has 5.4.
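
The whole loop can be orchestrated by a short driver script. The sketch below is purely schematic: the function arguments (`perturb_parameters`, `mesh_and_solve`, `measure_complexity` and the two feasibility checks) are placeholders for the CAD system, FE solver and OntoSpace™ steps and do not correspond to any real API.

```python
def complexity_based_design(nominal, n_samples, perturb_parameters, mesh_and_solve,
                            constraints_ok, objectives_ok, measure_complexity):
    """Monte Carlo driver for complexity-based CAD: keep the feasible solutions
    that meet the performance objectives and return the least complex one."""
    candidates = []
    for _ in range(n_samples):
        params = perturb_parameters(nominal)      # randomly perturb dimensions/features
        results = mesh_and_solve(params)          # mesh generation + FE run (stresses, frequencies)
        if constraints_ok(params) and objectives_ok(results):
            candidates.append((measure_complexity(results), params))
    return min(candidates, key=lambda c: c[0]) if candidates else None
```

A call would look like `best = complexity_based_design(nominal_model, 1000, ...)`, with the placeholder functions supplied by the actual toolchain.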


Figure 3. Parametric quarter-model of a pedestrian bridge.


Figure 4. Two solutions to the pedestrian bridge. Note the critical variables (hubs) indicated in red (inputs) and blue (outputs).

Given that the complexity of man-made products, and of the related manufacturing processes, is growing quickly, these products are becoming increasingly exposed to risk, since high complexity inevitably leads to fragility. At the same time, the issues of risk and liability management are becoming crucial in today’s turbulent economy. Highly complex and sophisticated products are characterized by a huge number of possible failure modes, and it is a practical impossibility to analyze them all. The alternative, therefore, is to design systems that are intrinsically robust, i.e. that possess a built-in capacity to absorb both expected and unexpected random variations of operational conditions without failing or compromising their function. Robustness is reflected in the fact that the system is no longer optimal – a property that is linked to a single and precisely defined operational condition – but remains acceptable (fit for its function) over a wide range of conditions. In fact, contrary to popular belief, robustness and optimality are mutually exclusive. Complexity-based design, i.e. a design process in which complexity becomes a design objective, opens new avenues for engineering. While optimal design leads to specialization and, consequently, to fragility outside of the portion of the design space in which the system is indeed optimal, complexity-based design yields intrinsically robust systems. The two paradigms may therefore be compared as follows:
  • Old Paradigm:  Maximize performance, while, for example, minimizing mass.
  • New Paradigm:  Reduce complexity accepting compromises in terms of performance.
A fundamental philosophical principle that sustains the new paradigm is L. Zadeh’s Principle of Incompatibility: high complexity is incompatible with high precision. The more complex something is, the less precise we can be about it. A few examples: the global economy, our society, the climate, traffic in a large city, the human body, etc., etc. What this means is that you cannot build a precise (FE) model of a highly sophisticated system. And it makes little sense to insist – millions of finite elements will not squeeze precision from where there isn’t any. Nature places physical limits on the amount of precision in all things. The implications are clear. Highly sophisticated and complex products and systems cannot be designed via optimization, precisely because they cannot be described with high precision. In fact, performance maximization (optimization) is an exercise in precision and this, as we have seen, is intrinsically limited by Nature. For this very reason, models must be realistic, not precise.
 












What Is The Relevance of Probability?

 


Probability is central to many concepts in economics and in science. In economics we run across probability when we compute VaR or PoD-type ratings, when we design portfolios, estimate expected returns, etc. But the concept of probability is extremely elusive. As Bertrand Russell said in 1929:

Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means.

Probability is a "tool" which is supposed to give us a flavor of the likelihood of future events. But what does this really mean? If an event A is said to have a probability of occurrence of 99%, does that mean it is imminent? Is it more imminent than an event B that has a chance of occurring of 95%? Does this guarantee that A will necessarily happen before B? Not at all. Both probabilities are in fact irrelevant. In our recent blog we illustrate how PoD ratings, for example, are flawed from a very basic physical perspective. But let us see why the concept of probability is also flawed:

First of all, when you state, for example, that in the past, out of 100 start-ups, 60 failed within the first three years, this is an a-posteriori statement of the obvious which says nothing of the future reserved for YOUR start-up. All start-ups are different and therefore you cannot throw them into one basket and run a statistic. Even though the number 60/100 is mathematically correct, it is irrelevant. You can, in principle, divide any two numbers in the Universe and attribute some value to them. Nobody can stop you.

Second. Imagine someone does indeed provide you with a probability of failure of your start-up and it happens to be, say, 95%. What does it mean? Does it mean that failure is imminent? Does it mean you only have two quarters left, after which you are out of business? What does the probability really mean for YOU? It means nothing. Suppose that, unknown to you, YOUR start-up will in fact fail on December 31st, 2013. The probability of that event is, evidently, 100%. If the company is going to fail, it will do so regardless of the probability of failure you may attach to it. Any probability which is not 100% is wrong. And that means infinitely many.

Events can only have a probability of 0 or 1. They happen or they don't. The concept of probability, in its current form, is not relevant, and its anecdotal character, which applies exclusively to the past, is erroneously used in making forward projections. Setting aside relativistic issues and quantum mechanics, there are no known laws of physics which can predict when a company will default. As a matter of fact, quantum mechanics can't even predict what a single elementary particle will do. But this IS the problem. You are interested in YOUR company, or the one you're investing in, not in populations of different companies.

A better concept than probability is that of resilience. Resilience can be computed based on physical quantities and it expresses something real and material. Just ask an engineer if he knows what the pendulum impact test is.





Reality is Made of Loops But All We See Is Straight Lines

 

We tend to think in terms of straight lines. It is difficult to abandon linear "Euclidean thinking", or visions of simple cause and effect, in a world governed by a multitude of positive/negative, often intersecting, feedback loops.

So it's all about networks. And networks of networks. When it comes to networks one immediately thinks in terms of transmitter and receiver nodes, hubs, or the shortest path from one node to another. However, networks also contain loops.

The cyclomatic number of a network (such as the one depicted below) is defined as C = L - N + 1, where L is the number of links (edges) and N is the number of nodes. It represents the number of independent loops, or cycles, which may be identified within a given network.

In the example below there are 126 links and 20 nodes, which means that C = 107! This gives an idea of how intricate the information flow can be even in such a small network of 20 nodes.
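
The cyclomatic number is trivial to evaluate; the sketch below checks the two figures quoted in this post (the 20-node map above and the EU map discussed further down).

```python
def cyclomatic_number(links, nodes):
    """Number of independent loops (cycles) in a connected network: C = L - N + 1."""
    return links - nodes + 1

print(cyclomatic_number(126, 20))       # 107 loops in the 20-node map
print(cyclomatic_number(45180, 648))    # 44533 loops in the EU map (Q1 2011)
```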




Below we illustrate two examples of such cycles which may be identified within the above map.




These are just two out of 107 possible loops! In the case of the entire European Union (comprising 27 countries, each described via 24 macro-economic parameters), which we analyze every quarter, there are a total of 648 nodes and 45180 edges (results from the Q1 2011 analysis). The map is shown below.

This leads to 44533 independent cycles in which information flows in a non-stationary and turbulent manner, sprinkled with a little bit of chaos and irrational human decisions! One then wonders, for example, what cause and effect mean in such a situation. How do you manage such a system? Can it be managed? If so, how? How can there be experts on such systems?

By the way, note how the system has evolved over the years to become almost critically complex and highly fragile - only a one-star rating!
 
 
 
 

Sunday 7 July 2013

Ratings - From an Opinion to Science

 
In January 2011 the Financial Crisis Inquiry Commission claimed that "The three credit rating agencies were key enablers of the financial meltdown". Failing investment banks and large corporations enjoyed investment-grade ratings days before collapse. Rating agencies claimed that ratings represent a mere opinion. And this is precisely the problem. Rating agencies represent a fundamental source of information for investors. Investors will evidently tend to invest in assets that are judged as investment-grade rather than junk. However, the First Amendment of the U.S. Constitution protects "publishers", guaranteeing them freedom of speech.

The original business model for the rating agencies, established when John Moody published the first publicly available ratings in 1909, was an “investor pays” model. Moody, and subsequently other rating agencies, sold thick “rating manuals” to bond investors. In the early 1970s the Big Three changed to an “issuer pays” business model, which means that an issuer of bonds pays fees to the rating agency that rates its bonds. This model continues today. This means plenty of things, conflict of interest being one.
 
Regardless of whether the "issuer pays" or the "investor pays" model is used, the problem is that in their current form ratings can be manipulated. You can steer a sophisticated Monte Carlo Simulation towards a desired result without any major problems. Stochastic simulation and stochastic calculus are so complicated, and there are so many Monte Carlo sampling methods, that very few people are able to question the result. Definitely not many investors.


But let's get to the core of the matter, to the main claim of this blog. Why is it that even the Big Three agencies don't always agree on the rating of a country or a corporation? There exist many rating methods. Each method is based on models which require assumptions (like "let's suppose that the value of real estate never goes down"), etc., etc. Just multiply the number of agencies by the number of computational methods... you get hundreds of combinations. Hundreds of ways of computing a rating. There should be one. In many sectors of industry, even in science, there exist standards and protocols. But finance is de-regulated. The lack of regulation, which the financial industry so fiercely opposes, is what enabled the folly which has collapsed the economy. The financial industry doesn't want regulation or limits to leverage and speculation. This is why there are so many ways of computing a rating which, with the current model, is a highly complex, subjective and therefore maneuverable instrument. Opinions can't be regulated.


A method of rating does not have to be perfect (is financial data 100% accurate?). But it has to be consistent. Serious science starts with consistent measurements. You can always improve a metric, make it more accurate, but you must use it consistently. If you don't then mathematics indeed becomes an opinion and we don't want that. 


Because measuring the Probability of Default (PoD) of a company is physically impossible - this is precisely why ratings can be manipulated so easily, because it is not physics - it is necessary to take a different and new look at ratings:


1. A rating should not estimate the PoD of a company but something based on the physical properties of a company seen as a dynamical system. A good candidate is resilience. Resilience is not an opinion, it is a physical property and may be computed based on Balance Sheet, Consolidated Income Statements and Cash Flow data which listed companies publish periodically. See here how this may be done.

2. Ratings cannot be verified by investors. Suppose company X gets a rating of ABC. How can you verify it? How do you know it is correct? You don't. The only way to trust the result is to do it yourself. This is simpler than you think:


go to http://www.rate-a-business.com


and run the analysis yourself. With data YOU trust.
 
3. Because the system is available on the web it is available to anyone. Anyone can download the same financial reports and verify your calculations. It becomes impossible to manipulate the results.

4. The algorithm to compute the resilience of a business is based on physics. It doesn't change.


There are many things that need to be done to fix the economy. One of them is to change the philosophy of ratings. Ratings need to be democratized; they must become a commodity. When this happens, rating agencies will become obsolete.




Rating the Rating Agencies - We've Rated Moody's.

 
Moody's is the largest of the Big Three rating agencies. It employs 4500 people worldwide and reported revenue of $2 billion in 2010. Since rating agencies have been under heavy fire since the start of the financial meltdown - in January 2011 the Financial Crisis Inquiry Commission claimed that "The three credit rating agencies were key enablers of the financial meltdown" - we have decided to actually rate one of them. We have chosen Moody's because today it is the largest rating agency.
However, in rating Moody's we have not rated its financial performance, its capacity to honor its financial obligations or its Probability of Default. In other words, we have not performed a conventional rating which, as we claim, is not relevant in a turbulent economy. What is more relevant in turbulent times is resilience - the capacity of a business to withstand and survive sudden and extreme events. In fact, our ratings actually measure the resilience of a business based on the structure of its financials.

For the analysis we have used our on-line self-rating system. Anybody can use this system to rate any company.

We have used information from Moody's Investor Relations page, available here. If anyone wishes to verify the results of our rating, it is possible to do so by simply downloading the financial information and processing it using the aforementioned self-rating system. The process, in other words, is fully transparent.

Since it is not the scope of this short blog to provide a thorough and detailed analysis, we will illustrate only the results based on the Balance Sheet data. We have, however, also analyzed the Consolidated Income and Cash Flow statements.

The following Balance Sheet entries have been used:


  • Cash and cash equivalents
  • Short-term investments
  • Accounts receivable, net of allowances
  • Deferred tax assets, net
  • Other current assets
  • Total current assets
  • Property and equipment, net
  • Prepaid pension costs
  • Computer software, net
  • Goodwill
  • Intangible assets, net
  • Deferred tax assets, net
  • Other assets
  • Total assets
  • Notes payable
  • Accounts payable and accrued liabilities
  • Commercial paper
  • Revolving credit facility
  • Current portion of long-term debt
  • Bank borrowings
  • Deferred revenue
  • Total current liabilities
  • Non-current portion of deferred revenue
  • Long-term debt
  • Notes payable
  • Deferred tax liabilities, net
  • Unrecognized tax benefits
  • Accrued income taxes
  • Other accrued and current liabilities
  • Unearned subscription income
  • Other liabilities
  • Total liabilities
  • Pension and postretirement benefits
  • Shareholders' deficit: preferred stock, par value
  • Shareholders' deficit: series common stock, par value
  • Shareholders' deficit: common stock, par value
  • Capital surplus
  • Accumulated deficit
  • Retained earnings
  • Treasury stock, at cost (shares of common stock at December 31)
  • Accumulated other comprehensive loss
  • Cumulative translation adjustment
  • Total Moody's shareholders' deficit
  • Noncontrolling interests
  • Minimum pension liability
  • Total shareholders' deficit
  • Total liabilities and shareholders' deficit

The corresponding Business Structure Map - which may be obtained here and examined interactively using MAPVIEW - is indicated below.





As the name suggests, the map represents the structure of the business as reflected, in this case, by its Balance Sheet. In the map one may identify dependencies between the various Balance Sheet entries. An intricate and inter-connected map points to a business that is difficult to manage and to understand. Information on how to interpret such maps may be found here.

If the structure of this map is resilient, then the business too is resilient. But let us see how resilient the structure of Moody's business really is:



On a scale of one to five stars, a two-star rating is obtained. This is because the business is highly complex - its complexity is 18.01 - which is quite close to the maximum sustainable complexity of 21.95. This means that the business cannot become much more complex than it already is today and, if it does, it will become unmanageable. In other words, the business is not well prepared to face sudden and extreme events, as it is approaching high levels of fragility. Furthermore, since the business is very close to its maximum sustainable complexity threshold, with the current business model Moody's cannot grow much more.
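
As a quick cross-check, plugging these two values into the complexity-based robustness measure discussed earlier in this blog, R = (C_cr - C)/C_cr, gives a distance from criticality of roughly 18%, which is consistent with the low rating. This assumes the same formula applies unchanged here:

```python
c, c_cr = 18.01, 21.95   # current and critical complexity from the analysis above
print(f"distance from criticality = {(c_cr - c) / c_cr:.0%}")   # roughly 18%
```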

Using Moody's rating scale, two stars corresponds to A3.

When a business functions in the proximity of its critical complexity (think of your cholesterol being close to the limit suggested by your cardiologist), it is important to know what is making the business complex. This information is portrayed by the Corporate Complexity Profile, which you may also examine using MAPVIEW. The Complexity Profile of Moody's is illustrated below:



The entries at the top of the chart are those responsible for the high complexity and hence for the low resilience of the business. Values are expressed in percentage terms. The fact that numerous entries make similar contributions (6-8%) points to a situation that is quite intricate and difficult to modify.

The above result poses the question: shouldn't raters have high ratings? Isn't someone who has the power to judge others supposed to set a good example? Would you trust a cardiologist who smokes while he examines your ECG?




DISCLAIMER: Structural resilience ratings do not constitute buy or sell advice.




Democratizing Ratings


 
Ratings lie at the very heart of the economy and yet, as the current crisis is showing, they are not entirely reliable. In fact:

  • The process of rating involves interaction with the recipient company. This means the process cannot be fully objective.
  • A further element of subjectivity stems from the fact that different rating agencies and rating methods exist. In fact, there exist examples of rating agencies differing when rating the same company. Moreover, even within the same rating agency, two analysts can come up with different ratings for the same company.
  • Ratings are not verifiable. Suppose a certain company is awarded a given rating. The result is made public. However, suppose that a competitor or an investor wants to question or verify the rating because he feels it is too generous. Today this is very difficult, not to say impossible.
  • Rating agencies possess and exercise enormous power.
Our vision of ratings is different. We think ratings should be democratized. This is how we see the future of ratings. In our view ratings should be:

  • Based on publicly available information. A good place to start are the financial statements that listed companies post on their websites. This information should progressively be made more comprehensive and complete. This will eliminate information asymmetry.
  • Repeatable. The same mechanism should be used for the generation of a rating. Different rating methods cause confusion and inject further uncertainty into an already turbulent economy.
  • Dynamic. In a turbulent and global economy ratings should be issued not once a year but with a higher frequency, e.g. every quarter, in sync with today's rapidly changing economy.
  • Verifiable. Any individual investor should be able to use the publicly available data on a given listed company and verify the rating by using the same publicly available mechanism.
  • Objective. The process of rating calculation should not involve elements of subjective judgement.
  • Affordable. The rating mechanism should be available on-line and be affordable to even SMEs so that they can rate themselves and share the result with their banks.
Ratings, in other words, should become a commodity.




Keynes vs Friedman - What is the Mix Today?



There is plenty of discussion nowadays as to whether or not to bail out. This short blog does not have the objective of adding support to one option over the other. The goal is to show the mix of today's economy in terms of Keynesian and Friedmanian models.

The two opposing versions of the economy may be epitomized (albeit simplistically) by the visions of Keynes and Friedman: the state and regulation versus a de-regulated, shock-therapy philosophy. According to Wikipedia:

"Keynesian economics argues that private sector decisions sometimes lead to inefficient macroeconomic outcomes and, therefore, advocates active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilize output over the business cycle."

"Friedman rejected the use of fiscal policy as a tool of demand management; and he held that the government's role in the guidance of the economy should be restricted severely. Friedman also argued for the cessation of government intervention in currency markets, thereby spawning an enormous literature on the subject, as well as promoting the practice of freely floating exchange rates."

We have recently analyzed the complexity of the World based on data from the World Bank (see our previous blogs). We know that for every system (such as the World's economy) complexity ranges from a minimum value to an upper bound, known also as critical complexity. In the proximity of the lower bound things are predictable, controllable, full of deterministic rules - like a watch movement. Close to critical complexity things are totally different: the dynamics are governed by chaos and uncertainty and are essentially highly turbulent - like a tornado or a storm. We could, ideally, place the models of Keynes and Friedman at these two extremes.

Setting aside the numbers, the situation today looks more or less like this:







The above result allows us to state that the global economy today is approximately 80% Friedmanian and 20% Keynesian. What is the optimal mix? Is there an optimal mix? No, there is no such thing as optimality in a dynamic, changing and turbulent setting. However, there exist many acceptable compromises, and these depend on each single player in the market, on his objectives and constraints. The point is that, given the current economic situation, a decision must be made as to which direction to seek the cure in: more Friedman or more Keynes?