Monday 29 July 2013

Complexity? It's All Relative.





 
Complexity is a measure of the total amount of structured information (measured in bits) contained within a system, and it reflects many of the system's fundamental properties, such as:

  • Potential - the ability to evolve and survive
  • Functionality - the set of distinct functions the system is able to perform
  • Robustness - the ability to function correctly in the presence of endogenous/exogenous uncertainties

In biology, the above can be combined into a single property known as fitness.

Like any mathematically sound metric, our complexity metric is bounded (metrics that can attain infinite values are generally not very useful). The upper bound, which is of great interest, is called critical complexity and tells us how far the system can go with its current structure.

Because of the existence of critical complexity, complexity itself is a relative measure. This means that statements such as "this system is very complex, that one is not" are of little value until complexity is referred to its corresponding bounds. Each system in the Universe has its own complexity bounds, in addition to its current complexity value. Because of this, a small company can, in effect, be relatively more complex than a large one, precisely because it operates closer to its own complexity limit. Let us see a hypothetical example.

Imagine two companies: one is very large, the other small. Suppose each one operates in a multi-storey building and that each one is hiring new employees. Imagine also that the small company has reached the limit in terms of office space while the larger company is constantly adding new floors. This is illustrated in the figure below.





In this hypothetical situation, the smaller company has reached its maximum capacity and adding new employees will only make things worse. It is critically complex and, with its current structure, it cannot grow - it has reached its physiological growth limit and can do one of two things:

  • "Add more floors" (this is equivalent to increasing its critical complexity - one way to achieve this is via acquisitions or mergers)
  • Restructure the business
If a growing business doesn't increase its own critical complexity at the appropriate rate, it will reach a state of "saturation". If you "add floors" at a rate that is not high enough, the business will become progressively less resilient and will ultimately reach a point at which it cannot function properly, let alone face extreme events.
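As a minimal numerical sketch of this "relative" reading of complexity (the figures below are invented purely for illustration, not Ontonix measurements), it is the ratio of current complexity to critical complexity that matters, not the raw value:

```python
def relative_complexity(complexity, critical_complexity):
    """Toy ratio: how close a system is to its own complexity limit."""
    return complexity / critical_complexity

# Hypothetical figures for a large and a small company
print(relative_complexity(64.0, 120.0))  # large company: about 0.53 -> plenty of headroom
print(relative_complexity(18.0, 20.0))   # small company: 0.90 -> close to saturation
```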

Complexity is a disease of our modern times (a bit like high cholesterol, which is often a consequence of our lifestyles). Globalisation, technology and uncertainty in the economy are making life more complex and are increasing the complexity of businesses themselves. An apparently healthy business may hide (but not for long!) very high complexity. Just as very high cholesterol levels are rarely a good omen, the same may be said of high complexity. This is why companies should run a complexity health-check on a regular basis.

So, the next time you hear someone say that something is complex, ask them about critical complexity. It's all relative!

Complexity: A Link Between Science and Art?



Serious science starts when you begin to measure. According to this philosophy we constantly apply our complexity technology in attempts to measure entities, phenomena and situations that so far haven't been quantified in rigorous scientific terms. Of course we can always apply our subjective perceptions of the reality that surrounds us to classify and rank, for example, beauty, fear, risk, sophistication, stress, elegance, pleasure, anger, workload, etc. Based on our perceptions we make decisions, we select strategies, we make investments. When it comes to actually measuring certain perceptions, complexity may be a very useful proxy.

Let's consider, for example, art. Suppose that we wish to measure the amount of pleasure resulting from the contemplation of a work of art, say a painting. We can postulate the following conjecture: the pleasure one receives when contemplating a work of art is proportional to its complexity. This is of course a simple assumption, but it will suffice to illustrate the main concept of this short note. Modern art often produces paintings which consist of a few lines or splashes on a canvas. You just walk past. When, instead, you stand in front of a painting by, say, Rembrandt van Rijn, you experience awe and admiration. Now why would that be the case? Evidently, painting something of the calibre of The Night Watch is not a matter of taking a spray gun and producing something with the aid of previously ingested chemical substances. Modern "art" versus a masterpiece. Minutes of delirium versus years of hard work. Splashes versus intricate details. Fine, but how do you actually compare them?

We have measured the complexity of ten paintings by Leonardo da Vinci and Rembrandt. The results are reported below without further comments.

Leonardo and Rembrandt





Sunday 28 July 2013

Why is Resilience (in economics) such a difficult concept to grasp?


Resilience, put in layman's terms, is the capacity to withstand shocks, or impacts. For an engineer it is a very useful characteristic of materials, just like Young's modulus, the Poisson ratio or the coefficient of thermal expansion. But high resilience doesn't necessarily mean high performance, or vice versa. Take carbon fibres, for example. They can have a Young's modulus of 700 gigapascals (GPa) and a tensile strength of 20 GPa, while steel has a Young's modulus of around 200 GPa and a tensile strength of 1-2 GPa. And yet, carbon fibres (as well as alloys with a high carbon content) are very fragile, while steel is, in general, ductile. Basically, carbon fibres have fantastic performance in terms of stiffness and strength but respond very poorly to impacts and shocks.

What has all this got to do with economics? Our economy is extremely turbulent (and this is only just the beginning!) and chaotic, which means that it is dominated by shocks and, sometimes, by extreme events (like the unexpected failure of a huge bank or corporation, the default of a country which needs to be bailed out, like Ireland, Greece or Portugal, or natural events such as tsunamis). Such extreme events send shock waves into the global economy which, by virtue of its interconnectedness, propagates them very quickly. This can cause problems for numerous businesses even on the other side of the globe. Basically, the economy is a super-huge, dynamic and densely interconnected network in which the nodes are corporations, banks, countries and even single individuals (depending on the level of detail we are willing to go to). It so happens that today, very frequently, bad things happen at the nodes of this network. The network is in a state of permanent fibrillation. It appears that the intensity of this fibrillation will increase, as will the number of extreme events. Basically, our global economy will become more and more turbulent.

By the way, we use the word "turbulence" with nonchalance, but it is an extremely complex phenomenon in fluid dynamics with very involved mathematics behind it - luckily, people somehow get it. And that's good. What is not so good is that people don't get the concept of resilience. And resilience is a very important concept not just in engineering but also in economics, because in turbulence it is high resilience that may mean the difference between survival and collapse. High resilience can in fact be seen as a sort of stability. It is not necessary to have high performance to be resilient (or stable). In general, these two attributes of a system are independent. To explain this difficult (some say counter-intuitive) concept, consider Formula 1 cars: extreme performance, for very short periods of time, extreme sensitivity to small defects with, often, extreme consequences. Sometimes it is better to sacrifice performance and gain resilience, but this is not always possible. In Formula 1 there is no place for compromise. Winning is the only thing that counts.

But let's get back to resilience versus performance and try to reinforce the fact that the two are independent. Suppose a doctor analyzes blood and concentrates on the levels of cholesterol and, say, glucose. You can have the following combinations (this is of course a highly simplified picture):

Cholesterol: high, glucose: low
Cholesterol: low, glucose: high
Cholesterol: low, glucose: low
Cholesterol: high, glucose: high

You don't need to have high cholesterol to have high glucose concentration. And you don't need to have low glucose levels to have low levels of cholesterol.

Considering, say, the economy of a country, we can have the following conditions:

Performance: high, resilience: low
Performance: low, resilience: high
Performance: low, resilience: low
Performance: high, resilience: high

Just because the German economy performs better than that of many other countries, it doesn't mean it is also more resilient. This is certainly not intuitive, but there are many examples in which simplistic linear thinking and intuition fail. Where were all the experts just before the sub-prime bubble exploded?




www.ontonix.com




Measuring the Complexity of Fractals



We don't need to convince anyone of the importance of fractals to science. The question we wish to address is how to measure the complexity of fractals. When it comes to more traditional shapes, geometries or structures such as buildings, plants, works of art or even music, it is fairly easy to rank them according to what we perceive as intricacy or complexity. But when it comes to fractals the situation is a bit different. Fractals contain elaborate structures of immense depth and dimensionality that are not easy to grasp by simple visual inspection or intuition.

We have used OntoNet to measure the complexity of the two fractals illustrated above. Which of the two is more complex? And by how much? The answer is the following:

Fractal on the left - complexity = 968.8
Fractal on the right - complexity = 172.9

This means that the first fractal is about 5.6 times more complex. At first sight this may not be obvious, as the image on the right appears to be more intricate, with much more local detail. However, the image on the left presents more global structure, hence it is more complex. The other image is more scattered, with smaller local details, and globally speaking it is less complex. This means that it transmits less structured information, which is precisely what complexity quantifies. Finally, below we illustrate the complexity map of the fractal on the left-hand side.
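As an aside: OntoNet's structured-information measure is proprietary, but for a feel of how fractal intricacy can be quantified at all, here is a sketch of a classical alternative - the box-counting dimension - applied to a binary image of a fractal. The function name, box sizes and thresholding step are illustrative assumptions, not part of OntoNet.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting (fractal) dimension of a binary image.
    img: 2-D boolean array, True where the fractal is 'drawn'."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())  # boxes containing at least one 'on' pixel
    # Slope of log(count) versus log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Usage (hypothetical file name): binarise a grayscale image of the fractal first
# img = plt.imread("fractal_left.png").mean(axis=2) > 0.5
# print(box_counting_dimension(img))
```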





www.ontonix.com


Saturday 27 July 2013

Is it progress if a cannibal uses knife and fork?





The North American XB-70 Valkyrie was a (beautiful!) tri-sonic bomber developed in the early sixties. It was, even by today's standards, an exceptionally sophisticated and advanced machine. What is striking is the short time it took to develop and build - we intentionally omit the information here since, by today's standards, it would make many (aerospace) engineers blush. Why is it that in the days of slide-rules they could beat today's supercomputers and all the other technological goodies? A few reasons are:
  • One company did everything - there was no useless geographical dispersion, management was simpler.
  • Complex systems have been successfully built even though complexity was not a design goal - the reason is that product development and manufacturing were much less complex.
  • Engineers were better trained than today - they understood mechanics better than today's youngsters.
  • The average age of engineers was much higher than today.
  • Companies had clearer roadmaps and were more motivated - today it's all about shareholder value, not about building great planes.
  • Aerospace companies were run by people who understood the business.
  • Designers didn't have to struggle with super-complex super-huge software systems.
  • Companies were profitable (because they were run by people who understood the business), hence were not forced to squeeze every penny out of sub-contractors, causing them to deliver worse results.
  • Because of the above, companies could do plenty of R&D - today, R&D is where (incompetent) management make the first cost cuts.
Today, highly complex products are engineered without taking complexity into account. When this is coupled with extremely complex and dispersed multi-cultural manufacturing, assembly, procurement, design and management issues, you run into trouble if you don't keep complexity under control. It is not surprising that TODAY people doubt that man ever went to the Moon! In fact, those who make such claims cannot conceive of so complex a project being viable, because of their own poor preparation.








Friday 26 July 2013

Do We Really Understand Nature?




According to the Millennium Project, the biggest global challenges facing humanity are those illustrated in the image above. The image conveys a holistic message which some of us already appreciate: everything is connected with everything else. The economy isn't indicated explicitly in the image but, evidently, it's there, just as are industry, commerce, finance, religion, etc. Indeed, a very complex scenario. The point is not to list everything but merely to point out that we live in a highly interconnected and dynamic world. We of course agree with the above picture.

As we have repeatedly pointed out in our previous articles, under such circumstances:
  • it is impossible to make predictions - in fact, even the current economic crisis (of planetary proportions) was not forecast
  • only very rough estimates can be attempted
  • there is no such thing as precision
  • it is impossible to isolate "cause-effect" statements as everything is linked
  • optimization is unjustified - one should seek acceptable solutions, not pursue perfection
The well-known Principle of Incompatibility states, in fact, that "high precision is incompatible with high complexity". However, this fundamental principle, which applies to all facets of human existence, as well as to Nature, goes unnoticed. Neglecting the Principle of Incompatibility constitutes a tacit and embarrassing admission of ignorance. One such example is that of ratings. While the concept of a rating lies at the very heart of our economy and is, in principle, a necessary concept and tool, something is terribly wrong. A rating, as we know, measures the Probability of Default (PoD). Ratings are stratified according to classes. One example of such classes is shown below:

Class    PoD
1        ≤ 0.05%
2        0.05% - 0.1%
3        0.1% - 0.2%
4        0.2% - 0.4%
5        0.4% - 0.7%
6        0.7% - 1.0%
etc.

A rating affects the way the stock of a given company is traded - this is precisely its function. What is shocking in the above numbers, however, is the precision. A PoD of 0.11% puts a company in class 3, while a PoD of 0.099% puts it in class 2. How can this be so? Isn't the world supposed to be a highly complex system? Clearly, if even a crisis of planetary proportions cannot be forecast, this not only points to high complexity (see the Principle of Incompatibility) but it also says a lot about all the Business Intelligence technology that is used in economics, finance, management and decision making. So, where does all this precision in ratings come from? From a parallel virtual universe of equations and numbers in which everything is possible but which, unfortunately, does not map well onto reality. The understanding of the real universe cannot be based on a parallel virtual universe which is incorrect.
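A minimal sketch of such a stratification, using the illustrative thresholds from the table above (not those of any real rating agency), shows how a tiny shift in an inherently imprecise estimate changes the class:

```python
def pod_to_class(pod_percent):
    """Map a Probability of Default (in %) to a rating class,
    using the illustrative thresholds from the table above."""
    uppers = [0.05, 0.1, 0.2, 0.4, 0.7, 1.0]  # upper bounds of classes 1..6
    for cls, upper in enumerate(uppers, start=1):
        if pod_percent <= upper:
            return cls
    return len(uppers) + 1  # "etc." - classes beyond 6

# A difference of 0.011 percentage points changes the rating class:
print(pod_to_class(0.110))  # 3
print(pod_to_class(0.099))  # 2
```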

The above example of PoD stratification reflects very little understanding of Nature and of its mechanisms. In fact, economic crises of global proportions happen suddenly. As Aristotle wrote in his Nicomachean Ethics, an educated mind is distinguished by the fact that it is content with the degree of accuracy which the nature of things permits, and by the fact that it does not seek exactness where only approximation is possible.








 

Driving Complexity-To-Target: Application to Portfolio Design





Driving the complexity of a given system to a prescribed target value has numerous applications, ranging from engineering (who wouldn't want a simpler design that performs according to specs?) to management, advanced portfolio design, wealth management or investment strategy.

But beyond complexity itself, it is the robustness of systems that is of greatest concern. When considering portfolios, both diversification and volatility are of concern.

We know that in system design (and this applies to portfolios) the mini-max principle, whereby you maximise something (e.g. the expected return) while at the same time minimising something else (e.g. risk), leads to inherently fragile solutions. Taking many things to the limit simultaneously is of course possible, but the price one pays is a rigid and fragile solution: you basically push yourself into a very tight corner of the design space where you have little room for manoeuvre in case things go wrong. And things do go wrong. Especially if you think that most things in life are linear and follow a Gaussian distribution, you should prepare yourself for a handful of surprises.

Portfolio diversification and design can be accomplished differently based on complexity and, in particular, on these two simple facts:


  • High complexity increases exposure - a less complex portfolio is better than a more complex one.
  • A less complex portfolio accomplishes better diversification (more or less along the lines of MPT and the Markowitz logic).
Let us see an example. Suppose you want to build a portfolio based on the Dow Jones Industrial Average Index and its components. Without going into unnecessary technicalities, below is an example of our first portfolio. We observe that:

Its complexity is 64.3 (pretty close to the critical value of 68.75)
Entropy is 823
Robustness is 66.8%
Rating: 2 stars

Nothing to celebrate.





Suppose now that you wish to increase the robustness to, say, 85%. Using our Complexity-To-Target Technology it is possible to "force" the robustness of the portfolio to this target value. Since robustness and complexity are linked it is possible to do this either for robustness or complexity or even both. The new portfolio is illustrated below.



Complexity is now 50.9
Entropy is 542 - this tells us that the behaviour of the portfolio is substantially more predictable
Robustness is 84.9%
Rating: 4 stars

The hubs of the portfolio (red discs) have now changed but that is another matter.
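Complexity-To-Target itself is proprietary Ontonix technology, but the general pattern of driving a portfolio metric towards a prescribed value can be sketched as follows. The "robustness" proxy, the synthetic correlation matrix and the update rule below are invented purely for illustration - they are not the OntoSpace definitions.

```python
import numpy as np

def robustness_proxy(weights, corr):
    """Toy robustness proxy: one minus the portfolio's absolute-correlation
    exposure (NOT the OntoSpace robustness measure)."""
    return 1.0 - weights @ np.abs(corr) @ weights

def drive_to_target(weights, corr, target, steps=500, lr=0.01):
    """Nudge the portfolio weights until the proxy reaches the target value."""
    w = weights.copy()
    for _ in range(steps):
        r = robustness_proxy(w, corr)
        if r >= target:
            break
        # Crude finite-difference gradient of the proxy w.r.t. each weight
        grad = np.array([(robustness_proxy(w + lr * e, corr) - r) / lr
                         for e in np.eye(len(w))])
        w = np.clip(w + lr * grad, 0.0, None)   # step uphill, keep weights >= 0
        w /= w.sum()                            # re-normalise to sum to 1
    return w

# Synthetic correlation matrix: 15 highly correlated assets, 15 weakly correlated ones
n = 30
corr = np.full((n, n), 0.05)
corr[:15, :15] = 0.6
np.fill_diagonal(corr, 1.0)

w0 = np.ones(n) / n                             # equal-weight starting portfolio
w_new = drive_to_target(w0, corr, target=0.85)
print(round(robustness_proxy(w_new, corr), 3))  # proxy driven up towards the target
```

Unsurprisingly, the toy procedure shifts weight away from the strongly correlated block of assets, which is the crude analogue of what reducing complexity and increasing robustness achieves in the example above.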




www.ontonix.com


www.rate-a-business.com



Thursday 25 July 2013

Software Complexity and What Brought Down AF447


After the recovery of the black boxes from the ill-fated Air France flight 447, it has been concluded that pilot error, coupled with Pitot-tube malfunction, were the major causes of the tragedy. It appears, however, that this is yet another "loss of control" accident. Based on black-box data, the aircraft stalled at very high altitude. But you cannot stall an A330. By definition. The airliner (like many other fly-by-wire aircraft) is packed with software to such an extent that it won't let you stall it even if you wanted to commit suicide. That's the theory. But in reality, you don't fly an airliner - you fly the software. The degree of automation is phenomenal. That is precisely the problem.
Pilots say that they have become button pushers. Here are some comments on the AF447 accident taken verbatim from a Professional Pilots blog:


"We need to get away from the automated flight regime that we are in today."

"Pilots must be able to fly. And to a better standard than the autopilot!"

"To be brutally honest, a great many of my co-pilot colleagues could NOT manage their flying day without the autopilot. They would be sorely taxed."

"It will cost a lot of money to retrain these 'button pushers' to fly again, ..."

"It appears as if the sheer complexity of the systems masked the simplicity of what was really going on. "

"Just so I understand correctly, then there is no way to take direct control of the aircraft unless the computer itself decides to let you, or perhaps more correctly stated, decides you should. Sounds like Skynet in "The Terminator". "


This accident is a very complex one. It is not going to be easy to understand why the plane really came down. It will take time to analyse the data thoroughly and to understand why highly trained pilots pulled the nose up when the stall alarm went off. The theory is that they must have received a large volume of highly confusing information in order to do so. Apparently, they managed to crash a flyable aircraft.



We have our own view as to the nature of the problem, not as to its cause. We believe that it is the excessive complexity of the system that is to blame. Modern aircraft carry over 4 million lines of code. That is a huge amount of real-time code. The code, organised into modules, runs in a myriad of modes: "normal law", "alternate law", "approach", "climb", etc. The point, however, is this. No matter what system you are talking about, high complexity manifests itself in a very unpleasant manner - the system is able to produce surprising behaviour. Unexpectedly. In other words, a highly complex system can suddenly switch mode of behaviour, often due to minute changes in its operating conditions. When you manage millions of lines of code and, in addition, you feed the system faulty measurements of speed, altitude, temperature, etc., what can you expect?

Is it possible to analyse the astronomical number of conditions and combinations of parameters that a modern autopilot is ever going to have to process? Of course not. The more sophisticated a SW module is - number of inputs, outputs, IF statements, GOTOs, reads, writes, COMMON blocks, lines of code, etc. - the more surprises it can potentially deliver. But how can you know if a piece of SW is complex or not? Size is not sufficient. You need to measure its complexity before you can say that it is highly complex. We have a tool to do precisely that - OntoSpace. It works like this. Take a SW module like the one depicted below.

It will have a certain number of entry points (inputs) and produce certain results (outputs). The module is designed based on the assumption that each input will lie within certain (min and max) bounds. The module is then tested in a number of scenarios. Of great interest are "extreme" conditions, i.e. situations in which the module (and the underlying algorithms) and, ultimately, the corresponding HW system are "under pressure". The uneducated public - just like many engineers - believe that the worst conditions are reached when the inputs take on extreme (min or max) values. This is not the case. Throw hundreds of thousands or millions of combinations of inputs at your SW module - you can generate them very efficiently using Monte Carlo simulation techniques - and you will see extreme conditions which do not involve end values of the inputs emerge by the dozens. Once you have the results of a Monte Carlo sweep, just feed them into OntoSpace. An example with 6 inputs and 6 outputs is shown below.
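As a minimal sketch of such a sweep (the stand-in module, its input bounds and the plain uniform sampling below are invented for illustration - they are not the actual module of the example, and OntoSpace's internals are not reproduced here):

```python
import numpy as np

def module_under_test(x):
    """Stand-in for a SW module with 6 inputs and 6 outputs
    (purely hypothetical - replace with the real routine)."""
    a, b, c, d, e, f = x
    return np.array([a + b, a * c, np.sin(d), b - e, c * f, d + e * f])

# Design bounds for the 6 inputs (hypothetical values)
lo = np.array([0.0, 0.0, -1.0, -np.pi, 0.0, 0.0])
hi = np.array([1.0, 2.0,  1.0,  np.pi, 5.0, 1.0])

rng = np.random.default_rng(0)
n_samples = 100_000

# Monte Carlo sweep: uniform samples inside the design bounds.
# (The post mentions Updated Latin Hypercube Sampling; plain uniform
# sampling is used here only to keep the sketch short.)
inputs = rng.uniform(lo, hi, size=(n_samples, 6))
outputs = np.array([module_under_test(x) for x in inputs])

# The resulting (input, output) table is what one would feed into a
# complexity-measurement tool such as OntoSpace.
samples = np.hstack([inputs, outputs])
print(samples.shape)  # (100000, 12)
```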

The module, composed of four blocks (routines), has been plugged into a Monte Carlo loop (Updated Latin Hypercube Sampling has been used to generate the random values of the inputs). As can be observed, the module obtains a 5-star complexity rating. Its complexity is 24.46. The upper complexity bound - the so-called critical complexity - is equal to 34.87. In the proximity of this threshold the module will deliver unreliable results. Both of these complexity values should be specified on the back of every SW DDD or ADD (Detailed Design Document and Architectural Design Document). So, this particular module is not highly complex. The idea, of course, is simply to illustrate the process and to show a Complexity Map of a SW module. In other words, we know how to measure the complexity of a piece of SW and to measure its inclination to misbehave (robustness).
 

But how complex is a system of 4 million lines of code? Has anyone ever measured that? Or its capacity to behave in an unexpected manner? We believe that the fate of AF447 was buried in the super-sophisticated SW which runs modern fly-by-wire airliners and which has the hidden and intrinsic ability to confuse highly trained pilots. You simply cannot, and should not, design highly sophisticated systems without keeping an eye on their complexity. Imagine purchasing an expensive house without knowing what it really costs, or embarking on a long journey without knowing how far you will need to go. If you design a super-sophisticated system and you don't know how sophisticated it really is, it will one day turn its back on you. It sounds a bit like buying complex derivatives and seeing them explode (or implode!) together with your favourite bank. Sounds familiar, doesn't it?



www.ontonix.com


www.design4resilience.com



Wednesday 24 July 2013

Model-free methods - a new frontier of science



When we make decisions, or when we think, our brain does not use any equations or math models. Our behaviour is the fruit of certain hard-wired instincts and of experience acquired during our lives and stored as patterns (or attractors). We sort of "feel the answer" to problems, no matter how complex they may seem, without actually computing the answer. How can that be? How can a person (not to mention an animal) who has no clue about mathematics still be capable of performing fantastically complex functions? Why doesn't the brain, with its immense memory and computational power, store some basic equations and formulae and use them when we need to make a decision? Theoretically this could be perfectly feasible. One could learn equations and techniques and store them in memory for better and more sophisticated decision-making. We all know that in reality things don't work like that. So how do they work? What mechanisms does the brain use, if not math models? In reality the brain uses model-free methods. In Nature there is nobody to architect a model for you. There is no mathematics in Nature. Mathematics and math models are an artificial invention of man. Nature doesn't need to resort to equations or other analytical artifacts. These have been invented by man, but this doesn't mean that they really do exist. As Heisenberg put it, what we see is not Nature but Nature exposed to our way of questioning her. If we discover that "F = m * a", that doesn't mean that Nature actually computes this relationship each time a mass is accelerated. The relationship simply holds (until somebody disproves it).

Humans (and probably also animals) operate on the basis of inter-related fuzzy rules which can be organised into maps, such as the one below. The so-called Fuzzy Cognitive Maps are made of nodes (bubbles) and links (arrows joining the bubbles). These links are built and consolidated by the brain as new information linking pairs of bubbles is presented to us and becomes verifiable. Let's take highway traffic (see the map below). A baby, for example, doesn't know that "bad weather increases traffic congestion". It is a conclusion you arrive at once you've been there yourself a few times. The rule gets crystallised and remains in our brain for a long time (unless, sometimes, alcohol dissolves it!). As time passes, new rules may be added to the picture until, after years of experience, the whole thing becomes a consolidated body of knowledge. In time, it can undergo adjustments and transformations (e.g. if new traffic rules are introduced) but the bottom line is the same. There is no math model here. Just functions (bubbles) connected to each other in a fuzzy manner, the weights being the fruit of the individual's own experience.


 


As a person gains experience, the rules (links) become stronger but, as new information is added, they can also become fuzzier. This is the main difference between a teenager and an adult. For young people - who have very few data points on which to build the links - the rules are crisp (through two data points a straight line passes, while it is difficult for 1000 points to form a straight line - they will more probably form something that looks like a cigar). This is why many adults don't see the world as black or white and why they tend to ponder their answers to questions. Again, the point is that there is no math model here. Just example-based learning which produces sets of inter-related Fuzzy Cognitive Maps that are stored in our memory. Clearly, one may envisage attaching a measure of complexity to each such map.
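As a minimal numerical sketch of how such a map can be iterated (the nodes, link weights and update rule below are illustrative assumptions, not taken from the figure or from OntoSpace):

```python
import numpy as np

# Node order: [bad_weather, congestion, accident_risk, driver_stress]
# Link weights are invented for illustration; in a real FCM they come
# from accumulated experience, not from a mathematical model.
W = np.array([
    # weather congestion accidents stress
    [0.0,     0.7,       0.4,      0.2],   # bad weather worsens the other three
    [0.0,     0.0,       0.3,      0.6],   # congestion raises accidents and stress
    [0.0,     0.5,       0.0,      0.5],   # accidents feed back into congestion
    [0.0,     0.2,       0.3,      0.0],   # stressed drivers cause more of both
])

def step(state, W, lam=1.0):
    """One FCM update: propagate activations along the weighted links
    and squash the result back into [0, 1] with a sigmoid."""
    return 1.0 / (1.0 + np.exp(-lam * (state + state @ W)))

state = np.array([0.9, 0.1, 0.1, 0.1])  # scenario: bad weather sets in
for _ in range(20):                      # iterate until the map settles
    state = step(state, W)
print(np.round(state, 2))                # steady-state activation of each node
```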

OntoSpace, our flagship product, functions in a similar manner. It doesn't employ math models in order to establish relationships between the parameters of a system or a process. Essentially, it emulates the functioning of the human brain. 








Is it Possible to Make Predictions?



Predicting the future has always been man's dream. However, there is an overwhelming amount of physical evidence that this is quite impossible. This is because the future is permanently under construction. As every second passes, the future changes. The cause of this lies in the laws of physics. If the future were predictable, in all likelihood we would have different physical laws and life would probably not even exist.

But man is a stubborn species. The unhealthy desire to predict the future has pushed mathematicians to devise utterly unnatural methods which, by virtue of prolonged and often distorted use, are now deeply rooted in the practices of virtually all spheres of social life. Scientists speak of predictive models, just as economists and weather forecasters do. Some people believe in horoscopes while others buy lucky lottery numbers.

Much of the contemporary "predictive machinery" is based on statistics - looking back in time, building some model of what has actually happened, and extrapolating into the future. The concept of probability plays a central role here. Bertrand Russell is known to have said, back in 1929, that "probability is the most important concept in modern science, especially as nobody has the slightest notion what it means". In fact, probability is not a physical entity and it is not subject to any laws in the strict scientific sense. As a matter of fact, there are no laws of probability. If a future event is going to take place, it will do so irrespective of the probability that we may have attached to it. Once an extremely unlikely event has happened, its probability of occurrence is already 100%.

Predictions are of major interest in the realm of uncertainty. Clearly, one can predict with a high degree of accuracy when an object will hit the ground when it is dropped from a certain height (provided it is not a feather). What we are more concerned with is the desire to predict phenomena and events of interest to economists, investors, managers or politicians.
But there is another problem with predictions. Suppose you do indeed know with certainty that an event of interest to you will happen at a specific time in the future. You will surely take action based on that knowledge. What this can cause, however, is a change in the chain of events such that you inevitably alter that event. As an example, suppose that you are extremely wealthy and that you know the exact value of certain stocks some months in advance. You will immediately start to buy and/or sell massive amounts of these stocks. This will surely cause other investors to react. Inevitably, the flow of events will be such that the predicted values of these stocks will not be the ones you knew with "certainty". What does this mean? It means that you can only verify a prediction if you do nothing. The moment you act based on your knowledge of the future, you automatically alter it and the prediction cannot be verified. Consequently, the phrase "predictive model" is an oxymoron. As mentioned, because of the way the laws of physics work, the future is permanently under construction. And if you add Gödel to the picture ... The Creator is indeed very smart!

So, it seems that our efforts to devise some sort of predictive analytical machinery are futile. The current planetary meltdown of the economy eloquently underscores this fact. The severity and depth of this crisis were not predicted (had they been, we would have taken measures, right?) and this speaks volumes about the quality of contemporary economic and econometric models and their predictive capability. With all due respect, their predictive capability is not too exciting.

In actual fact, we still don't really understand the crisis and its multiple causes. But how can one speak of predicting phenomena which are poorly understood? Shouldn't we change the order of things? Shouldn't we first try to better understand the dynamics of highly complex, interconnected and turbulent economic systems, and devote fewer resources to fortune telling and high-tech circle-squaring? How about:

  • Taking a holistic view of things, analysing systems of entities not single entities.
  • Searching for recognisable patterns, not repeatable details. The closer you look the less you see!
  • Moving from sophisticated (and subjective) models to model-free approaches.
  • Developing a new kind of maths, which is less "digital" and closer to reality.
  • Dedicating more effort to understanding the way things work, the way Nature works.

What cannot be achieved should not be pursued. Our efforts and resources should be focused on real problems that admit real solutions. Omnis ars naturae imitatio est - all art is an imitation of Nature.








 

FREE account for Measuring Business Complexity and Rating






If you wish to measure the complexity of your business, or assess its Resilience Rating, just follow these instructions:


1. go to http://www.rate-a-business.com/index.php

2. login as User: freerating    Pwd: freerating


Don't forget to read the short tutorial!




www.ontonix.com

Monday 22 July 2013

Beyond the concepts of Risk and Risk Management






The current economic crisis indicates that conventional risk assessment, rating and management techniques don't perform well in a turbulent and global environment. AAA-rated companies and banks have suddenly failed, demonstrating not only the limitations of risk management techniques but also the need to re-think the expensive and sophisticated Business Intelligence and Corporate Performance Management infrastructure that modern corporations have relied on. But what are the origins of the financial meltdown that is spilling over into the real economy? Why is the economy increasingly fragile? We identify three main causes: excessively complex financial products, globalised financial markets that lack regulation, and the use of subjective computational models that are naturally limited to less turbulent scenarios.


Models are only Models. No matter how sophisticated, a model is always based on a series of assumptions. More sophistication means more assumptions. Classical risk evaluation models, because of their subjective nature, are inherently unable to capture the unexpected and pathological events that have punctuated human history, not to mention the economy. But there is more. Conventional Business Intelligence is unable to cope with the hidden complexity of a modern global corporation precisely because it thrives on unrealistic mathematical models. Once defined, a model is condemned to deliver only what has been hard-wired into its formulation. A further difficulty in analysing our inherently turbulent economy and, more specifically, financial instabilities, lies in the fact that most crises manifest themselves in a seemingly unique manner. Life very rarely follows a Gaussian distribution and the future is constantly under construction.

  

Excessively complex financial products have spread hidden risks to every corner of the globe. Their degree of intricacy is such that they are often beyond the control of those who have created them. Derivatives of derivatives of derivatives …. The speculative use of such products creates an explosive mixture. Because of the global nature of our economy, and due to its spectacular degree of interconnectedness, such products are an ideal vehicle for creating and transmitting uncertainty.


Uncertain and global economy. It is because of the laws of physics that our economy is increasingly uncertain, unstable and interconnected. This means that it is becoming increasingly complex and turbulent. Conventional methods that rely on mathematical models are unable to capture and embrace this complexity, not to mention predict crises. The increase of complexity is inevitable and globalization is an inevitable consequence of the growth of complexity.



Complexity is a fundamental property of every dynamical system. Like many things, it can be managed provided it can be measured. When managed, complexity becomes an asset. When ignored, it becomes a liability, a time bomb. Because of the laws of physics, the spontaneous increase of complexity in all spheres of social life is inevitable. Moreover, every system possesses its own maximum level of sustainable complexity. Close to this limit, known as critical complexity, the system becomes fragile, hence vulnerable. This is the fundamental reason why each corporation should know its complexity value, as well as the corresponding critical value.

  
Complexity can be measured. Ontonix is the first company to have developed and marketed a radically innovative and unique technology for the rational quantification and management of complexity. Introduced in 2005, OntoSpace™, our flagship product, is the world's first complexity management system. While others struggle with definitions of complexity, we have been measuring the complexity of banks, corporations, financial products, mergers and crises since 2005. Our complexity measure is objective. It is natural. No fancy mathematics, statistics or exotic models. A 100% model-free approach guarantees an objective look at a corporation.



Hidden and growing complexity is the main enemy of a corporation. A corporation may still be profitable but close to default. Highly complex systems are difficult to manage and may suddenly collapse. Excessive complexity is the true source of risk.
  
Critically complex systems become almost impossible to manage, hence are vulnerable and greatly exposed to both internal and external sources of uncertainty.  



Complexity X Uncertainty = Fragility™. This simple yet fundamental equation was coined by Ontonix and establishes the philosophy and logic behind our technology and services offering. The bottom line is simple: a complex business process operating in an uncertain environment is a fragile mix. Since the uncertainty of the global economy cannot easily be altered, in order to operate at acceptable levels of fragility one must necessarily reduce the complexity of the corresponding business model. Based on this logic, Complexity Management goes beyond Risk Management and establishes a new underlying paradigm for a superior and holistic form of Business Intelligence. A technology of the Third Millennium.
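Read purely as an illustration (the scaling below is one invented reading of the formula, not Ontonix's actual definition), the relationship can be sketched like this, reusing the complexity figures from the portfolio example above:

```python
def fragility(complexity, critical_complexity, uncertainty):
    """Toy reading of Complexity x Uncertainty = Fragility:
    complexity relative to its critical bound, amplified by the
    uncertainty of the environment (taken here as a value in [0, 1])."""
    return (complexity / critical_complexity) * uncertainty

# Same uncertainty, different complexity headroom (figures are illustrative):
print(round(fragility(50.9, 68.75, 0.4), 2))  # 0.3  -> lower fragility
print(round(fragility(64.3, 68.75, 0.4), 2))  # 0.37 -> close to the limit, more fragile
```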



Conventional techniques are insufficient to insure against all future contingencies.
There are numerous recent examples of AAA-rated corporations that have suddenly defaulted or are in serious difficulty. The collapse of Lehman Brothers is a prominent case. Based on the financial highlights of the bank over the period 2004-2008, our analysis has shown how rapidly increasing complexity provided crisis precursors, hinting more than a year before the default that the system was in difficulty. Evidently, management was unaware that complexity was sky-rocketing, as it is invisible to conventional methods.



The bottom line: manage complexity.