Saturday 24 August 2013

Beyond Pre-Crisis Analytics



Conventional data-mining technology has the objective of establishing patterns and rules from large amounts of data by combining techniques such as statistics, artificial intelligence and database management. Data-mining and analytics techniques are supposed to give managers an extra edge and to transform data into business intelligence. Have they succeeded? To find the answer, take a look at the state of the global economy.

BEYOND PRE-CRISIS TECHNOLOGY
Conventional pre-crisis data mining and data analysis techniques display information by means of curves, 2D or 3D plots, pie charts, bar charts or fancy surfaces. When the dimensionality of the data is high these methods become impractical, because one has to cope with hundreds if not thousands of such plots. It is necessary to resort to methods that genuinely synthesize data, not merely transform one problem into another. Our complexity-based Analytic Engine OntoNet™ does something completely different.
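As a back-of-the-envelope illustration of our own (not something OntoNet™ computes), the number of pairwise 2D plots needed to inspect every relationship between variables grows quadratically with dimensionality:

```python
from math import comb

# Number of distinct 2D scatter plots needed to inspect every pair of
# variables: C(n, 2) = n * (n - 1) / 2, i.e. quadratic growth with n.
for n in (10, 50, 100, 500):
    print(f"{n} variables -> {comb(n, 2)} pairwise plots")

# 10 variables -> 45 pairwise plots
# 50 variables -> 1225 pairwise plots
# 100 variables -> 4950 pairwise plots
# 500 variables -> 124750 pairwise plots
```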

SEEING THE BIG PICTURE
Every time you do something to data you destroy some of the information it contains. But data is expensive. We have developed innovative model-free technology that doesn’t destroy information. In fact, our approach emulates the way the brain works when you actually look at data. By transforming raw data into structure we also achieve an unprecedented degree of synthesis. And it all comes in one single Business Structure Map. This means you get to appreciate the nature and dimensionality of all your data to the fullest possible extent.

PUTTING YOUR DATA TO WORK
Extracting knowledge from data is not just about putting together pieces of information. This is precisely where traditional technology has failed. We drown in data but we are thirsty for knowledge, not information. Capturing knowledge, be it from field data or data that emerges from computer simulation, means transforming it into structure. Structure means relationships, degrees of freedom, constraints. Structure means gaining understanding and knowledge. Precisely what OntoNet™ is about.

UNSEEN INFORMATION HIDDEN IN YOUR DATA
The moment you map multi-dimensional data onto structure you get to appreciate a fundamental and new aspect of a business: its complexity. OntoNet™ not only provides a unique and modern representation of a business, it also measures its complexity. Why is this so important? Because the rapid increase of business complexity, an inevitable consequence of turbulence and globalization, is one of the biggest enemies of growth, stability and resilience. With OntoNet™, conventional risk management transitions into its more advanced and natural form: complexity management.

SUPERIOR BUSINESS INTELLIGENCE = SURVIVAL
In a globalized and increasingly turbulent economy the survival of a business hinges on its ability to react quickly to unexpected, unique and extreme events. The economy is not linear, it is not stationary, it is not in a state of equilibrium, and not everything follows a Gaussian distribution. Yet many conventional BI and Analytics techniques assume precisely the opposite, in violation of the basic laws of physics. Building a sustainable and resilient economy also means going beyond regressions, neural nets or statistics.

By the way, have you ever seen the DJIA like this?


See it in motion here.


Is France THE time-bomb for the Euro?


In an article published last year, The Economist speaks of the country that could pose the largest threat to the euro: France. A section of the article states:

"Even as other EU countries have curbed the reach of the state, it has grown in France to consume almost 57% of GDP, the highest share in the euro zone. Because of the failure to balance a single budget since 1981, public debt has risen from 22% of GDP then to over 90% now.

The business climate in France has also worsened. French firms are burdened by overly rigid labour- and product-market regulation, exceptionally high taxes and the euro zone’s heaviest social charges on payrolls. Not surprisingly, new companies are rare. France has fewer small and medium-sized enterprises, today’s engines of job growth, than Germany, Italy or Britain. The economy is stagnant, may tip into recession this quarter and will barely grow next year. Over 10% of the workforce, and over 25% of the young, are jobless."

France's Resilience Rating, which reflects the "stability" of the situation rather than the performance of its economy, has only recently grown beyond 70%. Click below to see France's Business Structure Map and rating.

For some reason, the press gives the impression that only Southern European economies are in trouble. This article tells us that we are all, essentially, in the same boat. Let's not forget: the crisis is global.


www.ontonix.com






The 18 Truths About Complexity




A paper copyrighted in 1998, called How Complex Systems Fail and written by an M.D., Dr. Richard Cook, describes 18 truths about the underlying reasons complex systems break down. On the surface the list appears surprisingly simple, but there is deeper meaning as well. Some of the points are obvious while others may surprise you.

We report the paper verbatim.


THE EIGHTEEN TRUTHS
 
"The first few items explain that catastrophic failure only occurs when multiple components break down simultaneously:
1. Complex systems are intrinsically hazardous systems. The frequency of hazard exposure can sometimes be changed but the processes involved in the system are themselves intrinsically and irreducibly hazardous. It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems.
2. Complex systems are heavily and successfully defended against failure. The high consequences of failure lead over time to the construction of multiple layers of defense against failure. The effect of these measures is to provide a series of shields that normally divert operations away from accidents.
3. Catastrophe requires multiple failures - single point failures are not enough. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure.
4. Complex systems contain changing mixtures of failures latent within them. The complexity of these systems makes it impossible for them to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations.
5. Complex systems run in degraded mode. A corollary to the preceding point is that complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws.
Point six is important because it clearly states that the potential for failure is inherent in complex systems. For large-scale enterprise systems, the profound implications mean that system planners must accept the potential for failure and build in safeguards. Sounds obvious, but too often we ignore this reality:
6. Catastrophe is always just around the corner. The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system's own nature.
Given the inherent potential for failure, the next point describes the difficulty in assigning simple blame when something goes wrong. For analytic convenience (or laziness), we may prefer to distill narrow causes for failure, but that can lead to incorrect conclusions:
7. Post-accident attribution accident to a ‘root cause' is fundamentally wrong. Because overt failure requires multiple faults, there is no isolated ‘cause' of an accident. There are multiple contributors to accidents. Each of these is necessary insufficient in itself to create an accident. Only jointly are these causes sufficient to create an accident.
The next group goes beyond the nature of complex systems and discusses the all-important human element in causing failure:
8. Hindsight biases post-accident assessments of human performance. Knowledge of the outcome makes it seem that events leading to the outcome should have appeared more salient to practitioners at the time than was actually the case. Hindsight bias remains the primary obstacle to accident investigation, especially when expert human performance is involved.
9. Human operators have dual roles: as producers & as defenders against failure. The system practitioners operate the system in order to produce its desired product and also work to forestall accidents. This dynamic quality of system operation, the balancing of demands for production against the possibility of incipient failure is unavoidable.
10. All practitioner actions are gambles. After accidents, the overt failure often appears to have been inevitable and the practitioner's actions as blunders or deliberate wilful disregard of certain impending failure. But all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes. That practitioner actions are gambles appears clear after accidents; in general, post hoc analysis regards these gambles as poor ones. But the converse: that successful outcomes are also the result of gambles; is not widely appreciated.

11. Actions at the sharp end resolve all ambiguity. Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors' or ‘violations' but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.
Starting with the nature of complex systems and then discussing the human element, the paper argues that sensitivity to preventing failure must be built in ongoing operations. In my experience, this is true and has substantial implications for the organizational culture of project teams:
12. Human practitioners are the adaptable element of complex systems. Practitioners and first line management actively adapt the system to maximize production and minimize accidents. These adaptations often occur on a moment by moment basis.
13. Human expertise in complex systems is constantly changing. Complex systems require substantial human expertise in their operation and management. Critical issues related to expertise arise from (1) the need to use scarce expertise as a resource for the most difficult or demanding production needs and (2) the need to develop expertise for future use.
14. Change introduces new forms of failure. The low rate of overt accidents in reliable systems may encourage changes, especially the use of new technology, to decrease the number of low consequence but high frequency failures. These changes maybe actually create opportunities for new, low frequency but high consequence failures. Because these new, high consequence accidents occur at a low rate, multiple system changes may occur before an accident, making it hard to see the contribution of technology to the failure.
15. Views of ‘cause' limit the effectiveness of defenses against future events. Post-accident remedies for "human error" are usually predicated on obstructing activities that can "cause" accidents. These end-of-the-chain measures do little to reduce the likelihood of further accidents.
16. Safety is a characteristic of systems and not of their components. Safety is an emergent property of systems; it does not reside in a person, device or department of an organization or system. Safety cannot be purchased or manufactured; it is not a feature that is separate from the other components of the system. The state of safety in any system is always dynamic; continuous systemic change insures that hazard and its management are constantly changing.
17. People continuously create safety. Failure free operations are the result of activities of people who work to keep the system within the boundaries of tolerable performance. These activities are, for the most part, part of normal operations and superficially straightforward. But because system operations are never trouble free, human practitioner adaptations to changing conditions actually create safety from moment to moment.
The paper concludes with a ray of hope for those who have been through the wars:
18. Failure free operations require experience with failure. Recognizing hazard and successfully manipulating system operations to remain inside the tolerable performance boundaries requires intimate contact with failure. More robust system performance is likely to arise in systems where operators can discern the "edge of the envelope". It also depends on providing calibration about how their actions move system performance towards or away from the edge of the envelope."


www.ontonix.com


 

Friday 23 August 2013

The Principle of Fragility



The following equation, which we call the Principle of Fragility, was coined by Ontonix in early 2005 and indicates why complexity management is a form of risk management:


Complexity × Uncertainty = Fragility


In order to understand the Principle of Fragility, let us borrow Fourier's idea of separation of variables and create a useful parallel. Let us assume, without loss of generality, that the term "Complexity" is specific to a certain system, e.g. a corporation, while the term "Uncertainty" concentrates the degree of turbulence (entropy) of the environment in which the system operates, e.g. a market. The equation then assumes the following form:

C_system × U_environment = Fragility

or, in the case of a business,

C_business model × U_market = Fragility

What the equation states is that, in a market of given turbulence, a more complex business model will be more fragile (more exposed). In practical terms, the equation may be seen as a mathematical version of Ockham's razor: all things being equal, a less complex compromise is preferable.
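As a toy numerical sketch of our own (the complexity and uncertainty figures below are made up and expressed on arbitrary normalized scales, not values produced by OntoNet™), the same market turbulence penalizes the more complex business model proportionally more:

```python
def fragility(complexity: float, uncertainty: float) -> float:
    """Principle of Fragility: Fragility = Complexity x Uncertainty."""
    return complexity * uncertainty

# Illustrative numbers on arbitrary normalized scales.
u_market = 0.4           # turbulence (entropy) of the market
c_simple_model = 10.0    # less complex business model
c_complex_model = 25.0   # more complex business model

print(fragility(c_simple_model, u_market))   # 4.0
print(fragility(c_complex_model, u_market))  # 10.0
# In the same market, the more complex model is 2.5 times more fragile (exposed).
```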




Thursday 22 August 2013

Complexity Maps get a facelift.



Business Structure Maps (also known as Complexity Maps) now have a new look and feel. The size of each node (variable) is now a function of its importance, or footprint, on the system as a whole. This makes maps much easier to read, as it is immediately clear where the important things are and where to start solving problems. The larger nodes are where there is more leverage, and that is where one needs to concentrate. A rough analogy of the idea is sketched below.
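The sketch below is an analogy only, not the Ontonix rendering engine: it uses the networkx and matplotlib libraries, with a node's weighted degree as a simple stand-in for its "footprint" on the system.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Toy dependency map: edges carry the strength of the relationship
# between pairs of business variables (illustrative values only).
G = nx.Graph()
G.add_weighted_edges_from([
    ("Sales", "Costs", 0.8),
    ("Sales", "Cash flow", 0.9),
    ("Costs", "Cash flow", 0.6),
    ("Cash flow", "Debt", 0.7),
    ("Debt", "Interest", 0.5),
])

# Stand-in for a node's "footprint": its weighted degree (sum of link strengths).
importance = dict(G.degree(weight="weight"))
sizes = [1500 * importance[n] for n in G.nodes()]

# Larger nodes are drawn bigger, making the points of leverage immediately visible.
nx.draw(G, with_labels=True, node_size=sizes, node_color="lightsteelblue")
plt.show()
```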

Numerous interactive examples of maps may be seen here.



Get your own maps here.


www.ontonix.com

Friday 16 August 2013

Complexity Negatively Impacts Portfolio Returns.




In a recent blog post we pointed out research conducted at the EPFL in Lausanne, Switzerland, which confirms that complexity negatively impacts portfolio returns. The research has now been concluded and the full report is available here.

The research has been conducted using Ontonix's on-line system for measuring the complexity and resilience of businesses and portfolios.



www.ontonix.com




Thursday 15 August 2013

How Healthy Are the US Markets? A Look at a System of Systems.

The US stock market indices have been enjoying upward trends for a few months now. When analysed one by one, the situation appears to be very positive. Based on the last 60 days of trading and on the values of "Open", "High", "Low", "Volume", "Close" and "Adjusted Close", we have analysed the DJIA, the S&P 500 and the NASDAQ Composite separately and then as a single interacting system. Here are the results (analysis performed on August 15th, 2013).
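For readers who wish to reproduce the data preparation step, a minimal sketch is shown below. The analysis itself is performed with Ontonix's on-line system, not with this code, and the CSV file names are hypothetical placeholders for exported quote histories.

```python
import pandas as pd

# Hypothetical CSV exports, one per index, each with columns:
# Date, Open, High, Low, Close, Adj Close, Volume
files = {"DJIA": "djia.csv", "SP500": "sp500.csv", "NASDAQ": "nasdaq.csv"}

frames = {}
for name, path in files.items():
    df = pd.read_csv(path, parse_dates=["Date"], index_col="Date")
    df = df[["Open", "High", "Low", "Close", "Adj Close", "Volume"]].tail(60)
    frames[name] = df.add_prefix(f"{name} ")

# Each index can be analysed on its own (6 variables per market) ...
djia_only = frames["DJIA"]

# ... or the three can be joined on the date axis into one 18-variable
# "system of systems" and analysed as a single interacting dataset.
combined = pd.concat(frames.values(), axis=1).dropna()
print(combined.shape)   # expected: roughly (60, 18)
```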

The DJIA. Resilience: 83%

The S&P 500. Resilience: 97%

The NASDAQ Composite. Resilience: 95%

Because of the inter-dependencies that globalization has created, no system acts in isolation and no system should be analyzed in isolation. All systems interact, forming a huge system of systems. To show how this can impact the big picture we have analysed the three markets simultaneously. This is the picture:

DJIA + S&P + NASDAQ. Resilience: 72%

In the above map the first six red nodes correspond to the Dow, the following six blue nodes to the S&P 500, and the remaining six nodes to the NASDAQ Composite.

The combined markets have a resilience of 72%, even though the three markets boast values of 83%, 97% and 95%, with an average of 92%. Surprised? We put together three components, each of which has a resilience of at least 83%, and the resulting system has a resilience of 72%! This is a great example of the whole actually being less than the sum of the parts. So much for linear thinking.




www.ontonix.com


www.rate-a-business.com