Tuesday, 2 July 2013

Why Can't Huge Problems Be Solved (Anymore)?




Before you start to solve a problem (a numerical problem) you should check its conditioning. An ill-conditioned problem will produce very fragile and unreliable results - no matter how elegant and sophisticated a solution you come up with, it will be vulnerable. Think of a simple and basic problem, a linear system of equations: y = Ax + b. If A is ill-conditioned, the solution will be very sensitive to the entries of y and b, and errors therein will be amplified by the so-called condition number of A, k(A). That's as far as simple linear algebra goes. However, most problems in life cannot be tackled via a linear matrix equation (or any other type of equation, for that matter). This does not mean, though, that they cannot be ill-conditioned - quite the contrary. The more non-linear a problem becomes, the more astonishing the behaviour it can deliver (the Logistic Map comes to mind, for example).
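To make this concrete, here is a minimal NumPy sketch of the error amplification just described. The matrix, the right-hand side and the perturbation are purely illustrative: a tiny error in y moves the solution x by roughly k(A) times as much.

```python
import numpy as np

# An almost singular, i.e. ill-conditioned, matrix A in y = A x + b.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([0.5, 0.5])
y = np.array([2.5, 2.5001])

print("k(A) =", np.linalg.cond(A))        # roughly 4e4

x = np.linalg.solve(A, y - b)             # nominal solution: [1, 1]

# Perturb y by one part in ten thousand...
y_noisy = y + np.array([0.0, 1e-4])
x_noisy = np.linalg.solve(A, y_noisy - b)

# ...and the solution changes by order one: the input error is
# amplified by roughly k(A).
print("x         =", x)
print("x (noisy) =", x_noisy)
```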


The numerical conditioning of a problem should always be computed before one attempts its solution. How often is this done? Very, very rarely. Once you've determined that a problem is well-conditioned, there is the issue of establishing whether it admits a single solution, multiple solutions or none at all. Those who practice math on a daily basis know this well. Nothing new under the Sun. However, today humanity faces enormous problems of planetary proportions (organized financial crime, terrorism, climate change, loss of biodiversity, depletion of natural resources, etc.) which are very difficult to formulate. A scientist knows that a good formulation of a problem is half of the solution.
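For the linear-algebra analogy, at least, the "one, many or none" question can be answered mechanically by comparing ranks (the Rouché-Capelli test). A minimal sketch, with illustrative matrices:

```python
import numpy as np

def classify_system(A, b):
    """Rouché-Capelli test: compare rank(A) with the rank of [A | b]."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A  = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_Ab > rank_A:
        return "no solution"
    if rank_A < A.shape[1]:
        return "infinitely many solutions"
    return "unique solution"

print(classify_system([[1, 0], [0, 1]], [1, 2]))   # unique solution
print(classify_system([[1, 1], [2, 2]], [1, 2]))   # infinitely many solutions
print(classify_system([[1, 1], [2, 2]], [1, 3]))   # no solution
```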


There is one fundamental issue, in our view, which makes huge problems difficult to solve - high complexity. Operating close to critical complexity - every system has such a threshold - means the problem (or system) is very ill-conditioned and dominated by uncertainty. Imagine the linear system y = Ax + b in which the entries of A are not crisp values but fuzzy. In other words, suppose that a particular entry a_ij assumes values from a certain range and that the "exact" value is unknown. This changes the situation dramatically, as the system now admits a huge number of possible solutions. How does one decide which is best? (A sketch of this situation follows below.)

In any event, hugely complex problems cannot be solved using traditional techniques and traditional thinking. First of all, if one doesn't measure the complexity and the corresponding critical complexity of a problem, one has no idea of its conditioning - which means that trying to solve it may be futile. Second, huge problems are, in all likelihood, connected to other huge problems. This means that a systems, or holistic, approach must be taken. Unfortunately, only engineers are trained to think in terms of systems, certainly not politicians or managers. The third obstacle to the solution of huge problems is the predominance of subjective, non-quantitative approaches. If you want to lower cholesterol, you must first measure it. If you want to travel to the Moon, you must first be able to measure its distance.
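Returning to the fuzzy-A example mentioned above, a simple way to see the effect is to sample the uncertain entries and solve each resulting crisp system. This Monte Carlo sketch uses an illustrative nominal matrix and assumes +/- 5% intervals on its entries:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal system y = A x + b, but every entry of A is only known to lie
# inside an interval (here: the nominal value +/- 5%).
A_nominal = np.array([[2.0, 1.0],
                      [1.0, 3.0]])
b = np.array([0.5, 0.5])
y = np.array([3.5, 4.5])

solutions = []
for _ in range(10_000):
    # Draw one "possible" A from the interval description...
    A = A_nominal * rng.uniform(0.95, 1.05, size=A_nominal.shape)
    # ...and solve the corresponding crisp system.
    solutions.append(np.linalg.solve(A, y - b))

solutions = np.array(solutions)
print("x1 ranges from", solutions[:, 0].min(), "to", solutions[:, 0].max())
print("x2 ranges from", solutions[:, 1].min(), "to", solutions[:, 1].max())
# There is no longer a single answer, only a cloud of candidate solutions.
```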


Suppose, now, that a huge problem has been identified and "described properly". How would one go about solving it? Just as in the case of a mathematical problem, its conditioning should be determined before a solution strategy is adopted. This can be done easily by measuring the problem's (system's) complexity and the corresponding critical complexity. Their ratio is a good proxy of numerical conditioning (i.e. of k(A)). Imagine, for example, that the problem is nearly critically complex, i.e. extremely ill-conditioned (this could mean that it is close to a spontaneous phase change or a mode switch). As in the case of mathematically ill-conditioned problems, one may transform it so as to improve its conditioning. For example, in eigenvalue extraction, an ill-conditioned matrix may first undergo balancing via a similarity transformation. In the case of nearly critically complex systems one could first attempt a complexity reduction before a cure is adopted. Two examples: an unstable hospitalized patient is first stabilized before surgery; a company restructures before an acquisition or a merger. This process of complexity reduction is of paramount importance when it comes to solving huge problems. However, it hinges on measuring complexity, and this, in turn, mandates a quantitative approach. Paradoxically, even though we have devised highly efficient means of generating huge amounts of data, quantitative approaches are not as popular as one would imagine. Why? Accountability. When you speak in terms of numbers you can be held accountable.
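Sticking with the matrix analogy, here is a small sketch of balancing using SciPy's matrix_balance (the matrix itself is illustrative). Because balancing is a similarity transformation, the eigenvalues are untouched while the conditioning of the matrix improves considerably:

```python
import numpy as np
from scipy.linalg import matrix_balance

# A badly scaled matrix: its entries span many orders of magnitude.
A = np.array([[1.0,    2.0e4],
              [3.0e-4, 2.0]])

# Balancing applies a diagonal similarity transformation: eigenvalues are
# preserved, but the entries become comparable in magnitude.
B, T = matrix_balance(A)

print("k(A) =", np.linalg.cond(A))                  # very large
print("k(B) =", np.linalg.cond(B))                  # orders of magnitude smaller
print("eig(A) =", np.sort(np.linalg.eigvals(A)))
print("eig(B) =", np.sort(np.linalg.eigvals(B)))    # the same, up to round-off
```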


Huge problems are becoming increasingly complex and multi-faceted. Because of the laws of physics, each generation leaves behind more chaos (entropy) than the previous one. The ancient Greeks already knew this - it seems that we don't. Unless we adopt quantitative complexity monitoring and complexity management on a large scale, we will not be able to solve our current problems and will create even larger and nastier ones for future generations. Until we all drown in complexity. Our priority must be to first move away from the precipice.



www.ontonix.com
