Archive for November, 2017

Differences between today’s sustainability metrics and the ones needed for low-cost social value transactions and efficient markets for intangible assets

November 16, 2017

Measurement is such a confusing topic! Everyone proclaims how important it is, but almost no one ever seeks out and implements the state of the art, despite the enormous advantages to be gained from doing so.

A key metric quality issue concerns the cumbersome and uninterpretable masses of data that well-intentioned people can hobble themselves with when they set out to improve their business processes and outcomes. They focus on what they can easily count, and then (at great but unrecognized cost) misinterpret the counts and percentages as measures.

For instance, today’s sustainability and social value indicators are each expressed in a different unit (dollars, hours, tons, joules, kilowatt hours, survey ratings, category percentages, etc.; see below for a sample list). Some of them may indeed be scientific measures of the individual aspect of the business they track. The problem is that they are all being interpreted, in an undefined and chaotic aggregate, as a measure of something else (social value, sustainability, etc.). Technically speaking, if we want a scientific measure of that higher-order construct, we need to model it, estimate it, calibrate it, and deploy it as a common language in a network of instruments all traceable to a common unit standard.
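
To make that modeling step concrete, here is a minimal sketch of what "model it, estimate it, calibrate it" can look like in practice: a dichotomous Rasch model fit by joint maximum likelihood in Python. The data are simulated, and the sample sizes, seed, and estimation details are illustrative assumptions rather than a production implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated 0/1 responses of 200 providers to 10 indicators,
    # generated from a Rasch model with hypothetical parameters.
    true_b = rng.normal(0.0, 1.0, 200)             # provider measures (logits)
    true_d = np.linspace(-1.5, 1.5, 10)            # indicator difficulties
    p_true = 1.0 / (1.0 + np.exp(-(true_b[:, None] - true_d[None, :])))
    X = (rng.random(p_true.shape) < p_true).astype(int)

    # Zero and perfect scores have no finite estimate; set them aside.
    keep = (X.sum(1) > 0) & (X.sum(1) < X.shape[1])
    X = X[keep]

    b = np.zeros(X.shape[0])                       # estimated measures
    d = np.zeros(X.shape[1])                       # estimated calibrations
    for _ in range(200):                           # damped Newton iterations
        p = 1.0 / (1.0 + np.exp(-(b[:, None] - d[None, :])))
        info = p * (1.0 - p)                       # Fisher information per cell
        b_step = np.clip((X - p).sum(1) / info.sum(1), -1.0, 1.0)
        d_step = np.clip(-((X - p).sum(0)) / info.sum(0), -1.0, 1.0)
        b, d = b + b_step, d + d_step
        d -= d.mean()                              # anchor the unit: mean difficulty 0
        if max(np.abs(b_step).max(), np.abs(d_step).max()) < 1e-6:
            break

    print("indicator calibrations (logits):", np.round(d, 2))

Once the model fits, every provider and every indicator is located on the same logit scale, which is what makes a common unit standard and traceability to it possible.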

All of this is strictly parallel with what we do to make markets in bushels of corn, barrels of oil, and kilowatts of electricity. We don’t buy produce by count in the grocery store because unscrupulous merchants would charge the same amount for small fruits as for large. All of the scales in grocery store produce markets measure in the same unit, and all of the packages of food are similarly marked in standard units of weight and volume so we can compare prices and value.

There are a lot of advantages to taking the trouble to extend this system to social value. I suppose every one of these points could be a chapter in a book:

  • First, investing in scientific measurement reduces data volume to a tiny fraction of what we start with, not only with no loss of information but with the introduction of additional information telling us how confident we can be in the data and exactly what the data mean (see the sketch following this list). That is, all the original information is recoverable from the calibrated measure, which is also qualified with an uncertainty range and a consistency statistic. Inconsistencies can be readily identified and acted on at the individual level.
  • Now the numbers stand for something that actually adds up the way numbers should, instead of for the unknown, differing, and uncontrolled units used in the original counts and percentages.
  • We can take missing data into account, which means we can adapt the indicators used in different situations to specific circumstances without compromising comparability.
  • We can better gauge the dependability of the data, meaning that we will not be over-confident about unreliable data and will not waste time and resources obtaining greater precision than we actually need.
  • Furthermore, the indicators themselves are now scaled into a hierarchy that maps the continuum from low to high performance. This map points the way to improvement. The order of things on the scale shows what comes first and how more complex and difficult goals build on simpler and easier ones. The position of a measure on the scale shows what’s been accomplished, what remains to be done, and what to do next.
  • Finally, we have a single metric we can use to price value across the local particulars of individual providers. This is where it becomes possible to see who gives the most bang for the buck, to reward them, to scale up an expanded market for the product, and to monetize returns on investment.
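
Here is a minimal sketch of the qualifications promised above, assuming a Rasch model has already been calibrated: the measure comes with a standard error (the uncertainty range), an information-weighted mean-square fit statistic (the consistency check), and an ordered indicator hierarchy (the map from low to high performance). The measures, difficulties, and indicator labels here are hypothetical.

    import numpy as np

    b = 0.8                                     # one provider's measure (logits)
    d = np.array([-1.2, -0.5, 0.0, 0.6, 1.4])   # calibrated indicator difficulties
    labels = ["reports energy use", "sets reduction targets",
              "audits suppliers", "prices carbon internally",
              "verifies outcomes independently"]  # hypothetical indicators
    x = np.array([1, 1, 1, 0, 0])               # observed responses

    p = 1.0 / (1.0 + np.exp(-(b - d)))          # Rasch expected scores
    info = p * (1.0 - p)                        # information per indicator
    se = 1.0 / np.sqrt(info.sum())              # standard error of the measure
    infit = np.sum((x - p) ** 2) / info.sum()   # information-weighted mean square

    print(f"measure {b:+.2f} logits, 95% range [{b - 1.96*se:+.2f}, {b + 1.96*se:+.2f}]")
    print(f"infit mean square {infit:.2f} (values near 1.0 indicate consistency)")
    for diff, lab in sorted(zip(d, labels)):    # the hierarchy, easiest to hardest
        print(f"{diff:+.1f}  {lab}")

The ordering printed at the end is the map: a provider measuring at +0.8 logits has likely consolidated the easier practices below that point and faces the harder ones above it.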

The revolutionary network effects of efficient markets are produced by the common currencies for the exchange of value that emerge out of this context. Improvements rebalancing cost and quality foster deflationary economies that drive more profit from lower costs (think of Moore’s law). We gain the efficiency of dramatic reductions in data volume, and the meaningfulness of numbers that stand for something substantively real in the world that we can act on. Together these lower the cost of transactions, as it becomes vastly less expensive to find out how much of a social good is available and of what quality.

Instead of dozens or hundreds of indicators repeated for each company in an industry, repeated again for each division in each company, and all of these repeated for each year or quarter, we have access to all of that information properly contextualized in a succinct, meaningful, and interpretable format for applications at individual, organizational, industry-wide, national, regional, or global levels of complexity.

That’s likely way too much to digest at once! But it seemed worth saying it all at once in one place, in case anyone might be motivated to get in touch or to start efforts in this direction on their own.

Examples of the variety of units in a handy sustainability metrics spreadsheet can be found at the Hess web site (http://www.hess.com/sustainability/performance-data/key-sustainability-metrics): freshwater use in millions or thousands of cubic meters; solid waste and carbon emissions in thousands of tons; natural gas consumption in thousands of gigajoules; electricity consumption in thousands of kilowatt hours; employee union membership, layoffs, and turnover as percentages; lost-time incident rates per hundreds of thousands of hours worked; percentages of female or minority board members; and dollars for business performance.

These indicators are chosen for good reasons within each specific area of interest. Together they comprise an intuitive observation model with face validity. But this is only the start of the work that needs to be done to create the metrics we need if we are to radically multiply the efficiency of social value markets. For an example of how to work from today’s diverse arrays of social value indicators (where each one is presented in its own spreadsheet) toward more meaningful, adaptable, and precise measures, see:

Fisher, W. P., Jr. (2011). Measuring genuine progress by scaling economic indicators to think global & act local: An example from the UN Millennium Development Goals project. LivingCapitalMetrics.com. Social Science Research Network: http://ssrn.com/abstract=1739386.
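
Before any such scaling can begin, indicators reported in incommensurable units have to be brought into a common observational frame. Here is a hedged sketch of one conventional first step, re-expressing raw values as ordinal ratings; the indicator names and data are hypothetical stand-ins, not Hess’s actual figures.

    import numpy as np

    rng = np.random.default_rng(1)
    indicators = {                   # hypothetical raw values in mixed units
        "freshwater_use_km3": rng.lognormal(2.0, 0.5, 30),   # lower is better
        "ghg_emissions_kt": rng.lognormal(5.0, 0.8, 30),     # lower is better
        "renewable_share_pct": rng.uniform(0.0, 60.0, 30),   # higher is better
    }
    higher_is_better = {"freshwater_use_km3": False,
                        "ghg_emissions_kt": False,
                        "renewable_share_pct": True}

    def to_ratings(values, better_high, n_cats=4):
        """Bin raw values into 0..n_cats-1 ordinal ratings by quartile rank."""
        ranks = values.argsort().argsort()       # 0 = smallest raw value
        cats = ranks * n_cats // len(values)
        return cats if better_high else (n_cats - 1 - cats)

    ratings = np.column_stack([to_ratings(v, higher_is_better[k])
                               for k, v in indicators.items()])
    print(ratings[:5])               # companies x indicators, common ordinal frame

The resulting rectangular matrix is only raw material: a polytomous Rasch model would then test whether these indicators cohere well enough to calibrate a single social value measure, and with what uncertainty.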

Excellent articulation of the rationale for living capital metrics 

November 2, 2017

I just found the best analysis of today’s situation I’ve seen yet. And it explicitly articulates and substantiates all my reasons for doing the work I’m doing. Wonderful to have this independent source of validation.

The crux of the problem is spelled out at the end of the article, where the degree of polarizing opposition is so extreme that standards of truth and evidence are completely compromised. My point, however, is that everyone still uses language, and language still requires certain connections between concepts, words, and things in order to function. Continuing to use language in everyday life in ways that assume a common consensus on meaningful reference may eventually become unbearably inconsistent with the way language is used politically, creating a social vacuum that will be filled by a new language capable of restoring the balance of meaning in the word-concept-thing triangles.

As is repeatedly argued in this blog, my take is that what we are witnessing is language restructuring itself to incorporate new degrees of complexity at a general institutional, world-historical level. The falsehoods of our contemporary institutional definitions of truth and fact are rooted in the insufficiencies of the decision-making methods and tools widely used in education, health care, government, business, etc. The numbers called measures are produced by methods that almost universally ignore the gifts of self-organized meaning that offer themselves in the structure of test, assessment, survey, poll, and evaluation response data. Those shortcomings in our information infrastructure and communication systems are feeding self-reinforcing loops of increasingly chaotic noise.

This is why it is so important that precision science is rooted in everyday language and thinking, per Nersessian’s (2002) treatment of Maxwell and Rasch’s (1960, pp. 110-115) adoption of Maxwell’s method of analogy (Fisher, 2010a; Fisher & Stenner, 2013). The metric system (Système International d’Unités, or SI) is a natural language extension of intuitive and historical methods of bringing together words, concepts, and things, renamed as instruments, theories, and data. A new SI for human, social, and natural capital, built out into science and commerce, will be one component of a multilevel and complex adaptive system that resolves today’s epistemic crisis by tapping deeper resources for the creation of meaning than are available in today’s institutions.

Everything is interrelated. The epistemic crisis will be resolved when our institutions base decisions not just on potentially arbitrary collections of facts but on facts internally consistent enough to support instrument calibration and predictive theory. The facts have to be commonsensical to everyday people: to employees, customers, teachers, students, patients, doctors, nurses, and managers. People have to be able to see themselves, and where they stand relative to their goals, their origins, and everyone else, in the pictures drawn by the results of tests, surveys, and evaluations. That is not possible in today’s systems. And in those systems, some people have systematically unfair advantages. That has to change, not through some kind of Brave New World hobbling of those with advantages but by leveling the playing field to allow everyone the same opportunities for self-improvement and the rewards that follow from it.
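
One concrete reading of "internally consistent enough to support instrument calibration" is parameter invariance: calibrations estimated from independent samples should agree within error. Below is a rough, simulated sketch of that check, using crude centered log-odds as difficulty estimates; the data and the eight-item design are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(2)
    d_true = np.linspace(-2.0, 2.0, 8)           # hypothetical item difficulties

    def crude_difficulties(n_persons):
        """Simulate Rasch responses; return centered log-odds difficulty estimates."""
        b = rng.normal(0.0, 1.0, n_persons)
        p = 1.0 / (1.0 + np.exp(-(b[:, None] - d_true[None, :])))
        X = (rng.random(p.shape) < p).astype(int)
        p_correct = X.mean(0)                    # proportion succeeding per item
        d_hat = -(np.log(p_correct) - np.log(1.0 - p_correct))
        return d_hat - d_hat.mean()

    d1, d2 = crude_difficulties(500), crude_difficulties(500)
    print(f"cross-sample correlation of calibrations: {np.corrcoef(d1, d2)[0, 1]:.3f}")
    # Values near 1.0 support invariance; low values flag facts too
    # inconsistent to sustain a shared instrument and a common unit.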

That’s it in a nutshell. Really good article:

America is facing an epistemic crisis – Vox

https://apple.news/A0alOElOQT5itYGPAJ3eYPQ

References

Fisher, W. P., Jr. (2010a, June 13-16). Rasch, Maxwell’s method of analogy, and the Chicago tradition. In G. Cooper (Chair), Probabilistic models for measurement in education, psychology, social science and health: Celebrating 50 years since the publication of Rasch’s Probabilistic Models. University of Copenhagen School of Business, FUHU Conference Centre, Copenhagen, Denmark. https://conference.cbs.dk/index.php/rasch/Rasch2010/paper/view/824

Fisher, W. P., Jr. (2010b). The standard model in the history of the natural sciences, econometrics, and the social sciences. Journal of Physics: Conference Series, 238(1), 012016. http://iopscience.iop.org/1742-6596/238/1/012016/pdf/1742-6596_238_1_012016.pdf

Fisher, W. P., Jr., & Stenner, A. J. (2013). On the potential for improved measurement in the human and social sciences. In Q. Zhang & H. Yang (Eds.), Pacific Rim Objective Measurement Symposium 2012 Conference Proceedings (pp. 1-11). Berlin, Germany: Springer-Verlag.

Nersessian, N. J. (2002). Maxwell and “the method of physical analogy”: Model-based reasoning, generic abstraction, and conceptual change. In D. Malament (Ed.), Reading natural philosophy: Essays in the history and philosophy of science and mathematics (pp. 129-166). La Salle, Illinois: Open Court.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Reprint, with Foreword and Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedagogiske Institut.