Archive for the ‘Mediation’ Category

Enchantment, Organizations, and Mediating Instruments: Potential for a New Consensus?

August 3, 2011

I just came across something that could be helpful in regaining some forward momentum and expanding the frame of reference for the research on caring in nursing with Jane Sumner (Sumner & Fisher, 2008). We have yet to really work through the failure of Habermas’s hermeneutic objectivism (Kim, 2002; Thompson, 1981), and we haven’t connected what we’ve done with (a) Ricoeur’s (1984, 1985, 1990, 1995) sense of narrative as describing the past en route to prescribing the future (prefiguring, configuring, and refiguring the creation of meaning in discourse) or with (b) Wright’s (1999) sense of learning from past data to efficiently and effectively anticipate new data within a stable inferential frame of reference.

Now I’ve found a recent publication that resonates well with this goal, and includes examples from nursing to boot. Boje and Baskin (2010; see especially pp. 12-17 in the manuscript available at http://peaceaware.com/vita/paper_pdfs/JOCM_Never_Disenchanted.pdf) cite only secondary literature, but they do a good job of articulating where the field stands conceptually and of tracing the sources of that articulation. They make no mention of Ricoeur on narrative (1984, 1985, 1990) or on play and the heuristic fiction (1981, pp. 185-187), and no mention of Gadamer on play as the most important clue to methodological authenticity (1989, pp. 101-134). It follows that they also make no use of the considerable volume of other available and relevant work on the metaphysics of allure, captivation, enthrallment, rapture, beauty, or eros.

This is all very important because these issues are highly salient markers of the distinction between a modern, Cartesian, and mechanical worldview destructive of enchantment and play, and an amodern, nonCartesian, and organic worldview in tune with enchantment and play. As I have stressed repeatedly in these posts, the way we frame problems is itself now the primary problem, contrary to those who think the problem lies in identifying and applying resources, techniques, or will power. It is essential that we learn to frame problems in a way that begins from requirements of subject-object interdependence instead of from assumptions of subject-object independence. Previous posts here explore in greater detail how we are all captivated by the desire for meaning. Any time we choose negotiation or patient waiting over violence, we express faith in the ultimate value of trusting our words. So though Boje and Baskin do not document this larger context, they still effectively show exactly where and how work in the nonCartesian paradigm of enchantment connects with what’s going on in organizational change management theory.

The paper’s focus on narrative as facilitating enchantment and disenchantment speaks to our fundamental absorption into the play of language. Enchantment is described on page 2 as involving a positive connection with existence, an enthrallment with the wonder of being endowed with natural and cultural gifts. Though not described as such, this hermeneutics of restoration, as Ricoeur (1967) calls it, focuses on the way symbols give rise to thought in an unasked-for assertion of meaningfulness. The structure we see emerge of its own accord across multiple data sets from tests, surveys, and assessments is an important example of this gift, through which previously identified meanings re-assert themselves anew (see my published philosophical work, such as Fisher, 2004). The contrast with disenchantment of course arises as a function of the dead and one-sided modern Cartesian effort to control the environment, which effectively eliminates wonder and meaning via a hermeneutics of suspicion.

In accord with the work done to date with Sumner on caring in nursing, the Boje and Baskin paper describes people’s variable willingness to accept disenchantment or demand enchantment (p. 13) in terms that look quite like preconventional and postconventional Kohlbergian stages. A nurse’s need to shift from one dominant narrative form to another is described as very difficult because of the way she had used the one to which she was accustomed to construct her identity as a nurse (p. 15). Bi-directionality between nurses and patients is implied in another example of a narrative shift in a hospital (p. 16). Both identity and bi-directionality are central issues in the research with Sumner.

The paper also touches on the conceptual domain of instrumental realism, as this is developed in the works of Ihde, Latour, Heelan and others (on p. 6; again, without citing them), and emphasizes a nonCartesian subject-object unity and belongingness, which is described at length in Ricoeur’s work. At the bottom of page 7 and top of 8, storytelling is theorized in terms of retrospection, presentness, and a bet on future meaning, which precisely echoes Ricoeur’s (1984, 1985, 1990) sense of narrative refiguration, configuration, and prefiguration. A connection with measurement comes here, in that what we want is to:

“reach beyond the data in hand to what these data might imply about future data, still unmet, but urgent to foresee. The first problem is how to predict values for these future data, which, by the meaning of inference, are necessarily missing. This meaning of missing must include not only the future data to be inferred but also all possible past data that were lost or never collected” (Wright, 1999, p. 76).

Properly understood and implemented (see previous posts in this blog), measurement based in models of individual behavior provides a way to systematically create an atmosphere of emergent enchantment. Having developmentally sound narratives rooted in individual measures on multiple dimensions over time gives us a shared written history that we can all find ourselves in, and that we can then use to project a vision of a shared future that has reasonable expectations for what’s possible.
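
To make this concrete, here is a minimal sketch of what measurement based in models of individual behavior can look like, using the dichotomous Rasch model. The item bank and person measure below are hypothetical, invented purely for illustration.

```python
import math

def rasch_p(ability, difficulty):
    """Dichotomous Rasch model: the probability of success depends
    only on the difference between person ability and item
    difficulty, both expressed in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical calibrated item bank (difficulties in logits).
bank = {"item_A": -1.2, "item_B": 0.0, "item_C": 0.8, "item_D": 1.5}

# A person measure estimated from past responses (also hypothetical).
ability = 0.5

# Wright's point: a stable inferential frame lets us reach beyond
# the data in hand to data not yet collected -- here, the expected
# performance on items this person has never seen.
for name, d in bank.items():
    print(f"{name}: expected P(success) = {rasch_p(ability, d):.2f}")
```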

This mediation of past and future by means of technical instruments is being described (Miller & O’Leary, 2007) in a way that, to my mind (Fisher & Stenner, 2011), marks a vital distinction not just between the social and natural sciences, but between economically moribund and inflationary industries such as education, health care, and social services, on the one hand, and economically vibrant and deflationary industries such as microprocessors, on the other.

It is here, and I say this out loud for the first time, even to myself, that I begin to see the light at the end of the tunnel, to see a way that I might find a sense of closure and resolution in the project I took up over 30 years ago. My puzzle has been one of understanding, in theory and practice, how it is that measurement and mathematical thinking are nothing but refinements of the logic used in everyday conversation. It only occurs to me now that, if we can focus the conversations we are in, in ways that balance meaningfulness and precision, that situate each of us as individuals relative to the larger wholes of who we have been and who we might be, that encompass both the welcoming Socratic midwife and the annoying Socratic gadfly as different facets of the same framework, and that enable us to properly coordinate and align technical projects involving investments in intangible capital, well, then, we’ll be in a position to engage more productively with the challenges of the day.

There won’t be any panacea, but there will be a new consensus and a new infrastructure that, however new they may seem, will enact yet again, in a positive way, the truth of the saying, “the more things change, the more they stay the same.” As I’ve repeatedly argued, the changes we need to implement are nothing but extensions of age-old principles into areas in which they have not yet been applied. We should take some satisfaction from this, as what else could possibly work? The originality of the application does not change the fact that it is rooted in appropriating (via a refiguration, to be sure) a model created for other purposes and putting it to work for new ones.

Another way of putting the question is in terms of that “permanent arbitration between technical universalism and the personality constituted on the ethico-political plane” characteristic of the need to enter into the global technical society while still retaining our roots in our cultural past (Ricoeur, 1974, p. 291). What is needed is the capacity to mediate each individual’s retelling of the grand narrative so that each of us sees ourselves in everyone else, and everyone else in ourselves. Though I am sure the meaning of this is less than completely transparent right now, putting it in writing is enormously satisfying, and I will continue to work on telling the tale as it needs to be told.

References

Boje, D., & Baskin, K. (2010). Our organizations were never disenchanted: Enchantment by design narratives vs. enchantment by emergence. Journal of Organizational Change Management, 24(4), 411-426.

Fisher, W. P., Jr. (2004, October). Meaning and method in the social sciences. Human Studies: A Journal for Philosophy and the Social Sciences, 27(4), 429-54.

Fisher, W. P., Jr., & Stenner, A. J. (2011, August 31 to September 2). A technology roadmap for intangible assets metrology. Paper presented at the International Measurement Confederation (IMEKO), Jena, Germany.

Gadamer, H.-G. (1989). Truth and method (J. Weinsheimer & D. G. Marshall, Trans.) (Second revised edition). New York: Crossroad.

Kim, K.-M. (2002, May). On the failure of Habermas’s hermeneutic objectivism. Cultural Studies <–> Critical Methodologies, 2(2), 270-98.

Miller, P., & O’Leary, T. (2007, October/November). Mediating instruments and making markets: Capital budgeting, science and the economy. Accounting, Organizations, and Society, 32(7-8), 701-34.

Ricoeur, P. (1967). Conclusion: The symbol gives rise to thought. In R. N. Anshen (Ed.), The symbolism of evil (pp. 347-57). Boston, Massachusetts: Beacon Press.

Ricoeur, P. (1974). Political and social essays (D. Stewart & J. Bien, Eds.). Athens, Ohio: Ohio University Press.

Ricoeur, P. (1981). Hermeneutics and the human sciences: Essays on language, action and interpretation (J. B. Thompson, Ed.) (J. B. Thompson, Trans.). Cambridge, England: Cambridge University Press.

Ricoeur, P. (1984, 1985, 1990). Time and narrative (Vols. 1-3) (K. McLaughlin (Blamey) & D. Pellauer, Trans.). Chicago, Illinois: University of Chicago Press.

Ricoeur, P. (1995). Reply to Peter Kemp. In L. E. Hahn (Ed.), The philosophy of Paul Ricoeur (pp. 395-398). Chicago, Illinois: Open Court.

Sumner, J., & Fisher, W. P., Jr. (2008). The moral construct of caring in nursing as communicative action: The theory and practice of a caring science. Advances in Nursing Science, 31(4), E19-E36.

Thompson, J. B. (1981). Critical hermeneutics: A study in the thought of Paul Ricoeur and Jurgen Habermas. New York: Cambridge University Press.

Wright, B. D. (1999). Fundamental measurement for psychology. In S. E. Embretson & S. L. Hershberger (Eds.), The new rules of measurement: What every educator and psychologist should know (pp. 65-104 [http://www.rasch.org/memo64.htm]). Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Debt, Revenue, and Changing the Way Washington Works: The Greatest Entrepreneurial Opportunity of Our Time

July 30, 2011

“Holding the line” on spending and taxes does not make for a fundamental transformation of the way Washington works. Simply doing less of one thing is just a small quantitative change that does nothing to build positive results or set a new direction. What we need is a qualitative metamorphosis akin to a caterpillar becoming a butterfly. In contrast with this beautiful image of natural processes, the arguments and so-called principles being invoked in the sham debate that’s going on are nothing more than fights over where to put deck chairs on the Titanic.

What sort of transformation is possible? What kind of a metamorphosis will start from who and where we are, but redefine us sustainably and responsibly? As I have repeatedly explained in this blog, my conference presentations, and my publications, with numerous citations of authoritative references, we already possess all of the elements of the transformation. We have only to organize and deploy them. Of course, discerning what the resources are and how to put them together is not obvious. And though I believe we will do what needs to be done when we are ready, it never hurts to prepare for that moment. So here’s another take on the situation.

Infrastructure that supports lean thinking is the name of the game. Lean thinking focuses on identifying and removing waste. Anything that consumes resources but does not contribute to the quality of the end product is waste. We have enormous amounts of wasteful inefficiency in many areas of our economy. These inefficiencies are concentrated in areas in which management is hobbled by low quality information, where we lack the infrastructure we need.

Providing and capitalizing on this infrastructure is The Greatest Entrepreneurial Opportunity of Our Time. Changing the way Washington (ha! I just typed “Wastington”!) works is the same thing as mitigating the sources of risk that caused the current economic situation. Making government behave more like a business requires making the human, social, and natural capital markets more efficient. Making those markets more efficient requires reducing the costs of transactions. Those costs are determined in large part by information quality, which is a function of measurement.

It is often said that the best way to reduce the size of government is to move the functions of government into the marketplace. But this proposal has never been associated with any sense of the infrastructural components needed to really make the idea work. Simply reducing government without an alternative way of performing its functions is irresponsible and destructive. And many of those who rail on and on about how bad or inefficient government is fail to recognize that the government is us. We get the government we deserve. The government we get follows directly from the kind of people we are. Government embodies our image of ourselves as a people. In the US, this is what having a representative form of government means. “We the people” participate in our society’s self-governance not just by voting, writing letters to Congress, or demonstrating, but in the way we spend our money, where we choose to live, work, and go to school, and in every decision we make. No one can take a breath of air, a drink of water, or a bite of food without trusting everyone else not to carelessly or maliciously poison them. No one can buy anything or drive down the street without expecting others to behave in predictable ways that ensure order and safety.

But we don’t just trust blindly. We have systems in place to guard against those who would ruthlessly seek to gain at everyone else’s expense. And systems are the point. No individual person or firm, no matter how rich, could afford to set up and maintain the systems needed for checking and enforcing air, water, food, and workplace safety measures. Society as a whole invests in the infrastructure of measures created, maintained, and regulated by the government’s Department of Commerce and the National Institute of Standards and Technology (NIST). The moral importance and the economic value of measurement standards have been stressed historically over many millennia, from the Bible and the Quran to the Magna Carta and the French Revolution to the US Constitution. Uniform weights and measures are universally recognized and accepted as essential to fair trade.

So how is it that we nonetheless apparently expect individuals and local organizations like schools, businesses, and hospitals to measure and monitor students’ abilities; employees’ skills and engagement; patients’ health status, functioning, and quality of care; etc.? Why do we not demand common currencies for the exchange of value in human, social, and natural capital markets? Why don’t we as a society compel our representatives in government to institute the will of the people and create new standards for fair trade in education, health care, social services, and environmental management?

Measuring better is not just a local issue! It is a systemic issue! When measurement is objective and when we all think together in the common language of a shared metric (like hours, volts, inches or centimeters, ounces or grams, degrees Fahrenheit or Celsius, etc.), then and only then do we have the means we need to implement lean strategies and create new efficiencies systematically. We need an Intangible Assets Metric System.

The current recession was caused in large part by failures in measuring and managing trust, responsibility, loyalty, and commitment. Similar problems in measuring and managing human, social, and natural capital have led to endlessly spiraling costs in education, health care, social services, and environmental management. The problems we’re experiencing in these areas are intimately tied up with the way we formulate and implement group-level decision-making processes and policies based in statistics, when what we need is to empower individuals with the tools and information they need to make their own decisions and policies. We will not and cannot metamorphose from caterpillar to butterfly until we create the infrastructure through which we each can take full ownership and control of our individual shares of the human, social, and natural capital stock that is rightfully ours.

We well know that we manage what we measure. What counts gets counted. Attention tends to be focused on what we’re accountable for. But, and this is vitally important, many of the numbers called measures do not provide the information we need for management. And not only are lots of numbers giving us low quality information, there are far too many of them! We could have better and more information from far fewer numbers.

Previous postings in this blog document the fact that we have the intellectual, political, scientific, and economic resources we need to measure and manage human, social, and natural capital for authentic wealth. And the issue is not a matter of marshaling the will. It is hard to imagine how there could be more demand for better management of intangible assets than there is right now. The problem in meeting that demand is a matter of imagining how to start the ball rolling. What configuration of investments and resources will start the process of bursting open the chrysalis? How will the demand for meaningful mediating instruments be met in a way that leads to the spreading of the butterfly’s wings? It is an exciting time to be alive.

A Framework for Competitive Advantage in Managing Intangible Assets

July 26, 2011

It has long been recognized that externalities like social costs could be brought into the market should ways of measuring them objectively be devised. Markets, however, do not emerge spontaneously from the mere desire to be able to buy and sell; they are, rather, the products of actors and agencies that define the rules, roles, and relationships within which transaction costs are reduced and from which value, profits, and authentic wealth may be extracted. Objective measurement is necessary for reducing transaction costs but is by itself insufficient for the making of markets. Thus, markets for intangible assets, such as human, social, and natural capital, remain inefficient and undeveloped even though scientific theories, models, methods, and results demonstrating their objective measurability have been available for over 80 years.

Why has the science of objectively measured intangible assets not yet led to efficient markets for those assets? The crux of the problem, the pivot point at which an economic Archimedes could move the world of business, has to do with verifiable trust. It may seem like stating the obvious, but there is much to be learned from recognizing that shared narratives of past performance and a shared vision of the future are essential to the atmosphere of trust and verifiability needed for the making of markets. The key factor is the level of detail reliably tapped by such narratives.

For instance, some markets seem to have the weight of an immovable mass when the dominant narrative describes a static past and future with no clearly defined trajectory of leverageable development. But when a path of increasing technical capacity or precision over time can be articulated, entrepreneurs have the time frames they need to coordinate, align, and manage budgeting decisions vis-à-vis investments, suppliers, manufacturers, marketing, sales, and customers. For example, the building out of the infrastructure of highways, electrical power, and water and sewer services assured manufacturers of automobiles, appliances, and homes that they could develop products for which there would be ready customers. Similarly, the mapping out of a path of steady increases in technical precision at no additional cost in Moore’s Law has been a key factor enabling the microprocessor industry’s ongoing history of success.

Of course, as has been the theme of this blog since day one, similar paths for the development of new infrastructural capacities could be vital factors for making new markets for human, social, and natural capital. I’ll be speaking on this topic at the forthcoming IMEKO meeting in Jena, Germany, August 31 to September 2. Watch this spot for more on this theme in the near future.

The Moral Implications of the Concept of Human Capital: More on How to Create Living Capital Markets

March 22, 2011

The moral reprehensibility of the concept of human capital hinges on its use in rationalizing impersonal business decisions in the name of profits. Even when the viability of the organization is at stake, the discarding of people (referred to in some human resource departments as “taking out the trash”) entails degrees of psychological and economic injury no one should have to suffer, or inflict.

There certainly is a justified need for a general concept naming the productive capacity of labor. But labor is far more than a capacity for work. No one’s working life should be reduced to a job description. Labor involves a wide range of different combinations of skills, abilities, motivations, health, and trustworthiness. Human capital has accordingly come to be broken down into a wide variety of forms, such as literacy capital, health capital, social capital, etc.

The metaphoric use of the word “capital” in the phrase “human capital,” referring to stocks of available human resources, rings hollow. The traditional concept of labor as a form of capital is an unjustified reduction of diverse capacities in itself. But the problem goes deeper. Intangible resources like labor are not represented and managed in the forms that make markets for tangible resources efficient. Transferable representations, like titles and deeds, give property a legal status as owned and an economic status as financially fungible. And in those legal and economic terms, tangible forms of capital give capitalism its hallmark signification as the lifeblood of the cycle of investment, profits, and reinvestment.

Intangible forms of capital, in contrast, are managed without the benefit of any standardized way of proving what is owned, what quantity or quality of it exists, and what it costs. Human, social, and natural forms of capital are therefore managed directly, by acting in an unmediated way on whomever or whatever embodies them. Such management requires, even in capitalist economies, the use of what are inherently socialistic methods, as these are the only methods available for dealing with the concrete individual people, communities, and ecologies involved (Fisher, 2002, 2011; drawing from Hayek, 1948, 1988; De Soto, 2000).

The assumption that transferable representations of intangible assets are inconceivable or inherently reductionist is, however, completely mistaken. All economic capital is ultimately brought to life (conceived, gestated, midwifed, and nurtured to maturity) as scientific capital. Scientific measurability is what makes it possible to add up the value of shares of stock across holdings, to divide something owned into shares, and to represent something in a court or a bank in a portable form (Latour, 1987; Fisher, 2002, 2011).

Only when you appreciate this distinction between dead and living capital, between capital represented on transferable instruments and capital that is not, can you see that the real tragedy is not in the treatment of labor as capital. No, the real tragedy is in the way everyone is denied the full exercise of their rights over the skills, abilities, health, motivations, trustworthiness, and environmental resources that are rightly their own personal, private property.

Being homogenized at the population level into an interchangeable statistic is tragic enough. But when we leave the matter here, we fail to see and to grasp the meaning of the opportunities that are lost in that myopic world view. As I have been at pains in this blog to show, statistics are not measures. Statistical models of interactions between several variables at the group level are not the same thing as measurement models of interactions within a single variable at the individual level. When statistical models are used in place of measurement models, the result is inevitably numbers without a soul. When measurement models of individual response processes are used to produce meaningful estimates of how much of something someone possesses, a whole different world of possibilities opens up.
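
As a toy illustration of this contrast (all data invented): a group-level statistic reduces everyone to one number, while a measurement model reads each individual response pattern against a common set of item calibrations to estimate a person measure on a shared interval scale. This is only a sketch; a real application would include standard errors and model fit checks.

```python
import math

difficulties = [-1.0, -0.3, 0.2, 0.9, 1.6]  # fixed item calibrations (logits)

def p_success(theta, d):
    return 1.0 / (1.0 + math.exp(-(theta - d)))

def estimate_measure(responses, items=difficulties, iters=20):
    """Maximum-likelihood person measure for one response string,
    given known item calibrations (simple Newton-Raphson)."""
    theta = 0.0
    for _ in range(iters):
        ps = [p_success(theta, d) for d in items]
        gradient = sum(x - pi for x, pi in zip(responses, ps))
        information = sum(pi * (1.0 - pi) for pi in ps)
        theta += gradient / information
    return theta

# Hypothetical individual response strings (1 = success).
people = {"Ann": [1, 1, 1, 0, 0], "Bo": [1, 0, 1, 1, 0], "Cy": [1, 1, 0, 0, 0]}

# Group-level statistic: one number standing in for everyone.
mean_raw = sum(sum(r) for r in people.values()) / len(people)
print("group mean raw score:", round(mean_raw, 2))

# Individual-level measurement: a location for each person on a
# common scale, interpretable relative to the calibrated items.
for name, responses in people.items():
    print(name, "measure =", round(estimate_measure(responses), 2), "logits")
```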

In the same way that the Pythagorean Theorem applies to any triangle, so, too, do the coordinates from the international geodetic survey make it possible to know everything that needs to be known about the location and disposition of a piece of real estate. Advanced measurement models in the psychosocial sciences are making it possible to arrive at similarly convenient and objective ways of representing the quality and quantity of intangible assets. Instead of being just one number among many others, real measures tell a story that situates each of us relative to everyone else in a meaningful way.

The practical meaning of the maxim “you manage what you measure” stems from those instances in which measures embody the fullness of the very thing that is the object of management interest. An engine’s fuel efficiency, or the volume of commodities produced, for instance, are things that can be managed less or more efficiently because there are measures of them that directly represent just what we want to control. Lean thinking enables the removal of resources that do not contribute to the production of the desired end result.

Many metrics, however, tend to obscure and distract from what needs to be managed. The objects of measurement may seem to be obviously related to what needs to be managed, but dealing with each of them piecemeal results in inefficient and ineffective management. In these instances, instead of the characteristic cycle of investment, profit, and reinvestment, there seems only a bottomless pit absorbing ever more investment and never producing a profit. Why?

The economic dysfunctionality of intangible asset markets is intimately tied up with the moral dysfunctionality of those markets. Drawing an analogy from a recent analysis of political freedom (Shirky, 2010), economic freedom has to be accompanied by a market society economically literate enough, economically empowered enough, and interconnected enough to trade on the capital stocks issued. Western society, and increasingly the entire global society, is arguably economically literate and sufficiently interconnected to exercise economic freedom.

Economic empowerment is another matter entirely. There is no economic power without fungible capital, without ways of representing resources of all kinds, tangible and intangible, that transparently show what is available, how much of it there is, and what quality it is. A form of currency expressing the value of that capital is essential, but money is wildly insufficient to the task of determining the quality and quantity of the available capital stocks.

Today’s education, health care, human resource, and environmental quality markets are the diametric opposite of the markets in which investors, producers, and consumers are empowered. Only when dead human, social, and natural capital is brought to life in efficient markets (Fisher, 2011) will we empower ourselves with fuller degrees of creative control over our economic lives.

The crux of the economic empowerment issue is this: in the current context of inefficient intangibles markets, everyone is personally commodified. Everything that makes me valuable to an employer or investor or customer, my skills, motivations, health, and trustworthiness, is unjustifiably reduced to a homogenized unit of labor. And in the social and environmental quality markets, voting our shares is cumbersome, expensive, and often ineffective because of the immense amount of work that has to be done to defend each particular living manifestation of the value we want to protect.

Concentrated economic power is exercised in the mass markets of dead, socialized intangible assets in ways that we are taught to think of as impersonal and indifferent to each of us as individuals, but which are actually experienced by us as intensely personal.

So what is the difference between being treated personally as a commodity and being treated impersonally as a commodity? This is the same as asking what it would mean to be empowered economically with creative control over the stocks of human, social, and natural capital that are rightfully our private property. This difference is the difference between dead and living capital (Fisher, 2002, 2011).

Freedom of economic communication, realized in the trade of privately owned stocks of any form of capital, ought to be the highest priority in the way we think about the infrastructure of a sustainable and socially responsible economy. For maximum efficiency, that freedom requires a common meaningful and rigorous quantitative language enabling determinations of what exactly is for sale, and its quality, quantity, and unit price. As I have repeated ad nauseam in this blog, measurement based in scientifically calibrated instrumentation traceable to consensus standards is absolutely essential to meeting this need.

Coming in at a very close second to the highest priority is securing the ability to trade. A strong market society, where people can exercise the right to control their own private property—their personal stocks of human, social, and natural capital—in highly efficient markets, is more important than policies, regulations, and five-year plans dictating how masses of supposedly homogenous labor, social, and environmental commodities are priced and managed.

So instead of reacting to the downside of the business cycle with a socialistic safety net, how might a capitalistic one prove more humane, moral, and economically profitable? Instead of guaranteeing a limited amount of unemployment insurance funded through taxes, what we should have are requirements for minimum investments in social capital. Instead of employment in the usual sense of the term, with its implications of hiring and firing, we should have an open market for fungible human capital, in which everyone can track the price of their stock, attract and make new investments, take profits and income, upgrade the quality and/or quantity of their stock, etc.

In this context, instead of receiving unemployment compensation, workers not currently engaged in remunerated use of their skills would cash in some of their accumulated stock of social capital. The cost of social capital would go up in periods of high demand, as during the recent economic downturns caused by betrayals of trust and commitment (which are, in effect, involuntary expenditures of social capital). Conversely, the cost of human capital would also fluctuate with supply and demand, with the profits (currently referred to as wages) turned by individual workers rising and falling with the price of their stocks. These ups and downs, being absorbed by everyone in proportion to their investments, would reduce the distorted proportions we see today in the shares of the rewards and punishments allotted.

Though no one would have a guaranteed wage, everyone would have the opportunity to manage their capital to the fullest, by upgrading it, keeping it current, and selling it to the highest bidder. Ebbing and flowing tides would more truly lift and drop all boats together, with the drops backed up with the social capital markets’ tangible reassurance that we are all in this together. This kind of a social capitalism transforms the supposedly impersonal but actually highly personal indifference of flows in human capital into a more fully impersonal indifference in which individuals have the potential to maximize the realization of their personal goals.

What we need is to create a visible alternative to the bankrupt economic system in a kind of reverse shock doctrine. Eleanor Roosevelt often said that the thing we are most afraid of is the thing we most need to confront if we are to grow. The more we struggle against what we fear, the further we are carried away from what we want. Only when we relax into the binding constraints do we find them loosened. Only when we channel overwhelming force against itself or in a productive direction can we withstand attack. When we find the courage to go where the wild things are and look the monsters in the eye will we have the opportunity to see if their fearful aspect is transformed to playfulness. What is left is often a more mundane set of challenges, the residuals of a developmental transition to a new level of hierarchical complexity.

And this is the case with the moral implications of the concept of human capital. Treating individuals as fungible commodities is a way that some use to protect themselves from feeling like monsters and from being discarded as well. Those who find themselves removed from the satisfactions of working life can blame the shortsightedness of their former colleagues, or the ugliness of the unfeeling system. But neither defensive nor offensive rationalizations do anything to address the actual problem, and the problem has nothing to do with the morality or the immorality of the concept of human capital.

The problem is the problem. That is, the way we approach and define the problem delimits the sphere of the creative options we have for solving it. As Henry Ford is supposed to have said, whether you think you can or you think you cannot, you’re probably right. It is up to us to decide whether we can create an economic system that justifies its reductions and actually lives up to its billing as impersonal and unbiased, or if we cannot. Either way, we’ll have to accept and live with the consequences.

References

De Soto, H. (2000). The mystery of capital: Why capitalism triumphs in the West and fails everywhere else. New York: Basic Books.

Fisher, W. P., Jr. (2002, Spring). “The Mystery of Capital” and the human sciences. Rasch Measurement Transactions, 15(4), 854 [http://www.rasch.org/rmt/rmt154j.htm].

Fisher, W. P., Jr. (2011, Spring). Bringing human, social, and natural capital to life: Practical consequences and opportunities. Journal of Applied Measurement, 12(1), in press.

Hayek, F. A. (1948). Individualism and economic order. Chicago: University of Chicago Press.

Hayek, F. A. (1988). The fatal conceit: The errors of socialism (W. W. Bartley, III, Ed.) The Collected Works of F. A. Hayek. Chicago: University of Chicago Press.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. New York: Cambridge University Press.

Shirky, C. (2010, December 20). The political power of social media: Technology, the public sphere, and political change. Foreign Affairs, 90(1), http://www.foreignaffairs.com/articles/67038/clay-shirky/the-political-power-of-social-media.

A Technology Road Map for Efficient Intangible Assets Markets

February 24, 2011

Scientific technologies, instruments and conceptual images have been found to play vitally important roles in economic success because of the way they enable accurate predictions of future industry and market states (Miller & O’Leary, 2007). The technology road map for the microprocessor industry, based in Moore’s Law, has successfully guided market expectations and coordinated research investment decisions for over 40 years. When the earlier electromechanical, relay, vacuum tube, and transistor computing technology paradigms are included, the same trajectory has dominated the computer industry for over 100 years (Kurzweil, 2005, pp. 66-67).
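
As a back-of-the-envelope illustration of the kind of arithmetic such a road map rests on: a constant doubling law is what lets an entire industry project capacity decades ahead. The sketch uses the commonly cited two-year doubling period and the Intel 4004’s 2,300 transistors (1971) as a starting point; the projections are purely illustrative.

```python
# Moore's Law as simple doubling arithmetic: a constant doubling
# period lets everyone in an industry coordinate expectations
# decades into the future.
DOUBLING_YEARS = 2.0                   # commonly cited doubling period
START_YEAR, START_COUNT = 1971, 2_300  # Intel 4004 transistor count

for year in range(1971, 2012, 10):
    projected = START_COUNT * 2 ** ((year - START_YEAR) / DOUBLING_YEARS)
    print(year, f"{projected:,.0f} transistors (projected)")
```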

We need a similar technology road map to guide the creation and development of intangible asset markets for human, social, and natural (HSN) capital. This will involve intensive research to identify the primary constructs, to determine what is measurable and what is not, and to create consensus standards for uniform metrics along with the metrology networks through which those standards will function. Alignment with these developments will require comprehensively integrated economic models, accounting frameworks, and investment platforms, in addition to specific applications deploying the capital formations.

What I’m proposing is, in a sense, just an extension in a new direction of the metrology challenges and issues summarized in Table ITWG15 on page 48 in the 2010 update to the International Technology Roadmap for Semiconductors (http://www.itrs.net/about.html). Distributed electronic communication facilitated by computers and the Internet is well on the way to creating a globally uniform instantaneous information network. But much of what needs to be communicated through this network remains expressed in locally defined languages that lack common points of reference. Meaningful connectivity demands a shared language.

To those who say we already have the technology necessary and sufficient to the measurement and management of human, social, and natural capital, I say think again. The difference between what we have and what we need is the same as the difference between (a) an economy whose capital resources are not represented in transferable representations like titles and deeds, and that are denominated in a flood of money circulating in different currencies, and, (b) an economy whose capital resources are represented in transferable documents and are traded using a single currency with a restricted money supply. The measurement of intangible assets is today akin to the former economy, with little actual living capital and hundreds of incommensurable instruments and scoring systems, when what we need is the latter. (See previous entries in this blog for more on the difference between dead and living capital.)

Given the model of a road map detailing the significant features of the living capital terrain, industry-specific variations will inform the development of explicit market expectations, the alignment of HSN capital budgeting decisions, and the coordination of research investments. The concept of a technology road map for HSN capital is based in and expands on an integration of hierarchical complexity (Commons & Richards, 2002; Dawson, 2004), complex adaptive functionality (Taylor, 2003), Peirce’s semiotic developmental map of creative thought (Wright, 1999), and historical stages in the development of measuring systems (Stenner & Horabin, 1992; Stenner, Burdick, Sanford, & Burdick, 2006).

Technology road maps replace organizational amnesia with organizational learning by providing the structure of a memory that not only stores information, knowledge, understanding, and wisdom, but makes it available for use in new situations. Othman and Hashim (2004) describe organizational amnesia (OA) relative to organizational learning (OL) in a way that opens the door to a rich application of Miller and O’Leary’s (2007) detailed account of how technology road maps contribute to the creation of new markets and industries. Technology road maps function as the higher organizational principles needed for transforming individual and social expertise into economically useful products and services. Organizational learning and adaptability further need to be framed at the inter-organizational level where their various dimensions or facets are aligned not only within individual organizations but between them within the industry as a whole.

The mediation of the individual and organizational levels, and of the organizational and inter-organizational levels, is facilitated by measurement. In the microprocessor industry, Moore’s Law enabled the creation of technology road maps charting the structure, processes, and outcomes that had to be aligned at the individual, organizational, and inter-organizational levels to coordinate the entire microprocessor industry’s economic success. Such road maps need to be created for each major form of human, social, and natural capital, with the associated alignments and coordinations put in play at all levels of every firm, industry, and government.

It is a basic fact of contemporary life that the technologies we employ every day are so complex that hardly anyone understands how they do what they do. Technological miracles are commonplace events, from transportation to entertainment, from health care to manufacturing. And we usually suffer little in the way of adverse consequences from not knowing how an automatic transmission, a thermometer, or digital video reproduction works. It is enough to know how to use the tool.

This passive acceptance of technical details beyond our ken extends into areas in which standards, methods, and products are much less well defined. Managers, executives, researchers, teachers, clinicians, and others who need measurement but who are unaware of its technicalities are then put in the position of being passive consumers accepting the lowest common denominator in the quality of the services and products obtained.

And that’s not all. Just as the mass market of measurement consumers is typically passive and uninformed, in complementary fashion the supply side is fragmented and contentious. There is little agreement among measurement experts as to which quantitative methods set the standard as the state of the art. Virtually any method can be justified in terms of some body of research and practice, so the confused consumer accepts whatever is easily available or is most likely to support a preconceived agenda.

It may be possible, however, to separate the measurement wheat from the chaff. For instance, measurement consumers may value a way of distinguishing among methods that is based in a simple criterion of meaningful utility. What if all measurement consumers’ own interests in, and reasons for, measuring something in particular, such as literacy or community, were emphasized and embodied in a common framework? What if a path of small steps from currently popular methods of less value to more scientific ones of more value could be mapped? Such a continuum of methods could range from those doing the least to advance the users’ business interests to those doing the most to advance those interests.

The aesthetics, simplicity, meaningfulness, rigor, and practical consequences of strong theoretical requirements for instrument calibration provide such criteria for choices as to models and methods (Andrich, 2002, 2004; Busemeyer & Wang, 2000; Myung, 2000; Pitt, Kim, & Myung, 2003; Wright, 1997, 1999). These criteria could be used to develop and guide explicit considerations of data quality, construct theory, instrument calibration, quantitative comparisons, measurement standard metrics, etc., along a continuum from the most passive and least objective to the most actively involved and most objective.

The passive approach to measurement typically starts from and prioritizes content validity. The questions asked on tests, surveys, and assessments are considered relevant primarily on the basis of the words they use and the concepts they appear to address. Evidence that the questions actually cohere together and measure the same thing is not needed. If there is any awareness of the existence of axiomatically prescribed measurement requirements, these are not considered to be essential. That is, if failures of invariance are observed, they usually provoke a turn to less stringent data treatments instead of a push to remove or prevent them. Little or no measurement or construct theory is implemented, meaning that all results remain dependent on local samples of items and people. Passively approaching measurement in this way is then encumbered by the need for repeated data gathering and analysis, and by the local dependency of the results. Researchers working in this mode are akin to the woodcutters who say they are too busy cutting trees to sharpen their saws.

An alternative, active approach to measurement starts from and prioritizes construct validity and the satisfaction of the axiomatic measurement requirements. Failures of invariance provoke further questioning, and there is significant practical use of measurement and construct theory. Results are then independent of local samples, sometimes to the point that researchers and practical applications are not encumbered with usual test- or survey-based data gathering and analysis.
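
One minimal, concrete way of acting on the invariance requirement is to calibrate the same items in two independent samples that differ in ability, and then check that the difficulty estimates agree within error. The sketch below does this with simulated data and a rough approximation (centered log-odds of item failure rates, in the spirit of the PROX method); it is adequate only as an illustration, not as a production calibration routine.

```python
import math, random

random.seed(1)
true_d = [-1.5, -0.5, 0.0, 0.7, 1.3]  # generating item difficulties (logits)

def simulate(n_persons, mean_ability):
    """Simulate dichotomous Rasch responses for one sample."""
    data = []
    for _ in range(n_persons):
        theta = random.gauss(mean_ability, 1.0)
        row = [1 if random.random() < 1 / (1 + math.exp(-(theta - d))) else 0
               for d in true_d]
        data.append(row)
    return data

def calibrate(data):
    """Rough item calibration: centered log-odds of failure rates."""
    n = len(data)
    logits = []
    for i in range(len(true_d)):
        p = sum(row[i] for row in data) / n
        p = min(max(p, 1e-3), 1 - 1e-3)  # guard against 0 and 1
        logits.append(math.log((1 - p) / p))
    mean = sum(logits) / len(logits)
    return [x - mean for x in logits]    # center the scale

# Two samples at different ability levels; invariance requires the
# item calibrations to agree (within error) regardless of sample.
low_group  = calibrate(simulate(500, -0.5))
high_group = calibrate(simulate(500, +0.5))
for i, (a, b) in enumerate(zip(low_group, high_group)):
    print(f"item {i}: low-sample {a:+.2f}   high-sample {b:+.2f}")
```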

As is often the case, this black-and-white portrayal tells far from the whole story. There are multiple shades of grey in the contrast between passive and active approaches to measurement. The actual range of implementations is much more diverse than the simple binary contrast would suggest (see the previous post in this blog for a description of a hierarchy of increasingly complex stages in measurement). Spelling out the variation that exists could be helpful for making deliberate, conscious choices and decisions in measurement practice.

It is inevitable that we would start from the materials we have at hand, and that we would then move through a hierarchy of increasing efficiency and predictive control as understanding of any given variable grows. Previous considerations of the problem have offered different categorizations for the transformations characterizing development on this continuum. Stenner and Horabin (1992) distinguish between 1) impressionistic and qualitative, nominal gradations found in the earliest conceptualizations of temperature, 2) local, data-based quantitative measures of temperature, and 3) generalized, universally uniform, theory-based quantitative measures of temperature.

The latter is prized for the way that thermodynamic theory enables the calibration of individual thermometers with no need for testing each one in empirical studies of its performance. Theory makes it possible to know in advance what the results of such tests would be with enough precision to greatly reduce the burden and expenses of instrument calibration.

Reflecting on the history of psychosocial measurement in this context, it becomes apparent that these three stages can be further broken down. The previous post in this blog lists the distinguishing features of each of six stages in the evolution of measurement systems, building on the five stages described by Stenner, Burdick, Sanford, and Burdick (2006).

And so what analogue of Moore’s Law might be projected? What kind of timetable might govern the unfolding of what could be called Stenner’s Law? Guidance for reasonable expectations is found in Kurzweil’s (2005) charting of historical and projected future exponential increases in the volume of information and computer processing speed. The accelerating growth in knowledge taking place in the world today speaks directly to a systematic integration of criteria for what shall count as meaningful new learning. Maps of the roads we’re traveling will provide some needed guidance and make the trip more enjoyable, efficient, and productive. Perhaps somewhere not far down the road we’ll be able to project doubling rates for growth in the volume of fungible literacy capital globally, or halving rates in the cost of health capital stocks. We manage what we measure, so when we begin measuring well what we want to manage well, we’ll all be better off.

References

Andrich, D. (2002). Understanding resistance to the data-model relationship in Rasch’s paradigm: A reflection for the next generation. Journal of Applied Measurement, 3(3), 325-59.

Andrich, D. (2004, January). Controversy and the Rasch model: A characteristic of incompatible paradigms? Medical Care, 42(1), I-7–I-16.

Busemeyer, J. R., & Wang, Y.-M. (2000, March). Model comparisons and model selections based on generalization criterion methodology. Journal of Mathematical Psychology, 44(1), 171-189 [http://quantrm2.psy.ohio-state.edu/injae/jmpsp.htm].

Commons, M. L., & Richards, F. A. (2002, July). Organizing components into combinations: How stage transition works. Journal of Adult Development, 9(3), 159-177.

Dawson, T. L. (2004, April). Assessing intellectual development: Three approaches, one sequence. Journal of Adult Development, 11(2), 71-85.

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking Penguin.

Miller, P., & O’Leary, T. (2007, October/November). Mediating instruments and making markets: Capital budgeting, science and the economy. Accounting, Organizations, and Society, 32(7-8), 701-34.

Myung, I. J. (2000). Importance of complexity in model selection. Journal of Mathematical Psychology, 44(1), 190-204.

Othman, R., & Hashim, N. A. (2004). Typologizing organizational amnesia. The Learning Organization, 11(3), 273-84.

Pitt, M. A., Kim, W., & Myung, I. J. (2003). Flexibility versus generalizability in model selection. Psychonomic Bulletin & Review, 10, 29-44.

Stenner, A. J., Burdick, H., Sanford, E. E., & Burdick, D. S. (2006). How accurate are Lexile text measures? Journal of Applied Measurement, 7(3), 307-22.

Stenner, A. J., & Horabin, I. (1992). Three stages of construct definition. Rasch Measurement Transactions, 6(3), 229 [http://www.rasch.org/rmt/rmt63b.htm].

Taylor, M. C. (2003). The moment of complexity: Emerging network culture. Chicago: University of Chicago Press.

Wright, B. D. (1997, Winter). A history of social science measurement. Educational Measurement: Issues and Practice, 16(4), 33-45, 52 [http://www.rasch.org/memo62.htm].

Wright, B. D. (1999). Fundamental measurement for psychology. In S. E. Embretson & S. L. Hershberger (Eds.), The new rules of measurement: What every educator and psychologist should know (pp. 65-104 [http://www.rasch.org/memo64.htm]). Hillsdale, New Jersey: Lawrence Erlbaum Associates.

How Evidence-Based Decision Making Suffers in the Absence of Theory and Instrument: The Power of a More Balanced Approach

January 28, 2010

The Basis of Evidence in Theory and Instrument

The ostensible point of basing decisions in evidence is to have reasons for proceeding in one direction versus any other. We want to be able to say why we are proceeding as we are. When we give evidence-based reasons for our decisions, we typically couch them in terms of what worked in past experience. That experience might have been accrued over time in practical applications, or it might have been deliberately arranged in one or more experimental comparisons and tests of concisely stated hypotheses.

At its best, generalizing from past experience to as yet unmet future experiences enables us to navigate life and succeed in ways that would not be possible if we could not learn and had no memories. The application of a lesson learned from particular past events to particular future events involves a very specific inferential process. To be able to recognize repeated iterations of the same things requires the accumulation of patterns of evidence. Experience in observing such patterns allows us to develop confidence in our understanding of what a given pattern represents in terms of pleasant or painful consequences. When we are able to conceptualize and articulate a pattern, and then to recognize a new occurrence of it, we have formed a working concept of it.

Evidence-based decision making is then a matter of formulating expectations from repeatedly demonstrated and routinely reproducible patterns of observations that lend themselves to conceptual representations, as ideas expressed in words. Linguistic and cultural frameworks selectively focus attention by projecting expectations and filtering observations into meaningful patterns represented by words, numbers, and other symbols. The point of efforts aimed at basing decisions in evidence is to try to go with the flow of this inferential process more deliberately and effectively than might otherwise be the case.

None of this is new or controversial. However, the inferential step from evidence to decision always involves unexamined and unjustified assumptions. That is, there is always an element of metaphysical faith behind the expectation that any given symbol or word is going to work as a representation of something in the same way that it has in the past. We can never completely eliminate this leap of faith, since we cannot predict the future with 100% confidence. We can, however, do a lot to reduce the size of the leap, and the risks that go with it, by questioning our assumptions in experimental research that tests hypotheses as to the invariant stability and predictive utility of the representations we make.

Theoretical and Instrumental Assumptions Hidden Behind the Evidence

For instance, evidence as to the effectiveness of an intervention or treatment is often expressed in terms of measures commonly described as quantitative. But it is unusual for any evidence to be produced justifying that description in terms of something that really adds up in the way numbers do. So we often find ourselves in situations in which our evidence is much less meaningful, reliable, and valid than we suppose it to be.

Quantitative measures are often valued as the hallmark of rational science. But their capacity to live up to this billing depends on the quality of the inferences that can be supported. Very few researchers thoroughly investigate the quality of their measures and justify the inferences they make relative to that quality.

Measurement presumes a reproducible pattern of evidence that can serve as the basis for a decision concerning how much of something has been observed. It naturally follows that we often base measurement in counts of some kind (successes, failures, ratings, frequencies, etc.). The counts, scores, or sums are then often transformed into percentages by dividing them by the maximum possible score that could be obtained. Sometimes the scores are averaged for each person measured, and/or for each item or question on the test, assessment, or survey. These scores and percentages are then almost universally fed directly into decision processes or statistical analyses with no further consideration.
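
The arithmetic below (ordinary log-odds, nothing specific to any one instrument) shows one reason why feeding raw percentages directly into analyses is risky: equal gains in percent correct are unequal gains in log-odds, so the percentage scale is compressed toward its extremes and cannot be treated as interval-scaled.

```python
import math

def logit(pct):
    """Log-odds of a percent-correct score: the usual first step
    away from raw counts toward an interval-scaled measure."""
    p = pct / 100.0
    return math.log(p / (1.0 - p))

# Three equal 10-point gains in percent correct...
for a, b in [(50, 60), (60, 70), (80, 90)]:
    # ...are unequal gains on the log-odds scale.
    print(f"{a}% -> {b}%: gain = {logit(b) - logit(a):.2f} logits")
```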

The reproducible pattern of evidence on which decisions are based is presumed to exist between the measures, not within them. In other words, the focus is on the group or population statistics, not on the individual measures. Attention is typically focused on the tip of the iceberg, the score or percentage, not on the much larger, but hidden, mass of information beneath it. Evidence is presumed sufficient to the task when the differences between groups of scores are of consistent magnitude, but is that presumption justified?

Going Past Assumptions to Testable Hypotheses

In other words, does not science require that evidence be explained by theory, and embodied in instrumentation that provides a shared medium of observation? As shown in the blue lines in the Figure below,

  • theory, whether or not it is explicitly articulated, inevitably influences both what counts as valid data and the configuration of the medium of its representation, the instrument;
  • data, whether or not it is systematically gathered and evaluated, inevitably influences both the medium of its representation, the instrument, and the implicit or explicit theory that explains its properties and justifies its applications; and
  • instruments, whether or not they are actually calibrated from a mapping of symbols and substantive amounts, inevitably influence data gathering and the image of the object explained by theory.

The rhetoric of evidence-based decision making skips over the roles of theory and instrumentation, drawing a direct line from data to decision. In leaving theory laxly formulated, we allow any story that makes a bit of sense and is communicated by someone with a bit of charm or power to carry the day. In not requiring calibrated instrumentation, we allow any data that cross the threshold into our awareness to serve as an acceptable basis for decisions.

What we want, however, is to require meaningful measures that really provide the evidence needed for instruments that exhibit invariant calibrations and for theories that provide predictive explanatory control over the variable. As shown in the Figure, we want data that push theory away from the instrument, theory that separates the data and instrument, and instruments that get in between the theory and data.

We all know to distrust too close a correspondence between theory and data, but we too rarely understand or capitalize on the role of the instrument in mediating the theory-data relation. Similarly, when the questions used as a medium for making observations are obviously biased to produce responses conforming overly closely with a predetermined result, we see that the theory and the instrument are too close for the data to serve as an effective mediator.

Finally, the situation predominating in the social sciences is one in which both construct and measurement theories are nearly nonexistent, leaving data completely dependent on the instruments they come from. In other words, because counts of correct answers or sums of ratings are mistakenly treated as measures, instruments fully determine and restrict the range of measurement to that defined by the number of items and rating categories. Once the instrument is put in play, changes to it would make new data incommensurable with old, so, to retain at least the appearance of comparability, the data structure then fully determines and restricts the instrument.
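
To see how severe this restriction is, consider a toy demonstration (hypothetical numbers, drawn from no particular study): converting percent-correct scores into log-odds units shows that the same raw gain represents very different amounts at different points of an instrument's range.

```python
import math

def logit(p):
    """Log-odds: ln(p / (1 - p))."""
    return math.log(p / (1 - p))

# The same 10-point gain in percent-correct represents very different
# amounts of the variable at different points of the score range.
for low, high in [(0.50, 0.60), (0.60, 0.70), (0.85, 0.95)]:
    print(f"{low:.0%} -> {high:.0%}: {logit(high) - logit(low):.2f} logits")
```

The compression is worst near the floor and ceiling of the instrument, exactly where counts and percentages look most decisive.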

What we want, though, is a situation in which construct and measurement theories work together to make the data autonomous of the particular instrument it came from. We want a theory that explains what is measured well enough for us to be able to modify existing instruments, or create entirely new ones, that give the same measures for the same amounts as the old instruments. We want to be able to predict item calibrations from the properties of the items, we want to obtain the same item calibrations across data sets, and we want to be able to predict measures on the basis of the observed responses (data) no matter which items or instrument was used to produce them.
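
Concretely, these requirements are embodied in the dichotomous Rasch model (Rasch, 1960), cited below, in which the probability of a correct response depends only on the difference between the person measure $\beta_n$ and the item calibration $\delta_i$:

$$
\Pr\{X_{ni} = 1\} = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)},
\qquad
\ln \frac{\Pr\{X_{ni} = 1\}}{\Pr\{X_{ni} = 0\}} = \beta_n - \delta_i .
$$

Because persons and items enter the model only through their difference, comparisons between persons are, in principle, independent of which items happen to be used, and comparisons between items are independent of which persons respond: the separability that turns instrument-independence from an article of faith into a testable hypothesis.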

Most importantly, we want a theory and practice of measurement that allows us to take missing data into account by providing us with the structural invariances we need as media for predicting the future from the past. As Ben Wright (1997, p. 34) said, any data analysis method that requires complete data to produce results disqualifies itself automatically as a viable basis for inference, because we never have complete data. Any practical system of measurement has to be positioned so as to be ready to receive, process, and incorporate all of the data we have yet to gather. This goal is accomplished to varying degrees in Rasch measurement (Rasch, 1960; Burdick, Stone, & Stenner, 2006; Dawson, 2004). Stenner and colleagues (Stenner, Burdick, Sanford, & Burdick, 2006) trace a trajectory of increasing degrees to which predictive theory is employed in contemporary measurement practice.
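
A minimal sketch of what this readiness for incomplete data looks like in practice (my own illustration, with hypothetical item calibrations; a production implementation would do much more, for instance handling perfect and zero scores, whose maximum likelihood estimates are infinite): given items calibrated to a common scale, each person's measure is estimated from whatever responses happen to be available.

```python
import math

def rasch_person_measure(responses, calibrations, tol=1e-6, max_iter=100):
    """Maximum-likelihood person measure (in logits) under the dichotomous
    Rasch model, using only the items actually answered (None = missing).
    Item calibrations are taken as known, e.g., from a calibrated bank."""
    pairs = [(x, d) for x, d in zip(responses, calibrations) if x is not None]
    theta = 0.0
    for _ in range(max_iter):
        probs = [1.0 / (1.0 + math.exp(d - theta)) for _, d in pairs]
        residual = sum(x for x, _ in pairs) - sum(probs)   # observed - expected
        information = sum(p * (1.0 - p) for p in probs)    # Fisher information
        step = residual / information                      # Newton-Raphson step
        theta += step
        if abs(step) < tol:
            break
    return theta

# A hypothetical bank of five calibrated items (difficulties in logits).
bank = [-1.5, -0.5, 0.0, 0.5, 1.5]

# Two persons answer different, overlapping subsets of the bank; both still
# receive measures on the same scale, with no need for complete data.
print(rasch_person_measure([1, 1, None, 0, None], bank))
print(rasch_person_measure([None, 1, 1, 1, 0], bank))
```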

The explanatory and predictive power of theory is embodied in instruments that focus attention on recording observations of salient phenomena. These observations become data that inform the calibration of instruments, which then are used to gather further data that can be used in practical applications and in checks on the calibrations and the theory.

“Nothing is so practical as a good theory” (Lewin, 1951, p. 169). Good theory makes it possible to create symbolic representations of things that are easy to think with. To facilitate clear thinking, our words, numbers, and instruments must be transparent. We have to be able to look right through them at the thing itself, with no concern as to distortions introduced by the instrument, the sample, the observer, the time, the place, etc. This happens only when the structure of the instrument corresponds with invariant features of the world. And where words effect this transparency to an extent, it is realized most completely when we can measure in ways that repeatedly give the same results for the same amounts in the same conditions no matter which instrument, sample, operator, etc. is involved.

Where Might Full Mathematization Lead?

The attainment of mathematical transparency in measurement is remarkable for the way it focuses attention and constrains the imagination. It is essential to appreciate the context in which this focusing occurs, as popular opinion is at odds with historical research in this regard. Over the last 60 years, historians of science have come to vigorously challenge the widespread assumption that technology is a product of experimentation and/or theory (Kuhn, 1961/1977; Latour, 1987, 2005; Maas, 2001; Mendelsohn, 1992; Rabkin, 1992; Schaffer, 1992; Heilbron, 1993; Hankins & Silverman, 1999; Baird, 2002). Neither theory nor experiment typically advances until a key technology is widely available to end users in applied and/or research contexts. Rabkin (1992) documents multiple roles played by instruments in the professionalization of scientific fields. Thus, “it is not just a clever historical aphorism, but a general truth, that ‘thermodynamics owes much more to the steam engine than ever the steam engine owed to thermodynamics’” (Price, 1986, p. 240).

The prior existence of the relevant technology comes to bear on theory and experiment again in the common, but mistaken, assumption that measures are made and experimentally compared in order to discover scientific laws. History shows that measures are rarely made until the relevant law is effectively embodied in an instrument (Kuhn, 1961/1977, pp. 218-9): “…historically the arrow of causality is largely from the technology to the science” (Price, 1986, p. 240). Instruments do not provide just measures; rather they produce the phenomenon itself in a way that can be controlled, varied, played with, and learned from (Heilbron, 1993, p. 3; Hankins & Silverman, 1999; Rabkin, 1992). The term “technoscience” has emerged as an expression denoting recognition of this priority of the instrument (Baird, 1997; Ihde & Selinger, 2003; Latour, 1987).

Because technology often dictates what, if any, phenomena can be consistently produced, it constrains experimentation and theorizing by focusing attention selectively on reproducible, potentially interpretable effects, even when those effects are not well understood (Ackermann, 1985; Daston & Galison, 1992; Ihde, 1998; Hankins & Silverman, 1999; Maasen & Weingart, 2001). Criteria for theory choice in this context stem from competing explanatory frameworks’ experimental capacities to facilitate instrument improvements, prediction of experimental results, and gains in the efficiency with which a phenomenon is produced.

In this context, the relatively recent introduction of measurement models requiring additive, invariant parameterizations (Rasch, 1960) provokes speculation as to the effect on the human sciences that might be wrought by the widespread availability of consistently reproducible effects expressed in common quantitative languages. Paraphrasing Price's comment on steam engines and thermodynamics, might it one day be said that as yet unforeseeable advances in reading theory will owe far more to the Lexile analyzer (Stenner et al., 2006) than ever the Lexile analyzer owed to reading theory?

Kuhn (1961/1977) speculated that the second scientific revolution of the early- to mid-nineteenth century followed in large part from the full mathematization of physics, i.e., the emergence of metrology as a professional discipline focused on providing universally accessible, theoretically predictable, and evidence-supported uniform units of measurement (Roche, 1998). Kuhn (1961/1977, p. 220) specifically suggests that a number of vitally important developments converged about 1840 (also see Hacking, 1983, p. 234). This was the year in which the metric system was formally instituted in France after 50 years of development (it had already been obligatory in other nations for 20 years at that point), and metrology emerged as a professional discipline (Alder, 2002, pp. 328, 330; Heilbron, 1993, p. 274; Kula, 1986, p. 263). Daston (1992) independently suggests that the concept of objectivity came of age in the period from 1821 to 1856, and gives examples illustrating the way in which the emergence of strong theory, shared metric standards, and experimental data converged in a context of particular social mores to winnow out unsubstantiated and unsupportable ideas and contentions.

Might a similar revolution and new advances in the human sciences follow from the introduction of evidence-based, theoretically predictive, instrumentally mediated, and mathematical uniform measures? We won’t know until we try.

Figure. The Dialectical Interactions and Mutual Mediations of Theory, Data, and Instruments

Acknowledgment. These ideas have been drawn in part from long consideration of many works in the history and philosophy of science, primarily Ackermann (1985), Ihde (1991), and various works of Martin Heidegger, as well as key works in measurement theory and practice. A few obvious points of departure are listed in the references.

References

Ackermann, J. R. (1985). Data, instruments, and theory: A dialectical approach to understanding science. Princeton, New Jersey: Princeton University Press.

Alder, K. (2002). The measure of all things: The seven-year odyssey and hidden error that transformed the world. New York: The Free Press.

Aldrich, J. (1989). Autonomy. Oxford Economic Papers, 41, 15-34.

Andrich, D. (2004, January). Controversy and the Rasch model: A characteristic of incompatible paradigms? Medical Care, 42(1), I-7–I-16.

Baird, D. (1997, Spring-Summer). Scientific instrument making, epistemology, and the conflict between gift and commodity economics. Techné: Journal of the Society for Philosophy and Technology, 3-4, 25-46. Retrieved 08/28/2009, from http://scholar.lib.vt.edu/ejournals/SPT/v2n3n4/baird.html.

Baird, D. (2002, Winter). Thing knowledge – function and truth. Techné: Journal of the Society for Philosophy and Technology, 6(2). Retrieved 08/19/2003, from http://scholar.lib.vt.edu/ejournals/SPT/v6n2/baird.html.

Burdick, D. S., Stone, M. H., & Stenner, A. J. (2006). The Combined Gas Law and a Rasch Reading Law. Rasch Measurement Transactions, 20(2), 1059-60 [http://www.rasch.org/rmt/rmt202.pdf].

Carroll-Burke, P. (2001). Tools, instruments and engines: Getting a handle on the specificity of engine science. Social Studies of Science, 31(4), 593-625.

Daston, L. (1992). Baconian facts, academic civility, and the prehistory of objectivity. Annals of Scholarship, 8, 337-363. (Rpt. in L. Daston, (Ed.). (1994). Rethinking objectivity (pp. 37-64). Durham, North Carolina: Duke University Press.)

Daston, L., & Galison, P. (1992, Fall). The image of objectivity. Representations, 40, 81-128.

Dawson, T. L. (2004, April). Assessing intellectual development: Three approaches, one sequence. Journal of Adult Development, 11(2), 71-85.

Galison, P. (1999). Trading zone: Coordinating action and belief. In M. Biagioli (Ed.), The science studies reader (pp. 137-160). New York, New York: Routledge.

Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge: Cambridge University Press.

Hankins, T. L., & Silverman, R. J. (1999). Instruments and the imagination. Princeton, New Jersey: Princeton University Press.

Heelan, P. A. (1983, June). Natural science as a hermeneutic of instrumentation. Philosophy of Science, 50, 181-204.

Heelan, P. A. (1998, June). The scope of hermeneutics in natural science. Studies in History and Philosophy of Science Part A, 29(2), 273-98.

Heidegger, M. (1977). Modern science, metaphysics, and mathematics. In D. F. Krell (Ed.), Basic writings [reprinted from M. Heidegger, What is a thing? South Bend, Regnery, 1967, pp. 66-108] (pp. 243-282). New York: Harper & Row.

Heidegger, M. (1977). The question concerning technology. In D. F. Krell (Ed.), Basic writings (pp. 283-317). New York: Harper & Row.

Heilbron, J. L. (1993). Weighing imponderables and other quantitative science around 1800. Historical Studies in the Physical and Biological Sciences, 24(Supplement, Part I), 1-337.

Hessenbruch, A. (2000). Calibration and work in the X-ray economy, 1896-1928. Social Studies of Science, 30(3), 397-420.

Ihde, D. (1983). The historical and ontological priority of technology over science. In D. Ihde, Existential technics (pp. 25-46). Albany, New York: State University of New York Press.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology. (The Indiana Series in the Philosophy of Technology). Bloomington, Indiana: Indiana University Press.

Ihde, D. (1998). Expanding hermeneutics: Visualism in science. (Northwestern University Studies in Phenomenology and Existential Philosophy). Evanston, Illinois: Northwestern University Press.

Ihde, D., & Selinger, E. (Eds.). (2003). Chasing technoscience: Matrix for materiality. (Indiana Series in Philosophy of Technology). Bloomington, Indiana: Indiana University Press.

Kuhn, T. S. (1961/1977). The function of measurement in modern physical science. Isis, 52(168), 161-193. (Rpt. In T. S. Kuhn, The essential tension: Selected studies in scientific tradition and change (pp. 178-224). Chicago: University of Chicago Press, 1977).

Kula, W. (1986). Measures and men (R. Szreter, Trans.). Princeton, New Jersey: Princeton University Press (Original work published 1970).

Lapre, M. A., & Van Wassenhove, L. N. (2002, October). Learning across lines: The secret to more efficient factories. Harvard Business Review, 80(10), 107-11.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. New York, New York: Cambridge University Press.

Latour, B. (2005). Reassembling the social: An introduction to Actor-Network-Theory. (Clarendon Lectures in Management Studies). Oxford, England: Oxford University Press.

Lewin, K. (1951). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.). New York: Harper & Row.

Maas, H. (2001). An instrument can make a science: Jevons’s balancing acts in economics. In M. S. Morgan & J. Klein (Eds.), The age of economic measurement (pp. 277-302). Durham, North Carolina: Duke University Press.

Maasen, S., & Weingart, P. (2001). Metaphors and the dynamics of knowledge. (Vol. 26. Routledge Studies in Social and Political Thought). London: Routledge.

Mendelsohn, E. (1992). The social locus of scientific instruments. In R. Bud & S. E. Cozzens (Eds.), Invisible connections: Instruments, institutions, and science (pp. 5-22). Bellingham, WA: SPIE Optical Engineering Press.

Polanyi, M. (1964/1946). Science, faith and society. Chicago: University of Chicago Press.

Price, D. J. d. S. (1986). Of sealing wax and string. In Little Science, Big Science–and Beyond (pp. 237-253). New York, New York: Columbia University Press.

Rabkin, Y. M. (1992). Rediscovering the instrument: Research, industry, and education. In R. Bud & S. E. Cozzens (Eds.), Invisible connections: Instruments, institutions, and science (pp. 57-82). Bellingham, Washington: SPIE Optical Engineering Press.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Reprint, with Foreword and Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedagogiske Institut.

Roche, J. (1998). The mathematics of measurement: A critical history. London: The Athlone Press.

Schaffer, S. (1992). Late Victorian metrology and its instrumentation: A manufactory of Ohms. In R. Bud & S. E. Cozzens (Eds.), Invisible connections: Instruments, institutions, and science (pp. 23-56). Bellingham, WA: SPIE Optical Engineering Press.

Stenner, A. J., Burdick, H., Sanford, E. E., & Burdick, D. S. (2006). How accurate are Lexile text measures? Journal of Applied Measurement, 7(3), 307-22.

Thurstone, L. L. (1959). The measurement of values. Chicago: University of Chicago Press, Midway Reprint Series.

Wright, B. D. (1997, Winter). A history of social science measurement. Educational Measurement: Issues and Practice, 16(4), 33-45, 52 [http://www.rasch.org/memo62.htm].

Creative Commons License
LivingCapitalMetrics Blog by William P. Fisher, Jr., Ph.D. is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Based on a work at livingcapitalmetrics.wordpress.com.
Permissions beyond the scope of this license may be available at http://www.livingcapitalmetrics.com.

Contrasting Network Communities: Transparent, Efficient, and Invested vs Not

November 30, 2009

Different networks and different communities have different amounts of social capital going for them. As was originally described by Putnam (1993), some networks are organized hierarchically in a command-and-control structure. The top layers here are the autocrats, nobility, or bosses who run the show. Rigid conformity is the name of the game to get by. Those in power can make or break anyone. Market transactions in this context are characterized by the thumb on the scale, the bribe, and the kickback. Everyone is watching out for themselves.

At the opposite extreme are horizontal networks characterized by altruism and a sense that doing what’s good for everyone will eventually come back around to be good for me. The ideal here is a republic in which the law rules and everyone has the same price of entry into the market.

What I’d like to focus on is what’s going on in these horizontal networks. What makes one a more tightly knit community than another? The closeness people feel should not be oppressive, claustrophobic, or smothering. I’m thinking of community relations in which people feel safe, not just personally but creatively. How and when are diversity, dissent, and innovation not just tolerated but celebrated? What makes it possible for a market in new ideas and new ways of doing things to take off?

And how does a community like this differ from another one that is just as horizontally structured but that does not give rise to anything at all creative?

The answers to all of these questions seem to me to hinge on the transparency, efficiency, and volume of the investments made in the relationships constituting the networks. What kinds of investments? All kinds: emotional, social, intellectual, financial, spiritual, etc. Opaque, inefficient, and low-volume investments do not produce the thick, complex relationships that we can see through, that are well lubricated, and that are reinforced by frequent visits.

Putnam (1993, p. 183) has a very illuminating way of putting this: “The harmonies of a choral society illustrate how voluntary collaboration can create value that no individual, no matter how wealthy, no matter how wily, could produce alone.” Social capital is the coordination of thought and behavior that embodies trust, good will, and loyalty. Social capital is at play when an individual can rely on a thickly elaborated network of largely unknown others who provide clean water, nutritious food, effective public health practices (sanitation, restaurant inspections, and sewers), fire and police protection, a fair and just judiciary, electrical and information technology, affordably priced consumer goods, medical care, and who ensure the future by educating the next generation.

Life would be incredibly difficult if we could not trust others to obey traffic laws, or to do their jobs without taking unfair advantage of access to special knowledge (credit card numbers, cash, inside information), etc. But beyond that, we gain huge efficiencies in our lives because of the way our thoughts and behaviors are harmonized and coordinated on mass scales. We simply do not have to worry about millions of things that are being taken care of, things that would completely freeze us in our tracks if they weren’t being done.

Thus, later on the same page, Putnam also observes that, “For political stability, for government effectiveness, and even for economic progress social capital may be even more important than physical or human capital.” And so, he says, “Where norms and networks of civic engagement are lacking, the outlook for collective action appears bleak.”

But what if two communities have identical norms and networks, but differ in one crucial way: one relies on everyday language, used in conversations and written messages, to get things done, while the other has a new language, one with a heightened capacity for transparent meaning and efficient precision? Which one is likely to be more creative and innovative?

The question can be re-expressed in terms of Gladwell’s (2000) sense of the factors contributing to reaching a tipping point: the mavens, connectors, salespeople, and the stickiness of the messages. What if the mavens in two communities are equally knowledgeable, the connectors just as interconnected, and the salespeople just as persuasive, but messages are dramatically less sticky in one community than the other? In one network of networks, saying things once gets the right response 99% of the time, but in the other things have to be repeated seven times before the right response comes back even 50% of the time, and hardly anyone makes the effort to repeat things that many times. Guess which community will be safer, more creative, and thriving?
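
Putting rough numbers on this contrast (a toy probability model of my own, not anything in Gladwell): treat each transmission of a message as an independent trial and back out the per-transmission success rate each scenario implies.

```python
# Community A: one transmission gets the right response 99% of the time.
q_a = 0.99

# Community B: seven transmissions yield the right response only 50% of
# the time, so solve 1 - (1 - q_b)**7 = 0.5 for q_b.
q_b = 1 - 0.5 ** (1 / 7)  # roughly 0.094 per transmission

# Expected transmissions until the first right response: 1/q (geometric).
print(f"Community A: {1 / q_a:.2f} transmissions per coordinated act")
print(f"Community B: {1 / q_b:.2f} transmissions per coordinated act")
```

On these assumptions, coordination in the second community costs roughly ten times the communicative effort of the first, which is one way of quantifying why sticky messages matter.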

All of this, of course, is just another way to bring out the importance of improved measurement for improving network quality and community life. As Surowiecki put it in The Wisdom of Crowds, the SARS virus was sequenced in a matter of weeks by a network of labs sharing common technical standards; without those standards, any one of them working alone would have needed far longer to do the same job. The messages these labs sent back and forth had an elevated stickiness index because they were more transparently and efficiently codified than messages were back in the days before the technical standards were created.

So the question emerges, given the means to create common languages with enhanced stickiness properties, such as we have in advanced measurement models, what kinds of creativity and innovation can we expect when these languages are introduced in the domains of human, social, and natural capital markets? That is the question of the age, it seems to me…

Gladwell, M. (2000). The tipping point: How little things can make a big difference. Boston: Little, Brown, and Company.

Putnam, R. D. (1993). Making democracy work: Civic traditions in modern Italy. Princeton, New Jersey: Princeton University Press.

Surowiecki, J. (2004). The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies and nations. New York: Doubleday.

Al Gore: Marshalling the Collective Will is NOT the Problem–The Problem is the Problem!

November 22, 2009

In his new book, former vice-president Al Gore says we have in hand all the tools we need to solve the climate change crises, except the collective will to do anything about them. I respectfully beg to differ. Finding the will is not the problem. We already have it, and we have it in volumes sufficient to the task. Gore is also wrong in claiming we have the tools we need. There are entire classes of scientific and economic tools that we are missing. It is because we lack the right tools that we are unable to focus and channel our will for solutions.

The short version of my argument is that we don’t have scientific, universally uniform, and ubiquitously used metrics for measuring overall environmental quality. Because we don’t have the measures, we can’t and don’t effectively and efficiently manage our natural capital and environmental assets. Without metrics akin to barrels of oil or bushels of grain, we don’t have markets for matching environmental quality supply with demand for it.

Without tools as essential as metrics and markets, we can’t harness our existing will to improve our relationship with the earth. What will do we have, you might ask? Our collective will is expressed in the profit motive. What we need to do is set up metrics and markets to harness the energy of the profit motive. We need to create systems for trading natural capital (and human and social capital) so that we generate real wealth and drive happiness indexes north by realizing human potential, building thriving communities, and nurturing sustainable environments. The profit motive is not our enemy. It is the source of energy we need to deal with the multiple crises we face: human, social, and environmental.

Now for the long version of my argument. The problem is the problem. We restrict our options for solving problems by the way we frame the issue. Einstein supposedly pointed out that big problems, ones framed at a level where they define the entire paradigmatic orientation to a class of smaller, solvable problems, cannot be solved from within the paradigm they emerge from. We tend to define problems from the modern point of view, in a Cartesian fashion, from the point of view of a subject that is separate from, and in no way involved in the construction of, the objects it encounters. What I want to point out is that it is this Cartesian orientation to problem definition that is itself the problem!

Set aside your opinions on the basic issues concerning climate change, and think about what’s going on. It is undeniable that human activities are implicated in changes to the environment, and that we have to learn to manage our effects on the planet, or they will feed back on us in potentially harmful ways. This is the nature of life in the flux and flow of ecological relationships. It is one of many ways in which observers are inherently implicated in constructing what is observed, which is recognized as holding true as much in physics as in anthropology. These are uncontroversial facts, quite apart from any concern with climate change.

And what these feedback loops imply, as has indeed already been pointed out by generations of scholars and thinkers, is that there is no such thing as a pure Cartesian subject separate from its objects. We shape the things in our world, and those things, in turn, shape us. Subjects and objects are mutually implicated. All observers are participant observers. It is inevitable that what we do and think will change the world, and the new world will require us to think and act differently.

The plethora of environmental crises we face is therefore situated in a new non-Cartesian paradigm. It is a fundamental error of the first order to approach a non-Cartesian problem as though it were merely another variation on the usual kind of thing that can be addressed fairly well from the Cartesian dualist perspective. When we think, as Al Gore does, that we should be socialistically organizing resources for a centrally administered five-year plan of attack on environmental problems, we are missing the point.

This approach can be put to work only in terms of an authoritarian form of control directed by a dictatorial panel of experts, a military junta, or a self-appointed czar. Framed from a Cartesian point of view, no democratic process will ever compel voters to do what needs to be done. As was illustrated so dramatically by the fall of Communism, the socialistic manipulation of the concrete particulars of human, social, and environmental problems is unsustainable and socially irresponsible.

The fact is that non-Cartesian problems are only made worse when we try to solve them with Cartesian solutions. This is why non-Cartesian problems are often described by philosophers as “hermeneutic,” a word that derives from the name of the Greek god Hermes, known by the ancient Romans as Mercury. Like liquid mercury, non-Cartesian problems merely split and multiply when we grasp at them clumsily, ignoring our own involvement in the creation of the problem.

So we can go on trying to herd cats or nail jello to the wall, but to be part of the solution and not just another way of being part of the problem, we need to set up systems of thought and behavior that are not internally inconsistent and self-contradictory. No matter what we do, if we keep on marshalling resources to attack problems in deliberate and systematic ignorance of this cross-paradigmatic dissonance, we can only make matters worse.

What else can be done? Just what does it mean to go with the flow of the mutual implication of subject and object? How can we explicitly model the problem to include the participant observer?

“The medium is the message,” to quote Marshall McLuhan. As was pointed out so humorously by Woody Allen in his film, “Annie Hall,” this expression is often repeated and often misunderstood. Though all can see that the news and entertainment media are ubiquitous, the meaning of our captivation with the media of creative expression has not yet been clarified sufficiently well for generalized understanding.

Significant advances have occurred in recent years, however. The media we are captivated by define and limit not only how and what we communicate, but who and what we have been, are, and could be. Depending on the quality of their transparency and of the biases that color them, media convey moral, human, and economic values of various kinds. The media through which we express values include every conceivable technology, from alphabets and phonemes to buildings, clothing, and food preparation, to musical instruments, and the creations of art and science.

Media are at the crux of the lesson we have to learn if we are to frame the problems of environmental management so that we are living solutions, not exacerbating problems. Media of all kinds, from pen and paper to television to the Internet, are fundamentally technical. In fact, media are the original technologies. The words “text,” “textile,” and “technique” all trace back, through the Latin texere (to weave) and the Greek techne (art, craft, making), to the same Indo-European root. Technology is our primary medium of shared meaning. Technology embodies the meanings we create and distributes their values across society and around the world.

What we need to do to effect non-Cartesian solutions then is to dwell deeply with our shared meanings and values, and find new ways of living them out, ways that embody the unity of subject and object, problem and solution. Nice rhetoric, you might say, but what does it mean? What is its practical consequence?

Put in academic terms, the pragmatic issue concerns the nature of technology and how it provides measures of reality serving as the media through which we experience the world in terms of shared universals. Primary sources here include the works of writers like Latour, Wise, Jasanoff, Knorr-Cetina, Schaffer, Ihde, Heidegger, and others cited in previous posts in this blog, and in my published work.

To cut more directly to the chase, we can start to think of language and technology as embodying problem-solution unities. Words and tools are situated within ecologies of relationships that define their meanings and functions. We need to be more sensitive to the way meanings and values become embodied in language and technologies, and are then distributed across far-flung networks to coordinate collectively harmonized thought and action.

To get right down to where this all is leading, though it is probably far from obvious, the appropriate non-Cartesian orientation to the problems of environmental management raised in Al Gore’s new book ultimately culminates in creation of the technical networks through which we distribute measures of what we want to manage. These networks comprise the ecologies of meaning and values that we inhabit. Not coincidentally, they also create the markets in which human, social, and natural capital can be efficiently and effectively traded.

When these networks and markets are created, finding the collective will to deal with the environmental challenges we face will be the least of our problems. The profit motive is an exceptionally strong force. What we ought to be doing is figuring out how to harness it as the engine of social change. This contrasts diametrically with Al Gore’s perspective, which treats the profit motive as part of the problem.

Technical networks of instruments traceable to reference standards, and markets for the exchange of the values measured by those instruments, are what we ought to be focusing on. The previous post in this blog proposes an Intangible Assets Metric System, and is related to earlier posts on the role of common currencies for the exchange of meaningful quantitative values in creating functional markets for human, social, and natural capital. What we need are these infrastructural supports for creating the efficient markets in which demand for environmental solutions can be matched with the supply of those solutions. The failure of socialism is testimony to the futility of trying to man-handle our way forward by brute force.

Of course, I will continue living out my life’s mission and passion by continuing to elaborate variations, explanations, and demonstrations of how this could be so….
