Archive for the ‘philosophy’ Category

Economy of language, Eros, meaning, the public, and its problems

July 11, 2017

The medium is the message. The more transparent the medium is, the more seductive the messages expressed in it. The seductiveness of numbers stems from their roots in the mathematical quality of all thinking: the way that signs are used as the media of concept-thing relations. Our captivation with numbers is entirely embedded in the allure of language, which stems in large part from its economy: knowing how to read, write, speak, and listen saves us the trouble of re-inventing words and concepts for ourselves, and of having to translate each other’s private languages. The problem is, of course, that having words for things and sharing them by no means assures understanding. But when it works, it really works, as the history of science shows.

Seductive enthrallment with meaning and beauty marks the difference between the modern Cartesian dualist world view and the emerging unmodern nondualist world view. This is the whole point of taking up Heidegger’s sense of method as meta-odos. As Plato shows, Socrates’ recounting of the myth of Eros, told to him by Diotima, conveys how captivation with beauty embodies the opposites of wealth and poverty in a simultaneous possession and absence, neither of which is ever complete.

The evolutionary/developmental paradigm shift taking place will transform everything by institutionalizing in every area of life an order of magnitude increase in the complexity of relationships, and a corresponding increase in the simplicity with which those relationships can be managed. The compelling absorption into the flow of meaning that necessarily informs discourse but currently functions as an unacknowledged assumption informing operations will itself be brought into view and will become an object of operations.

As Dewey understood, public consciousness of an issue or set of issues, and the will to take them on, emerges when existing institutions fail. We are certainly living in a time in which our political, economic, social, educational, medical, legal, environmental, etc. institutions have been failing to live up to their responsibilities for quite a number of years. The efforts of the public to address these failures have been obstructed by the lack of the media needed for integrating the complex, multilevel, and discontinuous opposites of harmony and dissonance, agreement and dissent, that structure a binding, coherent culture.

Science is nothing but an extension of everyday reasoning. Instead of imitating the natural sciences, the social sciences need to focus on how science extends the complex cognitive ecologies of language. As we figure that out and get these metasystems in place, we will simultaneously create the media the public needs to find its voice and organize itself to meet the challenges of how to build new institutions capable of successfully countering human suffering, social discontent, and environmental degradation.

Six Classes of Results Supporting the Measurability of Human Functioning and Capability

April 12, 2014

Another example of high-level analysis that suffers from a lack of input from state-of-the-art measurement arises in Nussbaum (1997, p. 1205), where the author remarks that it is now a matter of course, in development economics, “to recognize distinct domains of human functioning and capability that are not commensurable along a single metric, and with regard to which choice and liberty of agency play a fundamental structuring role.” Though Nussbaum (2011, pp. 58-62) has lately given a more nuanced account of the challenges of measurement relative to human capabilities, appreciation of the power and flexibility of contemporary measurement models, methods, and instruments remains lacking. For a detailed example of the complexities and challenges that must be addressed in the context of global human development, which is Nussbaum’s area of interest, see Fisher (2011).

Though there are indeed domains of human functioning and capability that are not commensurable along a single metric, they are not the ones referred to by Nussbaum or the texts she cites. On the contrary, six different approaches to establishing the measurability of human functioning and capability have been explored and proven to provide, especially in aggregate, a substantial basis for theory and practice (modified from Fisher, 2009, pp. 1279-1281). These six classes of results speak to the abstract, mathematical side of the paradox noted by Ricoeur (see previous post here) concerning the need to simultaneously accept roles for abstract ideal global universals and concrete local historical contexts in strategic planning and thinking. The six classes of results are:

  1. Mathematical proofs of the necessity and sufficiency of test and survey scores for invariant measurement in the context of Rasch’s probabilistic models (Andersen, 1977, 1999; Fischer, 1981; Newby, Conner, Grant, and Bunderson, 2009; van der Linden, 1992).
  2. Reproduction of physical units of measurement (centimeters, grams, etc.) from ordinal observations (Choi, 1997; Moulton, 1993; Pelton and Bunderson, 2003; Stephanou and Fisher, 2013).
  3. The common mathematical form of the laws of nature and Rasch models (Rasch, 1960, pp. 110-115; Fisher, 2010; Fisher and Stenner, 2013).
  4. Multiple independent studies of the same constructs on different (and common) samples using different (and the same) instruments intended to measure the same thing converge on common units, defining the same objects, substantiating theory, and supporting the viability of standardized metrics (Fisher, 1997a, 1997b, 1999, etc.).
  5. Thousands of peer-reviewed publications in hundreds of scientific journals provide a wide-ranging and diverse array of supporting evidence and theory.
  6. Analogous causal attributions and theoretical explanatory power can be created in both natural and social science contexts (Stenner, Fisher, Stone, and Burdick, 2013).
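
The invariance at stake in these results can be illustrated with the dichotomous Rasch model itself, in which the probability of a correct response depends only on the difference between person ability and item difficulty. A minimal Python sketch (the parameter values are illustrative, not drawn from any of the cited studies):

```python
import math

def rasch_p(theta, b):
    """Probability of success in the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

# The log-odds of success is theta - b, so the difference in log-odds
# between two items is b2 - b1 for every person: comparisons of items
# are invariant over persons, and vice versa (specific objectivity).
b1, b2 = -0.5, 1.0
diffs = [logit(rasch_p(theta, b1)) - logit(rasch_p(theta, b2))
         for theta in (-1.0, 0.0, 2.0)]
# every entry of diffs equals b2 - b1 = 1.5, whatever the ability
```

This separability of person and item parameters is what underwrites the sufficiency and invariance results cited in class 1 above.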

What we have here, in sum, is a combination of Greek axiomatic and Babylonian empirical algorithms, in accord with Toulmin’s (1961, pp. 28-33) sense of the contrasting principled bases for scientific advancement. Feynman (1965, p. 46) called for less of a focus on the Greek chain of reasoning approach, as it is only as strong as its weakest link, whereas the Babylonian algorithms are akin to a platform with enough supporting legs that one or more might fail without compromising its overall stability. The variations in theory and evidence under these six headings provide ample support for the conceptual and practical viability of metrological systems of measurement in education, health care, human resource management, sociology, natural resource management, social services, and many other fields. The philosophical critique of any type of economics will inevitably be wide of the mark if uninformed about these accomplishments in the theory and practice of measurement.


Andersen, E. B. (1977). Sufficient statistics and latent trait models. Psychometrika, 42(1), 69-81.

Andersen, E. B. (1999). Sufficient statistics in educational measurement. In G. N. Masters & J. P. Keeves (Eds.), Advances in measurement in educational research and assessment (pp. 122-125). New York: Pergamon.

Choi, S. E. (1997). Rasch invents “ounces.” Rasch Measurement Transactions, 11(2), 557.

Feynman, R. (1965). The character of physical law. Cambridge, Massachusetts: MIT Press.

Fischer, G. H. (1981). On the existence and uniqueness of maximum-likelihood estimates in the Rasch model. Psychometrika, 46(1), 59-77.

Fisher, W. P., Jr. (1997a). Physical disability construct convergence across instruments: Towards a universal metric. Journal of Outcome Measurement, 1(2), 87-113.

Fisher, W. P., Jr. (1997b). What scale-free measurement means to health outcomes research. Physical Medicine & Rehabilitation State of the Art Reviews, 11(2), 357-373.

Fisher, W. P., Jr. (1999). Foundations for health status metrology: The stability of MOS SF-36 PF-10 calibrations across samples. Journal of the Louisiana State Medical Society, 151(11), 566-578.

Fisher, W. P., Jr. (2009). Invariance and traceability for measures of human, social, and natural capital: Theory and application. Measurement, 42(9), 1278-1287.

Fisher, W. P., Jr. (2010). The standard model in the history of the natural sciences, econometrics, and the social sciences. Journal of Physics: Conference Series, 238(1),

Fisher, W. P., Jr. (2011). Measuring genuine progress by scaling economic indicators to think global & act local: An example from the UN Millennium Development Goals project. Retrieved 18 January 2011, from Social Science Research Network:

Fisher, W. P., Jr., & Stenner, A. J. (2013). On the potential for improved measurement in the human and social sciences. In Q. Zhang & H. Yang (Eds.), Pacific Rim Objective Measurement Symposium 2012 Conference Proceedings (pp. 1-11). Berlin, Germany: Springer-Verlag.

Moulton, M. (1993). Probabilistic mapping. Rasch Measurement Transactions, 7(1), 268.

Newby, V. A., Conner, G. R., Grant, C. P., & Bunderson, C. V. (2009). The Rasch model and additive conjoint measurement. Journal of Applied Measurement, 10(4), 348-354.

Nussbaum, M. (1997). Flawed foundations: The philosophical critique of (a particular type of) economics. University of Chicago Law Review, 64, 1197-1214.

Nussbaum, M. (2011). Creating capabilities: The human development approach. Cambridge, MA: The Belknap Press.

Pelton, T., & Bunderson, V. (2003). The recovery of the density scale using a stochastic quasi-realization of additive conjoint measurement. Journal of Applied Measurement, 4(3), 269-281.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Reprint, with Foreword and Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedagogiske Institut.

Rasch, G. (1977). On specific objectivity: An attempt at formalizing the request for generality and validity of scientific statements. Danish Yearbook of Philosophy, 14, 58-94.

Stenner, A. J., Fisher, W. P., Jr., Stone, M. H., & Burdick, D. S. (2013). Causal Rasch models. Frontiers in Psychology: Quantitative Psychology and Measurement, 4(536), 1-14.

Stephanou, A., & Fisher, W. P., Jr. (2013). From concrete to abstract in the measurement of length. Journal of Physics Conference Series, 459,

Toulmin, S. E. (1961). Foresight and understanding: An enquiry into the aims of science. London, England: Hutchinson.

van der Linden, W. J. (1992). Sufficient and necessary statistics. Rasch Measurement Transactions, 6(3), 231.


Convergence, Divergence, and the Continuum of Field-Organizing Activities

March 29, 2014

So what are the possibilities for growing out green shoots from the seeds and roots of an ethical orientation to keeping the dialogue going? What kinds of fruits might be expected from cultivating a common ground for choosing discourse over violence? What are the consequences for practice of planting this seed in this ground?

The same participant in the conversation earlier this week at Convergence XV who spoke of the peace building processes taking place around the world also described a developmental context for these issues of mutual understanding. The work of Theo Dawson and her colleagues (Dawson, 2002a, 2002b, 2004; Dawson, Fischer, and Stein, 2006) is especially pertinent here. Their comparisons of multiple approaches to cognitive and moral development have provided clear and decisive theory, evidence, and instrumentation concerning the conceptual integrations that take place in the evolution of hierarchical complexity.

Conceptual integrations occur when previously tacit, unexamined, and assumed principles informing a sphere of operations are brought into conscious awareness and are transformed into explicit objects of new operations. Developmentally, this is the process of discovery that takes place from the earliest stages of life, in utero. Organisms of all kinds mature in a process of interaction with their environments. Young children at the “terrible two” stage, for instance, are realizing that anything they can detach from, whether by throwing or by denying (“No!”), is not part of them. Only a few months earlier, the same children were fascinated with their fingers and toes, realizing, often by putting them in their mouths, that these are parts of their own bodies.

There are as many opportunities for conceptual integrations between the ages of 21 and 99 as there are between birth and 21. Developmental differences in perspectives can make for riotously comic situations, and can also lead to conflicts, even when the participants agree on more than they disagree on. And so here we arrive at a position from which we can get a grip on how to integrate convergence and divergence in a common framework that follows from the prior post’s brief description of the ontological method’s three moments of reduction, application, and deconstruction.


Woolley and colleagues (Woolley, et al., 2010; Woolley and Fuchs, 2011) describe a continuum of five field-organizing activities categorizing the types of information needed for effective collective intelligence (Figure 1). Four of these five activities (defining, bounding, opening, and bridging) vary in the convergent versus divergent processes they bring to bear in collective thinking. Defining and bounding are convergent processes that inform judgment and decision making. These activities are especially important in the emergence of a new field or organization, when the object of interest and the methods of recognizing and producing it are in contention. Opening and bridging activities, in contrast, diverge from accepted definitions and transgress boundaries in the creative process of pushing into new areas. Undergirding the continuum as a whole is the fifth activity, grounding, which serves as a theory- and evidence-informed connection to meaningful and useful results.

There are instances in which defining and bounding activities have progressed to the point that the explanatory power of theory enables the calibration of test items from knowledge of the component parts included in those items. The efficiencies and cost reductions gained from computer-based item generation and administration are significant. Research in this area takes a variety of approaches; for more information, see Daniel and Embretson (2010), De Boeck and Wilson (2004), Stenner, et al. (2013), and others.
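As a hedged illustration of how such theory-based calibration works in principle (in the spirit of the explanatory item response models treated by De Boeck and Wilson), item difficulties can be modeled as weighted sums of the cognitive components the items contain. The components, weights, and items below are invented for the example:

```python
# Hypothetical component weights (in logits); in practice these come
# from substantive theory and empirical calibration, not assumption.
etas = {"carry": 0.8, "borrow": 1.1, "multi_step": 0.5}

def predicted_difficulty(components):
    """LLTM-style difficulty: a weighted sum of the cognitive
    components an item requires."""
    return sum(etas[c] for c in components)

# Difficulties generated from item design alone, before any data:
item_difficulties = {
    "7+5":      predicted_difficulty(["carry"]),
    "42-17":    predicted_difficulty(["borrow"]),
    "38+47-19": predicted_difficulty(["carry", "borrow", "multi_step"]),
}
```

When predicted and empirically estimated difficulties converge, new items can be generated and provisionally calibrated automatically, which is the source of the efficiencies noted above.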

The value of clear definitions and boundaries in this context stems in large part from the capacity to identify exceptions that prove (test) the rules, and that then also provide opportunities for opening and bridging. Kuhn (1961, p. 180; 1977, p. 205) noted that

To the extent that measurement and quantitative technique play an especially significant role in scientific discovery, they do so precisely because, by displaying significant anomaly, they tell scientists when and where to look for a new qualitative phenomenon.

Rasch (1960, p. 124) similarly understood that “Once a law has been established within a certain field then the law itself may serve as a tool for deciding whether or not added stimuli and/or objects belong to the original group.” Rasch gives the example of mechanical force applied to various masses with resulting accelerations, introducing the idea that one of the instruments might exert magnetic as well as mechanical force, with noticeable effects on steel masses, but not on wooden masses. Rasch suggests that exploration of these anomalies may result in the discovery of other similar instruments that vary in the extent to which they also exert the new force, with the possible consequence of discovering a law of magnetic attraction.

There has been intense interest in the assessment of divergent inconsistencies in measurement research and practice following in the wake of Rasch’s early work in psychological and social measurement (examples from a very large literature in this area include Karabatsos and Ullrich, 2002, and Smith and Plackner, 2009). Andrich, for instance, makes explicit reference to Kuhn (1961), saying, “…the function of a model for measurement…is to disclose anomalies, not merely to describe data” (Andrich, 2002, p. 352; also see Andrich, 1996, 2004, 2011). Typical software for applying Rasch models (Andrich, et al., 2013; Linacre, 2011, 2013; Wu, et al., 2007) accordingly provides many more qualitative numbers evaluating potential anomalies than quantitative measuring numbers. These qualitative numbers (digits that do not stand for something substantive that adds up in a constant unit) include uncertainty and confidence indicators that vary with sample size; mean square and standardized model fit statistics; and principal components analysis factor loadings and eigenvalues.
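The mean square fit statistics mentioned above are built from standardized residuals comparing observed responses with modeled probabilities. A simplified sketch (the responses and probabilities are invented; real software also reports standardized forms and sample-size-dependent uncertainties):

```python
def fit_mean_squares(responses, probs):
    """Outfit and infit mean squares from squared residuals, in the
    manner of conventional Rasch fit statistics (sketch only)."""
    z2 = []    # squared standardized residuals
    var = []   # model variances p(1 - p)
    sq = []    # squared raw residuals
    for x, p in zip(responses, probs):
        v = p * (1.0 - p)
        z2.append((x - p) ** 2 / v)
        var.append(v)
        sq.append((x - p) ** 2)
    outfit = sum(z2) / len(z2)   # unweighted mean square
    infit = sum(sq) / sum(var)   # information-weighted mean square
    return outfit, infit

# Responses consistent with their modeled probabilities yield mean
# squares near the expected value of 1.0; values well above 1.0 flag
# the kind of anomaly Kuhn and Rasch describe.
```

These are qualitative numbers in the sense given above: they evaluate the consistency of the data, rather than adding up in a constant unit.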

The opportunities for divergent openings onto new qualitative phenomena provided by data consistency evaluations are complemented in Rasch measurement by a variety of bridging activities. Different instruments intended to measure the same or closely related constructs may often be equated or co-calibrated, so they measure in a common unit (among many publications in this area, see Dawson, 2002a, 2004; Fisher, 1997; Fisher, et al., 1995; Massof and Ahmadian, 2007; Smith and Taylor, 2004). Similarly, the same instrument calibrated on different samples from the same population may exhibit consistent properties across those samples, offering further evidence of a potential for defining a common unit (Fisher, 1999).
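A minimal sketch of the common-item linking behind such equating studies: because Rasch calibrations are on an interval scale, two well-fitting calibrations of shared items should differ by no more than a translation constant (the instruments, items, and calibration values here are invented):

```python
def equate_to(cal_ref, cal_new, common):
    """Shift cal_new onto cal_ref's scale using the mean difference on
    the shared items -- common-item equating in sketch form."""
    shift = sum(cal_ref[i] - cal_new[i] for i in common) / len(common)
    return {item: b + shift for item, b in cal_new.items()}

# Two hypothetical functional-assessment calibrations sharing two items;
# the second sits 0.4 logits below the first.
ref = {"walk": -1.0, "climb": 0.5, "run": 1.5}
new = {"walk": -1.4, "climb": 0.1, "lift": 0.7}
linked = equate_to(ref, new, ["walk", "climb"])
# linked["lift"] is now expressed in the reference instrument's unit
```

In practice the shared-item differences are also checked for consistency before the shift is trusted; a constant offset is evidence for, not a guarantee of, a common unit.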

Other opening and bridging activities include capacities (a) to drop items or questions from a test or survey, or to add them; (b) to adaptively administer subsets of custom-selected items from a large bank; and (c) to adjust measures for the leniency or severity of judges assigning ratings, all of which can be done, within the limits of the relevant definitions and boundaries, without compromising the unit of comparison. For methodological overviews, see Bond and Fox (2007), Wilson (2005), and others.
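Capacity (b) can be sketched in a few lines: under the Rasch model, item information p(1 − p) is greatest where item difficulty matches person ability, so a simple adaptive rule administers the unused item nearest the current ability estimate (the item bank below is invented; operational systems add content and exposure constraints):

```python
def next_item(theta, bank, used):
    """Select the unused item whose difficulty is closest to the current
    ability estimate, where Rasch item information p(1-p) peaks."""
    candidates = [i for i in bank if i not in used]
    return min(candidates, key=lambda i: abs(bank[i] - theta))

# A tiny hypothetical bank of calibrated items (difficulties in logits).
bank = {"q1": -2.0, "q2": -0.5, "q3": 0.4, "q4": 1.8}
first = next_item(0.0, bank, used=set())     # nearest to theta = 0.0
second = next_item(-0.8, bank, used={"q3"})  # after updating theta
```

Because every item is calibrated in the same unit, different examinees can take almost entirely different item subsets and still receive comparable measures.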

The various field-organizing activities spanning the range from convergence to divergence are implicated not only in research on collective thinking, but also in the history and philosophy of science. Galison and colleagues (Galison, 1997, 1999; Galison and Stump, 1996) closely examine positivist and antipositivist perspectives on the unity of science, finding their conclusions inconsistent with the evidence of history. A postpositivist perspective (Galison, 1999, p. 138), in contrast, finds “distinct communities and incommensurable beliefs” between and often within the areas of theory, experiment, and instrument-making. But instead of finding these communities “utterly condemned to passing one another without any possibility of significant interaction,” Galison (1999, p. 138) observes that “two groups can agree on rules of exchange even if they ascribe utterly different significance to the objects being exchanged; they may even disagree on the meaning of the exchange process itself.” In practice, “trading partners can hammer out a local coordination despite vast global differences.”

In accord with Woolley and colleagues’ work on convergent and divergent field-organizing activities, Galison (1999, p. 137) concludes, then, that “science is disunified, and—against our first intuitions—it is precisely the disunification of science that underpins its strength and stability.” Galison (1997, pp. 843-844) concludes with a section entitled “Cables, Bricks, and Metaphysics” in which the postpositivist disunity of science is seen to provide its unexpected coherence from the simultaneously convergent and divergent ways theories, experiments, and instruments interact.

But as Galison recognizes, a metaphor based on the intertwined strands in a cable is too mechanical to capture the dynamic processes by which order arises from particular kinds of noise and chaos. Not cited by Galison is a burgeoning literature on the phenomenon of noise-induced order termed stochastic resonance (Andò and Graziani, 2000; Benzi, et al., 1981; Dykman and McClintock, 1998; Fisher, 1992, 2011; Hess and Albano, 1998; Repperger and Farris, 2010). Where the metaphor of a cable’s strands breaks down, stochastic resonance provides multiple ways of illustrating how the disorder of finite and partially independent processes can give rise to an otherwise inaccessible order and structure.

Stochastic resonance involves small noisy signals that can be amplified to have very large effects. The noise has to be of a particular kind, and too much of it will drown out rather than amplify the effect. Examples include the interaction of neuronal ensembles in the brain (Chialvo, Longtin, and Müller-Gerking, 1996), speech recognition (Moskowitz and Dickinson, 2002), and perceptual interpretation (Riani and Simonotto, 1994). Given that Rasch’s models for measurement are stochastic versions of Guttman’s deterministic models (Andrich, 1985), the question has been raised as to how Rasch’s seemingly weaker assumptions could lead to a measurement model that is stronger than Guttman’s (Duncan, 1984, p. 220). Stochastic resonance may provide an essential clue to this puzzle (Fisher, 1992, 2011).
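A toy threshold-detector simulation conveys the basic effect; all parameters here are illustrative, not drawn from any of the cited studies. With no noise, a subthreshold sine wave never fires the detector; an intermediate amount of added noise makes firings track the signal; heavy noise washes the pattern out again:

```python
import math
import random

def discrimination(noise_sd, amp=0.8, threshold=1.0, n=2000, seed=7):
    """Difference between a threshold detector's firing rates during the
    positive and negative half-cycles of a subthreshold sine signal.
    Zero at zero noise (the signal alone never crosses threshold),
    largest at intermediate noise, and small again under heavy noise."""
    rng = random.Random(seed)
    fires = {True: 0, False: 0}
    counts = {True: 0, False: 0}
    for t in range(n):
        s = amp * math.sin(2 * math.pi * t / 100)  # subthreshold signal
        if s == 0:
            continue
        pos = s > 0
        counts[pos] += 1
        if s + rng.gauss(0, noise_sd) > threshold:  # detector fires
            fires[pos] += 1
    return fires[True] / counts[True] - fires[False] / counts[False]
```

Running `discrimination` over a range of noise levels traces the characteristic resonance curve: the signal becomes detectable only because, and only while, the right amount of disorder is present.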

Another description of what might be a manifestation of stochastic resonance akin to that brought up by Galison arises in Berg and Timmermans’ (2000, p. 56) study of the constitution of universalities in a medical network. They note that, “Paradoxically, then, the increased stability and reach of this network was not due to more (precise) instructions: the protocol’s logistics could thrive only by parasitically drawing upon its own disorder.” Much the same has been said about the behaviors of markets (Mandelbrot, 2004), bringing us back to the topic of the day at Convergence XV earlier this week. I’ll have more to say on this issue of universalities constituted via noise-induced order in due course.


Andò, B., & Graziani, S. (2000). Stochastic resonance theory and applications. New York: Kluwer Academic Publishers.

Andrich, D. (1985). An elaboration of Guttman scaling with Rasch models for measurement. In N. B. Tuma (Ed.), Sociological methodology 1985 (pp. 33-80). San Francisco, California: Jossey-Bass.

Andrich, D. (1996). Measurement criteria for choosing among models with graded responses. In A. von Eye & C. Clogg (Eds.), Categorical variables in developmental research: Methods of analysis (pp. 3-35). New York: Academic Press, Inc.

Andrich, D. (2002). Understanding resistance to the data-model relationship in Rasch’s paradigm: A reflection for the next generation. Journal of Applied Measurement, 3(3), 325-359.

Andrich, D. (2004, January). Controversy and the Rasch model: A characteristic of incompatible paradigms? Medical Care, 42(1), I-7–I-16.

Andrich, D. (2011). Rating scales and Rasch measurement. Expert Reviews in Pharmacoeconomics Outcome Research, 11(5), 571-585.

Andrich, D., Lyne, A., Sheridan, B., & Luo, G. (2013). RUMM 2030: Rasch unidimensional models for measurement. Perth, Australia: RUMM Laboratory Pty Ltd.

Benzi, R., Sutera, A., & Vulpiani, A. (1981). The mechanism of stochastic resonance. Journal of Physics. A. Mathematical and General, 14, L453-L457.

Berg, M., & Timmermans, S. (2000). Order and their others: On the constitution of universalities in medical work. Configurations, 8(1), 31-61.

Bond, T., & Fox, C. (2007). Applying the Rasch model: Fundamental measurement in the human sciences, 2d edition. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Chialvo, D., Longtin, A., & Müller-Gerking, J. (1996). Stochastic resonance in models of neuronal ensembles revisited [Electronic version].

Daniel, R. C., & Embretson, S. E. (2010). Designing cognitive complexity in mathematical problem-solving items. Applied Psychological Measurement, 34(5), 348-364.

Dawson, T. L. (2002a, Summer). A comparison of three developmental stage scoring systems. Journal of Applied Measurement, 3(2), 146-189.

Dawson, T. L. (2002b, March). New tools, new insights: Kohlberg’s moral reasoning stages revisited. International Journal of Behavioral Development, 26(2), 154-166.

Dawson, T. L. (2004, April). Assessing intellectual development: Three approaches, one sequence. Journal of Adult Development, 11(2), 71-85.

Dawson, T. L., Fischer, K. W., & Stein, Z. (2006). Reconsidering qualitative and quantitative research approaches: A cognitive developmental perspective. New Ideas in Psychology, 24, 229-239.

De Boeck, P., & Wilson, M. (Eds.). (2004). Explanatory item response models: A generalized linear and nonlinear approach. Statistics for Social and Behavioral Sciences). New York: Springer-Verlag.

Duncan, O. D. (1984). Notes on social measurement: Historical and critical. New York: Russell Sage Foundation.

Dykman, M. I., & McClintock, P. V. E. (1998, January 22). What can stochastic resonance do? Nature, 391(6665), 344.

Fisher, W. P., Jr. (1992, Spring). Stochastic resonance and Rasch measurement. Rasch Measurement Transactions, 5(4), 186-187.

Fisher, W. P., Jr. (1997). Physical disability construct convergence across instruments: Towards a universal metric. Journal of Outcome Measurement, 1(2), 87-113.

Fisher, W. P., Jr. (1999). Foundations for health status metrology: The stability of MOS SF-36 PF-10 calibrations across samples. Journal of the Louisiana State Medical Society, 151(11), 566-578.

Fisher, W. P., Jr. (2011). Stochastic and historical resonances of the unit in physics and psychometrics. Measurement: Interdisciplinary Research & Perspectives, 9, 46-50.

Fisher, W. P., Jr., Harvey, R. F., Taylor, P., Kilgore, K. M., & Kelly, C. K. (1995, February). Rehabits: A common language of functional assessment. Archives of Physical Medicine and Rehabilitation, 76(2), 113-122.

Galison, P. (1997). Image and logic: A material culture of microphysics. Chicago: University of Chicago Press.

Galison, P. (1999). Trading zone: Coordinating action and belief. In M. Biagioli (Ed.), The science studies reader (pp. 137-160). New York: Routledge.

Galison, P., & Stump, D. J. (1996). The disunity of science: Boundaries, contexts, and power. Palo Alto, California: Stanford University Press.

Hess, S. M., & Albano, A. M. (1998, February). Minimum requirements for stochastic resonance in threshold systems. International Journal of Bifurcation and Chaos, 8(2), 395-400.

Karabatsos, G., & Ullrich, J. R. (2002). Enumerating and testing conjoint measurement models. Mathematical Social Sciences, 43, 487-505.

Kuhn, T. S. (1961). The function of measurement in modern physical science. Isis, 52(168), 161-193. (Rpt. in T. S. Kuhn, (Ed.). (1977). The essential tension: Selected studies in scientific tradition and change (pp. 178-224). Chicago: University of Chicago Press.)

Linacre, J. M. (2011). A user’s guide to WINSTEPS Rasch-Model computer program, v. 3.72.0. Chicago, Illinois:

Linacre, J. M. (2013). A user’s guide to FACETS Rasch-Model computer program, v. 3.71.0. Chicago, Illinois:

Mandelbrot, B. (2004). The misbehavior of markets. New York: Basic Books.

Massof, R. W., & Ahmadian, L. (2007, July). What do different visual function questionnaires measure? Ophthalmic Epidemiology, 14(4), 198-204.

Moskowitz, M. T., & Dickinson, B. W. (2002). Stochastic resonance in speech recognition: Differentiating between /b/ and /v/. Proceedings of the IEEE International Symposium on Circuits and Systems, 3, 855-858.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Reprint, with Foreword and Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedagogiske Institut.

Repperger, D. W., & Farris, K. A. (2010, July). Stochastic resonance –a nonlinear control theory interpretation. International Journal of Systems Science, 41(7), 897-907.

Riani, M., & Simonotto, E. (1994). Stochastic resonance in the perceptual interpretation of ambiguous figures: A neural network model. Physical Review Letters, 72(19), 3120-3123.

Smith, R. M., & Plackner, C. (2009). The family approach to assessing fit in Rasch measurement. Journal of Applied Measurement, 10(4), 424-437.

Smith, R. M., & Taylor, P. (2004). Equating rehabilitation outcome scales: Developing common metrics. Journal of Applied Measurement, 5(3), 229-242.

Stenner, A. J., Fisher, W. P., Jr., Stone, M. H., & Burdick, D. S. (2013, August). Causal Rasch models. Frontiers in Psychology: Quantitative Psychology and Measurement, 4(536), 1-14 [doi: 10.3389/fpsyg.2013.00536].

Wilson, M. (2005). Constructing measures: An item response modeling approach. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010, 29 October). Evidence for a collective intelligence factor in the performance of human groups. Science, 330, 686-688.

Woolley, A. W., & Fuchs, E. (2011, September-October). Collective intelligence in the organization of science. Organization Science, 22(5), 1359-1367.

Wu, M. L., Adams, R. J., Wilson, M. R., Haldane, S.A. (2007). ACER ConQuest Version 2: Generalised item response modelling software. Camberwell: Australian Council for Educational Research.

Common Languages and Shared Vulnerability

March 27, 2014

A recent partner in conversation insisted that sometimes it just is not possible to arrive at a common language, and that it would never be possible for her to agree with those who hold out for that possibility. In the same conversation, we heard from another participant about the work of peace building in areas of the world that have suffered brutal crimes inflicted by neighbors on neighbors. The realization that there are people overcoming and learning to live with the most horrible pain considerably tempered our exchange.

I think we came to see that it is one thing to agree to disagree about something inconsequential, or even about something that leads to very different opportunities and challenges. But mutual understanding hinges on a shared language. To give up on it is to give up on the possibility of fully inclusive community. It is to give up hope for a future in which healing can happen, in which there are no permanent divisions.

Keeping the conversation going, keeping the dialogue open, changing the rules as we go along to keep the game in play: if there are any principles that should never be compromised, these are among my candidates. Our shared human vulnerability is incontrovertibly at issue most pointedly precisely at the moment when one says we will never agree on the possibility of a common language. This is exactly the time and the place when it starts to be OK to begin a process of delegitimizing and dehumanizing someone else. When there is no hope of a common language, the first step toward rationalizing ignorance, prejudice, misunderstanding, demonization, and scapegoating has been taken.

A shared language is not a prison or a smothering demand for conformity. For one thing, the refusal to hold out for the possibility of a common language is already stated in a common language. Saying “we will never agree” is an instance of what Ricoeur (1967/1974) calls the “violence of the premature conclusion.” The internal contradiction of using a common language to say that we cannot hope for one is the mirror image of the person who argues for violence. Arguing is already a step into a language others can understand, implying a choice in favor of meaning over violence. The fundamental ethical and philosophical human choice takes place right here, in the desire for meaning and, as Habermas (1995, p. 199) puts it, in considerateness for a shared vulnerability. Accordingly, Ricoeur (1967/1974, p. 88) asserts,

The importance of this subject derives from the fact that the confrontation of violence with language underlies all of the problems which we can pose concerning man. This is precisely what overwhelms us. Their encounter occupies such a vast field because violence and language each occupy the totality of the human field.

The process of working out shared meanings in a common language is not a prison sentence. Building in opportunities and concepts for critical engagement and deconstructive rethinking provides a way to keep common languages from becoming overly confining. The Socratic midwife comforting the afflicted is complemented by the Socratic gadfly afflicting the comfortable (Bernasconi, 1989; Risser, 1989).

Heidegger (1982, pp. 19-23, 320-330; Fisher, 2010; Fisher and Stenner, 2011) accordingly describes the ontological (or phenomenological) method in terms of three moments: reduction, application, and deconstruction. Putting things in words is inherently reductive. There are infinities of ways of representing any experience in words, but even the most poetic among us has to choose the words that work to serve the purpose. And we may come to see on repeated application that our purposes are compromised by ambiguities that threaten to enact the violence of premature conclusions. Attentive concern for implicit meanings may lead to ways of discerning new distinctions and new conceptualizations, leading to new reductions and new applications. Languages are living and changing all the time. New sensitivities emerge and come into words by general consensus.

Ironically, being caught up in the desire for meaning can lead to the closing off of opportunities for creating meaningful relationships. Parsing differences into ever more local distinctions and separate historical and cultural dependencies can lead to a feeling that the barriers between positions are insurmountable. This way of arriving at premature conclusions has been especially prevalent among critical theorists who focused so exclusively on the deconstructive moment in the ontological method that they forgot that their writing inherently put a new reduction into play.

For instance, Delandshere and Petrosky (1994, p. 16) proclaimed that one of the ways their “post-structuralist view of knowledge is incompatible with the necessities of measurement is that interpretations are not assumed to be consistent or similar across time, contexts, or individuals.” The extremes in this display of hubris were called out by a number of observers. Bloom (1987, p. 387), for instance, held that deconstruction “is the last, predictable, stage in the suppression of reason and the denial of the possibility of truth in the name of philosophy.”

In contrast with these opposite extremes, others have kept their critical perspective in close contact with philosophical principles. Gasché (1987, p. 5) offers a “determination of deconstruction” within which “the latter’s indebtedness to the basic operations and exigencies of philosophy comes clearly into view.” Similarly, throughout his career, Derrida (2003, pp. 62-63; also see Derrida, 1981, pp. 27-28, 34-36; 1982, p. 229; Caputo, 1997, p. 80; Kearney, 1984, pp. 123-124) repeatedly took pains to explain that:

…people who read me and think I’m playing with or transgressing norms—which I do, of course—usually don’t know what I know: that all of this has not only been made possible by but is constantly in contact with very classical, rigorous, demanding discipline in writing, in ‘demonstrating,’ in rhetoric. …the fact that I’ve been trained in and that I am at some level true to this classical teaching is essential. … When I take liberties, it’s always by measuring the distance from the standards I know or that I’ve been rigorously trained in.

Derrida (1989b, p. 218) recognized that

As soon as you give up philosophy, or the word philosophy, what happens is not something new or beyond philosophy, what happens is that some old hidden philosophy under another name—for instance the name of literary theory or psychology or anthropology and so on—go on dominating the research in a dogmatic or implicit way. And when you want to make this implicit philosophy as clear and explicit as possible, you have to go on philosophizing…. That’s why I am true to philosophy.

To give up on philosophy is to give up on the desire for meaning and on the working out of a common language; it is to accept an inevitably premature conclusion as definitive and to choose violence as an acceptable means of working out differences. Spivak (1990, 1993), meanwhile, speaks to the strategic pauses that must interrupt the critical process to allow new determinations to inform revitalized concepts and applications in dialogue.

Finally, moving forward from here to better understand what it means to measure distances relative to standards requires close consideration of mathematical issues of modeling and signification. These issues reside deeply in the motivating ideas of philosophy, as has been widely recognized over the course of the history of Continental philosophy, through the works of Husserl, Heidegger, Gadamer, Derrida, and others (Derrida, 1989a, pp. 27, 66; Fisher, 2003a, 2003b, 2004, 2010; Kisiel, 2002). Much more remains to be said and done in this area.


Bernasconi, R. (1989). Seeing double: Destruktion and deconstruction. In D. P. Michelfelder & R. E. Palmer (Eds.), Dialogue & deconstruction: The Gadamer-Derrida encounter (pp. 233-250). Albany, New York: State University of New York Press.

Bloom, A. (1987). The closing of the American mind: How higher education has failed democracy and impoverished the souls of today’s students. New York: Simon & Schuster.

Caputo, J. D. (1997). A commentary. In J. D. Caputo (Ed.), Deconstruction in a nutshell: A conversation with Jacques Derrida (pp. 31-202). New York: Fordham University Press.

Delandshere, G., & Petrosky, A. R. (1994). Capturing teachers’ knowledge. Educational Researcher, 23(5), 11-18.

Derrida, J. (1981). Positions (A. Bass, Trans.). Chicago: University of Chicago Press (Original work published 1972 (Paris: Minuit)).

Derrida, J. (1982). Margins of philosophy. Chicago, Illinois: University of Chicago Press.

Derrida, J. (1989a). Edmund Husserl’s Origin of Geometry: An introduction. Lincoln: University of Nebraska Press.

Derrida, J. (1989b). On colleges and philosophy: An interview conducted by Geoffrey Bennington. In L. Appignanesi (Ed.), Postmodernism: ICA documents (pp. 209-28). London, England: Free Association Books.

Derrida, J. (2003). Interview on writing. In G. A. Olson & L. Worsham (Eds.), Critical intellectuals on writing (pp. 61-9). Albany, New York: State University of New York Press.

Fisher, W. P., Jr. (2003a). The mathematical metaphysics of measurement and metrology: Towards meaningful quantification in the human sciences. In A. Morales (Ed.), Renascent pragmatism: Studies in law and social science (pp. 118-153). Brookfield, VT: Ashgate Publishing Co.

Fisher, W. P., Jr. (2003b). Mathematics, measurement, metaphor, metaphysics: Parts I & II. Theory & Psychology, 13(6), 753-828.

Fisher, W. P., Jr. (2004). Meaning and method in the social sciences. Human Studies: A Journal for Philosophy and the Social Sciences, 27(4), 429-54.

Fisher, W. P., Jr. (2010). Reducible or irreducible? Mathematical reasoning and the ontological method. Journal of Applied Measurement, 11(1), 38-59.

Fisher, W. P., Jr., & Stenner, A. J. (2011). Integrating qualitative and quantitative research approaches via the phenomenological method. International Journal of Multiple Research Approaches, 5(1), 89-103.

Gasché, R. (1987). Infrastructures and systemacity. In J. Sallis (Ed.), Deconstruction and philosophy: The texts of Jacques Derrida (pp. 3-20). Chicago, Illinois: University of Chicago Press.

Habermas, J. (1995). Moral consciousness and communicative action. Cambridge, Massachusetts: MIT Press.

Heidegger, M. (1967). What is a thing? (W. B. Barton, Jr. & V. Deutsch, Trans.). South Bend, Indiana: Regnery/Gateway.

Heidegger, M. (1982). The basic problems of phenomenology (J. M. Edie, Ed.) (A. Hofstadter, Trans.). Studies in Phenomenology and Existential Philosophy. Bloomington, Indiana: Indiana University Press (Original work published 1975).

Kearney, R. (1984). Dialogues with contemporary Continental thinkers: The phenomenological heritage. Manchester, England: Manchester University Press.

Kisiel, T. (2002). The mathematical and the hermeneutical: On Heidegger’s notion of the apriori. In A. Denker & M. Heinz (Eds.), Heidegger’s way of thought: Critical and interpretative signposts (pp. 187-199). New York: Continuum International Publishing Group.

Ricoeur, P. (1967). Violence et langage (J. Bien, Trans.). Recherches et Debats: La Violence, 59, 86-94. (Rpt. in D. Stewart & J. Bien, (Eds.). (1974). Violence and language, in Political and social essays by Paul Ricoeur (pp. 88-101). Athens, OH: Ohio University Press.)

Risser, J. (1989). The two faces of Socrates: Gadamer/Derrida. In D. P. Michelfelder & R. E. Palmer (Eds.), Dialogue & deconstruction: The Gadamer-Derrida encounter (pp. 176-185). Albany, New York: State University of New York Press.

Spivak, G. C. (1990). The post-colonial critic: Interviews, strategies, dialogue. New York: Routledge.

Spivak, G. C. (1993). Outside in the teaching machine. New York: Routledge.

The Path to a New Consensus: A Practical Procedure for Resolving the Opposition Between Absolute and Relative Standards

August 26, 2011

The possibility of a new nonpartisan consensus on social and economic issues has been raised from time to time lately. I’ve had some ideas fermenting in this area for a while, and it seems like they might be ready for recording here. What I want to take up concerns one of the more contentious aspects of the cultural and political disputes of recent decades. There are important differences between those who want to impose one or another kind of moral or religious standard on society as a whole and those who contend that, within certain limits, such standards are arbitrary and must be determined by each individual or group according to its own values and sense of what makes a community. The oppositions here might seem to be irreconcilable, but is that actually true?

Resolving deep-seated disagreements on this scale requires that all parties accept some baseline rules of engagement. And herein lies the rub, eh? For even something as seemingly obvious and simple as defining factual truth has proven beyond the abilities of some highly skilled and deeply motivated negotiators. So, of course, those who adhere rigidly to preconceived notions automatically remove themselves from dialogue, and I cannot presume to address them here. But for those willing to entertain possibilities following from ideas and methods with which they may be unfamiliar, I say, read on.

What I want to propose differs in several fundamental respects from what has come before, and it is very similar in one fundamental respect. The similarity stems from the realization that essentially the same thing can be authoritatively stated at different times and places by different people using different words and different languages in relation to different customs and traditions. For instance, the versions of the Golden Rule given in the Gospels of Matthew and Luke are conceptually identical with the sentiment expressed in the Hindu Mahabharata, the Confucian Analects, the Jewish Talmud, the Muslim 13th Hadith, and the Buddhist Udana-Varga.

So, rather than defining consensus in terms of strict agreement (with no uncertainty) on the absolute value of various propositions, it should be defined in terms of probabilities of consistent agreement (within a range of uncertainty) on the relative value of various propositions. Instead of evaluating isolated and decontextualized value statements one at a time, I propose evaluating value statements hypothesized to cohere with one another within a larger context, as a unit. Instead of demanding complete data on a single set of propositions, I propose demonstrating that the same results are obtained across different sets of propositions addressing the same thing. Instead of applying statistical models of group-level inter-variable relations to these data, I propose applying measurement models of individual-level within-variable relations. Instead of setting policy on the basis of centrally controlled analytic results that vary incommensurably across data sets, I propose setting policy on the basis of decentralized, distributed results collectively produced by networks of individuals whose behaviors and decisions are coordinated and aligned by calibrated instruments measuring in common, commensurable units. All of these proposals are described in detail in previous posts here, and in the references included in those posts.

What I’m proposing is rooted in and extends existing practical solutions to the definition and implementation of standards. And though research across a number of fields suggests that a new degree of consensus on some basic issues seems quite possible, that consensus will not be universal and it should not be used as a basis for compelling conformity. Rather, the efficiencies that stand to be gained by capitalizing (literally) on existing but unrecognized standards of behavior and performance are of a magnitude that would easily support generous latitude in allowing poets, nonconformists, and political dissenters to opt out of the system at little or no cost to themselves or anyone else.

That is, as has been described and explained at length in previous posts here, should we succeed in establishing an Intangible Assets Metric System and associated genuine progress indicator or happiness index, we would be in the position of harnessing the power of the profit motive as an economic driver of growth in human, social, and natural capital. Instead of taking mere monetary profits as a measure of improved quality of life, we would set up economic systems in which the measurement and the management of quality of life determines monetary profits. The basic idea is that individual ownership of and accountability for what is, more than anything else, our rightful property–our own abilities, motivations, health, trustworthiness, loyalty, etc.–ought to be a significant factor in promoting the conservation and growth of these forms of capital.

In this context, what then might serve as a practical approach to resolving disputes between those who advocate standards and those who reject them, or between those who trust in our capacity to function satisfactorily as a society without standards and those who do not? Such an approach begins by recognizing the multitude of ways in which all of us rely on standards every day. We do not need to concern ourselves with the technical issues of electronics or manufacturing, though standards are essential here. We do not need even to take up the role of standards as guides to grocery or clothing store purchasing decisions or to planning meetings or travel across time zones.

All we need to think about is something as basic as communication. The alphabet, spelling, pronunciation, and grammatical rules, dictionaries, and educational curricula are all forms of standards that must be accepted, recognized and adhered to before the most basic communication can be achieved. The shapes of various letters or symbols, and the sounds associated with them, are all completely arbitrary. They are conventions that arose over centuries of usage that passed long before the rules were noted, codified, and written down. And spoken languages remain alive, changing in ways that break the rules and cause them to be rewritten, as when new words emerge, or previously incorrect constructions become accepted.

But what is the practical value for a new consensus in recognizing our broad acceptance of linguistic standards? Contrary to the expectations of l’Académie française, for instance, we cannot simply make up new rules and expect people to follow them. No, the point of taking language as a key example goes deeper than that. We noted that usage precedes the formulation of rules, and so it must also be in finding our way to a basis for a new consensus. The question is, what are the lawful patterns by which we already structure behavior and decisions, patterns that might be codified in the language of a social science?

These patterns are being documented in research employing probabilistic measurement models. The fascinating thing about these patterns is that they often retain their characteristic features across different samples of people being measured, across time and space, and across different sets of questions on tests, surveys, or assessments designed to measure the same ability, behavior, attitude, or performance. The stability and constancy of these patterns are such that it appears possible to link all of the instruments measuring the same things to common units of measurement, so that everyone everywhere could think and act together in a common language.
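A minimal sketch may make the invariance concrete. In the simplest of these models, the dichotomous Rasch model, the probability of a response depends only on the difference between a person parameter and an item parameter, both in log-odds units (logits), so the comparison between two items comes out the same no matter which person responds. The logit values below are hypothetical, chosen only for illustration:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: the probability of a correct (or endorsed) response
    depends only on the difference between person ability and item
    difficulty, both expressed in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def log_odds(p):
    """Convert a probability back to the logit scale."""
    return math.log(p / (1.0 - p))

# Hypothetical logit values for two items and two very different persons.
easy_item, hard_item = -1.0, 2.0

# The comparison of the two items (difference in log-odds of success) is
# the same no matter which person responds: hard_item - easy_item = 3.0.
for person in (0.0, 3.0):
    diff = (log_odds(rasch_probability(person, easy_item))
            - log_odds(rasch_probability(person, hard_item)))
    print(round(diff, 6))  # prints 3.0 for both persons
```

This sample-independence of item comparisons (and, symmetrically, item-independence of person comparisons) is what makes it possible in principle to link different instruments measuring the same thing to a common unit.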

And it is here, in linking instruments together in an Intangible Assets Metric System, that we arrive at a practical way of resolving some disputes between absolutists and relativists. Though we should and will take issue with his demand for certainty, Latour (2005, p. 228) asks the right question, saying,

“Standards and metrology solve practically the question of relativity that seems to intimidate so many people: Can we obtain some sort of universal agreement? Of course we can! Provided you find a way to hook up your local instrument to one of the many metrological chains whose material network can be fully described, and whose cost can be fully determined. Provided there is also no interruption, no break, no gap, and no uncertainty along any point of the transmission. Indeed, traceability is precisely what the whole of metrology is about!”

Nowhere does Latour show any awareness of what has been accomplished in social research employing probabilistic measurement models, but he nonetheless grasps exactly how the results of that research will fall short of their potential unless they are expanded into networks of interconnected instrumentation. He understands that his theory of networked actors, coordinated via virtual threads of standardized forms, metrics, and vocabularies, describes how scientific metrology and standards set the benchmark for universal consensus. Latour stresses that the focus here is on concrete material practices that can be objectively observed and replicated. As he says, when those practices are understood, then you know how to “do the same operation for other less traceable, less materialized circulations” (p. 229).

Latour’s primary concerns are with the constitution of sociology as a science of the social, and with the understanding of the social as networks of actors whose interests are embodied in technical devices that mediate relationships. Throughout his work, he therefore focuses on the description of existing sociotechnical phenomena. Presumably because of his lack of familiarity with social measurement theory and practice, Latour does not speak to ways in which the social sciences could go beyond documenting less traceable and less materialized circulations to creating more traceable and more materialized circulations, ones capable of more closely emulating those found in the natural sciences.

Latour’s results suggest criteria that may show some disputes regarded as unresolvable to have unexplored potentials for negotiation. That potential depends, as Latour says, on calibrating instruments that can be hooked up in a metrological chain in an actual material network with known properties (forms, Internet connections and nodes, a defined unit of measurement with tolerable uncertainty, etc.) and known costs. In the same way that the time cannot be told from a clock disconnected from the chain of connections to the standard time, each individual instrument for measuring abilities, health, quality of life, etc. will also have to be connected to its standard via an unbroken chain.

But however intimidating these problems might be, they are far less imposing than the ignorance that prevents any framing of the relevant issues in the first place. Addressing the need for rigorous measurement in general, Rasch (1980, p. xx) agreed that “this is a huge challenge, but once the problem has been formulated it does seem possible to meet it.” Naturally enough, the needed work will have to be done by those of us calibrating the instruments of education, health care, sociology, etc. Hence my ongoing involvement in IMEKO, the International Measurement Confederation.


Latour, B. (2005). Reassembling the social: An introduction to Actor-Network-Theory (Clarendon Lectures in Management Studies). Oxford, England: Oxford University Press.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Reprint, with Foreword and Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedogogiske Institut.

Creative Commons License
LivingCapitalMetrics Blog by William P. Fisher, Jr., Ph.D. is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.

Simple ideas, complex possibilities, elegant and beautiful results

February 11, 2011

Possibilities of great subtlety, elegance, and power can follow from the simplest ideas. Leonardo da Vinci is often credited with offering a variation on this theme, but the basic idea is much older. Philosophy, for instance, began with Plato’s distinction between name and concept. This realization that words are not the things they stand for has informed and structured each of several scientific revolutions.

How so? It all begins from the reasons why Plato required his students to have studied geometry. He knew that those familiar with the Pythagorean theorem would understand the difference between any given triangle and the mathematical relationships it represents. No actual right triangle perfectly embodies the assertion that the square of the hypotenuse equals the sum of the squares of the other two sides. The mathematical definition or concept of a triangle is not the same thing as any actual triangle.

The subtlety and power of this distinction became apparent in its repeated application throughout the history of science. In a sense, astronomy is a geometry of the heavens, Newton’s laws are a geometry of gravity, Ohm’s law is a geometry of electromagnetism, and relativity is a geometry of the invariance of mass and energy in relation to the speed of light. Rasch models present a means to geometries of literacy, numeracy, health, trust, and environmental quality.

We are still witnessing the truth, however partial, of Whitehead’s assertion that the entire history of Western culture is a footnote to Plato. As Husserl put it, we’re still struggling with the possibility of creating a geometry of experience, a phenomenology that is not a mere description of data but that achieves a science of living meaning. The work presented in other posts here attests to a basis for optimism that this quest will be fruitful.

Twelve principles I’m taking away from recent discussions

January 27, 2011
  1. Hypotheses non fingo A: Ideas about things are not hypothesized and tested against those things so much as things are determined to be what they are by testing them against ideas. Facts are recognizable as such only because they relate with a prior idea.
  2. Hypotheses non fingo B: Cohen’s introduction to Newton’s Opticks makes it plain that Newton is not offering a general methodological pointer in this phrase. Rather, he is answering critics who wanted him to explain what gravity is and what its causes are. In saying “I feign no hypotheses,” Newton is merely indicating that he is not going to make up stories about something he knows nothing about. And in contrast with the Principia, the Opticks provides a much more accessible overview of the investigative process, from the initial engagement with light, where indeed no hypotheses as to its causes are offered, to more specific inquiries into its properties, where hypotheses necessarily inform experimental contrasts.
  3. Ideas, such as mathematical/geometrical theorems, natural laws, or the structure of Rasch models, do not exist and are unobservable. No triangle ever fits the Pythagorean theorem, there are no bodies left to themselves or balls rolling on frictionless planes, and there are no test, survey, or assessment results completely unaffected by the particular questions asked and persons answering.
  4. The clarity and transparency of an idea requires careful attention to the unity and sameness of the relevant class of things observed. So far as possible, the observational framework must be constrained by theory to produce observations likely to conform reasonably with the idea.
  5. New ideas come into language when a phenomenon or effect, often technically produced, exhibits persistent and stable properties across samples, observers, instruments, etc.
  6. New word-things that come into language, whether a galaxy, an element in the periodic table, a germ, or a psychosocial construct, may well have existed since the dawn of time and may well have exerted tangible effects on humans for millennia. They did not, however, do so for anyone in terms of the newly-available theory and understanding, which takes a place in a previously unoccupied position within the matrix of interrelated ideas, facts, and social networks.
  7. Number does not delimit the pure ideal concept of amount, but vice versa.
  8. Rasch models are one way of specifying the ideal form observations must approximate if they are to exhibit magnitude amounts divisible into ratios. Fitting data to such a model in the absence of a theory of the construct is only a very early step in the process of devising a measurement system.
  9. The invariant representation of a construct across samples, instruments, observers, etc. exhibiting magnitude amounts divisible into ratios provides the opportunity for allowing a pure ideal concept of amount to delimit number.
  10. Being suspended in language does not imply a denial of concrete reality and the separate independent existence of things. Rather, if those things did not exist, there would be no impetus for anything to come into words, and no criteria for meaningfulness.
  11. Situating objectivity in a sphere of signs removes the need for a separate sphere of facts constituted outside of language. Insofar as an ideal abstraction approximates convergence with and separation from different ways of expressing its meaning, an objective status owing nothing to a sphere of facts existing outside of language is obtained.
  12. The technology of a signifying medium (involving an alphabet, words as names for features of the environment, other symbols, syntactical and semantic rules, tools and instruments, etc.) gives rise to observations (data) that may exhibit regular patterns and that may come to be understood well enough to be reproduced at will via theory. Each facet (instrument, data, theory) mediates the relation of the other two.
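Points 7 through 9 can be stated compactly in one conventional notation for the dichotomous Rasch model, where $\beta_n$ is a person parameter and $\delta_i$ an item parameter:

```latex
P(x_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}
```

When data approximate this form, the odds of success are $e^{\beta_n - \delta_i}$, so differences on the logit scale correspond to ratios of odds: the parameters exhibit magnitude amounts divisible into ratios, which is what allows a pure ideal concept of amount to delimit number.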


Newton, Metaphysics, and Measurement

January 20, 2011

Though Newton claimed to deduce quantitative propositions from phenomena, the record shows that he brought a whole cartload of presuppositions to bear on his observations (White, 1997), such as his belief that Pythagoras was the discoverer of the inverse square law, his knowledge of Galileo’s freefall experiments, and his theological and astrological beliefs in occult actions at a distance. Without his immersion in this intellectual environment, he likely would not have been able to then contrive the appearance of deducing quantity from phenomena.

The second edition of the Principia, in which appears the phrase “hypotheses non fingo,” was brought out in part to respond to the charge that Newton had not offered any explanation of what gravity is. De Morgan, in particular, felt that Newton seemed to know more than he could prove (Keynes, 1946). But in his response to the critics, and in asserting that he feigns no hypotheses, Newton was making an important distinction between explaining the causes or composition of gravity and describing how it works. Newton was saying he did not rely on or make or test any hypotheses as to what gravity is; his only concern was with how it behaves. In due course, gravity came to be accepted as a fundamental feature of the universe in no need of explanation.
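The distinction between describing and explaining can be seen in the law itself. In modern notation, Newton's law of universal gravitation states how the attractive force varies, while saying nothing about what produces it:

```latex
F = G \frac{m_1 m_2}{r^2}
```

Here $F$ is the force between two masses $m_1$ and $m_2$ separated by distance $r$, and $G$ is the gravitational constant, determined empirically long after Newton. The equation describes behavior with precision; it explains nothing about causes.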

Heidegger (1977, p. 121) contends that Newton was, as is implied in the translation “I do not feign hypotheses,” saying in effect that the ground plan he was offering as a basis for experiment and practical application was not something he just made up. Despite Newton’s rejection of metaphysical explanations, the charge that he had not explained what gravity is was in fact answered with a metaphysics: one that first derives the foundation for a science of precise predictive control from nature, and then resituates that foundation back within nature as an experimental method incorporating a mathematical plan or model. This was, of course, quite astute of Newton, as far as he went, but he stopped far short of articulating the background assumptions informing his methods.

Newton’s desire for a logic of experimental science led him to reject anything “metaphysical or physical, or based on occult qualities, or mechanical” as a foundation for proceeding. Following in Descartes’ wake, Newton then was satisfied to solidify the subject-object duality and to move forward on the basis of objective results that seemed to make metaphysics a thing of the past. Unfortunately, as Burtt (1954/1932, pp. 225-230) observes in this context, the only thing that can possibly happen when you presume discourse to be devoid of metaphysical assumptions is that your metaphysics is more subtly insinuated and communicated to others because it is not overtly presented and defended. Thus we have the history of logical positivism as the dominant philosophy of science.

It is relevant to recall here that Newton was known for strong and accurate intuitions, and strong and unorthodox religious views (he held the Lucasian Chair at Cambridge only by royal dispensation, as he was not Anglican). It must be kept in mind that Newton’s combination of personal characteristics was situated in the social context of the emerging scientific culture’s increasing tendency to prioritize results that could be objectively detached from the particular people, equipment, samples, etc. involved in their production (Shapin, 1989). Newton then had insights that, while remarkably accurate, could not be entirely derived from the evidence he offered and that, moreover, could not acceptably be explained informally, psychologically, or theologically.

What is absolutely fascinating about this constellation of factors is that it became a model for the conduct of science. Of course, Newton’s laws of motion were adopted as the hallmark of successful scientific modeling in the form of the Standard Model applied throughout physics in the nineteenth century (Heilbron, 1993). But so was the metaphysical positivist logic of a pure objectivism detached from everything personal, intuitive, metaphorical, social, economic, or religious (Burtt, 1954/1932).

Kuhn (1970) made a major contribution to dismantling this logic when he contrasted textbook presentations of the methodical production of scientific effects with the actual processes of cobbled-together fits and starts that are lived out in the work of practicing scientists. But much earlier, James Clerk Maxwell (1879, pp. 162-163) had made exactly the same observation in a contrast of the work of Ampere with that of Faraday:

“The experimental investigation by which Ampere established the laws of the mechanical action between electric currents is one of the most brilliant achievements in science. The whole, theory and experiment, seems as if it had leaped, full grown and full armed, from the brain of the ‘Newton of electricity.’ It is perfect in form, and unassailable in accuracy, and it is summed up in a formula from which all the phenomena may be deduced, and which must always remain the cardinal formula of electro-dynamics.

“The method of Ampere, however, though cast into an inductive form, does not allow us to trace the formation of the ideas which guided it. We can scarcely believe that Ampere really discovered the law of action by means of the experiments which he describes. We are led to suspect, what, indeed, he tells us himself* [Ampere’s Theorie…, p. 9], that he discovered the law by some process which he has not shewn us, and that when he had afterwards built up a perfect demonstration he removed all traces of the scaffolding by which he had raised it.

“Faraday, on the other hand, shews us his unsuccessful as well as his successful experiments, and his crude ideas as well as his developed ones, and the reader, however inferior to him in inductive power, feels sympathy even more than admiration, and is tempted to believe that, if he had the opportunity, he too would be a discoverer. Every student therefore should read Ampere’s research as a splendid example of scientific style in the statement of a discovery, but he should also study Faraday for the cultivation of a scientific spirit, by means of the action and reaction which will take place between newly discovered facts and nascent ideas in his own mind.”

Where does this leave us? In sum, Rasch emulated Ampere in two ways. He did so first in wanting to become the “Newton of reading,” or even the “Newton of psychosocial constructs,” when he sought to show that data from reading test items and readers are structured with an invariance analogous to that of data from instruments applying a force to an object with mass (Rasch, 1960, pp. 110-115). Rasch emulated Ampere again when, like Ampere, after building up a perfect demonstration of a reading law structured in the form of Newton’s second law, he did not report the means by which he had constructed test items capable of producing the data fitting the model, effectively removing all traces of the scaffolding.
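Rasch’s analogy can be made concrete. In the dichotomous Rasch model, the log-odds of a correct response equal the difference between reader ability and text difficulty, much as acceleration in Newton’s second law is the ratio of force to mass. The following minimal Python sketch is my own illustration, not Rasch’s; the ability and difficulty values are arbitrary. It shows the invariance property at stake: the comparison of two readers comes out the same no matter which item mediates it.

```python
import math

def rasch_p(ability, difficulty):
    """Probability of success under the dichotomous Rasch model:
    P = exp(b - d) / (1 + exp(b - d)), with b and d in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def log_odds(p):
    """Natural log of the odds p / (1 - p)."""
    return math.log(p / (1.0 - p))

# Invariance: the log-odds difference between two readers is the
# same regardless of which item they are compared on.
b1, b2 = 1.5, 0.5           # two readers (logits)
for d in (-1.0, 0.0, 2.0):  # three items of varying difficulty
    diff = log_odds(rasch_p(b1, d)) - log_odds(rasch_p(b2, d))
    print(round(diff, 6))   # 1.0 every time
```

The difference between the two readers is 1.0 logit on every item, which is what makes item-independent comparison, comparison free of any particular scaffolding, possible in principle.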

The scaffolding has been reconstructed for reading (Stenner, et al., 2006) and has also been left in plain view by others doing analogous work involving other constructs (cognitive and moral development, mathematics ability, short-term memory, etc.). Dawson (2002), for instance, compares developmental scoring systems of varying sophistication and predictive control. And the plethora of uncritically applied Rasch analyses may yet turn out to be a capital resource for researchers interested in focusing on possible universal laws, predictive theories, and uniform metrics.

That is, published reports of calibration, error, and fit estimates open up opportunities for “pseudo-equating” (Beltyukova, Stone, & Fox, 2004; Fisher 1997, 1999) in their documentation of the invariance, or lack thereof, of constructs over samples and instruments. The evidence will point to a need for theoretical and metric unification directly analogous to what happened in the study and use of electricity in the nineteenth century:

“…’the existence of quantitative correlations between the various forms of energy, imposes upon men of science the duty of bringing all kinds of physical quantity to one common scale of comparison.’” [Schaffer, 1992, p. 26; quoting Everett 1881; see Smith & Wise 1989, pp. 684-4]

Qualitative and quantitative correlations in scaling results converged on a common construct in the domain of reading measurement through the 1960s and 1970s, culminating in the Anchor Test Study and the calibration of the National Reference Scale for Reading (Jaeger, 1973; Rentz & Bashaw, 1977). The lack of a predictive theory and the entirely empirical nature of the scale estimates prevented wide application of the scale, however, as the items in the equated tests were soon replaced with new ones.

But the broad scale of the invariance observed across tests and readers suggests that some mechanism must be at work (Stenner, Stone, & Burdick, 2009), or that some form of life must be at play (Fisher, 2003a, 2003b, 2004, 2010a), structuring the data. Eventually, some explanation accounting for the structure ought to become apparent, as it did for reading (Stenner, Smith, & Burdick, 1983; Stenner, et al., 2006). This emergence of self-organizing structures repeatedly asserting themselves as independently existing real things is the medium of the message we need to hear. That message is that instruments play a very large and widely unrecognized role in science. By facilitating the routine production of mutually consistent, regularly observable, and comparable results they set the stage for theorizing, the emergence of consensus on what’s what, and uniform metrics (Daston & Galison, 2007; Hankins & Silverman, 1999; Latour, 1987, 2005; Wise, 1988, 1995). The form of Rasch’s models as extensions of Maxwell’s method of analogy (Fisher, 2010b) makes them particularly productive as a means of providing self-organizing invariances with a medium for their self-inscription. But that’s a story for another day.


Beltyukova, S. A., Stone, G. E., & Fox, C. M. (2004). Equating student satisfaction measures. Journal of Applied Measurement, 5(1), 62-9.

Burtt, E. A. (1954/1932). The metaphysical foundations of modern physical science (Rev. ed.) [First edition published in 1924]. Garden City, New York: Doubleday Anchor.

Daston, L., & Galison, P. (2007). Objectivity. Cambridge, MA: MIT Press.

Dawson, T. L. (2002, Summer). A comparison of three developmental stage scoring systems. Journal of Applied Measurement, 3(2), 146-89.

Fisher, W. P., Jr. (1997). Physical disability construct convergence across instruments: Towards a universal metric. Journal of Outcome Measurement, 1(2), 87-113.

Fisher, W. P., Jr. (1999). Foundations for health status metrology: The stability of MOS SF-36 PF-10 calibrations across samples. Journal of the Louisiana State Medical Society, 151(11), 566-578.

Fisher, W. P., Jr. (2003a, December). Mathematics, measurement, metaphor, metaphysics: Part I. Implications for method in postmodern science. Theory & Psychology, 13(6), 753-90.

Fisher, W. P., Jr. (2003b, December). Mathematics, measurement, metaphor, metaphysics: Part II. Accounting for Galileo’s “fateful omission.” Theory & Psychology, 13(6), 791-828.

Fisher, W. P., Jr. (2004, October). Meaning and method in the social sciences. Human Studies: A Journal for Philosophy and the Social Sciences, 27(4), 429-54.

Fisher, W. P., Jr. (2010a). Reducible or irreducible? Mathematical reasoning and the ontological method. Journal of Applied Measurement, 11(1), 38-59.

Fisher, W. P., Jr. (2010b). The standard model in the history of the natural sciences, econometrics, and the social sciences. Journal of Physics: Conference Series, 238(1).

Hankins, T. L., & Silverman, R. J. (1999). Instruments and the imagination. Princeton, New Jersey: Princeton University Press.

Jaeger, R. M. (1973). The national test equating study in reading (The Anchor Test Study). Measurement in Education, 4, 1-8.

Keynes, J. M. (1946, July). Newton, the man. (Speech given at the Celebration of the Tercentenary of Newton’s birth in 1642.) MacMillan St. Martin’s Press (London, England), The Collected Writings of John Maynard Keynes Volume X, 363-364.

Kuhn, T. S. (1970). The structure of scientific revolutions. Chicago, Illinois: University of Chicago Press.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. New York: Cambridge University Press.

Latour, B. (2005). Reassembling the social: An introduction to Actor-Network-Theory. (Clarendon Lectures in Management Studies). Oxford, England: Oxford University Press.

Maxwell, J. C. (1879). Treatise on electricity and magnetism, Volumes I and II. London, England: Macmillan.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Reprint, with Foreword and Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedogogiske Institut.

Rentz, R. R., & Bashaw, W. L. (1977, Summer). The National Reference Scale for Reading: An application of the Rasch model. Journal of Educational Measurement, 14(2), 161-179.

Schaffer, S. (1992). Late Victorian metrology and its instrumentation: A manufactory of Ohms. In R. Bud & S. E. Cozzens (Eds.), Invisible connections: Instruments, institutions, and science (pp. 23-56). Bellingham, WA: SPIE Optical Engineering Press.

Shapin, S. (1989, November-December). The invisible technician. American Scientist, 77, 554-563.

Stenner, A. J., Burdick, H., Sanford, E. E., & Burdick, D. S. (2006). How accurate are Lexile text measures? Journal of Applied Measurement, 7(3), 307-22.

Stenner, A. J., Smith, M., III, & Burdick, D. S. (1983, Winter). Toward a theory of construct definition. Journal of Educational Measurement, 20(4), 305-316.

Stenner, A. J., Stone, M., & Burdick, D. (2009, Autumn). The concept of a measurement mechanism. Rasch Measurement Transactions, 23(2), 1204-1206.

White, M. (1997). Isaac Newton: The last sorcerer. New York: Basic Books.

Wise, M. N. (1988). Mediating machines. Science in Context, 2(1), 77-113.

Wise, M. N. (Ed.). (1995). The values of precision. Princeton, New Jersey: Princeton University Press.

Creative Commons License
LivingCapitalMetrics Blog by William P. Fisher, Jr., Ph.D. is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.

Modern, Postmodern, or Amodern?

February 17, 2010

A few points of clarification might be in order for those wondering what the fuss is all about in the contrast between the modern and the postmodern (and the amodern, which is really what we ought to be about).

The modern world view takes its perspective from the foundational works of the European Enlightenment and the Scientific Revolution. One of its characteristic features is often referred to as the Cartesian duality, or subject-object split, in which we (the subjects) enter the previously existing objective world as blank slates who deal with reality by adapting to the facts of existence (which are God-given in the full Christian version). Many Marxists, feminists, and postmodernists see modernism as a bastion of white males in positions of political and economic superiority, oblivious to the way their ideas were shaped by their times, and happy to take full advantage of their positions for their own gain.

Postmodernism takes a variety of forms and has not yet really jelled into any kind of uniform perspective; in fact, it might never do so, as one of its few recurrent themes has to do with the fragmentation of thinking and its local dependence on the particular power relations of different times and places. That said, a wide variety of writers trace out the way we are caught up in the play of the language games that inevitably follow from the mutual implication of subject and object. Subject and object each imply the other in the way language focuses attention selectively, filtering out the vast majority of incoming stimuli. Concepts originate in metaphors that take their meaning from the surrounding social and historical context, and so perception and cognition are constrained by the linguistic or theoretical paradigms dominating the thoughts and behaviors of various communities. We cannot help but find ourselves drawn up into the flow of discourses that always already embody the subject-object unities represented in speaking and writing.

When we choose discourse over violence, we do so on the basis of a desire for meaning (Ricoeur, 1974), of an inescapable attraction to the beautiful (Gadamer, 1989, 1998), of a care that characterizes the human mode of being (Heidegger, 1962), of a considerateness for the human vulnerability of others and ourselves (Habermas, 1995), of an enthrallment with the fecund abundance of sexual difference (Irigaray, 1984), of the joy we experience in recognizing ourselves in each other and the universal (Hegel, 2003), of the irresistible allure of things (Harman, 2005), or of the unavoidable metaphysical necessity that propositions must take particular forms (Derrida, 1978).

All violence is ultimately the violence of the premature conclusion (Ricoeur, 1974), in which discourse is cut off by the imposition of one particularity as representative of a potentially infinite whole. Such a reduction of a universal is unjustified insofar as it precludes efforts to determine how well what is said might work to represent the whole transparently. Of course, all reductions of abstract ideals to particular expressions in words, numbers, or other signs are, by definition, of limited length, and so inevitably pose the potential for being nonsensical, biased, prejudiced, and meaningless. Measures experimentally justifying reductions as meaningfully and usefully transparent are created, maintained, and reinvented via a balance of powers. In science, powers are balanced by the interrelations of theories, instruments, and data; in democracy, by the interrelations of the judicial, legislative, and executive branches of government. Just as science is continuously open to the improvements that might be effected by means of new theories, instrumentation, or data, so, too, are democratic governments continuously reshaped by new court decisions, laws, and executive orders.

An essential idea here is that all thinking takes place in signs; this is not an idea that was invented or that is owned by postmodernists. C. S. Peirce developed the implications of semiotics in his version of pragmatism, and the letters exchanged by William James and Helen Keller explored the world projected by the interrelations of signs at length. The focus on signs, signification, and the play of signifiers does not make efforts at thinking futile or invalidate the search for truth. Things come into language by asserting their independent real existence, and by being appropriated in terms of relations with things already represented in the language. For instance, trees in the forest did not arrive on the scene hallmarked “white pine,” “pin oak,” etc. Rather, names for things emerge via the metaphoric process, which frames new experiences in terms of old, and which leads to a kind of conceptual speciation event that distinguishes cultural, historical, and ecological times and places from each other.

Modernists interpret the cultural relativism that emerges here as reducing all value systems to a false equality and an “anything goes” lack of standards. Unfortunately, the rejection of relativism usually entails the adoption of some form of political or religious fundamentalism in efforts aimed at restoring bellwether moral reference points. One of the primary characteristics of the current state of global crisis is our suspension in this unsustainable tension between equally dysfunctional alternatives of completely relaxed or completely rigid guides to behavior.

But the choice between fundamentalism and relativism is a false dichotomy. Science, democracy, and capitalism have succeeded as well as they have, not in spite of, but because of, the social, historic, linguistic, and metaphoric factors that influence and constitute the construction of objective meaning. As Latour (1990, 1993) puts it, we have never actually been modern, so the point is not to be modern or postmodern, but amodern. We need to appropriate new, more workable conceptual reductions from the positive results produced by the deconstruction of the history of metaphysics. Though many postmodernists see deconstruction as an end in itself, and though many modernists see reductionism as a necessary exercise of power, there are other viable ways of proceeding through all three moments in the ontological method (Heidegger, 1982; Fisher, 2010b) that remain to be explored.

The amodern path informs the trajectory of my own work, from the focus on the creation of meaning in language to meaningful measurement (Fisher, 2003a, 2003b, 2004, 2010b), and from there to the use of measurement and metrological networks in bringing human, social, and natural capital to life as part of the completion of the capitalist and democratic projects (Fisher, 2000, 2002, 2005, 2009, 2010a). Though this project will also ultimately amount to nothing more than another failed experiment, perhaps sooner rather than later, it has its openness to continued questioning and ongoing dialogue in its favor.


Derrida, J. (1978). Structure, sign and play in the discourse of the human sciences. In Writing and difference (pp. 278-93). Chicago: University of Chicago Press.

Fisher, W. P., Jr. (2000). Objectivity in psychosocial measurement: What, why, how. Journal of Outcome Measurement, 4(2), 527-563.

Fisher, W. P., Jr. (2002, Spring). “The Mystery of Capital” and the human sciences. Rasch Measurement Transactions, 15(4), 854.

Fisher, W. P., Jr. (2003a, December). Mathematics, measurement, metaphor, metaphysics: Part I. Implications for method in postmodern science. Theory & Psychology, 13(6), 753-90.

Fisher, W. P., Jr. (2003b, December). Mathematics, measurement, metaphor, metaphysics: Part II. Accounting for Galileo’s “fateful omission.” Theory & Psychology, 13(6), 791-828.

Fisher, W. P., Jr. (2004, October). Meaning and method in the social sciences. Human Studies: A Journal for Philosophy and the Social Sciences, 27(4), 429-54.

Fisher, W. P., Jr. (2005). Daredevil barnstorming to the tipping point: New aspirations for the human sciences. Journal of Applied Measurement, 6(3), 173-9.

Fisher, W. P., Jr. (2009, November). Invariance and traceability for measures of human, social, and natural capital: Theory and application. Measurement (Elsevier), 42(9), 1278-1287.

Fisher, W. P., Jr. (2010a). Bringing human, social, and natural capital to life: Practical consequences and opportunities. Journal of Applied Measurement, 11, in press.

Fisher, W. P., Jr. (2010b). Reducible or irreducible? Mathematical reasoning and the ontological method. Journal of Applied Measurement, 11(1), 38-59.

Gadamer, H.-G. (1989). Truth and method (J. Weinsheimer & D. G. Marshall, Trans.) (Rev. ed.). New York: Crossroad (Original work published 1960).

Gadamer, H.-G. (1998). Praise of theory: Speeches and essays (C. Dawson, Trans.; Foreword by J. Weinsheimer). New Haven, Connecticut: Yale University Press.

Habermas, J. (1995). Moral consciousness and communicative action. Cambridge, Massachusetts: MIT Press.

Harman, G. (2005). Guerrilla metaphysics: Phenomenology and the carpentry of things. Chicago: Open Court.

Hegel, G. W. F. (2003). Phenomenology of mind (J. B. Baillie, Trans.). New York: Dover (Original work published 1807; translation first published 1931).

Heidegger, M. (1962). Being and time (J. Macquarrie & E. Robinson, Trans.). New York: Harper & Row (Original work published 1927).

Heidegger, M. (1982). The basic problems of phenomenology (J. M. Edie, Ed.) (A. Hofstadter, Trans.). Studies in Phenomenology and Existential Philosophy. Bloomington, Indiana: Indiana University Press (Original work published 1975).

Irigaray, L. (1984). An ethics of sexual difference (C. Burke & G. C. Gill, Trans.). Ithaca, New York: Cornell University Press.

Latour, B. (1990). Postmodern? no, simply amodern: Steps towards an anthropology of science. Studies in History and Philosophy of Science, 21(1), 145-71.

Latour, B. (1993). We have never been modern. Cambridge, Massachusetts: Harvard University Press.

Ricoeur, P. (1974). Violence and language. In D. Stewart & J. Bien (Eds.), Political and social essays by Paul Ricoeur (pp. 88-101). Athens, Ohio: Ohio University Press.



Tuning our assessment instruments to harmonize our relationships

January 10, 2010

“Music is the art of measuring well.”
Augustine of Hippo

With the application of Rasch’s probabilistic models for measurement, we are tuning the instruments of the human, social, and environmental sciences, with the aim of being able to harmonize relationships of all kinds. This is not an empty metaphor: the new measurement scales are mathematically equivalent to the well-tempered scales, and later the 12-tone equal-temperament scale, that were introduced in response to the technological advances associated with the piano.

The idea that the regular patterns found in music are akin to those found in the world at large and in the human psyche is an ancient one. The Pythagoreans held that

“…music’s concordances [were] the covenants that tones form under heaven’s watchful eye. For the Pythagoreans, though, the importance of these special proportions went well beyond music. They were signs of the natural order, like the laws governing triangles; music’s rules were simply the geometry governing things in motion: not only vibrating strings but also celestial bodies and the human soul” (Isacoff, 2001, p. 38).

I have already elsewhere in this blog elaborated on the progressive expansion of geometrical thinking into natural laws and measurement models; now, let us turn our attention to music as another fertile source of the analogies that have proven so productive over the course of the history of science (also explored elsewhere in this blog).

You see, up to the invention of the piano (1709), tuning systems required instruments to be retuned for performers to play in different keys. Each key had a particular characteristic color to its sound. And not only that: some note pairings (such as one in every twelve fifths in mean-tone tuning) were so dissonant that they were said to howl, and were referred to as wolves. Composers went out of their way to avoid putting these notes together, or used them in rare circumstances for especially dramatic effects.

Dozens of tuning systems had been proposed in the 17th century, and the concept of an equal-temperament scale was in general currency at the time of the piano’s invention. Bach is said to have tuned his own keyboards so that he could switch keys fluidly from within a composition. His “Well-Tempered Clavier” (published in 1722) demonstrates how a well temperament allows one to play in all 24 major and minor keys without retuning the instrument. Bach also is said to have deliberately used wolf note pairings to show that they did not howl in the way they did with the mean tone tuning.

Equal temperament is not equal-interval in the Pythagorean sense of same-sized changes in the frequencies of vibrating strings. Rather, those frequencies are scaled using the natural logarithm, and that logarithmic scale is what is divided into equal intervals. This is precisely what is also done in Rasch scaling algorithms applied to test, assessment, and survey data in contemporary measurement models.
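As a concrete illustration of that equivalence, here is a minimal Python sketch, my own and not from the post itself, of how 12-tone equal temperament divides the logarithm of frequency into equal intervals (the A4 = 440 Hz reference pitch is an assumed modern convention):

```python
import math

# In 12-tone equal temperament, each semitone multiplies frequency
# by 2**(1/12), so semitone steps are equal on a logarithmic scale.
A4 = 440.0  # Hz, standard concert pitch

def et_freq(semitones_from_a4):
    """Frequency of the note a given number of semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# The log-frequency interval of a semitone is the same everywhere
# on the keyboard, so every key works equally well:
low_step = math.log(et_freq(1)) - math.log(et_freq(0))
high_step = math.log(et_freq(13)) - math.log(et_freq(12))
print(round(low_step, 9) == round(high_step, 9))  # True
```

The same move, working with equal intervals on a logarithmic scale rather than with the raw bounded quantities, is what the Rasch scaling of test data accomplishes.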

Pianos are tuned from middle C out, with each sequential pair of notes to the left and right tuned to be the same distance away from C. As the tuner moves further and further away from C, the unit distance of the notes from middle C is slightly adjusted or stretched, so that the sharps and flats become the same note in the black keys.

What is being done, in effect, is that the natural logarithm of the note frequencies is being taken. In statistics, the natural logarithm is called a two-stretch transformation, because it pulls both ends of the normal distribution’s bell curve away from the center, with the ends being pulled further than the regions under the curve closer to the center. This stretching effect is of huge importance to measurement because it makes it possible for different collections of questions addressing the same thing to measure in the same unit.
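A sketch of that two-stretch transformation, in Python (my own illustration; the example proportions are arbitrary):

```python
import math

def logit(p):
    """Natural log-odds of a proportion: stretches both ends of the
    bounded 0-1 scale out into an unbounded, symmetric scale."""
    return math.log(p / (1.0 - p))

# Equal steps in raw proportion are unequal in logits: values near
# the floor and ceiling are stretched much further from the center
# than values near the middle.
for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(p, round(logit(p), 2))
# Note the symmetry: logit(0.95) == -logit(0.05)
```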

That is, the instrument dependency of summed ratings, counts of right answers, or categorical response frequencies is like a key-dependent tuning system. The natural logarithm modulates transitions across musical notes in such a way as to make different keys work in the same scaling system, and it likewise modulates transitions across different reading tests so that they all measure in a unit that remains the same size with the same meaning.

Now, many people fear that the measurement of human abilities, attitudes, health, etc. must inherently involve a meaningless reduction of richly varied and infinite experience to a number. Many people are violently opposed to any suggestion that this could be done in a meaningful and productive way. However, is not music the most emotionally powerful and subtle art form in existence, and at the same time incredibly high-tech and mathematical? Even if you ignore the acoustical science and the studio electronics, the instruments themselves embody some of the oldest and most intensively studied mathematical principles in existence.

And, yes, these principles are used in TV, movies, dentists’ offices and retail stores to help create sympathies and environments conducive to the, sometimes painful and sometimes crass, commercial tasks at hand. But music is also by far the most popular art form, and it is accessible everywhere to everyone any time precisely as a result of the very technologies that many consider anathema in the human and social sciences.

But it seems to me that the issue is far more a matter of who controls the technology than of the technology itself. In the current frameworks of the human and social sciences, and of the economic domains of human, social, and natural capital, whoever owns an instrument owns the measurement system and controls the interpretation of the data, since each instrument measures in its own unit. But in the new Rasch technology’s open architecture, anyone willing to master the skills needed can build instruments tuned to a reference-standard scale that is ubiquitous and universally available. What is more, the demand that all instruments measuring the same thing must harmonize will transfer control of data interpretation to a public sphere in which experimental reproducibility trumps authoritarian dictates.

This open standards system will open the door to creativity and innovation on a par with what musicians take for granted. Common measurement scales will allow people to jam out in an infinite variety of harmonic combinations, instrumental ensembles, choreographed moves, and melodic and rhythmic patterns. Just as music ranges from jazz to symphonic, rock to punk to hiphop to blues to country to techno, or atonal to R & B, so, too, do our relationships. A whole new world of potential innovations opens up in the context of methods for systematically evaluating naturally occurring and deliberately orchestrated variations in organizations, management, HR training methods, supply lines, social spheres, environmental quality, etc.

The current business world’s near-complete lack of comparable information on human, social, and natural capital is oppressive. It puts us in the situation of never knowing what we get for our money in education and healthcare, even as costs in these areas spiral into absolutely stratospheric levels. Having instruments in every area of education, health care, recreation, employment, and commerce tuned to common scales will be liberating, not oppressive. Having clear, reproducible, meaningful, and publicly negotiated measures of educational and clinical care outcomes, of productivity and innovation, and of trust, loyalty, and environmental quality will be a boon.

In conclusion, consider one more thing. About 100 years ago, a great many musicians and composers revolted against what they felt were the onerous and monotonous constraints of the equal-tempered tuning system. Thus we had an explosion of tonal and rhythmic innovations across the entire range of musical artistry. With the global popularity of world music’s blending of traditional forms with current technology and Western forms, the use of alternatives to equal temperament has never been greater. I read once that Joni Mitchell has used something like 32 different tunings in her recordings. Jimi Hendrix and Neil Young are also famous for using unique tunings to define their trademark sounds. What would the analogy of this kind of creativity be in the tuning of tests and surveys? I don’t know, but I’m looking forward to seeing it, experiencing it, and maybe even contributing to it. Les Paul may not be the only innovator in instrument design who figured out not only how to make it easy for others to express themselves in measured tones, but who also knew how to rock out his own yayas!

References and further reading:

Augustine of Hippo. (1947/2002). On music. In Writings of Saint Augustine Volume 2. Immortality of the soul and other works. (L. Schopp, Trans.) (pp. 169-384). New York: Catholic University of America Press.

Barbour, J. M. (2004/1954). Tuning and temperament: A historical survey. Mineola, NY: Dover Publications.

Heelan, P. A. (1979). Music as basic metaphor and deep structure in Plato and in ancient cultures. Journal of Social and Biological Structures, 2, 279-291.

Isacoff, S. M. (2001). Temperament: The idea that solved music’s greatest riddle. New York: Alfred A. Knopf.

Jorgensen, O. (1991). Tuning: Containing the perfection of eighteenth-century temperament, the lost art of nineteenth-century temperament and the science of equal temperament. East Lansing, Michigan: Michigan State University.

Kivy, P. (2002). Introduction to a philosophy of music. Oxford, England: Oxford University Press.

Mathieu, W. A. (1997). Harmonic experience: Tonal harmony from its natural origins to its modern expression. Rochester, Vermont: Inner Traditions International.

McClain, E. (1984/1976). The myth of invariance: The origin of the gods, mathematics and music from the Rg Veda to Plato (P. A. Heelan, Ed.). York Beach, Maine: Nicolas-Hays, Inc.

Russell, G. (2001/1953). Lydian chromatic concept of tonal organization (4th ed.). Brookline, MA: Concept Publishing.

Stone, M. (2002, Autumn). Musical temperament. Rasch Measurement Transactions, 16(2), 873.

Sullivan, A. T. (1985). The seventh dragon: The riddle of equal temperament. Lake Oswego, OR: Metamorphous Press.
