Archive for the ‘comparable effectiveness’ Category

How bad will the financial crises have to get before…?

April 30, 2010

More and more states and nations around the world face the possibility of defaulting on their financial obligations. The financial crises are of epic historical proportions. This is a disaster of the first order. And yet, it is so odd–we have the solutions and preventative measures we need at our fingertips, but no one knows about them or is looking for them.

So, I am persuaded to once again wonder whether there might now be some real interest in the possibilities of capitalizing on

  • measurement’s well-known capacity for reducing transaction costs by improving information quality and reducing information volume;
  • instruments calibrated to measure in constant units (not ordinal ones) within known error ranges (not as though the measures are perfectly precise) with known data quality;
  • measures made meaningful by their association with invariant scales defined in terms of the questions asked;
  • adaptive instrument administration methods that make all measures equally precise by targeting the questions asked;
  • judge calibration methods that remove the person rating performances as a factor influencing the measures;
  • the metaphor of transparency, realized by calibrating instruments that we look right through at the thing measured (risk, governance, abilities, health, performance, etc.);
  • efficient markets for human, social, and natural capital by means of the common currencies of uniform metrics, calibrated instrumentation, and metrological networks;
  • the means available for tuning the instruments of the human, social, and environmental sciences to well-tempered scales that enable us to more easily harmonize, orchestrate, arrange, and choreograph relationships;
  • our understandings that universal human rights require universal uniform measures, that fair dealing requires fair measures, and that our measures define who we are and what we value; and, last but very far from least,
  • the power of love–the back and forth of probing questions and honest answers in caring social intercourse plants seminal ideas in fertile minds that can be nurtured to maturity and Socratically midwifed as living meaning born into supportive ecologies of caring relations.

How bad do things have to get before we systematically and collectively implement the long-established and proven methods we have at our disposal? It is the most surreal kind of schizophrenia or passive-aggressive avoidance pathology to keep on tormenting ourselves with problems for which we have solutions.

For more information on these issues, see prior blogs posted here, the extensive documentation provided, and http://www.livingcapitalmetrics.com.

Creative Commons License
LivingCapitalMetrics Blog by William P. Fisher, Jr., Ph.D. is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Based on a work at livingcapitalmetrics.wordpress.com.
Permissions beyond the scope of this license may be available at http://www.livingcapitalmetrics.com.


Parameterizing Perfection: Practical Applications of a Mathematical Model of the Lean Ideal

April 2, 2010

To properly pursue perfection, we need to parameterize it. That is, taking perfection as the ideal, unattainable standard against which we judge our performance is equivalent to thinking of it as a mathematical model. Organizations are intended to realize their missions independent of the particular employees, customers, suppliers, challenges, products, etc. they happen to engage with at any particular time. Organizational performance measurement (Spitzer, 2007) ought then to be designed in terms of a model that posits, tests for, and capitalizes on the always imperfectly realized independence of those parameters.

Lean thinking (Womack & Jones, 1996) focuses on minimizing waste and maximizing value. At every point at which resources are invested in processes, services, or products, the question is asked, “What value is added here?” Resources are wasted when no value is added, when they can be removed with no detrimental effect on the value of the end product. In their book, Natural Capitalism: Creating the Next Industrial Revolution, Hawken, Lovins, and Lovins (1999, p. 133) say

“Lean thinking … changes the standard for measuring corporate success. … As they [Womack and Jones] express it: ‘Our earnest advice to lean firms today is simple. To hell with your competitors; compete against perfection by identifying all activities that are muda [the Japanese term for waste used in Toyota’s landmark quality programs] and eliminating them. This is an absolute rather than a relative standard which can provide the essential North Star for any organization.’”

Further, every input should “be presumed waste until shown otherwise.” A constant, ongoing, persistent pressure for removing waste is the basic characteristic of lean thinking. Perfection is never achieved, but it aptly serves as the ideal against which progress is measured.

Lean thinking sounds a lot like a mathematical model, though it does not seem to have been written out in a mathematical form, or used as the basis for calibrating instruments, estimating measures, evaluating data quality, or for practical assessments of lean organizational performance. The closest anyone seems to have come to parameterizing perfection is in the work of Genichi Taguchi (Ealey, 1988), which has several close parallels with Rasch measurement (Linacre, 1993). But meaningful and objective quantification, as required and achieved in the theory and practice of fundamental measurement (Andrich, 2004; Bezruczko, 2005; Bond & Fox, 2007; Smith & Smith, 2004; Wilson, 2005; Wright, 1999), in fact asserts abstract ideals of perfection as models of organizational, social, and psychological processes in education, health care, marketing, etc. These models test the extent to which outcomes remain invariant across examination or survey questions, across teachers, students, schools, and curricula, or across treatment methods, business processes, or policies.
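To make the invariance these models posit concrete, here is a minimal sketch in Python (my own illustration, not drawn from the cited works) of the dichotomous Rasch model, the simplest of the fundamental measurement models; the ability and difficulty values are arbitrary choices for the demonstration.

```python
# A minimal sketch of the dichotomous Rasch model: the probability of a
# correct or agreeable response depends only on the difference between a
# person's ability (theta) and an item's difficulty (delta), in logits.
import math

def rasch_probability(theta: float, delta: float) -> float:
    """P(success) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

def log_odds(p: float) -> float:
    return math.log(p / (1.0 - p))

# Invariance check: the log-odds gap between two items is the same for every
# person, so the item comparison does not depend on who happens to be sampled.
easy, hard = -1.0, 1.5           # illustrative item difficulties (logits)
for theta in (-2.0, 0.0, 2.0):   # three very different ability levels
    gap = log_odds(rasch_probability(theta, easy)) - log_odds(rasch_probability(theta, hard))
    print(f"theta = {theta:+.1f}: item log-odds gap = {gap:.2f}")  # always 2.50
```

The loop is the whole point: the comparison between the two items comes out the same no matter which persons are measured, just as a comparison of two weights should not depend on which scale is used.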

Though as yet implemented only to a limited extent in business (Drehmer, Belohlav, & Coye, 2000; Drehmer & Deklava, 2001; Lunz & Linacre, 1998; Salzberger, 2009), advanced measurement’s potential rewards are great. Fundamental measurement theory has been successfully applied in research and practice thousands of times over the last 40 years and more, including in very large scale assessments and licensure/certification applications (Adams, Wu, & Macaskill, 1997; Masters, 2007; Smith, Julian, Lunz, et al., 1994). These successes speak to an opportunity for making broad improvements in outcome measurement: more coherent product definitions, and significant associated opportunities for improving product quality and the efficiency with which it is produced, in the manner that has followed from the use of fundamental measures in other industries.

Of course, processes and outcomes are never implemented or obtained with perfect consistency; that would happen only in a perfect world. But to pursue perfection, we need to parameterize it. In other words, to raise the bar in any area of performance assessment, we need to know not only which direction is up, but also when we have raised the bar far enough. Yet we cannot tell up from down, we do not know how much to raise the bar, and we cannot properly evaluate the effects of lean experiments when we have no way of locating measures on a number line that embodies the lean ideal.

To think together collectively in ways that lead to significant new innovations, to rise above what Jaron Lanier calls the “global mush” of confused and self-confirming hive thinking, we need the common languages of widely accepted fundamental measures of the relevant processes and outcomes, measures that remain constant across samples of customers, patients, employees, students, etc., and across products, sales techniques, curricula, treatment processes, assessment methods, and brands of instrument.

We are all well aware that the consequences of not knowing where the bar is, of not having product definitions, can be disastrous. In many respects, as I’ve said previously in this blog, the success or failure of health care reform hinges on getting measurement right. The Institute of Medicine report To Err Is Human stressed that system failures pose the greatest threat to safety in health care because they lead to human errors. When a system as complex as health care lacks a standard product definition, and product delivery is fragmented across multiple providers with different amounts and kinds of information in different settings, the system becomes dangerously cumbersome and over-complicated, with unacceptably wide variations and errors in its processes and outcomes, to say nothing of its economic inefficiency.

In contrast with the widespread use of fundamental measures in the product definitions of other industries, health care researchers typically implement neither the longstanding, repeatedly proven, and mathematically rigorous models of fundamental measurement theory nor the metrological networks through which reference standard metrics are engineered. Most industries carefully define, isolate, and estimate the parameters of their products, doing so in ways 1) that ensure industry-wide comparability and standardization, and 2) that facilitate continuous product improvement by revealing multiple opportunities for enhancement. Where organizations in other industries manage by metrics and thereby keep their eyes on the ball of product quality, health care organizations often manage only their own internal processes and cannot in fact bring the product quality ball into view.

In his message concerning the Institute for Healthcare Improvement’s Pursuing Perfection project a few years ago, Don Berwick, like others (Coye, 2001; Coye & Detmer, 1998), observed that health care does not yet have an organization setting new standards in the way that Toyota did for the auto industry in the 1970s. It still doesn’t, of course. Given the differences between the auto and health care industries’ uses of fundamental measures of product quality, and their associated abilities to keep their eyes on the quality ball, is it any wonder, then, that no one in health care has yet hit a home run? It may well be that no one will until reference standard measures of product quality are devised.

Reference standard measures in uniform data systems are crucial, and the methods for obtaining them are widely available and well known. So what is preventing the health care industry from adopting and deploying them? Part of the answer is the cost of the initial investment required. In 1980, metrology comprised about six percent of the U.S. gross national product (Hunter, 1980). In the period from 1981 to 1994, annual expenditures on research and development in the U.S. were less than three percent of the GNP, and non-defense R&D was about two percent (NIST Subcommittee on Research, National Science and Technology Council, 1996). These costs, however, must be viewed as investments from which high rates of return can be obtained (Barber, 1987; Gallaher, Rowe, Rogozhin, et al., 2007; Swann, 2005).

For instance, the U.S. National Institute of Standards and Technology estimated the economic impact of 12 areas of metrology research spanning four broad domains: semiconductors, electrical calibration and testing, optical industries, and computer systems (NIST, 1996, Appendix C; also see NIST, 2003). The median rate of return in these 12 areas was 147 percent, and returns ranged from 41 to 428 percent. The report notes that these results compare favorably with those obtained in similar studies of return rates from other public and private research and development efforts. Even if health care metrology produces only a small fraction of the return rate produced in physical metrology, its economic impact could still amount to billions of dollars annually. The pilot projects proposed here therefore focus on determining what an effective health care outcomes metrology system should look like. What should its primary functions be? What should it cost? What rates of return could be expected from it?

Metrology, the science of measurement (Pennella, 1997), requires 1) that instruments be calibrated within individual laboratories so as to isolate and estimate the values of the required parameters (Wernimont, 1978); and 2) that individual instruments’ capacities to provide the same measure for the same amount, and so be traceable to a reference standard, be established and monitored via interlaboratory round-robin trials (Mandel, 1978).
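As a toy numerical illustration of the second, interlaboratory requirement (entirely my own construction, with made-up numbers), consider two instruments that have measured the same patients on a common logit scale:

```python
# A toy interlaboratory check (illustrative only): two instruments measure
# the same patients; if both are traceable to a common reference standard,
# their measures should differ by at most a constant offset plus noise.
import statistics

instrument_a = [1.2, 0.4, -0.3, 2.1, -1.0, 0.8]   # assumed logit measures
instrument_b = [1.5, 0.7, 0.1, 2.3, -0.7, 1.0]    # same patients, brand B

diffs = [b - a for a, b in zip(instrument_a, instrument_b)]
offset = statistics.mean(diffs)    # calibration offset between the brands
spread = statistics.stdev(diffs)   # disagreement remaining beyond the offset
print(f"offset = {offset:.2f} logits, spread = {spread:.2f} logits")

# A stable offset with a small spread suggests the two instruments can be
# equated to one reference unit; a large spread flags a failure of invariance
# that the round-robin trial would send back for investigation.
```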

Fundamental measurement has already succeeded in demonstrating the viability of reference standard measures of health outcomes, measures whose meaningfulness does not depend on the particular samples of items employed or patients measured. But though this work succeeds as far as it goes, it is being done in a context that lacks any sense of the need for a metrological infrastructure. Health care needs networks of scientists and technicians collaborating not only in the first, intralaboratory phase of metrological work, but also in the interlaboratory trials through which different brands or configurations of instruments intended to measure the same variable would be tuned to harmoniously produce the same measure for the same amount.

Implementation of the two phases of metrological innovation in health care would then begin with the intralaboratory calibration of existing and new instruments for measuring overall organizational performance, quality of care, and patients’ health status, quality of life, functionality, etc.  The second phase takes up the interlaboratory equating of these instruments, and the concomitant deployment of reference standard units of measurement throughout a health care system and the industry as a whole. To answer questions concerning health care metrology’s potential returns on investment, the costs for, and the savings accrued from, accomplishing each phase of each pilot will be tracked or estimated.

When instruments measuring in universally uniform, meaningful units are put in the hands of clinicians, a new scientific revolution will occur in medicine, analogous to the previous ones associated with the introduction of the thermometer and the instruments of optometry and the clinical laboratory. Such tools will multiply many times over the power of the quality improvement methods used by Brent James, touted in a recent New York Times profile as holding the key to health care reform. Instead of implicitly hypothesizing models of perfection and assessing performance relative to them informally, we need a new science that systematically implements the lean ideal on industry-wide scales. The future belongs to those who master these techniques.

References

Adams, R. J., Wu, M. L., & Macaskill, G. (1997). Scaling methodology and procedures for the mathematics and science scales. In M. O. Martin & D. L. Kelly (Eds.), Third International Mathematics and Science Study Technical Report: Vol. 2: Implementation and Analysis – Primary and Middle School Years (pp. 111-145). Chestnut Hill, MA: Boston College.

Andrich, D. (2004, January). Controversy and the Rasch model: A characteristic of incompatible paradigms? Medical Care, 42(1), I-7–I-16.

Barber, J. M. (1987). Economic rationale for government funding of work on measurement standards. In R. Dobbie, J. Darrell, K. Poulter & R. Hobbs (Eds.), Review of DTI work on measurement standards (p. Annex 5). London: Department of Trade and Industry.

Berwick, D. M., James, B., & Coye, M. J. (2003, January). Connections between quality measurement and improvement. Medical Care, 41(1 (Suppl)), I30-38.

Bezruczko, N. (Ed.). (2005). Rasch measurement in health sciences. Maple Grove, MN: JAM Press.

Bond, T., & Fox, C. (2007). Applying the Rasch model: Fundamental measurement in the human sciences, 2d edition. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Coye, M. J. (2001, November/December). No Toyotas in health care: Why medical care has not evolved to meet patients’ needs. Health Affairs, 20(6), 44-56.

Coye, M. J., & Detmer, D. E. (1998). Quality at a crossroads. The Milbank Quarterly, 76(4), 759-68.

Drehmer, D. E., Belohlav, J. A., & Coye, R. W. (2000, Dec). An exploration of employee participation using a scaling approach. Group & Organization Management, 25(4), 397-418.

Drehmer, D. E., & Deklava, S. M. (2001, April). A note on the evolution of software engineering practices. Journal of Systems and Software, 57(1), 1-7.

Ealey, L. A. (1988). Quality by design: Taguchi methods and U.S. industry. Dearborn MI: ASI Press.

Gallaher, M. P., Rowe, B. R., Rogozhin, A. V., Houghton, S. A., Davis, J. L., Lamvik, M. K., et al. (2007). Economic impact of measurement in the semiconductor industry (Tech. Rep. No. 07-2). Gaithersburg, MD: National Institute for Standards and Technology.

Hawken, P., Lovins, A., & Lovins, H. L. (1999). Natural capitalism: Creating the next industrial revolution. New York: Little, Brown, and Co.

Hunter, J. S. (1980, November). The national system of scientific measurement. Science, 210(21), 869-874.

Linacre, J. M. (1993). Quality by design: Taguchi and Rasch. Rasch Measurement Transactions, 7(2), 292.

Lunz, M. E., & Linacre, J. M. (1998). Measurement designs using multifacet Rasch modeling. In G. A. Marcoulides (Ed.), Modern methods for business research. Methodology for business and management (pp. 47-77). Mahwah, New Jersey: Lawrence Erlbaum Associates, Inc.

Mandel, J. (1978, December). Interlaboratory testing. ASTM Standardization News, 6, 11-12.

Masters, G. N. (2007). Special issue: Programme for International Student Assessment (PISA). Journal of Applied Measurement, 8(3), 235-335.

National Institute for Standards and Technology (NIST). (1996). Appendix C: Assessment examples. Economic impacts of research in metrology. In Subcommittee on Research, Committee on Fundamental Science (Ed.), Assessing fundamental science: A report from the Subcommittee on Research, Committee on Fundamental Science. Washington, DC: National Science and Technology Council [http://www.nsf.gov/statistics/ostp/assess/nstcafsk.htm#Topic%207; last accessed 18 February 2008].

National Institute for Standards and Technology (NIST). (2003, 15 January). Outputs and outcomes of NIST laboratory research. Retrieved 12 July 2009, from http://www.nist.gov/director/planning/studies.htm#measures.

Pennella, C. R. (1997). Managing the metrology system. Milwaukee, WI: ASQ Quality Press.

Salzberger, T. (2009). Measurement in marketing research: An alternative framework. Northampton, MA: Edward Elgar.

Smith, R. M., Julian, E., Lunz, M., Stahl, J., Schulz, M., & Wright, B. D. (1994). Applications of conjoint measurement in admission and professional certification programs. International Journal of Educational Research, 21(6), 653-664.

Smith, E. V., Jr., & Smith, R. M. (2004). Introduction to Rasch measurement. Maple Grove, MN: JAM Press.

Spitzer, D. (2007). Transforming performance measurement: Rethinking the way we measure and drive organizational success. New York: AMACOM.

Swann, G. M. P. (2005, 2 December). John Barber’s pioneering work on the economics of measurement standards [Electronic version]. Notes for a workshop in honor of John Barber held at the University of Manchester. Retrieved from http://www.cric.ac.uk/cric/events/jbarber/swann.pdf

Wernimont, G. (1978, December). Careful intralaboratory study must come first. ASTM Standardization News, 6, 11-12.

Wilson, M. (2005). Constructing measures: An item response modeling approach. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Womack, J. P., & Jones, D. T. (1996, Sept./Oct.). Beyond Toyota: How to root out waste and pursue perfection. Harvard Business Review, 74, 140-58.

Wright, B. D. (1999). Fundamental measurement for psychology. In S. E. Embretson & S. L. Hershberger (Eds.), The new rules of measurement: What every educator and psychologist should know (pp. 65-104 [http://www.rasch.org/memo64.htm]). Hillsdale, New Jersey: Lawrence Erlbaum Associates.


How Measurement, Contractual Trust, and Care Combine to Grow Social Capital: Creating Social Bonds We Can Really Trade On

October 14, 2009

Last Saturday, I went to Miami, Florida, at the invitation of Paula Lalinde (see her profile at http://www.linkedin.com/pub/paula-lalinde/11/677/a12) to attend MILITARY 101: Military Life and Combat Trauma As Seen By Troops, Their Families, and Clinicians. This day-long free presentation was sponsored by The Veterans Project of South Florida-SOFAR, in association with The Southeast Florida Association for Psychoanalytic Psychology, The Florida Psychoanalytic Society, the Soldiers & Veterans Initiative, and the Florida BRAIVE Fund. The goals of the session “included increased understanding of the unique experiences and culture related to the military experience during wartime, enhanced understanding of the assessment and treatment of trauma specific difficulties, including posttraumatic stress disorder, common co-occurring conditions, and demands of treatment on trauma clinicians.”

Listening to the speakers on Saturday morning at the Military 101 orientation, I was struck by what seemed to me to be a developmental trajectory implied in the construct of therapy-aided healing. I don’t recall if anyone explicitly mentioned Maslow’s hierarchy, but it was certainly implied by the dysfunctionality that attends being pushed down to a basic mode of physical survival.

Also, the various references to the stigma of therapy reminded me of Paula’s arguments as to why a community-based preventative approach would be more accessible and likely more successful than individual programs focused on treating problems. (Echoes here of positive psychology and appreciative inquiry.)

In one part of the program, the ritualized formality of the soldier, family, and support groups’ stated promises to each other suggested a way of operationalizing the community-based approach. The expectations structuring relationships among the parties in this community are usually left largely unstated, unexamined, and unmanaged in all but the broadest, and most haphazard, ways (as most relationships’ expectations usually are). The hierarchy of needs and progressive movement towards greater self-actualization implies a developmental sequence of steps or stages that comprise the actual focus of the implied contracts between the members of the community. This sequence is a measurable continuum along which change can be monitored and managed, with all parties accountable for their contracted role in producing specific outcomes.

The process would begin from the predeployment baseline, taking that level of reliability and basis of trust existing in the community as what we want to maintain, what we might want to get back to, and what we definitely want to build on and surpass, in time. The contract would provide a black-and-white record of expectations. It would embody an image of the desired state of the relationships and it could be returned to repeatedly in communications and in visualizations over time. I’ll come back to this after describing the structure of the relational patterns we can expect to observe over the course of events.

The Saturday morning discussion made repeated reference to the role of chains in the combat experience: the chain of command, and the unit being a chain only as strong as its weakest link. The implication was that normal community life tolerates looser expectations, more informal associations, and involves more in the way of team interactions. The contrast between chains and teams brought to mind work by Wright (1995, 1996a, 1996b; Bainer, 1997) on the way the difficulties of the challenges we face influence how we organize ourselves into groups.

Chains tend to form when the challenge is very difficult and dangerous; here we have mountain climbers roped together, bucket brigades putting out fires, and people stretching out end-to-end over thin ice to rescue someone who’s fallen through. In combat, as was stressed repeatedly last Saturday, the chain is one requiring strict follow-through on orders and promises; lives are at risk and protecting them requires the most rigorous adherence to the most specific details in an operation.

Teams form when the challenge is not difficult and it is possible to coordinate a fluid response of partners whose roles shift in importance as the situation changes. Balls are passed and the lead is taken by each in turn, with others getting out of the way or providing supports that might be vitally important or merely convenient.

A third kind of group, packs, forms when the very nature of the problem is at issue; here, individuals take completely different approaches in an exploratory determination of what is at issue and how it might be addressed. Examples include the Manhattan Project, where scientists following personal hunches went in their own directions looking for solutions to complex problems. Wolves and other hunting parties form packs when it is impossible to know where the game might be. And though the old joke says that the best place to look for lost keys is where there’s the most light, if you have others helping you, it’s best to split up rather than all search in the same place.

After identifying these three major forms of organization, Wright (1996b) saw that individual groups might transition to and from different modes of organization as the nature of the problem changed. For instance, a 19th-century wagon train of settlers heading through the American West might function well as a team when everyone feels safe traveling along with a cavalry detachment, the road is good, the weather is pleasant, and food and water are plentiful. Given vulnerability to attacks by Native Americans, storms, accidents, lack of game, and/or muddy, rutted roads, however, the team might shift toward a chain formation and circle the wagons, with a later return to the team formation after the danger has passed. In the worst case scenario, disaster breaks the chain into individuals scattered like a pack to fend for themselves, with the limited hope of possibly re-uniting at some later time as a chain or team.

In the current context of the military, it would seem that deployment fragments the team, with the soldier training for a position in the chain of command in which she or he will function as a strong link for the unit. The family and support network can continue to function together and separately as teams to some extent, but the stress may require intermittent chain forms of organization. Further, the separation of the soldier from the family and support would seem to approach a pack level of organization for the three groups taken as a whole.

An initial contract between the parties would describe the functioning of the team at the predeployment stage, recognize the imminent breaking up of the team into chains and packs, and visualize the day when the team would be reformed under conditions in which significant degrees of healing will be required to move out of the pack and chain formations. Perhaps there will be some need and means of countering the forcible boot camp enculturation with medicinal antidote therapies of equal but opposite force. Perhaps some elements of the boot camp experience could be safely modified without compromising the operational chain to set the stage for reintegrating the family and community team.

We would want to be able to draw qualitative information from all three groups as to the nature of their experiences at every stage. I think we would want to focus the information on descriptions of the extent to which each level in Maslow’s hierarchy is realized. This information would be used in the design of an assessment that would map out the changes over time, set up the evaluation framework, and guide interventions toward reforming the team. Given their experience with the healing process, the presenters from last Saturday have obvious capacities for an informed perspective on what’s needed here. And what we build with their input would then also plainly feed back into the kind of presentation they did.

There will likely be signature events in the process that will be used to trigger new additions to the contract, as when the consequences of deployment, trauma, loss, or return relative to Maslow’s hierarchy can be predicted. That is, the contract will be a living document that changes as goals are reached or as new challenges emerge.

All of this is then situated within the context of measures calibrated and shared across the community to inform contracts, treatment, expectations, etc., following the general metrological principles I outline in my published work (see references).

The idea will be for the consistent production of predictable amounts of impact in the legally binding contractual relationships, such that the benefits produced in terms of individual functionality will attract investments from those in positions to employ those individuals, and from the wider society that wants to improve its overall level of mental health. One could imagine that counselors, social workers, and psychotherapists will sell social capital bonds at prices set by market forces on the basis of information analogous to the information currently available in financial markets, grocery stores, or auto sales lots. Instead of paying taxes, corporations would be required to have minimum levels of social capitalization. These levels might be set relative to the value the organization realizes from the services provided by public schools, hospitals, and governments relative to the production of an educated, motivated, healthy workforce able to get to work on public roads, able to drink public water, and living in a publicly maintained quality environment.

There will be a lot more to say on this latter piece, following up on previous blogs here that take up the topic. The contractual groundwork that sets up the binding obligations for formal agreements is the thought of the day that emerged last weekend at the session in Miami. Good stuff, long way to go, as always….

References
Bainer, D. (1997, Winter). A comparison of four models of group efforts and their implications for establishing educational partnerships. Journal of Research in Rural Education, 13(3), 143-152.

Fisher, W. P., Jr. (1995). Opportunism, a first step to inevitability? Rasch Measurement Transactions, 9(2), 426 [http://www.rasch.org/rmt/rmt92.htm].

Fisher, W. P., Jr. (1996, Winter). The Rasch alternative. Rasch Measurement Transactions, 9(4), 466-467 [http://www.rasch.org/rmt/rmt94.htm].

Fisher, W. P., Jr. (1997a). Physical disability construct convergence across instruments: Towards a universal metric. Journal of Outcome Measurement, 1(2), 87-113.

Fisher, W. P., Jr. (1997b, June). What scale-free measurement means to health outcomes research. Physical Medicine & Rehabilitation State of the Art Reviews, 11(2), 357-373.

Fisher, W. P., Jr. (1998). A research program for accountable and patient-centered health status measures. Journal of Outcome Measurement, 2(3), 222-239.

Fisher, W. P., Jr. (2000). Objectivity in psychosocial measurement: What, why, how. Journal of Outcome Measurement, 4(2), 527-563 [http://www.livingcapitalmetrics.com/images/WP_Fisher_Jr_2000.pdf].

Fisher, W. P., Jr. (2004, October). Meaning and method in the social sciences. Human Studies: A Journal for Philosophy and the Social Sciences, 27(4), 429-54.

Fisher, W. P., Jr. (2005). Daredevil barnstorming to the tipping point: New aspirations for the human sciences. Journal of Applied Measurement, 6(3), 173-9 [http://www.livingcapitalmetrics.com/images/FisherJAM05.pdf].

Fisher, W. P., Jr. (2008). Vanishing tricks and intellectualist condescension: Measurement, metrology, and the advancement of science. Rasch Measurement Transactions, 21(3), 1118-1121 [http://www.rasch.org/rmt/rmt213c.htm].

Fisher, W. P., Jr. (2009, November). Invariance and traceability for measures of human, social, and natural capital: Theory and application. Measurement (Elsevier), 42(9), 1278-1287.

Wright, B. D. (1995). Teams, packs, and chains. Rasch Measurement Transactions, 9(2), 432 [http://www.rasch.org/rmt/rmt92j.htm].

Wright, B. D. (1996a). Composition analysis: Teams, packs, chains. In G. Engelhard & M. Wilson (Eds.), Objective measurement: Theory into practice, Vol. 3 (pp. 241-264). Norwood, New Jersey: Ablex [http://www.rasch.org/memo67.htm].

Wright, B. D. (1996b). Pack to chain to team. Rasch Measurement Transactions, 10(2), 501 [http://www.rasch.org/rmt/rmt102s.htm].


Posted today at HealthReform.gov

July 26, 2009

Any bill serious about health care reform needs to demand that the industry take advantage of readily available and dramatically improved measurement methods. We manage what we measure, and 99% of existing outcome measures are measures in name only. A kind of metric system for outcomes could provide standard product definitions, could effect huge reductions in information transaction costs, and could bring about a whole new magnitude of market efficiencies. Far from being a drag on the system, the profit motive is the best source of energy we have for driving innovation and resetting the cost-quality equation. But the disastrously low quality of our measures corrupts the data and prevents informed decision making by consumers and quality improvement experts. Any health care reform effort that does not demand improved measurement is doomed to fall far short of the potential that is within our reach. For more information, see www.Rasch.org, www.livingcapitalmetrics.com, http://dx.doi.org/10.1016/j.measurement.2009.03.014, or http://home.att.net/~rsmith.arm/RMHS_flyer.pdf.


A Tale of Two Industries: Contrasting Quality Assessment and Improvement Frameworks

July 8, 2009

Imagine the chaos that would result if industrial engineers each had their own tool sets calibrated in idiosyncratic metrics with unit sizes that changed depending on the size of what they measured, and if they conducted quality improvement studies focusing on statistical significance tests of effect sizes. Suppose, furthermore, that these engineers ignored the statistical power of their designs, never knowing when they were finding statistically significant results by pure chance and when they were not. And finally, suppose they also ignored the substantive meaning of the numbers, so that they never considered the differences they were studying in terms of varying probabilities of response to the questions they asked.

So when one engineer tries to generalize a result across applications, what happens is that it kind of works sometimes, doesn’t work at all other times, is often ignored, and does not command a compelling response from anyone because they are invested in their own metrics, samples, and results, which are different from everyone else’s. If there is any discussion of the relative merits of the research done, it is easy to fall into acrimonious and heated arguments that cannot be resolved because of the lack of consensus on what constitutes valid data, instrumentation, and theory.

Thus, the engineers put up the appearance of polite decorum. They smile and nod at each other’s local, sample-dependent, and irreproducible results, while they build mini-empires of funding, students, quoting circles, and professional associations on the basis of their personal authority and charisma. As they do so, costs in their industry go spiralling out of control, profits are almost nonexistent, fewer and fewer people can afford their products, smart people are going into other fields, and overall product quality is declining.

Of course, this is the state of affairs in education and health care, not in industrial engineering. In the latter field, the situation is much different. Here, everyone everywhere is very concerned to be sure they are always measuring the same thing as everyone else and in the same unit. Unexpected results of individual measures pop out instantly and are immediately redone. Innovations are more easily generated and disseminated because everyone is thinking together in the same language and seeing effects expressed in the same images. Someone else’s ideas and results can be easily fitted into anyone else’s experience, and the viability of a new way of doing things can be evaluated on the basis of one’s own experience and skills.

Arguments can be quite productive, as consensus on basic values drives the demand for evidence. Associations and successes are defined more in terms of merit earned from productivity and creativity demonstrated through the accumulation of generalized results. Costs in these industries are constantly dropping, profits are steady or increasing, more and more people can afford their products, smart people are coming into the field, and overall product quality is improving.

There is absolutely no reason why education and health care cannot thrive and grow like other industries. It is up to us to show how.


Table Comparing Scores, Ratings, and Percentages with Real Measures

July 6, 2009

(Documentation to be posted tomorrow.)

Characteristics compared, raw scores and/or percentages vs. Rasch measurement:

  • Quantitative hypothesis: raw scores/percentages, neither formulated nor tested; Rasch measurement, formulated and tested.
  • Criteria for falsifying the quantitative hypothesis: raw scores/percentages, none; Rasch measurement, additivity, conjoint transitivity, parameter separation, unidimensionality, invariance, statistical sufficiency, monotonicity, homogeneity, infinite divisibility, etc.
  • Relation to sample distribution: raw scores/percentages, dependent; Rasch measurement, independent.
  • Paradigm: raw scores/percentages, descriptive statistics; Rasch measurement, prescriptive measurement.
  • Model-data relation: raw scores/percentages, models describe data, models are fit to data, and the model with the best statistics is chosen; Rasch measurement, models prescribe the data quality needed for objective inference, data are fit to models, and the GIGO principle applies.
  • Relation to the structure of natural laws: raw scores/percentages, none; Rasch measurement, identical.
  • Statistical tests of the quantitative hypothesis: raw scores/percentages, none; Rasch measurement, information-weighted and outlier-sensitive model fit, Principal Components Analysis, and many other fit statistics.
  • Reliability coefficients: raw scores/percentages, Cronbach’s alpha, KR-20, etc.; Rasch measurement, Cronbach’s alpha, KR-20, etc., plus separation and strata.
  • Reliability error source: raw scores/percentages, unexplained portion of variance; Rasch measurement, mean square of individual error estimates.
  • Range of measurement: raw scores/percentages, arbitrary, from minimum to maximum score; Rasch measurement, nonarbitrary and infinite.
  • Unit status: raw scores/percentages, ordinal and nonlinear; Rasch measurement, interval and linear.
  • Unit status assumed in statistical comparisons: raw scores/percentages, interval and linear; Rasch measurement, interval and linear.
  • Proofs of unit status: raw scores/percentages, correlational; Rasch measurement, axiomatic, reproduced physical metrics, graphical plots, independent cross-sample recalibrations, etc.
  • Error theory for individual scores/measures: raw scores/percentages, none; Rasch measurement, derived from sampling theory.
  • Architecture (capacity to add/delete items): raw scores/percentages, closed; Rasch measurement, open.
  • Supports adaptive administration and mass customization: raw scores/percentages, no (changes to items change the meaning of scores); Rasch measurement, yes (changes to items do not change the meaning of measures).
  • Supports traceability to a metrological reference standard: raw scores/percentages, no; Rasch measurement, yes.
  • Domains scored: raw scores/percentages, either persons or items but rarely both; Rasch measurement, all facets in the model (persons, items, rating categories, judges, tasks, etc.).
  • Comparability of domains scored: raw scores/percentages, would be incomparable if scored; Rasch measurement, comparable, with each interpreted in terms of the other.
  • Unscored domain characteristics: raw scores/percentages, assumed all the same score or random (though probably not); Rasch measurement, no unscored domain.
  • Relation with other measures of the same construct: raw scores/percentages, incommensurable; Rasch measurement, commensurable and equatable.
  • Construct definition: raw scores/percentages, none; Rasch measurement, the consistency, meaningfulness, interpretability, and predictability of calibration/measure hierarchies.
  • Focus of interpretation: raw scores/percentages, mean scores or percentages relative to demographic or experimental groups; Rasch measurement, measures relative to calibrations and vice versa, and measures relative to demographic or experimental groups.
  • Relation to qualitative methods: raw scores/percentages, stark difference in philosophical commitments; Rasch measurement, rooted in the same philosophical commitments.
  • Quality of research dialogue: raw scores/percentages, researchers’ expertise is elevated relative to research subjects; Rasch measurement, research subjects voice individual and collective perspectives on the coherence of the construct as defined by the researchers’ questions.
  • Source of narrative theme: raw scores/percentages, the researcher; Rasch measurement, the object of an unfolding dialogue.


Graphic Illustrations of Why Scores, Ratings, and Percentages Are Not Measures, Part Two

July 2, 2009

Part One of this two-part blog offered pictures illustrating the difference between numbers that stand for something that adds up and those that do not. The uncontrolled variation in the numbers that pass for measures in health care, education, satisfaction surveys, performance assessments, etc. is analogous to the variation in weights and measures found in Medieval European markets. It is well established that metric uniformity played a vital role in the industrial and scientific revolutions of the nineteenth century. Metrology will inevitably play a similarly central role in the economic and scientific revolutions taking place today.

Clients and students often express their need for measures that are manageable, understandable, and relevant. But sometimes it turns out that we do not understand what we think we understand. New understandings can make what previously seemed manageable and relevant appear unmanageable and irrelevant. Perhaps our misunderstandings about measurement will one day explain why we have failed to innovate and improve as much as we could have.

Of course, there are statistical methods for standardizing scores and proportions that make them comparable across different normal distributions, but I’ve never once seen them applied to employee, customer, or patient survey results reported to business or hospital managers. They certainly are not used in determining comparable proficiency levels of students under No Child Left Behind. Perhaps there are consultants and reporting systems that make standardized z-scores a routine part of their practices, but even if they are, why should anyone willingly base their decisions on the assumption that normal distributions have been obtained? Why not use methods that give the same result no matter how scores are distributed?

To bring the point home, if statistical standardization is a form of measurement, why don’t we use the z-scores for height distributions instead of the direct measures of how tall we each are? Plainly, the two kinds of numbers have different applications. Somehow, though, we try to make do without the measures in many applications involving tests and surveys, with the unfortunate consequence of much lost information and many lost opportunities for better communication.
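A tiny example makes the contrast plain (the height distributions here are hypothetical, chosen only for illustration):

```python
# Hypothetical height distributions illustrating why a z-score is not a
# measure: the same z locates different heights in different distributions.
mean_a, sd_a = 170.0, 7.0   # distribution A, centimeters (assumed)
mean_b, sd_b = 160.0, 5.0   # distribution B, centimeters (assumed)

z = 1.0  # one standard deviation above each group's mean
print(mean_a + z * sd_a)    # 177.0 cm in group A
print(mean_b + z * sd_b)    # 165.0 cm in group B

# The z-score says where someone stands relative to a particular sample; the
# direct measure in centimeters says how tall they are, whatever the sample.
```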

Sometimes I wonder, if we would give a test on the meaning of the scores, percentages, and logits discussed in Part One to managers, executives, and entrepreneurs, would many do any better on the parts they think they understand than on the parts they find unfamiliar? I suspect not. Some executives whose pay-for-performance bonuses are inflated by statistical accidents are going to be unhappy with what I’m going to say here, but, as I’ve been saying for years, clarifying financial implications will go a long way toward motivating the needed changes.

How could that be true? Well, consider the way we treat percentages. Imagine that three different hospitals see their patients’ percents agreement with a key survey item change as follows. Which one changed the most?

A. from 30.85% to 50.00%: a 19.15% change

B. from 6.68% to 15.87%: a 9.18% change

C. from 69.15% to 84.13%: a 14.99% change

As is illustrated in Figure 1 below, given that all three pairs of administrations of the survey are included together in the same measure distribution, it is likely that the three changes were all the same size.

In this scenario, all the survey administrations shared the same standard deviation in the underlying measure distribution that the key item’s percentage was drawn from, and they started from different initial measures. Different ranges in the measures are associated with different parts of the sample’s distribution, and so different numbers and percentages of patients are associated with the same amount of measured change. It is easy to see that 100-unit measured gains in the range of 50-150 or 1000-1100 on the horizontal axis would scarcely amount to 1% changes, but the same measured gain in the middle of the distribution could be as much as 25%.

Figure 1. Different Percentages, Same Measures
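For readers who want to verify the claim, the short sketch below (my addition, assuming the standard normal distribution underlying Figure 1) converts each percentage to its location on the measurement continuum:

```python
# Convert each percent agreement to its location, in standard deviation
# units, on the normal distribution assumed in Figure 1; then compare gains.
from scipy.stats import norm

changes = {"A": (0.3085, 0.5000),
           "B": (0.0668, 0.1587),
           "C": (0.6915, 0.8413)}

for hospital, (before, after) in changes.items():
    gain = norm.ppf(after) - norm.ppf(before)
    print(f"Hospital {hospital}: measured gain = {gain:.2f} SD")

# All three gains print as 0.50 SD: the same measured change, despite
# percentage changes of 19.15, 9.18, and 14.99 points respectively.
```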

Figure 1 shows how the same measured gain can look wildly different when expressed as a percentage, depending on where the initial measure is positioned in the distribution. But what happens when percentage gains are situated in different distributions that have different patterns of variation?

More specifically, consider a situation in which three different hospitals see their percents agreement with a key survey item change as follows.

A. from 30.85% to 50.00%: a 19.15% change

B. from 30.85% to 50.00%: a 19.15% change

C. from 30.85% to 50.00%: a 19.15% change

Did one change more than the others? Of course, the three percentages are all the same, so we would naturally think that the three increases are all the same. But what if the standard deviations characterizing the three different hospitals’ score distributions are different?

Figure 2, below, shows that the three 19.15% changes could be associated with quite different measured gains. When the distribution is wider and the standard deviation is larger, any given percentage change will be associated with a larger measured change than in cases with narrower distributions and smaller standard deviations.

Figure 2. Same Percentage Gains, Different Measured Gains
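The same arithmetic can be sketched in a few lines; the standard deviations here are illustrative assumptions, not estimates from any data:

```python
# The identical 19.15-point change (30.85% -> 50.00%) implies different
# measured gains when the underlying distributions have different spreads.
from scipy.stats import norm

z_gain = norm.ppf(0.5000) - norm.ppf(0.3085)   # 0.50 in standard units

for hospital, sd in (("A", 1.0), ("B", 2.0), ("C", 0.5)):
    print(f"Hospital {hospital}: measured gain = {z_gain * sd:.2f} units")

# Prints gains of 0.50, 1.00, and 0.25 units for the same percentage change.
```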

And if this is not enough evidence as to the foolhardiness of treating percentages as measures, bear with me through one more example. Imagine another situation in which three different hospitals see their percents agreement with a key survey item change as follows.

A. from 30.85% to 50.00%: a 19.15% change

B. from 36.96% to 50.00%: a 13.04% change

C. from 36.96% to 50.00%: a 13.04% change

Did one change more than the others? Plainly A obtains the largest percentage gain. But Figure 3 shows that, depending on the underlying distribution, A’s 19.15% gain might be a smaller measured change than either B’s or C’s. Further, B’s and C’s measures might not be identical, contrary to what would be expected from the percentages alone.

Figure 3. Percentages Completely at Odds with Measures
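One final sketch, again with assumed standard deviations, shows how the percentages can be completely at odds with the measures:

```python
# Hospital A's larger percentage gain can be the smallest measured gain, and
# B's and C's identical percentage gains can differ, given different spreads.
from scipy.stats import norm

cases = (("A", 0.3085, 1.0),   # 19.15-point gain to 50.00%, assumed SD 1.0
         ("B", 0.3696, 2.0),   # 13.04-point gain to 50.00%, assumed SD 2.0
         ("C", 0.3696, 1.8))   # 13.04-point gain to 50.00%, assumed SD 1.8

for name, before, sd in cases:
    gain = (norm.ppf(0.5000) - norm.ppf(before)) * sd
    print(f"Hospital {name}: measured gain = {gain:.2f} units")

# Prints 0.50 (A), 0.67 (B), and 0.60 (C): the largest percentage gain is the
# smallest measured gain, and the two identical percentage gains differ.
```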

Now we have a fuller appreciation of the scope of the problems associated with the changing unit size illustrated in Part One. Though we think we understand percentages and insist on using them as something familiar and routine, the world they present to us is as crazily distorted as a carnival funhouse. And we haven’t even begun to consider how things look when distributions are skewed toward one end of the continuum or the other, or are bimodal or multimodal (having more than one peak). The vast majority of business applications employing scores, ratings, and percentages as measures do not take the underlying distribution into account at all. Given the problems that arise under optimal conditions (i.e., with a normal distribution), there is no need to belabor the issue with an enumeration of all the possible things that could be going wrong. Far better to simply move on and construct measurement systems that remain invariant across the different shapes of local data sets’ particular distributions.

How could we have gone so far in making these nonsensical numbers the focus of our attention? To put things back in perspective, we need to keep in mind the evolving magnitude of the problems we face. When Florence Nightingale was deploring the lack of any available indications of the effectiveness of her efforts, a little bit of flawed information was a significant improvement over no information. Ordinal, situation-specific numbers provided highly useful information when problems emerged in local contexts on a scale that could be comprehended and addressed by individuals and small groups.

We no longer live in that world. Today’s problems require kinds of information that must be more meaningful, precise, and actionable than ever before. And not only that, this information cannot remain accessible only to managers, executives, researchers, and data managers. It must be brought to bear in every transaction and information exchange in the industry.

Information has to be formatted in the common currency of uniform metrics to make it as fluid and empowering as possible. Would the auto industry have been able to bring off a quality revolution if every worker’s toolkit was calibrated in a different unit? Could we expect to coordinate schedules easily if we each had clocks scaled in different time units? Obviously not; why should we expect quality revolutions in health care and education when nearly all of our relevant metrics are incommensurable?

Management consultants realized decades ago that information creates a sense of responsibility in the person who possesses it. We cannot expect clinicians and teachers to take full responsibility for the outcomes they produce until they have the information they need to evaluate and improve them. Existing data and systems plainly are not up to the task.

The problem is far less a matter of complex or difficult issues than it is one of culture and priorities. It often takes less effort to remain in a dysfunctional rut and deal with massive inefficiencies than it does to get out of the rut and invent a new system with new potentials. Big changes tend to take place only when systems become so bogged down by their problems that new systems emerge simply out of the need to find some way to keep things in motion. These blogs are written in the hope that we might find our way to new methods without suffering the catastrophes of total system failure. One might well imagine an entrepreneurially minded consortium of providers, researchers, payors, accreditors, and patient advocates joining forces in small pilot projects testing out new experimental systems.

To know how much of something we’re getting for our money and whether it’s a fair bargain, we need to be able to compare amounts across providers, vendors, treatment options, teaching methods, etc. Scores summed from tests, surveys, or assessments, individual ratings, and percentages of a maximum possible score or frequency do not provide this information because they are not measures. Their unit sizes vary across individuals, collections of indicators (instruments), time, and space. The consequences of treating scores and percentages as measures are not trivial. We will eventually come to see that measurement quality is the primary source of the differences between the current health care and education systems’ regional variations and endlessly spiralling costs, on the one hand, and, on the other, the geographically uniform quality, costs, and improvements of the systems we will create in the future.

Markets are dysfunctional when quality and costs cannot be evaluated in common terms by consumers, providers’ quality improvement specialists, researchers, accreditors, and payers. There are widespread calls for greater transparency in purchasing decisions, but transparency is not being defined and operationalized meaningfully or usefully. As currently employed, transparency refers to making key data available for public scrutiny. But these data are almost always expressed as scores, ratings, or percentages that are anything but transparent. In addition to not adding up, these data are also usually presented in indigestibly large volumes, and are not quality assessed.

All things considered, we’re doing amazingly well with our health care and education systems given the way we’ve hobbled ourselves with dysfunctional, incommensurable measures. And that gives us real cause for hope! What will we be able to accomplish when we really put our minds to measuring what we want to manage? How much better will we be able to do when entrepreneurs have the tools they need to innovate new efficiencies? Who knows what we’ll be capable of when we have meaningful measures that stand for amounts that really add up, when data volumes are dramatically reduced to manageable levels, and when data quality is effectively assessed and improved?

For more on the problems associated with these kinds of percentages in the context of NCLB, see Andrew Dean Ho’s article in the August/September, 2008 issue of Educational Researcher, and Charles Murray’s “By the Numbers” column in the July 25, 2006 Wall Street Journal.

This is not the end of the story as to what the new measurement paradigm brings to bear. Next, I’ll post a table contrasting the features of scores, ratings, and percentages with those of measures. Until then, check out the latest issue of the Journal of Applied Measurement at http://www.jampress.org, see what’s new in measurement software at http://www.winsteps.com or http://www.rummlab.com.au, or look into what’s up in the way of measurement research projects with the BEAR group at UC Berkeley (http://gse.berkeley.edu/research/BEAR/research.html).

Finally, keep in mind that we are what we measure. It’s time we measured what we want to be.


Infrastructure and Health Care Reform

June 25, 2009

As an educator and researcher involved in the theory and application of advanced measurement methods, I am both encouraged by the (June 14) New York Times Sunday magazine’s focus on infrastructure, and chagrined at the uninformed level at which ongoing health care and economic reform discussions and analyses are taking place (as evident in the Sunday, June 21, Times editorial and business pages).

Socialistic solutions to problems in education, health care, and the economy at large are the inevitable outcome of our incomplete implementation and understanding of market capitalism. Take, for instance, the rancorous debate as to whether we should create a new public health insurance plan to compete with private plans. None of the proposals or counter proposals amount to anything more than alternate ways of manhandling health care resources toward one or another politically predetermined end. Accordingly, we find ourselves in the dilemma of choosing between equally real dangers. On the one hand, reduced payments and cost-cutting might do nothing but lower the quality and quantity of the available services, and, on the other hand, maintaining quality and quantity will eventually make health care completely unaffordable.

And here is what really gets me: apart from blind faith in the power of reduced payments to promote innovation, there is nary a word about how to set up a market infrastructure that will allow the invisible hand to do its work in bringing supply and demand efficiently into balance. Far from seeking ways in which costs can be reduced and profits enhanced at the same time, as they are in other industries, the automatic assumption in health care always seems to be that lower costs mean lower profits. We have always thought socialistically about health care, with economists, since Arrow, widely holding that health care is constitutionally incapable of sustaining a market economy. Hope that the economists are wrong appears to spring eternal, but who is doing the work to find a new way?

A new direction shows itself when we listen more closely to ourselves and follow through on our basically valid intuitions. For instance, discussions of sustainability, justice, and responsibility in the economic conversation employ the word “capital” to refer to a wide variety of resources essential to productivity, such as health, literacy, numeracy, community, and the air, water, and food services provided by nature.

The problem is that there seems to be little or no interest in figuring out how to transform this usage from an empty metaphor into a powerful tool. We similarly repeat ad nauseam the mantra, “you manage what you measure,” but almost nothing is being done to employ the highly advantageous features of advanced measurement theory and practice in the management of intangible forms of capital.

Better measurement of living capital is, however, absolutely essential to health care reform, to entrepreneurial innovation in education, and to reinventing capitalism. Instead of continuing to rely on highly variable local efforts at measuring and managing human, social, and natural capital, we need a broad program of capacity building focused on a metrological infrastructure of living capital, and its implementations. If there is any one single blind spot that prevents us from fully learning the lessons of our recent economic disasters, it is the potential that new measurement technologies offer for reduced frictions and lower transaction costs in the intangible capital markets.

We know where to start, from two basic principles of market economics. First, we know that transaction costs are the most important costs in any market; high transaction costs can strangle a market by stifling the flow of capital. Second, we know that innovation, essential to product development, improvements, marketing, and enhanced profitability, is almost never accomplished by an individual working in isolation. Innovation requires an environment in which it is safe to play, to make mistakes, and through which new value can be immediately and decisively recognized for what it is.

How can living capital market frictions be reduced? For starters, we could focus on effecting order-of-magnitude improvements in the meaningfulness of the metrics we use for screening, diagnosis, research, and accountability. We can do whatever arithmetic we want with the numbers we have at hand, but most of the numbers that pass for measures of health, functionality, quality of life and care, etc. do not actually stand for something that adds up. The good news is that, again, the intuitions informing our efforts so far are largely valid, and have the ball rolling in the right direction.

How can better measurement advance the cause of innovation in health care? By providing a common language that all stakeholders can think and act in together, harmoniously. Research over the last 80 years has repeatedly proven the viability of a kind of metric system for the things we measure with surveys, assessments, and tests. Such a system of universally uniform metrics would provide the common currency unifying the health care economy and establishing the basis for market self-organization. But contrary to our predominant metaphysical faith, scientifically proven results do not magically propagate themselves into the world. We have to invent and construct the systems we need.

Our efforts in this direction are stymied, as Tom Vanderbilt put it in the Times Sunday magazine on infrastructure, to the extent that we have “an inimical incuriosity” about the banal fundamentals of the systems that shape our world. We simply take dry technicalities for granted, and notice them only when they fail us. Our problem with intangibles measurement, then, is compounded by the fact that the infrastructure we are taking for granted is not just invisible or broken, it is nonexistent. Until we make the effort to build our capacity for managing health and other forms of living capital by creating reference standard common currencies for expressing, managing, and trading on their value, all of our efforts at health care reform–and at reinventing capitalism–will fall far short of what is possible.
William P. Fisher, Jr., Ph.D.
william@livingcapitalmetrics.com
http://www.LivingCapitalMetrics.com

We are what we measure.
It’s time we measured what we want to be.
