Archive for August, 2013

Networking Agenda for the September 4-6 IMEKO Meeting in Genoa

August 30, 2013

I’ve been wondering how to make the most of the opportunities next week’s IMEKO meeting presents for engaging with the members of the Measurement Science, Bio-Medical, and Education/Training Technical Committees. I am hopeful about the joint presentation Mark Wilson is doing with Luca Mari, and I’m excited about the scheduled presentations from Andrew Maul (Colorado), Andrew Stephanou (ACER, Melbourne, Australia), Bob Massof (Johns Hopkins), Jack Stenner (MetaMetrics), and Nick Bezruczko (Chicago).

On the other hand, four of the seven Rasch-relevant presentations are grouped into a single session, scheduled opposite two other metrology sessions. I’m also disappointed that the excellent paper Leslie Pendrill and I put together was relegated to a poster.

Given the experience of past meetings and the program for this one, there is some possibility of our being isolated and disconnected from the larger proceedings. But none of us full-time psychometricians, I suspect, would feel satisfied traveling so far and making so much effort simply for the pleasure of conferring among ourselves. A plan of some kind, then, is in order: one that promotes active engagement with other attendees and poses questions in the sessions we attend.

Luca Mari, of course, is already quite engaged. It was he who suggested inviting Mark Wilson to the 2011 IMEKO meeting in Jena, Germany, and at Mark Wilson’s invitation, he spoke last summer at the International Meeting of the Psychometric Society in Nebraska. In addition, he and I have been in dialogue for some months around the definitions and terms in the VIM (the International Vocabulary of Metrology, a standard backed by all eight major international standards organizations: the BIPM, IEC, IFCC, ISO, IUPAC, IUPAP, OIML, and ILAC).

An important goal for the VIM is to make it relevant to measurement in all sciences. As is stated in the Wikipedia entry on metrology, it is “the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology.” A problem that arises in this context is that the effort to embrace psychology and the social sciences was not well informed about contemporary psychometrics, giving rise to unfortunate phrases like “ordinal quantity.” There is, of course, ample opportunity for far more rigorous and productive approaches to measurement in disciplines not previously represented in the VIM, and at the 2011 IMEKO meeting in Jena, Gordon Cooper and I did a poster taking up this issue.

In addition to Luca Mari, other IMEKO members who have previously shown interest in psychometric issues include Giovanni Rossi (one of the meeting hosts and organizers), Gerhard Linss, Eric Benoit, S. Khan, and Roman Morawski. Further, it may be useful to signal support to João Sousa and Raul Carneiro Martins of Portugal for Rense Lange’s new psychometrics institute at the Universidade Lusófona in Porto.

Questions that might productively be raised include the following, among others:

Are you familiar with the requirements for linearity, monotonicity, additivity, and parameter separation in psychometrics? Do you think these requirements might provide a basis for common or related definitions of measurement across all fields?

Given the proven and longstanding capacity in psychometrics for equating different instruments measuring the same thing, don’t you think there might be considerable value in exploring the possibility of metrological traceability to standardized units for the major constructs measured in education, health care, human resource management, etc.?
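
By way of preparation for that second question, here is a minimal sketch (hypothetical item names and calibrations throughout) of the common-item equating logic that parameter separation makes possible: because Rasch measures are invariant up to the choice of origin, two instruments sharing well-functioning items can be linked by a single shift constant.

```python
# Minimal sketch of common-item equating between two hypothetical forms.
# Item calibrations are assumed to come from separate Rasch analyses,
# already expressed in logits; all names and values here are made up.
import statistics

form_a = {"item_1": -0.8, "item_2": 0.1, "item_3": 1.2}  # Form A calibrations
form_b = {"item_1": -0.3, "item_2": 0.7, "item_3": 1.6}  # Form B calibrations

# Rasch measures are defined only up to an additive constant, so the link
# between the two frames of reference is the mean difference on common items.
link_constant = statistics.mean(form_a[i] - form_b[i] for i in form_a)

def to_form_a_frame(measure_on_b: float) -> float:
    """Re-express a Form B measure in Form A's frame of reference."""
    return measure_on_b + link_constant

print(f"Link constant: {link_constant:+.2f} logits")
print(f"A Form B measure of 0.50 corresponds to {to_form_a_frame(0.50):+.2f} on Form A")
```

In practice the stability of the link would be checked item by item (displacements, fit), but the point stands: once instruments measuring the same construct are equated, maintaining traceability to a standardized unit is an administrative and metrological problem, not a mathematical one.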


A Moral Quandary Around Better Measurement

August 29, 2013

Three of my recent articles (listed below) form a kind of progression that tells a larger story, beginning from the Fisher and Stenner (2013) connection with the history of science. With its focus on causality, Stenner, Fisher, Stone, and Burdick (2013) picks up where that first article leaves off, and the Fisher (2013) article sees formative assessment unfolding into predictive causal theory, standardized metrics, and a new capacity for total quality management in education.

A moral issue that comes up in the context of this progression concerns what it means to cling to methods in psychology and the social sciences that systematically preclude (a) thorough theoretical understandings of our constructs; (b) the emergence of shared languages for communicating those constructs’ structures, processes, and outcomes; and (c) the application of proven approaches to quality improvement in education. Because that is what currently popular methods in psychology and the social sciences do: they systematically short-circuit understanding and prevent us from taking responsibility for what we would learn if we were paying attention to what’s going on around us.

Is there some kind of cultural blind spot or fear of success or hypochondria or inability to galvanize leadership that we need to overcome? Or is this just the herky-jerky haphazard tragicomic nature of evolution, maturation, and development?

The question might seem to involve asking what we can do to promote wider appreciation for what’s at stake, for what’s possible, for the billions of individual life opportunities lost with each passing year in which nothing or far too little is done. After all, since proven methods are so routinely ignored and underused, isn’t it natural to suppose that motivation or awareness needs building up?

But I think a huge movement has already been underway for years, and needs to be given a focus, a set of boundary or mediating objects, as a catalyst. That would seem in fact to be the message of Paul Hawken’s 2007 book, Blessed Unrest.

So my answers to these questions (in multiple blog posts over the last few years, such as here) focus on how to channel existing will, motivation, energy, and resources in new ways. Philosophically, this is a matter of the playful absorption of non-Cartesian unified subject-objects in a flow of meaningful relationships. Economically, it is a matter of creating markets for intangible assets (human, social, and natural capital). The idea is to harness the profit motive for growing meaningful, productive relationships: in such markets, if you are not growing all forms of capital, you cannot make a profit. To have a fair basis for determining whether capital is being created or destroyed, we need better measurement, better theory, better instruments, and more systematic methods, all of which are in hand. As I’ve pointed out in the past, there is a huge entrepreneurial opportunity in the fact that the availability of the needed tools is not widely recognized.

So, to say it out loud one more time, the goal is for the accountable and efficient living capital of social capitalist markets to displace the socialistic handling of dead, externalized capital, which is unaccountable and wildly ineffective at matching supply with demand.

That’s my story and I’m sticking to it…

Fisher, W. P., Jr. (2013). Imagining education tailored to assessment as, for, and of learning: Theory, standards, and quality improvement. Assessment and Learning, 2, in press.

Fisher, W. P., Jr., & Stenner, A. J. (2013). On the potential for improved measurement in the human and social sciences. In Q. Zhang & H. Yang (Eds.), Pacific Rim Objective Measurement Symposium 2012 Conference Proceedings (pp. 1-11). Berlin, Germany: Springer-Verlag.

Stenner, A. J., Fisher, W. P., Jr., Stone, M., & Burdick, D. (2013, August). Causal Rasch models. Frontiers in Psychology: Quantitative Psychology and Measurement, 4(536), 1-14.

Dispelling Myths about Measurement in Psychology and the Social Sciences

August 27, 2013

Seven common assumptions about measurement and method in psychology and the social sciences stand as anomalies, inconsistent with the experience of those who have taken the trouble to challenge them. As evidence, theory, and instrumentation accumulate, will we see a revolutionary break and disruptive change across multiple social and economic levels and areas? Will there be a slower, more gradual transition to a new paradigm? Or will the status quo simply roll on, oblivious to the potential for new questions and new directions? We shall see.

1. Myth: Qualitative data and methods cannot really be integrated with quantitative data and methods because of opposing philosophical assumptions.

Fact: Qualitative methods incorporate a critique of quantitative methods that leads to a more scientific theory and practice of measurement.

2. Myth: Statistics is the logic of measurement.

Fact: Statistics did not emerge as a discipline until the 19th century, while measurement has been practiced for millennia. Measurement models the individual level within a single variable, whereas statistical models operate at the population level, across variables. Data are fit to prescriptive measurement models, so that misfitting data can be identified and dealt with (the Garbage-In, Garbage-Out principle), whereas descriptive statistical models are fit to data.
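
For readers unfamiliar with what individual-level modeling within a single variable looks like, the standard dichotomous Rasch model is the usual example (a textbook formulation, not specific to any one of the works cited in this blog):

```latex
% Dichotomous Rasch model: the probability that person n answers item i
% correctly depends only on the difference between one person parameter
% (ability, beta_n) and one item parameter (difficulty, delta_i), both
% located on the same linear (logit) scale.
\[
  P(X_{ni} = 1 \mid \beta_n, \delta_i)
  \;=\;
  \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}
\]
% The model is prescriptive: data that fail its requirements are flagged
% by fit statistics rather than accommodated by adding parameters.
```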

3. Myth: Linear measurement from ordinal test and survey data is impossible.

Fact: Ordinal data have been used as a basis for invariant linear measures for decades.
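
A minimal sketch of the core idea, using a hypothetical 20-item dichotomous test: the log-odds transformation stretches the bounded, ordinal raw-score scale into an unbounded, linear logit scale. Full Rasch estimation (e.g., PROX or joint maximum likelihood) also adjusts for the difficulties and dispersion of the items, which this toy version omits.

```python
# Toy illustration: why equal raw-score differences are not equal amounts
# of the underlying variable. Hypothetical 20-item dichotomous test.
import math

N_ITEMS = 20

def raw_to_logit(raw_score: int) -> float:
    """Log-odds of the proportion correct, in logits (crude first step only)."""
    p = raw_score / N_ITEMS
    return math.log(p / (1 - p))

for raw in (5, 10, 15, 18):
    print(f"raw {raw:2d}/{N_ITEMS} -> {raw_to_logit(raw):+.2f} logits")

# A 3-point gain from 15 to 18 covers as much of the logit scale as a
# 5-point gain from 5 to 10: raw-score units are not equal-interval.
```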

4. Myth: Scientific laws like Newton’s laws of motion cannot be successfully formulated, tested, or validated in psychology and the social sciences.

Fact: Mathematical laws of human behavior and cognition of the same form as Newton’s laws have been formulated, tested, and validated in numerous Rasch model applications.
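
One way the analogy is often drawn in the Rasch literature (the notation here is illustrative): Newton’s second law and the multiplicative form of the dichotomous Rasch model share the same two-parameter structure, with one parameter for the agent and one for the object acted upon.

```latex
% Newton's second law, in its two-parameter form, set beside the
% multiplicative form of the dichotomous Rasch model.
\[
  a_{vi} \;=\; \frac{F_v}{M_i}
  \qquad\longleftrightarrow\qquad
  \frac{P_{ni}}{1 - P_{ni}} \;=\; \frac{\xi_n}{\epsilon_i}
\]
% Acceleration is the observed outcome of force v acting on mass i; the
% odds of success are the (probabilistic) outcome of ability xi_n acting
% on item difficulty epsilon_i. Taking logarithms of the right-hand
% equation recovers the familiar additive logit form of the model.
```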

5. Myth: Experimental manipulations of psychological and social phenomena are inherently impossible or unethical.

Fact: Decades of research across multiple fields have successfully shown how theory-informed interventions on items/indicators/questions can result in predictable, consistent, and substantively meaningful quantitative changes.
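
A sketch of the general form such interventions take, in the spirit of construct specification equations (the symbols here are generic, not drawn from any specific study): item difficulty is first predicted from theoretically relevant item features, and an intervention that changes those features is then expected to shift the calibrated difficulty by a theoretically specified amount.

```latex
% Generic construct specification equation: predicted item difficulty as a
% weighted sum of K theoretically relevant item features x_{ik}.
\[
  \hat{\delta}_i \;=\; \sum_{k=1}^{K} \beta_k \, x_{ik}
\]
% If an intervention changes feature k of item i by an amount Delta x_{ik},
% the theory predicts a difficulty shift of beta_k * Delta x_{ik}; agreement
% between predicted and observed shifts is the experimental test.
\[
  \Delta \hat{\delta}_i \;=\; \beta_k \, \Delta x_{ik}
\]
```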

6. Myth: “Real” measurement is impossible in psychology and the social sciences.

Fact: Successes in predictive theory, in instrument calibration, and in maintaining stable units of comparison over time all provide evidence supporting the viability of meaningful uniform units of measurement in psychology and the social sciences.

7. Myth: Efficient economic markets can incorporate only manufactured capital, liquid capital, and property. Human, social, and natural capital, being intangible, have permanent status as market externalities because they cannot be measured well enough to enable accountability, pricing, or transferable representations (common currency instruments).

Fact: The theory and methods necessary for establishing an Intangible Assets Metric System are in hand. What’s missing is the awareness of the scientific, human, social, and economic value that would be returned from the admittedly very large investments that would be required.

References and examples are available in other posts in this blog, in my publications, or on request.