Posts Tagged ‘infrastructure’

Economy of language, Eros, meaning, the public, and its problems

July 11, 2017

The medium is the message. The more transparent the medium is, the more seductive the messages expressed in it. The seductiveness of numbers stems from their roots in the mathematical quality of all thinking: the way that signs are used as the media of concept-thing relations. Our captivation with numbers is entirely embedded in the allure of language, which stems in large part from its economy: knowing how to read, write, speak, and listen saves us the trouble of re-inventing words and concepts for ourselves, and of having to translate each other’s private languages. The problem is, of course, that having words for things and sharing them by no means assures understanding. But when it works, it really works, as the history of science shows.

Seductive enthrallment with meaning and beauty defines the parameters of the difference between the modern Cartesian dualist world view and the emerging unmodern nondualist world view. This is the whole point of taking up Heidegger’s sense of method as meta-odos. As Plato saw, Socrates’ recounting of the myth of Eros told to him by Diotima conveys how captivation with beauty embodies the opposites of wealth and poverty in a simultaneous possession and absence, neither of which is ever complete.

The evolutionary/developmental paradigm shift taking place will transform everything by institutionalizing in every area of life an order of magnitude increase in the complexity of relationships, and a corresponding increase in the simplicity with which those relationships can be managed. The compelling absorption into the flow of meaning that necessarily informs discourse but currently functions as an unacknowledged assumption informing operations will itself be brought into view and will become an object of operations.

As Dewey understood, public consciousness of an issue or set of issues, and the will to take them on, emerges when existing institutions fail. We are certainly living in a time in which our political, economic, social, educational, medical, legal, environmental, etc. institutions have been failing to live up to their responsibilities for quite a number of years. The efforts of the public to address these failures have been obstructed by the lack of the media needed for integrating the complex, multilevel, and discontinuous opposites of harmony and dissonance, agreement and dissent, that structure a binding, coherent culture.

Science is nothing but an extension of everyday reasoning. Instead of imitating the natural sciences, the social sciences need to focus on how science extends the complex cognitive ecologies of language. As we figure that out and get these metasystems in place, we will simultaneously create the media the public needs to find its voice and organize itself to meet the challenges of how to build new institutions capable of successfully countering human suffering, social discontent, and environmental degradation.


Excerpts and Notes from Goldberg’s “Billions of Drops…”

December 23, 2015

Goldberg, S. H. (2009). Billions of drops in millions of buckets: Why philanthropy doesn’t advance social progress. New York: Wiley.

p. 8:
Transaction costs: “…nonprofit financial markets are highly disorganized, with considerable duplication of effort, resource diversion, and processes that ‘take a fair amount of time to review grant applications and to make funding decisions’ [citing Harvard Business School Case No. 9-391-096, p. 7, Note on Starting a Nonprofit Venture, 11 Sept 1992]. It would be a major understatement to describe the resulting capital market as inefficient.”

A McKinsey study found that nonprofits spend 2.5 to 12 times more raising capital than for-profits do. When administrative costs are factored in, nonprofits spend 5.5 to 21.5 times more.

For-profit and nonprofit funding efforts contrasted on pages 8 and 9.

p. 10:
Balanced scorecard rating criteria

p. 11:
“Even at double-digit annual growth rates, it will take many years for social entrepreneurs and their funders to address even 10% of the populations in need.”

p. 12:
Exhibit 1.5 shows that the percentages of various needs served by leading social enterprises are barely drops in the respective buckets; they range from 0.07% to 3.30%.

pp. 14-16:
Nonprofit funding is not tied to performance. Even when a nonprofit makes the effort to show measured improvement in impact, doing so does little or nothing to change its funding picture. There appears to be a funding ceiling implicitly imposed by funders, since nonprofit growth and success seem to persuade capital sources that their work there is done. Meanwhile, mediocre and low-performing nonprofits seem able to continue drawing funds indefinitely from sympathetic donors who don’t require evidence of effective use of their money.

p. 34:
“…meaningful reductions in poverty, illiteracy, violence, and hopelessness will require a fundamental restructuring of nonprofit capital markets. Such a restructuring would need to make it much easier for philanthropists of all stripes–large and small, public and private, institutional and individual–to fund nonprofit organizations that maximize social impact.”

p. 54:
Exhibit 2.3 is a chart showing that fewer people rose from poverty, and more remained in it or fell deeper into it, in 1988-1998 than in 1969-1979.

pp. 70-71:
Kotter’s (1996) change cycle.

p. 75:
McKinsey’s seven elements of nonprofit capacity and capacity assessment grid.

pp. 94-95:
Exhibits 3.1 and 3.2 contrast the way financial markets reward for-profit performance with the way nonprofit markets reward fund raising efforts.

Financial markets
1. Market aggregates and disseminates standardized data
2. Analysts publish rigorous research reports
3. Investors proactively search for strong performers
4. Investors penalize weak performers
5. Market promotes performance
6. Strong performers grow

Nonprofit markets
1. Social performance is difficult to measure
2. NPOs don’t have resources or expertise to report results
3. Investors can’t get reliable or standardized results data
4. Strong and weak NPOs spend 40 to 60% of time fundraising
5. Market promotes fundraising
6. Investors can’t fund performance; NPOs can’t scale

p. 95:
“…nonprofits can’t possibly raise enough money to achieve transformative social impact within the constraints of the existing fundraising system. I submit that significant social progress cannot be achieved without what I’m going to call ‘third-stage funding,’ that is, funding that doesn’t suffer from disabling fragmentation. The existing nonprofit capital market is not capable of [p. 97] providing third-stage funding. Such funding can arise only when investors are sufficiently well informed to make big bets at understandable and manageable levels of risk. Existing nonprofit capital markets neither provide investors with the kinds of information needed–actionable information about nonprofit performance–nor provide the kinds of intermediation–active oversight by knowledgeable professionals–needed to mitigate risk. Absent third-stage funding, nonprofit capital will remain irreducibly fragmented, preventing the marshaling of resources that nonprofit organizations need to make meaningful and enduring progress against $100 million problems.”

pp. 99-114:
Text and diagrams on innovation, market adoption, transformative impact.

p. 140:
Exhibit 4.2: Capital distribution of nonprofits, highlighting mid-caps

pp. 192-3:
The case is made for the difference between a regular market and the current state of philanthropic, social capital markets.

p. 192:
“So financial markets provide information investors can use to compare alternative investment opportunities based on their performance, and they provide a dynamic mechanism for moving money away from weak performers and toward strong performers. Just as water seeks its own level, markets continuously recalibrate prices until they achieve a roughly optimal equilibrium at which most companies receive the ‘right’ amount of investment. In this way, good companies thrive and bad ones improve or die.
“The social sector should work the same way… But philanthropic capital doesn’t flow toward effective nonprofits and away from ineffective nonprofits for a simple reason: contributors can’t tell the difference between the two. That is, philanthropists just don’t [p. 193] know what various nonprofits actually accomplish. Instead, they only know what nonprofits are trying to accomplish, and they only know that based on what the nonprofits themselves tell them.”

p. 193:
“The signs that the lack of social progress is linked to capital market dysfunctions are unmistakable: fundraising remains the number-one [p. 194] challenge of the sector despite the fact that nonprofit leaders divert some 40 to 60% of their time from productive work to chasing after money; donations raised are almost always too small, too short, and too restricted to enhance productive capacity; most mid-caps are ensnared in the ‘social entrepreneur’s trap’ of focusing on today and neglecting tomorrow; and so on. So any meaningful progress we could make in the direction of helping the nonprofit capital market allocate funds as effectively as the private capital market does could translate into tremendous advances in extending social and economic opportunity.
“Indeed, enhancing nonprofit capital allocation is likely to improve people’s lives much more than, say, further increasing the total amount of donations. Why? Because capital allocation has a multiplier effect.”

“If we want to materially improve the performance and increase the impact of the nonprofit sector, we need to understand what’s preventing [p. 195] it from doing a better job of allocating philanthropic capital. And figuring out why nonprofit capital markets don’t work very well requires us to understand why the financial markets do such a better job.”

p. 197:
“When all is said and done, securities prices are nothing more than convenient approximations that market participants accept as a way of simplifying their economic interactions, with a full understanding that market prices are useful even when they are way off the mark, as they so often are. In fact, that’s the whole point of markets: to aggregate the imperfect and incomplete knowledge held by vast numbers of traders about how much various securities are worth and still make allocation choices that are better than we could without markets.
“Philanthropists face precisely the same problem: how to make better use of limited information to maximize output, in this case, social impact. Considering the dearth of useful tools available to donors today, the solution doesn’t have to be perfect or even all that good, at least at first. It just needs to improve the status quo and get better over time.
“Much of the solution, I believe, lies in finding useful adaptations of market mechanisms that will mitigate the effects of the same lack of reliable and comprehensive information about social sector performance. I would even go so far as to say that social enterprises can’t hope to realize their ‘one day, all children’ visions without a funding allocation system that acts more like a market.
“We can, and indeed do, make incremental improvements in nonprofit funding without market mechanisms. But without markets, I don’t see how we can fix the fragmentation problem or produce transformative social impact, such as ensuring that every child in America has a good education. The problems we face are too big and have too many moving parts to ignore the self-organizing dynamics of market economics. As Thomas Friedman said about the need to impose a carbon tax at a time of falling oil prices, ‘I’ve wracked my brain trying to think of ways to retool America around clean-power technologies without a price signal–i.e., a tax–and there are no effective ones.’”

p. 199:
“Prices enable financial markets to work the way nonprofit capital markets should–by sending informative signals about the most effective organizations so that money will flow to them naturally.”

p. 200:
[Quotes Kurtzman citing De Soto on the mystery of capital. Also see p. 209, below.]
“‘Solve the mystery of capital and you solve many seemingly intractable problems along with it.'”
[That’s from page 69 in Kurtzman, 2002.]

p. 201:
[Goldberg says he’s quoting Daniel Yankelovich here, but the footnote does not appear to have anything to do with this quote:]
“‘The first step is to measure what can easily be measured. The second is to disregard what can’t be measured, or give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily isn’t very important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.'”

Goldberg gives an example here of $10,000 invested with a 10% increase in value, compared with $10,000 put into a nonprofit. “But if the nonprofit makes good use of the money and, let’s say, brings the reading scores of 10 elementary school students up from below grade level to grade level, we can’t say how much my initial investment is ‘worth’ now. I could make the argument that the value has increased because the students have received a demonstrated educational benefit that is valuable to them. Since that’s the reason I made the donation, the achievement of higher scores must have value to me, as well.”

p. 202:
Goldberg wonders whether donations to nonprofits would be better conceived as purchases than investments.

p. 207:
Goldberg quotes Jon Gertner from the March 9, 2008, issue of the New York Times Magazine devoted to philanthropy:

“‘Why shouldn’t the world’s smartest capitalists be able to figure out more effective ways to give out money now? And why shouldn’t they want to make sure their philanthropy has significant social impact? If they can measure impact, couldn’t they get past the resistance that [Warren] Buffet highlighted and finally separate what works from what doesn’t?'”

p. 208:
“Once we abandon the false notions that financial markets are precision instruments for measuring unambiguous phenomena, and that the business and nonprofit sectors are based in mutually exclusive principles of value, we can deconstruct the true nature of the problems we need to address and adapt market-like mechanisms that are suited to the particulars of the social sector.
“All of this is a long way (okay, a very long way) of saying that even ordinal rankings of nonprofit investments can have tremendous value in choosing among competing donation opportunities, especially when the choices are so numerous and varied. If I’m a social investor, I’d really like to know which nonprofits are likely to produce ‘more’ impact and which ones are likely to produce ‘less.'”

“It isn’t necessary to replicate the complex workings of the modern stock markets to fashion an intelligent and useful nonprofit capital allocation mechanism. All we’re looking for is some kind of functional indication that would (1) isolate promising nonprofit investments from among the confusing swarm of too many seemingly worthy social-purpose organizations and (2) roughly differentiate among them based on the likelihood of ‘more’ or ‘less’ impact. This is what I meant earlier by increasing [p. 209] signals and decreasing noise.”
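The two functions Goldberg asks of an allocation mechanism can be sketched in a few lines. This is a hypothetical illustration only: the organization names, the ordinal 1-5 impact scores, and the screening threshold are all invented for the example, not drawn from the book.

```python
# Hypothetical sketch of Goldberg's two-step mechanism:
# (1) isolate promising organizations, (2) roughly rank by likely impact.
# All names, scores, and the threshold below are invented.

nonprofits = [
    {"name": "org_a", "impact_score": 4},  # ordinal 1-5 rating
    {"name": "org_b", "impact_score": 2},
    {"name": "org_c", "impact_score": 5},
    {"name": "org_d", "impact_score": 1},
]

# (1) Isolate: screen out organizations below a minimum score.
promising = [n for n in nonprofits if n["impact_score"] >= 3]

# (2) Differentiate: ordinal sort from "more" to "less" likely impact.
ranked = sorted(promising, key=lambda n: n["impact_score"], reverse=True)

print([n["name"] for n in ranked])  # ['org_c', 'org_a']
```

Even this crude screen-then-rank pattern delivers the "more signal, less noise" effect Goldberg describes, without pretending the scores support any arithmetic beyond ordering.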

p. 209:
Goldberg apparently didn’t read De Soto, as he attributes the mystery of capital to Kurtzman and claims it is solved via the collective intelligence of the wisdom of crowds. This completely misses the crucial value that transparent representations of structural invariance hold for market functionality. Goldberg is apparently offering a loose kind of market in which an aggregate stock index for nonprofits is built up from their various ordinal performance measures. I think I find a better way in my work, building more closely from De Soto (Fisher, 2002, 2003, 2005, 2007, 2009a, 2009b).

p. 231:
Goldberg quotes Harvard’s Allen Grossman (1999) on the cost-benefit boundaries of more effective nonprofit capital allocation:

“‘Is there a significant downside risk in restructuring some portion of the philanthropic capital markets to test the effectiveness of performance driven philanthropy? The short answer is, ‘No.’ The current reality is that most broad-based solutions to social problems have eluded the conventional and fragmented approaches to philanthropy. It is hard to imagine that experiments to change the system to a more performance driven and rational market would negatively impact the effectiveness of the current funding flows–and could have dramatic upside potential.'”

p. 232:
Quotes Douglas Hubbard’s How to Measure Anything book that Stenner endorsed, and Linacre and I didn’t.

p. 233:
Cites Stevens’ four levels of measurement and uses them to justify his position concerning ordinal rankings, recognizing that “we can’t add or subtract ordinals.”
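The point that "we can't add or subtract ordinals" is easy to demonstrate concretely. Ordinal scales are defined only up to monotone (order-preserving) transformations, so any conclusion drawn from sums or means must survive relabeling; the invented ratings below show that it doesn't.

```python
# Why ordinal arithmetic is not meaningful (Stevens' point):
# an order-preserving relabeling of the categories can reverse
# a comparison of means. The two groups of ratings are invented.

def mean(xs):
    return sum(xs) / len(xs)

group_a = [2, 2, 2]   # three mid-level ratings
group_b = [1, 1, 3]   # two low ratings, one high

# Under the raw labels, A "outperforms" B on average: 2.0 > 1.67.
assert mean(group_a) > mean(group_b)

# Apply a monotone relabeling -- equally valid for an ordinal scale,
# since only the order of the categories is defined.
relabel = {1: 1, 2: 2, 3: 10}
a2 = [relabel[x] for x in group_a]
b2 = [relabel[x] for x in group_b]

# Now B "outperforms" A (4.0 > 2.0): the conclusion flipped.
assert mean(b2) > mean(a2)
```

Interval measures, by contrast, are defined up to linear transformations, under which comparisons of means are invariant; that invariance is what the Rasch work cited below supplies.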

pp. 233-5:
Justifies ordinal measures via example of Google’s PageRank algorithm. [I could connect from here using Mary Garner’s (2009) comparison of PageRank with Rasch.]
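For readers unfamiliar with the algorithm Goldberg leans on, PageRank can be sketched as a simple power iteration over a link graph. The three-page graph and the parameter values below are illustrative assumptions, not Google's production implementation.

```python
# A minimal power-iteration sketch of the PageRank idea: a page's rank
# is the stationary share of a random surfer's time, with damping d.
# The tiny three-page link graph is invented for illustration.

def pagerank(links, d=0.85, iters=100):
    """links: {page: [pages it links to]}. Returns rank per page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += d * share  # p passes rank to pages it links to
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" is linked to by both "a" and "b", so it ends up ranked highest.
```

Note that what comes out is an interval-like stationary distribution, not merely a rank order, which is part of what makes Garner's Rasch comparison apt.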

p. 236:
Goldberg tries to justify the use of ordinal measures by citing their widespread use in social science and health care. He conveniently ignores the fact that virtually all of the same problems and criticisms that apply to philanthropic capital markets also apply in these areas. In not grasping the fundamental value of De Soto’s concept of transferable and transparent representations, and in knowing nothing of Rasch measurement, he was unable to properly evaluate the potential of ordinal data’s role in the formation of philanthropic capital markets. Ordinal measures aren’t just inadequate; they represent a dangerous diversion of resources into systems that take on lives of their own, creating a new layer of dysfunctional relationships that will be hard to overcome.

p. 261 [Goldberg shows here his complete ignorance about measurement. He is apparently totally unaware of the work that is in fact most relevant to his cause, going back to Thurstone in the 1920s, Rasch in the 1950s-1970s, and Wright in the 1960s to 2000. Both of the problems he identifies have long since been solved in theory and in practice in a wide range of domains in education, psychology, health care, etc.]:
“Having first studied performance evaluation some 30 years ago, I feel confident in saying that all the foundational work has been done. There won’t be a ‘eureka!’ breakthrough where someone finally figures out the one true way to gauge nonprofit effectiveness.
“Indeed, I would venture to say that we know virtually everything there is to know about measuring the performance of nonprofit organizations with only two exceptions: (1) How can we compare nonprofits with different missions or approaches, and (2) how can we make actionable performance assessments common practice for growth-ready mid-caps and readily available to all prospective donors?”

p. 263:
“Why would a social entrepreneur divert limited resources to impact assessment if there were no prospects it would increase funding? How could an investor who wanted to maximize the impact of her giving possibly put more golden eggs in fewer impact-producing baskets if she had no way to distinguish one basket from another? The result: there’s no performance data to attract growth capital, and there’s no growth capital to induce performance measurement. Until we fix that Catch-22, performance evaluation will not become an integral part of social enterprise.”

pp. 264-5:
Long quotation from Ken Berger at Charity Navigator on their ongoing efforts at developing an outcome measurement system. [wpf, 8 Nov 2009: I read the passage quoted by Goldberg in Berger’s blog when it came out and have been watching and waiting ever since for the new system. wpf, 8 Feb 2012: The new system has been online for some time but still does not include anything on impacts or outcomes. It has expanded from a sole focus on financials to also include accountability and transparency. But it does not yet address Goldberg’s concerns as there still is no way to tell what works from what doesn’t.]

p. 265:
“The failure of the social sector to coordinate independent assets and create a whole that exceeds the sum of its parts results from an absence of ‘platform leadership’: ‘the ability of a company to drive innovation around a particular platform technology at the broad industry level.’ The object is to multiply value by working together: ‘the more people who use the platform products, the more incentives there are for complement producers to introduce more complementary products, causing a virtuous cycle.’” [Quotes here from Cusumano & Gawer (2002). The concept of platform leadership speaks directly to the system of issues raised by Miller & O’Leary (2007) that must be addressed to form effective HSN capital markets.]

p. 266:
“…the nonprofit sector has a great deal of both money and innovation, but too little available information about too many organizations. The result is capital fragmentation that squelches growth. None of the stakeholders has enough horsepower on its own to impose order on this chaos, but some kind of realignment could release all of that pent-up potential energy. While command-and-control authority is neither feasible nor desirable, the conditions are ripe for platform leadership.”

“It is doubtful that the IMPEX could amass all of the resources internally needed to build and grow a virtual nonprofit stock market that could connect large numbers of growth-capital investors with large numbers of [p. 267] growth-ready mid-caps. But it might be able to convene a powerful coalition of complementary actors that could achieve a critical mass of support for performance-based philanthropy. The challenge would be to develop an organization focused on filling the gaps rather than encroaching on the turf of established firms whose participation and innovation would be required to build a platform for nurturing growth of social enterprise.”

pp. 268-9:
Intermediated nonprofit capital market shifts fundraising burden from grantees to intermediaries.

p. 271:
“The surging growth of national donor-advised funds, which simplify and reduce the transaction costs of methodical giving, exemplifies the kind of financial innovation that is poised to leverage market-based investment guidance.” [President of Schwab Charitable quoted as wanting to make charitable giving information- and results-driven.]

p. 272:
Rating agencies and organizations: Charity Navigator, Guidestar, Wise Giving Alliance.
Online donor rankings: GlobalGiving, GreatNonprofits, SocialMarkets
Evaluation consultants: Mathematica

Google’s mission statement: “to organize the world’s information and make it universally accessible and useful.”

p. 273:
Exhibit 9.4 Impact Index Whole Product
Image of stakeholders circling IMPEX:
Trading engine
Listed nonprofits
Data producers and aggregators
Trading community
Researchers and analysts
Investors and advisors
Government and business supporters

p. 275:
“That’s the starting point for replication [of social innovations that work]: finding and funding; matching money with performance.”

[WPF bottom line: Because Goldberg misses De Soto’s point about transparent representations resolving the mystery of capital, he is unable to see his way toward making the nonprofit capital markets function more like financial capital markets, with the difference being the focus on the growth of human, social, and natural capital. Though Goldberg intuits good points about the wisdom of crowds, he doesn’t know enough about the flaws of ordinal measurement relative to interval measurement, or about the relatively easy access to interval measures that can be had, to do the job.]


Cusumano, M. A., & Gawer, A. (2002, Spring). The elements of platform leadership. MIT Sloan Management Review, 43(3), 58.

De Soto, H. (2000). The mystery of capital: Why capitalism triumphs in the West and fails everywhere else. New York: Basic Books.

Fisher, W. P., Jr. (2002, Spring). “The Mystery of Capital” and the human sciences. Rasch Measurement Transactions, 15(4), 854.

Fisher, W. P., Jr. (2003). Measurement and communities of inquiry. Rasch Measurement Transactions, 17(3), 936-938.

Fisher, W. P., Jr. (2005). Daredevil barnstorming to the tipping point: New aspirations for the human sciences. Journal of Applied Measurement, 6(3), 173-179.

Fisher, W. P., Jr. (2007, Summer). Living capital metrics. Rasch Measurement Transactions, 21(1), 1092-1093.

Fisher, W. P., Jr. (2009a). Bringing human, social, and natural capital to life: Practical consequences and opportunities. In M. Wilson, K. Draney, N. Brown & B. Duckor (Eds.), Advances in Rasch Measurement, Vol. Two (in press). Maple Grove, MN: JAM Press.

Fisher, W. P., Jr. (2009b, November). Invariance and traceability for measures of human, social, and natural capital: Theory and application. Measurement (Elsevier), 42(9), 1278-1287.

Garner, M. (2009, Autumn). Google’s PageRank algorithm and the Rasch measurement model. Rasch Measurement Transactions, 23(2), 1201-1202.

Grossman, A. (1999). Philanthropic social capital markets: Performance driven philanthropy (Social Enterprise Series 12 No. 00-002). Harvard Business School Working Paper.

Kotter, J. (1996). Leading change. Cambridge, Massachusetts: Harvard Business School Press.

Kurtzman, J. (2002). How the markets really work. New York: Crown Business.

Miller, P., & O’Leary, T. (2007, October/November). Mediating instruments and making markets: Capital budgeting, science and the economy. Accounting, Organizations, and Society, 32(7-8), 701-34.

Moore’s Law at 50

May 13, 2015

Thomas Friedman interviewed Gordon Moore on the occasion of the 50th anniversary of Moore’s 1965 article predicting that computing power would increase exponentially at little additional cost. Moore’s ten-year prediction for the doubling rate of the number of transistors on microchips held up, and has now, with small adjustments, guided investments and expectations in electronics for five decades.

Friedman makes an especially important point, saying:

“But let’s remember that it [Moore’s Law] was enabled by a group of remarkable scientists and engineers, in an America that did not just brag about being exceptional, but invested in the infrastructure and basic scientific research, and set the audacious goals, to make it so. If we want to create more Moore’s Law-like technologies, we need to invest in the building blocks that produced that America.”

These kinds of calls for investments in infrastructure and basic research, for new audacious goals, and for more Moore’s Law-like technologies are, of course, some of the primary and recurring themes of this blog (here, here, here, and here) and presentations and publications of the last several years. For instance, Miller and O’Leary’s (2007) close study of how Moore’s Law has aligned and coordinated investments in the electronics industry has been extrapolated into the education context (Fisher, 2012; Fisher & Stenner, 2011).

Education already has over 60 years’ experience with a close parallel to Moore’s Law in reading measurement. Stenner’s Law retrospectively identifies exactly the same doubling period, from 1960 to 2010, in the growing numbers of children’s reading abilities measured in a common (or equatable) unit with known uncertainty and personalized consistency indicators. Knowledge of this kind has enabled manufacturers, suppliers, marketers, customers, and other stakeholders in the electronics industry to plan five and ten years into the future, preparing products and markets to take advantage of increased power and speed at the same or lower cost. Similarly, that same kind of knowledge could be used in education, health care, social services, and natural resource management to define the rules, roles, and responsibilities of actors and institutions involved in literacy, health, community, and natural capital markets.
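The force of a fixed doubling period is easy to underestimate, so a back-of-the-envelope calculation may help. The two-year doubling period used here is the commonly cited figure for Moore's Law; the function itself applies to any quantity growing this way, whether transistor counts or numbers of reading measures.

```python
# Back-of-the-envelope growth under a fixed doubling period, the
# pattern shared by Moore's Law and (per this post) Stenner's Law.
# The two-year period is the commonly cited Moore's Law figure.

def doublings(years, doubling_period):
    """Number of doublings that occur over the given span of years."""
    return years / doubling_period

def growth_factor(years, doubling_period):
    """Total multiplicative growth over the span."""
    return 2 ** doublings(years, doubling_period)

# Fifty years at a two-year doubling period: 25 doublings,
# i.e. a growth factor of roughly 33 million.
factor = growth_factor(50, 2)
print(f"{doublings(50, 2):.0f} doublings -> {factor:,.0f}x growth")
```

A quantity that doubles 25 times grows by a factor of 2^25 ≈ 33.5 million, which is why five decades of steady doubling can transform an industry's planning horizon.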

Reading instruction, for example, requires text complexities to be matched to reader abilities at a comprehension rate that challenges but does not discourage the reader. Uniform grade-level textbooks are often too easy for a third of a given classroom, and too hard for another third. Individualized instruction by teachers in classrooms of 25 or more students is too cumbersome to implement. Connecting classroom reading assessments with known text complexity measures informed by judicious teacher input sets the stage for the realization of new potentials in educational outcomes. Electronic resources tapping existing text complexity measures for millions of articles and books connect individual students’ high stakes and classroom assessments in a common instructional framework (for instance, see here for an offering from Pearson). As the number of student reading measures made in a common unit continues to grow exponentially, capacities for connecting readers to texts, and for communicating about what works and what doesn’t in education, will grow as well.

This model is exactly the kind of infrastructure, basic scientific research, and audacious goal setting that’s needed if we are to succeed in creating more Moore’s Law-like technologies. If we as a society made the decision to invest deliberately, intentionally, and massively in infrastructure of this kind across education, health care, social services, and natural resource management, who knows what kinds of powerful results might be attained?


Fisher, W. P., Jr. (2012). Measure and manage: Intangible assets metric standards for sustainability. In J. Marques, S. Dhiman & S. Holt (Eds.), Business administration education: Changes in management and leadership strategies (pp. 43-63). New York: Palgrave Macmillan.

Fisher, W. P., Jr., & Stenner, A. J. (2011, August 31 to September 2). A technology roadmap for intangible assets metrology. In Fundamentals of measurement science. International Measurement Confederation (IMEKO) TC1-TC7-TC13 Joint Symposium, Jena, Germany.

Miller, P., & O’Leary, T. (2007, October/November). Mediating instruments and making markets: Capital budgeting, science and the economy. Accounting, Organizations, and Society, 32(7-8), 701-734.

Professional capital as product of human, social, and decisional capitals

April 18, 2014

Leslie Pendrill gave me a tip on a very interesting book, Professional Capital, by Michael Fullan. The author’s distinction between business capital and professional capital is somewhat akin to my distinction (Fisher, 2011) between dead and living capital. The primary point of contact between Fullan’s sense of capital and mine stems from his inclusion of social and decisional capital as crucial enhancements of human capital.

Of course, defining human capital as talent, as Fullan does, is not going to go very far toward supporting generalized management of it. Efficient markets require that capital be represented in transparent and universally available instruments (common currencies or metrics). Transparent, systematic representation makes it possible to act on capital abstractly, in laboratories, courts, and banks, without having to do anything at all with the physical resource itself. (Contrast this with socialism’s focus on controlling the actual concrete resources, and the resulting empty store shelves, unfulfilled five-year plans, pogroms and purges, and overall failure.) Universally accessible transparent representations make capital additive (amounts can be accrued), divisible (it can be divided into shares), and mobile (it can be moved around in networks accepting the currency/metric). (See references below for more information.)

Fullan cites research by Carrie Leanna at the University of Pittsburgh showing that teachers with high social capital increased their students’ math scores by 5.7% more than teachers with low social capital did. The teachers with the highest skill levels (most human capital) and high social capital did best overall. Low-ability teachers in schools with high social capital did as well as average teachers.

This is great, but the real cream of Fullan’s argument concerns the importance of what he calls decisional capital. I doubt this will turn out to be entirely separate from human capital, but his point is well taken: the capacity to consistently engage with students with competence, good judgment, insight, inspiration, creative improvisation, and openness to feedback in a context of shared responsibility is vital. All of this is quite consistent with recent work on collective intelligence (Fischer, Giaccardi, Eden, et al., 2005; Hutchins, 2010; Magnus, 2007; Nersessian, 2006; Woolley, Chabris, Pentland, et al., 2010; Woolley and Fuchs, 2011).

And, of course, you can see this coming: decisional capital is precisely what better measurement provides. Integrated formative and summative assessment informs decision making at the individual level in ways that are otherwise impossible. When those assessments are expressed in uniformly interpretable and applicable units of measurement, collective intelligence and social capital are boosted in the ways documented by Leana as enhancing teacher performance and student outcomes.

Anyway, just wanted to share that. It fits right in with the trading zone concept I presented at IOMW (the slides are available on my LinkedIn page).

Fischer, G., Giaccardi, E., Eden, H., Sugimoto, M., & Ye, Y. (2005). Beyond binary choices: Integrating individual and social creativity. International Journal of Human-Computer Studies, 63, 482-512.

Fisher, W. P., Jr. (2002, Spring). “The Mystery of Capital” and the human sciences. Rasch Measurement Transactions, 15(4), 854.

Fisher, W. P., Jr. (2003). Measurement and communities of inquiry. Rasch Measurement Transactions, 17(3), 936-938.

Fisher, W. P., Jr. (2004a, Thursday, January 22). Bringing capital to life via measurement: A contribution to the new economics. In R. Smith (Chair), Session 3.3B. Rasch Models in Economics and Marketing. Second International Conference on Measurement. Perth, Western Australia: Murdoch University.

Fisher, W. P., Jr. (2004b, Friday, July 2). Relational networks and trust in the measurement of social capital. Twelfth International Objective Measurement Workshops. Cairns, Queensland, Australia: James Cook University.

Fisher, W. P., Jr. (2005a). Daredevil barnstorming to the tipping point: New aspirations for the human sciences. Journal of Applied Measurement, 6(3), 173-179.

Fisher, W. P., Jr. (2005b, August 1-3). Data standards for living human, social, and natural capital. In Session G: Concluding Discussion, Future Plans, Policy, etc. Conference on Entrepreneurship and Human Rights. Pope Auditorium, Lowenstein Bldg, Fordham University.

Fisher, W. P., Jr. (2007, Summer). Living capital metrics. Rasch Measurement Transactions, 21(1), 1092-1093.

Fisher, W. P., Jr. (2008a, 3-5 September). New metrological horizons: Invariant reference standards for instruments measuring human, social, and natural capital. 12th IMEKO TC1-TC7 Joint Symposium on Man, Science, and Measurement. Annecy, France: University of Savoie.

Fisher, W. P., Jr. (2008b, March 28). Rasch, Frisch, two Fishers and the prehistory of the Separability Theorem. In W. P. Fisher, Jr. (Ed.), Session 67.056, Reading Rasch Closely: The History and Future of Measurement. American Educational Research Association, New York City: Rasch Measurement SIG. [Paper available at SSRN.]

Fisher, W. P., Jr. (2009a, November). Invariance and traceability for measures of human, social, and natural capital: Theory and application. Measurement, 42(9), 1278-1287.

Fisher, W. P., Jr. (2009b). NIST Critical national need idea White Paper: Metrological infrastructure for human, social, and natural capital. Washington, DC: National Institute of Standards and Technology (11 pages).

Fisher, W. P., Jr. (2010a, 22 November). Meaningfulness, measurement, value seeking, and the corporate objective function: An introduction to new possibilities. Sausalito, California.

Fisher, W. P., Jr. (2010b). Measurement, reduced transaction costs, and the ethics of efficient markets for human, social, and natural capital. Bridge to Business Postdoctoral Certification, Freeman School of Business: Tulane University.

Fisher, W. P., Jr. (2010c). The standard model in the history of the natural sciences, econometrics, and the social sciences. Journal of Physics: Conference Series, 238(1).

Fisher, W. P., Jr. (2011a). Bringing human, social, and natural capital to life: Practical consequences and opportunities. In N. Brown, B. Duckor, K. Draney & M. Wilson (Eds.), Advances in Rasch Measurement, Vol. 2 (pp. 1-27). Maple Grove, MN: JAM Press.

Fisher, W. P., Jr. (2011b). Measuring genuine progress by scaling economic indicators to think global & act local: An example from the UN Millennium Development Goals project. [Online]. (Accessed 18 January 2011).

Fisher, W. P., Jr. (2012). Measure and manage: Intangible assets metric standards for sustainability. In J. Marques, S. Dhiman & S. Holt (Eds.), Business administration education: Changes in management and leadership strategies (pp. 43-63). New York: Palgrave Macmillan.

Fisher, W. P., Jr., & Stenner, A. J. (2005, Tuesday, April 12). Creating a common market for the liberation of literacy capital. In R. E. Schumacker (Ed.), Rasch Measurement: Philosophical, Biological and Attitudinal Impacts. American Educational Research Association. Montreal, Canada: Rasch Measurement SIG.

Fisher, W. P., Jr., & Stenner, A. J. (2011a, January). Metrology for the social, behavioral, and economic sciences (Social, Behavioral, and Economic Sciences White Paper Series). Washington, DC: National Science Foundation. (Accessed 12 January 2014).

Fisher, W. P., Jr., & Stenner, A. J. (2011b, August 31 to September 2). A technology roadmap for intangible assets metrology. In Fundamentals of measurement science. International Measurement Confederation (IMEKO) TC1-TC7-TC13 Joint Symposium. Jena, Germany.

Hutchins, E. (2010). Cognitive ecology. Topics in Cognitive Science, 2, 705-715.

Magnus, P. D. (2007). Distributed cognition and the task of science. Social Studies of Science, 37(2), 297-310.

Nersessian, N. J. (2006, December). Model-based reasoning in distributed cognitive systems. Philosophy of Science, pp. 699-709.

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010, 29 October). Evidence for a collective intelligence factor in the performance of human groups. Science, 330, 686-688.

Woolley, A. W., & Fuchs, E. (2011, September-October). Collective intelligence in the organization of science. Organization Science, 22(5), 1359-1367.

Convergence, Divergence, and the Continuum of Field-Organizing Activities

March 29, 2014

So what are the possibilities for growing out green shoots from the seeds and roots of an ethical orientation to keeping the dialogue going? What kinds of fruits might be expected from cultivating a common ground for choosing discourse over violence? What are the consequences for practice of planting this seed in this ground?

The same participant in the conversation earlier this week at Convergence XV who spoke of the peace building processes taking place around the world also described a developmental context for these issues of mutual understanding. The work of Theo Dawson and her colleagues (Dawson, 2002a, 2002b, 2004; Dawson, Fischer, and Stein, 2006) is especially pertinent here. Their comparisons of multiple approaches to cognitive and moral development have provided clear and decisive theory, evidence, and instrumentation concerning the conceptual integrations that take place in the evolution of hierarchical complexity.

Conceptual integrations occur when previously tacit, unexamined, and assumed principles informing a sphere of operations are brought into conscious awareness and are transformed into explicit objects of new operations. Developmentally, this is the process of discovery that takes place from the earliest stages of life, in utero. Organisms of all kinds mature in a process of interaction with their environments. Young children at the “terrible two” stage, for instance, are realizing that anything they can detach from, whether by throwing or by denying (“No!”), is not part of them. Only a few months earlier, the same children will have been fascinated with their fingers and toes, realizing these are parts of their own bodies, often by putting them in their mouths.

There are as many opportunities for conceptual integrations between the ages of 21 and 99 as there are between birth and 21. Developmental differences in perspectives can make for riotously comic situations, and can also lead to conflicts, even when the participants agree on more than they disagree on. And so here we arrive at a position from which we can get a grip on how to integrate convergence and divergence in a common framework that follows from the prior post’s brief description of the ontological method’s three moments of reduction, application, and deconstruction.


Woolley and colleagues (Woolley, et al., 2010; Woolley and Fuchs, 2011) describe a continuum of five field-organizing activities categorizing the types of information needed for effective collective intelligence (Figure 1). Four of these five activities (defining, bounding, opening, and bridging) vary in the convergent versus divergent processes they bring to bear in collective thinking. Defining and bounding are convergent processes that inform judgment and decision making. These activities are especially important in the emergence of a new field or organization, when the object of interest and the methods of recognizing and producing it are in contention. Opening and bridging activities, in contrast, diverge from accepted definitions and transgress boundaries in the creative process of pushing into new areas. Undergirding the continuum as a whole is the fifth activity, grounding, which serves as a theory- and evidence-informed connection to meaningful and useful results.

There are instances in which defining and bounding activities have progressed to the point that the explanatory power of theory enables the calibration of test items from knowledge of the component parts included in those items. The efficiencies and cost reductions gained from computer-based item generation and administration are significant. Research in this area takes a variety of approaches; for more information, see Daniel and Embretson (2010), De Boeck and Wilson (2004), Stenner, et al. (2013), and others.
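To convey the logic of calibrating items from their component parts, here is a minimal sketch in the spirit of explanatory item response models such as the linear logistic test model; it is not drawn from any of the programs cited above, and every number, item, and component label in it is invented for illustration.

```python
import numpy as np

# Hypothetical design matrix: each row is an item, each column flags a
# cognitive component assumed to contribute to that item's difficulty.
Q = np.array([
    [1, 0, 0],   # e.g., single-digit addition only
    [1, 1, 0],   # addition with carrying
    [1, 1, 1],   # carrying plus multi-step reasoning
    [1, 0, 1],
    [0, 1, 1],
])

# Empirically calibrated item difficulties (logits), also invented.
b = np.array([-1.2, 0.1, 1.5, 0.4, 1.1])

# Least-squares estimate of each component's contribution to difficulty.
eta, *_ = np.linalg.lstsq(Q, b, rcond=None)

# Theory-based difficulty predictions for the same (or newly generated) items.
b_hat = Q @ eta
print(np.round(eta, 2))
print(np.round(b_hat, 2))
```

When the component weights reproduce the empirical calibrations closely enough, new items can be assembled from the components with difficulties known in advance, which is what makes automated item generation economical.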

The value of clear definitions and boundaries in this context stems in large part from the capacity to identify exceptions that prove (test) the rules, and that then also provide opportunities for opening and bridging. Kuhn (1961, p. 180; 1977, p. 205) noted that

To the extent that measurement and quantitative technique play an especially significant role in scientific discovery, they do so precisely because, by displaying significant anomaly, they tell scientists when and where to look for a new qualitative phenomenon.

Rasch (1960, p. 124) similarly understood that “Once a law has been established within a certain field then the law itself may serve as a tool for deciding whether or not added stimuli and/or objects belong to the original group.” Rasch gives the example of mechanical force applied to various masses with resulting accelerations, introducing the idea that one of the instruments might exert magnetic as well as mechanical force, with noticeable effects on steel masses but not on wooden ones. Rasch suggests that exploration of these anomalies may result in the discovery of other similar instruments that vary in the extent to which they also exert the new force, with the possible consequence of discovering a law of magnetic attraction.

There has been intense interest in the assessment of divergent inconsistencies in measurement research and practice following in the wake of Rasch’s early work in psychological and social measurement (examples from a very large literature in this area include Karabatsos and Ullrich, 2002, and Smith and Plackner, 2009). Andrich, for instance, makes explicit reference to Kuhn (1961), saying, “…the function of a model for measurement…is to disclose anomalies, not merely to describe data” (Andrich, 2002, p. 352; also see Andrich, 1996, 2004, 2011). Typical software for applying Rasch models (Andrich, et al., 2013; Linacre, 2011, 2013; Wu, et al., 2007) accordingly provides many more qualitative numbers evaluating potential anomalies than quantitative measuring numbers. These qualitative numbers (digits that do not stand for something substantive that adds up in a constant unit) include uncertainty and confidence indicators that vary with sample size; mean square and standardized model fit statistics; and principal components analysis factor loadings and eigenvalues.
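The flavor of these mean square fit statistics can be conveyed with a small simulation. The sketch below implements only the textbook infit and outfit formulas for the dichotomous Rasch model, not the output of any particular package, and the person abilities and item difficulties are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

theta = rng.normal(0.0, 1.0, size=200)   # invented person abilities (logits)
b = np.linspace(-1.5, 1.5, 10)           # invented item difficulties (logits)

# Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random(P.shape) < P).astype(float)   # responses simulated from the model

# Squared standardized residuals and the two classic item fit statistics.
W = P * (1.0 - P)                             # model variance of each response
Z2 = (X - P) ** 2 / W
outfit = Z2.mean(axis=0)                      # unweighted mean square
infit = ((X - P) ** 2).sum(axis=0) / W.sum(axis=0)   # information-weighted

print(np.round(outfit, 2))
print(np.round(infit, 2))
```

Because the data here are simulated from the model itself, both statistics hover near 1.0; values well above 1 would flag noisy, anomalous responses, and values well below 1 would flag overly deterministic, Guttman-like patterns.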

The opportunities for divergent openings onto new qualitative phenomena provided by data consistency evaluations are complemented in Rasch measurement by a variety of bridging activities. Different instruments intended to measure the same or closely related constructs may often be equated or co-calibrated, so they measure in a common unit (among many publications in this area, see Dawson, 2002a, 2004; Fisher, 1997; Fisher, et al., 1995; Massof and Ahmadian, 2007; Smith and Taylor, 2004). Similarly, the same instrument calibrated on different samples from the same population may exhibit consistent properties across those samples, offering further evidence of a potential for defining a common unit (Fisher, 1999).
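A minimal sketch of the simplest such bridging operation, common-item equating under the Rasch model, may help; the two sets of calibrations below are invented, and real equating studies would of course also check the stability of the common items before trusting the link.

```python
import numpy as np

# Invented difficulty calibrations for five items shared by two test forms,
# each estimated in its own (arbitrarily centered) logit frame of reference.
form_a = np.array([-1.5, -0.5, 0.0, 0.8, 1.2])
form_b = np.array([-1.1, -0.2, 0.4, 1.3, 1.6])   # same items, Form B frame

# Under the Rasch model the two frames can differ only by a translation,
# so the linking constant is the mean difference on the common items.
shift = (form_a - form_b).mean()

# Rescale Form B calibrations into the Form A frame of reference.
form_b_linked = form_b + shift
print(round(shift, 2), np.round(form_b_linked, 2))
```

Once the shift is applied, measures from either form are expressed in a common unit, which is exactly the property that lets different instruments measuring the same construct report comparable quantities.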

Other opening and bridging activities include capacities (a) to drop items or questions from a test or survey, or to add them; (b) to adaptively administer subsets of custom-selected items from a large bank; and (c) to adjust measures for the leniency or severity of judges assigning ratings, all of which can be done, within the limits of the relevant definitions and boundaries, without compromising the unit of comparison. For methodological overviews, see Bond and Fox (2007), Wilson (2005), and others.

The various field-organizing activities spanning the range from convergence to divergence are implicated not only in research on collective thinking, but also in the history and philosophy of science. Galison and colleagues (Galison, 1997, 1999; Galison and Stump, 1996) closely examine positivist and antipositivist perspectives on the unity of science, finding their conclusions inconsistent with the evidence of history. A postpositivist perspective (Galison, 1999, p. 138), in contrast, finds “distinct communities and incommensurable beliefs” between and often within the areas of theory, experiment, and instrument-making. But instead of finding these communities “utterly condemned to passing one another without any possibility of significant interaction,” Galison (1999, p. 138) observes that “two groups can agree on rules of exchange even if they ascribe utterly different significance to the objects being exchanged; they may even disagree on the meaning of the exchange process itself.” In practice, “trading partners can hammer out a local coordination despite vast global differences.”

In accord with Woolley and colleagues’ work on convergent and divergent field-organizing activities, Galison (1999, p. 137) concludes, then, that “science is disunified, and—against our first intuitions—it is precisely the disunification of science that underpins its strength and stability.” Galison (1997, pp. 843-844) concludes with a section entitled “Cables, Bricks, and Metaphysics” in which the postpositivist disunity of science is seen to provide its unexpected coherence from the simultaneously convergent and divergent ways theories, experiments, and instruments interact.

But as Galison recognizes, a metaphor based on the intertwined strands in a cable is too mechanical to support the dynamic processes by which order arises from particular kinds of noise and chaos. Not cited by Galison is a burgeoning literature on the phenomenon of noise-induced order termed stochastic resonance (Andò and Graziani, 2000; Benzi, et al., 1981; Dykman and McClintock, 1998; Fisher, 1992, 2011; Hess and Albano, 1998; Repperger and Farris, 2010). Where the metaphor of a cable’s strands breaks down, stochastic resonance provides multiple ways of illustrating how the disorder of finite and partially independent processes can give rise to an otherwise inaccessible order and structure.

Stochastic resonance occurs when noise amplifies a weak signal that would otherwise go undetected, sometimes to very large effect. The noise has to be of a particular kind and intensity, and too much of it will drown out rather than amplify the signal. Examples include the interaction of neuronal ensembles in the brain (Chialvo, Longtin, and Müller-Gerking, 1996), speech recognition (Moskowitz and Dickinson, 2002), and perceptual interpretation (Riani and Simonotto, 1994). Given that Rasch’s models for measurement are stochastic versions of Guttman’s deterministic models (Andrich, 1985), the question has been raised as to how Rasch’s seemingly weaker assumptions could lead to a measurement model that is stronger than Guttman’s (Duncan, 1984, p. 220). Stochastic resonance may provide an essential clue to this puzzle (Fisher, 1992, 2011).
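The basic phenomenon can be demonstrated with a toy threshold detector, the standard textbook illustration of stochastic resonance; the signal, threshold, and noise levels below are invented, and the sketch is not a model of any of the systems cited above.

```python
import numpy as np

rng = np.random.default_rng(7)

t = np.linspace(0.0, 10.0, 5000)
signal = 0.4 * np.sin(2 * np.pi * t)   # weak periodic signal, peak 0.4
threshold = 1.0                        # detector fires only above 1.0

def detection_correlation(noise_sd):
    """Correlation between the hidden signal and the detector's firings."""
    crossings = (signal + rng.normal(0.0, noise_sd, t.size)) > threshold
    if crossings.std() == 0:
        return 0.0   # detector never (or always) fires: no information passes
    return float(np.corrcoef(signal, crossings.astype(float))[0, 1])

# Too little noise and the subthreshold signal is invisible; a moderate dose
# of noise carries it across the threshold; heavy noise drowns it out again.
for sd in (0.05, 0.5, 5.0):
    print(sd, round(detection_correlation(sd), 3))
```

The non-monotonic pattern, with detection peaking at an intermediate noise level, is the signature of stochastic resonance: the disorder is doing constructive work that neither a noiseless nor a noise-saturated system can do.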

Another description of what might be a manifestation of stochastic resonance akin to that brought up by Galison arises in Berg and Timmermans’ (2000, p. 56) study of the constitution of universalities in a medical network. They note that, “Paradoxically, then, the increased stability and reach of this network was not due to more (precise) instructions: the protocol’s logistics could thrive only by parasitically drawing upon its own disorder.” Much the same has been said about the behaviors of markets (Mandelbrot, 2004), bringing us back to the topic of the day at Convergence XV earlier this week. I’ll have more to say on this issue of universalities constituted via noise-induced order in due course.


Andò, B., & Graziani, S. (2000). Stochastic resonance theory and applications. New York: Kluwer Academic Publishers.

Andrich, D. (1985). An elaboration of Guttman scaling with Rasch models for measurement. In N. B. Tuma (Ed.), Sociological methodology 1985 (pp. 33-80). San Francisco, California: Jossey-Bass.

Andrich, D. (1996). Measurement criteria for choosing among models with graded responses. In A. von Eye & C. Clogg (Eds.), Categorical variables in developmental research: Methods of analysis (pp. 3-35). New York: Academic Press, Inc.

Andrich, D. (2002). Understanding resistance to the data-model relationship in Rasch’s paradigm: A reflection for the next generation. Journal of Applied Measurement, 3(3), 325-359.

Andrich, D. (2004, January). Controversy and the Rasch model: A characteristic of incompatible paradigms? Medical Care, 42(1), I-7–I-16.

Andrich, D. (2011). Rating scales and Rasch measurement. Expert Reviews in Pharmacoeconomics Outcome Research, 11(5), 571-585.

Andrich, D., Lyne, A., Sheridan, B., & Luo, G. (2013). RUMM 2030: Rasch unidimensional models for measurement. Perth, Australia: RUMM Laboratory Pty Ltd.

Benzi, R., Sutera, A., & Vulpiani, A. (1981). The mechanism of stochastic resonance. Journal of Physics. A. Mathematical and General, 14, L453-L457.

Berg, M., & Timmermans, S. (2000). Order and their others: On the constitution of universalities in medical work. Configurations, 8(1), 31-61.

Bond, T., & Fox, C. (2007). Applying the Rasch model: Fundamental measurement in the human sciences, 2d edition. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Chialvo, D., Longtin, A., & Müller-Gerking, J. (1996). Stochastic resonance in models of neuronal ensembles revisited [Electronic version].

Daniel, R. C., & Embretson, S. E. (2010). Designing cognitive complexity in mathematical problem-solving items. Applied Psychological Measurement, 34(5), 348-364.

Dawson, T. L. (2002a, Summer). A comparison of three developmental stage scoring systems. Journal of Applied Measurement, 3(2), 146-89.

Dawson, T. L. (2002b, March). New tools, new insights: Kohlberg’s moral reasoning stages revisited. International Journal of Behavioral Development, 26(2), 154-66.

Dawson, T. L. (2004, April). Assessing intellectual development: Three approaches, one sequence. Journal of Adult Development, 11(2), 71-85.

Dawson, T. L., Fischer, K. W., & Stein, Z. (2006). Reconsidering qualitative and quantitative research approaches: A cognitive developmental perspective. New Ideas in Psychology, 24, 229-239.

De Boeck, P., & Wilson, M. (Eds.). (2004). Explanatory item response models: A generalized linear and nonlinear approach. Statistics for Social and Behavioral Sciences). New York: Springer-Verlag.

Duncan, O. D. (1984). Notes on social measurement: Historical and critical. New York: Russell Sage Foundation.

Dykman, M. I., & McClintock, P. V. E. (1998, January 22). What can stochastic resonance do? Nature, 391(6665), 344.

Fisher, W. P., Jr. (1992, Spring). Stochastic resonance and Rasch measurement. Rasch Measurement Transactions, 5(4), 186-187.

Fisher, W. P., Jr. (1997). Physical disability construct convergence across instruments: Towards a universal metric. Journal of Outcome Measurement, 1(2), 87-113.

Fisher, W. P., Jr. (1999). Foundations for health status metrology: The stability of MOS SF-36 PF-10 calibrations across samples. Journal of the Louisiana State Medical Society, 151(11), 566-578.

Fisher, W. P., Jr. (2011). Stochastic and historical resonances of the unit in physics and psychometrics. Measurement: Interdisciplinary Research & Perspectives, 9, 46-50.

Fisher, W. P., Jr., Harvey, R. F., Taylor, P., Kilgore, K. M., & Kelly, C. K. (1995, February). Rehabits: A common language of functional assessment. Archives of Physical Medicine and Rehabilitation, 76(2), 113-122.

Galison, P. (1997). Image and logic: A material culture of microphysics. Chicago: University of Chicago Press.

Galison, P. (1999). Trading zone: Coordinating action and belief. In M. Biagioli (Ed.), The science studies reader (pp. 137-160). New York: Routledge.

Galison, P., & Stump, D. J. (1996). The disunity of science: Boundaries, contexts, and power. Palo Alto, California: Stanford University Press.

Hess, S. M., & Albano, A. M. (1998, February). Minimum requirements for stochastic resonance in threshold systems. International Journal of Bifurcation and Chaos, 8(2), 395-400.

Karabatsos, G., & Ullrich, J. R. (2002). Enumerating and testing conjoint measurement models. Mathematical Social Sciences, 43, 487-505.

Kuhn, T. S. (1961). The function of measurement in modern physical science. Isis, 52(168), 161-193. (Rpt. in T. S. Kuhn, (Ed.). (1977). The essential tension: Selected studies in scientific tradition and change (pp. 178-224). Chicago: University of Chicago Press.)

Linacre, J. M. (2011). A user’s guide to WINSTEPS Rasch-Model computer program, v. 3.72.0. Chicago, Illinois.

Linacre, J. M. (2013). A user’s guide to FACETS Rasch-Model computer program, v. 3.71.0. Chicago, Illinois.

Mandelbrot, B. (2004). The misbehavior of markets. New York: Basic Books.

Massof, R. W., & Ahmadian, L. (2007, July). What do different visual function questionnaires measure? Ophthalmic Epidemiology, 14(4), 198-204.

Moskowitz, M. T., & Dickinson, B. W. (2002). Stochastic resonance in speech recognition: Differentiating between /b/ and /v/. Proceedings of the IEEE International Symposium on Circuits and Systems, 3, 855-858.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Reprint, with Foreword and Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedogogiske Institut.

Repperger, D. W., & Farris, K. A. (2010, July). Stochastic resonance –a nonlinear control theory interpretation. International Journal of Systems Science, 41(7), 897-907.

Riani, M., & Simonotto, E. (1994). Stochastic resonance in the perceptual interpretation of ambiguous figures: A neural network model. Physical Review Letters, 72(19), 3120-3123.

Smith, R. M., & Plackner, C. (2009). The family approach to assessing fit in Rasch measurement. Journal of Applied Measurement, 10(4), 424-437.

Smith, R. M., & Taylor, P. (2004). Equating rehabilitation outcome scales: Developing common metrics. Journal of Applied Measurement, 5(3), 229-42.

Stenner, A. J., Fisher, W. P., Jr., Stone, M. H., & Burdick, D. S. (2013, August). Causal Rasch models. Frontiers in Psychology: Quantitative Psychology and Measurement, 4(536), 1-14 [doi: 10.3389/fpsyg.2013.00536].

Wilson, M. (2005). Constructing measures: An item response modeling approach. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010, 29 October). Evidence for a collective intelligence factor in the performance of human groups. Science, 330, 686-688.

Woolley, A. W., & Fuchs, E. (2011, September-October). Collective intelligence in the organization of science. Organization Science, 22(5), 1359-1367.

Wu, M. L., Adams, R. J., Wilson, M. R., & Haldane, S. A. (2007). ACER ConQuest Version 2: Generalised item response modelling software. Camberwell: Australian Council for Educational Research.

On the Criterion Institute’s Leaders Shaping Markets initiative

November 14, 2013

The Criterion Institute’s Leaders Shaping Markets initiative is an encouraging development in large part because of its focus on systems level change. As the Institute recognizes, the questions being raised and the resources being invested are essential to overcoming recurrent problems of fragmentation and marginalization in efforts being made in more piecemeal fashion across a number of other arenas.

Of particular interest from the Institute’s second roundtable session is Joy Anderson’s list of Strategies for Shaping Market Systems. Anderson presents five strategies:

  1. reframing the issues, problems, and boundaries of the system;
  2. engaging systems of power, elegantly;
  3. continuously identifying leverage points in the system;
  4. building structures and leadership for sustained systems-level disruption; and
  5. attending to change over time and across context.

Reframing is the right place to start. As I’ve said elsewhere in this blog, the problem is the problem. At this level of complexity, problems cannot be solved from within the same paradigm they were born from. Conceiving ways of redefining problems that truly reframe the issues and boundaries of a system is hard enough, but implementing them is even harder.

From my point of view, philosophically, the central problem that makes everything so difficult is our deeply ingrained Western habit of not viewing problems and solutions as of a piece, as wholes in which each implies the other. As long as we keep defining problems and solutions in ways that separate them, as though the solution is in no way involved in perpetuating the problem, we are hopelessly stuck.

So we restrict our options for solving problems by the way we frame the issues. And when we misidentify a problem, as when we fail to frame it properly, we will likely not only fail to solve it but make it worse. That seems to be exactly what has been going on in the struggle for economic and social justice for decades, even centuries.

So if we reframe the problem of shaping markets around the mutual implication of problems and solutions, how do we move to the next step, to engaging systems of power, elegantly? There are a lot of deep and complex philosophical concepts involved here, but we can cut to the chase and note that our language and tools embody problem-solution unities. Social ecologies of relationships define the meanings and uses of things and ideas.

One way of engaging systems of power elegantly to shape markets might then be to harness the power driving those markets in new, more efficient and meaningful ways. The question that then immediately arises concerns the next of Anderson’s five points: where do we find the leverage in the system that would enable the harnessing of its power?

There is likely no greater concentration of power in markets than the profit motive. How might it become the primary lever for engaging the power of the market? We might, for instance, deploy tools and ideas that co-opt the interests of the systems of power by enhancing the predictability of market forces and sustainability of profits. Concentrating now on dwelling within the problem-solution unity of how to shape markets, we can tap into a key factor that makes markets efficient: we manage what we measure, and management is facilitated when we can measure quality and quantity cheaply and easily.

Common currencies for the exchange of value are essential not just to trade and commerce; they also take shape as the standard metrics employed in science, engineering, and music, and as the signs and symbols of basic communication. Money is such an easy-to-manage measure of value that the problems we are addressing here stem in large part from using it too exclusively as a proxy for the authentic wealth we really want. Engaging with systems of power elegantly then also requires us to think in terms of extending the power of standard units of measurement into the new domains of intangible assets: human, social, and natural capital.

This is where we arrive at the structures for sustained system-level disruption. Current economic models and financial spreadsheets focus on the three classic forms of capital: land, labor, and manufactured tools/commodities. (Money, as liquid capital, is fungible relative to all three.) Of these three, we have a metric system for measuring and managing only property and manufactured tools/commodities.

Green economics offers an alternative four-capitals model that adds social capital and reframes land as natural capital and labor as human capital. Both of the latter are found to be far more complex and valuable than their usual reductions to a piece of ground or “hands” would suggest. Human capital involves health, abilities, and motivations; natural capital includes the earth’s air and water purification systems, and food supplies. The addition of social capital is justified on the grounds that, without it, markets are impossible.

What we do not have is a metric system for three of the four forms of capital. Nor do we have the legal and financial systems needed to bring these forms of capital to life in efficient markets, to make them recognized and accepted in banks and courts of law. Nor do we have leaders aware of the need for these structures, or of the established basis in scientific research that makes them viable.

The science is complex and technical, but it brings to bear practical capacities for meaningful, individual level, qualitatively informative and quantitatively rigorous measurement. There is considerable elegance in this method of approaching engagement with the systems of power. There is mathematical beauty in the symmetry and harmony of instruments tuned to the same scales. There is exquisite grace in the way the program for shaping markets grows organically from the seeds of existing markets. The human value of enabling the realization of heretofore unreachable degrees of individual potentials would be enormous, as would be the social value of being able to make returns on investments in education, health care, social services, and the environment accountable.

Successful new markets harnessing the profit motive in the name of socially responsible and sustainable economies may well provoke a new cultural renaissance as the proven relationships between higher rates of educational attainment and health, community relations, and environmental quality are borne out. The challenges are huge, but properly framing the problems and their solutions will unify our energies in common purpose like never before, bringing joy to the effort.

For further reading along these lines, see:

Fisher, W. P., Jr., & Stenner, A. J. (2011, August 31 to September 2). A technology roadmap for intangible assets metrology. In Fundamentals of measurement science. International Measurement Confederation (IMEKO) TC1-TC7-TC13 Joint Symposium, Jena, Germany.

Fisher, W. P., Jr. (2009). NIST Critical national need idea White Paper: Metrological infrastructure for human, social, and natural capital. Washington, DC: National Institute of Standards and Technology.

Fisher, W. P., Jr., & Stenner, A. J. (2011, January). Metrology for the social, behavioral, and economic sciences (Social, Behavioral, and Economic Sciences White Paper Series). Washington, DC: National Science Foundation.

Fisher, W. P., Jr. (2012, May/June). What the world needs now: A bold plan for new standards [Third place, 2011 NIST/SES World Standards Day paper competition]. Standards Engineering, 64(3), 1 & 3-5.

Fisher, W. P., Jr. (2009, November). Invariance and traceability for measures of human, social, and natural capital: Theory and application. Measurement, 42(9), 1278-1287. doi:10.1016/j.measurement.2009.03.014

Fisher, W. P., Jr. (2011). Bringing human, social, and natural capital to life: Practical consequences and opportunities. Journal of Applied Measurement, 12(1), 49-66.

Fisher, W. P., Jr. (2010). Measurement, reduced transaction costs, and the ethics of efficient markets for human, social, and natural capital. Qualifying paper, Bridge to Business Postdoctoral Certification, Freeman School of Business, Tulane University.

The New Information Platform No One Sees Coming

December 6, 2012

I’d like to draw your attention to a fundamentally important area of disruptive innovation that no one seems to see coming. The biggest thing rising in the world of science today, though it appears on no one’s radar, is measurement, and its transformative potential exceeds that of the Internet itself.

Realizing that potential will require an Intangible Assets Metric System. This system will connect all the different ways any one thing is measured, bringing common languages for representing human, social, and economic value into play everywhere. We need these metrics on the front lines of education, health care, social services, and human, reputation, and natural resource management, as well as in the economic models and financial spreadsheets informing policy, and in the scientific research conducted in dozens of fields.

All reading ability measures, for instance, should be transparently, inexpensively, and effortlessly expressed in a universally uniform metric, just as standardized measures of weight and volume inform grocery store purchasing decisions. We have made starts at such systems for reading, writing, and math ability measures, and for health status, functionality, and chronic disease management measures. Oddly, however, there seems to be little awareness of the full value that stands to be gained from uniform metrics in these areas, despite the overwhelming human, economic, and scientific value derived from standardized units in the existing economy. There has accordingly been virtually no leadership or investment in this area.

Measurement practice in business is woefully out of touch with the true paradigm shift that has been underway in psychometrics for years, even though the mantra “you manage what you measure” is repeated far and wide. In a fascinating twist, practically the only ones who notice the business world’s conceptual shortfall in measurement practice are the contrarians who observe that quantification can often be more of a distraction from management than the medium of its execution—but this is true only when measures are poorly conceived, designed, and implemented.

Demand for better measurement—measurement that reduces data volume not only with no loss of information but with the addition of otherwise unavailable interstitial information; that supports mass customized comparability for informed purchasing and quality improvement decisions; and that enables common product definitions for outcomes-based budgeting—is growing hand in hand with the spread of resilient, nimble, lean, and adaptive business models, and with the ongoing geometrical growth in data volume.

An even bigger source of demand for the features of advanced measurement is the increasing dependence of the economy on intangible assets, those forms of human, social, and natural capital that comprise 90% or more of the total capital under management. We will bring these now economically dead forms of capital to life by systematically standardizing representations of their quality and quantity. The Internet is the planetary nervous system through which basic information travels, and the Intangible Assets Metric System will be the global cerebrum, where higher order thinking takes place.

It will not be possible to realize the full potential of lean thinking in the information- and service-based economy without an Intangible Assets Metric System. Given the long-proven business value of standards and the role of measurement in management, it seems self-evident that our ongoing economic difficulties stem largely from our failure to develop and deploy an Intangible Assets Metric System providing common currencies for the exchange of authentic wealth. The future of sustainable and socially responsible business practices must surely depend extensively on universal access to flexible and practical uniform metrics for intangible assets.

Of course, for global intangible assets standards to be viable, they must be adaptable to local business demands and conditions without compromising their comparability. And that is just what is most powerfully disruptive about contemporary measurement methods: they make mass customization a reality. They’ve been doing so in computerized testing since the 1970s. Isn’t it time we started putting this technology to systematic use in a wide range of applications, from human and environmental resource management to education, health care, and social services?

Measuring/Managing Social Value

August 28, 2012

From my December 1, 2008 personal journal, written not long after the October 2008 SoCap conference. I’ve updated a few things that have changed in the intervening years.

Over the last month, I’ve been digesting what I learned at the Social Capital Markets conference at Fort Mason in San Francisco, and at the conference I attended just afterward, Bioneers, in Marin County. Bioneers could be called Natural Capital Markets. It was quite like the Social Capital Markets conference, with only a slight shift in emphasis and lots of discussion of social value.

The main thing that impressed me at both of these conferences, apart from what I already knew about the caring passion I share with so many, is the huge contrast between that passion and the quality of the data that so many are basing major decisions on. Seeing this made me step back and think harder about how to shape my message.

First, though it may not seem like it initially, there is incredible practical value to be gained from taking the trouble to construct good measures. We do indeed manage what we measure. So whatever we measure becomes what we manage. If we’re not measuring anything that has anything to do with our mission, vision, or values, then what we’re managing won’t have anything to do with those, either. And when the numbers we use as measures do not actually represent a constant unit amount that adds up the way the numbers do, then we don’t have a clue what we’re measuring and we could be managing just about anything.

This is not the way to proceed. First take-away: ask for more from your data. Don’t let it mislead you with superficial appearances. Dig deeper.

Second, to put it a little differently, percentages, scores, counts per capita, and the like are not measures with the same meaning or quality as measures of height, weight, time, temperature, or volts. However, for over 50 years we have been constructing measures mathematically equivalent to physical measures from ability tests, surveys, assessments, checklists, etc. The technical literature on this is widely available. The methods have been mainstream at ETS, ACT, and state and national departments of education globally for decades.

Second take-away: did I say you should ask for more from your data? You can get it. A lot of people already are, though I don’t think they’re asking for nearly as much as they could get.

Third, though the massive numbers of percentages, scores, and counts per capita are not the measures we seek, they are indeed exactly the right place to start. I have seen over and over again, in education, health care, sociology, human resource management, and most recently in the UN Millennium Development Goals data, that people do know exactly what data will form a proper basis for the measurement systems they need.

Third take-away: (one more time!) ask for more from your data. It may conceal a wealth beyond what you ever guessed.

So what are we talking about? There are methods for creating measures that give you numbers that verifiably stand for a substantive unit amount that adds up in the same way one-inch blocks do (probabilistically, and within a range of error). If the instrument is properly calibrated and administered, the unit size and meaning will not change across individuals or samples measured. You can reduce data volume dramatically, not only with no loss of information but also with false appearances of information either indicated as error or flagged for further attention. You can calibrate a continuum of less to more that is reliably and reproducibly associated with, annotated by, and interpreted through your own indicators. You can equate different collections of indicators that measure the same thing so that they do so in the same unit.
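The probabilistic "one-inch block" behavior described above is what models like the dichotomous Rasch model (discussed later in this post) deliver. As a minimal sketch, with invented parameter values, the key property is that the same difference between person and item yields the same success probability anywhere on the scale:

```python
import math

def rasch_probability(ability, difficulty):
    """Probability of a correct response under the dichotomous Rasch model.
    Both parameters live on the same log-odds (logit) scale, which is what
    makes the unit behave additively, like inches on a ruler."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# The same one-logit advantage yields the same ~73% success probability
# at the low end and the high end of the scale: a constant unit amount.
print(round(rasch_probability(-1.0, -2.0), 2))  # 0.73
print(round(rasch_probability(2.0, 1.0), 2))    # 0.73
```

That scale-free invariance, checked against data within a range of error, is what separates a calibrated measure from a raw percentage or count.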

Different agencies using the same, different, or mixed collections of indicators in different countries or regions could assess their measures for comparability, and if they are of satisfactory quality, equate them so they measure in the same unit. That is, well-designed instruments written and administered in different languages routinely have their items calibrate in the same order and positions, giving the same meaning to the same unit of measurement. For instance, see the recent issue of the Journal of Applied Measurement ([link]) devoted to reports on the OECD’s Programme for International Student Assessment.
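A minimal sketch of how the equating described above can work, assuming two hypothetical instruments whose shared items calibrate in the same order (all item names and values here are invented for illustration): the constant separating the two logit scales is estimated from the common items, then applied to translate measures into a single unit.

```python
def equate_shift(calib_a, calib_b, common_items):
    """Estimate the constant separating two instruments' logit scales from
    the items they share, so measures from B can be reported in A's unit."""
    diffs = [calib_a[i] - calib_b[i] for i in common_items]
    return sum(diffs) / len(diffs)

# Hypothetical calibrations for two tests sharing three items; the items
# keep the same order and spacing, only the scale origin differs.
test_a = {"item1": -1.2, "item2": 0.3, "item3": 1.1}
test_b = {"item1": -0.7, "item2": 0.8, "item3": 1.6}
shift = equate_shift(test_a, test_b, ["item1", "item2", "item3"])
print(round(0.9 + shift, 2))  # a measure of 0.9 on B is 0.4 in A's unit
```

In practice the quality of the link (the scatter of those item-by-item differences) is assessed before the equating constant is trusted.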

This is not a data analysis strategy. It is an instrument calibration strategy. Once calibrated, the instrument can be deployed. We need to monitor its structure, but the point is to create a tool people can take out into the world and use like a thermometer or clock.

I’ve just been looking at Charity Navigator (for instance, [link]) and the UN’s Millennium Development Goals ([link]), and the databases that have been assembled as measures of progress toward these goals ([link]). I would suppose these web sites show data in forms that people are generally familiar with, so I’m working up analyses to use as teaching tools from the UN data.

You don’t have to take any of this at my word. It’s been documented ad nauseam in the academic literature for decades. Those interested can find out more than they ever wanted to know at [link], in the Wikipedia Rasch entry, in the articles and books at [link], or in dozens of academic journals and hundreds of books. Though I’ve done my share of it, I’m less interested in continuing to add to that literature than I am in making a tangible contribution to improving people’s lives.

Sorry to go on like this. I meant to keep this short. Anyway, there it is.

PS, for real geeks: For those of you serious about learning about measurement as it is rigorously and mathematically defined, look into taking Everett Smith’s measurement course ([link]) or David Andrich’s academic units at the University of Western Australia ([link]). Available software includes Mike Linacre’s Winsteps, Andrich’s RUMM, and Mark Wilson’s Conquest, developed at UC Berkeley.

The methods Ev, Mike, David, and Mark teach have repeatedly been proven, both in mathematical theory and in real life, to be both necessary and sufficient in the construction of meaningful, practical measurement. Any number of ways of defining objectivity in measurement have been shown to reduce to the mathematical models they use. Why all the Chicago stuff? Because of Ben Wright. I’m helping (again) to organize a conference in his honor, to be held in Chicago next March. His work won him a Career Achievement Award from the Association of Test Publishers, and the coming conference will celebrate his foundational contributions to computerized measurement in health care.

As a final note, for those of you fearing reductionistic meaninglessness, look into my philosophical work. But enough…

Review of “Advancing Social Impact Investments Through Measurement”

August 24, 2012

Over the last few days, I have been reading several of the most recent issues of the Community Development Investment Review, especially volume 7, number 2, edited by David Erickson of the Federal Reserve Bank of San Francisco, reporting the proceedings of the March 21, 2011 conference in Washington, DC on advancing social impact investments through measurement. I am so excited to see this work that I am (truly) fairly trembling. I feel as though I’ve finally made my way home. There are so many points of contact, it’s hard to know where to start. After several days of concentrated deep breathing and close study of the CDIR, it’s now possible to formulate some coherent thoughts to share.

The CDIR papers start to sort out the complex issues involved in clarifying how measurement might contribute to the integration of impact investing and community development finance. I am heartened by the statement that “The goal of the Review is to bridge the gap between theory and practice and to enlist as many viewpoints as possible—government, nonprofits, financial institutions, and beneficiaries.” On the other hand, the omission of measurement scientists from that list of viewpoints adds another question to my long list of questions as to why measurement science is so routinely ignored by the very people who proclaim its importance. The situation is quite analogous to demanding more frequent conversations with colleagues while ignoring the invention of the telephone and failing to provide them with the tools and network connections it requires.

The aims shared by the CDIR contributors and myself are evident in the fact that David Erickson opens his summary of the March 21, 2011 conference with the same quote from Robert Kennedy that I placed at the end of my 2009 article in Measurement (see references below; all papers referenced are available by request if they are not already online). In that 2009 paper, in others I’ve published over the last several years, in presentations I’ve made to my measurement colleagues abroad and at home, and in various entries in my blog, I take up virtually all of the major themes that arose in the DC conference: how better measurement can attract capital to needed areas, how the cost of measurement repels many investors, how government can help by means of standard setting and regulation, how diverse and ambiguous investor and stakeholder interests can be reconciled and/or clarified, etc.

The difference, of course, is that I present these issues from the technical perspective of measurement and cannot speak authoritatively or specifically from the perspectives represented by the community development finance and impact investing fields. The bottom line take-away message for these fields from my perspective is this: unexamined assumptions may unnecessarily restrict assessments of problems and their potential solutions. As Salamon put it in his remarks in the CDIR proceedings from the Washington meeting (p. 43), “uncoordinated innovation not guided by a clear strategic concept can do more than lose its way: it can do actual harm.”

A clear strategic concept capable of coordinating innovations in social impact measurement is readily available. Multiple, highly valuable, and eminently practical measurement technologies have proven themselves in real world applications over the last 50 years. These technologies are well documented in the educational, psychological, sociological, and health care research literatures, as well as in the practical experience of high stakes testing for professional licensure and certification, for graduation, and for admissions.

Numerous reports show how to approach problems of quantification and standards with new degrees of rigor, transparency, meaningfulness, and flexibility. When measurement problems are not defined in terms of these technologies, solutions that may offer highly advantageous features are not considered. When the area of application is as far reaching and fundamental as social impact measurement, not taking new technologies into account is nothing short of tragic. I describe some of the new opportunities for you in a Technical Postscript, below.

In his Foreword to the CDIR proceedings issue, John Moon mentions having been at the 2009 SoCap event bringing together stakeholders from across the various social capital markets arenas. I was at the 2008 SoCap, and I came away from it with much the same impression as Moon, feeling that the palpable excitement in the air was more than tempered by the evident fact that people were often speaking at cross purposes, and that there did not seem to be a common object to the conversation. Moon, Erickson, and their colleagues have been in one position to sort out the issues involved, and I have been in another, but we are plainly on converging courses.

Though the science is in place and has been for decades, it will not and cannot amount to anything until the people who can best make use of it do so. The community development finance and impact investing fields are those people. Anyone interested in getting together for an informal conversation on topics of mutual interest should feel free to contact me.

Technical Postscript

There are at least six areas in efforts to advance social impact investments via measurement that will be most affected by contemporary methods. The first has to do with scale quality. I won’t go into the technical details, but numbers do not automatically stand for something that adds up the way they do. Mapping a substantive construct onto a number line requires specific technical expertise; there is no evidence of that expertise in any of the literature I’ve seen on social impact investing, or on measuring intangible assets. This is not an arbitrary bit of philosophical esoterica or technical nicety. This is one of those areas where the practical value of scientific rigor and precision comes into its own. It makes all the difference in being able to realize goals for measurement, investment, and redefining profit in terms of social impacts.

A second area in which thinking on social impact measurement will be profoundly altered by current scaling methods concerns the capacity to reduce data volume with no loss of information. In current systems, each indicator has its own separate metric. Data volume quickly multiplies when tracking separate organizations for each of several time periods in various locales. Given sufficient adherence to data quality and meaningfulness requirements, today’s scaling methods allow these indicators to be combined into a single composite measure—from which each individual observation can be inferred.
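As a sketch of how a single composite measure can stand in for many separate indicators, assuming Rasch-calibrated items (the item names and calibration values here are invented): one number per person, together with the fixed calibrations, regenerates the expected value of every individual observation on demand.

```python
import math

def expected_responses(measure, item_calibrations):
    """Regenerate the expected value of every individual indicator from one
    composite Rasch measure and the fixed item calibrations."""
    return {item: 1.0 / (1.0 + math.exp(-(measure - d)))
            for item, d in item_calibrations.items()}

# One number per person replaces the full response record; observed values
# that depart sharply from these expectations surface as error to examine.
items = {"easy": -1.5, "medium": 0.0, "hard": 1.5}
expected = expected_responses(0.8, items)
print({k: round(v, 2) for k, v in expected.items()})
# {'easy': 0.91, 'medium': 0.69, 'hard': 0.33}
```

The data reduction is lossless in exactly this sense: what the composite cannot reproduce is flagged as misfit rather than silently discarded.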

Elaborating this second point a bit further, I noted that some speakers at the 2011 conference in Washington thought reducing data volume is a matter of limiting the number of indicators that are tracked. This strategy is self-defeating, however, as having fewer independent observations increases uncertainty and risk. It would be far better to set up systems in which the metrics are designed so as to incorporate the amount of uncertainty that can be tolerated in any given decision support application.
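The uncertainty trade-off above can be made concrete with the standard error of a Rasch measure, which shrinks with the statistical information the administered items contribute (the item values here are invented; this is a sketch, not a full estimation routine):

```python
import math

def rasch_sem(ability, item_difficulties):
    """Standard error of a Rasch measure: one over the square root of the
    total statistical information contributed by the items administered."""
    info = sum(p * (1.0 - p)
               for p in (1.0 / (1.0 + math.exp(-(ability - d)))
                         for d in item_difficulties))
    return 1.0 / math.sqrt(info)

# Cutting indicators raises uncertainty: halving a well-targeted instrument
# inflates the standard error by a factor of sqrt(2).
print(round(rasch_sem(0.0, [0.0] * 20), 2))  # 0.45
print(round(rasch_sem(0.0, [0.0] * 10), 2))  # 0.63
```

This is the sense in which a metric can be designed around the amount of uncertainty a given decision can tolerate: the error target dictates how many indicators must be observed.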

The third area I have in mind deals with the diverse spectrum of varying interests and preferences brought to the table by investors, beneficiaries, and other stakeholders. Contemporary approaches in measurement make it possible to adapt the content of the particular indicators (counts or frequencies of events, or responses to survey questions or test items) to the needs of the user, without compromising the comparability of the resulting quantitative measure. This feature makes it possible to mass customize the content of the metrics employed depending on the substantive nature of the needs at that time and place.

Fourth, it is well known that different people judging performances or assigning numbers to observations bring different personal standards to bear as they make their ratings. Contemporary measurement methods enable the evaluation and scaling of raters and judges relative to one another, when data are gathered in a manner facilitating such comparisons. The end result is a basis for fair comparisons, instead of scores that vary depending more on which rater is observing than on the quality of the performance.
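A minimal sketch of the adjustment involved, assuming rater severities have already been estimated from a linked judging design (a simplified version of what many-facet models do; all values here are invented): subtracting each rater's severity puts performances judged by different raters onto a common footing.

```python
def adjust_for_rater(observed_logit, rater_severity):
    """Remove a rater's estimated severity from an observed measure so the
    result reflects the performance rather than who judged it."""
    return observed_logit - rater_severity

# Hypothetical severities: rater A is harsh, rater B lenient. The same
# performance draws different raw scores but the same adjusted measure.
severities = {"rater_a": 0.6, "rater_b": -0.4}
print(round(adjust_for_rater(1.4, severities["rater_a"]), 2))  # 0.8
print(round(adjust_for_rater(0.4, severities["rater_b"]), 2))  # 0.8
```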

Fifth, much of the discussion at the conference in Washington last year emphasized the need for shared data formatting and reporting standards. As might be guessed from the prior four areas I’ve described, significant advances have occurred in standard setting methods. It is suggested in the CDIR proceedings that the Treasury Department should be the home to a new institute for social impact measurement standards. In a series of publications over the last few years, I have suggested a need for an Intangible Assets Metric System to NIST and NSF (see below for references and links; all papers are available on request). That suggestion comes up again in my third-prize winning entry in the 2011 World Standards Day paper competition, sponsored by NIST and SES (the Society for Standards Professionals), entitled “What the World Needs Now: A Bold Plan for New Standards.” (See below for link.)

Sixth, as noted by Salamon (p. 43), “metrics are not neutral. They not only measure impact, they can also shape it.” Though this is not likely exactly what Salamon meant, one of the most exciting areas in measurement applications in education in recent years, one led in many ways by my colleague, Mark Wilson, and his group at UC Berkeley, concerns exactly this feedback loop between measurement and impact. In education, it has become apparent that test scaling reveals the order in which lessons are learned. Difficult problems that require mastery of easier problems are necessarily answered correctly less often than the easier problems. When the difficulty order of test questions in a given subject remains constant over time and across thousands of students, one may infer that the scale reveals the path of least resistance. Individualizing instruction by targeting lessons at the student’s measure has given rise to a concept of formative assessment, distinct from the summative assessment of accountability applications. I suspect this kind of a distinction may also prove of value in social impact applications.
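A sketch of what targeting a lesson at a student's measure can amount to, assuming lessons have been calibrated onto the same scale as the students (the lesson names and values here are invented): pick the lesson closest to the student's measure, where success is near 50% and instruction is most informative.

```python
def next_lesson(student_measure, lesson_calibrations):
    """Choose the lesson whose calibrated difficulty sits closest to the
    student's measure, where the odds of success are near even."""
    return min(lesson_calibrations,
               key=lambda name: abs(lesson_calibrations[name] - student_measure))

# Invented lesson calibrations on the same logit scale as student measures.
lessons = {"fractions": -0.5, "ratios": 0.4, "algebra": 1.6}
print(next_lesson(0.3, lessons))  # ratios
print(next_lesson(2.0, lessons))  # algebra
```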

Relevant Publications and Presentations

Fisher, W. P., Jr. (2002, Spring). “The Mystery of Capital” and the human sciences. Rasch Measurement Transactions, 15(4), 854.

Fisher, W. P., Jr. (2004, January 22). Bringing capital to life via measurement: A contribution to the new economics. In R. Smith (Chair), Session 3.3B, Rasch Models in Economics and Marketing. Second International Conference on Measurement in Health, Education, Psychology, and Marketing: Developments with Rasch Models, The International Laboratory for Measurement in the Social Sciences, School of Education, Murdoch University, Perth, Western Australia.

Fisher, W. P., Jr. (2005, August 1-3). Data standards for living human, social, and natural capital. In Session G: Concluding Discussion, Future Plans, Policy, etc. Conference on Entrepreneurship and Human Rights, Pope Auditorium, Lowenstein Bldg., Fordham University.

Fisher, W. P., Jr. (2007, Summer). Living capital metrics. Rasch Measurement Transactions, 21(1), 1092-3.

Fisher, W. P., Jr. (2008, 3-5 September). New metrological horizons: Invariant reference standards for instruments measuring human, social, and natural capital. Presented at the 12th International Measurement Confederation (IMEKO) TC1-TC7 Joint Symposium on Man, Science, and Measurement, Annecy, France: University of Savoie.

Fisher, W. P., Jr. (2009, November). Invariance and traceability for measures of human, social, and natural capital: Theory and application. Measurement, 42(9), 1278-1287.

Fisher, W. P., Jr. (2009). NIST Critical national need idea White Paper: Metrological infrastructure for human, social, and natural capital (Tech. Rep.). Washington, DC: National Institute of Standards and Technology.

Fisher, W. P., Jr. (2010). The standard model in the history of the natural sciences, econometrics, and the social sciences. Journal of Physics: Conference Series, 238(1).

Fisher, W. P., Jr. (2011). Bringing human, social, and natural capital to life: Practical consequences and opportunities. In N. Brown, B. Duckor, K. Draney & M. Wilson (Eds.), Advances in Rasch Measurement, Vol. 2 (pp. 1-27). Maple Grove, MN: JAM Press.

Fisher, W. P., Jr. (2011). Measuring genuine progress by scaling economic indicators to think global & act local: An example from the UN Millennium Development Goals project. Retrieved 18 January 2011, from Social Science Research Network.

Fisher, W. P., Jr. (2012). Measure and manage: Intangible assets metric standards for sustainability. In J. Marques, S. Dhiman & S. Holt (Eds.), Business administration education: Changes in management and leadership strategies (pp. 43-63). New York: Palgrave Macmillan.

Fisher, W. P., Jr. (2012, May/June). What the world needs now: A bold plan for new standards. Standards Engineering, 64(3), 1 & 3-5.

Fisher, W. P., Jr., & Stenner, A. J. (2011, January). Metrology for the social, behavioral, and economic sciences (Social, Behavioral, and Economic Sciences White Paper Series). Retrieved 25 October 2011, from National Science Foundation.

Fisher, W. P., Jr., & Stenner, A. J. (2011, August 31 to September 2). A technology roadmap for intangible assets metrology. In Fundamentals of measurement science. International Measurement Confederation (IMEKO) TC1-TC7-TC13 Joint Symposium, Jena, Germany.

Creative Commons License
LivingCapitalMetrics Blog by William P. Fisher, Jr., Ph.D. is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Based on a work at
Permissions beyond the scope of this license may be available at

HEY GREECE!!! One more time through the basics

May 10, 2012

As the battle between austerity and growth mindsets threatens to freeze into a brittle gridlock, it seems time once again to simplify and repeat some painfully obvious observations.

1. Human, social, and natural capital make up at least 90 percent of the capital under management in the global economy.

2. There is no system of uniform weights and measures for these forms of capital.

3. We manage what we measure; so, lacking proper measures for 90 percent of the capital in the economy, we cannot possibly manage it properly.

4. Measurement theory and practice have advanced to the point that the technical viability of a meaningful, objective, and precise system of uniform units for human, social, and natural capital is no longer an issue.

5. A metric system for intangible assets (human, social, and natural capital) is the infrastructural capacity building project capable of supporting sustainable and responsible growth we are looking for.

6. Individual citizens, philanthropists, entrepreneurs, corporations, NGOs, educators, health care advocates, innovators, researchers, and governments everywhere ought to be focusing intensely on building systems of consensus measures that take full advantage of existing technical means for instrument scaling, equating, adaptive administration, mass customization, growth modeling, data quality assessment, and diagnostic individualized reporting.

7. Uniform impact measurement will make it possible to price outcomes in ways that allow market forces to inform consumers as to where they can obtain the best cost/value relation for the money. In other words, the profit motive will be directly harnessed in growing human, social, and natural capital.

8. Happiness indexes and gross national or domestic authentic wealth products will not obtain any real practical utility until individuals, firms, NGOs, and governments can directly manage their own intangible asset bottom lines.

See other posts in this blog or the links below for more information.

William P. Fisher, Jr., Ph.D.

Research Associate
BEAR Center
Graduate School of Education
University of California, Berkeley
LivingCapitalMetrics Consulting

We are what we measure.

It’s time we measured what we want to be.
