Seminar Review: Making things valuable, CBS (first part)
Last week, I was lucky enough to attend an excellent two-day workshop, “Making things valuable”, held at the Copenhagen Business School. The final program had eight presenters – Peter Miller, Paolo Quattrone, Wendy Espeland, David Stark, Martha Poon, Lucien Karpik, Celia Lury and Vincent Lépinay – each with an hour for presentation and Q&A. Unsurprisingly, these very rich two days left me thinking in many different directions. In this post (and hopefully in a second one too), I am going to try to organize what I heard. Rather than giving a full account of the event, however, I will focus mostly on one main issue, which, as expected, was central in at least half of the presentations: quantification in the form of rankings and scores. Considering that the lineup of the workshop included some of the most influential authors on these topics today, in the next paragraphs I am going to use their work to illustrate what I understand to be the state of the art in this domain (follow this link for my slightly longer summary of the previous literature), and to finish with a short remark about an issue I believe has somehow been left aside: how to stop rankings.
Rankings are not necessarily “expert knowledge” but they act upon experts’ decision making
Wendy Espeland talked about her very influential work on the “reactivity” of Law School rankings. University rankings in the US, she explained, were originally developed by a type of agent that had little knowledge about the institutions they were listing: journalists. Accordingly, the numbers they produced were not taken seriously by academic authorities when evaluating the performance of their institutions. Later on, however, as a second type of agent not famous for its expertise, namely prospective students, started to take school rankings into account in their decisions, these lists began influencing the flow of students and resources channeled to the different schools. Today, rankings produced by non-specialist magazines are one of the most relevant variables (if not the most relevant) considered by Law School managers in their decision making, which, in turn, affects some very important matters. For instance, schools give grants to students who could improve their relative positions, leaving other important actions, such as positive discrimination, unattended.
The diffusion of rankings is one of those socio-technical stories with funny twists
Martha Poon discussed a new side of the history of Fair & Isaac’s credit scoring technology, which she has been working on in recent years. Her presentation went in two main directions. First, she firmly stated that credit scoring algorithms have been something like the Watt steam engine of financialization: in her account, they are the most important factor explaining the remarkable rise of consumer lending over the last couple of decades. The second and main part of her paper, however, did not try to prove this claim but focused on something else. She described (or, more precisely, re-enacted!) the heated controversy in the US Congress in the 1970s over the potentially discriminatory character of including “social variables” – such as age, sex, marital status, race and zip code – in consumer lending decisions. Poon showed that this story ended with a somewhat paradoxical turn. The decision to regulate the type of information considered in consumer lending, rather than stopping the diffusion of credit scores, produced what a neo-institutional scholar would call “coercive isomorphism”. By forcing accountability, regulators paved the way for the expansion of algorithms that formalize and standardize credit decisions, in other words: Fair & Isaac’s business.
Forget about transparency: in rankings, the medium is the message
Celia Lury presented her co-authored case study of Klout, an online platform that, drawing on information collected from different online social networks, produces a score ranging from 1 to 100 that accounts for a user’s “social influence”. Lury, referring to recent work by authors such as J. Guyer, H. Verran and E. Esposito, talked about what she calls the “participatory” character of rankings. Rankings like the Klout Score, or more generally quantitative accounts like the ones Paolo Quattrone discussed in his presentation, are not about transparency or even representation; they are a kind of form (in the Spencer-Brown sense) whose main operation is recursively processing its own code. Rankings are a type of medium that processes numerical divisions but that, like money, can be circulated and commercialized. (For instance, Klout scores are sold to companies that use this information to prioritize their responses in customer service.)
To summarize, rankings do many different things… but how to stop them?
An excellent summary was provided by Peter Miller. As expected, he didn’t talk about rankings but about accounting. However, his review of what accounting does can also be used as a nice synthesis of the productive character of rankings. Rankings, like accounting, act in four different ways: (i) adjudicating (as shown by Espeland, rankings are used to evaluate and quantify the success or failure of organizations and, accordingly, to attribute responsibility); (ii) mediating (as discussed by Lury); (iii) subjectivising (they enable a particular mode of subjectivity associated, for instance, with choice and competition); and (iv) territorializing (they draw boundaries around things, or, to use Callon’s formulation, they frame and make things calculable; or, to go a bit further, as Poon has shown in her previous work on the role of consumer scores in the production of asset-backed securities, they enable the enactment of new things).
It is amazing how much we have learnt about rankings in recent years. Rankings are everywhere, and recent sociology has made it its homework to follow them. This, though, has also meant that sociologists have had to find new ways of dealing with numbers and statistics, with the pragmatic shift being perhaps the most important one. In this light, the success of rankings does not depend on the quality of their representation, but rather on their ability to place themselves in between and to connect, translate, and enroll an increasing number of agents. Rankings, therefore, are no longer seen as carriers of increasing “rationalization”, let alone “efficiency”, but are evaluated in light of their ability to numerically count and connect things.
Where the good old story of “rationalization” does seem to remain healthy, though, is in the impression, left by the different papers, that the diffusion of rankings is a kind of unstoppable trend. In fact, the only presenter who told a story in which an attempt was made to limit rankings, Poon, showed that the attempt ultimately did nothing but enhance their diffusion. Of course, this does not mean that nothing lies outside quantification. Sociology is full of good theories about the plurality of values, multiple orders of worth, and so on and so forth. Espeland’s story about the Yavapai Indians’ resistance in the controversy over the construction of a dam seems to be just about that. And another presenter at the workshop, Lucien Karpik, has written a book-length argument showing that even contemporary markets are populated by what he calls “singularities”. However, even in Karpik’s account rankings are central (they are among the “judgment devices” that allow us to compare singular goods such as movies or fine wine).
In other words, a lot is also known about what lies outside rankings. What I see left unattended is another question: how are rankings stopped? And, accordingly, what tools could sociology provide to study this type of process? None of the presenters explicitly dealt with that kind of question. Some hints, though, may be found in some of the less central points mentioned at the event. For instance, Miller assured us that, when hiring new faculty for his department at the LSE, they give more weight to their own assessment of the quality of the papers submitted by applicants than to the rankings of the journals in which they were published or to their citation metrics. Espeland mentioned Porter’s remark in Trust in Numbers about the strong resistance of some professions to being externally accounted for (I think he was comparing actuaries and accountants). And while hearing that, I was thinking of Annelise Riles’ notion of “glitches”, referring to the sort of organizational holes she found between lawyers and traders in her ethnography of finance in Japan. Are rankings, then, stopped by organizational practices and professional boundaries? Is there anything in rankings’ own practical formation that makes them particularly resilient? I really don’t know. But I think there are still some things to learn.
Acknowledgements
Image by sillygwailo, used under a Creative Commons license. As part of our inter-network collaboration, this entry is co-posted with estudios de la economia.