How do people know things?

“How do people know things?” – the title of this blog post – seems like a simple question, but as our new publication, Information and Empire: Mechanisms of Communication in Russia, 1600-1850, demonstrates, the answer is complex. The volume focuses …

#QUTGOODDATA – a workshop update

A contribution by Dr Kayleigh Hodgkinson-Murphy and Dr Angela Daly.

The QUT Pathways to Ethical Data project combines the interdisciplinary expertise of Dr Angela Daly, Dr Kate Devitt and Dr Monique Mann, assisted by Dr Kayleigh Hodgkinson-Murphy, to investigate and promote ethical data practices and initiatives, towards a fair and just digital economy. The team members are co-editing an INC Theory on Demand book on ‘Good Data’ and currently have an open call for proposed chapters; the book will be published in late 2018.

Last month, on 22nd November, approximately 50 industry practitioners and academics gathered at Queensland University of Technology’s Gardens Point campus in Brisbane, Australia to participate in the ‘Pathways to Ethical Data’ workshop – known as ‘Good Data’ for short. Drawing participants from across Australia and overseas, and from a variety of industry, government and disciplinary backgrounds, the workshop sought to examine and discuss the complex issues surrounding ‘good’ and ‘ethical’ data practices.

Early in the workshop, Dr Kate Devitt from QUT facilitated an activity that asked everyone to order themselves from most to least dystopian in their view of data practices today and in the future. This activity served not only as an introduction between participants but also as an introduction to the depth and breadth of opinions held on the practices of data collection, retention, research, and use. Many participants stood around the centre of the line, reflecting the general group opinion that data is neither inherently ‘good’ nor ‘bad’; rather, it is the structures and practices surrounding the collection, retention, and use of data that determine whether the data can be considered ‘good’ or ‘bad’.

The themes that emerged during these early discussions were revisited in the longer talks given by various professionals in the field. Associate Professor Raymond Lovett (ANU), Dr Donna Cormack (University of Otago Wellington) and Dr Vanessa Lee (University of Sydney) spoke to the group on Indigenous Data Sovereignty, identifying the specific complexities involved in both collecting and using data relating to Indigenous populations. Their talks paralleled issues raised in the group discussions, namely that good data needs to engage with the community, and that further consideration needs to be given to data practices that reproduce colonial structures. Following this, a series of lightning talks paired industry professionals with academics to speak in detail on particular topics such as community wifi initiatives, electronic health records, the ethics of open data, and the recent ‘robodebt’ controversy in Australia, in which the government badly implemented algorithmic decision-making to identify overpayments of welfare. These talks offered an opportunity to blend industry expertise with a broader academic framework and led to passionate and spirited discussion throughout the room.

The workshop proved to be an informative and valuable introduction to the project – identifying just how complex the issues surrounding data can be. However, while participants were honest about the very real issues and problems currently enmeshed in various industry and government data practices, there was also considerable discussion about the routes available to move towards more ethical data futures.

Following the daytime workshop, the QUT Good Data project team partnered with Thoughtworks’ Brisbane office for a public evening event. After a technical demonstration of bias and discrimination in machine learning, QUT’s Dr Monique Mann facilitated a discussion with special guest Asher Wolf, a well-known information journalist, digital activist and founder of the world-wide CryptoParty movement. Asher spoke about her own journey as an activist and fighter for good information and good data to great applause from a packed audience. A good end to a good day of good data!

If you want to participate, consider sending in a proposal in response to the Call.

Governing Development Failure

In the last few years there has been a proliferation of new “little development devices” and practices in places where we might least expect them: at the World Bank and in national development agencies usually associated with the kinds of large-scale infrastructure mega-projects that these institutions pioneered after World War II. Yet the current emphasis on “little” development devices cannot be understood as a straightforward reaction to earlier forms of development policy that used “big” development devices. Rather, if we want to understand the current fascination with little development devices, we need to look at a different moment in international development institutions’ history: the many prominent failures in development assistance that marked the 1990s, such as the AIDS epidemic, the Asian financial crisis, and the “lost decade” of development in sub-Saharan Africa.

If we cannot understand the emergence of these new devices without paying attention to the recent failures of development policy, does that mean that they signal the failure of international development as we’ve known it? Yes and no: yes because many of them have been developed as innovative responses to the failures of development assistance, and no because they are nonetheless still very much development devices aimed at many of the same objectives that have held sway since the mid-twentieth century, including economic growth and poverty reduction.

In fact, although policy failures are central to this story, the part they play is a surprisingly creative one. These failures were profound enough to provoke a crisis of development expertise, leading development practitioners to question their very metrics of success and failure. Over time, these practitioners sought to re-establish the grounds for their authority, reconceiving the object of development—poverty—by forging new metrics of aid success, by developing new techniques for its measurement, and by adopting new devices amenable to this kind of measurement.

Rather than the failure of development, what precipitated the proliferation of these new micro-devices was thus the transformation of development governance through its engagement with and problematization of failure, as well as its growing preoccupation with the ever-present possibility of future failures.

Responding to Past Failures

Beginning in the 1990s, there was a lot of talk about the failure of development policies. Some external critics focused on the persistence of extreme poverty in sub-Saharan Africa, while others pointed to the AIDS crisis in Africa, or the sudden increase in poverty in Asia after the 1997–1998 financial crisis. All of these crises had occurred on the watch of the major development organizations in spite of (or, as many critics suggested, because of) their efforts.

Inspired by these crises, both external critics and many of those working in the policy development and evaluation units at the World Bank and the International Monetary Fund (IMF) began to point to various policy failures (Collier 1997; Killick 1997). Staff in the Policy Development and Review department at the IMF, for example, noted that the ever-increasing number of conditions that aid packages imposed on poor countries had no positive effect on compliance, and significantly reduced borrower governments’ “ownership” of the reforms (Boughton 2003). Meanwhile, assessments by the World Bank’s Operations Evaluation Department (OED) pointed to dramatically declining success rates, from 80–85% in the 1980s to less than 65% in the 1990s (OED 1994), figures that were of great concern to World Bank president James Wolfensohn.

One of the underlying targets of these criticisms was the policy framework known as the “Washington Consensus,” a broadly neoliberal approach to development that put growth at its core and saw the market as the best way of achieving development goals (e.g., Stiglitz 1998:1). Yet, even as the World Bank dedicated its 1997 flagship World Development Report to the “rediscovery” of the state after two decades of denigrating or denying its role, the report was also very careful to distinguish the World Bank’s present strategy from earlier state-led approaches to development, arguing for the need to “take the burden off the state by involving citizens and communities in the delivery of core collective goods” (World Bank 1997:3).  Treating both state- and market-dominated approaches as failures, the World Bank has pursued a middle way between the two, forging new and dynamic assemblages of public and private actors, claims, and practices to simultaneously pursue public goals and private interests (Best 2014a).

Contested Failures

Of course, policy failures occur all the time. Sometimes they are perceived as failures, and sometimes they are ignored. Yet occasionally they become what I call “contested failures”: failures important enough to produce widespread debates about the meaning of success and failure and the metrics through which we evaluate them (Best 2014b). The concept of contested failure is connected to what Andrew Barry calls “knowledge controversies,” in which the metrics that are usually taken for granted become, for a time, politicized (Barry 2012).

These are interesting moments when we confront them in our everyday lives. Many of those of us who teach for a living, for example, have confronted a set of exams that fall so far below our expectations that they force us to re-evaluate our conceptions of success and failure (and, at least in my case, to change the assignment altogether). Such contested failures are fascinating moments in politics because the question of what counts as success is both highly technical—involving questions of evaluation and calculation—and normative—raising the question of what we value enough to define as success.

The “aid effectiveness” debate that emerged in the 1990s and early 2000s was a classic example of this kind of contested failure, as its participants responded by problematizing and ultimately rethinking what makes aid succeed or fail. This widespread debate, which included practitioners, nongovernmental organizations (NGOs), academics, and politicians, raised important questions about why aid did not seem to be working, and ultimately produced some rather different definitions of what counts as successful development (World Bank 1998).

New Definitions of Success

The new definitions of success that began to take hold from the late 1990s onward were somewhat paradoxical.

On the one hand, the conception of success that began to emerge was far bigger and messier than it had been in the past. In the place of narrowly economic definitions of effectiveness, agencies now sought to pursue a much broader and longer-term set of objectives, recognizing that economic development is inextricably linked to political, social, and cultural dynamics that are often particular to a given country or region. For example, development staff hoped to achieve a much greater level of “country ownership” over the policies that they believed needed to be pursued, seeking to encourage domestic engagement by various stakeholders. Their goal was to build political support for ambitious, longer-term institutional reforms, whether through (at least somewhat) participatory consultations or community-driven development.

On the other hand, the metrics for measuring success became increasingly narrow, particularly as the enthusiasm for results and outcomes-based evaluation began to grow in the 2000s. These new metrics sought to respond to (and reduce) the ambiguities produced by the expanded conception of development objectives by making them more readily quantifiable. If development policymakers and aid ministers were no longer able to point to a school or dam to show where the dollars had gone, at least (the theory went) they could point to a measurable result that affirmed a direct line of causality between policy, output, and longer-term outcome.

Not surprisingly, one of the effects of this drive to make aid outcomes measurable has been to create incentives for pursuing policies that are easier to measure. For example, the “cash on delivery” approach, developed in 2006 by the U.S.-based think tank Center for Global Development, promises to pay a set amount for each “unit” of an agreed result. One pilot project developed by the British Department for International Development (DFID) in Ethiopia pays the government £50 for each student who sits a particular exam, and £100 for each one who passes it. This kind of fixation on measurable results creates a proliferation of policies aimed at getting students in exam seats and bed-nets on beds while driving policymakers away from the kind of complex, messy conceptions of development success that the aid effectiveness debate had revealed to be so important.

New Micro-Devices: Poverty, Cash Transfers, and Microcredit

Many of the devices and practices that emerged in the years since the aid effectiveness debate reflect this hybrid character. Although the large-scale, macro-level ambitions of market-led development and poverty reduction remain at the heart of these policies, they are now increasingly pursued through more cautious, smaller-scale, micro-level techniques. This does not just mean that these interventions address the same targets at a smaller scale. Rather, the embrace of these new techniques of intervention corresponds to a new ontology of the object of development.

One area in which we can clearly see this combination of macro-ambitions and micro-techniques is in efforts to reduce poverty. Part of what development researchers and practitioners found so unsettling about the Asian financial crisis and AIDS crisis was how these events pushed huge numbers of people back into poverty, undoing decades of progress. Led by the Social Protection Unit at the World Bank shortly after its creation in 1996, a number of aid agencies began to move away from static conceptions of poverty, which generally assumed that once an individual or family moved out of poverty, they would stay that way (World Bank 2001a). These policy failures forced aid practitioners to rethink poverty on an ontological level, seeing it as a dynamic process rather than a static state (Best 2013). Staff working on social protection at the World Bank sought to redefine poverty as social risk and vulnerability, and to devise a range of more flexible devices in response. This approach to poverty reduction ultimately became a core part of the influential 2000–2001 World Development Report Attacking Poverty, and has been adopted by a number of other organizations, including DFID and the Organisation for Economic Co-operation and Development (OECD) (World Bank 2001b).

The logic of the social risk approach is straightforward: in a volatile and unpredictable world where political, economic, climate, and health crises are always possible, poverty-reduction policy needs to help individuals and communities become better risk managers, capable of preparing for and responding to external shocks. Because some risks are covariant (affecting a large community or even the entire national population), traditional forms of insurance may not be effective because they were designed to respond to idiosyncratic risks (such as a single individual’s health difficulties, or a house fire). The state therefore becomes an important part of the solution, but only as one actor among many, resolving problems of market failure, supporting and combining with private sector initiatives, and enabling individuals to become more active in managing their own risks.

Some of the most popular devices for managing poor people’s vulnerability to poverty, including conditional cash transfers (CCTs) and microcredit initiatives, clearly reflect this hybrid public-private, micro-level focus. CCTs are state-provided funds targeted toward very poor populations, particularly women, generally on the condition that they keep their children in school and bring them in for regular health check-ups. The funds are supposed to help poor people respond to immediate shocks, whereas the conditions are aimed at increasing the resilience of future generations and improving their chances of becoming better risk managers.

Microcredit initiatives, which provide very small loans to people who would not qualify for conventional credit, started out as state and NGO-funded programs but have become increasingly market (and profit) driven in recent years. Their objective is to provide poor individuals with the kind of financial credit that they need to actively take “good” economic risks (such as investing in education or an entrepreneurial activity), in the belief that this will allow them to become more active and autonomous participants in the market economy.

As the World Bank’s first Social Protection Strategy’s title made clear, although this approach works at the micro level, it continues to have macro-level development ambitions, even as it reconceives them in more dynamic terms: seeking to transform social protection efforts from “safety-net to springboard” (World Bank 2001a). The Social Protection Unit’s current website builds on this idea:

In a world filled with risk and potential, social protection systems help individuals and families, especially the poor and vulnerable, cope with crises and shocks, find jobs, improve productivity, invest in the health and education of their children, and protect the aging population (World Bank 2017).


For the many experts and officials at international development agencies seeking to re-establish their authority in the wake of the failures of the 1990s, these new development devices are attractive in part because of their promise of calculability. Many CCT programs have been explicitly designed to collect evidence about their effectiveness, and their growing popularity among development agencies is linked to the promise of demonstrating measurable results. After inconclusive evidence about whether it was the cash or the conditions in CCTs that produced positive effects on school enrollment, a growing number of CCT programs have been designed as randomized experiments that test the effectiveness of conditional and unconditional payments (Baird et al. 2010).

In the case of microcredit, calculability plays a very different but nonetheless crucial role: the development of increasingly sophisticated techniques for evaluating and pricing credit risk among the very poor has made it possible for large financial firms to become involved, not only expanding microcredit but also building a new financial industry around the packaging and resale of these loans to foreign investors (Langevin 2017). These firms have managed in some cases to securitize large portfolios of microloans (rather like the subprime mortgages at the heart of the last global financial crisis), translating the often very high interest rates charged to poor borrowers into global flows of investor value (Aitken 2013).

Governing Failure

Although these various new development devices hold the promise of measurable results, we should not overestimate their technical proficiency; they continue to face the problem of failure even as they seek to respond to it. In fact, many of these new development initiatives have failed to meet at least some of their main objectives. The evidence on conditional cash transfers, though plentiful, is mixed: they do seem to have positive short-term effects on educational enrollment in particular, but their longer-term effects are difficult to demonstrate, and it is not yet clear whether the conditions themselves make any difference. There have also been some highly publicized failures in microcredit, including a rash of suicides by individuals crushed by microfinance debts in Andhra Pradesh, India, which reinforced a broader questioning of microcredit’s capacity to alleviate poverty.

More fundamentally, the tension that I identify at the outset of this article—between a growing recognition of the messiness of development success and a persistent desire to tame and often deny that complexity by simplifying forms of measurement and evaluation—remains itself a nagging source of failure. Many of the development practitioners I have spoken to are well aware that it is nearly impossible to make tidy causal links between a given policy action and a complex series of longer-term outcomes, particularly where there are multiple other aid actors and external dynamics in play. Yet, because they are forced to play the game of measurable results, they have begun to design their policies so that they are as easy to measure as possible, distorting development objectives to make them appear calculable (Natsios 2010).

This emergent micro approach to development assistance remains a paradoxical one: cultivating public goals by mobilizing private interests, pursuing more complex objectives while trying to translate them into simpler metrics, and ultimately courting repeated failure to give the veneer of success.

Jacqueline Best is Professor in the School of Political Studies at the University of Ottawa, where she works on political, cultural and social underpinnings of the global economy. Her most recent research examines the concept and role of economic exceptionalism in times of crisis.


Aitken, R. 2013. “The Financialization of Micro-Credit.” Development and Change 44(3):473–499.

Baird, S., C. McIntosh, and B. Özler. 2010. “Cash or Condition? Evidence from a Randomized Cash Transfer Program.” World Bank Policy Research Working Paper 5259.

Barry, A. 2012. “Political Situations: Knowledge Controversies in Transnational Governance.” Critical Policy Studies 6(3):324–336.

Best, J. 2013. “Redefining Poverty as Risk and Vulnerability: Shifting Strategies of Liberal Economic Governance.” Third World Quarterly 34(2):109–129.

———. 2014a. “The ‘Demand’ Side of Good Governance: The Return of the Public in World Bank Policy.” In The Return of the Public in Global Governance, edited by J. Best and A. Gheciu, pp. 97–119. Cambridge, UK: Cambridge University Press.

———. 2014b. Governing Failure: Provisional Expertise and the Transformation of Global Development Finance. Cambridge, UK: Cambridge University Press.

Boughton, J. M. 2003. “Who’s in Charge? Ownership and Conditionality in IMF-Supported Programs.” IMF Working Paper WP/03/191.

Collier, P. 1997. “The Failure of Conditionality.” In Perspectives on Aid and Development, edited by C. Gwin and J. Nelson, pp. 51–77. Washington, DC: Overseas Development Council.

Killick, T. 1997. “Principals, Agents and the Failings of Conditionality.” Journal of International Development 9(4):483–494.

Langevin, M. 2017. “L’agencement entre la haute finance et l’univers du développement: des conséquences multiples pour la formation des marchés (micro)financiers.” Canadian Journal of Development Studies.

Natsios, A. 2010. The Clash of the Counter-Bureaucracy and Development. Washington, DC: Center for Global Development.

Operations Evaluation Department, World Bank (OED). 1994. Annual Review of Evaluation Results 1993. Washington, DC: Operations Evaluation Department, World Bank.

Stiglitz, J. 1998. Towards a New Paradigm for Development: Strategies, Policies, and Processes. Prebisch Lecture. Geneva, Switzerland: United Nations Conference on Trade and Development.

World Bank. 1997. World Development Report 1997: The State in a Changing World. Washington, DC: World Bank.

———. 1998. Assessing Aid: What Works, What Doesn’t, and Why. New York: Oxford University Press.

———. 2001a. Social Protection Sector Strategy: From Safety Net to Springboard. Washington, DC: World Bank.

———. 2001b. World Development Report 2000/01: Attacking Poverty. Washington, DC: World Bank.

———. 2017. “Social Protection: Overview.” Available at link.

Image Credit: Consumer pays for her purchase with 1000 rupiah at a market in Lenek Village, East Lombok District, Indonesia. Asian Development Bank.

“Ich kann nicht”: Hearing Racialized Language in Josh Inocéncio’s Purple Eyes (Ojos Violetas)

In Spring 2017, I brought Houston-based playwright/performer Josh Inocéncio to my campus—the University of Houston—to perform his solo show Purple Eyes (for more on the event, see “Campus Organizing, or How I Use Theatre to Resist”). Purple Eyes is what Inocéncio calls an “ancestral auto/biographical” performance piece which explores his upbringing as a closeted gay Chicano living in the midst of the cultural heritage of machismo. Following a legacy of solo performance storytelling aesthetics seen in John Leguizamo’s Freak and Luis Alfaro’s Downtown, Inocéncio plays with memory to understand how the United States and Mexico have influenced his family and his own identity formation. Moreover, Purple Eyes explores the intersections of queerness and Chican@ identity alongside the legacy of machismo in his family (For more on the play, see “Queering Machismo from Michoacán to Montrose”).

Still from Purple Eyes (Ojos Violetas), with permission from Josh Inocencio who retains copyright.

During my Intro to LGBT Studies course following the performance, students discussed issues of representation and how many of them had never seen a queer Latin@/x play or performance, with some of them having never seen a live play. Many students picked up on how Purple Eyes foregrounds the intersections of race, ethnicity, gender, and sexuality. While these discussions were indeed fruitful, what struck me most was how both classes harped on Inocéncio’s use of different linguistic registers. Put simply, what stayed with them was how the performance sounded. My students obsessed over the Spanish in the play, leading me to question why this group of students at a Hispanic-Serving Institution, in a city that is over 40% Latin@, had so much trouble whenever Inocéncio spoke Spanish, the sounds of Latinidad.

In what follows, I discuss how my students heard Purple Eyes. While the play is predominantly in English, Inocéncio often code-switches into Spanish and German to more accurately embody particular family members. This blog adds to previous research by Dolores Inés Casillas, Sara V. Hinojos, Marci R. McMahon, Liana Silva, and Jennifer Stoever on the relationship between the Spanish language and non-Spanish-speaking Americans. Indeed, my students racialized the Spanish in Purple Eyes while completely disregarding the German in the play. Why?

Drawing from sociology, racialization is the process of imposing racial identities onto a social practice or group that might not have identified in such a way. Typically, the dominant group racializes the marginalized group; i.e., Latin@s in the U.S. become racialized by the mainstream. Even so, Latin@s are not a race but an ethnic group. Yet I argue that non-Latin@ Americans view Latin@s through a lens of race, one that often becomes sonic, in which language serves as one of the most overt identity markers. In terms of Spanish, while many races and ethnicities speak the language, in the United States it is often used to mark Spanish-speaking Latin@s as Other. In this way, language plays a fundamental role in shaping mainstream ideas about race. According to Dolores Inés Casillas, “For unfamiliar ears, the sounds of Spanish, the mariachi ensemble, and/or accented karaoke all work together to signal brownness, working-class,” and as Jennifer Stoever argues, the sounds of Latinidad indicate “illegality” in the U.S.

Drawing from the intersections of race, language, and racism, the relatively new academic field Raciolinguistics has emerged as a means to explain how people use language to shape their identity (For more, see Raciolinguistics: How Language Shapes Our Ideas About Race). Branching off from Raciolinguistics, I am most interested in exploring how the mainstream hears languages and racializes what they are hearing. The result is that Spanish is seen as Other, meaning that monolingual U.S. listeners hear Spanish-speakers as inherently different and a threat to a mainstream United States cultural and, more importantly, national identity.

Still from Purple Eyes (Ojos Violetas), with permission from Josh Inocencio who retains copyright.

Reflecting Inocéncio’s cultural multiplicity, Purple Eyes features English, Spanish, and German strategically used at different moments in the play to reflect the temporality, positionality, and relationship to language of each character that Inocéncio inhabits. While the chapter on his father is entirely in English, the final chapter focusing on Josh himself opens with a monologue in Spanish in which the performer narrates the events of the FIFA World Cup before finally announcing to the crowd that the epilogue narrates Inocéncio’s experience of young love and heartbreak on his journey of queer discovery. This moment features the longest extended use of Spanish in the play. The remaining Spanish is sprinkled in as Josh code-switches between the two languages for added cultural specificity.

While some of my Spanish-speaking students appreciated hearing a play that reflected their linguistic identities, monolingual English speakers in my class claimed that the Spanish confused them and made it difficult for them to follow certain parts of the play. After several students echoed these thoughts, a student from Mexico without full fluency in English comprehension told others about how her experiences were the exact opposite. She had trouble following some of the parts in English since she is still learning the language. I then pivoted the conversation to discuss how my English-dominant students approached the play with the assumption that English is the norm and a performance on a university campus should reflect this. Case in point: several told me that the show should have been subtitled.

But what was most telling was the following exchange. After several students expressed confusion over the Spanish, one particularly woke student from Nigeria raised her hand and said: “I haven’t heard anyone say anything about the German in the play and not being able to follow the play during the German part.” She then noted how, in the United States, Spanish is racialized whereas German is not. In fact, most of the students did not even recall German in the play. Admittedly, the play features far more Spanish than German, but the scene in which Inocéncio speaks German occurs while dramatizing his Austrian grandmother’s abortion. As Inocéncio (as Oma) frantically repeated “Ich kann nicht” (“I can’t”), my students had no trouble; to use some Millennial vernacular, it was with Spanish that they “couldn’t even.” Arguably, this is the most intense scene in the performance and one that my students wanted to discuss. That the majority of them understood this scene without fully registering the German, coupled with their confusion over lines spoken in Spanish, speaks to how race and ethnicity shape the way languages are heard in the United States: German is viewed as familiar and accessible, whereas Spanish is immediately heard as foreign, i.e., undesirable, not welcome here.

As the Latin@ population continues to grow and the Spanish language becomes an increasingly present reality in U.S. everyday life, audiences must consider possibilities not grounded in an English-only narrative. My experiences with Purple Eyes are not unique. I have witnessed and heard many stories about audiences at mainstream theatre companies who have struggled whenever a play included Spanish. While I don’t claim to have the answers to address this across the nation, as an educator, I question what tools I can give my students to help prepare them for sonic experiences outside of their comfort zone and, specifically, how they become aware of subconscious racialization practices. What will they hear? And, more importantly, how will they react?

Featured Image: Still from Purple Eyes (Ojos Violetas), with permission from Josh Inocencio who retains copyright.

Trevor Boffone is a Houston-based scholar, educator, writer, dramaturg, producer, and the founder of the 50 Playwrights Project. He is a member of the National Steering Committee for the Latinx Theatre Commons and the Café Onda Editorial Board. Trevor has a Ph.D. in Latin@ Theatre and Literature from the Department of Hispanic Studies at the University of Houston where he holds a Graduate Certificate in Women’s, Gender, & Sexuality Studies. He holds an MA in Hispanic Studies from Villanova University and a BA in Spanish from Loyola University New Orleans. Trevor researches the intersections of race, ethnicity, gender, sexuality, and community in Chican@ and Latin@ theater and performance. His first book project, Eastside Latinidad: Josefina López, Community, and Social Change in Los Angeles, examines the textual and performative strategies of contemporary Latin@ theatermakers based in Boyle Heights that use performance as a tool to expand notions of Latinidad and (re)build a community that reflects this diverse and fluid identity. He is co-editing (with Teresa Marrero and Chantal Rodriguez) an anthology of Latinx plays from the Los Angeles Theatre Center’s Encuentro 2014 (under contract with Northwestern University Press).

REWIND!…If you liked this post, you may also dig:

“Don’t Be Self-Conchas”: Listening to Mexican Styled Phonetics in Popular Culture*–Sara V. Hinojos and Dolores Inés Casillas

Deaf Latin@ Performance: Listening with the Third Ear–Trevor Boffone

Moonlight’s Orchestral Manoeuvers: A duet by Shakira Holt and Christopher Chien

If La Llorona Was a Punk Rocker: Detonguing The Off-Key Caos and Screams of Alice Bag–Marlen Rios-Hernández

Registration Open for NECS Post-Conference: Open Media Studies

The process of scholarly communication is changing dramatically. Digitization of archives, online research methods and tools, and new ways to disseminate research results are developing fast. During the past four annual NECS (Network for Cinema and Media Studies) conferences, we have held two-hour workshops to discuss the implementation of open access, organized by, among others, editors of the open access journals VIEW: Journal of European Television History and Culture and NECSUS: European Journal of Media Studies. Both journals were founded in 2012 with an NWO grant.

In 2018 we will expand on this experience by organizing a one-day workshop immediately following the annual NECS conference, which this year will be held in Amsterdam on 27–29 June, organized by the University of Amsterdam (UvA), Utrecht University (UU), and the Free University of Amsterdam (VU). Our post-conference workshop will take place on Saturday 30 June 2018 at the Netherlands Institute for Sound and Vision, Hilversum, The Netherlands.

Developments in open research are unfolding at a rapid pace. In media studies they appear to be moving somewhat faster than in other humanities disciplines, as the field has a longer tradition of online sharing and uses a range of media for scholarly communication (blogs, videos, audiovisual essays, etc.) alongside the traditional peer-reviewed journal article or monograph. With this one-day workshop we aim to explore the concept of ‘open’ in media studies by sharing best practices, and to investigate what media scholars need in order to make the entire scholarly communication process (research, analysis, writing, review, publishing, etc.) more transparent.

We will do so by bringing together a group of at most 25 researchers in media studies for a series of workshops devoted to four themes: 1) research and analysis, 2) writing and publishing, 3) peer review, and 4) public engagement. The day will open with a keynote by Prof. Dr. Malte Hagener (Philipps-Universität Marburg), one of the co-founding editors of NECSUS and founder of the recently launched MediaRep, a subject repository for media studies.

The main goals for the day are creating awareness among researchers, offering solutions to concrete issues, and exploring new open access and open science initiatives in relation to media studies. Outcomes of the workshop will be published on the NECS website, as well as on the Open Access in Media Studies website.

Registration is free. However, there is a maximum of 25 participants. Workshops will be hands-on and active participation is encouraged. Interested?

For the preliminary program and registration, please follow this link.

Hope to see you in Amsterdam/Hilversum!

Organising team: Jeroen Sondervan (Utrecht University), Jeffrey Pooley (Muhlenberg College, US), Jaap Kooijman (University of Amsterdam), Erwin Verbruggen (Netherlands Institute for Sound and Vision).

New Learned Society Network: ScholarlyHub

On October 21st the ScholarlyHub initiative launched its website, presenting its mission and its ideas for a new open access academic social network for sharing papers and other scholarly literature. The project is in its incubator stage and needs crowdfunding to develop these plans further. ScholarlyHub wants to compete directly with commercial academic social networking platforms such as ResearchGate. The big difference between these commercial networking behemoths and the ScholarlyHub initiative is the scholar-led, bottom-up approach of the latter. A remarkable group of academics from different disciplines has gathered to take the first steps towards a non-profit framework with options to share papers, collaborate with other researchers, and enhance public engagement, using social networking tools.

From the project website:

“ScholarlyHub will be a non-profit framework, where members pay a small annual fee (directly or through an existing learned society, network, project or institution) and create personal, thematic, project-based, associational or institutional profiles and populate them with scholarly and educational materials as they see fit. These are stored in a searchable, real open-access archive, and are directly viewable and downloadable from the portal by anyone (that is, not only members), without having to register or volunteer personal data.”

In order to make this happen, money is needed to build an infrastructure: not venture capital, but actual support from actual researchers. On November 29th ScholarlyHub launched a crowdfunding campaign hoping to raise €500,000 to develop the first version of the platform.

It will be very interesting to see how this initiative evolves in the coming months, because criticism of the commercialization of such platforms, including ResearchGate, has grown in recent years. For these enterprises the user is the product, which raises important ethical questions about power, ownership, reuse, archiving policies, and more.[1] These are precisely the practices ScholarlyHub explicitly rejects.

As can be found on their website: “Growing threats to open science have made it more crucial than before to develop a sustainable, not-for-profit environment. One that allows you to publish, share, and access quality work without financial constraints.”

But some have already asked how this platform will relate to, for example, the Humanities Commons, which pursues similar goals and launched last year.[2] Another example is the Open Science Framework, which offers an open repository for papers and data. A much-needed discussion about whether and how these non-profit platforms should co-exist will unfold in the coming months.

In any case, it will be a much healthier situation if non-profit equivalents enter this market alongside the existing commercial academic social networks.


[1] Further reading: Pooley, J. (2017). “Scholarly Communication Shouldn’t Just Be Open, but Non-Profit Too.”

[2] See also the ScholarlyHub response.

Sounding Out! Podcast #64: Standing Rock, Protest, Sound and Power (Part 2)

CLICK HERE TO DOWNLOAD: Standing Rock, Protest, Sound and Power (Part 2)



Part Two of a special series on Standing Rock, Protest, Sound and Power. The guest for today’s podcast is Tracy Rector. Tracy is a Choctaw/Seminole filmmaker, curator, community organizer, and Executive Director and Co-founder of Longhouse Media. In 2017 Indigenous grassroots leaders called upon allies across the United States and around the world to peacefully march in support of the Standing Rock Sioux Tribe. They asked allies to simply exist, resist, and rise in solidarity with Indigenous peoples and their rights–rights which protect mother earth for all future generations. In this episode we discuss Tracy’s thoughts and observations as a filmmaker who was present at Standing Rock, including the election of a new administration, increasing threats to Native land, and police violence.

In Part One, our host Marcella Ernest spoke with Dr. Nancy Marie Mithlo, a Native American art historian and Associate Professor of Art History and American Indian studies. They discussed how Nancy experiences the sonic elements of Native activism as a trained anthropologist. In Part Two, Tracy’s experience playing with sound and visuals as a documentarian brings a different perspective to understanding Native activism.

Marcella Ernest is a Native American (Ojibwe) interdisciplinary video artist and scholar. Her work combines electronic media and sound design with film and photography in a variety of formats, including multi-media installations that incorporate large-scale projections and experimental film aesthetics. Currently living in California, Marcella is completing an interdisciplinary Ph.D. in American Studies at the University of New Mexico. Drawing upon a Critical Indigenous Studies framework to explore how “Indianness” and Indigeneity are represented in studies of American and Indigenous visual and popular culture, her primary research engages with contemporary Native art to understand how members of colonized groups use a re-mix of experimental video and sound design as a means of cultural and political expression of resistance.

Featured image used with permission of Tracy Rector.

REWIND! . . . If you liked this post, you may also dig:

Sounding Out! Podcast #60: Standing Rock, Protest, Sound, and Power (Part 1) — Marcella Ernest

Sounding Out! Podcast #51: Creating New Worlds From Old Sounds – Marcella Ernest

Sounding Out! Podcast #58: The Meaning of Silence – Marcella Ernest

Rational Sin

In the last 20 years, global health experts have recognized the importance of sin taxes and encouraged their adoption in the fight against non-communicable diseases (NCDs) in the Global South. At the level of discourse, this is illustrated by the vast global health literature on NCDs published from the late 1990s onwards: reports and action plans issued by international organizations like the World Health Organization (WHO) and the World Bank, editorials and scientific papers in medical journals like The Lancet, and policy documents and pamphlets prepared by aid agencies, health charities, and private philanthropies. Most of these documents start by reminding readers that NCDs—chronic diseases such as cancer and diabetes associated with behavioral risk factors like smoking, drinking, and unhealthy diets—are now responsible for most of the burden of death and disability across the Global South. They then identify excise taxes levied on tobacco, alcohol, and sugar as the most effective strategy to address this burden of death and disability.

WHO poster for the 2014 World No Tobacco Day advocating for taxes on tobacco products as a strategy to lower the associated burden of death and disease.


This literature explains how—given that demand for tobacco, alcohol, and sugar falls as prices rise—increasing taxes on these products will markedly reduce rates of smoking, drinking, and unhealthy eating and thereby the incidence of chronic diseases associated with these behaviors. It also stresses how sin taxes not only improve the health of nations, but also strengthen their finances. Indeed, as many of the experts cited in this literature make clear, increased taxation rates largely compensate for the decrease in tobacco, alcohol, and sugar consumption, thus allowing national governments to amass larger tax revenues that can be earmarked to finance national health systems and achieve universal health coverage. Last but not least, this literature also extols the fact that, as indirect taxes, sin taxes are relatively easy for governments to set up and administer. At the level of practice, the growing importance of sin taxes within global health can be illustrated by the mounting number of countries in the Global South—from Chile, Mexico, and South Africa to Thailand, India, and the Philippines—that have introduced taxation schemes for tobacco, alcohol, and/or sugar to combat the NCD epidemic. Many of these national schemes have been supported by international efforts such as the Bloomberg Initiative, a US$1 billion project to reduce tobacco use in developing countries led by the Bloomberg and Gates foundations, in which sin taxes play a central role.
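The fiscal logic here is simple arithmetic: when demand is price-inelastic (elasticity between 0 and −1), a tax-driven price rise cuts consumption proportionally less than it raises the price, so revenue grows even as use declines. The sketch below illustrates this with purely hypothetical numbers; the elasticity, prices, and quantities are assumptions for demonstration, not estimates from the literature cited.

```python
# Illustrative sketch of why sin taxes can cut consumption and raise
# revenue at the same time. All numbers are hypothetical.

def new_consumption(base_qty, elasticity, old_price, new_price):
    """Constant-elasticity demand: Q2 = Q1 * (P2/P1) ** elasticity."""
    return base_qty * (new_price / old_price) ** elasticity

elasticity = -0.4                     # assumed price elasticity of cigarette demand
old_price, tax_increase = 5.00, 1.00  # hypothetical pack price and excise hike
new_price = old_price + tax_increase

q1 = 1_000_000                        # packs sold before the tax rise (hypothetical)
q2 = new_consumption(q1, elasticity, old_price, new_price)

old_revenue = q1 * 1.00               # assume an existing $1.00 excise per pack
new_revenue = q2 * 2.00               # excise rises to $2.00 per pack

print(f"consumption falls {1 - q2 / q1:.1%}")   # ~7% fewer packs sold
print(f"tax revenue rises {new_revenue / old_revenue - 1:.1%}")
```

Under these assumed numbers, a 20% price increase reduces consumption by about 7% while the doubled excise nearly doubles revenue, which is the pattern the global health literature describes.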

In many ways, sin taxes are typical of the micro-technologies that have proliferated in the fields of development and humanitarian aid in the past two decades, what Stephen Collier, Peter Redfield, and their colleagues have called “little development devices” and “humanitarian goods” (Collier et al., 2017; Cross, 2013; Redfield, 2012). Indeed, like many of these micro-devices, sin taxes are meant to improve people’s quality of life, are eminently portable, and, as I discuss below, operate at the micro level, targeting individuals’ aspirations, preferences, and calculations rather than any larger macroeconomic aggregate. In this essay I shed some light on the complex genealogies of these micro-technologies by unpacking some of the political theories, scientific concepts, and ethical norms that make up sin taxes. I suggest that sin taxes are built around a particular subject—the rational actor seeking to maximize their welfare in line with their own preferences—whose origins can be traced back to the Chicago School’s microeconomic tradition and its concern with rational choice theory. In doing so, I draw on Madeleine Akrich’s (1992) concept of “de-scription” and her claim that one can find inscribed in a technical device many of the assumptions, aspirations, and values of those who designed it. In my de-scription of sin taxes I examine the work of a small network of economists led by University of Chicago professor Gary Becker and two of his collaborators, Mike Grossman and Frank Chaloupka, that was instrumental in transforming sin taxes into an accepted global health strategy. In particular, I focus on this network’s research on tobacco taxation, which was the first type of sin tax to gain acceptance in the global health field and later served as a model for excises on alcohol and sugar. 
I begin by showing how this research grew out of Chicago’s microeconomic tradition and Becker’s work in particular before examining how it radically transformed international tobacco control and the model of the smoker that underpins it. I conclude by reflecting on what this story can teach us about the wider history of the recent proliferation of micro-technologies in the fields of development and humanitarian aid.

Tax revenue stamp from South Africa. From Andrey Vasiunin’s online collection.

The Chicago microeconomic tradition was articulated by George Stigler, Gary Becker, and other members of the Chicago School from the 1950s onwards. As historian Steven Medema (2011:153) has carefully documented, for the earlier generations of Chicago economists, from Frank Knight to Milton Friedman, economics was the study of the “social organization of economic activity” and, in particular, “markets as coordinating devices.” This changed after the 1960s following the arrival of Stigler and especially Becker at the University of Chicago. For this new generation, economics was redefined as the study of “human behavior” and, specifically, “rational individual choices” under “conditions of scarcity” (Medema 2011:161–162). By redefining their object of study in this way, the new generation of economists at Chicago profoundly altered their discipline (Foucault 2008). First, they made it possible to analyze how individual decisions had implications at the macro level, thus extending economic analysis within its own domain. Second, they encouraged economists to espouse an expansionist agenda and apply their methods to traditionally non-economic domains. As Medema (2011:172) has also shown, the reason for the shift of focus from social organization and markets to individual behavior and choice lies in the marked influence that rational choice theory had on many of the new generation of Chicago economists. Indeed, this “new science of choice,” articulated during the Cold War around the notion of the “rational actor,” was a “catalyst for change” in the American social sciences, where it introduced a fresh focus on and new techniques to analyze the role of individuals and their decisions in the making of complex social phenomena (Amadae 2003:5–8).

Gary Becker’s work has been central to Chicago’s microeconomic tradition (Medema 2011). Becker established the idea that economics was about the study of human behavior and choice. A disciplinary imperialist, he also believed that economics should not be limited to behaviors usually studied by economists but expanded to behaviors traditionally analyzed by other social scientists such as sociologists and anthropologists. As Becker explained, economics was about “problems of choice,” whether that was “the choice of a car, a marriage mate [or] a religion” (cited in Medema 2011:161). These beliefs strongly influenced the sort of questions (Why do individuals decide to invest in education? Why do they elect to marry and have children? Why do they choose to engage in criminal activity?) that he sought to address in his own research. The way in which Becker approached and analyzed human behavior was informed by rational choice theory. Specifically, he suggested that choices made by individuals should always be considered rational, even when they are criminal or antisocial. By rational, Becker (1992:38) meant that these choices are made by individuals who seek to “maximize welfare as they conceive it.” He believed that when doing so, individuals take into account their own “values and preferences” and anticipate as best they can “the uncertain consequences of their actions” (Becker 1992:38). He also supposed that their choices are “constrained by income, time, imperfect memory, calculating capacities and other limited resources” and shaped by “the available opportunities in the economy and elsewhere” (Becker 1992:38). For Becker, the task of the economist was to develop and empirically test mathematical models that identified and organized these different variables in a way that explained and predicted the type of behavior being analyzed.

Not until the 1980s and 1990s did economists systematically apply the tools and concepts of Chicago microeconomics to the study of smoking (Reubi 2013, 2016). Two interrelated bodies of work were critical in that respect. The first encompassed the studies on the demand for tobacco products carried out by Mike Grossman together with his former student Frank Chaloupka and others (e.g., Chaloupka and Grossman 1996; Lewit et al. 1981). Grossman was key in popularizing the use of Chicago microeconomics to analyze health-related behaviors, both in his own research and as director of the National Bureau of Economic Research’s (NBER) Health Economics Program. For his PhD carried out under Becker’s supervision, Grossman constructed a model of the “demand for good health” where health was a form of “human capital” that everyone possessed and could choose to invest in and increase (Grossman 1972:xiv–xv). Given his interest in health at a time when smoking had become a major public health issue in North America and Europe, it is unsurprising that Grossman subsequently chose to work on the demand for cigarettes together with Chaloupka and other colleagues. This research first established that price was a key factor in the demand for cigarettes. The research also showed that price was a particularly powerful motivator for young adults and individuals of low socioeconomic status, who have less income and are more resistant to public information campaigns on the dangers of smoking. The second body of work encompassed the studies on addiction conducted by Becker in collaboration with Grossman, Chaloupka, and a few others (e.g., Becker and Murphy 1988; Chaloupka 1990). Building on insights from rational choice theory, Becker and his collaborators claimed that contrary to popular belief, “addictions are rational in the sense of involving forward-looking maximization with stable preferences” (Becker and Murphy 1988:675).
Using cigarettes and alcohol as their case study, they also built and tested a behavioral model that predicted the demand for addictive substances was greater among individuals who had “low incomes,” were “more present-oriented” and/or had experienced “unhappy” and “stressful events” (Becker and Murphy, 1988:694; Chaloupka 1990:737).
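One signature prediction of this style of addiction model, standard in the economics literature though not spelled out in the passage above, is that consumption responds more to a price change in the long run than in the short run, because the habit "stock" adjusts only gradually. A toy habit-formation simulation can show the pattern; the functional form and all parameters below are illustrative assumptions, not estimates from Becker and Murphy (1988).

```python
# Toy habit-formation demand: current consumption depends on past
# consumption (the addictive "stock") and the current price.
# Q_t = a + habit * Q_{t-1} - price_sens * P_t, iterated over time.

def simulate(price, periods=60, a=10.0, habit=0.6, price_sens=1.0, q0=20.0):
    """Return the consumption path under a constant price."""
    q, path = q0, []
    for _ in range(periods):
        q = a + habit * q - price_sens * price
        path.append(q)
    return path

low = simulate(price=2.0)    # baseline price
high = simulate(price=3.0)   # permanently higher price

short_run_drop = low[0] - high[0]    # drop one period after the price rise
long_run_drop = low[-1] - high[-1]   # drop once the habit stock has adjusted

# The long-run response exceeds the short-run response because lower
# consumption today also lowers the habit stock carried into tomorrow.
print(short_run_drop, long_run_drop)
```

With these assumed parameters the immediate drop is 1.0 unit per unit of price, while the long-run drop converges to price_sens / (1 − habit) = 2.5 units, illustrating why the model predicts addictive goods are more price-responsive over time than a single-period estimate suggests.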

Up to this point, two very different intellectual traditions dominated the field of international tobacco control. The first, which stemmed from the field of health education, was built on the notion of knowledge or information (Berridge 2007, chapter 2; Reubi and Berridge 2016). Public health experts working within this tradition assumed that people smoked because they did not know that tobacco was harmful to their health. Following that assumption, experts believed that their main task was to ensure people were informed about the dangers of smoking. This meant educating people about these dangers through warning labels on cigarette packages, school education programs, and, most important, public information campaigns, which were deemed to be the most powerful anti-smoking measure at the time. This also meant shielding people from the tobacco industry’s marketing and public relations efforts through advertising bans and advocacy tactics to monitor and counter the industry. The second tradition, which grew out of developments in psychology and psychopharmacology, was centered on the notion of addiction (Berridge 2007, chapter 9; Brandt 2004). For public health experts and psychologists who came from this tradition, the reason people smoked, or continued to smoke, was their addiction to nicotine, the psychoactive substance in tobacco. Specifically, they contended that nicotine could, by acting on the brain via complex biomolecular pathways, control the behavior of smokers and compel them to continue smoking. For these experts, the main task was to treat this addiction, which they viewed as a pathology, by using smoking cessation techniques such as behavioral and nicotine replacement therapies.


Cover of the International Union against Tuberculosis and Lung Disease’s Factsheet on Tobacco Taxation, with the caption “Young people are most likely to quit when prices rise.”

The work on smoking carried out by Becker and his colleagues posed a direct challenge to these two intellectual traditions, leading to a rupture in and a partial reconfiguration of the field of international tobacco control in the late 1990s. To start, the work of Becker and his colleagues radically altered the view public health experts held on taxation (Reubi 2013). Until then, these experts largely ignored sin taxes as an anti-smoking measure for many reasons, ranging from ignorance about how taxation worked to discomfort about sin taxes’ regressive nature. The network of economists led by Becker helped change this perception, progressively bringing public health experts to see taxation (rather than public information campaigns) as the most potent strategy in the fight against tobacco. Grossman’s work in particular, which showed that price (rather than knowledge) was key in curbing tobacco use in groups where prevalence rates had remained stubbornly high (like the young and the poor), was critical in that respect. Furthermore, the work of Becker and his colleagues also helped establish a new model of the smoker in public health thought. Inscribed in the taxation schemes now multiplying across the tobacco control field, this model was centered on the idea of individual choice rather than the notions of knowledge and addiction associated with health education and psychology, respectively. In this new model, people smoked because they made a rational choice to do so in the sense of a welfare-maximizing calculus based on their preferences and existing circumstances. Although knowledge and addiction retained a place within this model, they were only two factors among many others such as price, education, and pleasure that could influence an individual’s decision to smoke. Moreover, it was up to that individual to determine the importance of these two factors when they weighed their options. 
As Chaloupka and other leading public health experts and economists argued in an influential World Bank (1999:3) report on tobacco control:

Consumers are usually the best judges of how to spend their money…. [They make] rational and informed choices after weighing the costs and benefits of [their actions]…. Smokers clearly perceive benefits from smoking, such as pleasure and the avoidance of withdrawal, and weigh these against the private costs of their choice. Defined this way, the perceived benefits outweigh the perceived costs, otherwise smokers would not pay to smoke.

To recapitulate, I showed here how a global health device like sin taxes grew out of Chicago’s microeconomic tradition and, in particular, Becker’s project to redefine economics as a function of individual choice and expand it to non-economic domains. Moreover, I outlined how sin taxes were later decoupled from Becker’s project and redeployed as a key strategy in public health efforts to fight the smoking epidemic in the Global South. This redeployment, I also showed, was accompanied by the introduction of a new model of the smoker—the rational, welfare-maximizing individual—within the international tobacco control field. To conclude, I want to reflect on how this story relates to wider historical accounts about the proliferation of micro-technologies within international development and humanitarian aid. In their writings, Collier, Redfield, and others caution against the familiar and well-rehearsed explanation that this proliferation is the result of a shift from welfare states and the social to markets and the individual (e.g., Collier 2011; Cross, 2013; Redfield 2012). Instead, they suggest that the multiplication of these micro-devices is associated with a rupture in development thought from a macroeconomic concern with large, national physical infrastructure projects to a microeconomic focus on the investments in human capital (Collier et al. 2017; see also Reubi 2016). The story of sin taxes outlined here strongly resonates with this broad historical tableau sketched by Collier and others. To begin with, sin taxes emerge from the reconfiguration of Chicago economics from a macroeconomic discipline concerned with markets as coordinating devices to a microeconomic tradition focused on rational individual behavior. It is worth emphasizing that, in the context of this reconfiguration, markets and individual choices stand in contrast to each other. 
Indeed, this might come as a surprise to some readers for whom markets and individual choice are necessarily—almost naturally—associated. Furthermore, it is critical to realize that the shift from mass public information campaigns to sin taxes that marked the field of international tobacco control in the late 1990s was not a shift from the social to the individual, but rather a change in the concept of the individual. It was a move away from an individual for whom knowledge always and automatically triggered certain actions to an individual who could decide not to act on knowledge and prioritize other elements such as money and pleasure instead. Last, the strong emphasis placed on individual choice in both Becker’s attempts to reform economic thought and global health efforts to curb smoking should not be interpreted as the death of the social. Indeed, in echo of Collier’s (2011) work on the post-Soviet social, the notion of the social or society has remained important for both projects, albeit in different forms. Thus, for Becker (1997:150), sin taxes are “social taxes” that can protect American “society” from the “social harms” associated with rational addictive behaviors, whereas for global health experts, sin taxes are public health “interventions” that can shield developing “societies” from the health and “socio-economic toll” of “21st-century lifestyles” (WHO 2010:vii, 37).

David Reubi is a Wellcome Trust Fellow in the Department of Global Health & Social Medicine, King’s College London, where he is currently working on a manuscript about the biopolitics of the African smoking epidemic.


This essay draws on research funded through a Wellcome Trust Society and Ethics Fellowship. The essay also benefited from Stephen Collier and Peter Redfield’s thoughtful comments.


Akrich, Madeleine. 1992. “The De-Scription of Technical Objects.” In Shaping Technology/Building Society, edited by W. E. Bijker and J. Law, pp. 205–224. Cambridge, MA: MIT Press.


The Participatory Development Toolkit

The Participatory Development Toolkit is a “small briefcase (26 x 33 x 10 cm) containing 221 activity cards, 65 pictures, 11 charts, 1 guidebook”; it is “covered in brown patterned cloth, with leather handle and leather snap closure.” It is decorated with drawings of women, abstract patterns, huts, trees, animals: drawings, the kit’s guide explains, “by the Warli tribe, who live in the Sahadri mountains in Maharashtra state north of Bombay” and who are “known for their mythic vision of Mother Earth, their traditional agricultural methods, and their lack of caste differentiation” (Narayan-Parker and Srinivasan 1994).

The Participatory Development Toolkit, created by Deepa Narayan, Lyra Srinivasan and others, funded by the World Bank and the United Nations Development Program, produced in India by Whisper Design of New Delhi, coordinated by Sunita Chakravarty of the Regional Water and Sanitation Group in New Delhi in 1994. This copy owned by the Getty Research Library, Los Angeles, CA.

The Participatory Development Toolkit was created in 1994 primarily by Lyra Srinivasan and Deepa Narayan, two development professionals who at the time worked for the United Nations Development Program (UNDP) and the World Bank. Unsnapped and opened, it reveals a set of 25 folders and a booklet: “Each individual envelope is coded with a number and a title on its flap.” The lid folds back to allow the kit to form a stand, and “every fifth envelope has a color-coded tab. To gain access to the materials in each set of envelopes, pull the tab and the envelopes will extend toward you” (Narayan-Parker and Srinivasan 1994). The Participatory Development Toolkit arrived at the zenith of the rage for “participatory” development. That enthusiasm lasted from the early 1970s, when the United Nations created a “Popular Participation Program” (Pearse and Stiefel 1979), to the 1980s spread of “participatory action research” (Reason 2008), to the prominence in the 1990s of the “participatory rural appraisal” (Chambers 1994), to the 2000/2001 World Development Report, which incorporated “Participatory Poverty Assessments” from around the world (Green 2014; World Bank 2001). Alongside the World Bank Participation Sourcebook (World Bank 1996) and a range of other handbooks and sourcebooks and kits, the Participatory Development Toolkit stands out for being an actual kit: a briefcase containing folders that reveal a range of activities, cards, photographs, game pieces, puppets (“flexi-flans”), and, especially, sets of images.

Activity #3, Cards a and b; Flexi-flans, Activity #8, sheets 1 and 2 (Narayan-Parker and Srinivasan 1994).

One can sense in the Participatory Development Toolkit an enthusiasm for inclusiveness, respect, curiosity, and a close-to-the-community style of development; these games, cards, and images are designed to draw people into discussing problems and situations that immediately affect them, to elicit stories and images of the future they would prefer to have, and to debate the solutions to the problems they experience. There are countless versions of a sort of now-and-later game: pictures of unsanitary, impoverished, violent nows, followed by cleaner, wealthier, more humane laters.

It’s not clear how often the kit (itself) was used. The games and images and techniques it contains show up in different settings across decades of attempts to install participatory development in various times and places. Many of the 25 different folders contain activities from Lyra Srinivasan’s SARAR[1] methodology, one of dozens of different packaged methods for engaging people in participation and collective uplift. Others are cribbed directly from the social psychology of Kurt Lewin, who himself inspired a generation of “participatory” research, especially in management (Alden 2012; Lezaun and Calvillo 2014).

Pump Repair Issues. Activity #19 in (Narayan-Parker and Srinivasan 1994, pp. 44-45).

But the simple fact that the kit exists at all is worth dwelling upon. Why was a “toolkit” necessary for an activity called “participatory development” in the 1990s? Who were the tool users, and what might they have done with it? Is the toolkit a device for enticing participation, for improving it, or for something else? What imagination drove its form and function, and can we learn anything from it about today’s attempts to build little development devices, or design humanitarian goods? Can we think of the Participatory Development Toolkit as a precursor to our contemporary attempts to transform development through apps, platforms, algorithms or infrastructures?

The Problem of a Participatory Development Toolkit

At the heart of this kit is a conundrum. The toolkit seeks to “scale up” and spread globally something conceived of as essentially “context specific.” Participatory development, in most of its different guises, has always resisted the idea of a uniform, universal, top-down, one-size-fits-all development. Along with many other critiques of such dreams, participatory development proposes that proper development success should depend on attending to the very specific needs of particular people. Each community, village, neighborhood, council, or agricultural extension district is its own special place, with its own special needs that cannot simply be treated like the next. Rather, development should involve the residents in diagnosing problems and planning solutions.

A toolkit is a device for decontextualizing: it is filled with tools that can be used in multiple different contexts, tools that are standardized and hardened into a semi-universal state. But the tools are not automatic; a toolkit implies the existence of a skilled tool user as well. A toolkit sits somewhere between an imagination of a context-specific, autonomous, and self-guided development without any facilitation on the one hand; and on the other, the large-scale, universal, automatic spread of one-size-fits-all solutions everywhere. The Participatory Development Toolkit itself reflects exquisite awareness of this problem. The authors take pains to mount warnings at every turn: the kit does not stand alone; the images and games should not be used without adapting them; the kit should not be used to extract information (rather than incite participation); the user of the kit should be prepared to give up control of the kit; the kit, indeed, is not essential (see, for example, Narayan-Parker and Srinivasan 1994:1–5; Srinivasan 1990:12–13).

Activity #3, Chart 2 (Narayan-Parker and Srinivasan 1994).

In between universalism and hyper-specificity sits the kit: mediating by taking what works at a local level, attempting to quasi-formalize it, and inserting it into a briefcase so that it can be carried to the next site to repeat its context-specific success.

“Scalability” of this sort is also at the heart of our contemporary enthusiasm for apps, platforms, and quasi-algorithmic solutions to the problems of development. The large-scale “big development” projects of mid-century, where scale often meant simply “large,” used “economies of scale” to attain a certain economization or efficiency as a project grew larger; conversely, the “small-is-beautiful” technology solutions of the 1970s counseled a return to the local, the situated, and the appropriate. But contemporary scalability sees in the small a mere instance of the large: a solution at the small scale (e.g., a LifeStraw for dirty water; Redfield 2016) can be “scaled up” and distributed globally. It is small and large at the same time. Some kinds of “tools” are scalable in this sense (software and algorithms preeminent among them), and others, perhaps, are not (dams and bush pumps).

The Participatory Development Toolkit tries to accomplish something similar: it takes a program for participation developed in response to specific cases, generalizes it, and spreads it to other sites and cases. It is “quasi-algorithmic” in the sense that it involves a set of steps in a sort of recipe, but it also relies on the existence of both a skilled tool user (the facilitator of participation, usually a development professional of some kind) and a defined group of participants (women, members of a village, a congress of delegates, extension workers, etc.). Such collectives are called into being just at the moment when the kit is in use. This process produces an experience called “participation.”

To put this contemporary problem in perspective, it is important to emphasize that there have over the decades been plenty of examples of “experiments with participation” not only in development, but also in art, in science and technology policy, in urban planning, or in the workplace (Kelty 2017; Lezaun et al. 2016). It is worth turning to the history of participatory development to understand better what these past experiments sought to achieve.

The Participation That Was

Participatory development has failed at least once already. This is perhaps not obvious to a generation of development workers or scholars discovering participation for the first time in the 2010s. In the 1970s both small, alternative groups (such as Budd Hall and the Participatory Development Network) and large organizations such as the United Nations Popular Participation Program embraced an earlier version of participatory development with enthusiasm. And as it succeeded from the 1970s to the 1990s, it came in for its own critique: by the year 2001, participatory development was being called “a new tyranny” (Cooke and Kothari 2001). The book bearing that subtitle suggested that many things had gone wrong with participation: that it had been bureaucratically routinized; that recipients were gaming the system to become “professional participants”; that it rested on a myth of community or village structure that was inadequate in most places and unsuited to the realities of globalization; and so on. Perhaps most important, it wasn’t clear that participatory development alleviated poverty any better than non-participatory development had.

There was also a clear sense, captured best in Francis Cleaver’s critique, that true participation had been betrayed by toolkits in general:

“Participation” in development activities has been translated into a managerial exercise based on “toolboxes” of procedures and techniques. It has been turned away from its radical roots: we now talk of problem solving through participation rather than problematization, critical engagement and class (Cooke and Kothari 2001:53).

Unserialized Posters; “Fourteen pictures showing various human situations and interactions.” Activity #7 in (Narayan-Parker and Srinivasan 1994, pp. 20-21)

What were these radical roots, and how did they grow into a Participatory Development Toolkit? There are multiple interesting origin points for the Participatory Development Toolkit. The “radical” that Cleaver is no doubt thinking of is the work of Paulo Freire and more generally of “participatory action research” from the early 1970s onward (Freire 2014; Reason and Bradbury 2001). The idea that toolkit makers might try to roll up Paulo Freire and tuck him inside a kit is perhaps surprising, but actually quite obvious if one reads his work carefully. Freire’s ideas of “conscientização” dictated not just a participatory engagement with the impoverished subject, but in particular the use of imagery, games, and specific forms of contextualization. The instructions for using these images in the Participatory Development Toolkit parallel Freire’s own discussion of them in Pedagogy of the Oppressed (2014): they must be “non-directive” (i.e., not “sectarian”) and they must rely entirely on the perceptions (and “perceptions of previous perceptions”) of the “wretched of the earth” themselves. Many of the activities of the kit are directed toward instilling first an understanding of this “non-directive” form of analytical work, to be followed only later by substantive discussion of pumps, latrines, disease, and so on. Once inside the kit, however, Freire’s radical, Marxist pedagogy runs the risk of appearing lightweight and inauthentic, transformed into an exercise in “project management” ripe for critique.

Stuffed inside the kit alongside Freire is Robert Chambers, the development scholar and practitioner most often associated with the rise of participatory development in the 1980s. Chambers started life as a colonial administrator in Kenya, and it was only late in the 1980s that he began to embrace participation as a technique (Cornwall and Scoones 2011). He came to it not as Freire did, as a liberation of the wretched of the earth, but primarily as a question of ascetic practice, which is to say it was less about the participation of the impoverished villager than a form of work on the self for the development professional. Chambers was primarily concerned with “seeing reality” clearly in the hopes of transforming poverty, and he insisted that most of what development professionals did obscured reality: they engaged in “rural development tourism,” they suffered from “tarmac blindness” and “survey slavery” (Chambers 1983). They needed to be given the tools to see what was right in front of them, and to this end, Chambers advocated the flexible use of multiple different methods.

Activity #6, Diagrams 1–3, (Narayan-Parker and Srinivasan 1994)

To address this problem, Chambers pioneered a kind of “method of any method,” by which development workers could transform the simplest of techniques, like walking around and talking with people, into legitimate tools in a toolkit. Interviews, transect walks, pocket charts, ethnographic observation, and much more were lumped together and labeled “participatory rural appraisal.” The approach is clear in the Participatory Development Toolkit: there are 25 folders with different games and activities, each appropriate to a different challenge. There are also explicit directions, much like those that Chambers issued in everything he wrote, to “improvise” and adjust activities to the context and the site in question, to extend the kit and add to it, and, especially, to do so with the participation of those at the receiving end of development’s interventions.

Chambers’ approach implied that any such toolkit required a skilled tool user, and to become such a person, one had to work on oneself, develop new capacities, overcome blindness, see reality clearly, and so on. Only such transformed development workers would be able to effectively take this kit to the field to elicit the kind of participation promised by the likes of Freire (whom he recognized but did not claim as an inspiration). Despite the step-by-step nature of the toolkit (or any of the sourcebooks, scripts, or manuals promulgated as “participatory development”), the quasi-algorithm required a bit of human input: not just any human input, but that of self-reflective, awakened experts.

From the perspective of a later critic such as Cleaver, the toolkit is a proxy for the rigid, hierarchical, male engineer who sees a standard, technological solution to every problem. Participatory development—radical or not—is directly opposed to such powerful, unaccountable forms of decision-making. To the extent that the figure of the Engineer is the tool user, the toolkit is dangerous.

From Chambers’ perspective, however, the enlightened user of the toolkit can achieve a different outcome; tools are figured as neutral and emancipating when given to the right people by the right people, and the result would be the scalable development of both the professional agent and the impoverished subject of development.

Device, Toolkit, Algorithm

What is at stake in thinking of the Participatory Development Toolkit as a “quasi-algorithm”? What might be the difference between a briefcase of paper games and routines for eliciting participation, and a piece of software that tries to do something similar, but is implemented on a solar-powered, GPS-enabled, data-intensive smartphone app? Can we see this kit as a vantage point from which to evaluate the contemporary explosion of various devices for development, especially those that demand the input of users concerning local conditions while using standard forms and algorithmic procedures to scale up and travel?

One obvious thing to say about the Participatory Development Toolkit is that it does not contain tools or supplies of a conventional kind. There are no hammers, pliers, or wrenches; there are no Band-Aids, gauze, or Bactine as there would be in a first aid kit; it is not quite the “kit” pioneered by Médecins Sans Frontières capable of unfolding an emergency treatment center in a remote or decimated location (Redfield 2013:69ff). Instead, it contains scripts, games, and procedures designed to elicit experiences. When opened and set into operation, it tries to create a joyful occurrence: people are called to draw pictures, make maps, play a game, or discuss a problem related to their immediate life experience and surroundings. In this respect, its “devices” are similar to what Soneryd and Lezaun call “technologies of elicitation,” or what Caroline Lee refers to as “do-it-yourself” or “designer” democracy; they are procedures and practices of convoking individuals to elicit debate, deliberation, opinion, or decision-making (Lee 2014; Lezaun and Soneryd 2007).

The toolkit is not, however, immaterial as a result. The material properties of the Participatory Development Toolkit are important; it is meant to travel, it has a handle, and it carries both its theory and its practice in easily accessible compartments and a handy users’ manual. The toolkit is not a device itself, but more like a “platform”: a box full of different devices all dependent on a similar form of action and general theory of participation. These devices are not technologically sophisticated, but neither, really, are most apps or software programs. They may depend on a technologically sophisticated infrastructure (to exist), but at the end of the day they are simple programs: devices designed to achieve particular results. What is the relation between the participating humans and the toolkit? In the toolkit, the games and images and scripts call on people to interact in specific ways. The development agents, along with those they interact with (villagers, women, engineers, farmers, politicians), are given rules, or shown images, or follow loose scripts for “non-directive” interaction with each other. The goal, or outcome, is to either diagnose a problem or propose solutions to it. It does not solve a problem diagnosed elsewhere, higher up or far away, without the involvement of people, but presumes instead that the diagnosis of the problem itself has yet to happen, or that the proposed solutions must come from the context-specific encounter itself.

This is the origin of its power: it promises a highly context-dependent exploration of problems specific to those who meet and engage in the production of these experiences. This is why it enrolls people into its project. The conundrum comes from the fact that the devices for eliciting such experiences are (perhaps unwillingly) universalized in the toolkit, made to travel. Whereas an individual development consultant might bring a set of techniques and procedures with her to a variety of (necessarily limited) places, the Participatory Development Toolkit implicitly suggests that through replication, many more people can carry these procedures to many more places.

What’s more, it is not merely a toolkit-as-commodity being replicated; it is also a toolkit funded by and branded with the insignia of the World Bank and the UNDP. These institutions make participation more or less bureaucratic, and authorize it as a form of practice. It is not clear that the Participatory Development Toolkit was required in any way, but along with manuals such as the World Bank Participation Sourcebook (World Bank 1996), its techniques and procedures were incorporated into the standardized practices of development. One can find the same games and scripts in the Sourcebook that appear in the Participatory Development Toolkit.

The institutional standardization of participation is what provokes the suspicion of the toolkit itself, in cases such as Francis Cleaver’s critique above; rather than a highly contextualized participatory engagement, it suggests instead a bureaucratically standardized set of forms and practices, riven from the context. Soneryd makes a similar point in discussing more recent “technologies of participation”: it is not an accident that this standardization happens, precisely because many actors in these organizations actively seek to “imitate and replicate” forms of participation that have worked elsewhere (Soneryd 2016:149).

Such institutional embedding (to use the new institutionalist language) is not dissimilar to the kind of infrastructural “network effects” (to use the engineering/economic language) of internet-based apps and platforms that similarly circulate plans, techniques, and procedures in the interest of producing an experience. As the kit succeeds, it draws more people into a particular form of participation, and produces professionals and networks of practice that draw on these tools as exemplary forms of participation. Both aim at scaling up and circulating the local without losing (the character of that) local specificity. But such tools are inevitably subject to both technological and institutional mimicry, standardization, and control, whether that be an audit culture of measuring results or an advertising-dependent system of revenue generation.

The Participatory Development Toolkit represents a stage in this evolution. It is “quasi-algorithmic” but not fully routine in the sense that it does not operate automatically, in the absence of context, judgment, or serendipity. Nor is it “computational” in any sense. Rather, a development agent takes the place of the networked computer: he or she runs the program (as a neutral agent: a CPU, as it were) and records the data into memory. The users of the algorithm are the participants: villagers, women, extension agents, etc. They give their data and ideas to the machine in the hopes that it will spit out a solution and perhaps some money.

The term “algorithm” used to mean a set of rules, not unlike a recipe, or the rules of a game. In this respect, the operator is like a player or a chef: some are good and some are bad. Robert Chambers’ desire to see development agents remake themselves as agents of participation relies on such a notion: you can have the best recipes in the world, and still produce a bad meal.

Open Ended Snakes and Ladders; Activity #24 in (Narayan-Parker and Srinivasan 1994, pp. 54–55).

Lately, however, the “algorithm” has come to mean something more than just a set of steps. Rather, it is a kind of living system that depends both on computational processing of recipe-like rules, and on the constant input of many participants: participants who feed it regularly, not just use it. The Facebook timeline, to take only the most storied case, depends both on a large set of rules of searching, sorting, and comparing possible content, and on an always-changing database of what people who are connected to other people view, like, linger upon, or swipe past. This is not the same thing as a simple set of rules that depend on expert execution; rather, it seems to enable a certain fantasy of—and provoke a certain desire for—participating in an enormous, amorphous, yet nevertheless intimate collective that represents itself to itself constantly.

In its ideal version, this happens completely without human control or intervention, making the local into a universal. In reality, such “automation” reproduces the good and the bad of the local (as Facebook, Twitter, and others are discovering in the case of the 2016 U.S. election), and a reversion to the former meaning of algorithm becomes more appealing again.

Seen from this perspective, the Participatory Development Toolkit is an interesting moment in the development of devices for development. It is perhaps more like the algorithm-as-recipe in its quaint leather-bound form, but perhaps it also betrays a desire for the newer algorithm-as-system in which, all over the world, people are enabled to participate constantly in the diagnosis and solution of their own problems. Or maybe it should be seen in light of the success of the contemporary demand for constant, unreflective participation of the sort promoted by social media. Perhaps it reveals a now nearly forgotten desire for scaling up something difficult to scale up: the reflexive practitioner whose “algorithm” is human judgment, memory, and discernment, and not an automatic, machine-learning, artificial intelligence. Perhaps it reveals a present danger of an endless participation without deliberation, whereas the analog briefcase could still, at least, contain a trace of the reflexive practitioner, the Marxist pedagogue, or the evangelical development ascetic.

Christopher Kelty is professor at the University of California, Los Angeles.


Alden, Jenna. 2012. “Bottom-Up Management: Participative Philosophy and Humanistic Psychology in American Organizational Culture, 1930-1970” [PhD dissertation]. Columbia University Academic Commons. New York, NY: Columbia University.

Chambers, Robert. 1983. Rural Development: Putting the Last First. World Development Series. Newark, NJ: Prentice Hall.

———. 1994. “The Origins and Practice of Participatory Rural Appraisal.” World Development 22(7):953–969.

Cooke, Bill, and Uma Kothari. 2001. Participation: The New Tyranny? London, UK: Zed Books.

Cornwall, Andrea, and Ian Scoones. 2011. Revolutionizing Development: Reflections on the Work of Robert Chambers. Oxon; New York: Earthscan.

Freire, Paulo. 2014. Pedagogy of the Oppressed: 30th Anniversary Edition. Originally published 1970. London, UK: Bloomsbury Publishing.

Green, Maia. 2014. The Development State: Aid, Culture & Civil Society in Tanzania. Suffolk, U.K.: James Currey.

Kelty, Christopher M. 2017. “Too Much Democracy in All the Wrong Places: Toward a Grammar of Participation.” Current Anthropology 58(S15):S77–S90.

Lee, Caroline. 2014. Do-It-Yourself Democracy: The Rise of the Public Engagement Industry. New York, NY: Oxford University Press.

Lezaun, Javier, and Nerea Calvillo. 2014. “In the Political Laboratory: Kurt Lewin’s Atmospheres.” Journal of Cultural Economy 7(4):434–457.

Lezaun, Javier, Noortje Marres, and Manuel Tironi. 2016. “Experiments in Participation.” In Handbook of Science and Technology Studies, edited by Ulrike Felt, Rayvon Fouché, Clark A. Miller, and Laurel Smith-Doerr, pp. 195–222. Cambridge, MA: MIT Press.

Lezaun, Javier, and Linda Soneryd. 2007. “Consulting Citizens: Technologies of Elicitation and the Mobility of Publics.” Public Understanding of Science 16(3):279–297.

Narayan-Parker, Deepa, and Lyra Srinivasan. 1994. Participatory Development Tool Kit: Materials to Facilitate Community Empowerment. Compiled by Deepa Narayan-Parker and Lyra Srinivasan. 221 activity cards, 65 pictures, 11 charts, 1 guidebook in briefcase; 26 × 33 × 10 cm. Washington, DC: World Bank.

Pearse, Andrew, and Matthias Stiefel. 1979. “Inquiry into Participation: A Research Approach.” Technical report UNRISD/79/C.14 GE.79-2103. Geneva, Switzerland: United Nations Research Institute for Social Development.

Reason, Peter. 2008. The SAGE Handbook of Action Research: Participative Inquiry and Practice. Los Angeles, CA, and London, UK: SAGE.

Reason, Peter, and Hilary Bradbury. 2001. Handbook of Action Research: Participative Inquiry and Practice. London, UK: Sage.

Redfield, Peter. 2013. Life in Crisis: The Ethical Journey of Doctors Without Borders. Berkeley, CA: University of California Press.

———. 2016. “Fluid Technologies: The Bush Pump, the LifeStraw®, and Microworlds of Humanitarian Design.” Social Studies of Science 46(2):159–183.

Sawyer, Ron. 2011. “SARAR: A Methodology by Lyra Srinivasan.” Documentary. Culture Unplugged.

Soneryd, Linda. 2016. “Technologies of Participation and the Making of Technologized Futures.” In Remaking Participation: Science, Environment and Emergent Publics, edited by Jason Chilvers and Matthew Kearnes, pp. 144–161. London, UK, and New York, NY: Routledge.

Srinivasan, Lyra. 1990. Tools for Community Participation: A Manual for Training Trainers in Participatory Techniques. New York, NY: PROWWESS/United Nations Development Program.

World Bank. 1996. The World Bank Participation Sourcebook. Washington, DC: World Bank.

World Bank. 2001. World Development Report 2000/2001: Attacking Poverty. World Development Report. New York: World Bank.


[1] SARAR is an acronym for “Self esteem, Associative strength, Resourcefulness, Action Planning, Responsibility.” For more on SARAR, see Sawyer’s documentary (2011).

The Firesign Theatre’s Wax Poetics: Overdub, Dissonance, and Narrative in the Age of Nixon


The Firesign Theatre are the only group that can claim among its devoted fans both Thom Yorke and John Ashbery; who have an album in the National Recording Registry at the Library of Congress and also coined a phrase now used as a slogan by freeform giant WFMU; and whose albums were widely distributed by tape among U.S. soldiers in Vietnam, and then sampled by the most selective classic hip hop DJs, from Steinski and DJ Premier to J Dilla and Madlib.

Formed in 1966, they began their career improvising on Los Angeles’s Pacifica station KPFK, and went on to work in numerous media formats over their four-decade career. They are best known for a series of nine albums made for Columbia Records, records that remain unparalleled for their density, complexity, and sonic range. Realizing in an astonishing way the implications of the long playing record and the multi-track recording studio, the Firesign Theatre’s Columbia albums offer unusually fertile ground for bringing techniques of literary analysis to bear upon the fields of sound and media studies (and vice versa). This is a strategy that aims to reveal the forms of political consciousness that crafted the records, as well as the politics of the once-common listening practices binding together the disparate audiences I have just named. It is no accident that the associative and referential politics of the sample in “golden age” hip hop would have recognized a similar politics of reference and association in Firesign Theatre’s sound work, in particular in the group’s pioneering use of language, time, and space.


The Firesign Theatre (wall of cables): John Rose, Image courtesy of author

The Firesign Theatre is typically understood as a comedy act from the era of “head music” — elaborate album-oriented sounds that solicited concerted, often collective and repeated, listening typically under the influence of drugs. But it may be better to understand their work as attempting to devise a future for literary writing that would be unbound from the printed page and engaged with the emergent recording technologies of the day. In this way, they may have crafted a practice more radical, but less recognizable, than that of poets —such as Allen Ginsberg or David Antin, both of whose work Firesign read on the air — who were also experimenting with writing on tape during these years (see Michael Davidson’s Ghostlier Demarcations: Modern Poetry and the Material Word, in particular 196-224). Because their work circulated almost exclusively on vinyl (secondarily on tape), it encouraged a kind of reading (in the strictest sense) with the ears; the fact that their work was distributed through the networks of popular music may also have implications for the way we understand past communities of music listeners as well.

The period of Firesign’s contract (1967-1975) with the world’s largest record company parallels exactly the recording industry’s relocation from New York to Los Angeles, the development of multitrack studios which made the overdub the dominant technique for recording pop music, and the rise of the LP as a medium in its own right, a format that rewarded, and in Firesign’s case required, repeated listening. These were all factors the Firesign Theatre uniquely exploited. Giving attention to the musicality of the group’s work, Jacob Smith has shown (in an excellent short discussion in Spoken Word: Postwar American Phonograph Cultures that is to date the only academic study of Firesign) how the group’s attention to the expansion of television, and in particular the new practice of channel-surfing, provided both a thematic and a formal focus for the group’s work: “Firesign […] uses channel surfing as the sonic equivalent of parallel editing, a kind of horizontal or melodic layering in which different themes are woven in and out of prominence until they finally merge. Firesign also adds vertical layers to the narrative in a manner analogous to musical harmony or multiple planes of cinematic superimposition” (181). But more remains to be said not only about the effect of the Firesign Theatre’s work, but about its carefully wrought semantics, in particular the way the “horizontal” and “vertical” layers that Smith identifies were used as ways of revealing the mutually implicated regimes of politics, culture, and media in the Vietnam era — at the very moment when the explosion of those media was otherwise working to disassociate those fields.

The group’s third album, Don’t Crush That Dwarf, Hand Me the Pliers is typically understood as their first extended meditation on the cultural phenomenology of television. Throughout the record, though there is much else going on, two pastiches of 1950s genre movies (High School Madness and a war film called Parallel Hell!) stream intermittently, as if through a single channel-surfing television set. The films coincide in two superimposed courtroom scenes that include all the principal characters from both films. By interpenetrating the school and the war, the record names without naming the killing of four students at Kent State and two students at Jackson State University, two events that occurred eleven days apart in May 1970 while the group was writing and recording in Los Angeles. Until this point rationalized by the framing fiction of a principal character watching both films on television, the interpenetration of the narratives is resolvable within the album’s diegesis—the master plot that accounts for and rationalizes every discrete gesture and event—only as a representation of that character’s having fallen asleep and dreaming the films together, a narrative sleight of hand that would testify to the group’s comprehension of literary modernism and the avant-garde.

The question of what may “cause” the interpenetration of the films is of interest, but the Firesign Theatre did not always require justification to elicit the most outrageous representational shifts of space (as well as of medium and persona). What is of more interest is the way rationalized space — the space implied by the “audioposition” of classic radio drama, as theorized by Neil Verma in Theater of the Mind— could be de-emphasized or even abandoned in favor of what might instead be called analytic space, an aural fiction in which the institutions of war and school can be understood as simultaneous and coterminous, and which more broadly represents the political corruptions of the Nixon administration by means of formal and generic corruption that is the hallmark of the Firesign Theatre’s approach to media (35-38).

While the techniques that produce this analytic soundscape bear some resemblance to what Verma terms the “kaleidosonic style” pioneered by radio producer Norman Corwin in the 1940s — in which the listener is moved “from place to place, experiencing shallow scenes as if from a series of fixed apertures” — even this very brief sketch indicates how radically the Firesign Theatre explored, deepened, and multiplied Corwin’s techniques in order to stage a more politically diagnostic and implicative mode of cultural interpretation. Firesign’s spaces, which are often of great depth, are rarely traversed arbitrarily; they are more typically experienced either in a relatively seamless flow (perspective and location shifting by means of an associative, critical or analytical, logic that the listener may discover), or are instead subsumed within regimes of media (a radio broadcast within a feature film which is broadcast on a television that is being watched by the primary character on the record album to which you are listening). According to either strategy the medium may be understood to be the message, but that message is one whose horizon is as critical as it is aesthetic.


Firesign Theatre (pickup truck): John Rose, Image courtesy of author

The creation of what I am terming an analytic space was directly abetted by the technological advancement of recording studios, which underwent a period of profound transformation during the years of the group’s Columbia contract — a span stretching from the year of The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band (arguably the world’s first concept album, recorded on four tracks) to that of Pink Floyd’s Wish You Were Here (arguably that band’s fourth concept album, recorded on 24 tracks). Pop music had for years availed itself of the possibilities of recording vocals and solos separately, or doubly, but the dominant convention was for such recordings to support the imagined conceit of a song being performed live. As studios’ technological advances increased the possibilities for multitracking, overdubbing, and mixing, pop recordings such as Sgt. Pepper and the Beach Boys’ Pet Sounds (1966) became more self-evidently untethered from the event of a live performance, actual or simulated. In place of the long-dominant conceit of a recording’s indexical relation to a particular moment in time, pop music after the late 60s came increasingly to define and inhabit new conceptions of space, and especially time. Thus, when in 1970 Robert Christgau asserted that the Firesign Theatre “uses the recording studio at least as brilliantly as any rock group” (awarding the album a very rare A+), he was remarking on the degree to which distortions and experiments with time and space were if anything more radically available to narrative forms than they were to music.

The overdub made possible much more than the simple multiplication and manipulation of aural elements; it also added depth and richness to the soundfield. New possibilities of mixing, layering, and editing also revealed that the narrative representation of time, as well as the spatial elements I’ve just described, could be substantially reworked and given thematic meaning. In one knowing example, on 1969’s How Can You Be in Two Places at Once When You’re Not Anywhere at All, an accident with a time machine results in the duplication of each of the narrative’s major characters, who then fight or drink with each other.

This crisis of the unities is only averted when a pastiche of Franklin Delano Roosevelt interrupts the record’s fictional broadcast, announcing the bombing of Pearl Harbor, and his decision to surrender to Japan. On a record released the year the United States began secret bombing in Cambodia, it is not only the phenomenological, but also the social and political, implications of this kind of technologically mediated writing that are striking: the overdub enables the formal representation of “duplicity” itself, with the gesture of surrender ironically but pointedly offered as the resolution to the present crisis in Southeast Asia.

To take seriously the Firesign Theatre’s experiments with medium, sound, and language may be a way of reviving techniques of writing — as well as of recording, and of listening — that have surprisingly eroded, even as technological advances (cheaper microphones, modeling software, and programs from Audacity and GarageBand to Pro Tools and Ableton Live) have taken the conditions of production out of the exclusive purview of the major recording studios. In two recent essays in RadioDoc Review called “The Arts of Amnesia: The Case for Audio Drama Part One” and “Part Two,” Verma has surveyed the recent proliferation of audio drama in the field of podcasting, and urged artists to explore more deeply the practices and traditions of the past, fearing that contemporary aversion to “radio drama” risks “fall[ing] into a determinism that misses cross-fertilization and common experiment” (Part Two, 4). Meanwhile, Chris Hoff and Sam Harnett’s live performances from their excellent World According to Sound podcast are newly instantiating a form of collective and immersive listening that bears a resemblance to the practices that were dominant among Firesign Theatre listeners in the 1960s and 70s; this fall they are hosting listening events for Firesign records in San Francisco.


The Firesign Theatre (mixing board): Bob & Robin Preston, Image courtesy of author

It is tempting to hope for a wider range of experimentation in the field of audio in the decade to come, one that either critically exploits or supersedes the hegemony of individualized listening emblematized by podcast apps and noise-cancelling headphones. But if the audio field instead remains governed by information-oriented podcasts, leavened by a subfield of relatively classical dramas like the very good first season of Homecoming, a return to the Firesign Theatre’s work can have methodological, historical, and theoretical value because it could help reveal how the experience of recorded sound had an altogether different political inflection in an earlier era. Thinking back to the remarkably heterogeneous set of Firesign Theatre fans with which I began, it is hard not to observe that in the dominant era of the sample in hip hop the primary apparatus of playback was not the Walkman but the jambox, with its politics of contesting a shared social space through collective listening. However unwished-for, this determinist line of technological thinking would clarify the way media audiences are successively composed and decomposed, and show more clearly how, to use Nick Couldry’s words in “Liveness, ‘Reality,’ and the Mediated Habitus from Television to the Mobile Phone,” “the ‘habitus’ of contemporary societies is being transformed by mediation itself” (358).

Featured Image: The Firesign Theatre (ice cream baggage claim): John Rose, courtesy of author.

Jeremy Braddock is Associate Professor of English at Cornell University, where he specializes in the production and reception of modernist literature, media, and culture from the 1910s through the long twentieth century. His scholarship has examined the collective and institutional forms of twentieth-century authorship that are obscured by the romanticized figure of the individual artist. His book Collecting as Modernist Practice — a study of anthologies, archives, and private art collections — won the 2013 Modernist Studies Association book prize. Recent publications include a short essay considering the literary education of Robert Christgau and Greil Marcus and an essay on the Harlem reception of James Joyce’s Ulysses. He is currently working on a book on the Firesign Theatre.

REWIND! . . .If you liked this post, you may also dig:


“Radio’s ‘Oblong Blur’: Notes on the Corwinesque”–Neil Verma

The New Wave: On Radio Arts in the UK–Magz Hall

This is Your Body on the Velvet Underground–Jacob Smith