A few weeks ago, Peter Suber, one of the leading figures of the open access movement, published a blog post on the website of The American Philosophical Association, entitled ‘Why Open Access is Moving so Slow in the Humanities’. In it, he sums up nine reasons why this is the case; I will mention a few below:
‘Journal subscriptions are much higher in Science, Technology and Medicine (STM) than in the Humanities & Social Sciences (HSS). In the humanities, relatively affordable journal prices defuse the urgency of reducing prices or turning to open access as part of the solution.’
‘Much more STM research is funded than humanities research, so there is more money available for paying any open access charges.’
‘STM faculty typically need to publish journal articles to earn tenure, while humanities faculty need to publish books. But the logic of open access applies better to articles, which authors give away, than to books, which have the potential to earn royalties.’
The sadness of it all is that this post is only a slightly revised version of the original from 2004. Today we’re still dealing with almost the same issues as 13 years ago. One of Suber’s conclusions is that “Open access isn’t undesirable or unattainable in the humanities. But it is less urgent and harder to subsidize than in the sciences.”
I fully agree with this conclusion. But have we achieved nothing for the humanities, then? On the contrary: a lot has happened in the last 5 to 10 years to help the humanities make the transition to open access. But we are not there yet.
Open Access Journals
Globally, several humanities journals have made the flip from toll access (TA) to open access, and several new (niche) open access journals have been launched in the last couple of years. Currently 9,426 open access journals are indexed by the DOAJ, of which a substantial part is in the humanities. We must not forget, however, that a majority of those journals don’t charge a dime to publish research in open access. In many cases, and this is exemplary for the humanities, foundations, institutions, and societies pay for publishing the research.
Financing open access in the humanities is not an easy road. In my previous life as a publisher in the humanities I developed a few gold open access journals, all financed with money from institutions or research grants. However, a journal that depends on subsidies from several different institutions is a fragile model. Some of the journals had the ambition to move towards an APC model; none has done so thus far.
A new kid on the block, but a very successful one, is the Open Library of Humanities, run by Martin Eve and Caroline Edwards. They proposed and implemented a library-funded model: with enough supporting libraries, they are able to publish humanities research without APCs. The main goal is to spare authors all kinds of financial hassle.
Another trend is the renewed rise of institutional (library) open access publishing. Some examples are Stockholm University Press, UCL Press and Meson Press. They distinguish themselves from traditional university presses in that they publish research exclusively in open access.
Online research tools
Other interesting developments are the experiments with redefining online publishing. I think it’s safe to say that these experiments happen mainly in the field of media studies. Collaborative research, writing and publication platforms like MediaCommons and the recently launched Manifold are very exciting initiatives. They all experiment with new digital formats, writing and publishing tools, and data publications.
Open Access Books
Open access for the academic book has been on the agenda since 2008/2009, with the development of, amongst others, the OAPEN platform. And with indexes like the Directory of Open Access Books (DOAB), established in 2011, open access books have become visible and findable. Two weeks ago a new milestone was reached: more than 8,000 open access books from 213 publishers are now indexed by DOAB.
However, open access for books is still underrated. There is a lack of aligned policies, and the lack of funding options still makes it very difficult for (smaller) humanities publishers to come up with a sustainable model for open access books. The focus of open access funding still lies with article publishing in journals and the financial models that come along with it.
For this website, I keep track of funders (research councils and universities) that actively support open access book publishing in media studies. I have been doing this since 2015, but up till now the funding options can still be counted on four hands at most. Even in the field of open access books, though, things are happening, with projects like Knowledge Unlatched. This project looks at funding coming directly from university libraries, supporting the ‘platform’ or book package rather than the individual publication.
So, the important question now is what types of sustainable business models are appropriate for open access publishing in the humanities?
I think one important thing to keep in mind is that continuing to compare STM with HSS will not get us very far. Another problem is that (open access) funding policies are still very much focused on the local or national level, or simply look only at APCs/BPCs. We need to work on better international alignment of open access policies (per discipline) with the different stakeholders (funders, libraries, publishers).
The Dutch Approach: Open Science
In February of this year, the National Plan Open Science was launched in the Netherlands. Running up to 2020, this roadmap concentrates on three key areas:
- Promoting open access to scientific publications (open access).
- Promoting optimal use and reuse of research data.
- Adapting evaluation and reward systems to bring them into line with the objectives of open science (reward systems).
One of the requirements is that by 2020 all researchers working at a Dutch research university must publish their work (journals and books!) in open access. This includes the HSS as well. To accomplish this, the plan aims to align all Dutch stakeholders behind these requirements.
During the launch, all the important academic stakeholders (research funders and associations) in the Netherlands explicitly committed themselves to this task. In Finland, similar things are happening, and in other countries discussions about open access and open science requirements and indicators have started as well. It is of great importance to connect these initiatives as much as possible.
One other thing that Suber mentions in his blog, and that I’d like to bring into this discussion, is preprints. In the humanities, depositing preprints or postprints is not as common as it is in the sciences. That is for obvious reasons: fear of having arguments and research outcomes taken, scooping, and so on. But are all these reasons still valid?
As an academic community, it’s important to share research in order to improve science. Judging by the popularity (also among humanities scholars) of commercial social sharing platforms like Academia.edu and ResearchGate, the HSS are apparently in need of platforms that can quickly disseminate research. Note that I deliberately call them social sharing platforms, because that’s what they are.
We need to make clear to academics what the implications are of using platforms like Academia.edu and ResearchGate. Both are commercial enterprises, interested in gathering as much (personal) data as possible. Their infrastructure serves a need, but it comes at a cost. We need to think of sustainable alternatives.
Back to the preprint discussion. In the humanities (and thus for media studies), it is unusual to share research before it is published in a journal or book. But if everyone is so eager to share their publications at different stages of their research, why is it still not common practice to share the work on a preprint server, comparable to arXiv or SSRN (before it became Elsevier property), or newer servers like LawArXiv, SocArXiv, PsyArXiv, etc.?
Will it ever become common practice in the humanities to share research at an earlier stage? Maybe this practice could help move the humanities along a bit quicker?
Header image credit: Slughorn’s hourglass in Harry Potter and the Half-Blood Prince. © Warner Brothers
On Monday, June 26, the Netherlands Research Council (NWO) announced that they will terminate the Incentive Fund Open Access on January 1, 2018. NWO started this Incentive Fund in 2010 to finance open access publications and activities that highlight open access during scientific conferences.
The fund has been useful for advancing open access since it became available in 2010. However, this decision comes soon after the launch of the National Plan Open Science (NPOS), signed by NWO in early 2017. In this plan, institutions explicitly commit themselves to work on a healthy open access climate and to achieve 100% open access for researchers affiliated with Dutch research universities. Obviously this fund alone was never going to be the solution, but terminating it is a remarkable step, especially now. There is still a lot to do.
The choice is unfortunate, all the more because NWO was one of the first national research councils in Europe with an active open access policy and, moreover, with a well-funded programme from which APCs (and BPCs) could be paid, provided that the research would be available immediately upon publication (the Gold route). On a national level, NWO and the Austrian Science Fund (FWF) were the first funding bodies to mandate open access for books and allocate money for BPCs. This policy is therefore quite unique, and has only been under development elsewhere in the last three years or so.
The Incentive Fund was founded with the aim of stimulating Gold open access. NWO hoped that such a fund would serve as a model for universities to take over: individual institutions would bear the cost of open access from their own budgets. This has hardly come to fruition. Only the University of Amsterdam, Utrecht University, Delft University of Technology, and Wageningen University & Research have had such funds, and at this very moment only Utrecht still runs an open access fund.
It is absolutely fair to ask why NWO should keep on spending money if universities seem to find this step difficult. But now the pioneer has decided to throw in the towel. Understandable, but disappointing. There are enough pros (and yes, cons as well) to consider.
In this piece, I would like to offer some considerations on why it would not (or would) be wise to terminate this fund. I will take the arguments that NWO puts forward one by one:
“NWO believes that the academic world is now sufficiently aware of open access publishing and its importance.”
I doubt this very much. The debate on open access has so far been conducted predominantly by policy makers, libraries, and publishers. Researchers often submit their articles to the established and renowned, usually high-impact, journals. This (imposed) culture does not necessarily lead to more articles in open access journals. And yes, there are many researchers who are aware of the benefits of open access and publish their work in open access, but is that awareness really ‘sufficient’? The ‘academic world’ is in any case an international one.
“Currently there are many more opportunities for authors to make their publications available via open access channels without having to pay for publication costs. In part, this has been achieved through open access agreements between Dutch universities and publishers. In addition, there are a growing number of open access journals and platforms that do not charge publication costs.”
True, enormous steps have been taken over the past 20 years. Lots of journals have made the transition to open access. There are (commercial and non-profit) platforms for articles, preprints, postprints, you name it. But are they all free?
NWO points to the current OA Big Deals in the Netherlands. However, these deals are mainly focused on hybrid journals. All Gold open access journals from, for example, Springer or Wiley fall outside the deals; for these journals, an APC is still required. At present, only the deal with Cambridge University Press includes 20 Gold open access journals.
In addition, the OA Big Deals cover only a part of all Dutch open access publications in journals. As an academic community, we are currently trying to get more insight into this.
Not to mention the diversity of the deals. At Elsevier, it is possible to publish in 276 journals for ‘free’; all other (1,800+) journals still have to be paid for. It is therefore nonsense to think that there are enough channels for researchers to publish their research in open access. I want to stress that I am not saying the APC model is the holy grail. Far from it. But it is the reality that researchers are faced with.
“Finally, there is the green route, which authors can use to deposit their articles in a (university) repository at no cost.”
Yes, correct. And every university has had a repository for more than 10 years, with varying success. However, since 2013 the government has been advocating open access through the Gold route (i.e. via journals), stating above all that it is the most future-proof route, not least supported by the VSNU. For NWO, the Gold route has always been the main goal. In addition, NWO demands immediate open access (without an embargo period). This is hardly possible with Green (self-archiving) open access, unless NWO wants to force researchers to publish preprints without peer review. Apparently, they have revised their own terms and policy. That can happen, of course, but I find it strange to argue that a fund aimed at publishing in journals needs to be terminated when Gold is the standard.
You could also argue that this fund pushes more money into the (publishing) system. To that I’d say: let’s do better with the national deals and not focus only on hybrid journals.
In addition, there is the already mentioned National Plan Open Science (NPOS). This plan focuses on three key areas: 1. Promoting open access to scientific publications (open access). 2. Promoting optimal use and reuse of research data. 3. Adapting evaluation and reward systems to bring them into line with the objectives of open science (reward systems).
One of the ambitions is full open access to publications. As stated:
“The ambition of the Netherlands is to achieve full open access in 2020. The principle that publicly funded research results should also be publicly available at no extra cost is paramount. Until the ambition of full open access to publications in the Netherlands and beyond is achieved, access to scientific information will be limited for the majority of society.”
We are in this transition phase. With this NWO-supported ambition in mind, the termination of a transition fund (which is how it should be seen) seems a bit premature to me. It should be said, however, that it remains possible to budget open access publications in project funding at NWO. But it remains to be seen how long that will last, considering their wording: ‘for the time being’.
(National Plan Open Science, p. 21)
In the coming period, I will interview a number of researchers about their work and the extent to which open access plays a role in it. The debate around open access is often held at the policy level, with university boards, libraries and publishers. But the voices of those who actually make use of research papers, books and research data are often not heard. How does a researcher or practitioner see the open access movement enabling free online access to scholarly works? How does it affect their work? What initiatives of interest are being developed in particular fields, and what are their personal experiences with open access publishing? All kinds of questions that will hopefully lead to helpful answers for other researchers engaging with open access.
The first interview is with Adrian Martin. Adrian was born in 1959 in Australia. He has been a film and arts critic for more than 30 years and is currently affiliated with Monash University as an associate professor in Film Culture and Theory. His work has appeared in many journals and newspapers around the world and has been translated into over twenty languages.
The interview starts:
Jeroen: When did you first hear of open access as a new way of distributing research to a wider audience?
Adrian: To appreciate my particular viewpoint on open access issues, you probably need to know where I am ‘coming from’. I am not now, and have rarely been in my life so far, a salaried academic. I have spent most of my life as what I guess is called an ‘independent researcher’. I have sometimes called myself a ‘freelance intellectual’, but I guess the more prosaic description would simply be ‘freelance writer/speaker’. So, not a journalist in the strict sense (I have never worked full-time for any newspaper or magazine), and only sometimes an employed academic within the university system.
Therefore, my entry into these issues is as someone who, at the end of the 1990s, began to get heavily involved in the publication of online magazines, whether as editor, writer, or translator. These were not commercial or industrial publications, they were ‘labour of love’ projects, kin to the world of ‘small print magazines’ in the Australian arts scene (which I had been a part of in the 1980s). No special subscription process was required; it was always, simply, a completely open and accessible website. My entrée to this new, global, online, scene was through Bill Mousoulis, the founder of Senses of Cinema and later I was part of the editorial teams of Rouge, and currently LOLA. And I have contributed to many Internet publications of this kind since the start of the 21st century. The latter two publications do not use academic ‘peer review’ (although everything is carefully checked and edited), and are run on an active ‘curation’ model (i.e., we approach specific people to ask for texts) rather than an ‘open submission’ model.
I say this in order to make clear that my attitude and approach do not come from only, or even mainly, an academic/scholarly perspective. For me, open access is not primarily or solely about making formerly ‘closed’ academic research available to all – although that is certainly one important part of the field. Open access is about – well, open access, in the strongly political sense of making people feel that they are not excluded from reading, seeing, learning or experiencing anything that exists in the world. Long before I encountered the inspiring works of Jacques Rancière, I believe I agreed deeply with his political philosophy: that what we have to fight, at every moment, is the unequal ‘distribution of the sensible’, which means the ways in which a culture tries to enforce what is ‘appropriate’ for the citizens in each sector of society. As a kid who grew up in a working-class suburb of Australia before drifting off on the lines-of-flight offered by cinephilia and cultural writing, I am all too painfully aware of the types of learning and cultural experience that so many people deny themselves, because they have already internalised the sad conviction that it is ‘not for them’, not consistent with their ‘place’ in the world. Smash all such places, I say!
This is why I am temperamentally opposed to any tendency to keep the discussion of open access restricted to a discussion of university scholarship – or, indeed, as sometimes happens, with the effect of strengthening the ‘professional’ borders around this scholarship, and thus shutting non-university people (such as I consider myself today) out of the game. Let me give you a controversial example. I use, and encourage the use of Academia.edu. It is the only ‘repository of scholarly knowledge’ I know of that – despite its unwise name! – anyone can easily join and enjoy (once they are informed of it, and are encouraged to do so). Now, many people complain about the capitalistic nature of this site, and everything they say in this regard may be true. But when I ask them for an alternative that is as good and as extensive in its holdings, I am directed to ‘professional’ university repositories for texts – from which I am necessarily excluded from the outset, since I do not have a university job. This is bad! And reinforces all the worst tendencies in the field.
Likewise, I bristle at the suggestion (it occasionally comes up) that an online publication such as LOLA (among many other examples) is not really ‘scholarly’. Online magazines are regularly downgraded by being described as mere ‘blogs’ (when this is not so!), with no professional standards, etc. etc.. But my drive is, above all, a democratic one. I work mainly outside the university setting because I want access to be truly open. And I want the work to be lively and unalienated. A tall order, but we must forever strive for it! So, in a nutshell, for me the term ‘open access’ simply means ‘material freely available to all online’ – but material that is well written, well prepared, well edited and well presented.
Jeroen: Did you ever publish one of your papers (or other scholarly material) in open access?
Adrian: Well, according to my above context of criteria, yes: a great deal, literally hundreds of essays! I believe I have covered a wide range of venues, from what I am calling Internet magazines (such as Transit and Desistfilm), through to online-only peer-reviewed publications (such as Movie, Necsus and The Cine-Files), through to the ‘paywall’ academic journals (such as Screen, Studies in Documentary Film and Continuum) which seem to exist less and less as solid, physical entities that one could actually obtain and hold a copy of (try buying one if you’re not a library), and more and more as a bunch of detached, virtual items (each article its own little island) on a digital checkout page of a wealthy publishing house’s website! This last point also applies to the chapters I have written for various academic books.
When I taught at Monash (Australia) and Goethe (Germany) universities from 2007 to 2015, I decided to ‘take a detour’ into this world of academic writing – partly because the institution demands or requires it, for the sake of judging promotions and so forth. I do not regret the type of in-depth, historical work, on a range of subjects, that this opportunity allowed me to do. But I am more than happy to be back in the less constrained, less rule-bound world of freelance writing. The university, finally, is all about a far too severe, restricted and vicious ‘distribution of the sensible’ – it tends to perpetuate itself, and close its professional ranks, rather than truly open its borders to what is beyond itself.
One of my best and happiest experiences with open access has been with the small American publisher, punctum books. I did my little book Last Day Every Day with them, and it has had three editions in three different languages there. Their care and dedication to projects is outstanding. The politics of punctum as an enterprise are incredibly noble and radical: people can opt to pay something for their books, or download them for free if they wish. Likewise, authors can take any money that comes to them, or choose to plough it back into the company (that’s what I did, and probably most of their authors do). At the same time, certain professional/academic standards are upheld: punctum has an extraordinary board, manuscripts are sent out for reporting, and so forth. They both ‘play the game’ of academic publishing as far as they have to, and also challenge the system in a remarkable way. I am proud to be involved with them.
Jeroen: You are an Australian scholar, living in Spain, traveling for lectures and conferences and studying and writing about a global topic as film and media studies is. How does free online scholarly content affect your daily work as a scholar?
Adrian: Well, I enjoy an extraordinary amount of access to the work of other critics and scholars, especially through Academia.edu, and through postings of links by individuals on social media. At the same time, the ‘paywalls’ shut me out, because the purchase rates are too high for me as an individual, and I have no university-sanctioned reading/downloading access. As a freelance writer, I have to go where the work is, and where the money (very modest!) is. So that itinerary necessarily cuts across ‘commercial’ and ‘academic’ lines, and also involves me with many brave projects that are largely non-academic, and commercial only on an artisanal scale: literary projects such as Australia’s Cordite, for example.
Jeroen: In your first answer, you already addressed the issues with Academia.edu (and I guess this extends to other commercial products with similar functionalities, like ResearchGate), but you also stress the need for a good place to share papers and research output. In the sciences, the preprint and the postprint are an accepted and efficient part of the scholarly communication process. Even publishers allow it. Several archives (e.g. arXiv and SSRN) saw the light in the mid-90s, and the use of those repositories increases every year. In the humanities, there is no such culture. Do you think this could change at a time when sharing initial ideas is becoming easier? Or is the writing and publishing culture in the humanities intrinsically different from that in the sciences?
Adrian: You offer a very intriguing comparative perspective here, Jeroen. I have no experience of scholarship in the sciences, so what you say is surprising (and good!) news to me. Perhaps, in the humanities, there has been, for too long a time, a certain anxious aura built up around the individual ‘ownership’ of one’s ideas – and thereby most of us have gone along with this perceived need not to share our work so readily or easily in the preprint and postprint ways that you describe. But I do think this can change, and quite radically, if humanities people are encouraged to go in this direction. One can already see the signs of it, when scholars share their drafts of papers more readily (and widely) than before. I think it would be a very productive development.
Jeroen: One of the biggest hurdles to clear in the next 5 to 10 years regarding open access in the humanities is the cost of publishing. In the sciences, the dominant business model is based on APCs (Article Processing Charges). In the humanities this model is a problem; one of the reasons is that research budgets in the humanities and social sciences are much lower. Another reason often given is that journal prices in the sciences are much higher, so there was more urgency there to move to an open access environment; subscription costs for humanities journals are much lower.
The majority of open access journals in the humanities, and also in media studies, have another business model and are often subsidized by institutions or foundations. But subsidies are often temporary. New initiatives like the Open Library of Humanities and Knowledge Unlatched come up with different financial models, all aimed at unburdening individual authors, but these models still need to prove themselves. Nevertheless, things are changing. How do you see a sustainable open access publishing environment for the humanities, and more specifically for film and media studies?
Adrian: Issues of funding – and money, in general – are vexing indeed. Once again, let me make clear where I’m exactly ‘coming from’. With Rouge and LOLA magazines, we have never received, or even sought, any government funding or any kind of arts-industry subsidy; we have never sought or accepted any advertising revenue; and we have never benefitted from any university grants of any kind. We run these magazines on virtually no money (beyond basic operating costs) and of course, as a result, we are unable to pay any contributor (and we are always upfront about that). This is perhaps an extreme, but not uncommon position. It was a decision that, in each case, we took. Why? Because we didn’t want the restrictions, and obligations, that come with the ‘public purse’ – or, indeed, with almost any source of ‘filthy lucre’! In Australia, for example, to accept government funding means you will have to meet a ‘quota’ of ‘local/national content’ – and if you don’t, you won’t get that subsidy again. Senses of Cinema has struggled with that poisoned chalice. With Rouge and LOLA, on the other hand, we enjoy the ‘stateless’ potentiality of online publishing – it is ‘of the world’ and belongs to the whole world (or at least, those in it who can read English!). Sometimes we engaged in (perhaps at our initiative) ‘co-production’ ventures, some of which panned out well (such as a book that Rouge made in collaboration with the Rotterdam Film Festival on Raúl Ruiz in 2004, or the publication last year in LOLA of certain chapters from a Japanese book tribute to Shigehiko Hasumi), and others which did not. But I and my colleagues stick to this generally penniless state of idealism!
I was naively shocked when I realised that academic publishers usually fund their open access projects through payments from writers! And that – as I discovered upon asking a few friends – some universities routinely subsidise these types of publications for their scholars. As a freelancer, once more, I am shut out from this particular system. Therefore, my next ‘academic’ book (Mysteries of Cinema for Amsterdam University Press) – ironically, largely comprised of my essays from non-academic print publications! – will not be Open Access, because I cannot personally afford that, and I have no ‘channel’ of institutional funding that I can access. Once again, that’s just the name of the game. I will be very happy when that book exists, but it will purely be a physical book for purchase only!
I have, therefore, no utopian visions for how to fund open access across the humanities board. Personally, I am currently looking into Patreon as a possible way to sustain arts/criticism-related website projects. It’s a democratic model: people pay to support your ongoing work, to give you time and space to creatively do it. It’s not like Kickstarter, which is geared to a single production, such as a feature film project. Patreon has proved a godsend for artists such as musicians. We shall see if it can also work in an open access publishing context.
Jeroen: You are one of the founding fathers and practitioners of the so-called audiovisual essay, a rising digital video format in academic publishing. Instead of writing a paper in words, a compilation of images offers a new textual structure. Another digital format is the enriched publication: articles or books with data included. One of the issues, besides arranging new forms of reviewing, is copyright and reuse. The audiovisual essay format obviously benefits from images with an open license, like the Creative Commons licenses, which make it possible to reuse and remix these images. Archives are being digitized rapidly, but only a small portion is currently available in the public domain, and scholars are often not allowed to make use of film quotes or stills in their works. How do you see the near future of using digitized media files for academic purposes in relation to copyright laws?
Adrian: We are in an extraordinarily ‘grey area’ here – appropriately, I suppose, since things like LOLA are (I’m told) classified as ‘grey Open Access’! And the legal situation for audiovisual works can vary greatly from nation to nation. We are in a historical moment when a lot of experimentation is going ‘under the radar’ of legal restriction, or (in the eyes of the big corporations) is considered simply too minor to consider taking any action against. Bear in mind that most critical/scholarly work in audiovisual essays (of the kind that I do in collaboration with my partner, Cristina Álvarez López) is not about making large sums of money; it is still a marginal, ‘labour of love’ activity, just as small, cultural magazines were in the 1980s.
This general fuzziness of the present moment is all to the good, in my opinion; we can all enjoy a certain freedom within it (with, occasionally, a ‘bite’ from above on particular questions of copyright: music use, for instance). I speak of no specific works or practitioners here, but much work in the audiovisual essay field happens both inside and outside of Creative Commons licenses. I don’t think anyone should be restricted to using just that. The front on which we all have to battle is ‘fair use’ or ‘fair dealing’ (hence the disclaimer ‘for study purposes only’ that Cristina & I place at the end of all our videos): the right to quote (and hence manipulate) audiovisual quotations for scholarly and artistic purposes, ranging all the way from lecture demonstration and re-montage analysis to parody and creative détournement/appropriation. The fully scholarly publication [in]Transition to which I and many others have contributed – no one will ever call that a blog! – takes full advantage, via its publishing ‘home base’ of USA, of everything that the fair use provisions in that country can allow. And I think you can see, if you peruse that site, how far the possibilities can go.
I very much liked the recent essay by Noah Berlatsky, “Fair Use Too Often Goes Unused” in The Chronicle of Higher Education, which argued that we – meaning not only writers and artists, but perhaps even more significantly editors and publishers – need to be questioning and pushing at the limits of the definition, practice and enforcement of fair use regulations. Too often (and I have experienced this myself) editors and publishers assume, at the outset, that a great deal is simply impossible, unthinkable: even the use of screenshots from movies! There is so much unnecessary fear and trepidation over such matters. Sure, no one wants to take a stupid risk and be sued as a result. But, to cite Berlatsky’s conclusion:
“Books and journal articles about visual culture need to be able to engage with, analyse, and share visual culture. Fair use makes that possible — but only if authors and presses are willing to assert their rights. Presses may take on a small risk in asserting fair use. But in return they give readers an invaluable opportunity to see [and I would add: hear!] what scholars are talking about.”
Jeroen: I want to thank you for this interview.
© Adrian Martin, June 2017
*During the NECS 2017 conference in Paris the session ‘The Changing Landscape of Open Access Publications in Film and Media Studies: Distributing Research and Exchanging Data’ will be held on Saturday July 1st. Download the final program here.
** 15 June 2018: some minor updates in lay-out and added a few links to mentioned projects.
U.S. intelligence officers discuss Chinese espionage in dramatically different terms than they use in talking about the Russian interference in the U.S. presidential election of 2016. Admiral Michael Rogers, head of NSA and U.S. Cyber Command, described the Russian efforts as “a conscious effort by a nation state to attempt to achieve a specific effect” (Boccagno 2016). The former director of NSA and subsequently CIA, General Michael Hayden, argued, in contrast, that the massive Chinese breach of records at the U.S. Office of Personnel Management was “honorable espionage work” of a “legitimate intelligence target” (American Interest 2015; Gilman et al. 2017). Characterizing the Chinese infiltration as illegal hacking or warfare would challenge the legitimacy of state-sanctioned hacking for acquiring information and would upset the norms permitting every state to hack relentlessly into each other’s information systems.
The hairsplitting around state-sanctioned hacking speaks to a divide between the doctrinal understanding of intelligence professionals and the intuitions of non-professionals. Within intelligence and defense circles of the United States and its close allies, peacetime hacking into computers with the primary purpose of stealing information is understood to be radically different from using hacked computers and the information from them to cause what are banally called “effects”—from breaking hard drives or centrifuges, to contaminating the news cycles of other states, to playing havoc with electric grids. One computer or a thousand, the size of a hack doesn’t matter: scale doesn’t transform espionage into warfare. Intent is key. The Chinese effort to steal information: good old espionage, updated for the information age. The Russian manipulation of the election: information or cyber warfare.
Discussing the OPM hack, Gen. Hayden candidly acknowledged,
If I as director of CIA or NSA would have had the opportunity to grab the equivalent [employee records] in the Chinese system, I would not have thought twice… I would not have asked permission. I would have launched the Starfleet, and we would have brought those suckers home at the speed of light.
Under Hayden and his successors, NSA has certainly brought suckers home from computers worldwide. Honorable computer espionage has become multilateral, mundane, and pursued at vast scale.
In February 1996 John Perry Barlow declared to the “Governments of the Industrial World,” that they “have no sovereignty where we gather”—in cyberspace (Barlow 1996). Whatever their naivety in retrospect, such claims in the 1990s from right and left, from civil libertarians as well as defense hawks, justified governments taking preemptive measures to maintain their sovereignty. Warranted or not, the fear that the Internet would weaken the state fueled its dramatic, mostly secret, expansion at the beginning of the current century. By understanding the ways state-sponsored hacking exploded from the late 1990s onward, we see more clearly the contingent interplay of legal authorities and technical capacities that created the enhanced powers of the nation-state.
How did we get a mutual acceptance of state-sanctioned hacking? In a legal briefing for new staff, NSA tells a straightforward story of the march of technology. The movement from telephonic and other communication to the mass “exploitation” of computers was “a natural transition of the foreign collection mission of SIGINT” (signals intelligence). As communications moved from telex to computers and switches, NSA pursued those same communications (NSA OGC n.d.). Defenders of NSA and its partner agencies regularly make similar arguments: anyone unwilling to accept the necessity of government hacking for the purposes of foreign intelligence is seen as having a dangerous and unrealistic unawareness of the threats nations face today. For many in the intelligence world today, hacking into computers and network infrastructures worldwide is, quite simply, an extension of the long-standing mission of “signals intelligence”—the collection and analysis of communications by someone other than the intended recipient.
Contrary to the seductive simplicity of the NSA slide, little was natural about the legalities around computer hacking in the 1990s. The legitimization of mass hacking into computers to collect intelligence wasn’t technologically or doctrinally pre-given, and hacking into computers didn’t—and doesn’t—easily equate to earlier forms of espionage. In the late 1990s and 2000s, information warfare capacities were being developed, and authority distributed, before military doctrine or legal analysis could solidify. Glimpsed even through the fog of classification, documents from the U.S. Department of Defense and intelligence agencies teem with discomfort, indecision, and internecine battles that testify to the uncertainty within the military and intelligence communities about the legal, ethical, and doctrinal use of these tools. More “kinetic” elements of the armed services focused on information warfare within traditional conceptions of military activity: the destruction and manipulation of the enemy command and control systems in active battle. Self-appointed modernizers demanded a far more encompassing definition that suggested the distinctiveness of information warfare and, in many cases, the radical disruption of traditional kinetic warfare.
The first known official Department of Defense definition of “Information Warfare,” promulgated in an only recently declassified 1992 document, comprised:
The competition of opposing information systems to include the exploitation, corruption, or destruction of an adversary’s information system through such means as signals intelligence and command and control countermeasures while protecting the integrity of one’s own information systems from such attacks (DODD TS 3600.1 1992:1).
Under this account, warfare included “exploitation”: the acquiring of information from an adversary’s computers, whether practiced on or by the United States (ibid.:4). A slightly later figure (Figure 2) illustrates this inclusion of espionage in information warfare.
According to an internal NSA magazine, information warfare was “one of the new buzzwords in the hallways” of the Agency by 1994 (Redacted 1994:3). Over the next decade, the military services competed with NSA and among themselves over the definition and partitioning of information warfare activities. One critic of letting NSA control information warfare worried about “the Intelligence fox being put in charge of the Information Warfare henhouse” (Rothrock 1997:225).
Information warfare techniques were too valuable to be used only in kinetic war, a point Soviet strategists had long made. By the mid-1990s, the U.S. Department of Defense had embraced a broader doctrinal category, “Information Operations” (DODD S-3600 1996). Such operations comprised many things, including “computer network attack” (CNA) and “computer network defense” (CND) as well as older chestnuts like “psychological operations.” Central to the rationale for the renaming was that information warfare-like activities did not belong solely within the purview of military agencies and they did not occur only during times of formal or even informal war. One influential strategist, Dan Kuehl, explained, “associating the word ‘war’ with the gathering and dissemination of information has been a stumbling block in gaining understanding and acceptance of the concepts surrounding information warfare” (Kuehl 1997). Information warfare had to encompass collection of intelligence, deception, and propaganda, as well as more warlike activities such as deletion of data or destruction of hardware. Exploitation had to become peaceful.
Around 1996, a new doctrinal category, “Computer Network Exploitation” (CNE), emerged within the military and intelligence communities to capture the hacking of computer systems to acquire information from them. The definition encompassed the acquisition of information but went further. “Computer network exploitation” encompassed collection and enabling for future use. The military and intelligence communities produced a series of tortured definitions. A 2001 draft document offered two versions, one succinct,
Intelligence collection and enabling operations to gather data from target or adversary automated information systems (AIS) or networks.
and the other clearer about this “enabling”:
Intelligence collection and enabling operations to gather data from target or adversary automated information systems or networks. CNE is composed of two types of activities: (1) enabling activities designed to obtain or facilitate access to the target computer system where the purpose includes foreign intelligence collection; and, (2) collection activities designed to acquire foreign intelligence information from the target computer system (Wolfowitz 2001:1-1).
Enabling operations were carefully made distinct from affecting a system, which takes on a war-like demeanor. Information operations involved “actions taken to affect adversary information and information systems, while defending one’s own information and information systems” (CJCSI 3210.01A 1998). CNE was related to but was not in fact an information “operation.” A crucial document from the CIA captured the careful, nearly casuistical, excision of CNE from Information Operations: “CNE is an intelligence collection activity and while not viewed as an integral pillar of DoD IO doctrine, it is recognized as an IO-related activity that requires deconfliction with IO” (DCID 7/3 2003: 3). With this new category, “enabling” was hived off from offensive warfare, to clarify that exploiting a machine—hacking in and stealing data—was not an attack. It was espionage, whose necessity and ubiquity everyone ought simply to accept.
The new category of CNE subdued the protean activity of hacking and put it into an older legal box—that of espionage. The process of hacking into computers for the purpose of taking information and enabling future activities during peacetime was thus grounded in pre-existing legal foundations for signals intelligence. In contrast to the flurry of new legal authorities that emerged around computer network attack, computer network exploitation was largely made to rest on the hoary authorities of older forms of signals intelligence.
A preliminary DoD document captured this domestication of hacking in 1999:
The treatment of espionage under international law may help us make an educated guess as to how the international community will react to information operations activities. . . . international reaction is likely to depend on the practical consequences of the activity. If lives are lost and property is destroyed as a direct consequence, the activity may very well be treated as a use of force. If the activity results only in a breach of the perceived reliability of an information system, it seems unlikely that the world community will be much exercised. In short, information operations activities are likely to be regarded much as is espionage—not a major issue unless significant practical consequences can be demonstrated (Johnson 1999:40; emphasis added).
In justifying computer espionage, military and intelligence thinkers rested on a Westphalian order of ordinary state relations with long standing norms. At the very moment that the novelty of state-sanctioned hacking for information was denied, however, a range of strategists and legal thinkers expounded how the novelty of information warfare would necessitate a radical alteration of the global order.
Mirroring Internet visionaries of left and right alike, military and defense wonks in the 1990s detailed how the Net would undermine national sovereignty. An article in RAND’s journal in 1995 explained,
Information war has no front line. Potential battlefields are anywhere networked systems allow access–oil and gas pipelines, for example, electric power grids, telephone switching networks. In sum, the U.S. homeland may no longer provide a sanctuary from outside attack (Rand Research Review 1995; emphasis added).
In this line of thinking, a wide array of forms of computer intrusion became intimately linked to other forms of asymmetric dangers to the homeland, such as biological and chemical warfare.
The porousness of the state in the global information age accordingly demanded an expansion—a hypertrophy—of state capacities and legal authorities at home and abroad to compensate. The worldwide network of surveillance revealed in the Snowden documents is a key product of this hypertrophy. In the U.S. intelligence community, the challenges of new technologies demanded rethinking Fourth Amendment prohibitions against unreasonable search and seizure. In a document intended to gain the support of the incoming presidential administration, NSA explained in 2000,
Make no mistake, NSA can and will perform its missions consistent with the Fourth Amendment and all applicable laws. But senior leadership must understand that today’s and tomorrow’s mission will demand a powerful, permanent presence on a global telecommunications network that will host the ‘protected’ communications of Americans as well as the targeted communications of adversaries (NSA 2000:32).
The briefing for the future president and his advisors delivered the hard truths of the new millennium. In the mid- to late 1990s, technically minded circles in the Departments of Defense and Justice, in corners of the Intelligence Community, and in various scattered think tanks around Washington and Santa Monica began sounding the call for a novel form of homeland security, where military and law enforcement, the government and private industry, and domestic and foreign surveillance would necessarily mix in ways long seen as illicit if not illegal. Constitutional interpretation, jurisdictional divisions, and the organization of bureaucracies alike would need to undergo dramatic—and painful—change. In a remarkable draft “Road Map for National Security” from 2000, a centrist bipartisan group argued, “in the new era, sharp distinctions between ‘foreign’ and ‘domestic’ no longer apply. We do not equate national security with ‘defense’” (U.S. Commission on National Security 2001). 9/11 proved the catalyst, but not the cause, of the emergence of the homeland security state of the new millennium. The George W. Bush administration drew upon this dense congeries of ideas, plans, vocabulary, constitutional reflection, and an overlapping network of intellectuals, lawyers, ex-spies, and soldiers to develop the new homeland security state. This intellectual framework justified the dramatic leap in the foreign depth and domestic breadth of the acquisition, collection, and analysis of communications of NSA and its Five Eyes partners, including computer network exploitation.
The Golden Age of SIGINT
In its 2000 prospectus for the incoming presidential administration, the NSA included an innocent-sounding clause: “in close collaboration with cryptologic and Intelligence Community partners, establish tailored access to specialized communications when needed” (National Security Agency 2000: 4). Tailored access meant government hacking—CNE. In the early 1990s, NSA seemed to many a Cold War relic, inadequate to the times, despite its pioneering role in computer security and penetration testing from the late 1960s onward. By the late 2010s, NSA was at the center of the “golden age of SIGINT,” focused ever more on computers, their contents, and the digital infrastructure (NSA 2012: 2).
From the mid-1990s, NSA and its allies gained extraordinary worldwide capacities, both in the “passive” collection of communications flowing through cables or the air and the “active” collection through hacking into information systems, whether a president’s network, Greek telecom networks during the Athens Olympics, or systems in tactical situations throughout Iraq and Afghanistan (see Redacted-Texas TAO 2010; SID Today 2004).
Prioritizing offensive hacking over defense became very easy in this context. An anonymous NSA author explained the danger in 1997:
The characteristics that make cyber-based operations so appealing to us from an offensive perspective (i.e., low cost of entry, few tangible observables, a diverse and expanding target set, increasing amounts of ‘freely available’ information to support target development, and a flexible base of deployment where being ‘in range’ with large fixed field sites isn’t important) present a particularly difficult problem for the defense… before you get too excited about this ‘target-rich environment,’ remember, General Custer was in a target-rich environment too! (Redacted 1997: 9; emphasis added).
The Air Force and NSA pioneered computer security from the late 1960s: their experts warned that the wide adoption of information technology in the United States would make it the premier target-rich environment (Hunt 2012). NSA’s capacities developed as China, Russia, and other nations dramatically expanded their own computer espionage efforts (see figure 4 for the case of China c. 2010).
By 2008, and probably much earlier, the Agency and its close allies probed computers worldwide, tracked their vulnerabilities, and engineered viruses and worms both profoundly sophisticated and highly targeted. Or as a key NSA hacking division bluntly put it: “Your data is our data, your equipment is our equipment—anytime, anyplace, by any legal means” (SID Today 2006: 2).
While the internal division for hacking was named “Tailored Access Operations,” its work quickly moved beyond the highly tailored—bespoke—hacking of a small number of high priority systems. In 2004, the Agency built new facilities to enable it to expand from “an average of 100-150 active implants to simultaneously managing thousands of implanted targets” (SID Today 2004a:2). According to Matthew Aid, NSA had built tools (and adopted easily available open source tools) for scanning billions of digital devices for vulnerabilities; hundreds of operators were covertly “tapping into thousands of foreign computer systems” worldwide (Aid 2013). By 2008, the Agency’s distributed XKeyscore database and search system offered its analysts the option to “Show me all the exploitable machines in country X,” meaning that the U.S. government systematically evaluated all the available machines in some nations for potential exploitation and catalogued their vulnerabilities. Cataloging at scale is matched by exploiting machines at scale (National Security Agency 2008). One program, Turbine, sought to “allow the current implant network to scale to large size (millions of implants)” (Gallagher and Greenwald 2014). The British, Canadian, and Australian partner intelligence agencies play central roles in this globe-spanning work.
The disanalogy with espionage
The legal status of government hacking to exfiltrate information rests on an analogy with traditional espionage. Yet the scale and techniques of state hacking strain the analogy. Two lawyers associated with U.S. Cyber Command, Col. Gary Brown and Lt. Col. Andrew Metcalf, offer two examples: “First, espionage used to be a lot more difficult. Cold Warriors did not anticipate the wholesale plunder of our industrial secrets. Second, the techniques of cyber espionage and cyber attack are often identical, and cyber espionage is usually a necessary prerequisite for cyber attack” (Brown and Metcalf 1998:117).
The colonels are right: U.S. legal work on intelligence in the digital age has tended to deny that scale is legally significant. The international effort to exempt sundry forms of metadata such as calling records from legal protection stems from the intelligence value of studying metadata at scale. The collection of the metadata of one person, on this view, is not legally different from the collection of the metadata of many people, as the U.S. Foreign Intelligence Surveillance Court has explained:
[so] long as no individual has a reasonable expectation of privacy in meta data [sic], the large number of persons whose communications will be subjected to the . . . surveillance is irrelevant to the issue of whether a Fourth Amendment search or seizure will occur.
Yet metadata is desired by intelligence agencies precisely because it is revealing at scale. Since their inception, NSA and its Commonwealth analogues have focused as much on working with vast databases of “metadata” as on breaking cyphered texts. NSA’s historians celebrate a cryptographic revolution afforded through “traffic analysis” (Filby 1993). From reconstructing the Soviet “order of battle” in the Cold War to seeking potential terrorists now, the U.S. Government has long recognized the transformative power of machine analysis of large volumes of metadata while simultaneously denying the legal salience of that transformative power.
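The pull of metadata at scale can be shown with a toy sketch (all names and records here are invented for illustration, not drawn from any source): even with no message content at all, a pile of who-contacted-whom records exposes the structure of a network, which is the essence of traffic analysis.

```python
# Illustrative sketch only: "traffic analysis" on bare metadata.
# Records contain no content -- just who contacted whom -- yet the
# shape of the network falls out immediately at scale.
from collections import Counter

# Each record is (caller, callee). All entries are invented.
call_records = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"), ("carol", "erin"),
    ("alice", "dave"), ("carol", "frank"),
]

# Count how many calls each party participates in.
contacts = Counter()
for a, b in call_records:
    contacts[a] += 1
    contacts[b] += 1

# The most-connected party is a plausible "hub" of the network,
# recoverable from metadata alone.
hub, degree = contacts.most_common(1)[0]
print(hub, degree)  # → carol 5
```

Scaled from eight toy records to billions of real ones, the same counting exercise yields social graphs, hierarchies, and patterns of life, which is why collection of metadata "at scale" differs in kind, not just degree, from tapping one line.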
As in the case of metadata, U.S. legal work on hacking into computers does not consider scale legally significant. Espionage at scale used to be tough going: sifting through physical mail or garbage, setting physical wiretaps, or planting devices to capture microwave transmissions scales only with great expense, difficulty, and potential for discovery (Donovan 2017). Scale provided a salutary limitation on surveillance, domestic or foreign. As with satellite spying, computer network exploitation typically lacks this corporeality, barring cases of getting access to air-gapped computers, as with the Stuxnet virus. With the relative ease of hacking, the U.S. and its allies can know the exploitable machines in a country X, whether those machines belong to generals, presidents, teachers, professors, jihadis, or eight-year-olds.
Hacking into computers unquestionably alters them, so the analogy with physical espionage is imperfect at best. A highly redacted Defense Department “Information Operations Roadmap” of 2003 underscores the ambiguity of “exploitation versus attack.” The document calls for clarity about the definition of an attack, both against the U.S. (slightly redacted) and by the U.S. (almost entirely redacted). “A legal review should determine what level of data or operating system manipulation constitutes an attack” (Department of Defense 2003:52). Nearly every definition—especially every classified definition—of computer network exploitation includes “enabling” as well as exploitation of computers. The military lawyers Brown and Metcalf argue, “Cyber espionage, far from being simply the copying of information from a system, ordinarily requires some form of cyber maneuvering that makes it possible to exfiltrate information. That maneuvering, or ‘enabling’ as it is sometimes called, requires the same techniques as an operation that is intended solely to disrupt” (Brown and Metcalf 1998:117). “Enabling” is the key moment where the analogy between traditional espionage and hacking into computers breaks down. The secret definition, as of a few years ago, explains that enabling activities are “designed to obtain or facilitate access to the target computer system for possible later” computer network attack. The enabling function of an implant placed on a computer, router, or printer is the preparation of the space of future battle: it is as if every time a spy entered a locked room to plant a bug, that bug contained a nearly unlimited capacity to materialize a bomb or other device should distant masters so desire. An implant essentially grants a third party control over a general-purpose machine: it is not limited to the exfiltration of data.
Installing an implant within a computer is like installing a cloaked 3-D printer into physical space that can produce a photocopier, a weapon, and a self-destructive device at the whim of its master. One NSA document put it clearly: “Computer network attack uses similar tools and techniques as computer network exploitation. If you can exploit it, you can attack it” (SID Today 2004b).
In a leaked 2012 Presidential Policy Directive, the Obama administration clarified the lines between espionage and information warfare explicitly to allow that espionage may produce results akin to an information attack. Amid a broad array of new euphemisms, CNE was transformed into “cyber collection,” which “includes those activities essential and inherent to enabling cyber collection, such as inhibiting detection or attribution, even if they create cyber effects” (Presidential Policy Directive (PPD)-20: 2-3). The bland term ‘cyber effects’ is defined as “the manipulation, disruption, denial, degradation, or destruction of computers, information or communications systems, networks, physical or virtual infrastructure controlled by computers or information systems, or information resident thereon.” Espionage, then, often will be attack in all but name. The creation of effects akin to attack need not require the international legal considerations of war, only the far weaker legal regime around espionage. With each clarification, the gap between actual government hacking for the purpose of obtaining information and traditional espionage widens; and the utility of espionage as a category for thinking through the tough policy and legal choices around hacking diminishes.
By the end of the first decade of the 2000s, sardonic geek humor within NSA reveled in the ironic symbols of government overreach. A classified NSA presentation trolled civil libertarians: “Who knew that in 1984” an iPhone “would be big brother” and “the Zombies would be paying customers” (Spiegel Online 2013). Apple’s famous 1984 commercial dramatized how better technology would topple the corporatized social order, presaging a million dreams of the Internet disrupting the wonted order. Far from undermining the ability of traditional states to know and act, the global network has created one of the greatest intensifications of the power of sovereign states since 1648. Whether espoused by cyber-libertarians or RAND strategists, the threat from the Net enabled new authorities and undermined civil liberties. The potential weakening of the state justified its hypertrophy. The centralization of online activity into a small number of dominant platforms (Weibo, Google, Facebook), with their billions of commercial transactions, has enabled a scope of surveillance unexpected by the most optimistic intelligence mavens in the 1990s. The humor is right on.
Signals intelligence is a hard habit to break—civil libertarian presidents like Jimmy Carter and Barack Obama quickly found themselves taken with being able to peek at the intimate communications of friends and foes alike, to know their negotiating positions in advance, to be three steps ahead in the game of 14-dimensional chess. State hacking at scale seems to violate the sovereignty of states at the same time as it serves as a potent form of sovereign activity today. Neither the Chinese hacking into OPM databases nor the alleged Russian intervention in the recent US and French elections accords well with many basic intuitions about licit activities among states. If it would be naïve to imagine the evanescence of state-sanctioned hacking, it is doctrinally and legally disingenuous to treat that hacking as entirely licit based on ever less applicable analogies to older forms of espionage.
As the theorists in the U.S. military and intelligence worlds in the 1990s called for new concepts and authorities appropriate to the information age, they nevertheless tamed hacking for information by treating it as continuous with traditional espionage. The near ubiquity of state-sanctioned hacking should not sanction an ill-fitting legal and doctrinal frame that ensures its monotonic increase. Based on an analogy to spying that ignores scale, “computer network exploitation” and its successor concepts preclude the rigorous analysis necessary for the hard choices national security professionals rightly insist we must collectively make. We need a ctrl+alt+del. Let’s hope the implant isn’t persistent.
Aid, Matthew M. 2013. “Inside the NSA’s Ultra-Secret China Hacking Group,” Foreign Policy. June 10. Available at: link.
American Interest. 2015. “Former CIA Head: OPM Hack was ‘Honorable Espionage Work.’” The American Interest. June 16. Available at: link.
Andrews, Duane. 1996. “Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D),” December.
Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace.” Electronic Frontier Foundation, February 8. Available at: link.
Berkowitz, Bruce D. 2003. The New Face of War: How War Will Be Fought in the 21st Century. New York: Free Press.
Boccagno, Julia. 2016. “NSA Chief speaks candidly of Russia and U.S. Election.” CBS News. November 17. Available at: link.
Brown, Gary D. and Andrew O. Metcalf. 1998. “Easier Said Than Done: Legal Reviews of Cyber Weapons,” Journal of National Security Law and Policy 7.
CJCSI 3210.01A. 1998. “Joint Information Operations Policy,” Joint Chiefs, November 6. Available at: link.
DCID 7/3. 2003. “Information Operations and Intelligence Community Related Activities.” Central Intelligence Agency, June 5. Available at: link.
Department of Defense. 2003. “Information Operations Roadmap,” October 30. Available at: link.
DODD TS 3600.1. 1992. “Information Warfare (U),” December 21. Available at: link.
DODD S-3600.1, 1996. “Information Operations (IO) (U),” December 9. Available at: link.
Donovan, Joan. 2017. “Refuse and Resist!” Limn 8, February. Available at: link.
Falk, Richard A. 1962. “Space Espionage and World Order: A Consideration of the Samos-Midas Program,” in Essays on Espionage and International Law. Columbus: Ohio State University Press.
Fields, Craig, and James McCarthy, eds. 1994. “Report of the Defense Science Board Summer Study Task Force on Information Architecture for the Battlefield,” October. Available at: link.
Filby, Vera R. 1993. United States Cryptologic History, Sources in Cryptologic History, Volume 4, A Collection of Writings on Traffic Analysis. Fort Meade, MD: NSA Center for Cryptologic History.
Gallagher, Ryan and Glenn Greenwald. 2014. “How the NSA Plans to Infect ‘Millions’ of Computers with Malware,” The Intercept. March 12. Available at: link.
Gilman, Nils, Jesse Goldhammer, and Steven Weber. 2017. “Can You Secure an Iron Cage?” Limn 8, February. Available at: link.
Hunt, Edward. 2012. “U.S. Government Computer Penetration Programs and the Implications for Cyberwar,” IEEE Annals of the History of Computing. 34(3):4–21.
Johnson, Philip A. 1999. “An Assessment of International Legal Issues in Information Operations.” Available at: link.
Kaplan, Fred M. 2016. Dark Territory: The Secret History of Cyber War. New York: Simon & Schuster.
Kuehl, Dan. 1997. “Defining Information Power,” Strategic Forum: Institute for National Strategic Studies, National Defense University, no. 115 (June). Available at: link.
Lin, Herbert S. 2010. “Offensive Cyber Operations and the Use of Force,” Journal of National Security Law & Policy 4.
National Security Agency/Central Security Service. 2000. “Transition 2001,” December. Available at: link.
National Security Agency. 2008. “XKEYSCORE.” February 25. Available at: link.
National Security Agency. 2012. “(U) SIGINT Strategy, 2012-2016,” February 23. Available at: link.
NSA Office of General Counsel. n.d. “(U/FOUO) CNO Legal Authorities,” slide 8. Available at: link.
Owens, William, Kenneth W. Dam, and Herbert S. Lin. 2009. Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities. Washington, D.C.: National Academies Press.
Presidential Policy Directive (PPD)-20: “U.S. Cyber Operations Policy,” October 16, 2012. Available at: link.
Rand Research Review. 1995. “Information Warfare: A Two-Edged Sword.” Rand Research Review: Information Warfare and Cyberspace Security. Ed. A. Schoben. Santa Monica: Rand. Available at: link.
Rattray, Gregory J. 2001. Strategic Warfare in Cyberspace. Cambridge, MA: MIT Press.
Redacted. 1994. “Information Warfare: A New Business Line for NSA,” Cryptolog. July.
Redacted. 1997. “IO, IO, It’s Off to Work We Go . . . (U),” Cryptolog. Spring.
Redacted-NTOC, V225. 2010. “BYZANTINE HADES: An Evolution of Collection,” June. Slides available at: link.
Redacted-Texas TAO/FTS327. 2010. “Computer-Network Exploitation Successes South of the Border,” November 15. Available at: link.
Rid, Thomas. 2016. Rise of the Machines: A Cybernetic History. New York: W. W. Norton & Company.
Rothrock, John. 1997. “Information Warfare: Time for Some Constructive Criticism,” in Athena’s Camp: Preparing for Conflict in the Information Age, ed. John Arquilla and David Ronfeldt. Santa Monica: Rand.
Schmitt, Michael N., ed. 2017. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations: Prepared by the International Groups of Experts at the Invitation of the NATO Cooperative Cyber Defence Centre of Excellence, 2nd ed. Cambridge: Cambridge University Press. DOI:10.1017/9781316822524.
SID Today. 2004. “Another Successful Olympics Story,” October 6. Available at: link.
SID Today. 2004a. “Expanding Endpoint Operations.” September 17. Available at: link.
SID Today. 2004b. “New Staff Supports Network Attack.” October 21. Available at: link.
SID Today. 2006. “The ROC: NSA’s Epicenter for Computer Network Operations,” September 6. Available at: link.
Spiegel Online. 2013. “Spying on Smartphones,” September 9. Available at: link.
United States Commission on National Security/21st Century. 2001. Road Map for National Security: Imperative for Change. January 31. Final Draft. Available at: link.
Wolfowitz, Paul. 2001. “Department of Defense Directive 3600.1 Draft,” October.
 For the current state of international consensus on cyber espionage among international lawyers, see Schmitt 2017, rule 32.
 See Berkowitz 2003:59-65; Rattray 2001; Rid 2016:294-339; and Kaplan 2016.
 Drawn from the signals intelligence idiolect, “exploitation” means, roughly, making some qualities of a communication available for acquisition. With computers, this typically means discovering bugs in systems, or using pilfered credentials, and then building robust ways to gain control of the system or at least to exfiltrate information from it.
 Computer Network Exploitation (CNE) was developed alongside two new doctrinal categories that emerged in 1996: the more aggressive “Computer Network Attack” (CNA), which uses that access to destroy information or systems, and “Computer Network Defense” (CND). For exploitation versus attack, see Owens et al. 2009; Lin 2010:63.
 Especially NSCID-6 and Executive Order 12,333. The development of satellite reconnaissance had earlier challenged mid-twentieth-century conceptions of espionage. For a vivid sense of the difficulty of resolving these challenges, see Falk 1962:45-82.
 Quotation from secret decision with redacted name and date, p. 63, quoted in Amended Memorandum Opinion, No. BR 13-109 (Foreign Intelligence Surveillance Court August 29, 2013).