Universities’ Most Excellent Adventures

Yours truly, reposted from the Ronin Institute blog

Academics in traditional university environments tend to be keenly aware of where their university ranks, whether they like to admit it or not. Most familiar are college-level rankings like those from US News & World Report, which weigh the undergraduate experience heavily. In the research world, however, the notion of “excellence” has become the coin of the realm, as evidenced by a proliferation of “excellence frameworks” such as the Research Excellence Framework (UK), the German Universities Excellence Initiative, Excellence in Research for Australia and the Performance-Based Research Fund (New Zealand). Given that many resources, from capital funds to grants and permanent positions, are doled out in accordance with rankings, where one’s institution stands goes beyond mere bragging rights. Most academics understand the arbitrary nature of such rankings, and despite regular kvetching that they are either “unfair” (usually from those at an institution “lower” in the rankings) or that they have “finally” recognized the true worth of their institution (usually from those rising in the rankings), the existence of the ranking system itself is normally taken as given. After all, how are we to sort the worthy from the unworthy?

Samuel Moore, Cameron Neylon, Martin Paul Eve, Daniel Paul O’Donnell and Damian Pattinson have published an (ahem) excellent research paper, “Excellence R Us”: university research and the fetishisation of excellence, that comprehensively examines both the notion and the practices of “excellence” in research. Excellence, as most of the research frameworks define it, essentially boils down to some combination of ranking institutions by their scholars’ ability to publish in established prestige journals, their ability to win external grants, and other easily measured metrics of scholarly output.

Their conclusion, in a nutshell: “excellence” is totally bogus:

…a focus on “excellence” impedes rather than promotes scientific and scholarly activity: it at the same time discourages both the intellectual risk-taking required to make the most significant advances in paradigm-shifting research and the careful “Normal Science” (Kuhn [1962] 2012) that allows us to consolidate our knowledge in the wake of such advances. It encourages researchers to engage in counterproductive conscious and unconscious gamesmanship. And it impoverishes science and scholarship by encouraging concentration rather than distribution of effort.

In other words, in the context of scientific scholarship, focusing on excellence prevents the two things that we say we want from science: careful, reproducible science and the big breakthroughs. The article covers ground familiar to those who have been following the state of academia, including discussions of the lack of reproducibility in science, the pernicious use of journal prestige to evaluate academics, and the general environment of hypercompetition in research. Many, if not most, academics are aware of these issues, which have been covered extensively in the trade press in recent years, but continue to view them through the lens of their effect on traditional tenure-track (or equivalent) faculty with established research programs. So it is refreshing that the article tackles how the rhetoric of “excellence” can restrict the range of types and styles of scholarship, issues that are close to the heart of the Ronin Institute:

There is, however, another effect of the drive for “excellence”: a restriction in the range of scholars, of the research and scholarship performed by such scholars, and the impact such research and scholarship has on the larger population. Although “excellence” is commonly presented as the most fair or efficient way to distribute scarce resources (Sewitz, 2014), it in fact can have an impoverishing effect on the very practices that it seeks to encourage. A funding programme that looks to improve a nation’s research capacity by differentially rewarding “excellence” can have the paradoxical effect of reducing this capacity by underfunding the very forms of “normal” work that make science function (Kuhn [1962] 2012) or distract attention from national priorities and well-conducted research towards a focus on performance measures of North America and Europe (Vessuri et al., 2014)

The article continues by pointing out that “excellence” is often used as a proxy for academic work that fits certain “standard” modes, which can result in a more bland and conformist world of scholarship:

Given the strong evidence that there is systemic bias within the institutions of research against women, under-represented ethnic groups, non-traditional centres of scholarship, and other disadvantaged groups (for a forthright admission of this bias with regard to non-traditional centres of scholarship, see Goodrich, 1945), it follows that an emphasis on the performance of “excellence”—or, in other words, being able to convince colleagues that one is even more deserving of reward than others in the same field—will create even stronger pressure to conform to unexamined biases and norms within the disciplinary culture: challenging expectations as to what it means to be a scientist is a very difficult way of demonstrating that you are the “best” at science; it is much easier if your appearance, work patterns, and research goals conform to those of which your adjudicators have previous experience. In a culture of “excellence” the quality of work from those who do not work in the expected “normative” fashion run a serious risk of being under-estimated and unrecognised.

As the authors point out, it is common in such pieces to identify:

institutional administrators captured by neo-liberal ideologies, funders over-focussed on delivering measurable returns rather than positive change, governments obsessed with economic growth at the cost of social or community value

as the primary cultural drivers of metric-driven “excellence”. And this is definitely a huge part of the issue (see the Ronin blog posts “Graeber on the Transformation of Universities” and “Henry Heller on IP-Based Capitalism at Universities”), but it’s not the only driver. Attributing these issues purely to external forces lets the academy somewhat off the hook, since:

the roots of the problem in fact lie in the internal narratives of the academy and the nature of “excellence” and “quality” as supposedly shared concepts that researchers have developed into shields of their autonomy. The solution to such problems lies not in arguing for more resources for distribution via existing channels as this will simply lead to further concentration and hypercompetition. Instead, we have argued, these internal narratives of the academy must be reformulated.

In other words: academia probably needs to take a look in the mirror once in a while and question whether current norms really still serve its twin stated goals of encouraging sound “normal” scholarship as well as risky breakthroughs. I would also add a third goal: enabling all scholars to participate in whatever way fits their individual talents. There is much more to the article than space allows here; it’s a good piece for anybody interested in the future of scholarship, and it includes a highly detailed bibliography.

Citation: Moore, S., Neylon, C., Eve, M. P., O’Donnell, D. P. and Pattinson, D. “Excellence R Us”: university research and the fetishisation of excellence. Palgrave Communications 3, Article number: 16105 (2017)

Coda: In a nice example of walking the walk, the authors have this note about “subverting traditional scarce markers of prestige” by adopting:

a redistributive approach to the order of their names in the byline. As an international collaboration of uniformly nice people (cf. Moran et al., 2016; Hoover et al., 1987; see Tartamelia, 2014 for an explanation), lacking access to a croquet field (cf. Hassell and May, 1974), writing as individuals rather than an academic version of the Borg (see Guedj, 2009), and not identifying any excellent pun (cf. Alpher et al., 1948; Lord et al., 1986) or “disarmingly quaint nom de guerre” (cf. Mrs Kinpaisby, 2008, 298 [thanks to Oli Duke-Williams for this reference]) to be made from the ordering of our names, we elected to assign index numbers to our surnames and randomize these using an online tool.

Nature’s cultural blindspot

Yours truly on the Ronin Institute blog

A recent editorial in Nature, “Young scientists thrive in life after academia”, on the future of careers for today’s scientists is, on the one hand, optimistic, but on the other, deeply unsatisfying. The editorial is clearly well-intentioned, offering what it sees as hope for a generation of new scientists facing the worst funding climate and academic job market in decades. I agree with the editors that it is encouraging that people with PhDs and long periods of training are finding gainful employment.

However, the editorial has what might be called a cultural blindspot: the default assumption that doing research science is an activity one undertakes only within a specific set of jobs performed in specific institutions, and that once you’re out of those institutions there is no way to continue and no way back. Of those who moved out of academic positions:

Many had managed to stay in touch with science, and worked in a related function such as administration, outreach or publishing.

This seems to me to be disempowering: the best one can hope for is “to stay in touch with science”[1]. Is this really the most we can do for those who have spent many years acquiring skills in and knowledge of a subject? Is doing science really like a step function: all or nothing? To be fair, the editorial doesn’t say this, but that’s how I read the subtext.

Read more on the Ronin Institute blog post…

“Future of Careers in Scholarship”: November Unconference

The Ronin Institute is holding an Unconference on November 5th on the future of careers in scholarship, in which I’m participating. The Digital Biologist has more about the meeting:

If you’re interested in the future of research and scholarship, and like many in the field, you also subscribe to the consensus view that the current system is broken, you won’t want to miss the Ronin Institute’s “The Future of Careers in Scholarship”, being held in Cambridge MA on November 5th. The unconference format of the meeting will even allow you and other attendees to shape the agenda of the meeting, so come prepared to be an active participant rather than just a spectator. The meeting will be hosted at The Democracy Center in Harvard Square and you will have the chance to meet and network with an eclectic and forward-thinking group of people from a range of fields of study.

If you are really interested in the future of research and scholarship, and would like to get involved in the movement to advance beyond our current broken system and build new models for doing research, take a look at the Ronin Institute website. The Ronin Institute is devoted to facilitating and promoting scholarly research outside the confines of traditional academic research institutions.

More from The Digital Biologist

Python for the Life Sciences is now available on Leanpub

Biosystems Analytics

I’m very proud to announce that, together with my Amber Biology colleague Gordon Webster, our book Python for the Life Sciences is now available for purchase via Leanpub.

The book has ended up somewhat larger than originally planned, clocking in at over 300 pages, and covers a wide range of life science research topics, from biochemistry and gene sequencing to molecular mechanics and agent-based models of complex systems. We hope there’s something in it for any life scientist with little or no computer programming experience who would love to learn to code.

You can download the complete first chapter for free at Leanpub, and everybody who buys this first edition will have complete access to updates to this particular edition. Help us improve the book by emailing feedback, or any errors you spot, to info@amberbiology.com.

View original post

Open Science and Its Discontents

My first post on the Ronin Institute blog:

Open science has well and truly arrived. Preprints. Research Parasites. Scientific Reproducibility. Citizen science. Mozilla, the producer of the Firefox browser, has started an Open Science initiative. Open science really hit the mainstream in 2016. So what is open science? Depending on who you ask, it simply means more timely and regular releases of data sets and publication in open-access journals. Others imagine a more radical transformation of science and scholarship, advocating “open-notebook” science with a continuous public record of scientific work and concomitant release of open data. In this more expansive vision, science will ultimately be transformed from a series of static snapshots, represented by papers and grants, into a more supple and real-time practice in which professionals and citizen scientists blend and co-create a publicly available, shared body of knowledge. Michael Nielsen, author of the 2012 book Reinventing Discovery: The New Era of Networked Science, describes open science less as a set of specific practices than as a process for amplifying collective intelligence to solve scientific problems more easily:

To amplify collective intelligence, we should scale up collaborations, increasing the cognitive diversity and range of available expertise as much as possible. This broadens the range of problems that can be easily solved … Ideally, the collaboration will achieve designed serendipity, so that a problem that seems hard to the person posing it finds its way to a person with just the right microexpertise to easily solve it.

Read the rest at the Ronin Institute blog

The limitations of Big Data in life science R&D

Biosystems Analytics

Big Data has become an increasingly large presence in the life science R&D world, but as I have blogged about previously, ever-larger datasets and better machine-learning algorithms alone will not leverage that data into bankable knowledge, and can lead to erroneous inferences. My Amber Biology colleague Gordon Webster has a great post over on LinkedIn tempering the hype around Big Data, pointing out that analytics and visualizations alone are insufficient for extracting knowledge from biological datasets:

Applying the standard pantheon of data analytics and data visualization techniques to large biological datasets, and expecting to draw some meaningful biological insight from this approach, is like expecting to learn about the life of an Egyptian pharaoh by excavating his tomb with a bulldozer

“-omics” data, such as those produced by transcriptomic and proteomic analyses, are ultimately generated by dynamic processes consisting of individual genes, proteins and other molecules…
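To make that point concrete, here is a deliberately tiny sketch of my own (not from Gordon’s post) of how a transcriptomic “snapshot” is just one frame of an underlying stochastic process; the rate constants and function name are invented purely for illustration:

```python
import random

# Minimal Gillespie-style simulation of a single gene: transcripts are
# produced at a constant rate and degraded in proportion to their number.
# A transcriptomic "snapshot" is just the count at whatever moment you look.

def simulate_mrna(k_make=2.0, k_decay=0.1, t_end=100.0, seed=42):
    random.seed(seed)
    t, count = 0.0, 0
    while t < t_end:
        make_rate = k_make
        decay_rate = k_decay * count
        total = make_rate + decay_rate
        # Time to the next reaction is exponentially distributed.
        t += random.expovariate(total)
        # Pick which reaction fired.
        if random.random() < make_rate / total:
            count += 1
        else:
            count -= 1
    return count

# Two "replicates" of the same gene under identical conditions can give
# quite different snapshot values, purely from intrinsic noise.
print([simulate_mrna(seed=s) for s in range(5)])
```

The snapshot alone tells you very little about the process that produced it, which is exactly why analytics and visualization by themselves only get you so far.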

View original post 130 more words

Table of contents preview for Python for the Life Sciences

Biosystems Analytics

Our Amber Biology book Python for the Life Sciences is now nearing publication – we anticipate a publication date sometime in the early summer of 2016. As requested by many folks, we are releasing the first draft of the table of contents. If you’re interested in updates, you can sign up for our book mailing list. You can also check out a preview chapter on Leanpub.

Python at the bench:
In which we introduce some Python fundamentals and show you how to ditch those calculators and spreadsheets and let Python relieve the drudgery of basic lab calculations (freeing up more valuable time to drink coffee and play Minecraft)

Building biological sequences:
In which we introduce basic Python string and character handling and demonstrate Python’s innate awesomeness for handling nucleic acid and protein sequences.

Of biomarkers and Bayes:
In which we discuss Bayes’ Theorem and implement it in Python, illustrating in the…
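As a flavour of what that last chapter is getting at, here is a minimal sketch of Bayes’ Theorem applied to a made-up diagnostic biomarker. This is my own toy example rather than an excerpt from the book, and the prevalence, sensitivity and specificity numbers are invented:

```python
# Toy example: probability a patient actually has a disease given a
# positive biomarker test, via Bayes' Theorem.
# P(disease | +) = P(+ | disease) * P(disease) / P(+)

def posterior_probability(prevalence, sensitivity, specificity):
    """Return P(disease | positive test) for a binary biomarker test."""
    true_positive = sensitivity * prevalence
    false_positive = (1.0 - specificity) * (1.0 - prevalence)
    return true_positive / (true_positive + false_positive)

# Hypothetical numbers: a rare condition and a seemingly accurate test.
prevalence = 0.01      # 1% of the population has the condition
sensitivity = 0.95     # P(test positive | disease)
specificity = 0.95     # P(test negative | no disease)

print(round(posterior_probability(prevalence, sensitivity, specificity), 3))
# ~0.16 -- even a "95% accurate" test yields mostly false positives
# when the condition is rare.
```

The punchline is that for rare conditions even an apparently accurate biomarker produces mostly false positives, a result that is far easier to internalize once you can compute it yourself.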

View original post 524 more words

Where is this cancer moonshot aimed?

Biosystems Analytics

Much has been made of the recent announcement of VP Biden’s cancer moonshot program. In these days of ever-tightening research funding, every little bit helps, and the research community is obviously grateful for any infusion of funds. However, large-scale approaches to tackling cancer have been a staple of funding ever since Nixon announced his “War on Cancer” back in the 1970s, and any new approach must grapple with the often complicated history of research funding in this area. Ronin Institute Research Scholar Curt Balch has an interesting post over on LinkedIn breaking down some of these issues.

What seems relatively new in this iteration of the “war”, however, is a greater awareness of the lack of communication among those working on cancer from different approaches. Biden has specifically mentioned this need and has pledged to “break down silos and bring all cancer fighters together”. This…

View original post 536 more words

Life scientists: what are you looking to code?

Biosystems Analytics

My Amber Biology colleague, Gordon Webster, and I are working on an accessible introduction for biologists interested in getting into programming. Python for the Life Sciences will cover an array of topics to introduce Python and also serve as inspiration for your own research projects.

But we’d also like to hear from you.

What are the life science research problems that you would tackle computationally, if you were able to use code?

You can contact us here in the comments, at info@amberbiology.com, or via the more detailed post:

“Are you still using calculators and spreadsheets for research projects that would be much better tackled with computer code?” on the Digital Biologist.

View original post

Connecting the cognitive dots in systems biology modeling

Biosystems Analytics

Building computational models in any discipline involves many challenges, starting with inclusion (what goes in, what’s left out), through representation (are we keeping track of aggregate numbers, or actual individuals?), implementation (efficiency, cost) and finally verification and validation (is it correct?). Creating entire modeling software platforms intended for end-user scientists within a discipline brings an entirely new level of challenge. Cognitive issues of representation within the modeling platform – always present when trying to communicate the content of a model to others – become one of the most central challenges. To create modeling platforms that, say, a biologist might want to use requires paying close attention to the idioms and metaphors used at the most granular level of biology: at the whiteboard, the bench, or even in the field.

Constructing such software with appropriate metaphors, visual or otherwise, requires close collaboration with working scientists at every…
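To illustrate the representation question raised above (aggregate numbers versus actual individuals), here is a deliberately simple sketch of my own, not taken from the original post, contrasting the two styles for a growing cell population; the function names, growth rate and cell attributes are invented:

```python
import random

# The same biology, represented two ways: as a single aggregate number,
# or as a collection of individual cells that can carry their own state.

def grow_aggregate(n0=100, rate=0.05, steps=10):
    """Track only the total population size (deterministic, aggregate view)."""
    n = float(n0)
    for _ in range(steps):
        n += rate * n
    return n

def grow_individuals(n0=100, rate=0.05, steps=10, seed=1):
    """Track each cell separately; each one divides with some probability."""
    random.seed(seed)
    cells = [{"age": 0} for _ in range(n0)]
    for _ in range(steps):
        newborns = []
        for cell in cells:
            cell["age"] += 1
            if random.random() < rate:
                newborns.append({"age": 0})
        cells.extend(newborns)
    return len(cells)

print(grow_aggregate())    # ~162.9, smooth exponential growth
print(grow_individuals())  # a whole number that varies with the seed
```

Which of these two representations a platform commits to determines what an end-user biologist can and cannot express in it, which is exactly the kind of design decision the post is describing.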

View original post 402 more words