Open science has well and truly arrived. Preprints. Research Parasites. Scientific Reproducibility. Citizen science. Mozilla, the producer of the Firefox browser, has started an Open Science initiative. Open science really hit the mainstream in 2016. So what is open science? Depending on whom you ask, it may simply mean more timely and regular releases of data sets, and publication in open-access journals. Others imagine a more radical transformation of science and scholarship, advocating “open-notebook” science with a continuous public record of scientific work and concomitant release of open data. In this more expansive vision, science would ultimately be transformed from a series of static snapshots, represented by papers and grants, into a more supple and real-time practice in which professionals and citizen scientists blend their efforts, co-creating a publicly available shared knowledge. Michael Nielsen, author of the 2012 book Reinventing Discovery: The New Era of Networked Science, describes open science less as a set of specific practices than as a process for amplifying collective intelligence to solve scientific problems more easily:
To amplify collective intelligence, we should scale up collaborations, increasing the cognitive diversity and range of available expertise as much as possible. This broadens the range of problems that can be easily solved … Ideally, the collaboration will achieve designed serendipity, so that a problem that seems hard to the person posing it finds its way to a person with just the right microexpertise to easily solve it.
Python at the bench:
In which we introduce some Python fundamentals and show you how to ditch those calculators and spreadsheets and let Python relieve the drudgery of basic lab calculations (freeing up more valuable time to drink coffee and play Minecraft).
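As a taste of the kind of bench calculation Python can take over, here is a minimal sketch (our own illustrative example, not an excerpt from the book) using the familiar dilution relation C1·V1 = C2·V2:

```python
# Illustrative example (not from the book): how much stock solution do we
# need to prepare a dilution? Rearranges C1 * V1 = C2 * V2 for V1.

def stock_volume_needed(stock_conc, final_conc, final_vol):
    """Return the volume of stock required, in the same units as final_vol."""
    return final_conc * final_vol / stock_conc

# e.g. preparing 50 mL of a 0.5 M working solution from a 10 M stock
print(stock_volume_needed(10.0, 0.5, 50.0))  # 2.5 (mL of stock, topped up to 50 mL)
```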
Building biological sequences:
In which we introduce basic Python string and character handling and demonstrate Python’s innate awesomeness for handling nucleic acid and protein sequences.
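To give a flavour of that string handling, here is a minimal sketch (our own illustrative example, not an excerpt from the book) computing the reverse complement of a DNA sequence with nothing but built-in Python string machinery:

```python
# Illustrative example (not from the book): the reverse complement of a
# DNA sequence, using only built-in string methods.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Complement each base, then reverse the strand."""
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGCGT"))  # ACGCAT
```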
Of biomarkers and Bayes: In which we discuss Bayes’ Theorem and implement it in Python, illustrating in the…
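To sketch the idea (with illustrative numbers of our own choosing, not those from the chapter), Bayes’ Theorem gives the probability of disease given a positive biomarker test:

```python
# Illustrative example (numbers assumed, not from the chapter):
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

def posterior(prior, sensitivity, specificity):
    """Probability of disease given a positive test, via Bayes' Theorem."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# A 1%-prevalence condition with a 99%-sensitive, 95%-specific test:
# even a positive result leaves only about a 17% chance of disease.
print(posterior(0.01, 0.99, 0.95))
```

The counterintuitive smallness of that posterior is exactly why the base rate (the prior) matters so much for rare-disease biomarkers.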
Much has been made of the recent announcement of VP Biden’s cancer moonshot program. In these days of ever-tightening research funding, every little bit helps, and the research community is obviously grateful for any infusion of funds. However, large-scale approaches to tackling cancer have been a staple of funding ever since Nixon announced his “War on Cancer” back in the 1970s, and any new approach must grapple with the often complicated history of research funding in this area. Ronin Institute Research Scholar Curt Balch has an interesting post over on LinkedIn breaking down some of these issues.
What seems relatively new in this iteration of the “war”, however, is a greater awareness of the lack of communication among those working on cancer from different approaches. Biden has specifically mentioned this need and has pledged to “break down silos and bring all cancer fighters together”. This…
My Amber Biology colleague, Gordon Webster, and I are working on an accessible introduction for biologists interested in getting into programming. Python for the Life Scientists will cover an array of topics to introduce Python and also serve as inspiration for your own research projects.
But we’d also like to hear from you.
What are the life science research problems that you would tackle computationally, if you were able to use code?
You can contact us here in the comments, by email at email@example.com, or on the more detailed post:
Building computational models in any discipline involves many challenges, starting with inclusion (what goes in, what’s left out), through representation (are we keeping track of aggregate numbers, or actual individuals?) and implementation (efficiency, cost), to verification and validation (is it correct?). Creating entire modeling software platforms intended for end-user scientists within a discipline brings an entirely new level of challenge. Cognitive issues of representation within the modeling platform – always present when trying to communicate the content of a model to others – become one of the most central challenges. Creating modeling platforms that, say, a biologist might want to use requires paying close attention to the idioms and metaphors used at the most granular level of biology: at the whiteboard, at the bench, or even in the field.
Constructing such software with appropriate metaphors, visual or otherwise, requires close collaboration with working scientists at every…
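The representation choice mentioned above – aggregate numbers versus actual individuals – can be made concrete with a toy population model (a minimal sketch of our own, not taken from any particular platform):

```python
import random

# Aggregate representation: the whole population is one number,
# updated deterministically.
def aggregate_step(population, growth_rate=0.1):
    return population * (1 + growth_rate)

# Individual-based representation: each organism is an explicit agent
# with its own state, and reproduction is stochastic.
def individual_step(agents, birth_prob=0.1):
    offspring = [{"age": 0} for parent in agents if random.random() < birth_prob]
    return agents + offspring

print(aggregate_step(1000))                              # 1100.0, every time
agents = individual_step([{"age": 0} for _ in range(1000)])
print(len(agents))                                       # roughly 1100, varies run to run
```

The aggregate version is cheap and easy to analyze; the individual-based version costs more but lets each agent carry state a biologist actually cares about – which is exactly the kind of trade-off a modeling platform has to surface to its users.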
The promise of the Internet as a means to “level the playing field” has seriously gone off the rails. A two-day conference at The New School, which wound up this last weekend, explored the emergence of platform cooperativism. Platform cooperativism aims to reclaim the democratic promise of the Internet from the rapacious, heavily leveraged, extractive models of the so-called “sharing economy”, such as Uber and AirBnB, and to move it towards models of true user ownership and governance. As pointed out in a set of five summary essays that appeared in The Nation, these are not (mainly) technical challenges but legal and political ones. An example is FairCoop:
FairCoop is one among a whole slew of new projects attempting to create a more democratic Internet, one that serves as a global commons. These projects include user-owned cooperatives, “open value” companies structured like a wiki, and forms of community-based financing. Part of what distinguishes them from mainstream tech culture is the determination to put real control and ownership in the hands of the users. When you do that, the platform becomes what it always should have been: a tool for those who use it, not a means of exploiting them.
Many of these efforts will face an uphill battle and, as Astra Taylor pointed out at the conference (she follows Douglas Rushkoff’s presentation in the video link), will probably be fiercely resisted by the newly entrenched platforms of Google, Facebook and the like. But the same could once have been said about those platforms themselves, many of which were just small upstarts back in the 1990s. The real challenge is one familiar to evolutionary biologists from game theory: building systems that reduce the chance of “invaders” or “cheaters” (in this case, rapacious VC firms and super-capitalism in general) swamping a population of mutually beneficial cooperators (or turning those cooperators into cheaters). The solution doesn’t have to be, and could never be, perfect: you’ll never reduce the population of cheaters to zero, but you can at least keep them from taking over your population completely.
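That game-theoretic intuition can be sketched with a toy replicator model (our own illustration with assumed payoffs, not anything presented at the conference): cheaters gain an exploitation edge, but a sanction that bites harder as they spread keeps them from fixating.

```python
# Toy replicator dynamics (illustrative payoffs, assumed for this sketch):
# cooperators do better the more cooperators there are; cheaters gain an
# exploitation edge, but are sanctioned in proportion to their own numbers.

def step(f, edge=0.1, sanction=0.5):
    """One replicator update of the cooperator fraction f."""
    w_coop = 1 + f
    w_cheat = 1 + f + edge - sanction * (1 - f)
    mean_fitness = f * w_coop + (1 - f) * w_cheat
    return f * w_coop / mean_fitness

f = 0.5
for _ in range(2000):
    f = step(f)
print(round(f, 3))  # settles at 0.8: cheaters persist, but don't take over
```

The stable equilibrium here is an interior one – cooperators at 80%, cheaters at 20% – mirroring the point above that you never drive cheaters to zero, only keep them in check.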
Read more about platform cooperativism at The Nation…
On-demand computing, often known as “cloud computing”, provides access to the computing power of a large data center without the need to maintain an in-house high-performance computing (HPC) cluster, with its attendant management and maintenance costs. As even the most casual observers of the tech world will know, cloud computing is growing in many sectors of the economy, including scientific research. Cheap “computing as a utility” has the potential to bring many large-scale analyses within reach of smaller organizations that lack the means or infrastructure to run a traditional HPC cluster. These organizations could include smaller clinics, hospitals, colleges, non-profit organizations, and even individual independent researchers or small research groups. But beyond the industry enthusiasm, how much can cloud computing really help enable low-cost scientific analyses?