Open Science and Its Discontents

My first post on the Ronin Institute blog:

Open science has well and truly arrived. Preprints. Research Parasites. Scientific Reproducibility. Citizen science. Mozilla, the producer of the Firefox browser, has started an Open Science initiative. Open science really hit the mainstream in 2016. So what is open science? For some, it simply means more timely and regular releases of data sets, and publication in open-access journals. Others imagine a more radical transformation of science and scholarship, advocating "open-notebook" science with a continuous public record of scientific work and concomitant release of open data. In this more expansive vision, science is ultimately transformed from a series of static snapshots, represented by papers and grants, into a more supple and real-time practice in which professionals and citizen scientists blend their efforts and co-create a publicly available shared knowledge. Michael Nielsen, author of the 2012 book Reinventing Discovery: The New Era of Networked Science, describes open science less as a set of specific practices than as a process for amplifying collective intelligence to solve scientific problems more easily:

To amplify collective intelligence, we should scale up collaborations, increasing the cognitive diversity and range of available expertise as much as possible. This broadens the range of problems that can be easily solved … Ideally, the collaboration will achieve designed serendipity, so that a problem that seems hard to the person posing it finds its way to a person with just the right microexpertise to easily solve it.

Read the rest at the Ronin Institute blog

Life scientists: what are you looking to code?

Biosystems Analytics

My Amber Biology colleague, Gordon Webster, and I are working on an accessible introduction for biologists interested in getting into programming.  Python for the Life Scientists will cover an array of topics to introduce Python and also serve as inspiration for your own research projects.

But we’d also like to hear from you.

What are the life science research problems that you would tackle computationally, if you were able to use code?

You can contact us here in the comments, at info@amberbiology.com, or via the more detailed post:

“Are you still using calculators and spreadsheets for research projects that would be much better tackled with computer code?” on the Digital Biologist.

View original post

Connecting the cognitive dots in systems biology modeling

Biosystems Analytics

Building computational models in any discipline poses many challenges, from inclusion (what goes in, what's left out), through representation (are we keeping track of aggregate numbers, or actual individuals?) and implementation (efficiency, cost), to verification and validation (is it correct?).  Creating entire modeling software platforms intended for end-user scientists within a discipline brings an entirely new level of challenge.  Cognitive issues of representation within the modeling platform – always present when trying to communicate the content of a model to others – become one of the most central challenges.  Creating modeling platforms that, say, a biologist might want to use requires paying close attention to the idioms and metaphors used at the most granular level of biology: at the whiteboard, the bench, or even in the field.
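The representation question above – aggregate numbers versus actual individuals – can be made concrete with a toy sketch (a hypothetical illustration, not drawn from the original post): the same population-growth process modeled once as a single aggregate number and once as a collection of explicitly tracked individuals.

```python
import random

# Aggregate representation: the entire model state is one number,
# the population size, updated by a deterministic logistic rule.
def grow_aggregate(n, rate, capacity, steps):
    for _ in range(steps):
        n += rate * n * (1 - n / capacity)  # smooth, repeatable trajectory
    return n

# Individual-based representation: the model state is a list of individuals,
# each of which may divide stochastically; crowding lowers the chance.
def grow_individuals(count, rate, capacity, steps, seed=42):
    rng = random.Random(seed)
    individuals = list(range(count))
    for _ in range(steps):
        p_divide = max(0.0, rate * (1 - len(individuals) / capacity))
        offspring = [i for i in individuals if rng.random() < p_divide]
        individuals.extend(offspring)  # noisy trajectory, varies with the seed
    return len(individuals)

print(grow_aggregate(10.0, 0.3, 1000, 20))
print(grow_individuals(10, 0.3, 1000, 20))
```

The two runs tell similar stories in the mean, but only the individual-based version can carry per-individual state (age, position, genotype) – exactly the inclusion/representation trade-off a modeling platform has to surface to its users.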

Constructing such software with appropriate metaphors, visual or otherwise, requires close collaboration with working scientists at every…

View original post 402 more words

Academic publishing for fun and profit

Anthropologist David Graeber recently tweeted: "doing online research is SO much harder than it was when I was writing Debt. Everything's being privatised. It's a disaster for scholarship."  The book he's referring to is Debt: The First 5,000 Years, his groundbreaking history of debt from ancient times to the present debt crisis, first published back in ye olde 2011.  If things are bad in the humanities, they aren't much better over in the sciences: The Digital Biologist has published a particularly detailed and trenchant post on the current state of scientific academic publishing.  Worth a read:

The eye-watering prices that these academic publishing companies charge for their journals play a considerable role in further draining public money from a research system that is already enduring a major funding crisis. By some estimates, the subscriptions that universities must pay for access to these journals swallow up  as much as 10% of the public research funding that they receive.  This public money is essentially being channeled away from research and into the coffers of private sector corporations.

…

It is a testament to how expensive access to these journals has become, that even Harvard University, one of the wealthiest institutions of higher education in the world, recently sent a memo to its faculty members informing them that it could no longer afford the price hikes imposed by many large journal publishers.

Read more at: The Digital Biologist

Brian Eno on the vital role of the arts and humanities

There's a quiet but steady drumbeat pushing children and college students into narrow STEM (science, technology, engineering, mathematics) fields, and away from anything that doesn't contribute to the (narrowly defined) "economy". The UK Education Secretary said in 2014 that, for students, choosing to study arts or humanities could "hold them back for the rest of their lives".  Being trained in both science and engineering, I'm the first to agree that a well-informed, scientifically and technically literate citizenry is of the utmost importance, but it doesn't follow that we should just be shovelling people into STEM.  It's short-term thinking at its worst, born of the idea that the purpose of education is to train people to contribute to the global neoliberal corporate state, rather than a process of becoming a complete, well-rounded human being.

Read More »

Framing climate change as a national security issue has its perils

It's good to see that climate change has returned to US electoral politics as a serious issue (albeit entirely on one side of the aisle at this point). However, its return is being framed in a particular way: in the language of national security.  After past efforts to use environmental, public health and economic security arguments failed to gain enough traction to change policy, supporters of action on climate change believe they may now be on to a winner.  In the recent Democratic primary debate, Senator Bernie Sanders suggested that climate change was not just a "national security" issue, but the biggest national security issue.  Framing climate change this way has clear advantages: it gives the issue a sense of urgency and purpose, and it can perhaps convince more hawkish types to take the issue seriously. But it is not without certain perils.

In an interesting piece in Wired dissecting this new approach, one professor of public policy notes that using national security metaphors:

…reinforces nationalistic responses to solving the problem, as opposed to collective efforts that might be mutually beneficial to the world

In a sense, climate change is the ultimate collective action problem, and piecemeal national security responses are likely to run more towards local (or national) mitigation of the effects of climate change than towards the long-term systemic changes in the global economy that will be needed to tackle the problem effectively.  So if the "national security" rhetoric takes off, environmentalists, politicians and scientists will want to make sure that the other dimensions of climate change policy aren't abandoned or ignored.

Read more at Wired

(h/t to Tim De Chant of Per Square Mile)

Biologist Mickey von Dassow on collaboration, citizen science and ctenophores

Biosystems Analytics

Mickey von Dassow is a biologist who is interested in exploring how physics contributes to environmental effects on development. He created the website Independent Generation of Research (IGoR) as a platform where professional scientists, non-scientists, and anyone in between can collaborate on any scientific project they are curious about. I talked to him recently about his new site, citizen science, and the future of scientific research and scholarship.

Mickey von Dassow

Can you describe your background?

My background is in biomechanics and developmental biology. My Ph.D. asked how feedback between form and function shapes marine invertebrate colonies. During my postdoc I worked on the physics of morphogenesis in vertebrate embryos, specifically focusing on trying to understand how the embryo tolerates inherent and environmentally driven mechanical variability. Since then I have been independently investigating interactions among ecology, biomechanics, and development of marine invertebrate embryos, as…

View original post 2,009 more words

What innovation isn’t

Biosystems Analytics

Innovation.  It’s as American as apple pie.  From the US President on down, everybody is talking about innovation.  From university presidents and corporate leaders to Silicon Valley tycoons, all agree that we need more of it.  Airport bookstores have walls of books on innovation: a quick search on Amazon resulted in 70,140 titles containing the word “innovation”, 711 of which were published in the last 90 days alone.  Many of them are little more than generic business advice books with the word “innovation” shoehorned into the title, including gems such as Creating Innovation Leaders (earning bonus points for including buzzwords “leadership” and “creativity”).  So it was with some trepidation that I recently picked up Scott Berkun’s The Myths of Innovation – first published in 2007 – and found it had a refreshing and unpretentious take on the subject.  Since it has become such an overused buzzword, Berkun argues that…

View original post 766 more words

All Big Data is equal, but some Big Data may be more equal than others

Biosystems Analytics

We are in the era of Big Data in human genomics: a vast treasure-trove of information on human genetic variation either is or will soon be available.   This includes older projects such as HapMap and 1000 Genomes, as well as the in-progress 100,000 Genomes Project in the UK.  Two technologies have made this possible: the advent of massively parallel "next-generation" sequencing, in which each individual's DNA is fragmented and amplified into billions of pieces; and powerful computational algorithms that use these fragments (or "reads") to identify all the "variants" – any differences from the "reference genome" – in each individual.

With existing tools this has become a relatively straightforward task.  Identification of single nucleotide polymorphisms, or variants (SNVs) – single-base differences between an individual and the reference genome, especially medically relevant ones – is beginning to become routine. A project I worked on with…
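To make the read-to-variant step concrete, here is a deliberately naive sketch of SNV calling in principle: pile up the bases that aligned reads place at each reference position, then call a variant where a non-reference base dominates with adequate coverage. The reference, reads and thresholds are invented for illustration; real callers are vastly more sophisticated, modeling alignment uncertainty, indels and sequencing error.

```python
from collections import Counter

def call_snvs(reference, aligned_reads, min_depth=3, min_fraction=0.7):
    """Toy single-nucleotide variant caller (illustrative only).

    aligned_reads: list of (start_position, read_sequence) pairs, already
    aligned to the reference with no indels -- a big simplification.
    Returns a list of (position, reference_base, variant_base) tuples.
    """
    pileup = [Counter() for _ in reference]  # base counts at each position
    for start, read in aligned_reads:
        for offset, base in enumerate(read):
            pileup[start + offset][base] += 1

    variants = []
    for pos, counts in enumerate(pileup):
        depth = sum(counts.values())
        if depth < min_depth:
            continue  # too little coverage to make any call
        base, n = counts.most_common(1)[0]
        if base != reference[pos] and n / depth >= min_fraction:
            variants.append((pos, reference[pos], base))
    return variants

# Three of the four reads support a C at position 2, where the reference has G.
reference = "ACGTACGT"
reads = [(0, "ACGTACGT"), (0, "ACCTACGT"), (2, "CTACGT"), (1, "CCTACG")]
print(call_snvs(reference, reads))  # → [(2, 'G', 'C')]
```

Even this toy version shows why the computational side matters: at genome scale the pileup covers billions of positions, and the thresholds become statistical models of coverage and error.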

View original post 780 more words

Finding data in the long-tail

Biosystems Analytics

Scientists are increasingly seeking the most comprehensive catalogue of datasets for any particular question, so being able to find as much of the relevant data as possible looms as a large issue.   Although institutional repositories (such as NCBI, Dryad, Figshare etc.) are great at storing the final published versions of data sets, some early and smaller-scale research data can get lost in the "long tail".   Anne Thessen has a great post over on her blog, the Data Detektiv, on how to locate and keep track of such "dark data":

Finding relevant data, especially if the needed data are dark, can be a difficult and lengthy task. … Was there a way to discover data based on events earlier in the research workflow? After some thought, I realized that databases and lists of awards made by funding agencies were an…

View original post 21 more words