Open Science: just an idea or an actual alternative?
From the start, I will admit that I enrolled in the course “Ticket to Open Science” because it provided 3 necessary transversal credits for my PhD, and because it was an online course. So Eva actually nailed the “make Open Science enticing” part from the get-go. However, it is also true that during the course I encountered a new science paradigm which, if it works, would greatly improve the quality of research, and maybe also the lives of researchers. The question, then, is IF it works, and how. Please note that this is just an opinion that seeks to prompt reflection in the Open Science community. I do not intend to portray my thoughts as the truth, but I do want to give my honest opinion.
I feel like most of the researchers I know would agree with most of the goals that Open Science is trying to push: open access to publications and data; reducing the pressure to publish as much as possible, as fast as possible, and as positively as possible; ending the outrageous profits that publishing companies extract from the scientific community's knowledge, etc. But we need to understand that, on their path to becoming professors and accessing grants, researchers have adapted to what institutions and funding agencies expected from them: a high h-index, publications in high-impact-factor journals, and as many publications as possible. In extreme cases, this has led to the emergence of “hyperprolific” authors, who publish hundreds of papers annually and, in some cases, admit that they haven't even read their own publications. How can someone who hasn't even read (let alone contributed to) a publication be one of its authors? The grant system and the universities have created these monsters by rewarding this behaviour, questioning neither whether the scientist actually participated in the study nor the study's real impact. All that mattered were the metrics.
The Open Science community seems to agree that a reform of the current evaluation system is necessary to prevent this situation; it is one of the founding ideas of CoARA. For me, its presentation during the course answered a very big question about the Open Science movement. Having said that, I have my concerns regarding one of its commitments, specifically evaluating research qualitatively. Nowadays, I think it is very difficult to evaluate a paper if you are not an expert on the subject. Grant committees typically include a number of researchers with experience in the area the grant is meant to fund, but they cannot be experts in every research proposal they receive. Whether we like it or not, journal metrics are an “objective” parameter that can be consulted to estimate the likelihood that the funded work will produce publishable results. I would say that, based on what I commented previously about “hyperprolific” authors, using only the current journal metrics to evaluate research is probably not a great way to allocate funding. Nevertheless, I wonder whether qualitative evaluation of research results can lead to more bias when awarding grants. If there are no “numbers” to check, how do you guarantee that a committee is funding what it truly believes are the best proposals, rather than what its members think will benefit themselves? And what happens when results are misinterpreted because there are no experts on the committee to evaluate them? Metrics are an easy way to evaluate grant applications. They are hugely imperfect and promote practices that are, in my opinion, anti-scientific, but they could be a way to minimize the jury's bias, as well as to remove the non-expert factor. I think there is a lot of work to be done to properly assess research, and some qualitative assessment is probably necessary, but metrics can't be discarded. Maybe we need to find new ones.
I am being very critical of the assessment side of Open Science because I think it is the most important aspect to tackle. Researchers are now publishing in the open more and more, but mostly in gold or hybrid journals, because they need to publish in high-IF journals for funding and career advancement. Universities are paying not only subscriptions to be able to read the journals and APCs to publish articles, but additional APCs to make those articles open. This only inflates the profits of the private publishers. If research evaluation is still going to promote metrics like the JIF and the JCI, then we need competitive diamond/platinum open-access journals in which we can publish without being penalized for low metric values. If other metrics are adopted for research evaluation, these OA journals need to adapt to them so that scientists are attracted to publish there. What cannot be expected is that researchers send articles to diamond/platinum OA journals and are then penalized in grant committees or university promotions because of their criteria.
Maybe another aspect that needs to be tackled by the Open Science movement is the evaluation of research outcomes other than articles. This came up in the course on several occasions, and I agree with most of what was said. I really liked the video Eva sent us on this topic (https://www.youtube.com/watch?v=B1cw8IfnOAY). I think that, as the video says, if you document in the open all the steps required to go from a hypothesis to an experimental conclusion, the possibility of falsifying results becomes lower, and the reproducibility of those results becomes higher. It is also proof that you have been using the grant money towards the goals stated in the proposal, and that the results, even if negative, are documented. Furthermore, collaboration between peers may be strengthened by looking together for possible failures during the research, which may help you advance when you are stuck. Maybe publishing negative results is not feasible at the moment, but at least having a repository in which you can check previous failures is helpful. Expertise in a topic comes not only from the positive results, but also from the negative ones. Even if not in article form, they have to be documented and taken into consideration.
In general, I am really thankful for the 3 credits (I'm joking :D). I am thankful for the course. It opened my eyes to how the Open Access movement is trying to improve the current situation of researchers while also increasing public access to scientific advances. It also provided me with some useful tools (OpenAIRE, Zenodo, Argos) for my research career. And it has made me seriously consider, from now on, trying to publish in the open. For the advancement of science.