Where next for ETH?
We are all proud of ETH. And rightly so. Nevertheless, over recent years I have come to suspect that ETH is increasingly making decisions that may seem correct from a specific point of view, but that fail to take the bigger picture into account and ultimately contribute to undesirable developments.
ETH is one of the world’s leading technical universities. And rightly so! But how do we define “global leader”? Does it matter whether ETH moves up one or two places in a global ranking that is partially based on criteria widely regarded as dubious? This question could be dismissed as irrelevant to everyday university life. However, if we do consider the rankings to be important, it is only logical – and even necessary – that the quality of ETH staff is assessed using criteria commensurate with those employed in the rankings. Notably, this implies that the quality criteria are dictated from outside, rather than being based on internal agreement.
Indicatoritis
Rankings must be based on figures. When it comes to academic rankings, one disease is becoming increasingly rampant, especially in relation to publications: indicatoritis.
First, we have come to rely on a monopolist, the Institute for Scientific Information (ISI; now owned by Thomson Reuters, i.e. a private, profit-oriented company). Journals indexed by the ISI are widely considered potentially good, whereas those that are not are assumed to be poor. Is it appropriate for the value of scientific results to be decided by a single private company? Among other things, this renders publications in practice-oriented journals (which may well be peer-reviewed) worthless on the CVs of ETH researchers, thereby hindering the dialogue between research and practice. Is that really what we want?
Second, criteria are needed to identify those articles in ISI journals that are not just potentially good, but actually good. The impact factor (IF) of a journal is widely used for this purpose, i.e. the average number of citations received by the articles in that journal. However, the mean is known to be unsuitable for characterising skewed distributions, and the distribution of citation frequencies in scientific publications is extremely skewed. For example, 80% of the papers published in the highly regarded journal Nature have fewer than 15 citations eight years after publication (base year 2005; analysis 2006–2013). They could therefore be described as flops. This statement assumes that citation frequency is an indicator of the quality of a publication; although this assumption is dubious as well, I will let it stand for now.
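The problem with averaging a skewed distribution can be made concrete with a small numerical sketch (the citation counts below are invented for illustration, not real Nature data): a handful of highly cited papers pull the journal-wide average far above what a typical paper in the journal achieves.

```python
from statistics import mean, median

# Hypothetical citation counts for 20 papers in one journal:
# most gather few citations, a few gather very many.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 5,
             6, 7, 8, 9, 10, 12, 40, 80, 150, 400]

print(mean(citations))    # 37.4 -- pulled up by four outliers
print(median(citations))  # 5.5  -- what a typical paper achieves
```

In this toy journal the “impact factor”-style mean (37.4) is nearly seven times the median (5.5), so the average tells us almost nothing about the citation count of any individual paper.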
Thus, a recent publication in Nature is not, by itself, a useful indicator of a paper’s quality. Nevertheless, many scientists continue to be written off for publishing in low-IF journals.
Not enough time for serious work?
What causes such neglect? ETH professors often supervise well over ten doctoral students; they conduct research themselves, they teach, and “on the side” they manage an SME with an annual turnover of more than a million Swiss francs and some 20 to 30 employees. Time is their scarcest resource. In theory, indicators can provide a quick overview and save a great deal of time in such situations. In reality, however, every indicator is merely a model, i.e. a simplified and therefore limited depiction of reality. There is only one way to assess the quality of individual scientists: by understanding the subject matter, carefully studying the documentation, reading several publications by the person in question, and forming one’s own opinion. All of this takes time. If we believe we can no longer spare this time and instead succumb to indicatoritis, then the quality of teaching and research will de facto diminish, even though we pretend otherwise.
The principle of “excellence before profile” has always applied when appointing professors at ETH. And rightly so! The key question, however, is how to define excellence. Because we believe we do not have enough time, we increasingly rely on indicators, and indicators are most readily available for publications – a gap in the market that the ISI discovered and duly filled. The quality and scope of teaching, personality traits such as team spirit, interdisciplinary interactions, and contact with practice partners are examples of criteria that I believe are just as important. Yet when indicatoritis strikes, they are barely (if at all) considered when assessing the excellence of colleagues.
The Bologna misunderstanding
I have just touched upon the quality of teaching. Professors are university teachers just as much as they are researchers. We should therefore have a shared understanding of what we are teaching at ETH, and why. In the system-oriented sciences at ETH, around 80% of the MSc graduates enter the world of work, where they are welcomed with open arms – this has been the case for decades. Only around 20% enrol in a PhD project. And rightly so! We are supplying the labour market with highly qualified specialists, while a (smaller) proportion of graduates pursue an academic career.
In the Bologna reform, Switzerland adopted the term “Master’s” from the English-speaking world, where most students leave university with a Bachelor’s degree. Only a few leave with a Master’s – primarily those who set out to pursue a PhD but then realise they have chosen the wrong subject or, for whatever reason, drop out. That is why the term “graduate students” covers both MSc and PhD students there. Because these Anglo-Saxon terms were adopted into Switzerland’s entirely different education system, many professors accustomed to the Anglo-Saxon system erroneously believe that the primary purpose of a Master’s is to educate future doctoral students, who should already be producing papers in high-IF ISI journals. This is associated with corresponding tendencies in the curricula and with great disappointment about what can actually be achieved within an ETH Master’s project (which usually takes a mere six months). Thus, in education too we are about to succumb to indicatoritis.
It is often true that nothing is as practical as a good theory (or a good theoretical education). However: if, say, environmental science MSc graduates wishing to pursue a practical career in land use become so “excellent” that they master spatial statistics but have no idea where in space the ecosystems typical of Switzerland are actually located, their chances on the Swiss job market will be severely limited. Whether Switzerland can or should afford to educate ETH graduates primarily for the international market, for PhD studies, and for the ISI and the IF – rather than for the requirements of our country – is a political question that I would answer with a resounding no: unlike the USA, for example, we simply have too few universities to be able to afford this.
What next?
As individual scientists, we are quite helpless against this undesirable direction in which the academic system is heading, at ETH and elsewhere. However, ETH enjoys an excellent worldwide reputation, one that we are all proud of. This reputation would enable us to distance our institution from the rankings and from externally imposed indicatoritis, to define our own criteria, and to decide for ourselves what is excellent and where we want to go. We should move towards a sensible future, not necessarily higher up in the rankings.
At a public event around the time of the introduction of the Bologna reform at ETH, then-Rector Konrad Osterwalder was asked whether the ETH curricula were going to be certified after the reform. He replied with indignation that ETH is ETH and does not need any certification. And rightly so! We do not need rankings, either. What we do need is excellent research, so that we can remain among the international elite despite our country’s small size. And we need high-quality, theoretically grounded, and practice-oriented teaching, so that we can continue to produce outstanding graduates who are ready for the labour market. This goal is only attainable if we redefine the metrics of academic success. It would be worth discussing the ETH value scale in depth.
About the Author
Harald Bugmann currently has an h-index of 33, which is rather high for his age and field. He has published in high-IF journals such as Science, Ecology Letters, Ecological Monographs, Ecology and the Journal of Ecology. However, he does not believe that these indicators can be used to pass judgement on whether he is a good ETH professor. He regularly publishes articles in the Swiss Forestry Journal and even in Bündner Wald, although these are not indexed by the ISI, since he is naive enough to believe that translating research results into practice is an important matter that does not happen by itself. He has never attempted to calculate how much higher his h-index would be if he had written more papers for ISI journals instead of wasting his time on Bündner Wald and the like.