Archive

Entries in science careers (102)

Wednesday, Jun 30, 2021

In praise of metrics during tenure review

Metrics, especially the impact factor, have fallen badly out of favour as a mechanism for tenure review. There are good reasons for this: metrics have flaws, and journal impact factors in particular have clear flaws. It is important, however, to weigh up the pros and cons of the alternative systems being put in place, as they also have serious flaws.

To put my personal experience on the table, I've always been in institutes with 5-yearly rolling tenure. I've experienced two tenure reviews based on metrics, and two based on soft measures. I've also been part of committees designing these systems for several institutes. I've seen colleagues hurt by metric-orientated systems, and colleagues hurt by soft measurement systems. There is no perfect system, but I think people seriously underestimate the potential harm of soft measurement systems.

Example of a metric-based system

When I first joined the VIB, they had a simple metric-based system. Over the course of 5 years, I was expected to publish 5 articles in journals with an impact factor over 10. I went into the system thinking that these objectives were close to unachievable, but the goals came with serious support that made them highly achievable.

For me, the single biggest advantage of the metric-based system was its transparency. It was not the system I would have designed, but I knew the goals, and more importantly I could tell when I had reached those goals. Three years into my 5-year term I knew that I had met the objectives and that the 5-yearly review would be fine. That gave me and my team a lot of peace of mind. We didn't need to stress about an unknowable outcome.

Example of a soft measurement system

The VIB later shifted to a system that is becoming more common, where output is assessed for scientific quality by the review panel, rather than by metrics. The Babraham Institute, where I am now, uses a similar system. Different institutes have different expectations and assessment processes, but in effect these soft measurement systems all come down to a small review panel making a verdict on the quality of your science, with the instruction not to use metrics.

This style of assessment creates an unknown. You really don't know for sure how the panel will judge your science until the day their verdict comes out. Certainly, such panels have the potential to save group leaders who would be hurt by metric-based systems, but equally they can fail group leaders who were productive yet judged more harshly by biases introduced through the panel than by the peer review they experienced from manuscript reviewers.

This in fact brings me to my central thesis: with either metrics or soft measurement systems, you end up having a small number of people read your papers and make their own judgement on the quality of the science. So let's compare how the two work in practice:

Metrics vs soft measurements

Under the metric-based system, essentially my tenure reviewers were the journal editors and external reviewers. For my metrics, I had to hit journals with impact factors above 10, which gives me around 10 journals to aim at in my field. I had 62 articles during my first 5 years, and let's say that the average article went to two journals, each with an editor and 3 reviewers. That gives me a pool of around 500 experts (62 articles × 2 submissions × 4 assessors ≈ 500) reviewing my work, and judging whether it is of the quality and importance worthy of hitting a major journal. There is almost certainly going to be overlap in that pool, and I published a lot more than many starting PIs, but it is not unreasonable to think that 100 different experts weighed in. Were all of those reviews high quality? No, of course not. But I can say that I had the option to exclude particular reviewers, the reviewers could not have open conflicts of interest, the journal editor acted as an assessor of the review quality, and I had the opportunity to rebut claims with data. Each individual manuscript review is a reviewer roulette, a flawed process, but in aggregate it does create a body of work reviewed by experts in the field.

Consider now the soft measurement system. In my experience, institutes review all PIs at the same time. Some institutes do this with an external jury, with perhaps 10 individuals but maybe only 1-3 who are actually experts on your topic. Other institutes do this with an internal jury, perhaps the 3-5 individuals in the most senior posts. In each case, you have an extremely narrow range of experts reviewing very large numbers of papers in a very short amount of time. In my latest review I had 79 articles over the prior 5 years. I doubt anyone actually read them all (I wouldn't expect them to). More realistically, I expect they read most of the titles, some of the abstracts, and perhaps 1-2 articles briefly. Instead, what would have heavily influenced the result is the general opinion of my scientific quality, which is going to be very dependent on the individuals involved. While both systems have treated me well, I have seen very productive scientists fall foul of this system, simply because of major personality clashes with their head of department (who typically either selects the external board, or chairs the internal jury). Indeed, I have seen PIs leave the institute rather than be reviewed under this system, and (in my experience) the system has been a heavier burden on women and immigrants.

Better metrics

As part of the University of Leuven Department of Microbiology and Immunology board, I helped to fashion a new system which was built as a composite of metrics. The idea was to keep the transparency and objectivity of metrics, but to use them in a responsible manner and to ameliorate flaws. The system essentially used a weighted points score, building on different metrics. For publications in the prior 5 years, journal impact factor was used. For publications >5 years old, this was replaced by actual citations of your article. Points were given for teaching, Masters and PhD graduations, and various services to the institute. Again, each individual metric includes inherent flaws, and the basket of metrics used could have been improved, but the ethos behind the system was that by using a portfolio of weighted metrics you even out some of the flaws and create a transparent system.
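
To make that ethos concrete, here is a minimal sketch of how such a composite, weighted score could be computed. The weights, scaling factors and category names below are hypothetical illustrations of the approach, not the values actually used in the Leuven system.

```python
# Hypothetical sketch of a weighted portfolio of metrics for tenure review.
# All weights and scalings are illustrative, not the real Leuven values.
from datetime import date

def publication_points(pub, today=None):
    """Score one publication: journal impact factor for recent papers,
    actual citations of the article for papers more than five years old."""
    today = today or date.today()
    age_years = (today - pub["date"]).days / 365.25
    if age_years <= 5:
        return pub["journal_impact_factor"]   # recent: journal-level proxy
    return pub["citations"] / 10.0            # older: article-level evidence (hypothetical scaling)

def portfolio_score(record, weights=None):
    """Combine a basket of weighted metrics into one transparent composite score."""
    weights = weights or {
        "publications": 1.0,         # weight applied to summed publication points
        "phd_graduations": 5.0,      # points per PhD graduation
        "masters_graduations": 2.0,  # points per Masters graduation
        "teaching_hours": 0.05,      # points per hour of teaching
        "institute_services": 3.0,   # points per service role
    }
    score = weights["publications"] * sum(publication_points(p) for p in record["publications"])
    for key in ("phd_graduations", "masters_graduations", "teaching_hours", "institute_services"):
        score += weights[key] * record.get(key, 0)
    return score
```

The specific numbers matter less than the property they buy: every PI can compute their own score at any point in the cycle and know exactly where they stand.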

The path forward

I hope it is clear that I recognise the flaws present in metrics, but also that I consider metrics to confer transparency and to be a valuable safeguard against the inevitable political clashes that can drive decisions by small juries. In particular, metrics can safeguard junior investigators against the conflicts of interest that can dominate when a small internal jury has the power to judge the value of output. Just because metrics are flawed doesn't mean the alternatives are necessarily better.

In my ideal world (in the unlikely scenario that I ever become an institute director!), I would implement a two-stage review system, using 7-year cycles. The first stage would be metric-based, using a portfolio of different metrics. These metrics would be in line with institute values, to drive the type of behaviour and outputs that are desired. The metrics would include provisions for parental or sick leave, built into the system. They would be discussed with PIs at the very start of the review period, and fixed. Everything would be above board, transparent, and realistic for PIs to achieve. Administration would track the metrics, eliminating the excess burden of constant reviewing on scientists.

For PIs who didn't meet the metric-based criteria, a second system would kick in. This second system would be entirely metric-free, and would instead focus on the re-evaluation of their contributions. By limiting this second evaluation to the edge cases, substantial resources could be invested to ensure that the re-evaluation was performed in as unbiased a manner as possible, with suitable safeguards. I would have a panel of 6 experts (paid for their time), 3 selected from a list proposed by the PI and 3 selected from a list proposed by the department head. Two internal senior staff would also sit on the panel, one selected by the PI and one selected by the department head. The panel would be given example portfolios of PIs that met the criteria of tenure review, to benchmark against. The PI would present their work and defend it. The panel would write a draft report and send it to the PI. The PI would then have the opportunity to rebut any points on the report, either in writing or as an oral defence, at the choice of the PI. The jury would then make a decision on whether the quality of the work met the institute objectives.
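
A minimal sketch of how those two stages might fit together, assuming a single portfolio score (as in the earlier sketch) and a hypothetical pro-rating rule for leave:

```python
# Hypothetical sketch of the two-stage review logic described above.
# The base target, leave adjustment and return strings are illustrative only.

def adjusted_target(base_target, months_on_leave, cycle_months=84):
    """Pro-rate the metric target for parental or sick leave over a 7-year cycle."""
    active_fraction = max(cycle_months - months_on_leave, 0) / cycle_months
    return base_target * active_fraction

def tenure_review(pi_record, base_target):
    """Stage 1: transparent metrics, tracked by administration.
    Stage 2: resource-intensive, metric-free panel review for the edge cases only."""
    target = adjusted_target(base_target, pi_record.get("months_on_leave", 0))
    if pi_record["portfolio_score"] >= target:
        return "renewed at the metric stage"
    # Stage 2: a panel of 6 paid external experts (3 proposed by the PI, 3 by the
    # department head) plus 2 internal senior staff, benchmarked against example
    # portfolios, with a draft report and a PI rebuttal before the final decision.
    return "referred to the metric-free panel re-evaluation"
```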

I would argue that this compound system brings in the best of both worlds. For most PIs, the metric-based system will bring transparency and will reduce both stress and paperwork. Those PIs whose value is not adequately captured by metrics get the detailed attention that is only possible when serious resources are committed to a review. Yes, it takes a lot of extra effort from the PI, the jury and the institute, which is why I don't propose running it for everyone.

TLDR: it is all very well and good to celebrate when an institute says it is going to drop impact factors in their tenure assessment, but the reality is that the new systems put in place are often more political and subjective than the old system. Thoughtful use of a balanced portfolio of metrics can actually improve the quality of tenure review while reducing the stress and administrative burden on PIs.

Monday, Jun 21, 2021

Career trajectory

Today I gave a talk on my career trajectory for the University of Turku, in Finland. Looking back on the things I did right and wrong at different stages of my career, and a little advice for the next generation of early career researchers:

Monday, Jun 21, 2021

My Life in Science

An old talk I gave on my scientific career, with an emphasis on being a parent scientist and on my experience in seeing sexism in action in the academic career pathway:

Tuesday, Apr 13, 2021

Postdoc job opportunity in the lab

Happy to say we have a great job opportunity to join our lab! The position is for a bioinformatics or data science postdoc, based at the Babraham Institute, to lead the data analytics of the Eximious Horizon2020 project. It is an amazing opportunity to unravel the real-world link between environment and immunity, using the largest and most comprehensive datasets yet generated. I welcome applications from thoughtful scientists willing to learn the biology and search for the most appropriate computational tools to apply. Time is provided to learn and develop new skills, so consider applying even if you don't perfectly align with the project. Come join us in Cambridge!

Apply here

Friday, Mar 12, 2021

A cynic's guide to getting a faculty position

I gave an academic career talk yesterday at the University of Alberta, and on request from the students I am putting the talk online. These are my personal thoughts on how the job selection process works for independent research positions in universities or research institutes, based largely on my experience, the experience of my trainees going through the process, and my observations of behind-the-scenes job committee meetings. I am sure that there is enormous variation in experiences, and that systems work differently in different places: hearing the perspective of many people is more valuable than just hearing the perspective of one.

I'd also just note that this is not an endorsement of the system as it exists. There are aspects of the system that I dislike and actively work to change. But I still think it is valuable for job seekers to understand the system, warts and all, rather than believing in an aspirational system that has yet to materialise. I often hear from trainees that their career training is largely directed towards non-academic careers, and that they rarely hear how the academic pathway works. So, with a little too much honesty, and an expectation of landing in hot water, here is my attempt to open a conversation:

Wednesday, Mar 3, 2021

Thesis acknowledgements

It is so lovely to read the words of graduating students in their thesis acknowledgements. I've seen them learn and grow over the years, increase in skill and resiliency, reach depths they didn't know they had. And here they are, just leaving for their new adventure, and they stop to write kind words back to us.

These from (soon to be) Dr Steffie Junius:

Next, I would like to thank my co-promotor Prof. Adrian Liston. While on paper you’re addressed as my ‘co-promotor’, I truly perceived this as rather having two full promotors who both guided me in their own way, complementing each other. I still remember the evening in Boston when I received the email with an offer to start a PhD at your lab. The thrill to be accepted in such an environment of excellent science made me excited to become the best possible immunologist I could be. Throughout this PhD you have guided me with your advice and mentorship. Especially on the dark moments, you always were able to push me in the right direction and to follow through even when I did not know how. As PhD students, we always think the science is the most important part of a PhD, but you made me understand that personal development is just, if not more, important to becoming truly successful. Thank you for your advice and guidance over the years. The lessons you taught me will stay forever with me throughout my career. 

Thank you Steffie, it has been wonderful to be part of your journey. Enjoy the next stage of your career!

Monday, Nov 9, 2020

My career feedback strategy

Part of managing staff and students is to manage their scientific progress. Another aspect is to manage their personal growth and career pathway. Often it is easy to forget the latter, so I make sure that at least once a year I have a formal feedback session on management and careers with everyone in my lab. There are five stages to this, and this year it basically took me two weeks (but this is because I am currently still running two fairly large labs, one in Belgium and one in Cambridge).

Step 1: Anonymous survey of the whole lab. Here I use SurveyMonkey, with a series of questions that allow a quantification of satisfaction in different aspects of lab culture. I focus on questions that measure trust and happiness in the lab, like whether people plan to keep in contact with each other after graduation, how well they feel lab duties are balanced, etc. This is useful to get a bird's eye view of lab culture, which is otherwise biased towards the more vocal lab members. It is important not to get hung up on every negative answer - just because 100% of the lab isn't happy in every aspect doesn't mean you are doing things wrong. Instead it should be more of a comparative indicator. Are people happier with the lab than with the institute, or vice versa? After a couple of years it also lets you do longitudinal comparisons - are problems being fixed after identification? Here is the list of questions that I used this year, and the answers of my Cambridge lab:

My interpretation: when people's biggest complaints are about seminars and journal club, then you have a healthy lab. We are also fortunate that this year there are many options for online seminar series of very high quality, so alternatives are available.

In the survey I also include a section allowing free-form answers to certain questions. It is more biased (few people answer them all), but also carries more information. This year those free-form questions were:

How should we run lab meeting?

How should we run journal club?

How could lab duties be better assigned, and are there new duties that need to be added?

Long-term, what new skills should we look at developing?

Is our science headed in the right direction?

How much productive time did I lose due to COVID?

What new practices, put in place because of the lockdown, should we keep afterwards?

What extra changes should we make for the upcoming six months, to reduce the impact of partial lockdown?

What extra equipment would be nice to have in the lab?

Any other feedback?

Ideally, these would be addressed in the personal feedback (see below), but it is good to have the option for confidential comments.

Step 2: Individual self-evaluation from each lab member. Here I ask everyone to reflect on their strengths and weaknesses, their achievements and ambitions, things that they could have done differently and things that I could have done differently. I generally ask the same questions every year, although this year I had an extra section on how COVID affected them. I make sure to tell people upfront that this is not an official evaluation, it is a self-reflection piece. This is the form I ask them to fill out. This is a really valuable exercise for several reasons:

1) It gives people time to reflect on their past year and the year ahead, and to contemplate their future career

2) The questions are designed to focus around problem-solving, rather than blame assigning. What can you do to improve your chance of achieving next year's goal? What can I do to help you achieve this goal? Simply getting people to consider their own agency can be the push that is needed to solve problems

3) It lets me know what their goals are, for the next year and for their career. The more information I have on where they are going, the more useful my mentoring will be

4) It lets me see how closely aligned their self-evaluation is to my evaluation of them. The biggest management problems arise from unaligned evaluations of skills. If someone is convinced that they are an excellent communicator and you think they are a poor communicator, then that needs to be resolved. Likewise if someone feels like they are behind in their PhD and you think they are ahead of where you expect them to be, that also needs to be resolved. Which brings me to:

Step 3: My written comments on their self-evaluation. Here I go through their evaluation and put down my comments. Where they list their strengths I highlight the ones that I agree with, and I mention strengths that they have forgotten. Where they list their weaknesses I comment on weaknesses that I agree need to be fixed, with a proposed strategy, or I'll explain why I don't think the person is actually weak in that aspect, and perhaps it is more an issue of self-confidence than a real weakness. I'll comment on their key achievements, and mention extras that they may have forgotten. I'll discuss their proposed pathways to improvement, often highlighting just one for them to focus on in the next year (trying to do everything is not a great approach). I'll reply to where they ask for help, either promising that they will have it, or explaining why that particular suggestion is not suitable and proposing an alternative. I'll comment on their career plans, whether or not I think they are on the right track to achieve them, and how they should go about preparing for the next step. I am always honest - I don't see any value in helping a post-doc deceive themselves that they are on the track to independence if they are not - but this does not need to be cruel. It is more about exploring whether or not they actually want to be on that track, explaining what needs to change for them to move onto it, or explaining the alternative track that they may be moving towards without being aware of it. I make it a point to be positive (especially with people who have under-estimated themselves, a more common phenotype than over-estimation). I also make it a point to recognise where my failings contributed, to take responsibility for this and to commit to a change in myself. Even if that is as simple as "I should have stepped in earlier", it leads by example in taking responsibility for your actions.

I like to give written feedback, even though I'll have a face-to-face meeting afterwards. It gives me the time to organise my thoughts. It lets me read and re-read to see if I struck the right tone. It means I go through all the points on the document. It also lets my staff read and re-read the comments. Sometimes things become emotional in feedback meetings, and your perception of what is being said is changed by the emotional context. You focus in on negatives and forget the positives.

Step 4: A face-to-face meeting. Here there is a follow-up meeting. Usually I don't go through the document - we've both seen the self-evaluation and my comments. I insist on no science at this meeting; it is all about them, our relationship and their career. Often I'll focus on just one aspect that I think is the most important. The meetings usually last thirty minutes, sometimes stretching out to two hours. Most common themes:

Junior PhD student, learning what a PhD is. Yes, you are on track. You really are. It is normal that you feel like you are not. Of course you don't know everything you need to know, you are here to learn.

Senior PhD student, looking at their next step. Should I stay for a post-doc? Should I write a fellowship? Should I move to industry? You should make a decision based on interest, not based on fear. If you are more interested in industry, go there. Here is how to start building your industry-entry plan. But don't move to industry because you are scared academia is too tough.

Junior post-doc, scared to ask for help. I know you were on top of your game at the end of your PhD, but that doesn't mean you start from the same place in a new lab on a new topic. Science is constantly learning. You need to communicate. If something isn't working, don't hide it until it works. Talk to me. Failure to talk can make our relationship non-functional, and doesn't help anyone.

Senior post-doc, looking at an independent position. Okay, let's look at the facts. How mobile will you be? What are the options available to you and your family? What are the timelines of applications? How early will you need to send me drafts to have sufficient time to address my feedback? Who can I network you with? What do we need to work on with training sessions?

Expecting parent. Alright, let's be realistic here. It is going to be brutal being a new parent. This was my experience. No, you are not going to be able to get X, Y or Z done while on parental leave. Organise everything and we'll get someone else to cover you - but it is up to you to organise things in advance. Samples, folder structure, design of experiments - they need to be able to access everything. When do you get back? Again, let's be realistic and assume you are functioning at 50% productivity for the year after that - anything extra will be a pleasant surprise. Better to finish one thing than leave ten partially completed. Make sure to establish good equal co-parenting from day one!

Super-scientist with crippling self-doubt. You are great, you really are. I know that it is hard to see your success in yourself. I spend half my time in a state of career anxiety, even after a great paper comes out. Sometimes it is just hard to trust your own judgement, and science constantly focuses in on the negatives. If you can't trust your judgement at the moment, trust mine. You're great. 

Step 5. Follow-up! Meetings need actions and behavioural changes to follow. Follow up with them, make sure that they are putting their actions into place. Follow up on yourself, check that you are meeting your own commitments. Check in with them as to whether their goals are changing, especially after big events (that confidence boost from a publication might make them reconsider academia, that tech-transfer conference might have swayed them towards industry). Your relationship with your lab is a work in progress, not a tick-box once a year.

Saturday, Oct 10, 2020

The ingredients for a successful lab

Trying to reflect on what constitutes a successful lab, these are the 11 ingredients that I work towards bringing together:

A diverse set of experienced staff. Junior staff come in with a passion and enthusiasm that is second to none. However, they are also all being trained in the same environment. By contrast, post-docs and senior technicians have been trained in different environments, so they bring with them novel experiences. Having a mixture of staff at different levels and with different educational and life backgrounds optimises the chance that the key idea or skill set will be available. Having at least a few staff members with a long-term perspective in the lab is one of the most potent advantages a lab can have - it means the institutional knowledge is shared between multiple staff, not all residing in the PI.

A dynamic and supportive lab culture. A successful lab is one with high morale, where people see that effort leads to results. The lab culture should be interactive and supportive. A community feeling, where everyone will jump in to get a project over the line, is critical. A place where everyone feels open to speak up and can live with being criticised is a place where experimental design can be optimised before hitting the bench. A healthy lab is one where the PI is only one voice, and there is just as much peer-to-peer flow of information and ideas.

Output spread across the lab. If the output is concentrated in a handful of people it is suggestive of wasted potential, and puts the lab at risk when the productive people move on. Ideally, every researcher should be getting a first author paper every 3 years.

A healthy portfolio of funding. Ideally this includes a mixture of small and large grants, with a long horizon. The reason why I specify a portfolio is that having all of your funding via one large grant creates a difficult problem when that grant is ending.

A pipeline of research projects. A strong research pipeline includes having high potential projects in the incubation stage, development stage, submission/review stage and published. It can be difficult to manage a pipeline, because you need to switch gears between different projects that need different styles of management and cost/benefit analysis. However the advantage is that there is always something cooking, so it doesn't create the problem of synchronised publication and then a long research gap while you start from scratch.

Balance of diversity in research projects. Focus on a topic gives synergy between projects at both the technical and intellectual level. Diversity of topics brings opportunity and reduces risk. Finding the sweet-spot between focus and diversity is difficult but brings advantages.

Creativity and innovation. A successful lab does research that isn't being done somewhere else. This means creativity and innovation, rather than doing the next obvious thing a little faster than the competition. This can come in different forms: developing new tools to answer questions other people can't, coming up with creative approaches that other groups haven't thought of, or simply asking different questions.

A reserve of soft money. "Soft money", not tied to a project or time-limited, is precious and difficult to obtain. The advantages are enormous though, allowing investments that later lead to grants. A key advantage is that a reserve of soft money can be used to buffer long-term senior staff between grants. Knowing that you can fund senior staff even if there is a year gap between grants helps you keep the most essential staff in the lab - even if you never need to actually use the reserve.

Quality collaborations. A balance between working in isolation and acting as an academic CRO for other labs. Quality collaborations are usually reflected through bidirectional help, where they contribute to your work and you contribute to their work.

Access to high-end equipment and facilities. High level science is increasingly dependent on high level equipment and specialist staff, beyond what can be built and maintained in a single lab.

Supportive institutional and administrative staff. All the ingredients can be there, but if the departmental head is against you or admin works against you, the lab can be crippled. A group leader spending >50% of their time on admin, or research staff spending >25% of their time on admin, is a warning sign.

Friday, Aug 7, 2020

Unpopular opinion: the scientific publication system is not the problem

Scientific publishing is undergoing radical change. Nothing surprising there: scientific publishing has been constantly evolving and constantly improving. Innovation and change are needed to improve, although not all innovations end up being useful. I'm on record as saying that the DORA approach, for example, is ideologically well-meaning, but so little consideration has been given to the practicalities that the implementation is damaging. Open access is another example: an excellent ambition, however the pay-to-publish model used for implementation turbo-charged the fake journal industry.

I am glad that we have advocates pushing on various reforms to publishing: pre-print, open-access, retractions, innovations in accreditation, pre-registration, replication journals, trials in blind reviewing, publishing reviews, etc. The advocates do seem, to me, to have far too much belief that their particular reform is critical and often turn a blind eye to the potential downsides. That is also okay: the system needs both passionate advocates and dubious skeptics in order to push changes, throw out the ones that don't work and tweak the ones that do work in order to get the best cost/benefit ratio of implementation.

Fundamentally, though, the publication system is not broken. Oh, it is certainly flawed and improvements are needed and welcomed. But even if every flaw was fixed (which is probably impossible: some ambitions in publishing are at heart mutually contradictory) I don't think it will have the huge benefits that many advocates assume. Because at the heart of it, the problem is not the publication system, but the other systems that publishing flows into.

Let's take two examples:

  • Careers. Probably the main reason why flaws in the publishing system drive so much angst is that scientific publication is the main criterion used in awarding positions and grants. So issues with prestige journals, impact factors and so forth have real implications that damage people's lives and destroy careers. DORA is the ambition not to do that, without offering an alternative solution. Perhaps one day we will find a better system (I happen to believe it lies in improving metrics, and valuing a basket of different metrics for different roles, not in pretending metrics don't exist). But even a perfect system (again, probably impossible) won't fix the issue of career anxiety. Because in the end the issue is that the scientific career structure is broken: it is under-funded, built on short-term perspectives, and operates on the pressure-cooker approach to milking productivity out of people until they break. From a broader perspective, the scientific career structure is not operating in a vacuum - it is part of a capitalist economy which again fuels these anxieties. Why are people so worried about losing their place in the academic pipeline? Because in our economy changing careers is really, really scary. Fixing publishing doesn't actually fix any of those downstream issues.
  • Translation. The other issue frequently raised by advocates for publication change comes from people who are involved in translation, usually commercialisation or medical implementation. Let's take the example of drug discovery. You don't need to go far in order to find people yelling about the "reproducibility crisis" (although the little data they rely on is, ironically enough, not especially reproducible) or mouse-to-human translation issues. It would be great if every published study was 100% reproducible and translatable, although I'm rather sanguine about errors in the literature. There is always a trade-off between speed and reproducibility, and I am okay with speed and novelty being prioritised at the start of the scientific pipeline as long as reproducibility is prioritised at the end. Initiatives to improve what is published are welcome, but flawed publications on drug discovery are only a problem because they feed into a flawed drug development system. Big pharma uses a system where investments are huge and the decision process is rushed, with the decision-making authority invested in a handful of people. The structure of our intellectual property system rewards decisions made early on incomplete information: snap judgements need to be made too early in the development process. This system will create errors and waste money. More importantly, perhaps, it will also miss opportunities. A medicine slowly developed in the public domain via collaborating experts may be entirely unviable commercially and never reach patients.
So I agree that scientific publishing is flawed, and improvements can and should be made. Unlike some, however, I don't see journals and editors as the enemy - I see them actively engaged in improvements. Like science itself, scientific publishing will improve slowly but steadily, with a few false leads and some backtracking needed. I am perhaps just too cynical to believe that "fixing" publishing will change science the way some advocates state: the problems have a deeper root cause at their heart.

Thursday, Jun 25, 2020

Training the PhD supervisors

I just completed another "training the PhD supervisors" course, in anticipation of my first Cambridge PhD students. I have a few thoughts on training supervisors, but first my credentials and context: 

1. Unlike most science professors, I took formal training in higher education, through a two-year part-time Graduate Certificate program, and have published on PhD training.

2. 26 PhD students as supervisor (16) or co-supervisor (10). Of these, 18 graduations, 6 students still in progress and 2 drop-outs. Some easy experiences, where the students flew through. Some wonderful experiences, where I really got to help the student grow and flourish. Some steep learning curves, where the student and I took longer to get it together, but ultimately we both learned from the experience and the student succeeded. Some nightmares, that had me on the edge of quitting and occasionally still give me insomnia. I am a better supervisor today than I was 10 years ago, and hopefully I will be a better PhD supervisor in 10 years than I am today.

3. I see the PhD as a program where you create the environment that gives the student the opportunity to grow. This is difficult, since it involves understanding the student and pushing them just the right amount to stimulate them without intimidating them. The PhD for me is a highly versatile program, and I am happy for it to steer towards many different outcomes based on what the student is aiming for (academia, industry, etc).

So, my thoughts on training programs for PhD supervisors

First, they are necessary. The messages end up being fairly simple. Remember your PhD student is a person as well as a student. Learn that your student has different needs and expectations than you did as a PhD student. Learn to listen to their expectations, learn to be explicit in your expectations, be prepared to discuss and compromise. Document and revisit discussions. Learn the boundaries of reasonable expectations on both sides. Learn when to bring in extra help, and learn where that help can come from. While these messages are simple, for many PhD supervisors it will be the first time they've explicitly heard them, and often new supervisors rely excessively on the lessons of their own n=1 PhD.

This is the raison d'être of these training programs, and the central work is typically done well. There are several common failings, however:

1. Pedagogy has a teaching problem. Education is an advanced academic field, with a highly specialised language, just like other fields. Unfortunately, many education experts use this language when training PhD supervisors. It is a major turn-off, especially for STEM academics, for whom even common humanities terms can be opaque or downright mystifying. Most supervisors are going to get less than one undergrad credit's worth of education training - the use of specialist language is unnecessary and a barrier to concept uptake. I fully acknowledge that STEM disciplines have the same language barrier. I hope that one day there is a concerted effort to bring knowledge from STEM into humanities - and at that point we will need to learn the language of humanities to communicate effectively. But during supervisor training the onus is clearly on the trainer to use discipline-neutral language.

2. Humanities and STEM are just too different. The PhD programs are so different in style, outcome and supervision that examples and advice end up being either so generic as to be of little value, or jarring completely with one of the fields. Just split these training courses into humanities and STEM, replicate the common content and specialise the field-specific content.

3. Supervisor training programs are too reactionary. A common mistake for new supervisors is to focus on correcting problems that they experienced during their own PhD. It can result in them being blindsided by different challenges. Ironically, the very classes that teach this are often guilty of the same problem. These courses are designed around the failings of current senior faculty. It is almost "what do we wish our senior lecturers had been taught 20 years ago?" in content and context. In STEM, the biggest failure in the senior supervisor population is the "sink or swim" mentality, which essentially assumes that any student who struggles is not cut out for a PhD (i.e., the failure is entirely in the student). This is demonstrably incorrect and propagates major problems of inequality. However, while this flaw is common in senior supervisors, it is becoming extremely rare in junior supervisors. When given problem examples, junior supervisors tend to first assume the failures are entirely in the supervisor. I have seen more issues arise from junior supervisors trying to be a friend to their students, or over-committing their time to a single student, than I have from junior supervisors neglecting their students. This is not to say that neglect is not a problem - it is, and needs to be addressed. However, training courses for junior supervisors should better reflect the problems that are common in junior supervisors.

4. Training programs are less valuable because they are siloed. This training is focused on the well-being of the student, and is essentially dedicated entirely to situations where the student has a problem that can be fixed by behaviour-change in the supervisor. We know, however, that junior faculty are under enormous stress, rife with anxiety. One of the biggest sources of stress can be the very rare cases of problem students. This situation, of a problem that requires behaviour-change in the student, is almost entirely neglected in supervisor training. We are trying to fix one side of the equation in this training, and the other side is often entirely neglected or dealt with in a generic "stress resilience" training course (which also assumes the flaw is in the faculty not being able to deal with the stress). What we need is integrated training. Pitch us the same problem scenario twice, but with different missing context. Walk through the problem scenario with missing context A, where you need to change. Walk through the problem scenario with missing context B, where the student needs to change. Discuss how to identify developing problems, how to reflect on whether you are dealing with a context A or context B issue, and what practical steps to take in each context. I really dislike the problem scenarios where we are expected to take a one-paragraph description at face value - real lab problems are never that simple, and always involve looking at a problem from multiple perspectives. Real solutions always involve trade-offs. Let's not pretend to junior supervisors that they will be in a situation where they can just invest limitless time - there need to be hard barriers to stop work-life imbalance on their side. Let's also not pretend that a supervisor-student relationship exists in isolation - it has impacts on the entire lab, and trade-offs are always required. Perhaps this comes from a STEM vs humanities divide, but I see the concept of the team/lab almost entirely neglected in problem scenarios and trouble-shooting.

Finally, a little self-reflection. I would give this particular training course a 9/10 - probably the best I've been through. And yet 90% of what I wrote is a criticism. Occupational hazard? I think in STEM we move very quickly on from the success to trying to fix the failures. I know that when I run evaluations I need to force myself to stop, and say "well done on X, Y and Z. These are important. Congratulations. Now let's talk about A, B and C, which need some improvement...... Again, well done on X, Y and Z."