
Friday, June 24, 2011

FWO post-doctoral fellowship awarded to Bénédicte Cauwe

This week it was announced that Dr Bénédicte Cauwe won an FWO post-doctoral fellowship to perform research in the Autoimmune Genetics Laboratory. Dr Cauwe recently finished her PhD in the laboratory of Professor Ghislain Opdenakker at the Rega Institute and will continue her research on systemic lupus erythematosus at the Autoimmune Genetics Laboratory.

Tuesday, June 21, 2011

Academic independence

What is academic independence?

In the mind of many a post-doc it is quite simple: it is the freedom you gain when you step up from being a post-doc to becoming a faculty member. As a post-doc, your principal investigator has the final say over your research program; as a faculty member, you are the principal investigator.

It seems straightforward, but in practice the distinction can be quite blurred. As a senior PhD student in the Goodnow laboratory I effectively had academic independence. My principal investigator had funding and placed trust in me, so I could run my research more or less independently. Hopefully the PhD students in our laboratory feel the same way. Could I have done any hare-brained project I wanted to? Certainly not; it had to be within reason. But my research interests were aligned with those of my mentor, so in effect I had the independence to pursue the research I wanted to pursue.

This is not qualitatively different from the academic independence I have now as a faculty member. Yes, I can choose the research program that I want to pursue, but again the "within reason" proviso applies. I no longer have a faculty member above me acting as the final arbiter, but there are still limitations. The most obvious limitation is the grant review process. If I want to do an experiment I require funding, which means my research aims must be in line with the priorities of the granting body and approved by a panel of experts. Then of course, as junior faculty, I will have a jury overseeing my renewal. These juries invariably have something to say about the direction of your science - your research interests are too broad or too narrow, you are spending too much or too little time on collaborative ventures, and so on. In the modern "big science" era, your colleagues and collaborators form another constraint - you may need to negotiate for time on certain equipment or access to particular samples.

Some of these constraints may be reduced over time, but unless you are a Nobel Prize winner with guaranteed block funding for life, there will always be some limitations to academic independence. Perhaps the biggest difference in academic freedom between a post-doc and a faculty member is the diffusion and immediacy of responsibility. As a post-doctoral fellow, the limitations on your research are concentrated in a single person who can have immediate impact - a particular line of research can be shut down today with a single decision. As a faculty member, by contrast, the limitations on your research are delayed and the decision-making capacity is diluted across a plethora of juries. If one grant foundation chooses not to support your work, another (with a distinct jury) may, and there are often avenues for pursuing research for some months or even years without direct funding.

So rather than the qualitative leap in academic independence that a faculty position represents to some, perhaps it is more accurate to think of a gradual shift in responsibility. Someone moving from a post-doctoral position in a restrictive laboratory to a well-funded start-up faculty position will feel an enormous leap in academic freedom. But for others, such as a senior post-doc in a rich laboratory supervised with benign neglect, entry into a world of constant grant review may even result in a loss of freedom to pursue their research interests.

Thursday, April 7, 2011

IRO fellowship won by Dina Danso-Abeam

Today it was announced that Ms Dina Danso-Abeam in the Autoimmune Genetics Laboratory was awarded an IRO fellowship to perform research towards her PhD. 

Saturday, March 26, 2011

An alternative model for peer review

There is no doubt that the current model of peer review is an effective but inefficient system. The high quality of publications that complete peer review is a testament to its effectiveness, as poor papers rarely get accepted in well-reviewed journals. However, the efficiency of the review system is very low.

Consider that the highest ranked journals have acceptance rates of around 10%, and even the middle-ranked journals have acceptance rates of less than 50%. Most papers get published sooner or later, but with the career reward of publishing in high impact factor journals, it is not unusual for a publication to get rejected four or five times as the authors work their way down the journal ranking list. Considering that each round of review generally involves three reviewers, a single paper that had a tough time could consume the (unpaid) time of fifteen reviewers before it is finally accepted. This is an enormous burden on the scientific community, and a largely wasted one - after all, each journal editor only gets to see three of those fifteen reviews when deciding whether to accept or decline an article. It also considerably slows down the dissemination of information, as it is not unusual for the entire review process to consume a year or more.

So let's consider an alternative model for peer review, one which keeps the critical aspects that provide effectiveness, but which changes the policies that produce inefficiency. Consider a consortium of four or five publishers, which between them might include 20 journals that publish papers on immunology. Rather than authors submitting to individual journals, they would submit to a centralised editorial staff, paid for by the publishers but independent of each journal. An immediate advantage would be the ability to have many more specialised editors available, allowing for better decisions in choosing and assessing the reviews.

Each paper would then be sent out to five or six reviewers, and the reviews would be made available to each of the journals. The editorial staff at each journal would assess the paper and put forward a decision to accept, conditionally accept or decline it. This information would be transmitted back to the consortium and provided to the authors. The authors would then be able to choose which offer to accept. In effect, each journal would be making a blind offer to the authors to publish their paper, with full knowledge of the reviews but without knowing whether the other journals had put in a bid.

Consider the benefits of this alternative model to each player:

1. The journal gets to judge on more complete information, with double the number of reviews available for each paper, selected by more specialised editorial staff.

2. The reviewing community will more than halve the number of reviews required, while actually providing more information to the journals (see the sketch after this list for the arithmetic).

3. The authors will no longer have to make strategic decisions in choosing where to submit; they will simply submit to the consortium and have the option to publish in the top ranked journal that is interested in the paper.

4. The scientific community will have access to cutting-edge research months or even years earlier than under the current system.
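A rough back-of-the-envelope comparison of reviewer workload under the two systems, using the figures quoted above (three reviewers per submission round and a worst case of four or five rejections, versus a single centralised round of five or six reviewers). A minimal sketch; the numbers are the illustrative assumptions from this post, not measured data.

```python
# Reviewer workload per paper under the two systems, using the
# illustrative figures from the post (not measured data).

def sequential_reviews(rounds: int, reviewers_per_round: int = 3) -> int:
    """Current model: every submission round recruits a fresh set of reviewers."""
    return rounds * reviewers_per_round

def consortium_reviews(reviewers: int = 6) -> int:
    """Proposed model: a single centralised round shared by all journals."""
    return reviewers

# A paper rejected four times and accepted on the fifth submission:
current = sequential_reviews(rounds=5)      # 15 reviewers consumed
proposed = consortium_reviews(reviewers=6)  # 6 reviewers consumed

print(f"Current system:    {current} reviews")
print(f"Consortium system: {proposed} reviews")
print(f"Reviewer workload saved: {100 * (current - proposed) / current:.0f}%")
```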

Thursday, January 6, 2011

The verdict on Andrew Wakefield: Fraud

In 1998 Andrew Wakefield published a paper which has severely damaged public health in the years since. Based on his observations of only twelve children, nine of whom he claimed had autism, and without a control group, he concluded that the measles/mumps/rubella vaccine caused autism. As a hypothesis, this was fine: unlikely, but not impossible. He saw nine children with autism, reported that their parents linked the onset with the MMR vaccine, and put it in the literature. Why on earth an underpowered observation like this made it into the Lancet is beyond me, but there is nothing wrong with even outlandish hypotheses being published in the scientific literature. Was it a real observation, or just an effect of a small sample size? Was it a causative link, or just a coincidence of timing?

As with any controversial hypothesis, after this one was published a large number of good scientists went out and tested it. It was tested over and over and over again, and the results are conclusive - there is no link between the MMR vaccine and autism.

In itself, this was of no shame to Andrew Wakefield. Every creative scientist comes up with multiple hypotheses that end up being wrong. People publish hypotheses all the time, then disprove them themselves or have them disproven by others. If you can't admit being wrong, you can't do science, and it is in fact the mark of a good scientist to be able to generate hypotheses that others seek to knock down. Ten of the thirteen authors on the study were able to see the new data and renounce the hypothesis.

The shame to Andrew Wakefield is not that his hypothesis was wrong. No, the shame he has brought upon himself comes from being unscientific, unscrupulous and unethical:

  1. Firstly, Wakefield did not present his paper as a hypothesis generator, to be tested by independent scientists. Instead he went straight to the media and made the outrageous claim that his paper was evidence that the MMR vaccine should be stopped. This is not the way science or medicine works, and it was a conclusion unsupported by the data. Worst of all, it was a conclusion that many parents without scientific training were tricked into believing. Vaccination rates for MMR went down (autism rates have remained unchanged) and children started dying again of easily preventable childhood diseases. A doctor does not see half a dozen children who developed leukemia after joining a football team and then hold a press conference telling parents that playing sport causes cancer in children - yet that is the direct equivalent of Wakefield's actions.
  2. Secondly, it has now been conclusively demonstrated that his original data was fraudulent. Interviews with the parents of the original nine children with autism show that he faked much of the data on the time of onset, taking cases where autism started before the MMR vaccine and reversing the dates to suggest that the vaccine triggered the autism. Analysis of the medical records of these children shows that, as well as the timing being incorrect, many of the symptoms were simply fabricated. The evidence on this charge alone makes Wakefield guilty of professional misconduct and criminal fraud.
  3. Thirdly, unknown to the coauthors of the study and the parents of the children, Wakefield had a financial conflict of interest. Before the study had begun, Wakefield had been paid £435,643 to find a link between vaccines and disease as part of a lawsuit. Every scientist must disclose their financial interests when publishing so that possible conflicts are known - Wakefield did not. If he had disclosed this at the press conferences, the media might have been slightly more skeptical about his outlandish claims.

These last two issues, scientific misconduct and financial conflict of interest, are the reason why the paper was formally retracted by the Lancet. Studies that are wrong don't get retracted; they just get swamped by correct data and gradually forgotten. Instead, the retraction indicates that the Wakefield paper was fraudulent and should never have been published in the first place. Likewise, the British General Medical Council investigated the matter, found that Wakefield "failed in his duties as a responsible consultant" and acted "dishonestly and irresponsibly", and struck him off the medical register.

The worst part about this sorry affair is that it is still dragging down vaccination rates. Literally hundreds of studies, with a combined cohort size of a million children, have found no link between the MMR vaccine and autism, yet one fraudulent and retracted study of nine children is still talked about by parents. Some parents are withholding this lifesaving medical treatment from their children, and their good intentions do nothing to mitigate the fact that cases of measles and mumps are now more than 10 times more likely than they were in 1998, and confirmed deaths have resulted. And Andrew Wakefield, the discredited and disbarred doctor who started it all? He is making big money in the US by selling fear to worried parents, and deadly disease to children who have no say in it at all.



Tuesday, December 14, 2010

The Autoimmune Genetics Laboratory in 2010

All the members of the Autoimmune Genetics Laboratory, at our end of year dinner.

Wednesday, October 13, 2010

The historic quandary of antibody production

The mechanism by which antibodies were formed was once one of the oldest and most perplexing mysteries of immunology. The properties of antibody generation, with the capacity of the immune system to generate specific antibodies against any foreign challenge – even artificial compounds which had never previously existed – defied the known laws of genetics.

Three major models of antibody production were proposed before the correct model was derived. The first was the “side-chain” hypothesis put forward by Ehrlich in 1900, in which antibodies were essentially a side-product of a normal cellular process (Ehrlich 1900). Rather than a specific class of proteins, antibodies were just normal cell-surface proteins that bound their antigen merely by chance, and the elevated production in the serum after immunisation was simply due to the bound proteins being released by the cell so that a functional, non-bound, protein could take its place. In this model antibodies “represent nothing more than the side-chains reproduced in excess during regeneration and are therefore pushed off from the protoplasm”.

 

Figure 1. The “side-chain” hypothesis of antibody formation. Under the side-chain hypothesis, antibodies were normal cell-surface molecules that by chance bound antigens (step 1). The binding of antigen disrupted the normal function of the protein so the antigen-antibody complex was shed (step 2), and the cell responded by replacing the absent protein (step 3). Notably, this model explained the large generation of specific antibodies after immunisation, as surface proteins without specificity would stay bound to the cell surface and not require additional production. The model also allowed a single cell to generate antibodies of multiple specificities.

 

The “side-chain” model was replaced by the “direct template” hypothesis of Haurowitz in 1930. Under this alternative scenario, antibodies were a distinct class of proteins but with no fixed structure. The antibody-forming cell would take in antigen and use it as a mould on which to cast the structure of the antibody (Breinl and Haurowitz 1930). The resulting fixed-structure protein would then be secreted as an antigen-specific antibody, and the antigen reused to create more antibody. The “direct template” hypothesis was preferred over the “side-chain” hypothesis because it explained the enormous potential range of antibody specificities and the biochemical similarities between them, but it lacked any mechanism to explain immunological tolerance.

 

Figure 2. The “direct-template” hypothesis of antibody formation. The direct-template hypothesis postulated that antibodies were a specific class of proteins with highly malleable structure. Antibody-forming cells would take in circulating antigen (step 1) and use this antigen as a mould to modify the structure of antibody (step 2). Upon antibody “setting”, the fixed structure antibody was released into circulation and the antigen cast was reused (step 3). In this model specificity is cast by the antigen, and a single antibody-producing cell can generate multiple different specificities of antibody. 

 

A third alternative model was put forward by Jerne in 1955 (Jerne 1955). The “natural selection” hypothesis is, in retrospect, quite similar to the “clonal selection” hypothesis, but uses the antibody, rather than the cell, as the unit of selection. In this model healthy serum contains minute amounts of all possible antibodies. After exposure to antigen, those antibodies which bind the antigen are taken up by phagocytes, and the antibodies are then used as templates to produce more antibodies (the reverse of the “direct template” model). As with the “direct template” model, this hypothesis was useful in explaining many aspects of the immune response, but strikingly failed to explain immunological tolerance.

 

Figure 3. The “natural selection” hypothesis of antibody formation. The theoretical basis of the natural selection hypothesis is the presence in the serum, at undetectable levels, of all possible antibodies, each with a fixed specificity. When antigen is introduced it binds only those antibodies with the correct specificity (step 1), which are then internalised by phagocytes (step 2). These antibodies then act as a template for the production of identical antibodies (step 3), which are secreted (step 4). As with the clonal selection theory, this model postulated fixed-specificity antibodies; however, it allowed single cells to amplify antibodies of multiple specificities.

 

When Talmage proposed a revision with more capacity to explain allergy and autoimmunity in 1957 (Talmage 1957), Burnet immediately saw the potential to create an alternative cohesive model, the “clonal selection model” (Burnet 1957). The elegance of the 1957 Burnet model was that by maintaining the basic premise of the Jerne model (that antibody specificity exists prior to antigen exposure) and restricting the production of antibody to at most a few specificities per cell, the unit of selection becomes the cell. Critically, each cell will have “available on its surface representative reactive sites equivalent to those of the globulin they produce” (Burnet 1957). This would then allow only those cells selected by specific antigen exposure to become activated and produce secreted antibody. The advantage of moving from the antibody to the cell as the unit of selection was that concepts of natural selection could then be applied to cells, both allowing immunological tolerance (deletion of particular cells) and specific responsiveness (proliferation of particular cells). As Burnet wrote in his seminal paper, “This is simply a recognition that the expendable cells of the body can be regarded as belonging to clones which have arisen as a result of somatic mutation or conceivably other inheritable change. Each such clone will have some individual characteristic and in a special sense will be subject to an evolutionary process of selective survival within the internal environment of the cell.” (Burnet 1957)

 

Figure 4. The “clonal selection” hypothesis of antibody formation. Unlike the other models described, the clonal selection model limits each antibody-forming cell to a single antibody specificity, which presents the antibody on the cell surface. Under this scenario, antibody-forming cells that never encounter antigen are simply maintained in the circulation and do not produce secreted antibody (fate 1). By contrast, those cells (or “clones”) which encounter their specific antigen are expanded and start to secrete large amounts of antibody (fate 2). Critically, the clonal selection theory provides a mechanism for immunological tolerance, based on the principle that antibody-producing cells which encounter specific antigen during ontogeny would be eliminated (fate 3).

 

It is important to note that while the clonal selection theory rapidly gained support as explaining the key features of antibody production, for decades it remained a working model rather than a proven theory. Key support for the model was generated in 1958, when Nossal and Lederberg demonstrated that each antibody-producing cell has a single specificity (Nossal and Lederberg 1958); however, a central premise of the model remained pure speculation: the manner by which sufficient diversity in specificity could be generated such that each precursor cell would be unique. “One aspect, however, should be mentioned. The theory requires at some stage in early embryonic development a genetic process for which there is no available precedent. In some way we have to picture a “randomization” of the coding responsible for part of the specification of gamma globulin molecules” (Burnet 1957). Describing the different theories of antibody formation in 1968, ten years after the original hypothesis was put forward, Nossal was careful to add a postscript after his support of the clonal selection hypothesis: “Knowledge in this general area, particularly insights gained from structural analysis, are advancing so rapidly that any statement of view is bound to be out-of-date by the time this book is printed. As this knowledge accumulates, it will favour some theories, but also show up their rough edges. No doubt our idea will seem as primitive to twenty-first century immunologists as Ehrlich’s and Landsteiner’s do today.” (Nossal, 1969).

It was not until the research of Tonegawa, Hood and Leder that the genetic principles of antibody gene rearrangement were discovered (Barstad et al. 1974; Hozumi and Tonegawa 1976; Seidman et al. 1979), rewriting the one-gene-one-protein law of genetics and providing a mechanism for the most fragile of Burnet’s original axioms. The Burnet hypothesis, more than 50 years old and still the central tenet of the adaptive immune system, remains one of the best examples in immunology of the power of a good hypothesis to drive innovative experiments.

 

References

Barstad et al. (1974). "Mouse immunoglobulin heavy chains are coded by multiple germ line variable region genes." Proc Natl Acad Sci U S A 71(10): 4096-100.

Breinl and Haurowitz (1930). "Chemische Untersuchung des Präzipitates aus Hämoglobin und Anti-Hämoglobin-Serum und Bemerkungen über die Natur der Antikörper." Z Physiol Chem 192: 45-55.

Burnet (1957). "A modification of Jerne's theory of antibody production using the concept of clonal selection." Australian Journal of Science 20: 67-69.

Ehrlich (1900). "On immunity with special reference to cell life." Proc R Soc Lond 66: 424-448.

Hozumi and Tonegawa (1976). "Evidence for somatic rearrangement of immunoglobulin genes coding for variable and constant regions." Proc Natl Acad Sci U S A 73(10): 3628-32.

Jerne (1955). "The Natural-Selection Theory of Antibody Formation." Proc Natl Acad Sci U S A 41(11): 849-57.

Nossal and Lederberg (1958). "Antibody production by single cells." Nature 181(4620): 1419-20.

Nossal (1969). Antibodies and immunity.

Seidman et al. (1979). "A kappa-immunoglobulin gene is formed by site-specific recombination without further somatic mutation." Nature 280(5721): 370-5.

Talmage (1957). "Allergy and immunology." Annu Rev Med 8: 239-56.

Friday, August 13, 2010

2010's worst failure in peer review

Even though it is only August, I think I can safely call 2010's worst failure in the peer review process. Just as a sampler, here is the abstract:

Influenza or not influenza: Analysis of a case of high fever that happened 2000 years ago in Biblical time

Kam LE Hon, Pak C Ng and Ting F Leung

The Bible describes the case of a woman with high fever cured by our Lord Jesus Christ. Based on the information provided by the gospels of Mark, Matthew and Luke, the diagnosis and the possible etiology of the febrile illness is discussed. Infectious diseases continue to be a threat to humanity, and influenza has been with us since the dawn of human history. If the postulation is indeed correct, the woman with fever in the Bible is among one of the very early description of human influenza disease.

If you read the rest of the paper, it is riddled with flaws at every possible level. My main problems with this article are:

1. You can't build up a hypothesis on top of an unproven hypothesis. From the first sentence it is clear that the authors believe in the literal truth of the Bible and want to draw conclusions from the Bible without bringing in any natural evidence. What they believe is their own business, but if they don't have any actual evidence to bring to the table they can't dine with scientists.

2. The discussion of the "case" is completely nonsensical. The authors rule out any symptom that wasn't specifically mentioned in the Bible ("it was probably not an autoimmune disease such as systemic lupus erythematousus with multiple organ system involvement, as the Bible does not mention any skin rash or other organ system involvement") because medical observation was so advanced 2000 years ago. They even felt the need to rule out demonic influence on the basis that exorcising a demon would be expected to cause "convulsion or residual symptomatology".

This really makes me so mad. The basis for getting published in science is really very simple - use the scientific method. The answer doesn't have to fit dogma or please anyone, but the question has to be asked in a scientific manner. How on earth did these authors manage to get a Bible pamphlet past what is meant to be rigorous peer review? Virology Journal is hardly Nature, but with an impact factor of 2.44 it is at least a credible journal (or was, until this catastrophe). At least the journal has apologised and promised to retract the paper:

As Editor-in-Chief of Virology Journal I wish to apologize for the publication of the article entitled ''Influenza or not influenza: Analysis of a case of high fever that happened 2000 years ago in Biblical time", which clearly does not provide the type of robust supporting data required for a case report and does not meet the high standards expected of a peer-reviewed scientific journal.

Okay, Nature has also made some colossally stupid mistakes in letting industry-funded pseudo-science into its pages, but in the 21st century you would hope that scientific journals would be able to tell the difference between evidence-based science and faith-based pseudo-science.

Tuesday, July 27, 2010

Juvenile Diabetes Research Foundation

Good news in funding appears to come in pairs. The Juvenile Diabetes Research Foundation is supporting the Autoimmune Genetics Laboratory through a Career Development Award. This is a grant that I am particularly happy to receive, not just for the science that will come out of it, but because I have been a long-time admirer of the JDRF, who tirelessly raise money for research on type 1 diabetes. They are not only the leading sponsor of type 1 diabetes research (spending over $1.4 billion on research since 1970), but also take an active role in coordinating researchers and integrating patients into trials to ensure that the best results come from the money spent. As a PhD student with Chris Goodnow, I always joined in the Walk for the Cure fundraiser, and JDRF sponsored my conference travel to the International Immunology Congress in 2004.

Now the JDRF is supporting our research project on the contribution of non-hematopoietic defects to autoimmune diabetes:

The Non-obese diabetic (NOD) mouse is one of the best studied models of common autoimmune disease in humans, with the spontaneous development of autoimmune diabetes. Similar to the way multiple autoimmune diseases run in families of diabetic patients, the NOD mouse strain is also susceptible to multiple autoimmune diseases, with specific disease development depending on slight alterations in the environment and genetics. These results demonstrate the complexity of autoimmune genetics – in both human families and inbred mouse strains there appear to be a subset of genetic loci that skew the immune system towards dysfunction and an additional subset of genetic loci that result in this immune damage affecting a particular target organ. In the case of NOD mice and type 1 diabetic patients these additional genetic factors result in damage to the beta islets of the pancreas. While the previous emphasis on type 1 diabetes was strictly on the immune system, this model suggests the important role the pancreas may play in the disease process. If certain individuals harbour genetic loci that increase the vulnerability of pancreatic islets to immune-mediated damage, the combination of immune and pancreatic loci could provoke a pathology not caused by either set of genes alone.

Current approaches to genetic mapping in both mice and humans are confounded by the large number of small gene associations and are not able to discriminate between these functional subsets of genetic loci. However, we have developed an alternative strategy for functional genetic mapping. Instead of mapping diabetes as the sole end-point, with small genetic contributions by multiple genes, we map discrete functional processes of diabetes development. This has three key advantages. Firstly, as simpler sub-traits there are fewer genes contributing, each with larger effects, making mapping to particular genes more feasible. Secondly, by mapping a functional process within diabetes we start out with functional information for every gene association we find. Thirdly, by mapping a series of functional processes and then building up this genetic information into diabetes as an overall result we gain a more comprehensive view of diabetes, as a network of genetic and environmental influences that cause disease by influencing multiple systems and processes.

In this project we propose to use the functional genetic mapping approach to probe the role of the pancreatic beta islets in the development of diabetes in NOD mice. We have developed a transgenic model of islet-specific cellular stress which demonstrates that NOD mice have a genetic predisposition towards increased vulnerability of the pancreatic islets to death, and hence towards the development of diabetes. This is a unique model to analyse the genetic, cellular and biochemical pathways that can be altered in the pancreas of diabetes-susceptible individuals, shedding light on the role the beta islets play in the development of disease.

Saturday, July 24, 2010

A breakthrough for HIV prevention?

This week a breakthrough for HIV prevention was announced in Science. AIDS researchers in South Africa have just completed a long-term study of Tenofovir Gel, and found that the gel, inserted into the vagina before sex, gives women roughly 40% protection against HIV. With 900 women followed up for 30 months, the results look very solid, and potentially even better than the headline figure of 39% protection. As with all such studies, the protection rate given is for average usage, not ideal usage. The average study participant only used the gel for ~75% of sexual intercourse occasions. For the "high adherers", the group using the vaginal gel for >80% of sexual intercourse occasions, the protection rate was 54%. How important is this breakthrough? In a way, it is both bigger and smaller than the headlines would suggest.
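As a quick back-of-the-envelope check of how adherence and protection interact, treat the high-adherer figure as a rough proxy for per-use protection and assume overall protection scales with the fraction of sex acts covered. This is a simplification for illustration only, not the trial's actual statistical model.

```python
# Toy adherence arithmetic (an illustrative simplification, not the trial's analysis).
per_use_protection = 0.54   # protection observed in high adherers (>80% gel use)
average_adherence = 0.75    # average fraction of sex acts covered in the trial

# If protection scales roughly with the fraction of acts covered:
expected_average_protection = per_use_protection * average_adherence
print(f"Expected average-usage protection: {expected_average_protection:.0%}")
# ~40%, close to the headline figure of 39%
```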

A new tool to fight HIV spread

In the age of vaccines with efficacy rates of >99%, a ~40% protection rate sounds rather poor. Furthermore, this is currently a form of protection only against heterosexual transmission of HIV to women, with no data yet on any protection granted to men having sex with an HIV+ woman, or on use as an anal gel against male homosexual transmission. HIV acquisition by non-sexual routes, such as intravenous drug use, will of course be unaffected by the gel. This is a poor efficacy rate when compared to condom use: a Cochrane meta-analysis has determined that consistent use of condoms results in an 85% protection rate against HIV, which can go as high as 95% with correct usage. The protective effect is only on par with that of male circumcision, which multiple randomized trials have found protects males from heterosexual HIV transmission at a rate of around 60%.

Is the new gel then completely redundant? A downgrade from the condom? No, not for a key population group - the women of southern Africa. The ten countries of southern Africa together constitute 35% of global HIV cases, with HIV reaching a hyper-endemic situation in which 10-30% of adults are infected. In this region, heterosexual spread is the dominant form of HIV transmission, and at the population level the group at greatest risk is married women. Condom usage in Africa is generally very poor, with an average of only 4.6 condoms available per man per year, due to low demand. Only 7% of women in southern Africa reported using a condom the last time they had sexual intercourse with a regular partner. In particular, women who are food insecure are 70% less likely to use a condom during sex, having less personal control over sexual relationships. Other women may not use a condom for more personal reasons, such as trying to conceive. A vaginal gel therefore provides (partial) HIV protection for the first time to women who would not otherwise use a condom during sex, whether because of personal choice, lack of sexual control, or a desire to become pregnant.

The other important consideration is that partial protection prevents more cases at the population level than its individual efficacy would suggest. This is because each case stopped also prevents the flow-on cases which would have spread from that infected individual. It has been estimated that a weakly protective vaccine, with only a 50% protection rate and given to only 30% of the population, would reduce new HIV infections by more than half over 15 years. These figures are comparable to the results for Tenofovir Gel, so if the maximal potential is realized, this breakthrough has the ability to halve new African HIV cases.
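To make the flow-on logic concrete, here is a toy transmission calculation in which partial protection lowers the effective reproduction number, and the saving compounds over successive generations of infection. The reproduction number, coverage and time horizon are illustrative assumptions, not the parameters of the modelling study mentioned above.

```python
# Toy illustration of the flow-on effect: each infection averted also removes
# the chain of onward infections it would have seeded. All parameters are
# illustrative assumptions, not fitted epidemiological values.

def cumulative_infections(r: float, seed: float = 1000.0, generations: int = 20) -> float:
    """Total infections over a fixed number of transmission generations,
    starting from `seed` initial cases with reproduction number `r`."""
    total, current = 0.0, seed
    for _ in range(generations):
        total += current
        current *= r
    return total

r0 = 1.1          # assumed baseline reproduction number per generation
efficacy = 0.39   # individual protection rate of the gel
coverage = 0.30   # assumed fraction of the at-risk population using it

r_gel = r0 * (1 - efficacy * coverage)  # effective reproduction number with the gel

baseline = cumulative_infections(r0)
with_gel = cumulative_infections(r_gel)
print(f"Reduction in cumulative infections: {100 * (1 - with_gel / baseline):.0f}%")
# Much larger than the ~12% per-exposure reduction (39% efficacy x 30% coverage),
# because every infection averted also prevents its downstream transmission chain.
```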

A tool that will sit idle?

The problem, of course, is that the potential of this gel will not be realized. In many ways, the HIV epidemic is not a problem waiting for a medical solution, but rather a problem waiting for a social and political solution. Consider mother-to-child HIV prevention. Current medical treatment of HIV+ women during pregnancy and after birth reduces the transmission rate to the child by more than 99%. Even in developing countries, the treatment program has over 98% efficacy. And yet these cases, almost entirely preventable under current treatment, make up 15% of global HIV cases and 40% of HIV cases in southern Africa, since only 33% of pregnant HIV+ women in Africa get any form of anti-HIV treatment, let alone the recommended treatment program.

Other strategies, which are already proven to work, could make similar impacts if broadly implemented. Widespread male circumcision would reduce HIV rates by 60% in males and, by reducing prevalence, 30% in females. Comprehensive sexual education focused on preventing new infections can be highly successful. An aggressive campaign of universal HIV testing and near-universal antiretroviral treatment would be capable of reducing new HIV infections by 95% within 5 years. Even the simple treatment of individuals with genital herpes with current antiherpetic drugs could be expected to reduce transmission of HIV in southern Africa by 50%.

No, a new tool to fight HIV is not going to stop the virus. Realistically, the current tools available could cut new HIV cases by 99% within the decade, if only they were implemented. The true scourge of HIV is that it attacks the marginalised in society, hitting regions of great poverty, infecting those on the receiving side of racial and sexual discrimination. The people that, quite frankly, too many people feel deserve to be sick. Because HIV is interwoven with issues of sexuality, drugs, race and poverty, people in power have not only been slow to move - they have often moved in the wrong direction, such as the $15 billion pledged in aid by George W. Bush, with its focus on replacing effective condom use with ineffective "abstinence only" programs.

A major part of the problem is certainly a lack of resources, both funding and public health infrastructure. The response to HIV has been delayed, fragmented, inconsistent and grossly under-resourced. Lesotho launched a national voluntary counselling and testing campaign aiming at universal testing, which fell through due to a lack of resources. In South Africa only 28% of HIV+ people have access to antiretrovirals. In Zimbabwe only 4.4% of HIV+ pregnant women are receiving antiretroviral treatment to prevent mother-to-child transmission. In Nigeria 10% of all HIV transmission events are due to a lack of funds for hospitals to screen transfused blood, a situation which requires only funding to remedy. However, funding is not the only impediment to an efficient HIV prevention campaign. Policy makers have repeatedly failed to spend limited resources on HIV prevention, concentrating on medical treatment without adequate care and support. This is despite the cost of most HIV prevention techniques being well under the $4770 per infection prevented that it would take to produce a cost saving compared to treatment alone. What is needed to end the HIV crisis is, in fact, simple in health terms and difficult only in political implementation: a coordinated and adequately funded approach to integrate evidence-based HIV prevention strategies, in concert with major social and economic development efforts to eliminate gender disparities, race- and sexuality-based discrimination and extreme poverty.