Oct 22 2019

Prime Editing the Genome

Move over CRISPR – enter Prime Editing.

Maybe. What we can say is that genetic editing technology is advancing so quickly it’s hard to keep up. Now a new study, published in Nature, details a method called prime editing. The authors write:

Here we describe prime editing, a versatile and precise genome editing method that directly writes new genetic information into a specified DNA site using a catalytically impaired Cas9 fused to an engineered reverse transcriptase, programmed with a prime editing guide RNA (pegRNA) that both specifies the target site and encodes the desired edit.

So this is actually built on CRISPR technology: a Cas9 component is paired with a bit of RNA (the pegRNA) that both targets the stretch of the genome you want to edit and carries the new code you want to write in. The writing itself is done by the enzyme reverse transcriptase.
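
To make the idea concrete, here is a toy Python sketch of the concept. This is not real bioinformatics software, just an illustration that a single guide molecule (the pegRNA) carries both the address of the edit and the replacement text; the sequences and names are invented for the example.

```python
# Toy model of prime editing: the pegRNA both finds the target and carries
# the new sequence to be written in. Sequences here are made up.

from dataclasses import dataclass

@dataclass
class PegRNA:
    spacer: str       # finds the target site in the genome
    rt_template: str  # encodes the desired edit (written in by reverse
                      # transcriptase in the real system)

def prime_edit(genome: str, peg: PegRNA) -> str:
    """Locate the site matched by the spacer and rewrite it with the template."""
    site = genome.find(peg.spacer)
    if site == -1:
        return genome  # target not found: nothing is edited
    return genome[:site] + peg.rt_template + genome[site + len(peg.spacer):]

# Hypothetical one-letter edit at the targeted site (A -> T at one position)
genome = "ATCGGGTACCTGAGGAGAAGTCT"
peg = PegRNA(spacer="TGAGGAGAAG", rt_template="TGAGGTGAAG")
print(prime_edit(genome, peg))  # ATCGGGTACCTGAGGTGAAGTCT
```

How does this compare to existing methods for gene editing? The authors again: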

“Prime editing offers efficiency and product purity advantages over homology-directed repair, complementary strengths and weaknesses compared to base editing, and much lower off-target editing than Cas9 nuclease at known Cas9 off-target sites. Prime editing substantially expands the scope and capabilities of genome editing, and in principle could correct about 89% of known pathogenic human genetic variants.”

It is more precise and has fewer errors, and in principle could correct about 89% of known disease-causing genetic variants. It cannot fix errors where a gene is entirely missing, or where there are too many copies of a gene. They tested the method, with success, on mutations that cause two human genetic diseases, Tay-Sachs disease and sickle cell anemia, in human cells.

Obviously this is just one study introducing the new technique. Years of research lie ahead to further test and confirm the risks and benefits of this technology, and a lot of testing will be necessary before regulators allow it to be used on actual people. At first it will likely be applied to diseases where cells can be removed from the body, treated, and then transplanted back in (as with bone marrow). So sickle cell may be one of the first diseases essentially cured by prime editing.

But let’s back up and take the 30,000-foot view of where we are. It was only 7 years ago that CRISPR was introduced, a technology adapted from a bacterial immune system that allows for the targeting and editing of DNA. The technology has been advancing quickly, getting more precise with fewer errors. Gene editing existed before CRISPR, but this technology made it cheap and accessible, putting us on an entirely different playing field. This has allowed many labs to conduct genetic research, with prime editing emerging as one offshoot of that work.

This is a situation where the basic science is getting way ahead of the translational clinical research. We are advancing the technology faster than we can test specific applications – by the time the clinical research on one application is done, the underlying technology may already be obsolete. This is a good problem to have.

Of course it’s easy to get excited about the potential and therefore to overcall the implications of a new technology and create unrealistic expectations. Time will tell how this will all pan out. What typically happens is that the hype gets ahead of reality, and then 10-20 years down the line the technology catches up, or doesn’t. This is where we are with stem cells: we have already been through our pulse of hype, while the research grinds on. We now have a more mature view of the potential and limitations of stem cell therapy, and I think our expectations have mostly been moderated. Some hurdles to specific applications may prove fatal, and others will take a lot longer to overcome than we hoped. We are still in the “if” stage, not “when.”

The same is true of CRISPR and related gene-editing techniques. But let me outline the reasons for optimism. The first, as I stated, is that post-CRISPR technology is advancing quickly. Prime editing is just one example.

But the second is perhaps more encouraging. If we are talking about genetic diseases, which account for many of the serious diseases that plague humanity, the theoretical case for gene editing is much more solid than for, say, stem cells. With stem cells we have hurdles we don’t yet know how to overcome – like the risk of forming cancers, and the challenge of forming complex structures. With gene editing, there’s a coding error, and we fix it. Problem solved.

The analogy to computer code is deliberate, and if you read media discussions of this technology such analogies are rampant. That’s because DNA really is a code: it is like computer code with four different letters (instead of two), and with a language composed of three-letter words (four cubed, or 64, possible words). Genetic diseases are coding errors, and fixing those errors cures the disease. There isn’t necessarily any trade-off, side effect, or unintended consequence of fixing the error itself.
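
To put numbers on the analogy, here is a trivial Python sketch. The codon count is just four cubed, and the sickle cell case mentioned above is the textbook example of a one-letter coding error: a single A-to-T change that turns a glutamic acid codon into a valine codon in the beta-globin gene.

```python
from itertools import product

BASES = "ACGT"

# Every possible three-letter "word" (codon) in the DNA code.
codons = ["".join(c) for c in product(BASES, repeat=3)]
print(len(codons))  # 4 ** 3 = 64

# A genetic disease can come down to a single wrong letter. Sickle cell is the
# classic case: GAG (glutamic acid) becomes GTG (valine) in the beta-globin gene.
normal, sickle = "GAG", "GTG"
changed = [i for i, (a, b) in enumerate(zip(normal, sickle)) if a != b]
print(changed)  # [1] -- exactly one letter differs
```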

The only question is – how good is the gene editing technology? If it is precise, with few errors and few off-target changes, then there should be no negative effects from the change, assuming we understand the underlying genetics (which, for many genetic diseases, we do). This is why CRISPR is so exciting. If we ever achieve something close to the theoretically optimal technology – where we can make any change to the genome with 100% precision and zero unintended changes – then we can straight-up cure a long list of diseases. We are tantalizingly close, and getting closer rapidly. We are already at the point where we can test the technology in people, and trials are ongoing.

I think it is a good bet that most people alive today will live to see diseases like sickle cell essentially cured. I don’t think that is hype, especially since the treatment can be applied outside the body and the genetic changes verified.

But then we get to more tricky applications of the technology, and these may take decades before we get confident enough to apply them broadly. First is applying CRISPR-like treatments inside living patients. Before we let a gene-editing tool loose inside a living person, we need to be highly confident about the overall risks. The risks are basically off-target genetic changes, introducing new unwanted mutations. Most of these are going to be benign, so if the rate is low enough the probability of harm can also be very low. What rate will we accept? Like any medical treatment, this is a matter of risk vs benefit. How bad is the disease you are trying to treat, what are the benefits of the treatment, and what are the risks? We may already be at the point with existing technology of acceptable risk vs benefit for many applications (we just need to confirm this with clinical research).
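
As a back-of-envelope illustration of how that risk calculation works, here is a sketch in which every number is hypothetical, chosen only to show how the pieces multiply together:

```python
# Hypothetical numbers only: the point is the structure of the calculation,
# not the values.

cells_treated = 1e8        # cells exposed to the editor in a treatment
off_target_rate = 1e-4     # off-target edits per treated cell
fraction_harmful = 1e-3    # fraction of off-target edits that are not benign

expected_harmful_edits = cells_treated * off_target_rate * fraction_harmful
print(expected_harmful_edits)  # 10.0 with these made-up inputs
```

Whether a number like that is acceptable depends entirely on the other side of the ledger: how bad the disease is and how much the treatment helps.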

Another application that will take a long time to get comfortable with is germ-line editing. It’s one thing to edit the cells of the body (somatic cells), where the changes have zero effect on offspring. However, if you change the germ-line cells, those changes get passed down to children, and so you are essentially introducing them into the human gene pool. Making changes to an embryo for IVF, for example, would change the germ line. The potential for risk is higher here, and so this will likely wait until the technology is more mature and well-established.

Further, there are potential changes to the genome for purposes other than fixing a clear error (curing a genetic disease). The risk vs benefit analysis changes if you are just talking about modifying disease risk factors, or enhancing function. I think we will get there eventually, but again the technology will have to mature first.

For example, we could make changes to the ApoE gene to decrease the risk of developing Alzheimer’s disease:

Apolipoprotein E (ApoE) is a major cholesterol carrier that supports lipid transport and injury repair in the brain. APOE polymorphic alleles are the main genetic determinants of Alzheimer disease (AD) risk: individuals carrying the ε4 allele are at increased risk of AD compared with those carrying the more common ε3 allele, whereas the ε2 allele decreases risk.

People with one copy of ApoE4 have about a four-fold increased lifetime risk of developing AD, while those with two copies have roughly a 10-fold increased risk. AD represents a significant disease burden on humanity, which is only increasing as the population ages. Imagine if we could significantly reduce the incidence of AD in the general population by routinely changing E4 alleles to E3 or E2, or even converting the common E3 allele to the protective E2 version.
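
As a rough sketch of why the population-level payoff could be large: the fold increases below come from the figures above, but the baseline risk and carrier frequencies are hypothetical round numbers, not real epidemiology.

```python
# Hypothetical illustration of converting every E4 allele to E3.
baseline_risk = 0.10                    # assumed lifetime AD risk with no E4 copy
freq_one_e4, freq_two_e4 = 0.25, 0.02   # assumed population frequencies of E4 carriers
rr_one_e4, rr_two_e4 = 4.0, 10.0        # fold increases quoted above

freq_no_e4 = 1 - freq_one_e4 - freq_two_e4
avg_risk_now = baseline_risk * (freq_no_e4 + freq_one_e4 * rr_one_e4 + freq_two_e4 * rr_two_e4)
avg_risk_if_edited = baseline_risk      # everyone at the no-E4 baseline after editing

print(round(avg_risk_now, 3), avg_risk_if_edited)  # 0.193 0.1
```

Under those made-up inputs the average lifetime risk nearly halves, which is the kind of arithmetic that makes this application tempting, and also the kind that demands a careful risk vs benefit analysis before anyone acts on it.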

There is a long list of diseases that have genetic risk factors. For each one we will need to do a careful risk vs benefit analysis, and determine which target populations might benefit from which changes, using what technology. This kind of research will likely dominate medicine over the next century.

But again – we’ll see. All I can do is make probability statements based upon theoretical benefits and limitations of an emerging technology that is moving quickly. I do think there is reasonable cause for optimism, however.

 
