A followup to yesterday's post about the Xu et al. paper, Sequence determinants of improved CRISPR sgRNA design: they have also kindly made a public webtool for generating CRISPR scores with their model. It's a cut-and-paste interface that accepts up to 10,000 bases. Simple and quick.
Of course, their source code is also available in their supplemental material and here.
I'm going to try this again:
Thanks for the links and the very useful blog.
Curious if you've done any back-testing of guides/spacers that have worked or haven't worked for you with either the Doench "on-target" scores or this Xu et al. algorithm. I haven't found Doench scores to be very predictive of activity in mouse embryos, and a quick check of one experiment with the Xu scores wouldn't have been very helpful either.
Best,
Rick
I have done some simple scoring on my previously attempted targets to see if the scores correlate with activity, and I definitely don't see a perfect correlation. And it is possible, or even likely, that the context of mouse embryos has distinct characteristics. Most of the experiments I am actually doing these days involve editing precise codons, so I usually try the closest CRISPR target regardless of whatever score I can derive.
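A rough sketch of that kind of "nearest target" search, assuming SpCas9 (20-nt protospacer plus NGG PAM, cut roughly 3 bp 5' of the PAM) and scanning only the sense strand; the sequence and codon position below are hypothetical placeholders:

# Sketch: find the SpCas9 protospacer (20 nt + NGG PAM) whose approximate cut
# site lies closest to a position of interest. Sense strand only; a real tool
# would also scan the reverse complement. Sequence/position are placeholders.
import re

def nearest_ngg_target(seq, pos):
    best = None
    for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", seq.upper()):
        protospacer = m.group(1)
        cut_site = m.start() + 17          # approximate cut, ~3 bp 5' of the PAM
        if best is None or abs(cut_site - pos) < abs(best[1] - pos):
            best = (protospacer, cut_site)
    return best

seq = "GATTACAGATTACAGATTACATGGCGTACG"     # placeholder sequence
print(nearest_ngg_target(seq, 10))         # placeholder codon position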
Thanks for the comment -
Doug
Thanks for the reply. As you've described, often we don't have much choice when a particular codon is targeted - so we note the score, but go ahead anyway. The guides almost always pass our in vitro tests (transfection of guide RNAs into cell lines + T7 assay). We've seen guides with single-digit scores (a la Doench et al.) work perfectly fine in mice (the lowest-scoring functional guide was 0.6!). Hopefully the algorithms will become more predictive as they develop over time.
Best,
Rick
Follow-up: I assembled data for 14 targets we have tested in mouse embryos. For 9/14 targets we got at least some edited mice, ranging from 29-89% edited pups among liveborns. For the remaining 5/14 targets we got zero edited mice; the average number of pups was 9.6 across these 5 failed projects. The average SSC score (Xu et al.) was 0.144 (range -0.8535 to 0.6028). The correlation between target SSC scores (from the Xu et al. website) and efficiency was poor; the trend line had a slightly positive slope, but R-squared = 0.03. Four of the 5 failures had SSC scores ranging from 0.0376 to 0.3566, while the successes spanned -0.402 to 0.6028. The highest-efficiency target had an SSC of only -0.402. I only had two targets with scores below -0.1: one was the most efficient target; the other was the lowest score and had zero efficiency. So, not much of a useful correlation so far.
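For anyone who wants to repeat this check on their own targets, a minimal Python sketch of the calculation described above (a linear fit of editing efficiency against SSC score, reporting slope and R-squared); the score and efficiency values in it are hypothetical placeholders, not the actual 14-target dataset:

# Sketch of the score-vs-efficiency check: fit % edited pups against SSC score.
# Values below are hypothetical placeholders, not the real 14-target dataset.
from scipy.stats import linregress

ssc_scores   = [-0.85, -0.40, 0.04, 0.14, 0.36, 0.60]   # hypothetical SSC scores
efficiencies = [  0.0,  89.0,  0.0, 45.0,  0.0, 60.0]   # hypothetical % edited pups

fit = linregress(ssc_scores, efficiencies)
print(f"slope = {fit.slope:.3f}, R-squared = {fit.rvalue ** 2:.3f}")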