References | Year | Task | Model approach | Architecture | Model | Model details | ABE support | CBE support |
---|---|---|---|---|---|---|---|---|
Dandage et al. [10] | 2019 | Efficiency | Deterministic | N/A | beditor | Computational scoring system that uses the Burrows-Wheeler Aligner to identify mismatches and applies different penalty scores depending on whether a mismatch lies near the PAM and whether the site is genic or intergenic | ✓ | ✓ |
Arbab et al. [6] | 2020 | Efficiency | Machine learning | Decision tree | BE-Hive | Gradient boosted regression trees | ✓ | ✓ |
| | | Bystander | | Neural network | | Deep conditional autoregressive model with an encoder and a decoder; the encoder has two hidden layers, the decoder has five, and the network is fully connected | | |
Song et al. [7] | 2020 | Efficiency and bystander | Machine learning | Neural network | DeepBaseEditor (DeepABE/DeepCBE) | Deep neural network with two to three hidden layers, convolutional layers and dropout | ✓ | ✓ |
Koblan et al. [11] | 2021 | Efficiency and bystander | Machine learning | Neural network, regression, decision tree | Mixed | BE-Hive, logistic regression, gradient boosted regression trees | ✗ | ✓ |
Yuan et al. [8] | 2021 | Bystander | Machine learning | Neural network | BE-SMART | Deep neural network model with dropout | ✗ | ✓ |
Marquart et al. [9] | 2021 | Efficiency | Machine learning | Neural network | BE-DICT | Attention-based deep neural network with an encoder block that has a self-attention layer, layer normalization and residual connections, and a feed-forward network | ✓ | ✓ |
| | | Bystander | | | | Extension of the efficiency model; the encoder block of the efficiency module feeds into an encoder-decoder attention layer together with positional embeddings | | |
Pallaseni et al. [3] | 2022 | Efficiency and bystander | Machine learning | Decision tree | FORECasT-BE | Gradient boosted regression trees | ✓ | ✓ |
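To make the model classes in the table concrete, the sketches that follow illustrate each in simplified form. First, the deterministic scoring of beditor [10]. The Python sketch below is illustrative only: the penalty values, the window size and the `site_penalty` name are invented, and it assumes mismatch positions have already been obtained from Burrows-Wheeler Aligner alignments, as in the published pipeline.

```python
# Illustrative sketch only: the kind of deterministic penalty scoring that
# beditor [10] applies to candidate off-target sites. The penalty values,
# window size and function name are invented; beditor obtains alignments
# (and hence mismatch positions) with the Burrows-Wheeler Aligner, which
# this toy function assumes has already been run.
def site_penalty(mismatch_positions, genic, near_pam_window=4,
                 pam_penalty=0.5, distal_penalty=0.1, genic_penalty=0.25):
    """Score one off-target site from its mismatch positions, counted from
    the PAM-proximal end of the protospacer (0 = adjacent to the PAM)."""
    score = 0.0
    for pos in mismatch_positions:
        # Mismatches close to the PAM are penalized more heavily.
        score += pam_penalty if pos < near_pam_window else distal_penalty
    # Genic sites receive a different penalty than intergenic ones.
    if genic:
        score += genic_penalty
    return score

print(site_penalty([2, 15], genic=True))  # 0.5 + 0.1 + 0.25 = 0.85
```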
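Gradient boosted regression trees appear three times in the table, in BE-Hive [6], the Koblan et al. models [11] and FORECasT-BE [3]. The minimal sketch below shows the general technique with scikit-learn on synthetic data; the one-hot featurization, hyperparameters and random targets are assumptions for demonstration, not the published pipelines.

```python
# Illustrative sketch only: gradient boosted regression trees for editing
# efficiency prediction, shown with scikit-learn on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """One-hot encode a protospacer (20 nt -> 80 features)."""
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, BASES.index(base)] = 1.0
    return x.ravel()

# Synthetic stand-in data: random 20-nt protospacers, made-up efficiencies.
seqs = ["".join(rng.choice(list(BASES), 20)) for _ in range(500)]
X = np.stack([one_hot(s) for s in seqs])
y = rng.uniform(0.0, 1.0, size=len(seqs))  # fraction of edited reads

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)
print("Held-out R^2:", model.score(X_te, y_te))  # meaningless on random data
```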
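The BE-Hive bystander model [6] is described above as a fully connected conditional autoregressive encoder/decoder with two and five hidden layers, respectively. The PyTorch sketch below mirrors that shape under stated assumptions: the layer widths, the greedy per-position conditioning and the output parameterization are ours for illustration, not the published architecture.

```python
# Illustrative sketch only: a fully connected encoder/decoder mirroring the
# shape reported for the BE-Hive bystander model [6] (two encoder and five
# decoder hidden layers, conditional autoregressive decoding).
import torch
import torch.nn as nn

class BystanderSketch(nn.Module):
    def __init__(self, seq_len: int = 20, hidden: int = 128):
        super().__init__()
        self.seq_len = seq_len
        # Encoder: two fully connected hidden layers over the one-hot input.
        self.encoder = nn.Sequential(
            nn.Linear(seq_len * 4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Decoder: five fully connected hidden layers; its input is the
        # encoding concatenated with the edits predicted so far.
        layers, d = [], hidden + seq_len
        for _ in range(5):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(hidden, 1))
        self.decoder = nn.Sequential(*layers)

    def forward(self, x_onehot: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x_onehot.flatten(1))
        edits = torch.zeros(x_onehot.size(0), self.seq_len)
        probs = []
        for pos in range(self.seq_len):
            # Predict the edit probability at this position, conditioned on
            # the sequence encoding and the edits called at earlier positions.
            p = torch.sigmoid(self.decoder(torch.cat([z, edits], dim=1)))
            probs.append(p)
            call = (p > 0.5).float()  # greedy hard call for conditioning
            edits = torch.cat([edits[:, :pos], call, edits[:, pos + 1:]], dim=1)
        return torch.cat(probs, dim=1)  # per-position edit probabilities

x = torch.eye(4)[torch.randint(0, 4, (2, 20))]  # two random one-hot sequences
print(BystanderSketch()(x).shape)               # torch.Size([2, 20])
```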
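DeepABE/DeepCBE [7] and BE-SMART [8] use small deep networks with convolution and/or dropout. A minimal sketch of such a network follows, with invented layer sizes, kernel widths and input encoding.

```python
# Illustrative sketch only: a small convolutional network with dropout, in
# the spirit of DeepABE/DeepCBE [7]. All dimensions are invented.
import torch
import torch.nn as nn

model = nn.Sequential(
    # Input: one-hot target sequence as 4 channels over 30 positions.
    nn.Conv1d(in_channels=4, out_channels=64, kernel_size=5),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Conv1d(64, 32, kernel_size=3),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Flatten(),
    nn.Linear(32 * 24, 1),  # 30 -> 26 -> 24 positions after the convolutions
    nn.Sigmoid(),           # editing efficiency as a fraction in [0, 1]
)

x = torch.randn(8, 4, 30)   # a batch of 8 encoded target sequences
print(model(x).shape)       # torch.Size([8, 1])
```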
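Finally, BE-DICT [9] builds on a transformer-style encoder block: self-attention, residual connections, layer normalization and a feed-forward network. The sketch below implements one such block in PyTorch; the model dimension, head count and the protospacer embedding are assumptions for illustration, not the published configuration.

```python
# Illustrative sketch only: a transformer-style encoder block of the kind
# described for BE-DICT [9] — self-attention, residual connections, layer
# normalization and a feed-forward network.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention with a residual connection, then layer normalization.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Feed-forward network with a residual connection, then normalization.
        return self.norm2(x + self.ff(x))

# Toy usage: project a one-hot 20-nt protospacer into d_model dimensions.
embed = nn.Linear(4, 64)
protospacer = torch.eye(4)[torch.randint(0, 4, (1, 20))]  # (batch, 20, 4)
print(EncoderBlock()(embed(protospacer)).shape)           # torch.Size([1, 20, 64])
```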