In this year's ICML, some interesting work was presented on Neural Processes. See the paper Conditional Neural Processes and the follow-up work by the same authors on Neural Processes, which was presented at an ICML workshop. Neural Processes (NPs) caught my attention as they essentially are a neural network (NN) based probabilistic model which can represent a distribution over stochastic processes. They combine elements from two worlds:

- Deep Learning – neural networks are flexible non-linear functions which are straightforward to train.
- Gaussian Processes – GPs offer a probabilistic framework for learning a distribution over a wide class of non-linear functions.

Both have their advantages and drawbacks. In the limited data regime, GPs are preferable due to their probabilistic nature and ability to capture uncertainty. This differs from (non-Bayesian) neural networks, which represent a single function rather than a distribution over functions. However, the latter might be preferable in the presence of large amounts of data, as training NNs is computationally much more scalable than inference for GPs. Neural Processes aim to combine the best of these two worlds.

I found the idea behind NPs interesting, but I felt I was lacking intuition and a deeper understanding of how NPs behave as a prior over functions. I believe that often the best way towards understanding something is implementing it, empirically trying it out on simple problems, and finally explaining it to someone else. So here is my attempt at reviewing and discussing NPs. Before reading my post, I recommend the reader take a look at both original papers. Even though here I discuss NPs, you might find it easier to start with Conditional Neural Processes, which are essentially a non-probabilistic version of NPs.

The NP is a neural network based approach to representing a distribution over functions. The broad idea behind how the NP model is set up and how it is trained is the following: given a set of observations $(x_i, y_i)$, they are split into two sets, "context points" and "target points". Given the pairs $(x_c, y_c)$ for $c = 1, \ldots, C$ in the context set and given unseen inputs $x_t^*$ in the target set, the goal is to predict the corresponding function values $y_t^*$.
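To make this setup concrete, below is a minimal PyTorch sketch of an NP-style model for 1-D regression. This is my own simplification rather than the exact architecture or training procedure from the papers (which fit a latent-variable model with an ELBO-style objective over the target points); the class name `NeuralProcess` and the sizes `r_dim` and `z_dim` are illustrative choices.

```python
import torch
import torch.nn as nn

class NeuralProcess(nn.Module):
    """A stripped-down NP-style model: encode context pairs, aggregate,
    sample a global latent z, and decode predictions at target inputs."""

    def __init__(self, x_dim=1, y_dim=1, r_dim=64, z_dim=32):
        super().__init__()
        # Encoder h: maps each context pair (x_c, y_c) to a representation r_c.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, r_dim),
        )
        # Parameters of the global latent variable z given the aggregated r.
        self.z_mu = nn.Linear(r_dim, z_dim)
        self.z_logvar = nn.Linear(r_dim, z_dim)
        # Decoder g: maps a target input x_t* together with z to a prediction y_t*.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + z_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, y_dim),
        )

    def forward(self, x_context, y_context, x_target):
        # Encode every context pair; shapes (C, x_dim), (C, y_dim) -> (C, r_dim).
        r_i = self.encoder(torch.cat([x_context, y_context], dim=-1))
        # Mean aggregation makes the summary invariant to context ordering.
        r = r_i.mean(dim=0)
        # Sample z via the reparameterisation trick.
        mu, logvar = self.z_mu(r), self.z_logvar(r)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Decode each target input conditioned on the same sampled z.
        z_rep = z.unsqueeze(0).expand(x_target.size(0), -1)
        return self.decoder(torch.cat([x_target, z_rep], dim=-1))

# Toy usage: C = 10 context pairs, T = 50 target inputs.
model = NeuralProcess()
x_c, y_c = torch.randn(10, 1), torch.randn(10, 1)
x_t = torch.linspace(-3, 3, 50).unsqueeze(-1)
y_t = model(x_c, y_c, x_t)  # one sampled function evaluated at the targets
```

Because $z$ is sampled anew on every forward pass, repeated calls with the same context produce different predicted functions at the target inputs - this is precisely what makes the (trained) NP a distribution over functions rather than a single function.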
The breakthrough centers on a specific usage of neural networks that has become popular in recent years: tackling challenging biological questions. Among these are examinations of the intricacies of RNA splicing - the focal point of the study - which plays a role in transferring information from DNA to functional RNA and protein products.

"Many neural networks are black boxes - these algorithms cannot explain how they work, raising concerns about their trustworthiness and stifling progress into understanding the underlying biological processes of genome encoding," says Oded Regev, a computer science professor at NYU's Courant Institute of Mathematical Sciences and the senior author of the paper, which appears in the Proceedings of the National Academy of Sciences. "By harnessing a new approach that improves both the quantity and the quality of the data for machine-learning training, we designed an interpretable neural network that can accurately predict complex outcomes and explain how it arrives at its predictions."

Regev and the paper's other authors, Susan Liao, a faculty fellow at the Courant Institute, and Mukund Sudarshan, a Courant doctoral student at the time of the study, created a neural network based on what is already known about RNA splicing. Specifically, they developed a model - the data-driven equivalent of a high-powered microscope - that allows scientists to trace and quantify the RNA splicing process, from input sequence to output splicing prediction.

"Using an 'interpretable-by-design' approach, we've developed a neural network model that provides insights into RNA splicing - a fundamental process in the transfer of genomic information," notes Regev. "Our model revealed that a small, hairpin-like structure in RNA can decrease splicing."