PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks
PsychRNN is designed for neuroscientists and psychologists who are interested in RNNs as models of cognitive function in the brain.
Despite growing interest in RNNs as models of brain function, this approach poses relatively high barriers to entry for researchers, due to the technical know-how required to use specialized deep learning software (e.g. TensorFlow or PyTorch) to train artificial neural network models.
We designed PsychRNN with accessibility and flexibility as important goals.
The frontend, in which users define tasks and train RNNs, uses only Python and NumPy, with no deep learning software required.
The backend, based on TensorFlow for model training, is readily extensible. This design allows for accessible high-level specification and parameterization of tasks and models, using only a few lines of Python.
Modularity is central to PsychRNN’s design, providing flexibility in defining and parameterizing tasks and networks. This facilitates investigation of how task features (e.g. timing or input/output channels) shape the network solutions learned by the models.
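For concreteness, here is a minimal, NumPy-only sketch of what a parameterized cognitive task can look like in this style. It is patterned after the trial-generation interface described in PsychRNN's documentation (per-trial parameters plus a per-timestep trial function returning input, target, and mask), but this standalone class does not use the package, and its name and parameters are illustrative assumptions.

```python
import numpy as np

class SimpleDecisionTask:
    """Illustrative stand-in for a PsychRNN-style task: trials are built
    from NumPy arrays and parameterized by timing and stimulus strength.
    (This sketch is self-contained and does not depend on PsychRNN.)"""

    def __init__(self, dt=10, T=500, onset=100, coherence=0.5, seed=0):
        self.dt = dt                # timestep (ms)
        self.T = T                  # trial duration (ms)
        self.onset = onset          # stimulus onset (ms)
        self.coherence = coherence  # stimulus strength
        self.rng = np.random.default_rng(seed)

    def generate_trial_params(self, batch, trial):
        # Randomize the correct choice on each trial.
        return {'direction': int(self.rng.integers(2))}

    def trial_function(self, t, params):
        # Two input channels, two output channels, per-timestep mask.
        x_t = self.rng.normal(0.0, 0.1, 2)
        y_t = np.zeros(2)
        mask_t = np.ones(2)
        if t > self.onset:
            x_t[params['direction']] += self.coherence
            y_t[params['direction']] = 1.0
        else:
            mask_t[:] = 0.0  # do not penalize output before stimulus onset
        return x_t, y_t, mask_t

    def batch(self, n_trials):
        # Assemble a batch of trials as (trials, timesteps, channels) arrays.
        steps = self.T // self.dt
        X = np.zeros((n_trials, steps, 2))
        Y = np.zeros((n_trials, steps, 2))
        M = np.zeros((n_trials, steps, 2))
        for i in range(n_trials):
            params = self.generate_trial_params(0, i)
            for k in range(steps):
                X[i, k], Y[i, k], M[i, k] = self.trial_function(k * self.dt, params)
        return X, Y, M
```

Because timing and channel structure are ordinary constructor parameters, changing a task feature (e.g. stimulus onset) is a one-line change, which is the kind of parameterization the package's modular design is meant to support.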
PsychRNN also supports neurobiologically motivated constraints on synaptic connectivity, such as no autapses, structured connectivity (e.g. for multi-region RNNs), Dale’s principle (separate excitatory and inhibitory cells), and a fixed, nonplastic subset of synapses. Modularity also enables curriculum learning, or task shaping: RNNs can be trained in closed loop, with the task progressively adjusted as behavioral performance improves. This more closely parallels animal training and supports investigation of how shaping impacts the neural solutions networks learn.
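To illustrate how two of these constraints act on a recurrent weight matrix, here is a standalone NumPy sketch. The function name and sign convention are my own assumptions for illustration; this is not PsychRNN's internal implementation.

```python
import numpy as np

def constrain_weights(W, n_exc, allow_autapses=False):
    """Illustrative sketch (not PsychRNN internals) of imposing biological
    constraints on a recurrent weight matrix W, under the convention that
    W[i, j] connects presynaptic unit j to postsynaptic unit i."""
    n = W.shape[0]
    # Dale's principle: the first n_exc units are excitatory, the rest
    # inhibitory, so each presynaptic unit's outgoing weights share a sign.
    sign = np.ones(n)
    sign[n_exc:] = -1.0
    W_eff = np.abs(W) * sign[np.newaxis, :]  # sign fixed per column (presynaptic unit)
    # No autapses: zero out self-connections on the diagonal.
    if not allow_autapses:
        np.fill_diagonal(W_eff, 0.0)
    return W_eff
```

Structured connectivity (e.g. multi-region RNNs) and nonplastic synapses can be expressed in the same spirit, as elementwise masks applied to the weights during training.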
Daniel B Ehrlich; Jasmine T Stone; David Brandfonbrener; Alexander Atanasov; John D Murray
This post was automatically generated by Jasmine Stone