

The science

Research in machine learning, network science, complex systems, and computational neuroscience

Hi, I'm Sanjukta Krishnagopal, a postdoctoral researcher at the Gatsby Computational Neuroscience Unit at University College London, where I'm advised by Peter Latham. Before this, I graduated with a physics PhD from the University of Maryland, where I was advised by Michelle Girvan and was also a fellow of the CoMBiNe (Computation and Mathematics in Biological Networks) NRT program. My research lies at the intersection of machine learning, network science, complex systems, and computational neuroscience. My interests are varied and often interdisciplinary. They include (1) studying higher-order networks (theory) and their applications to social and brain networks, (2) developing mathematical intuition for collective motion and dynamics in complex systems, (3) developing interpretable and biologically plausible machine learning architectures, (4) computational methods for predictive medicine, and (5) machine learning theory and reinforcement learning. My research has primarily been computational and theoretical, often involving interfacing with data.

Current Position: Postdoc
I'm on the job market!

Download my CV: sanjukta_cv_feb2022.pdf

Primary Research Themes

Machine learning and computational neuroscience
A biologically plausible alternative to backpropagation for learning
How does learning occur? How do weights change? Machine learning's answer to this is backpropagation; however, backpropagation is not biologically plausible, and networks trained with it tend to forget old tasks when learning new ones. Dendritic Gated Networks (DGNs) are a novel architecture that combines dendritic “gating” (whereby interneurons target dendrites to shape neuronal responses) with local learning rules to yield provably efficient performance. They are significantly more data efficient than conventional artificial networks and are highly resistant to forgetting, and we show that they perform well on a variety of tasks, in some cases better than backpropagation. The DGN bears similarities to the cerebellum and validates some experimental results from in vivo mouse cerebellar imaging.
I gave a talk on this work at Cosyne 2021, which I am very excited about: it had an audience of 1200 people (and a 5% acceptance rate)! This work was done in collaboration with Google DeepMind.
Arxiv paper
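The core mechanics are easy to sketch. Below is a minimal, hypothetical gated-network toy in Python, in the spirit of DGNs and gated linear networks rather than the exact architecture of the paper: gating is a fixed random function of the input that selects one "dendritic branch" (weight vector) per unit, and every unit updates its active weights with a local delta rule, so no error is ever backpropagated.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def logit(p):
    p = np.clip(p, 1e-4, 1 - 1e-4)
    return np.log(p / (1 - p))

class GatedLayer:
    def __init__(self, n_units, fan_in, side_dim, n_branches=4):
        # one weight vector per dendritic branch, per unit
        self.W = rng.normal(0, 0.1, (n_units, n_branches, fan_in))
        # fixed random gating: the raw input (side info) picks the active branch
        self.G = rng.normal(size=(n_units, n_branches, side_dim))

    def forward(self, x, side):
        self.k = np.argmax(self.G @ side, axis=1)          # active branch per unit
        self.x = x
        u = np.arange(len(self.k))
        self.p = sigmoid(np.einsum('uf,f->u', self.W[u, self.k], x))
        return self.p

    def local_update(self, target, lr=0.1):
        # local delta rule: every unit regresses the same target, no backprop
        u = np.arange(len(self.k))
        self.W[u, self.k] -= lr * np.outer(self.p - target, self.x)

d = 2
layers = [GatedLayer(8, d, d), GatedLayer(1, 8, d)]
for step in range(5000):
    x = rng.normal(size=d)
    y = float(x[0] * x[1] > 0)                             # XOR-like toy task
    h = x
    for layer in layers:
        h = logit(layer.forward(h, side=x))                # units mix in logit space
    for layer in layers:
        layer.local_update(y)

x_test = np.array([1.0, 1.0])
h = x_test
for layer in layers:
    p = layer.forward(h, side=x_test)
    h = logit(p)
print("P(positive | (1,1)) ~", float(p[0]))

Because each unit only ever touches its own active weights, learning one region of input space leaves the branches serving other regions untouched, which is the intuition behind the resistance to forgetting.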

Encoded Prior Sliced Wasserstein Autoencoder: learning latent manifold representations
I am interested in learning latent representations that preserve the geometry and topology of the data; in a sense, embedding the entire data manifold. Naturally, this should vastly improve the interpretability of the latent space, and interpolation within it. In VAEs, the use of conventional priors can limit the ability to encode the underlying structure of the data.
Here we introduce EPSWAE, in which the prior is learned through a separate network so that the latent representation preserves the topological and geometric properties of the data manifold. The use of the (sliced) Wasserstein distance, as opposed to the KL divergence, makes this possible.
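For intuition, the sliced Wasserstein distance reduces the d-dimensional optimal-transport problem to many one-dimensional ones, each solved by sorting. A minimal NumPy sketch, assuming equal sample sizes (the paper's loss and architecture involve more than this):

import numpy as np

def sliced_wasserstein_sq(X, Y, n_proj=100, rng=np.random.default_rng(0)):
    """Average squared 1-D Wasserstein-2 distance over random projections.
    X, Y: (n, d) arrays with the same number of samples n."""
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)   # unit directions
    px = np.sort(X @ thetas.T, axis=0)                        # project, then sort:
    py = np.sort(Y @ thetas.T, axis=0)                        # sorting gives the optimal 1-D coupling
    return np.mean((px - py) ** 2)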
A second interest of mine is navigating and interpreting the latent space. If the latent representation lies on a low-dimensional manifold, it is natural to interpolate along geodesics so that intermediate points lie on the manifold. I propose a network-geodesics algorithm to do exactly that, as opposed to conventional linear interpolation in latent space.
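A rough sketch of the graph-geodesic idea (a simplified stand-in for the paper's network-geodesics algorithm, which additionally weights edges to prefer high-density regions): build a k-nearest-neighbour graph on the latent codes and take shortest paths through it.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def graph_geodesic(Z, i, j, k=10):
    """Path from latent code Z[i] to Z[j] along a kNN graph, so that
    intermediate points stay near the data manifold."""
    D = cdist(Z, Z)
    W = np.full_like(D, np.inf)                     # inf = no edge
    nn = np.argsort(D, axis=1)[:, 1:k + 1]          # k nearest neighbours of each node
    rows = np.repeat(np.arange(len(Z)), k)
    W[rows, nn.ravel()] = D[rows, nn.ravel()]
    _, pred = shortest_path(W, directed=False, return_predecessors=True, indices=i)
    path = [j]
    while path[-1] != i:                            # walk back via predecessors
        path.append(pred[path[-1]])
    return Z[np.array(path[::-1])]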
Arxiv link
Interpretable learning: reservoir computing
Reservoir computing is a machine learning architecture that is popular in physics. While it doesn't fit the ML stereotype of doing insanely cool tasks, it is quite interesting to study, since the reservoir itself is a dynamical system. The connections between the neurons are recurrent and follow a differential equation that can be solved (numerically) to reveal interesting dynamical states and attractors. Thus the learning is encoded in the dynamical state of the reservoir in response to the input, which is a radically different way of thinking about learning. This property makes reservoir computers an ideal candidate for understanding how learning occurs, i.e., they are not a black box, and for learning chaotic systems (hence their wide popularity in modeling weather patterns). A bonus is that they can be implemented in hardware!
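A minimal echo state network in NumPy conveys the idea (a generic textbook sketch, not the setup of any particular paper): the recurrent weights stay fixed and random, the input drives the reservoir's dynamical state, and only a linear readout is trained, here by ridge regression for next-step prediction of a toy signal.

import numpy as np

rng = np.random.default_rng(0)
N = 400                                            # reservoir size
Win = rng.uniform(-0.5, 0.5, N)                    # input weights (1-D input)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius 0.9: echo state property

t = np.arange(5000) * 0.02
u = np.sin(t) * np.sin(0.31 * t)                   # toy input signal
r, states = np.zeros(N), []
for ut in u:
    r = np.tanh(Win * ut + W @ r)                  # reservoir update; weights never change
    states.append(r)
R = np.array(states)

# train only the linear readout, by ridge regression, to predict u one step ahead
wash = 200                                         # discard the initial transient
X, Y = R[wash:-1], u[wash + 1:]
Wout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
print("one-step MSE:", np.mean((X @ Wout - Y) ** 2))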

My research in reservoir computing spans a variety of topics that have resulted in three publications:
  • Generalization of learning using very little data
  • Understanding how learning occurs through attractors in the reservoir dynamics
  • Separating chaotic signals, e.g. a chaotic version of the cocktail party problem

Reinforcement learning for explaining mouse decision making
Mice have wonderfully complex brains, and are very good subjects for learning experiments. This project is supported by the International Brain Lab (IBL), which studies mouse decision making in a task where the mouse has to move a wheel in the direction of a low-contrast (dim) target in order to receive a reward. Reinforcement learning is extremely successful at playing games, driving cars, and so on, so one could hypothesize that it comes reasonably close to modeling the mouse's decision process. I use policy-gradient reinforcement learning with an attention model in order to explain mouse behavior.
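As a toy illustration (a bare REINFORCE agent on a caricature of the IBL task, with made-up contrast levels and without the attention model): the policy is a logistic function of signed contrast, actions are sampled, and the policy gradient nudges the weights whenever a choice is rewarded.

import numpy as np

rng = np.random.default_rng(0)
w, b, lr = 0.0, 0.0, 0.05
contrasts = np.array([-0.5, -0.25, -0.06, 0.06, 0.25, 0.5])   # signed stimulus contrast

for trial in range(20000):
    c = rng.choice(contrasts)
    p_right = 1 / (1 + np.exp(-(w * c + b)))       # policy: P(turn wheel right)
    right = rng.random() < p_right
    reward = 1.0 if right == (c > 0) else 0.0      # correct side gets rewarded
    # REINFORCE: ascend reward * grad log pi (no baseline, for brevity)
    g = (1 - p_right) if right else -p_right       # d log pi / d logit
    w += lr * reward * g * c
    b += lr * reward * g

print(f"learned sensitivity w = {w:.2f}")          # the psychometric slope steepens with learning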


Mitigating catastrophic forgetting in a wide neural network
I am interested in exploring the properties of neural networks in the limit of large width. Can one say something mathematical about the relationship between the number of parameters and the network's ability to mitigate catastrophic forgetting (the forgetting of old tasks when it learns new ones)? This is an open question.
Sanjukta Krishnagopal and Peter Latham
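The phenomenon itself is easy to probe numerically. A toy experiment (synthetic tasks of my own invention, in PyTorch): train the same network on task A, then task B, and watch task-A accuracy as a function of width; characterizing that curve analytically is the hard part.

import torch
import torch.nn as nn

def make_task(seed, n=2000, d=64):
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n, d, generator=g)
    w = torch.randn(d, generator=g)
    return X, (X @ w > 0).float()                  # random linear teacher

def train(model, X, y, steps=500, lr=0.05):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X).squeeze(-1), y).backward()
        opt.step()

def acc(model, X, y):
    with torch.no_grad():
        return ((model(X).squeeze(-1) > 0) == y.bool()).float().mean().item()

XA, yA = make_task(1)
XB, yB = make_task(2)
for width in [16, 256, 4096]:
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(64, width), nn.ReLU(), nn.Linear(width, 1))
    train(model, XA, yA)
    before = acc(model, XA, yA)
    train(model, XB, yB)                           # sequential training, no replay
    print(f"width {width}: task-A accuracy {before:.2f} -> {acc(model, XA, yA):.2f}")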
Network science and complex systems
Network Medicine
This theme of research involves computational methods for medical applications: applying techniques from multilayer networks, graph neural networks, and statistics to predict disease subtypes in patients early on, and consequently treat them pre-emptively. In particular, I developed the 'Trajectory Clustering' algorithm, which identifies disease subtypes in heterogeneous multivariate diseases, where conventional methods face two main challenges: (1) they are unable to capture the time-evolution of interactions, and (2) they can't handle multiple types of data (ordinal, categorical, continuous, phenotypic, genetic, etc.). I collaborate with several clinicians on Parkinson's and stroke. Three papers have resulted from this theme.
Publications:
PLOS ONE paper.
Biomedical Physics and Engineering Express paper.
Stroke paper.
Talk at NetSci 2018, Paris here
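To give the flavour of the approach (a toy caricature, not the published Trajectory Clustering algorithm): suppose each patient's visits have already been assigned to communities of a patient-variable network; patients can then be clustered by the similarity of their community-label trajectories.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_patients, n_visits = 100, 5
true_subtype = rng.integers(0, 2, n_patients)               # hidden ground truth
traj = np.where(true_subtype[:, None] == 0,                 # community label per visit
                rng.integers(0, 3, (n_patients, n_visits)),
                rng.integers(2, 5, (n_patients, n_visits)))

# trajectory distance = fraction of visits with different community labels
D = (traj[:, None, :] != traj[None, :, :]).mean(-1)
labels = fcluster(linkage(D[np.triu_indices(n_patients, 1)], 'average'),
                  t=2, criterion='maxclust')
agree = np.mean(labels - 1 == true_subtype)
print("agreement with true subtypes:", max(agree, 1 - agree))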
Higher order networks: simplicial complexes
While graphs are a source of rich information about pairwise interactions, several real-world networks involve interactions between more than two agents. For example, three students meeting in a break room is a simultaneous 3-way interaction (represented by a filled triangle), not three pairwise interactions. Many real networks are thus higher-order, and are often misleadingly reduced to pairwise interactions. Simplicial complexes are powerful tools for modeling higher-order interactions. The nice thing about them is that they have precise mathematical definitions and can be analyzed through the lens of topology and geometry. I am interested in developing a theoretical understanding of higher-order networks, particularly properties of the higher-order (Hodge) Laplacian. I am also interested in various applications.

My most recent investigation involves spectral community detection in arbitrary-dimensional simplicial complexes (any number of interacting nodes). I am also studying concepts such as 'holes' or cavities in higher-order networks.
Manuscript: Sanjukta Krishnagopal and Ginestra Bianconi
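The basic object is easy to compute. A minimal sketch: build the boundary matrices B1 (nodes-edges) and B2 (edges-triangles) of a small simplicial complex, form the Hodge 1-Laplacian L1 = B1ᵀB1 + B2B2ᵀ, and read off the number of 1-dimensional holes as the dimension of its kernel, whose zero eigenvectors are also the raw material for spectral methods on simplicial complexes.

import numpy as np

# a small complex: one filled triangle (0,1,2) plus an empty triangle (1,2,3)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
tris = [(0, 1, 2)]

B1 = np.zeros((4, len(edges)))                    # boundary map: edges -> nodes
for j, (u, v) in enumerate(edges):
    B1[u, j], B1[v, j] = -1, 1

B2 = np.zeros((len(edges), len(tris)))            # boundary map: triangles -> edges
for j, (a, b, c) in enumerate(tris):
    for sign, e in ((1, (a, b)), (-1, (a, c)), (1, (b, c))):
        B2[edges.index(e), j] = sign

L1 = B1.T @ B1 + B2 @ B2.T                        # Hodge 1-Laplacian
eigvals = np.linalg.eigvalsh(L1)
print("number of 1-d holes (Betti_1):", int(np.sum(eigvals < 1e-10)))
# prints 1: the empty triangle (1,2,3) is a hole; the filled one is not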
Success in extreme mountaineering
I enjoy hiking, and am deeply fascinated by extreme mountaineering, even though I haven't actually brought myself to do anything too dangerous. As a pet data-analysis project, I decided to study the various factors, both personal and expeditional (and their interactions), that contribute to success or to various types of failure at extremely high altitudes in the Everest ranges. These factors include oxygen use, age, sex, and previous expeditions, but also the length of the expedition, the ratio of sherpas to paying climbers, and the number of high camps. This necessitated the use of a multiscale network and regression. This topic and the results are personally exciting! I will be presenting this work at Complex Networks 2021 in Madrid. Preprint here
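The regression part is straightforward to set up. A hypothetical sketch with statsmodels, using synthetic stand-in data and illustrative variable names (not the preprint's actual variables or results):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({                                 # synthetic expedition-level table
    "used_oxygen": rng.integers(0, 2, n),
    "age": rng.normal(40, 10, n),
    "sherpa_ratio": rng.uniform(0, 2, n),
    "n_high_camps": rng.integers(1, 6, n),
})
logits = -2 + 2.5 * df.used_oxygen - 0.04 * (df.age - 40) + 0.8 * df.sherpa_ratio
df["summited"] = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = smf.logit("summited ~ used_oxygen + age + sherpa_ratio + n_high_camps"
                  " + used_oxygen:age",             # an example interaction term
                  data=df).fit(disp=0)
print(model.summary())                              # coefficients ~ factor contributions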

Selected Past Research Projects

Generalized similarity learning with limited data
We investigate the ways in which a machine learning architecture known as reservoir computing, which loosely resembles neural dynamics, learns concepts such as “similar” and “different” and other relationships between image pairs, and generalizes these concepts to previously unseen classes of data. We find that the reservoir acts as a nonlinear filter that projects the input into the high-dimensional reservoir space, where inputs from the same category cluster together, allowing for easy generalization to unseen data. Our architecture outperforms conventional pair-based methods such as Siamese neural networks.
Advised by Yiannis Aloimonos and Michelle Girvan
Published in Complexity. Manuscript here
Synchronization patterns in fractal networks: from structure to function
We investigate complex synchronization patterns, such as cluster synchronization and partial amplitude death, in networks of coupled Stuart–Landau oscillators with fractal (hierarchical) connectivities. The study of fractal or self-similar topology is motivated by the network of neurons in the brain. Our results show a direct correlation between topology and dynamics: hierarchical networks display hierarchical dynamics.
Advised by Prof. Eckehard Schoell, PhD
Published in Philosophical Transactions of the Royal Society.
Manuscript here
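The model itself is compact. A minimal Euler-integration sketch of diffusively coupled Stuart–Landau oscillators, with a two-block toy adjacency matrix standing in for the fractal topologies of the paper:

import numpy as np

def simulate(A, lam=0.1, omega=1.0, sigma=0.3, T=200.0, dt=0.01, seed=0):
    """Coupled Stuart-Landau oscillators: dz_j/dt = (lam + i*omega - |z_j|^2) z_j
    + sigma * sum_k A_jk (z_k - z_j), integrated with forward Euler."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=len(A)) + 1j * rng.normal(size=len(A))
    deg = A.sum(axis=1)
    for _ in range(int(T / dt)):
        coupling = A @ z - deg * z                  # diffusive coupling
        z = z + dt * ((lam + 1j * omega - np.abs(z) ** 2) * z + sigma * coupling)
    return z

# toy hierarchical connectivity: two tight blocks, weak links between them
A = np.block([[np.ones((4, 4)), 0.1 * np.ones((4, 4))],
              [0.1 * np.ones((4, 4)), np.ones((4, 4))]])
np.fill_diagonal(A, 0)
z = simulate(A)
print("final amplitudes:", np.abs(z).round(3))      # look for cluster structure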
Steganography and double key protection using chaotic maps
We developed an image encryption algorithm based on the chaotic logistic map and cat map. A secret key is used to determine the initial conditions and the input to the chaotic map function. The Lorenz map is then used for successive pixel encryption. To make the cipher more robust against attack, the secret key is modified after encrypting each pixel of the image using Arnold's cat map. Decryption follows the exact reverse process and recovers the original image with minimal loss. The encrypted image is then hidden using a steganography technique that uses a cover image, along with the Lorenz map, to determine the locations of the pixels to be hidden in the cover. Tests of efficiency and key sensitivity show that our double-key method provides secure image encryption and real-time transmission.
Advised by Dr. Bijil Prakash
Published in Proceedings of Fourth International Conference on Soft Computing, 2014
Manuscript here
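The keystream idea at the heart of such schemes fits in a few lines. A bare-bones sketch (a logistic-map XOR stream cipher only; the published scheme's per-pixel cat-map key updates, Lorenz-map stage, and steganographic hiding are omitted):

import numpy as np

def logistic_keystream(key, n, r=3.99, burn=1000):
    """Iterate the chaotic logistic map x <- r*x*(1-x) from a key-derived
    seed, discard the transient, and quantize each iterate to a byte."""
    x = 0.05 + 0.9 * ((key % 10**8) / 10**8)        # seed in (0,1) from the secret key
    for _ in range(burn):
        x = r * x * (1 - x)
    ks = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        ks[i] = int(x * 256) % 256
    return ks

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
ks = logistic_keystream(key=123456789, n=img.size).reshape(img.shape)
cipher = img ^ ks                                   # encrypt: XOR with keystream
assert np.array_equal(cipher ^ ks, img)             # decrypt by XORing again

Because the logistic map is chaotic, a tiny change in the key produces a completely different keystream, which is the source of the key sensitivity measured in the paper.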
Modeling of binocular vision in the V1 and LGN brain regions
We processed data from the LGN and V1 regions of the brain in order to identify how visual stimuli from the two individual retinas are nonlinearly combined to produce the corresponding neural response. With knowledge of the stimulus and data from neural recordings in the brain, we fit nonlinear filters that are convolved with the input to produce the desired neural signals.
Advised by Prof. Dan Butts
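A one-eye caricature of this fitting problem (synthetic data; the actual work fits nonlinear combinations of the two retinal streams): simulate a linear-nonlinear-Poisson neuron, then recover its temporal filter by ridge regression on the lagged stimulus.

import numpy as np

rng = np.random.default_rng(0)
T, n_lags = 5000, 20
stim = rng.normal(size=T)
k_true = np.exp(-np.arange(n_lags) / 5) * np.sin(np.arange(n_lags) / 2)
drive = np.convolve(stim, k_true)[:T]               # stimulus passed through the filter
resp = rng.poisson(np.log1p(np.exp(drive)))         # softplus nonlinearity, Poisson spikes

# design matrix of lagged stimulus values, then ridge regression
X = np.stack([np.roll(stim, i) for i in range(n_lags)], axis=1)
X[:n_lags] = 0                                      # kill wrap-around rows
k_hat = np.linalg.solve(X.T @ X + 10 * np.eye(n_lags), X.T @ resp)
print("filter recovery corr:", np.corrcoef(k_hat, k_true)[0, 1].round(2))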
BiConvLSTM for violence detection in videos
We introduce a bidirectional convolutional LSTM architecture for violence detection. Encoding temporal features in both directions allows for a better video representation. Our method performs comparably with state-of-the-art architectures on benchmark datasets.
Conference proceedings, Workshop for Objectionable Content and Misinformation at ECCV 2018. Publication here
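A simplified PyTorch stand-in conveys the structure (per-frame CNN features followed by a bidirectional vector LSTM; the published model uses convolutional LSTM cells, which keep spatial feature maps through time):

import torch
import torch.nn as nn

class BiLSTMVideoClassifier(nn.Module):
    def __init__(self, feat=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                   # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)        # violence / no-violence logit

    def forward(self, video):                       # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        f = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        h, _ = self.lstm(f)                         # temporal features, both directions
        return self.head(h.mean(dim=1)).squeeze(-1)

clips = torch.randn(2, 16, 3, 64, 64)               # two 16-frame clips
print(BiLSTMVideoClassifier()(clips).shape)         # torch.Size([2])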

Encoding of chaotic dynamics in the weights of a recurrent neural network
We study the dynamical properties of a machine learning model called reservoir computing (RC) using a novel mathematical tool called the 'directional fiber' in order to gain insight into how information is encoded through learning. The RC is trained to predict the chaotic Lorenz signal; chaotic signals are just that, mathematically chaotic, and hence extremely hard to predict. We find that after training, the reservoir, which is itself a dynamical system, emulates properties of the system it is trained on, i.e., it contains a higher-dimensional projection of the Lorenz fixed points.
IJCNN 2019 paper
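The simplest version of the fixed-point question can be posed directly (a naive sketch with random restarts; directional fibers trace solution curves far more systematically than this):

import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)
N = 50
A = 1.5 * rng.normal(0, 1 / np.sqrt(N), (N, N))     # stand-in recurrent weights

def residual(r):                                    # fixed points solve tanh(A r) = r
    return np.tanh(A @ r) - r

fixed_points = []
for _ in range(30):                                 # random restarts
    sol = root(residual, rng.normal(size=N))
    if sol.success and not any(np.allclose(sol.x, f, atol=1e-4) for f in fixed_points):
        fixed_points.append(sol.x)
print("distinct fixed points found:", len(fixed_points))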

For questions and code please contact me
Link to my github