About me

Hi! My name is Shyam Sudhakaran, and I'm currently a Machine Learning Engineer at Autodesk Research. Before that, I was a software engineer at Amazon Web Services and a part-time researcher at the IT University of Copenhagen. At AWS, I helped develop a serverless infrastructure responsible for collecting and storing millions of configurations from network devices across the AWS network. I'm super interested in leveraging large language models (LLMs) for open-ended environment generation, reinforcement learning, and multimodal generative modelling.



MarioGPT generating a level

NCA generating a tree in Minecraft

Publications

For a full list, visit Google Scholar


  • MarioGPT

    Preprint -- Shyam Sudhakaran, Miguel González-Duque, Claire Glanois, Matthias Freiberger, Elias Najarro, Sebastian Risi

    paper | github | huggingface demo

    MarioGPT is a finetuned GPT2 model (specifically, distilgpt2) trained on Super Mario Bros levels. MarioGPT can generate levels guided by text prompts. The generation is not perfect, but we believe this is a great first step toward more controllable and diverse level / environment generation.
    Appeared in TechCrunch, Kotaku, PCMag, SlashGear, and Dexerto
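
    To give a sense of the sampling loop, here is a minimal sketch: levels are serialized as strings of tile tokens, so generation is ordinary autoregressive sampling from a causal LM. The base distilgpt2 checkpoint, tile prefix, and generation settings below are placeholders for illustration only; the repo wraps this with the actual fine-tuned weights and the text-prompt conditioning.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Illustration only: load the base model. In practice you would load the
        # fine-tuned MarioGPT weights linked from the huggingface demo above.
        tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
        model = AutoModelForCausalLM.from_pretrained("distilgpt2")

        # Levels are strings of tile characters, so sampling a level is plain
        # token-by-token generation from a (possibly empty) level prefix.
        inputs = tokenizer("--------", return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=256, do_sample=True,
                             temperature=1.0, pad_token_id=tokenizer.eos_token_id)
        print(tokenizer.decode(out[0]))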

  • Skill Decision Transformer

    NeurIPS 2022 Foundation Models for Decision Making workshop -- Shyam Sudhakaran, Sebastian Risi

    paper | github

    Skill Decision Transformer draws inspiration from hindsight relabelling and skill discovery methods to discover a diverse set of primitive behaviors, or skills. We show that Skill DT can not only perform offline state-marginal matching (SMM), but can also discover descriptive behaviors that can be easily sampled. Furthermore, we show that through purely reward-free optimization, Skill DT remains competitive with supervised offline RL approaches on the D4RL benchmark.

  • Growing 3D Artefacts and Functional Machines with Neural Cellular Automata

    ALIFE 2021 -- Shyam Sudhakaran, Djordje Grbic, Siyan Li, Adam Katona, Elias Najarro, Claire Glanois, Sebastian Risi

    paper | github

    In this work, we propose an extension of NCAs to 3D, utilizing 3D convolutions on a voxel grid. We show that despite their simplicity, NCAs are capable of growing complex entities such as castles, apartment blocks, and trees, and even regenerating functional machines. Videos of growth: https://youtu.be/-EzztzKoPeo.
    Appeared in Fast Company
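
    A minimal sketch of the kind of update rule involved (with assumed layer names and sizes, in PyTorch, rather than the paper's exact code): each cell perceives its 3x3x3 neighbourhood with a Conv3d and adds a small residual update to its channels, and the same rule is applied at every voxel for many steps.

        import torch
        import torch.nn as nn

        class NCA3D(nn.Module):
            """Illustrative 3D NCA update rule on a voxel grid."""
            def __init__(self, channels=16, hidden=64):
                super().__init__()
                self.perceive = nn.Conv3d(channels, hidden, kernel_size=3, padding=1)
                self.update = nn.Sequential(
                    nn.ReLU(),
                    nn.Conv3d(hidden, hidden, kernel_size=1),
                    nn.ReLU(),
                    nn.Conv3d(hidden, channels, kernel_size=1),
                )

            def forward(self, state, steps=32):
                # The same local rule is applied to every cell at every step.
                for _ in range(steps):
                    state = state + self.update(self.perceive(state))
                return state

        # Grow from a single seed voxel in a 32^3 grid (batch, channels, D, H, W).
        seed = torch.zeros(1, 16, 32, 32, 32)
        seed[:, :, 16, 16, 16] = 1.0
        voxels = NCA3D()(seed)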

  • Goal-Guided Neural Cellular Automata: Learning to Control Self-Organising Systems

    ICLR 2022 Cells2Societies workshop -- Shyam Sudhakaran, Elias Najarro, Sebastian Risi

    paper | github

    NCAs are flexible and robust computational systems, but are inherently uncontrollable during and after their growth process. In this work, we attempt to control these systems using Goal-Guided Neural Cellular Automata (GoalNCA), which leverages goal encodings to control cell behavior dynamically, giving artificial cells the ability to morph and locomote.

  • HyperNCA: Growing Developmental Networks with Neural Cellular Automata

    ICLR 2022 Cells2Societies workshop -- Elias Najarro, Shyam Sudhakaran, Sebastian Risi

    paper | github

    Here we propose a new hypernetwork approach to grow artificial neural networks based on neural cellular automata (NCA). Inspired by self-organising systems and information-theoretic approaches to developmental biology, we show that our HyperNCA method can grow neural networks capable of solving common reinforcement learning tasks.

Projects

For a full list, visit GitHub


  • Hyper-NN: Easy Hypernetworks in Pytorch and Jax

    github

    hyper-nn gives users the ability to create easily customizable hypernetworks for almost any torch.nn.Module from PyTorch and flax.linen.Module from Flax. Our hypernetwork objects are themselves torch.nn.Modules and flax.linen.Modules, allowing easy integration with existing systems. For PyTorch, we make use of the amazing functorch library.
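
    As a minimal illustration of the underlying idea (not hyper-nn's actual API), the sketch below uses a small generator network to produce all of a target network's weights from a learned embedding, then runs the target with those generated weights via torch.func (the successor to functorch):

        import torch
        import torch.nn as nn
        from torch.func import functional_call

        # Target network whose weights will be generated rather than learned directly.
        target = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))
        param_shapes = {name: p.shape for name, p in target.named_parameters()}
        num_target_params = sum(p.numel() for p in target.parameters())

        class HyperNetwork(nn.Module):
            def __init__(self, embedding_dim=16):
                super().__init__()
                # Learned embedding that the generator maps to target weights.
                self.embedding = nn.Parameter(torch.randn(embedding_dim))
                self.generator = nn.Sequential(
                    nn.Linear(embedding_dim, 64), nn.ReLU(),
                    nn.Linear(64, num_target_params))

            def forward(self, x):
                flat = self.generator(self.embedding)
                # Slice the flat vector back into the target's parameter shapes.
                params, offset = {}, 0
                for name, shape in param_shapes.items():
                    n = shape.numel()
                    params[name] = flat[offset:offset + n].view(shape)
                    offset += n
                # Run the target network with the generated parameters.
                return functional_call(target, params, (x,))

        hyper = HyperNetwork()
        out = hyper(torch.randn(5, 8))  # gradients flow into the hypernetwork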

  • Optim-RL: Reinforcement Learning algorithms implemented as PyTorch optimizers

    github

    A PyTorch-based reinforcement learning library focused on providing flexible, modular components that allow for easy customization and integration into existing workflows. With optim-rl, it becomes trivial to mix algorithms, for example combining PPO's (Proximal Policy Optimization) optimization step with a simple ES (Evolution Strategies) step.
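
    As a sketch of what "an RL algorithm as an optimizer" can look like (a generic illustration, not optim-rl's actual API), here is a toy Evolution Strategies step wrapped in the torch.optim.Optimizer interface, where the closure runs an episode and returns the reward for the current weights:

        import torch
        from torch.optim import Optimizer

        class SimpleES(Optimizer):
            """Toy ES: antithetic Gaussian perturbations estimate a reward gradient."""
            def __init__(self, params, lr=0.02, sigma=0.1, popsize=16):
                super().__init__(params, dict(lr=lr, sigma=sigma, popsize=popsize))

            @torch.no_grad()
            def step(self, closure):
                # closure() evaluates an episode's return for the current parameters.
                for group in self.param_groups:
                    lr, sigma, pop = group["lr"], group["sigma"], group["popsize"]
                    for p in group["params"]:
                        # For clarity, each parameter tensor is perturbed independently.
                        grad_est = torch.zeros_like(p)
                        for _ in range(pop):
                            eps = torch.randn_like(p)
                            p.add_(sigma * eps)        # evaluate theta + sigma * eps
                            r_pos = closure()
                            p.add_(-2 * sigma * eps)   # evaluate theta - sigma * eps
                            r_neg = closure()
                            p.add_(sigma * eps)        # restore theta
                            grad_est += (r_pos - r_neg) * eps
                        p.add_(lr / (2 * sigma * pop) * grad_est)  # ascend on reward
                return None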

  • DeepNeuroevolutionCars: Evolving cars to drive

    github

    In this project, self-driving cars controlled by neural networks are evolved to drive through a maze, using a C# reimplementation of Deep GA (Deep Genetic Algorithm).

  • BareBonesAI: Machine Learning algorithms built from scratch

    github

    In this project, popular machine learning algorithms are built from scratch using basic libraries such as NumPy.
    Algorithms implemented: KNN, Linear Discriminant Analysis, Regression, Naive Bayes, Fuzzy K-means, PCA, Spectral Clustering.
    Optimization algorithms implemented: Stochastic Gradient Descent, Conjugate Gradient Descent, Jacobi Iteration.
    In addition, feedforward neural networks are implemented with working backpropagation.
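
    For example, a from-scratch k-nearest-neighbours classifier in the same spirit (a small sketch, not necessarily the repo's exact implementation):

        import numpy as np

        def knn_predict(X_train, y_train, X_test, k=3):
            # Pairwise Euclidean distances between every test point and every training point.
            dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
            # Indices of the k closest training points for each test point.
            nearest = np.argsort(dists, axis=1)[:, :k]
            # Majority vote among the k neighbours' labels.
            votes = y_train[nearest]
            return np.array([np.bincount(row).argmax() for row in votes])

        X_train = np.random.rand(100, 2)
        y_train = (X_train[:, 0] > 0.5).astype(int)
        print(knn_predict(X_train, y_train, np.array([[0.9, 0.1], [0.1, 0.2]])))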

  • MultiAgentBattleSim

    github

    An interactive multi-agent AI research platform for testing state-of-the-art algorithms in customizable battle scenarios where teams of agents compete to survive.

  • Jax-NCA

    github | huggingface

    An efficient implementation of Neural Cellular Automata, written in JAX. NCAs are autoregressive models like RNNs, where new states are computed from previous ones. With JAX, these rollouts become much more performant using jax.lax.scan and jax.jit.
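
    A minimal sketch of that pattern (generic, with stand-in weights rather than the repo's learned model): a single NCA step is an ordinary function of the grid state, and jax.lax.scan under jax.jit unrolls it efficiently for many steps.

        import jax
        import jax.numpy as jnp
        from functools import partial

        def nca_step(state, _):
            # Stand-in for a learned perception/update network: each cell mixes its
            # 3x3 neighbourhood via a convolution and adds a small residual update.
            kernel = jnp.ones((3, 3, state.shape[-1], state.shape[-1])) / 9.0
            perception = jax.lax.conv_general_dilated(
                state[None], kernel, window_strides=(1, 1), padding="SAME",
                dimension_numbers=("NHWC", "HWIO", "NHWC"))[0]
            return state + 0.1 * jnp.tanh(perception), None

        @partial(jax.jit, static_argnames="num_steps")
        def rollout(init_state, num_steps=64):
            # lax.scan threads the grid state through num_steps updates without a Python loop.
            final_state, _ = jax.lax.scan(nca_step, init_state, None, length=num_steps)
            return final_state

        grid = jnp.zeros((32, 32, 16)).at[16, 16, :].set(1.0)  # single seed cell
        grown = rollout(grid)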

Experience

Download Resume


  1. Modl.ai

    -- Remote

    Researcher (Contract)

    May 2023 — Present

    - Trained a suite of AI agents that leverage transformer-based neural network architectures to learn from thousands of gameplay videos and bot data for FPS games.

    - Improved the efficiency of these agents on CPU with techniques such as distilling knowledge into a smaller network and rewriting components in Rust- and C++-accelerated frameworks (ggml, candle).

  2. IT University of Copenhagen

    -- Remote

    Researcher

    January 2021 — May 2023

    - Developed novel LLM-based open-ended level generation algorithms, skill-discovery reinforcement learning algorithms, and compact, biologically inspired algorithms. See Publications for more details.

  3. Amazon Web Services

    -- Cupertino, CA

    Software Engineer

    August 2019 — February 2022

    - Designed and developed a serverless infrastructure responsible for collecting and storing millions of configurations from network devices across the AWS network. The service was built with Lambda and Step Functions for on-demand and scheduled workflow execution.
    - Led a project involving the automated transfer of terabytes of data between multiple AWS data centers.
    - Enhanced service monitoring and metrics with a custom ML-powered time series correlation analysis engine, enabling quick root-cause analysis.

  4. Fujitsu Labs

    -- Sunnyvale, CA

    Research Intern

    May 2019 — August 2019

    - Benchmarked and improved the graph neural network algorithm "Deep Tensor" using newly generated datasets that simulated real-world scenarios such as network attacks and airline traffic.
    - Improved the data explainability feature of the "Accessible Deep Tensor" platform, helping clients better understand their data.

  5. Stanford University

    -- Stanford, CA

    Research Intern

    Feb 2018 — May 2019

    - Furthered natural language emergence research by developing reinforcement learning agents that communicate with each other using generated languages.

Contact

Contact Form