My name is Amir Saffari and this is my website and blog. I have a PhD in Computer Vision and Machine Learning and work as a Principal ML Scientist at Alexa AI, Amazon. Currently, I focus on generative AI: training large language models (LLMs), teaching LLMs to use thousands of tools and APIs to accomplish personalised and complex tasks in real time, reasoning using weak supervision, and reinforcement-learning-based program synthesis.
2023: We have two papers accepted at ACL workshops exploring how to augment LLMs with knowledge graphs for zero-shot question answering: KAPING and Rigel-KAPING.
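To give a flavour of the knowledge-graph-augmented prompting idea behind these papers, here is a minimal, illustrative sketch rather than the papers' actual implementation: retrieve facts related to the question from a knowledge graph, verbalise them as text, and prepend them to a zero-shot prompt. The toy triple store, the naive retrieval, and the `call_llm` stand-in are all assumptions made for the example.

```python
# Illustrative sketch of knowledge-graph-augmented zero-shot QA prompting.
# The toy triple store and the `call_llm` stub are placeholders, not the papers' code.

from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# A tiny in-memory "knowledge graph" used purely for illustration.
KG: List[Triple] = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "population", "2.1 million"),
]

def retrieve_triples(question: str, kg: List[Triple], top_k: int = 5) -> List[Triple]:
    """Naive retrieval: keep triples whose subject or object appears in the question."""
    q = question.lower()
    hits = [t for t in kg if t[0].lower() in q or t[2].lower() in q]
    return hits[:top_k]

def verbalise(triples: List[Triple]) -> str:
    """Turn triples into short natural-language facts to include in the prompt."""
    return "\n".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)

def answer(question: str, call_llm: Callable[[str], str]) -> str:
    """Zero-shot QA: prepend retrieved knowledge to the question and query the LLM."""
    facts = verbalise(retrieve_triples(question, KG))
    prompt = (
        "Below are facts that may help answer the question.\n"
        f"{facts}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    # Stub LLM so the sketch runs end-to-end without any external service.
    echo_llm = lambda prompt: f"[LLM would answer given a prompt of {len(prompt)} chars]"
    print(answer("What country is Paris the capital of?", echo_llm))
```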
A few interesting papers were submitted to ICLR 2019 on generative models for music:
- Coupled Recurrent Models for Polyphonic Music Composition
- Adversarial Audio Synthesis
- HAPPIER: Hierarchical Polyphonic Music Generative RNN
- GANSynth: Adversarial Neural Audio Synthesis
- Synthnet: Learning synthesizers end-to-end
- Autoencoder-based Music Translation
- Music Transformer
- Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset
- Modulated Variational Auto-Encoders for Many-to-Many Musical Timbre Transfer
- TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer

Will update if I come across more :)
Audio recording and slides for a talk I gave at "The Impact of ML on Society", part of The Foundation for Science and Technology debates hosted by The Royal Society.
The topic of how to approach Machine Learning (ML) research projects has come up many times over the past several years in conversations with people across the field and industry. One of the fundamental aspects of ML projects is that they often carry larger risks and unknowns than other projects being undertaken within your company's engineering department.
While we have made tremendous progress in ML technologies, modelling algorithms, platforms, and libraries, we are still very far from having what I call textbook algorithms that would reduce the risk of implementing a project.