Research
Humans, animals, and other living beings have a natural ability to learn autonomously throughout their lives and to adapt quickly to their surroundings, but computers lack such abilities.
Our goal is to bridge this gap between the learning of living beings and that of computers.
We are machine learning researchers with expertise in areas such as approximate inference, Bayesian statistics, continuous optimization, and information geometry.
We work on a variety of learning problems, especially those involving supervised, continual, active, federated, online, and reinforcement learning.
Currently, we are developing algorithms that enable computers to autonomously learn to perceive, act, and reason throughout their lives.
Our research often brings together ideas from a variety of theoretical and applied fields, such as mathematical optimization,
Bayesian statistics, information geometry, signal processing, and control systems.
For more information, see our current publications and the following pages:
We are also grateful for the following external funding (amounts are approximate):
- (2023-2026, USD 32,000) KAKENHI Grant-in-Aid for Early-Career Scientists, [23K13024]
- (2021-2026, USD 2.23 million) JST-CREST and French ANR grant, The Bayes-Duality Project
- (2020-2023, USD 167,000) KAKENHI Grant-in-Aid for Scientific Research (B), Life-Long Deep Learning using Bayesian Principles
- (2019-2022, USD 237,000) External funding through companies for several Bayes-related projects
Blog
This blog provides a medium for our researchers to present their recent research findings, insights, and updates. The posts
are written with a general audience in mind and aim to provide an accessible introduction to our research.
March 15, 2024.
Bayesian methods are considered impractical at large scale, and even when we are able to make them work for large deep networks (say using approximations), we don’t expect them to match the performance of optimizers... Continue
December 16, 2022.
What makes a good meta-review? It clearly describes the whole review process and gives clear reasons behind the decisions. Below are a few tips for Area Chairs (or Editors) of Machine Learning Conferences and... Continue
September 18, 2022.
We briefly review the two classical problems of distribution estimation and identity testing (in the context of property testing), then propose to extend them to a Markovian setting. We will see that the sample complexity... Continue
November 24, 2021.
In our previous post, we derived a natural-gradient variational inference (NGVI) algorithm for neural networks, detailing all our approximations and providing intuition. We saw it converge faster than more naive variational inference algorithms on relatively... Continue
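To give a flavor of the natural-gradient variational inference (NGVI) idea mentioned in this post, here is a minimal toy sketch of our own (an illustration, not code from the post): we fit a Gaussian q(theta) = N(mu, 1/lam) to the posterior of a 1-D model with quadratic loss 0.5*(theta - 3)^2 and prior N(0, 1), whose exact posterior is N(1.5, 1/2). The step size and variable names are assumptions for this example.

```python
# Toy NGVI sketch: Gaussian variational posterior for a 1-D
# quadratic-loss model with a standard-normal prior.
mu, lam = 0.0, 1.0      # variational mean and precision
prior_prec = 1.0        # prior N(0, 1) has precision 1
beta = 0.1              # step size (an assumption for this demo)

for _ in range(500):
    g = mu - 3.0        # E_q[grad of loss]; exact for a quadratic loss
    h = 1.0             # E_q[Hessian of loss]; constant here
    # Natural-gradient updates: precision first, then the mean,
    # preconditioned by the new precision.
    lam = (1 - beta) * lam + beta * (h + prior_prec)
    mu = mu - beta / lam * (g + prior_prec * mu)

print(round(mu, 3), round(lam, 3))  # → 1.5 2.0
```

On this conjugate toy problem the iterates converge to the exact posterior mean 1.5 and precision 2; for neural networks, the expectations above must instead be approximated, which is what the post discusses.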
April 13, 2021.
Bayesian Deep Learning hopes to tackle neural networks’ poorly-calibrated uncertainties by injecting some level of Bayesian thinking. There has been mixed success: progress is slow because scaling Bayesian methods to such huge models is difficult!... Continue
November 05, 2020.
A very old and yet very exciting problem in statistics is the definition of a universal estimator $\hat{\theta}$. An estimation procedure that would work all the time. Close your eyes, push the button, it works,... Continue
Code
Here we list research code that has been open-sourced to accompany recent publications. Our team’s GitHub homepage: https://github.com/team-approx-bayes.