Computational Mathematics and Statistics Seminar by Jiangrui Kang: Convexification of Maximum Mean Discrepancy and Further Topics

Time: -

Location: RE 102
Speaker: Jiangrui Kang, Ph.D. candidate, Illinois Institute of Technology
 
Title: Convexification of Maximum Mean Discrepancy and Further Topics

Abstract: In this talk I will introduce the trick proposed by Luc Pronzato (Performance analysis of greedy algorithms for minimising a Maximum Mean Discrepancy, https://arxiv.org/abs/2101.07564) for convexifying the maximum mean discrepancy (MMD). This trick allows the MMD associated with a symmetric positive definite (SPD), characteristic kernel to be minimized by a certain type of Frank-Wolfe algorithm, avoiding combinatorial optimization procedures. If time permits, I will also introduce the core-set approach proposed by Sener and Savarese (Active Learning for Convolutional Neural Networks: A Core-Set Approach, https://dm-gatech.github.io/CS8803-Fall2018-DML-Papers/coreset-cnn.pdf), which minimizes a certain type of statistical discrepancy to enhance the generalization ability of machine learning models.
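To make the setting concrete, below is a minimal sketch (Python/NumPy) of Frank-Wolfe minimization of a squared MMD over the probability simplex, assuming a Gaussian kernel and a fixed candidate set. The function names, bandwidth, candidate set, and step-size rule are illustrative assumptions, not details taken from the talk or from Pronzato's paper.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Gaussian kernel (SPD and characteristic); returns the matrix of k(a_i, b_j).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def frank_wolfe_mmd(X, Y, n_iter=200, sigma=1.0):
    # Minimize MMD^2( sum_i w_i delta_{x_i}, empirical(Y) ) over weights w in the simplex.
    # With p_i = (1/m) sum_j k(x_i, y_j), MMD^2(w) = w^T K_XX w - 2 w^T p + const
    # is a convex quadratic in w, so Frank-Wolfe applies directly.
    K_XX = gaussian_kernel(X, X, sigma)
    p = gaussian_kernel(X, Y, sigma).mean(axis=1)
    n = X.shape[0]
    w = np.full(n, 1.0 / n)               # uniform starting weights
    for t in range(n_iter):
        grad = K_XX @ w - p               # gradient of the quadratic (up to a factor of 2)
        i = int(np.argmin(grad))          # linear oracle over the simplex picks one vertex,
                                          # i.e. one candidate point: no subset enumeration
        gamma = 2.0 / (t + 2.0)           # standard open-loop Frank-Wolfe step size
        w = (1.0 - gamma) * w
        w[i] += gamma
    return w

# Example usage: reweight uniform candidates to match a 2-D Gaussian sample.
rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 2))                 # target sample
X = rng.uniform(-3.0, 3.0, size=(200, 2))     # candidate support points
w = frank_wolfe_mmd(X, Y)
print("five largest weights:", np.round(np.sort(w)[-5:], 4))

Each iteration moves the weights toward a single candidate point, which is why the greedy/Frank-Wolfe viewpoint sidesteps the combinatorial search over subsets mentioned in the abstract.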
 
 
Computational Mathematics and Statistics Seminar
