Data Analysis Technology Laboratory, Henan University


A Geometric Understanding of Deep Learning
2021-03-02 20:56  

Title: A Geometric Understanding of Deep Learning

Speaker: Xianfeng Gu (顾险峰)

Affiliation: State University of New York, USA

Time: March 3, 8:30

Zoom ID: 210 089 8623

Password: 123456


Abstract:

This work introduces an optimal transportation (OT) view of generative adversarial networks (GANs). Natural datasets have intrinsic patterns, which can be summarized as the manifold distribution principle: the distribution of a class of data concentrates near a low-dimensional manifold. GANs mainly accomplish two tasks: manifold learning and probability distribution transformation. The latter can be carried out using the classical OT method. From the OT perspective, the generator computes the OT map, while the discriminator computes the Wasserstein distance between the generated data distribution and the real data distribution; both can be reduced to a convex geometric optimization process. Furthermore, OT theory reveals an intrinsically collaborative, rather than competitive, relation between the generator and the discriminator, as well as the fundamental reason for mode collapse. We also propose a novel generative model that uses an autoencoder (AE) for manifold learning and an OT map for probability distribution transformation. This AE-OT model improves theoretical rigor and transparency as well as computational stability and efficiency; in particular, it eliminates mode collapse. The experimental results validate our hypothesis and demonstrate the advantages of the proposed model.
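To make the two-step decomposition in the abstract concrete, below is a minimal sketch of an AE-OT style pipeline: an autoencoder learns the data manifold, and an optimal-transport map pushes a simple prior onto the empirical latent-code distribution before decoding. The network sizes, the toy data, and the use of a discrete assignment (scipy's linear_sum_assignment) as a crude stand-in for the semi-discrete convex geometric OT solver referenced in the talk are illustrative assumptions, not the speaker's implementation.

```python
# Sketch of the AE-OT decomposition described in the abstract:
# (1) an autoencoder learns the data manifold (manifold learning),
# (2) an OT map transports a simple prior onto the latent-code distribution
#     (probability distribution transformation), here approximated by a
#     discrete assignment problem rather than the convex geometric solver.

import torch
import torch.nn as nn
import numpy as np
from scipy.optimize import linear_sum_assignment

class AutoEncoder(nn.Module):
    def __init__(self, dim_in=784, dim_latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                 nn.Linear(256, dim_latent))
        self.dec = nn.Sequential(nn.Linear(dim_latent, 256), nn.ReLU(),
                                 nn.Linear(256, dim_in))

    def forward(self, x):
        return self.dec(self.enc(x))

def fit_autoencoder(model, data, epochs=10, lr=1e-3):
    """Step 1: manifold learning -- minimize reconstruction error."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon = model(data)
        loss = ((recon - data) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def ot_map_to_codes(noise, codes):
    """Step 2: distribution transformation -- transport prior samples to the
    empirical latent codes by solving a discrete assignment problem with
    squared-Euclidean cost (a stand-in for the semi-discrete OT map)."""
    cost = ((noise[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return codes[cols]  # each noise sample is matched to one latent code

if __name__ == "__main__":
    data = torch.rand(256, 784)                   # toy data in place of images
    ae = fit_autoencoder(AutoEncoder(), data)
    with torch.no_grad():
        codes = ae.enc(data).numpy()              # latent codes on the manifold
    noise = np.random.randn(256, codes.shape[1])  # samples from the prior
    transported = torch.tensor(ot_map_to_codes(noise, codes),
                               dtype=torch.float32)
    with torch.no_grad():
        generated = ae.dec(transported)           # decode transported codes
```

Because the transport step is solved explicitly rather than learned adversarially, there is no generator-discriminator competition in this decomposition, which is the sense in which the talk argues the relation is collaborative and mode collapse can be avoided.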

