Title: Tensor Network Representations in Machine Learning
Abstract: Tensor networks are factorizations of very large tensors into networks of smaller tensors; they can be viewed as a general extension of classical tensor decompositions to the high-dimensional case. Recently, tensor networks have also found increasing application in machine learning, for example in model compression and in accelerating computation. In this talk, I will first present the general landscape of tensor-network research in machine learning, and then introduce our studies on fundamental tensor network models, algorithms, and applications. In particular, the tensor ring decomposition model is introduced and shown to be a powerful and efficient representation. In addition, I will present recent progress on how tensor networks can be employed to solve challenging problems in tensor completion, multi-task learning, and multi-modal learning.
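As a rough illustration of the tensor ring (TR) format mentioned in the abstract (a minimal NumPy sketch under standard assumptions, not the speaker's implementation): an order-N tensor is stored as N third-order cores G_k of shape (r_k, n_k, r_{k+1}), with the last rank wrapping around to the first (r_{N+1} = r_1), and each entry is the trace of a product of core slices.

```python
import numpy as np

def tr_to_full(cores):
    """Contract a list of tensor-ring cores into the full tensor.

    Each core has shape (r_k, n_k, r_{k+1}); the ring is closed by
    tracing the first rank index against the last one.
    """
    full = cores[0]  # shape (r_1, n_1, r_2)
    for core in cores[1:]:
        # contract the trailing rank index with the next core's leading rank
        full = np.tensordot(full, core, axes=([-1], [0]))
    # close the ring: trace over the first and last rank indices
    return np.trace(full, axis1=0, axis2=-1)

rng = np.random.default_rng(0)
shape, rank = (4, 5, 6), 3
ranks = [rank] * (len(shape) + 1)  # r_1 = r_4 = rank (ring closure)
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1]))
         for k in range(len(shape))]

T = tr_to_full(cores)
print(T.shape)  # (4, 5, 6)

# A single entry equals the trace of the product of core slices.
i, j, k = 1, 2, 3
entry = np.trace(cores[0][:, i, :] @ cores[1][:, j, :] @ cores[2][:, k, :])
print(np.isclose(T[i, j, k], entry))  # True
```

The storage cost grows linearly in the number of modes (one small core per mode) rather than exponentially, which is the efficiency the abstract refers to.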
Qibin Zhao received the Ph.D. degree in computer science from Shanghai Jiao Tong University, China, in 2009. He was a research scientist at the RIKEN Brain Science Institute from 2009 to 2017. He is currently the leader of the Tensor Learning Unit at the RIKEN Center for Advanced Intelligence Project (AIP), a visiting professor at the Saitama Institute of Technology, and a visiting associate professor at the Tokyo University of Agriculture and Technology, Japan. His research interests include machine learning, tensor factorization and tensor networks, computer vision, and brain signal processing. He has published more than 120 papers in international journals and conferences and two monographs. He is a senior member of IEEE and serves as an editorial board member for Science China Technological Sciences.