Jikai Jin

Ph.D. student

Stanford University

Biography

Welcome to my personal website! I am an incoming Ph.D. student at the Institute for Computational and Mathematical Engineering (ICME) at Stanford University. Prior to joining Stanford, I obtained my bachelor's degree in computational mathematics from the School of Mathematical Sciences, Peking University, where I was fortunate to have Prof. Liwei Wang as my research advisor. My research is highly interdisciplinary, spanning machine learning, statistics, and operations research. While it focuses primarily on theoretical aspects, its ultimate goal is to develop state-of-the-art solutions for important real-world problems. If you share similar interests, feel free to contact me via email or WeChat. Download my résumé.

Interests
  • Statistics (non-parametric and causal inference)
  • Operations research (experiment design, uncertainty in decision-making)
  • Machine Learning (theory, algorithms and social aspects)
Education
  • Ph.D. in Computational and Mathematical Engineering, 2023 - 2028 (expected)

    Stanford University

  • BSc in Computational Mathematics, 2019 - 2023

    Peking University

  • High School Diploma, 2017 - 2019

    No.2 High School of East China Normal University

Recent News

Jul. 2023 Graduated from Peking University!

Apr. 2023 Our incremental learning paper was accepted at ICML 2023.

Mar. 2023 I have decided to join Stanford ICME as a Ph.D. student this fall!

Jan. 2023 Our operator learning paper was accepted at ICLR 2023 as a spotlight presentation.

Oct. 2022 Our operator learning paper was accepted at the NeurIPS 2022 AI4Science workshop.

Recent Publications

(2023). Understanding Incremental Learning of Gradient Descent -- A Fine-grained Analysis of Matrix Sensing. In ICML 2023.

(2022). Minimax Optimal Kernel Operator Learning via Multilevel Training. In ICLR 2023 (spotlight).

(2022). Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power. In NeurIPS 2022.

(2021). Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis. In NeurIPS 2021.

(2020). Improved analysis of clipping algorithms for non-convex optimization. In NeurIPS 2020.

Experience

Undergraduate Research Intern
advised by Prof. Liwei Wang (Peking University)
Feb 2020 – Present, Beijing, China
Worked on machine learning theory.

Awards and Honors

Jun. 2023 Huaixin Scholar, BICMR

Jun. 2023 Peking University Excellent Graduate.

Jan. 2023 Sensetime Scholarship (awarded to 30 Chinese undergraduate students in the field of AI).

Oct. 2022 Qin Wanshun-Jin Yunhui Scholarship, Peking University.

Oct. 2022 Merit Student, Peking University.

Oct. 2021 Exceptional award for academic innovation, Peking University.

Jun. 2021 Elite Undergraduate Training Program in Applied Mathematics and Statistics.

May 2021 Bronze Medal, S.T. Yau College Student Mathematics Competition, probability and statistics individual.

Dec. 2020 Qin Wanshun-Jin Yunhui Scholarship, Peking University.

Oct. 2020 Yizheng Scholarship, Peking University.

Feb. 2019 Silver Medal, 11th Romanian Master of Mathematics.

Oct. 2018 First Prize (ranked No.6), Chinese Mathematical Olympiad (CMO).

Oct. 2017 First Prize (ranked No.13), Chinese Mathematical Olympiad (CMO).

Contact