Xinran Zhu (朱欣然)

I am a final-year Ph.D. candidate in Applied Mathematics at Cornell University, advised by David Bindel. Before coming to Cornell, I received my bachelor's degree from Shanghai Jiao Tong University (Zhiyuan College). My research interests lie in the numerical analysis of kernel methods.

Publications

  1. NeurIPS
    Variational Gaussian Processes with Decoupled Conditionals
    Xinran Zhu, Kaiwen Wu, Natalie Maus, Jacob R. Gardner, and David Bindel
    In Advances in Neural Information Processing Systems 2023
  2. TMLR
    Bayesian Transformed Gaussian Processes
    Xinran Zhu, Leo Huang, Eric Hans Lee, Cameron Ibrahim, and David Bindel
    In Transactions on Machine Learning Research 2023
  3. KBS
    SigOpt Mulch: An Intelligent System for AutoML of Gradient Boosted Trees
    Aleksei Sorokin*, Xinran Zhu*, Eric Hans Lee, and Bolong Cheng
    Knowledge-Based Systems 2023
  4. NeurIPS-W
    Efficient Variational Gaussian Processes Initialization via Kernel-based Least Squares Fitting
    Xinran Zhu, Jacob R. Gardner, and David Bindel
    In NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems 2022
  5. PMBS
    ML-based Performance Portability for Time-Dependent Density Functional Theory in HPC Environments
    Adrian P. Dieguez, Min Choi, Xinran Zhu, Bryan M. Wong, and Khaled Z. Ibrahim
    In 2022 IEEE/ACM International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS) 2022
  6. PP
    GPTuneBand: Multi-task and Multi-fidelity Autotuning for Large-scale High Performance Computing Applications
    In Proceedings of the SIAM Conference on Parallel Processing for Scientific Computing 2022
  7. NeurIPS
    Scaling Gaussian Processes with Derivative Information Using Variational Inference
    Misha A Padidar, Xinran Zhu, Leo Huang, Jacob R. Gardner, and David Bindel
    In Advances in Neural Information Processing Systems 2021
  8. PPoPP
    GPTune: Multitask Learning for Autotuning Exascale Applications
    In Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2021