Tianzhe Chu

I'm a senior-year undergraduate student majoring in Computer Science at ShanghaiTech University, with a wonderful year (2022-23) spent at UC Berkeley. I work on representation learning. I'm from Suzhou.

Email  /  Resume  /  Classes  /  Google Scholar  /  Github  /  Twitter

Research

I am currently an undergraduate researcher in the Berkeley Artificial Intelligence Research (BAIR) Lab, advised by Prof. Yi Ma. I'm interested in unsupervised/self-supervised representation learning and interpretable deep learning architectures. My goal is to develop principled learning techniques that model the structures of real-world information at scale, with applications to visual recognition, 3D generation, multimodality, etc.

News

Fall 2024: I'm applying to Ph.D. programs in Machine Learning & Computer Vision.

Jan 2024: Our papers CPP and CRATE-AE were accepted to ICLR 2024! See you in Austria.

Nov 2023: Our paper CRATE-Segmentation was accepted to CPAL 2024 and the NeurIPS 2023 XAI Workshop, both as orals! Note: see here for what CPAL is.

Nov 2023: New preprint! We present the comprehensive version of CRATE.

Sep 2023: Our paper CRATE (white-box transformer) was accepted to NeurIPS 2023!

May 2023: Gonna leave lovely Berkeley (as well as the U.S.), having finished 7 tech courses and a few interesting research projects.

Publications & Preprints (* means equal contribution)
Emergence of Segmentation with Minimalistic White-Box Transformers
Yaodong Yu*, Tianzhe Chu*, Shengbang Tong, Ziyang Wu, Druv Pai, Sam Buchanan, Yi Ma
Accepted to CPAL 2024 (Oral) and the NeurIPS 2023 XAI Workshop (Oral, 4 out of 59 accepted papers)
demo / project page / code / arxiv

The white-box transformer leads to the emergence of segmentation properties in the network's self-attention maps, solely through a minimalistic supervised training recipe.

Image Clustering via the Principle of Rate Reduction in the Age of Pretrained Models
Tianzhe Chu*, Shengbang Tong*, Tianjiao Ding*, Xili Dai, Benjamin D. Haeffele, René Vidal, Yi Ma
Accepted to ICLR 2024
project page / code / arxiv

This paper proposes a novel image clustering pipeline that integrates pre-trained models and rate reduction, enhancing clustering accuracy and introducing an effective self-labeling algorithm for unlabeled datasets at scale.
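To give a flavor of the rate reduction principle behind this pipeline, here is a minimal numeric sketch (my own simplified rendering, not the paper's code: the pretrained feature extractor and the learned cluster head are omitted, and eps is an illustrative quantization parameter):

    import numpy as np

    def coding_rate(Z, eps=0.5):
        """Coding rate R(Z) of features Z (n samples x d dims):
        0.5 * logdet(I + d / (n * eps^2) * Z^T Z)."""
        n, d = Z.shape
        _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z.T @ Z)
        return 0.5 * logdet

    def rate_reduction(Z, labels, eps=0.5):
        """Delta R = R(Z) - sum_k (n_k / n) * R(Z_k): a good clustering compresses
        features within each cluster while the whole set stays expanded."""
        n = Z.shape[0]
        within = sum((np.sum(labels == k) / n) * coding_rate(Z[labels == k], eps)
                     for k in np.unique(labels))
        return coding_rate(Z, eps) - within

    # Toy check: two clusters lying near two different 2-D subspaces of R^16.
    rng = np.random.default_rng(0)
    Z = np.vstack([rng.normal(size=(50, 2)) @ rng.normal(size=(2, 16)),
                   rng.normal(size=(50, 2)) @ rng.normal(size=(2, 16))])
    good, random_split = np.repeat([0, 1], 50), rng.integers(0, 2, 100)
    # The subspace-aligned split should yield the larger rate reduction.
    print(rate_reduction(Z, good), rate_reduction(Z, random_split))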

White-Box Transformers via Sparse Rate Reduction
Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin D. Haeffele, Yi Ma
Accepted to NeurIPS 2023
code / arxiv

We develop white-box, transformer-like deep network architectures that are mathematically interpretable and achieve performance very close to ViT.

White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?
Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma
Submitted to JMLR
project page / code / arxiv

We propose CRATE (comprehensive version), a “white-box” transformer neural network architecture with strong performance at scale. “White-box” means we derive each layer of CRATE from first principles, from the perspective of compressing the data distribution with respect to a simple, local model. CRATE has been extended to MAE, DINO, BERT, and other transformer-based frameworks.
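Roughly speaking (simplified notation here; see the arXiv version for the precise statement), each CRATE layer takes an incremental step on a sparse rate reduction objective:

    \max_{Z}\; \Delta R(Z;\, U_{[K]}) - \lambda \lVert Z \rVert_0,
    \qquad
    \Delta R(Z;\, U_{[K]}) = R(Z) - R^{c}(Z;\, U_{[K]}),
    \qquad
    R(Z) = \frac{1}{2} \log\det\!\Big(I + \frac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big),

where Z collects the token representations, R is the coding rate, and R^c is the rate of Z coded against learned subspace bases U_{[K]}. The attention blocks carry out the compression step against these subspaces, and the MLP-like blocks carry out the sparsification step via an ISTA-style update.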

Masked Completion via Structured Diffusion with White-Box Transformers
Druv Pai, Ziyang Wu, Sam Buchanan, Tianzhe Chu, Yaodong Yu, Yi Ma
Accepted to ICLR 2024 and CPAL 2024 (non-archival track)
project page / coming soon!

We exploit a connection between denoising diffusion models and compression to construct white-box masked autoencoders from first principles.

Misc.

I am helping build a website for grad school applicants in EE/CS. Stay tuned~

If you'd like some academic guidance, want to talk about my research, or just want to chat about AI, shoot me an email.