Research

My research focuses on making AI more efficient, interpretable, and accessible. Here are my publications and areas of interest.

Research Interests


Efficient Deep Learning

Developing methods to reduce the computational requirements of neural networks while maintaining performance, with a focus on quantization, pruning, and knowledge distillation.
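To illustrate one of these techniques, here is a minimal sketch of symmetric post-training int8 quantization of a weight tensor. This is a generic illustration, not code from any of the publications below; the function names and the per-tensor scaling choice are my own assumptions.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of a float tensor to int8."""
    # One scale for the whole tensor; per-channel scales are a common refinement.
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Round-to-nearest keeps the error within half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Storing `q` instead of `w` cuts memory 4x (int8 vs. float32); the scale is the only extra state needed to recover an approximation of the original weights.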


Multimodal Learning

Creating AI systems that can understand and reason across different modalities including text, images, audio, and video.


Interpretable AI

Building AI systems that are transparent and explainable, enabling users to understand and trust model predictions.

Publications

NeurIPS 2024 · Conference · 78 citations

Multimodal Learning with Cross-Attention Fusion

Alex Chen, Emily Wang

A new approach to multimodal learning that uses cross-attention mechanisms to effectively fuse information from text, images, and structured data.
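As a rough sketch of the cross-attention idea behind this line of work: queries come from one modality while keys and values come from another, so each text token attends over, say, image patches. The shapes, modality names, and function signature below are illustrative, not the paper's actual implementation.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention where queries come from one modality
    and keys/values come from another."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ values                         # (n_q, d_v) fused features

text = np.random.randn(5, 16)    # e.g. 5 text tokens, dim 16
image = np.random.randn(9, 16)   # e.g. 9 image patches, dim 16
fused = cross_attention(text, image, image)
assert fused.shape == (5, 16)
```

Each row of the output is a weighted mix of image features, with weights determined by the corresponding text token, which is the basic fusion mechanism described above.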

Collaborators

Stanford AI Lab

Research Collaboration

Google DeepMind

Internship Project

NVIDIA Research

Open Source