Latent Space Analysis

We use representations learned in the latent spaces of deep neural networks to extract features and analyze data.

We develop tools that compress and make sense of learned representations so that experts can explore data faster and with greater confidence. In Compressing and Interpreting Word Embeddings (Li et al., 2023), we add latent space regularization and interactive semantics probing to shrink word vectors while revealing human-readable axes (e.g., sentiment, tense) for transparent NLP analysis; a toy sketch of this kind of regularized compression appears below. In Local Latent Representation with Geometric Convolution for Particle Data (Li & Shen, 2022), we learn geometry-aware local latents that capture neighborhood structure in particle simulations, enabling feature discovery (shocks, vortices, mixing) without full-resolution passes over the data. In VDL-Surrogate (Shi et al., 2022), we build a view-dependent latent-based surrogate model for ensemble simulations, letting scientists explore the simulation parameter space without rerunning the full simulation. Together, these works turn black-box embeddings into compact, controllable, and interpretable spaces for both language and scientific data.
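To make the idea of latent space regularization concrete, here is a minimal sketch (assuming PyTorch) of compressing pretrained word vectors with an autoencoder whose latent code carries a sparsity penalty, nudging individual dimensions toward interpretable usage. The layer sizes, the 300-dimensional input, and the L1 penalty weight are illustrative placeholders, not the formulation from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegularizedAutoencoder(nn.Module):
    """Compress word embeddings into a small latent space."""

    def __init__(self, in_dim: int = 300, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim)
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)        # compact latent code
        return z, self.decoder(z)  # code and reconstruction

# Stand-in for real pretrained vectors (e.g., GloVe); values are random here.
embeddings = torch.randn(1024, 300)

model = RegularizedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    z, recon = model(embeddings)
    # Reconstruction loss keeps the compressed space faithful; the L1 term
    # is a simple stand-in for the paper's latent space regularization,
    # pushing each latent dimension toward sparse, interpretable activation.
    loss = F.mse_loss(recon, embeddings) + 1e-3 * z.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, probing a single latent dimension, for example by sorting words by their value along one axis, is one way human-readable directions such as sentiment or tense can be surfaced.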

References

2023

  1. Compressing and interpreting word embeddings with latent space regularization and interactive semantics probing
    Haoyu Li, Junpeng Wang, Yan Zheng, and 3 more authors
    Information Visualization, 2023

2022

  1. Local latent representation based on geometric convolution for particle data feature exploration
    Haoyu Li and Han-Wei Shen
    IEEE Transactions on Visualization and Computer Graphics, 2022
  2. VDL-Surrogate: A view-dependent latent-based model for parameter space exploration of ensemble simulations
    Neng Shi, Jiayi Xu, Haoyu Li, and 3 more authors
    IEEE Transactions on Visualization and Computer Graphics, 2022