ASR-enhanced Multimodal Representation Learning for Cross-Domain Product Retrieval

1Renmin University of China        2Kuaishou Technology

*Work done during internship at Kuaishou Technology

Conceptual diagram of the proposed AMPere method for cross-domain multimodal product representation learning. Our network consists of three branches, each responsible for a specific domain. The parameters of every trainable layer are shared across the three branches. An LLM-based ASR text summarizer is deployed to extract product-specific information from the raw ASR text, which is dominated by uninformative words transcribed from the casual chat of live streamers. Except for the summarizer, the network is trained end-to-end.
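To make the parameter sharing concrete, below is a minimal PyTorch sketch of such a three-branch encoder: one set of weights is instantiated once and reused for the image, short-video, and live-stream branches. The module names, feature dimensions, and fusion design are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class SharedMultiBranchEncoder(nn.Module):
    """Sketch of a three-branch encoder whose trainable layers are
    shared across the image, short-video, and live-stream domains.
    All names and dimensions here are hypothetical."""

    def __init__(self, feat_dim: int = 768, embed_dim: int = 256):
        super().__init__()
        # One set of parameters, reused by every domain branch.
        self.visual_proj = nn.Linear(feat_dim, embed_dim)
        self.text_proj = nn.Linear(feat_dim, embed_dim)
        self.fusion = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, visual_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # Every branch calls this same forward pass, so weights are shared.
        v = self.visual_proj(visual_feat)
        t = self.text_proj(text_feat)
        z = self.fusion(torch.cat([v, t], dim=-1))
        # Unit-normalize so embeddings are directly comparable by cosine similarity.
        return nn.functional.normalize(z, dim=-1)

encoder = SharedMultiBranchEncoder()
# The same encoder instance embeds items from all three domains:
img_emb = encoder(torch.randn(4, 768), torch.randn(4, 768))    # image domain
video_emb = encoder(torch.randn(4, 768), torch.randn(4, 768))  # short-video domain
live_emb = encoder(torch.randn(4, 768), torch.randn(4, 768))   # live-stream domain
```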

Abstract

E-commerce is increasingly multimedia-enriched, with products exhibited in a broad-domain manner as images, short videos, or live-stream promotions. A unified and vectorized cross-domain product representation is essential. Due to the large intra-product variance and high inter-product similarity in the broad-domain scenario, a visual-only representation is inadequate. While Automatic Speech Recognition (ASR) text derived from short videos and live streams is readily accessible, how to de-noise such extremely noisy text for multimodal representation learning remains largely unexplored. We propose ASR-enhanced Multimodal Product Representation Learning (AMPere). To extract product-specific information from the raw ASR text, AMPere uses an easy-to-implement LLM-based ASR text summarizer. The LLM-summarized text, together with visual data, is then fed into a multi-branch network to generate compact multimodal embeddings. Extensive experiments on a large-scale tri-domain dataset verify the effectiveness of AMPere in obtaining a unified multimodal product representation that clearly improves cross-domain product retrieval.
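As a rough illustration of the summarization step, the sketch below prompts a general-purpose LLM to keep only product-specific content from noisy ASR text. The client library, model name, and prompt wording are all assumptions for demonstration, not AMPere's actual configuration.

```python
# Hypothetical prompt-based ASR summarization; the LLM, prompt, and client
# API shown here are assumptions, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "The following text was transcribed by ASR from an e-commerce live stream. "
    "It mixes product descriptions with casual chat. Summarize only the "
    "product-specific information (name, brand, attributes, price):\n\n{asr}"
)

def summarize_asr(asr_text: str) -> str:
    """Return a product-focused summary of noisy ASR text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT.format(asr=asr_text)}],
    )
    return response.choices[0].message.content
```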

Comparison with the baseline methods

With the LLM-summarized ASR text, AMPere improves over the state-of-the-art visual-only solution by a large margin.

Results

Qualitative results of ASR text summarization