Multi-level Temporal-channel Speaker Retrieval for Robust Zero-shot Voice Conversion

Zhichao Wang1, Liumeng Xue1, Qiuqiang Kong1, Lei Xie1, Qiao Tian2, Yuping Wang2
1 Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University, Xi'an, China
2 SAMI, ByteDance Inc., Shanghai, China

1. Abstract

Zero-shot voice conversion (VC) converts source speech into the voice of any desired speaker using only a single recording of that speaker, without requiring additional model updates. Typical methods achieve zero-shot VC with a speaker representation taken from a pre-trained speaker verification (SV) model or learned during VC training. However, existing speaker modeling methods overlook how the richness of speaker information varies across the temporal and frequency-channel dimensions of speech. This insufficient speaker modeling hampers the VC model's ability to accurately represent unseen speakers who are not in the training dataset. In this study, we present a robust zero-shot VC model with multi-level temporal-channel retrieval, referred to as MTCR-VC. Specifically, to adapt flexibly to the dynamically varying speaker characteristics along the temporal and channel axes of speech, we propose a novel fine-grained speaker modeling method, called temporal-channel retrieval (TCR), to identify when and where speaker information appears in speech. It retrieves a variable-length speaker representation from both the temporal and channel dimensions under the guidance of a pre-trained SV model. In addition, inspired by the hierarchical process of human speech production, the MTCR speaker module stacks several TCR blocks to extract speaker representations at multiple granularity levels. Furthermore, to achieve better speech disentanglement and reconstruction, we introduce a cycle-based training strategy that recurrently simulates zero-shot inference. We adopt perceptual constraints on three aspects, namely content, style, and speaker, to drive this process. Experiments demonstrate that MTCR-VC outperforms previous VC methods in modeling speaker timbre while maintaining good speech naturalness.
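
The TCR mechanism described above can be pictured as learned weighting over both axes of a frame-level feature map. Below is a minimal PyTorch sketch of one such block; the module names, shapes, and the softmax/sigmoid weighting scheme are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TCRBlock(nn.Module):
    """Illustrative temporal-channel retrieval block (structure assumed)."""

    def __init__(self, dim: int):
        super().__init__()
        # Scores *when* speaker information appears (temporal axis).
        self.temporal_score = nn.Linear(dim, 1)
        # Scores *where* it appears (which feature channels).
        self.channel_score = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) frame-level features from an acoustic encoder.
        t_w = torch.softmax(self.temporal_score(x), dim=1)  # (B, T, 1), normalized over time
        c_w = torch.sigmoid(self.channel_score(x))          # (B, T, dim), per-channel gates
        # Gate channels frame by frame, then pool over time with the temporal weights.
        return (x * c_w * t_w).sum(dim=1)                   # (B, dim) speaker vector
```

In a multi-level setup, one such block would sit at each granularity level of the encoder stack, and the retrieved vectors would be trained to agree with the pre-trained SV model's embedding (e.g., via cosine similarity), so each block learns to pick out speaker-bearing frames and channels.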
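
The cycle-based training strategy can likewise be sketched as a loss over a convert-and-convert-back pass. The following is a hedged sketch assuming mel-spectrogram features, frozen content/style feature extractors, and cosine-similarity speaker matching; the encoder names, helpers, and loss weights are hypothetical, not the paper's exact formulation.

```python
import torch.nn.functional as F

def perceptual_cycle_loss(src_mel, cyc_mel, spk_ref, spk_ret,
                          content_enc, style_enc,
                          w_content=1.0, w_style=1.0, w_speaker=1.0):
    """Constrain content, style, and speaker over the conversion cycle.

    src_mel / cyc_mel: source and cycle-reconstructed mel-spectrograms (B, T, n_mels).
    spk_ref / spk_ret: embeddings from the pre-trained SV model and the MTCR module.
    content_enc / style_enc: frozen perceptual feature extractors (assumed helpers).
    """
    l_content = F.l1_loss(content_enc(cyc_mel), content_enc(src_mel))
    l_style = F.l1_loss(style_enc(cyc_mel), style_enc(src_mel))
    # Pull the retrieved speaker embedding toward the SV reference.
    l_speaker = 1.0 - F.cosine_similarity(spk_ret, spk_ref, dim=-1).mean()
    return w_content * l_content + w_style * l_style + w_speaker * l_speaker
```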



2. System Description

We compare MTCR-VC against three representative zero-shot/one-shot VC systems:

- MediumVC: any-to-any VC using synthesized specific-speaker speech as an intermedium feature.
- SRDVC: one-shot VC based on speech representation disentanglement with adversarial mutual information learning.
- FragmentVC: any-to-any VC that extracts and fuses fine-grained voice fragments from the target speaker's speech with attention.
- MTCR-VC (proposed): our model with multi-level temporal-channel speaker retrieval.

3. Demos -- Out-of-dataset Speaker

[Audio demo table: each row pairs a target speaker reference and a source utterance with conversion results from MediumVC, SRDVC, FragmentVC, and MTCR-VC (proposed).]

4. Demos -- In-dataset Speaker

[Audio demo table: each row pairs a target speaker reference and a source utterance with conversion results from MediumVC, SRDVC, FragmentVC, and MTCR-VC (proposed).]