Delivering Speaking Style in Low-resource Voice Conversion with Multi-factor Constraints

Zhichao Wang1, Xinsheng Wang1, Lei Xie1, Yuanzhe Chen2, Qiao Tian2, Yuping Wang2
1 Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University, Xi'an, China
2 Speech, Audio & Music Intelligence (SAMI), ByteDance

1. Abstract

Conveying the linguistic content while maintaining the source speech's speaking style, such as intonation and emotion, is essential in voice conversion (VC). However, in a low-resource situation, where only a limited number of utterances from the target speaker are accessible, existing VC methods struggle to meet this requirement and to capture the target speaker's timbre. In this work, a novel VC model, referred to as MFC-StyleVC, is proposed for the low-resource VC task. Specifically, a speaker timbre constraint generated by a clustering method is proposed to guide target speaker timbre learning at different stages. Meanwhile, to prevent over-fitting to the target speaker's limited data, perceptual regularization constraints explicitly maintain model performance on specific aspects, including speaking style, linguistic content, and speech quality. Besides, a simulation mode is introduced to mimic the inference process and alleviate the mismatch between training and inference. Extensive experiments on highly expressive speech demonstrate the superiority of the proposed method in low-resource VC.
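The clustering-based timbre constraint is only described at a high level here. As a rough sketch under assumed details (speaker embeddings grouped with plain k-means, and a cosine-distance pull toward the target-speaker centroid; the loss form and k-means choice are illustrative, not the paper's exact formulation):

```python
import numpy as np

def kmeans(embeddings, k, iters=50, seed=0):
    """Plain k-means over speaker-embedding rows of shape (n, d)."""
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(len(embeddings), size=k, replace=False)]
    for _ in range(iters):
        # Assign each embedding to its nearest center.
        dists = np.linalg.norm(embeddings[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = embeddings[labels == j].mean(axis=0)
    return centers, labels

def timbre_constraint_loss(generated_embs, target_centroid):
    """Mean cosine distance between generated speaker embeddings and the
    target-speaker cluster centroid (hypothetical loss form)."""
    g = generated_embs / np.linalg.norm(generated_embs, axis=-1, keepdims=True)
    c = target_centroid / np.linalg.norm(target_centroid)
    return float(np.mean(1.0 - g @ c))
```

Embeddings that already point in the centroid's direction incur zero loss, so minimizing this term pulls the converted speech's speaker identity toward the target cluster.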



2. System Description

Comparison Systems

3. Demos -- Comparison Analysis

1 utt.

Target speaker: F118. For each test utterance, the page provides the source speech, the source speech copy-synthesized through the vocoder, and conversion results from Hybrid-VC, SN-VC, StyleVC, and MFC-StyleVC (proposed). Testing scenarios:
- Interrogative (neutral)
- Declarative (neutral)
- Declarative (neutral)
- Emotional (surprise)
- Emotional (disgust)
- Emotional (disgust)
- Emotional (fear)
- Declarative (neutral)
- Emotional (surprise)
- Emotional (disgust)
[Audio samples omitted]
5 utt.

Target speaker: F118. For each test utterance, the page provides the source speech, the source speech copy-synthesized through the vocoder, and conversion results from Hybrid-VC, SN-VC, StyleVC, and MFC-StyleVC (proposed). Testing scenarios:
- Interrogative (neutral)
- Declarative (neutral)
- Declarative (neutral)
- Emotional (surprise)
- Emotional (disgust)
- Emotional (disgust)
- Emotional (sad)
- Declarative (neutral)
- Emotional (surprise)
- Emotional (disgust)
[Audio samples omitted]

4. Ablation Analysis

Target speaker: F118. Each row pairs the source speech with outputs from the ablated variants (w/o content, w/o speaker, w/o style, w/o real/fake, w/o simulation) and the full proposed model.
[Audio samples omitted]

5. Vary Duration

We further verify the proposed model's performance under different total recording durations from the target speaker (1, 3, 5, 15, 30, and 350 seconds). The results for CER, $D_{style}$, and Cos.Sim are shown in the figures. In general, the results are governed by the total duration of the data: the longer the duration, the better the results, and vice versa.
[Figures omitted: CER, $D_{style}$, and Cos.Sim versus recording duration]
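Of these metrics, CER is typically the character error rate of an ASR transcript against the reference, and Cos.Sim the cosine similarity between speaker embeddings of converted and target speech. A minimal sketch of both (the ASR system and embedding extractor are assumed external):

```python
import numpy as np

def cer(reference, hypothesis):
    """Character error rate: Levenshtein edit distance between the
    reference and hypothesis transcripts, divided by reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # DP row: distances against an empty prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[n] / m if m else 0.0

def cosine_similarity(emb_a, emb_b):
    """Cosine similarity between two speaker-embedding vectors;
    1.0 means identical direction, 0.0 means orthogonal."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Lower CER indicates better preservation of linguistic content, while higher Cos.Sim indicates better speaker similarity to the target.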

6. More Demos

Recordings from held-out (reserved) speakers and celebrities.

6.1. 1 utt.

Source speech and conversion results for additional target speakers: M109, SSB1782, SSB1935, YF, and WQ.
[Audio samples omitted]

6.2. 5 utt.

Source speech and conversion results for additional target speakers: M109, SSB1782, SSB1935, YF, and WQ.
[Audio samples omitted]

6.3. 5 utt. from celebrities

Source speech and conversion results for celebrity target speakers: LZL and YM.
[Audio samples omitted]