Accent and Speaker Disentanglement in Many-to-many Voice Conversion

Abstract: This paper proposes a joint voice and accent conversion approach that converts an arbitrary source speaker's voice into a target speaker's voice with his/her non-native accent. The problem is challenging because each target speaker has training data only in his/her native accent, so accent and speaker information must be disentangled during conversion model training and re-combined at conversion time. Within our recognition-synthesis conversion framework, we solve this problem with two proposed techniques. First, we use accent-dependent speech recognizers to obtain bottleneck (BN) features for speakers with different accents, which removes factors other than linguistic information from the BN features used for conversion model training. Second, we apply adversarial training to better disentangle speaker and accent information in our encoder-decoder conversion model: an auxiliary speaker classifier is attached to the encoder and trained with an adversarial loss to remove speaker information from the encoder output. Experiments show that our approach outperforms the baseline; the proposed techniques clearly improve accentedness, while audio quality and speaker similarity are well maintained.
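The adversarial training described above can be sketched with a toy gradient-reversal update. Everything here is an assumption for illustration only: scalar "encoder" and "classifier" weights, a squared-error speaker loss, and made-up learning rates; the paper's actual model is an encoder-decoder network with an auxiliary speaker classifier, whose architecture and hyperparameters are not given on this page.

```python
def grl_step(w_enc, w_clf, x, spk_label, lr=0.1, lam=1.0):
    """One joint adversarial update (toy scalar sketch).

    The auxiliary speaker classifier descends its own loss, while the
    encoder receives the *negated* classifier gradient (gradient
    reversal), pushing its features to carry less speaker information.
    """
    h = w_enc * x                  # toy encoder "feature"
    pred = w_clf * h               # toy speaker-classifier prediction
    err = pred - spk_label
    clf_loss = 0.5 * err ** 2      # classifier loss before the update

    # Classifier: ordinary gradient descent on its own loss.
    w_clf_new = w_clf - lr * err * h

    # Encoder: gradient *ascent* on the classifier loss (reversed sign),
    # i.e. it is trained to make speaker classification harder.
    w_enc_new = w_enc + lr * lam * err * w_clf * x
    return w_enc_new, w_clf_new, clf_loss
```

With the classifier frozen, the encoder update strictly increases the speaker-classification loss; that is the disentanglement pressure. In the full model this adversarial term is combined with the reconstruction loss, so linguistic content is preserved while speaker information is removed from the encoder output.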

1. Examples of target speaker speech:

s1 (Mandarin only)
s2 (Mandarin only)
s3 (Tianjin only)

[Audio samples omitted from this text version.]

2. The results of Mandarin → Mandarin:

For each test set (1-5), the source audio is converted to target speakers s1, s2, and s3. Synthesized speech is provided for the baseline (BL) and the two proposed systems, P1 and P2 (proposed). [Audio samples omitted from this text version.]

3. The results of Mandarin → Tianjin:

For each test set (1-5), the source audio is converted to target speakers s1, s2, and s3. Synthesized speech is provided for the baseline (BL) and the two proposed systems, P1 and P2 (proposed). [Audio samples omitted from this text version.]