臺大學術典藏 (NTU Academic Repository)
2021-12-14T23:12:44Z |
Towards lifelong learning of end-to-end ASR
Chang, Heng-Jui; Lee, Hung-yi; Lee, Lin-shan
2021-12-14T23:12:44Z |
SUPERB: Speech processing Universal PERformance Benchmark
Yang, Shu-wen; Chi, Po-Han; Chuang, Yung-Sung; Lai, Cheng-I Jeff; Lakhotia, Kushal; Lin, Yist Y.; Liu, Andy T.; Shi, Jiatong; Chang, Xuankai; Lin, Guan-Ting; Huang, Tzu-Hsien; Tseng, Wei-Cheng; Lee, Ko-tik; Liu, Da-Rong; Huang, Zili; Dong, Shuyan; Li, Shang-Wen; Watanabe, Shinji; Mohamed, Abdelrahman; Lee, Hung-yi
2021-12-14T23:12:43Z |
Voting for the right answer: Adversarial defense for speaker verification
Wu, Haibin; Zhang, Yang; Wu, Zhiyong; Wang, Dong; Lee, Hung-yi
2021-12-14T23:12:43Z |
Auto-KWS 2021 challenge: Task, datasets, and baselines
Wang, Jingsong; He, Yuxuan; Zhao, Chunyu; Shao, Qijie; Tu, Wei-Wei; Ko, Tom; Lee, Hung-yi; Xie, Lei
2021-12-14T23:12:43Z |
S2VC: A framework for any-to-any voice conversion with self-supervised pretrained representations
Lin, Jheng-Hao; Lin, Yist Y.; Chien, Chung-Ming; Lee, Hung-yi
2021-09-02T00:05:16Z |
VQVC+: One-shot voice conversion by vector quantization and U-Net architecture
Wu, D.-Y.; Chen, Y.-H.; Lee, Hung-yi
2021-09-02T00:05:16Z |
WG-WaveNet: Real-time high-fidelity speech synthesis without GPU
Hsu, P.-C.; Lee, Hung-yi
2021-09-02T00:05:16Z |
Understanding self-attention of self-supervised audio transformers
Yang, S.-W.; Liu, A. T.; Lee, Hung-yi
2021-09-02T00:05:15Z |
Personalized dialogue response generation learned from monologues
Su, F.-G.; Hsu, A. R.; Tuan, Y.-L.; Lee, Hung-yi
2021-09-02T00:05:15Z |
Self-Supervised Deep Learning for Fisheye Image Rectification
Chao, C.-H.; Hsu, P.-L.; Lee, Hung-yi; Wang, Y.-C. F.