Spoken Language Systems

Our current research in automatic speech recognition includes training robust acoustic models from limited data using sparse models, regularization methods, and cross-lingual data borrowing. Acoustic modeling of multiple languages and accents is essential for robust recognition as well. Recognition of mixed-code speech, such as Chinese mixed with English or Hindi mixed with English, is also a primary research focus of our group.
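As a hedged illustration of one of the ideas above (not our actual recognition system), the sketch below trains a sparse classifier with an L1 penalty via proximal gradient descent. The toy data, learning rate, and penalty weight are all invented for demonstration; the point is how L1 regularization drives uninformative weights to exactly zero, which helps when training data is limited.

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the L1 norm: shrinks weights toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train_sparse_logreg(X, y, lam=0.1, lr=0.1, steps=500):
    """Minimize logistic loss + lam * ||w||_1 by proximal gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))             # predicted probabilities
        grad = X.T @ (p - y) / n                     # logistic-loss gradient
        w = soft_threshold(w - lr * grad, lr * lam)  # gradient step + shrinkage
    return w

# Toy data: only the first two of ten features are informative,
# mimicking a limited-data setting where sparsity helps.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = train_sparse_logreg(X, y)
# Most uninformative weights end up exactly zero under the L1 penalty.
```

The same proximal-gradient pattern carries over to larger acoustic models, where the regularizer controls model size relative to the available training data.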

Natural Language Processing

We focus on statistical NLP problems such as bilingual lexicon extraction from large and small corpora, statistical semantic parsing, cross-lingual information retrieval, and structured summarization. One of our research highlights is the joint optimization of speech and language models for applications such as spoken language summarization, speech translation, and dialog systems.
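A minimal sketch of the first of these problems, under simplifying assumptions: given a sentence-aligned bilingual corpus, candidate translation pairs can be scored by the Dice coefficient of their co-occurrence across aligned sentences. The tiny English-French corpus below is invented for demonstration and stands in for a real parallel corpus.

```python
from collections import Counter
from itertools import product

# Invented sentence-aligned toy corpus (source, target).
pairs = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
    ("the dog eats", "le chien mange"),
]

src_count, tgt_count, joint = Counter(), Counter(), Counter()
for src, tgt in pairs:
    s_words, t_words = set(src.split()), set(tgt.split())
    src_count.update(s_words)
    tgt_count.update(t_words)
    joint.update(product(s_words, t_words))  # co-occurrence counts

def dice(s, t):
    """2 * joint / (marginal_s + marginal_t); higher = better candidate."""
    return 2.0 * joint[(s, t)] / (src_count[s] + tgt_count[t])

# Pick the best-scoring translation candidate for each source word.
lexicon = {s: max(tgt_count, key=lambda t: dice(s, t)) for s in src_count}
```

Real lexicon extraction adds smoothing, significance testing, and handling of non-parallel corpora, but the association-score backbone is the same.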

Music Information Retrieval

We use signal processing and statistical modeling to map the "DNA" of music pieces and machine learning methods to learn how humans perceive music. We analyze music audio signals as well as music lyrics with signal processing and language processing methods. Our objective is to enable efficient retrieval, by genre, style, mood, and artist, from collections of millions or even tens of millions of songs.
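As one illustrative example of a signal-level feature in this "DNA" (a sketch, not our full feature set), the spectral centroid summarizes the brightness of a sound as the magnitude-weighted mean frequency of its spectrum. The synthetic tones below stand in for real music audio.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency of the signal's spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

sr = 8000
t = np.arange(sr) / sr                # one second of audio
low = np.sin(2 * np.pi * 220 * t)     # "darker" tone (A3)
high = np.sin(2 * np.pi * 1760 * t)   # "brighter" tone (A6)
# The brighter tone has a higher spectral centroid.
```

Features like this, stacked with rhythmic and lyric-derived descriptors, give each song a compact fingerprint that supports similarity search at scale.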