- Course: LIN 389C, Research in Computational Linguistics, 40585
- Semester: Spring 2018
- Webpage: http://jessyli.com/courses/lin389c
- Meeting: Wednesday 9:30am-12:30pm, Compling Lab (CLA 4.422)
To teach about, encourage, and give students time for research. Also to establish vertical cohorts among students interested in the same subfield.
There are six constituencies that will not be treated exactly equally in the course, because their needs differ:
- first-year students: need general context and help on first research projects and first-year papers;
- second-year students: need feedback on the research they have done, to prepare for further research, and to write second-year papers;
- third-year students: need to write their Qualifying Papers (QP) and grant proposals;
- post-candidacy pre-proposal students: need to write and present their dissertation proposals;
- post-candidacy dissertation students: need to write dissertations, get feedback;
- students from other departments: needs will vary.

NOTE that this course is designed to provide support and the necessary foundations for research in Computational Linguistics, rather than an overview of research topics. You are expected to carry out one research project throughout the semester.
- Suggested topic list
- Research seminar: This semester we focus on recent papers in computational linguistics. Topics and readings will be given on the schedule page.
Typically one of the students in the class will be responsible for giving a short initial summary of the paper and for preparing some questions to get the discussion going.
- Discussion of ongoing student research:
- Round-table: short presentations by all participants about their current research. This will happen almost every week.
- On-going research: longer presentations (30 minutes to an hour, including discussion) by students, faculty, and auditors if they wish.
- Dissertation proposal presentations.
- Dissertation progress talks.
- Practice talks for conference presentations.
- First-year students: Attend all classes/activities. Talk about research.
First semester, submit literature discussion sketch halfway through the semester, submit literature discussion at end of semester.
Second semester, submit first-year paper draft halfway through the semester, submit first-year paper at end of semester.
- Second-year students: Attend all classes/activities. Talk about research.
First semester, submit research discussion draft halfway through the semester, submit research discussion paper at end of semester.
Second semester, submit second-year paper draft halfway through the semester, submit second-year paper at end of semester.
- Third-year students: Attend all classes/activities. Talk about research.
First semester, submit QP proposal halfway through the semester, submit QP progress report at end of semester.
Second semester, submit QP draft halfway through the semester, submit QP at end of semester.
- Post-candidacy, pre-proposal students: Attend all presentations. Talk about research.
- Dissertation-writing students: Attend all presentations, give at least one presentation during semester on doctoral research.
- Students from other departments: a course project, documented in two reports: an intermediate report (2-3 pages) and a final report (8 pages).
This is the list of suggested topics we will discuss this semester:
- Reinforcement Learning
- Markov Decision Process
- Deep Q-learning
- Policy Gradient Methods
- Architecture of advanced ML systems
- Memory networks
- Generative Adversarial Networks
- Papers we might want to read
- Li et al., Deep Reinforcement Learning for Dialogue Generation
- He et al., Deep Reinforcement Learning with a Combinatorial Action Space for Predicting Popular Reddit Threads
- He et al., Deep Reinforcement Learning with a Natural Language Action Space
- Xiong et al., DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning
- Nogueira and Cho, Task-Oriented Query Reformulation with Reinforcement Learning
- Zhang and Lapata, Sentence Simplification with Deep Reinforcement Learning
- Nguyen et al., Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback
- Le and Fokkens, Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing
- Gu et al., Trainable Greedy Decoding for Neural Machine Translation
- Lowe et al., Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments
- Vaswani et al., Attention Is All You Need
- Kim et al., Structured Attention Networks
- Seo et al., Bidirectional Attention Flow for Machine Comprehension
- Wang et al., Gated Self-Matching Networks for Reading Comprehension and Question Answering
- Yin and Neubig, A Syntactic Neural Model for General-Purpose Code Generation
- Chen et al., Tree-to-tree Neural Networks for Program Translation
Each week, unless noted otherwise, we use the first part of the class to discuss the topic of the week, and the second part for a round-table discussion of people’s research.