Term: January 2025 – May 2025
Time: Mondays & Wednesdays (3:30-5:00)
Venue: CDS 102
Credits: 3:1
Outline: This course is a graduate-level introduction to the field of Natural Language Processing (NLP), which involves building computational systems that handle human languages. Why care about NLP systems? We interact with them on a daily basis: such systems answer the questions we ask (using Google or other search engines), curate the content we read, autocomplete the words we are likely to type, translate text from languages we don't know, flag content on social media that we might find harmful, and so on. Such systems are widely used in industry as well as academia, especially for analyzing textual data.
Learning Outcomes: The course is structured to emphasize practical learning. Through four programming assignments, students will get a good sense of the challenges involved in building models that deal with human languages. After completing the course, students should feel comfortable developing models for problems involving textual data. The course also spends a considerable amount of time on language models, so students will be able to participate, in an informed way, in discussions around the current wave of large language models (LLMs). Students should also be able to pick up recently published research and understand the majority of the ideas in it.
Prerequisites: The class is intended for graduate students and senior undergraduates. We do not plan to impose any strict prerequisites in terms of IISc courses that must be completed before registering for this course. However, students are expected to know the basics of linear algebra, probability, and calculus. The programming assignments will require proficiency in Python; familiarity with PyTorch will also be useful.
The course schedule is as follows. It is subject to change based on student feedback and the pace of instruction.
Date | Topic | Reading Material |
---|---|---|
Jan 6 | Course introduction | |
Jan 8 | Text classification I | Eisenstein Sec 2-2.1 |
Jan 13 | Text classification II | Eisenstein Sec 2-2.1 |
Jan 15 | Word Representations I | |
Jan 20 | Word Representations II | |
Jan 22 | Neural Nets (Pre-requisites) | |
Jan 27 | Language Models I (n-gram models) | |
Feb 3 | Language Models II (recurrent architectures) | |
Feb 5 | Language Models III (attention) | |
Feb 10 | Language Models IV (transformers) | |
Feb 12 | Language Models V (transformers contd.) | |
Feb 12 | Language Models VI (pre-training) | |
Feb 17 | Language Models VII (post-training) | |
Feb 19 | Language Models VIII (overview + future directions) | |
Feb 24 | Tagging + HMMs | |
Feb 26 | Tagging + CRFs | |
… | To be updated … | |
Apr 9 | Course Summary (last class) | |
The evaluation comprises:
The four programming assignments will tentatively involve building systems for learning word representations, text classification, language modeling, machine translation, and named entity recognition. The assignments will be implemented as interactive Python notebooks intended to run on Google's Colab infrastructure, which lets students use GPUs for free and with minimal setup. The notebooks will contain instructions interleaved with code blocks for students to fill in.
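For instance, a notebook would typically begin by verifying that a GPU runtime is enabled (via Runtime → Change runtime type in Colab). The snippet below is a minimal sketch of such a check using PyTorch; it is illustrative only, not an excerpt from the actual assignments.

```python
import torch

# Use the GPU if Colab's GPU runtime is enabled; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

if device.type == "cuda":
    # Report which GPU Colab has allocated (e.g., a Tesla T4).
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```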
These assignments are meant to be solved individually. Across the four assignments, you get a total of four late days; no extensions will be offered (please don't even ask). There are no restrictions on how the late days can be used; for example, you can spend all four late days on a single assignment. If you run out of late days, you can still submit your assignment, but your score will be divided by 2 if you submit up to one day late, and by 4 if up to two days late. No submissions will be accepted after that.
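To make the penalty concrete, here is an illustrative sketch of the policy above; the function name and interface are hypothetical and not part of any official grading code.

```python
def penalized_score(raw_score: float, days_past_deadline: int) -> float:
    """Score after the late penalty, once all four late days are exhausted."""
    if days_past_deadline <= 0:
        return raw_score        # on time: no penalty
    if days_past_deadline == 1:
        return raw_score / 2    # up to one day late: score halved
    if days_past_deadline == 2:
        return raw_score / 4    # up to two days late: score quartered
    return 0.0                  # later submissions are not accepted

# Example: an assignment scoring 80 counts as 40.0 one day late, 20.0 two days late.
print(penalized_score(80, 1), penalized_score(80, 2))
```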
Important dates:
We will use Teams for all discussions on course-related matters. Registered students should have received the joining link/passkey.
If you have any feedback, you can share it (anonymously or otherwise) through this link: http://tinyurl.com/feedback-for-danish