Advanced NLP and Temporal Sequence Processing (4–5 February 2021)

Online Event

Advanced Natural Language Processing (NLP) and Temporal Sequence Processing

About this Event


Together with Red Dragon AI, SGInnovate is pleased to present the third module of the Deep Learning Developer Series. In this module, we dive deeper into some of the latest Deep Learning techniques for text and time series applications.

About the Deep Learning Developer Series:

The Deep Learning Developer Series is a hands-on series targeted at developers and data scientists who are looking to build Artificial Intelligence applications for real-world usage. It is an expanded curriculum that breaks away from the regular eight-week full-time course structure and allows for modular customisation according to your own pace and preference. In every module, you will have the opportunity to build your Deep Learning models as part of your main project. You will also be challenged to use your new skills in an application that relates to your field of work or interest.

About this module:

One of the core skills in Natural Language Processing (NLP) is reliably detecting entities and classifying individual words according to their parts of speech. We will look at how Named Entity Recognition (NER) works and how Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTMs) are used for tasks like this and many others in NLP.
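As a rough illustration of what an RNN does under the hood, here is a minimal vanilla RNN step in numpy (not the course's own code; the weights are random rather than trained, and LSTMs extend this basic cell with gating):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: mix the current input with the previous hidden state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

def run_rnn(xs, W_xh, W_hh, b_h):
    """Run the RNN over a sequence, returning the hidden state at every step."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 8))          # 5 "tokens", each an 8-dim embedding
W_xh = rng.normal(size=(8, 16)) * 0.1  # input-to-hidden weights
W_hh = rng.normal(size=(16, 16)) * 0.1 # hidden-to-hidden weights
b_h = np.zeros(16)
states = run_rnn(seq, W_xh, W_hh, b_h)
print(states.shape)  # one 16-dim hidden state per token
```

A tagger such as an NER system would put a per-token classifier on top of each of these hidden states.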

Another common Deep Learning technique in NLP is the use of word and character vector embeddings. We will cover well-known models like Word2Vec and GloVe, how they are created, their unique properties, and how you can use them to improve accuracy on Natural Language Understanding problems and applications.
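To give a feel for why embeddings help, here is a toy sketch with made-up 4-dimensional vectors (real Word2Vec/GloVe embeddings are learned from large corpora and have hundreds of dimensions):

```python
import numpy as np

# Toy hand-made "embeddings" for illustration only.
vectors = {
    "king":   np.array([0.9, 0.8, 0.1, 0.0]),
    "queen":  np.array([0.9, 0.1, 0.8, 0.0]),
    "man":    np.array([0.1, 0.9, 0.1, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9, 0.1]),
    "banana": np.array([0.0, 0.1, 0.0, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related words score higher than unrelated ones.
print(cosine(vectors["king"], vectors["queen"]))
print(cosine(vectors["king"], vectors["banana"]))
```

This geometric notion of similarity is what lets a model generalise from words it has seen to related words it has not.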

We will also cover some of the recent developments in using transfer learning for text-related problems and language modelling. Transfer learning has led to some of the recent state-of-the-art results for text classification problems like sentiment analysis and many more. This section will cover papers on ULMFiT, ELMo and OpenAI’s most recent Transformer model.

One of the biggest applications in Natural Language is the creation of chatbots and dialog systems. Thus, in this module you will discover how various types of chatbots work, the key technologies behind them and systems like Google’s DialogFlow and Duplex.

We will also look at applications such as Neural Machine Translation. You will learn about the recent developments and models that use these techniques, and the various types of attention mechanisms that have dramatically increased the quality of translation systems.
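A minimal numpy sketch of the scaled dot-product attention idea behind these systems (keys are hand-constructed one-hot vectors so the result is deterministic; real models learn queries, keys and values):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(query, keys, values):
    """Scaled dot-product attention: weight each value by query-key similarity."""
    scores = keys @ query / np.sqrt(query.shape[-1])
    weights = softmax(scores)
    return weights @ values, weights

keys = np.eye(4)                         # 4 source positions, one-hot keys
values = np.arange(16.0).reshape(4, 4)   # a value vector per position
query = np.array([0.0, 0.0, 5.0, 0.0])   # query aligned with position 2
context, weights = dot_product_attention(query, keys, values)
print(weights)  # weights sum to 1, and position 2 dominates
```

In a translation model, the decoder uses weights like these to decide which source words to "look at" when emitting each target word.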

Beyond just text, this module will also cover time series predictions and how you can use techniques from the text-based models to make predictions on sequences. This opens the range of applications to include financial time series, continuous IoT readings, machinery failure prediction, website optimisation and trip planning.
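One step these sequence models share is framing a series as supervised (input window, next value) pairs. A minimal sketch, with a hypothetical helper name rather than anything from the course materials:

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Turn a 1-D series into (input window, future value) training pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])                 # the last `window` readings
        y.append(series[i + window + horizon - 1])     # the value `horizon` steps ahead
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)  # stand-in for sensor or price readings
X, y = make_windows(series, window=3)
print(X[0], y[0])  # [0. 1. 2.] -> 3.0
```

Each row of `X` can then be fed to an LSTM (or any regressor) to predict the corresponding `y`.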

As with all the other Deep Learning Developer modules, you will have the opportunity to build multiple models yourself. The main project in the module will allow you to apply the new skills acquired to your field of work or interest.

This workshop is eligible for funding support. For more details, please refer to the "Pricing" tab above.

In this course, participants will learn:

  • Text classification models and how to build a text classifier
  • To build a Named Entity Recogniser (NER) system
  • About sequence-to-sequence models
  • To build NLP models from scratch
  • To build a chatbot system powered by Machine Learning
  • To build a language model

Recommended Prerequisites:

  • Must have attended Module 1: Deep Learning Jumpstart Workshop
  • You MUST have a stable Wi-Fi connection to join the online workshop via your laptop
  • Please watch the introductory videos that will be sent out separately
  • Please experiment with the pre-exercises given


Advanced Natural Language Processing (NLP) and Temporal Sequence Processing Online Agenda

To ensure that each participant gains a strong understanding of the materials and can use the tools effectively, there will be multiple opportunities to ask questions throughout the curriculum, in smaller breakout groups as well as in individual project clinic sessions.


The link and set-up guide for the online training will be provided a few days in advance so that participants can sign up before the actual day of training.

Day 1 (4 February 2021)

08:45am – 09:00am: Registration

09:00am – 10:45am: Recurrent Neural Networks (RNNs) Recap Part 1

  • RNNs
  • Long Short-Term Memory (LSTMs)
  • Word embeddings: Word2Vec, GloVe
  • Basic Char RNNs
  • Word RNNs
  • Build LSTM networks

10:45am – 11:00am: Tea Break

11:00am – 12:30pm: RNNs Recap Part 2

12:20pm – 12:30pm: Q&A

12:30pm – 1:30pm: Lunch

1:30pm – 3:00pm: Natural Language Processing (NLP) Part 1

  • Text classification models
  • Bidirectional LSTMs
  • Building a Named Entity Recogniser (NER) system
  • Sentiment analysis
  • Building a text classifier
  • Personal text project
  • Main project

3:00pm – 3:15pm: Tea Break

3:15pm – 4:45pm: NLP Part 2

4:45pm – 5:15pm: Personal text project

  • Ideas on projects to do
  • Q&A on ‘doable projects’
  • Homework: What to bring to the next session

5:15pm – 5:30pm: Closing comments and questions

5:30pm - 5:45pm: Wrap Up

5:45pm - 6:15pm: Q&A

Day 2 (5 February 2021)

08:45am – 09:00am: Online Registration

09:00am – 10:45am: Sequence-to-sequence (Seq2Seq) and CNN for Text

  • Seq2Seq models
  • Convolutions for text networks
  • Clustering
  • Seq2Seq chatbot

10:45am – 11:00am: Tea Break

11:00am – 12:30pm: Project Clinic

Project questions and general follow up

12:30pm – 1:30pm: Lunch

1:30pm – 3:15pm: Time Series

  • Univariate vs Multivariate
  • Stationarity
  • Trends
  • Windowing and Differencing
  • ARIMA / SARIMA
  • LSTM for time series
  • ConvLSTM for time series
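As a small taste of the stationarity and differencing topics above (a synthetic trended series, not course material): a series with a trend is non-stationary because its mean drifts over time, and first-order differencing, the "I" in ARIMA, removes a linear trend.

```python
import numpy as np

# A series with a linear trend plus a periodic component: non-stationary.
t = np.arange(50, dtype=float)
trended = 0.5 * t + np.sin(t)

# First-order differencing: each value minus the previous one.
diffed = np.diff(trended)

# The differenced series fluctuates around a constant level (about 0.5 here).
print(trended.mean(), diffed.mean())
```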

3:15pm – 3:30pm: Tea Break

3:30pm – 4:30pm: The Rise of the Language Models

4:30pm – 5:15pm: Closing comments and questions

5:15pm – 5:45pm: Q&A

Participants will be given two weeks to complete their online learning and individual project.

Online Learning

  • Building NLP models from scratch
  • NLP pipelines
  • Guide to using spaCy
  • Building a chatbot system powered by Machine Learning
  • Building a language model
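As a toy preview of what a language model does, here is a character-bigram count model in pure Python (the course builds neural language models, which generalise this counting idea to learned representations):

```python
from collections import Counter, defaultdict

def train_bigram_lm(text):
    """Count character bigrams: P(next | current) is proportional to count(current, next)."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def most_likely_next(counts, ch):
    """The most probable character to follow `ch` under the counts."""
    return counts[ch].most_common(1)[0][0]

model = train_bigram_lm("the theory of the thing")
print(most_likely_next(model, "t"))  # 'h' follows 't' most often in this text
```

A neural language model plays the same game, predicting the next token given the context, but with a learned network instead of a count table.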


Participants must fulfil the criteria stated below to pass and complete the course.

  • Online Tests: Participants are required to score above 75%
  • Project: Participants are required to present, and achieve a pass on, a project that demonstrates the following:
      ◦ The ability to use or create a data processing pipeline that gets data into the correct format for running in a Deep Learning model
      ◦ The ability to create a model from scratch or use transfer learning to create a Deep Learning model
      ◦ The ability to train that model and get results
      ◦ The ability to evaluate the model on held-out data


S$1,605 / pax (after GST)

Funding Support

This workshop is eligible for CITREP+ funding.

CITREP+ is a programme under the TechSkills Accelerator (TeSA) – an initiative of SkillsFuture, driven by Infocomm Media Development Authority (IMDA).

*Please see ‘Guide for CITREP+ funding eligibility and self-application process’ below for more information.

Funding Amount: 

  • CITREP+ covers up to 90% of your nett payable course fee depending on your eligibility (for professionals)

Please note: funding is capped at $3,000 per course application

  • CITREP+ covers up to 100% of your nett payable course fee for eligible students / full-time National Servicemen (NSF)

Please note: funding is capped at $2,500 per course application

Funding Criteria:

  • Singaporean / PR
  • Meets course admission criteria
  • Sponsoring organisation must be registered or incorporated in Singapore (only for individuals sponsored by organisations)

Please note: 

  • Employees of local government agencies and Institutes of Higher Learning (IHLs) will qualify for CITREP+ under the “Individuals / Self-Sponsored” category
  • Sponsoring SMEs who wish to apply for up to 90% funding support for the course must meet the SME status as defined here

Claim Conditions: 

  • Meet the minimum attendance (75%)
  • Complete and pass all assessments and / or projects

Guide for CITREP+ funding eligibility and self-application process:

For more information on CITREP+ eligibility criteria and application procedure, please click here


For enquiries, please send an email to


Dr Martin Andrews

Martin has over 20 years’ experience in Machine Learning and has used it to solve problems in financial modelling and to build Artificial Intelligence (AI) automation for companies. His current area of focus and specialisation is Natural Language Processing and understanding. In 2017, Google appointed Martin as one of the first 12 Google Developer Experts for Machine Learning. Martin is also one of the Co-founders of Red Dragon AI.

Sam Witteveen

Sam has used Machine Learning and Deep Learning in building multiple tech startups, including a children’s educational app provider with over 4 million users worldwide. His current focus is AI for conversational agents that allow humans to interact with computers more easily and quickly. In 2017, Google appointed Sam as one of the first 12 Google Developer Experts for Machine Learning in the world. Sam is also one of the Co-founders of Red Dragon AI.
