
Language Modeling with Recurrent Neural Networks in TensorFlow
If you are working with text data using neural networks, RNNs are a natural choice for sequences. This course works through two language modeling problems using RNNs: optical character recognition (OCR) and text generation via character prediction.

This Course Includes
Pluralsight
4 (22 reviews)
2 hours 35 minutes
English
Online - Self Paced
Core Courses
About Language Modeling with Recurrent Neural Networks in TensorFlow
In this course, Language Modeling with Recurrent Neural Networks in TensorFlow, you will learn how RNNs are a natural fit for language modeling because of their inherent ability to store state. RNN performance and predictive abilities can be improved by using long-memory cells such as the LSTM and the GRU cell.
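To make "ability to store state" concrete, here is a minimal NumPy sketch of a vanilla RNN cell (not code from the course; all names and sizes are illustrative). The same step function is applied at every timestep, and the hidden vector `h` carries information forward through the sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 4, 8, 5

# Illustrative weights for a vanilla RNN cell
W_xh = rng.normal(scale=0.1, size=(input_size, hidden_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One RNN step: the new state depends on the input AND the previous state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

h = np.zeros(hidden_size)            # initial state
inputs = rng.normal(size=(seq_len, input_size))
states = []
for x_t in inputs:                   # unroll over the sequence
    h = rnn_step(x_t, h)             # state at step t summarizes steps 0..t
    states.append(h)

print(len(states), states[-1].shape)
```

LSTM and GRU cells replace `rnn_step` with gated updates, which is what lets them retain state over longer sequences.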
First, you will learn how to model OCR as a sequence labeling problem.
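A toy sketch of what "OCR as sequence labeling" means (the image, slice width, and word here are made up for illustration): a word image becomes a sequence of fixed-width slices, and the word's characters become the per-step labels, which is exactly the input/output shape an RNN consumes:

```python
import numpy as np

char_width, height = 8, 16
word = "cat"
# Stand-in for a real scanned word image (all zeros, just for shape)
word_image = np.zeros((height, char_width * len(word)))

# One flattened input vector per character position, one label per input:
# a sequence-labeling problem.
inputs = [word_image[:, i * char_width:(i + 1) * char_width].ravel()
          for i in range(len(word))]
labels = list(word)

for x, y in zip(inputs, labels):
    print(y, x.shape)
```

Real OCR pipelines slice and align images far less neatly, but the framing is the same: a sequence of feature vectors in, a sequence of character labels out.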
Next, you will explore how you can architect an RNN to predict the next character based on past sequences.
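The training data for such a character predictor can be sketched in a few lines of plain Python (the corpus and window size are arbitrary choices, not from the course): each example pairs a window of characters with the single character that follows it:

```python
text = "hello world"
window = 4

# Map each distinct character to an integer id
vocab = sorted(set(text))
char_to_id = {c: i for i, c in enumerate(vocab)}

examples = []
for i in range(len(text) - window):
    context = text[i:i + window]          # e.g. "hell"
    target = text[i + window]             # e.g. "o"
    examples.append(([char_to_id[c] for c in context], char_to_id[target]))

print(len(examples), examples[0])
```

At generation time the loop runs in reverse: the network predicts a next character, that character is appended to the context, and the process repeats.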
Finally, you will focus on understanding advanced functions that the TensorFlow library offers, such as bi-directional RNNs and the multi-RNN cell.
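What a bidirectional RNN computes can be sketched in plain NumPy (a vanilla cell stands in for TensorFlow's LSTM/GRU cells here; weights and sizes are illustrative): one pass runs left-to-right, one right-to-left, and the two states are concatenated at each step so every output sees both past and future context:

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size, seq_len = 3, 5, 6
inputs = rng.normal(size=(seq_len, input_size))

def run_rnn(xs, W_xh, W_hh):
    """Run a vanilla RNN over xs, returning the state at every step."""
    h, outs = np.zeros(hidden_size), []
    for x in xs:
        h = np.tanh(x @ W_xh + h @ W_hh)
        outs.append(h)
    return outs

# Forward pass over the sequence, backward pass over the reversed sequence
fw = run_rnn(inputs, rng.normal(scale=0.1, size=(input_size, hidden_size)),
             rng.normal(scale=0.1, size=(hidden_size, hidden_size)))
bw = run_rnn(inputs[::-1], rng.normal(scale=0.1, size=(input_size, hidden_size)),
             rng.normal(scale=0.1, size=(hidden_size, hidden_size)))
bw = bw[::-1]  # re-align the backward pass with the original time order

# Concatenate per-step states: each output now reflects both directions
bidir = [np.concatenate([f, b]) for f, b in zip(fw, bw)]
print(len(bidir), bidir[0].shape)
```

A multi-RNN (stacked) cell applies the same idea in depth rather than direction: the per-step outputs of one layer become the per-step inputs of the next.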
By the end of this course, you will know how to apply and architect RNNs for use cases such as image recognition, character prediction, and text generation; and you will be comfortable using TensorFlow libraries for advanced functionality, such as the bidirectional RNN and the multi-RNN cell.
What You Will Learn
- Course Overview: 1 min
- Applying Bidirectional Recurrent Neural Networks to Word Recognition: 44 mins
- Implementing Character Recognition Using Bidirectional RNNs: 42 mins
- Applying RNNs to Character Prediction for Text Generation: 34 mins
- Implementing RNNs for Character Prediction Used to Generate Text: 32 mins