For this recipe, we will create several helper functions. These functions will load the data, normalize the text, generate the vocabulary, and generate data batches. Only then will we start training our word embeddings. To be clear, we are not predicting any target variables; we are fitting word embeddings instead:
- First, we will load the necessary libraries and start a graph session:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import random
import os
import string
import requests
import collections
import io
import tarfile
import urllib.request
from nltk.corpus import stopwords

sess = tf.Session()
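The text-normalization helper mentioned at the start of this recipe might look like the following. This is a minimal sketch; the function name, signature, and the exact cleaning steps (lowercasing, stripping punctuation and digits, removing stopwords) are assumptions, not the book's definitive implementation:

```python
import string

def normalize_text(texts, stops):
    # Hypothetical helper: clean a list of raw text strings.
    # Lowercase everything
    texts = [x.lower() for x in texts]
    # Strip punctuation characters
    texts = [''.join(c for c in x if c not in string.punctuation) for x in texts]
    # Strip digits
    texts = [''.join(c for c in x if c not in '0123456789') for x in texts]
    # Drop stopwords and collapse extra whitespace
    texts = [' '.join(w for w in x.split() if w not in stops) for x in texts]
    return texts
```

For example, `normalize_text(["The Movie, was GREAT!"], {"the", "was"})` returns `["movie great"]`. In the recipe itself, the stopword set would come from `stopwords.words('english')` loaded above.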
- Then we declare some model parameters. We will look at 50 pairs of word ...