Bigram Model#
This is one of the simplest generative models. It predicts the next character (letter/symbol) based only on the single previous character, much like an extremely simple mobile keyboard auto-complete.
Tokens & Vocabulary#
The text is first “tokenized” by assigning the numbers 0–64 to the unique characters that appear in it: the uppercase & lowercase letters of the English alphabet, punctuation, and whitespace. This mapping is called the vocabulary. Since there are 65 unique tokens, the vocabulary size is 65.
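To make this concrete, here is a toy sketch of character-level tokenization (illustrative only; the actual vocabulary is built from the full dataset in the code below):

# hypothetical toy example of building a character-level vocabulary
sample_text = "hello world"
sample_vocab = sorted(set(sample_text))                    # [' ', 'd', 'e', 'h', 'l', 'o', 'r', 'w']
char_to_id = {ch: i for i, ch in enumerate(sample_vocab)}  # character -> integer token
print([char_to_id[c] for c in "hello"])                    # [3, 2, 4, 4, 5]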
Embedding Table#
The model has only one layer, called an “embedding table”, which acts as a trainable lookup table. The embedding table is a 65 × 65 matrix.
Let’s say the previous token is the character “C” (uppercase C), which is assigned 15 in the vocabulary. When 15 is given as the input, the embedding table returns its 15th row, a vector of length 65, as the output. Each of the 65 elements in this output is a score (logit) for the corresponding next token; a softmax turns these scores into probabilities. If the 13th element has the highest value, the most likely next token is 13 = “A”.
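A minimal PyTorch sketch of this lookup (illustrative; the real layer is created inside the model class further below):

import torch
import torch.nn as nn

table = nn.Embedding(65, 65)           # 65 x 65 trainable lookup table (untrained here)
token = torch.tensor([15])             # "C" -> 15
logits = table(token)                  # shape (1, 65): one score per possible next token
probs = torch.softmax(logits, dim=-1)  # softmax turns scores into probabilities
print(probs.argmax().item())           # index of the most likely next token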
Training#
We train on the “Tiny Shakespeare” dataset, which contains 40,000 lines from Shakespeare’s plays. Given each token, we let the model guess the next token and then compute the cross-entropy loss \(-\sum_i t_i \log(y_i)\), where \(t_i\) is the ground truth (1 for the actual next token from the dataset, 0 otherwise) and \(y_i\) is the probability the model assigns to token \(i\).
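For a single prediction this loss is simply the negative log of the probability the model assigns to the true next token. A small sketch with made-up values (the actual training loop is further below):

import torch
import torch.nn.functional as F

logits = torch.randn(1, 65)                     # made-up scores for the 65 possible next tokens
target = torch.tensor([13])                     # suppose the actual next token is 13 = "A"
loss = F.cross_entropy(logits, target)          # applies softmax internally
manual = -F.log_softmax(logits, dim=-1)[0, 13]  # -log(probability of the true token)
print(loss.item(), manual.item())               # the two values match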
Results#
We can see the loss decreasing over the training iterations, and the resulting model generates text that somewhat resembles real text rather than purely random characters. But the generated text is still incomprehensible, since this model is too simple.
Limitation#
This model looks at only a single past token to predict the next token, which makes it extremely short-sighted. This is equivalent to a context window of length 1. We will improve on this in the next models.
import torch
import torch.nn as nn
from torch.nn import functional as F
Hyperparameters#
B = 32 # batch size: how many independent sequences will we process in parallel?
T = 1 # time: what is the maximum context length for predictions?
max_iters = 3000
eval_interval = 300
learning_rate = 1e-2
device = 'cuda' if torch.cuda.is_available() else 'cpu'
eval_iters = 200
Dataset#
!wget https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt
--2024-06-09 01:23:03-- https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1115394 (1.1M) [text/plain]
Saving to: ‘input.txt.2’
input.txt.2 0%[ ] 0 --.-KB/s
input.txt.2 100%[===================>] 1.06M --.-KB/s in 0.02s
2024-06-09 01:23:03 (43.7 MB/s) - ‘input.txt.2’ saved [1115394/1115394]
torch.manual_seed(1337)
with open('input.txt', 'r', encoding='utf-8') as f:
    text = f.read()
# here are all the unique characters that occur in this text
chars = sorted(list(set(text)))
vocab_size = len(chars)
# create a mapping from characters to integers
stoi = { ch:i for i,ch in enumerate(chars) }
itos = { i:ch for i,ch in enumerate(chars) }
encode = lambda s: [stoi[c] for c in s] # encoder: take a string, output a list of integers
decode = lambda l: ''.join([itos[i] for i in l]) # decoder: take a list of integers, output a string
chars_str = ''.join(chars)
print(f'vocab_size: {vocab_size}')
print(f'vocabulary: {chars_str}')
vocab_size: 65
vocabulary:
!$&',-.3:;?ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
# Train and test splits
data = torch.tensor(encode(text), dtype=torch.long)
n = int(0.9*len(data)) # first 90% will be train, rest val
train_data = data[:n]
val_data = data[n:]
# data loading
def get_batch(split):
    # generate a small batch of data of inputs x and targets y
    data = train_data if split == 'train' else val_data
    ix = torch.randint(len(data) - T, (B,))  # B random starting positions
    x = torch.stack([data[i:i+T] for i in ix])
    y = torch.stack([data[i+1:i+T+1] for i in ix])
    x, y = x.to(device), y.to(device)
    return x, y
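A quick, optional sanity check of the batch shapes (assumes the cells above have been run):

xb, yb = get_batch('train')
print(xb.shape, yb.shape)  # torch.Size([32, 1]) and torch.Size([32, 1]) with B=32, T=1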
Model#
class BigramLanguageModel(nn.Module):

    def __init__(self):
        super().__init__()
        # each token directly reads off the logits for the next token from a lookup table
        self.token_embedding_table = nn.Embedding(vocab_size, vocab_size) # for every possible token, weights for next token

    def forward(self, idx, targets=None):
        '''
        B - batch    # of independent sequences processed in parallel
        T - time/block/context    # of tokens in a context
        C - channels/dimensionality    # here C = vocab_size
        '''
        # idx and targets are both (B,T) tensors of integers
        logits = self.token_embedding_table(idx) # (B,T,C)

        if targets is None:
            loss = None
        else:
            B, T, C = logits.shape # C = vocab_size
            logits = logits.view(B*T, C)
            targets = targets.view(B*T)
            loss = F.cross_entropy(logits, targets)

        return logits, loss

    def generate(self, idx, max_new_tokens):
        for _ in range(max_new_tokens): # idx is (B, T) array of indices in the current context
            logits, loss = self(idx) # get the predictions
            logits = logits[:, -1, :] # focus on the last time step: (B,T,C) -> (B, C)
            probs = F.softmax(logits, dim=-1) # (B, C)
            idx_next = torch.multinomial(probs, num_samples=1) # sample the next token from the distribution, (B, 1)
            idx = torch.cat((idx, idx_next), dim=1) # append to the running sequence, new idx is (B, T+1)
        return idx
model = BigramLanguageModel()
m = model.to(device)
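The entire model is just the 65 × 65 embedding table, which an optional parameter count confirms:

print(sum(p.numel() for p in m.parameters()))  # 4225 = 65 * 65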
@torch.no_grad()
def estimate_loss():
    out = {}
    model.eval()
    for split in ['train', 'val']:
        losses = torch.zeros(eval_iters)
        for k in range(eval_iters):
            X, Y = get_batch(split)
            logits, loss = model(X, Y)
            losses[k] = loss.item()
        out[split] = losses.mean()
    model.train()
    return out
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
Training#
for iter in range(max_iters):

    # every once in a while evaluate the loss on train and val sets
    if iter % eval_interval == 0:
        losses = estimate_loss()
        print(f"step {iter}: train loss {losses['train']:.4f}, val loss {losses['val']:.4f}")

    # sample a batch of data
    xb, yb = get_batch('train')

    # evaluate the loss
    logits, loss = model(xb, yb)
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
step 0: train loss 4.7220, val loss 4.7486
step 300: train loss 4.1878, val loss 4.3036
step 600: train loss 3.9232, val loss 4.0731
step 900: train loss 3.7221, val loss 3.7067
step 1200: train loss 3.3978, val loss 3.3778
step 1500: train loss 3.1350, val loss 3.1010
step 1800: train loss 3.1643, val loss 3.1060
step 2100: train loss 2.9825, val loss 3.1367
step 2400: train loss 3.0083, val loss 2.9184
step 2700: train loss 2.8582, val loss 2.9125
Inference#
context = torch.zeros((1, 1), dtype=torch.long, device=device) # start with '\n' as seed
out_ints = m.generate(context, max_new_tokens=500)[0].tolist() # output list of ints
print(decode(out_ints))
S!Emh.. Acfomvid bno.:CUf:d:ghelofad,SVY!ugdathe t ypcxFbyfofineQbmu I¥e d tRFZGNGoufMkHxKj?s yvndinEUzrierGlo m mend.
PER:L;ves sBjNMNWWP, ybre,
TAdbrengotonol:LbHWz?YCHxrelo in tingisof s, nWbs!oulu y he i, ndanves th sc!$RrvirtR
Nf?e, dosxToutimsREs. fad, avYW:CavwivoTxEkendouZ?3KH--RI, ase 3!aists.jxjthsthandt tt'godan: IU!zgu:ztgmno VZYog t f r toitoulpqp CHUDMkXLvOvenXme:gxUGTO:Ml!BNNUFinds h ile IBave; bymeare.
Mlou thagis thhe, houXCCoingmof to lyZLCcqbye sisimanereagCad rS&ouRorar;ng
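As a concrete illustration of the limitation discussed earlier: any two contexts that end in the same character yield exactly the same next-token distribution. An optional check using the trained model m and the encode helper defined above:

ctx1 = torch.tensor([encode("the quick brown fox")], dtype=torch.long, device=device)
ctx2 = torch.tensor([encode("x")], dtype=torch.long, device=device)
logits1, _ = m(ctx1)
logits2, _ = m(ctx2)
print(torch.allclose(logits1[0, -1], logits2[0, -1]))  # True: only the last token matters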