import torch
import torch.nn as nn

x = torch.randint(0, 10, (1, 3, 5))  # Values will be 0 to 9
em = nn.Embedding(100, 2)  # Embedding with num_embeddings=100 and embedding_dim=2
output = em(x[0])  # x[0] has shape (3, 5); each index is mapped to a 2-dim vector
print("X :", x)
print("--"*20)
print("Output shape:", output.shape)
print("Output :", output)
Output

Notice that the first two dimensions of the output, (3, 5), match the input's second and third dimensions. Each integer index in the (3, 5) input slice is replaced by its 2-dimensional embedding vector, so the output has shape (3, 5, 2): number of rows (3), number of features per row (5), and embedding dimension (2).
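To make the lookup explicit, here is a minimal check, reusing the x and em defined above: nn.Embedding is essentially a lookup table, so indexing its weight matrix directly with the input indices should reproduce the same output.

# nn.Embedding stores a (100, 2) weight matrix; the forward pass is an
# index lookup, so fancy-indexing the weights gives the identical result.
manual = em.weight[x[0]]                  # shape (3, 5, 2): one weight row per index
print(torch.equal(em(x[0]), manual))      # True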