Layers#
What are layers?#
Layers are the basic building blocks of a neural network. You can think of a layer as a filter: each one does something that helps with your task, and it’s convenient to think of a big neural network as many layers working together toward a common goal. Don’t overthink it. Anything can be a layer. A layer is just a fancy name for a function: it takes an input and spits out an output. It really can be anything, trust me.
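To make “a layer is just a function” concrete, here is a minimal sketch: a PyTorch `Linear` layer is created once and then called exactly like an ordinary function.

```python
import torch
from torch.nn import Linear

# A layer really is just a function: input in, output out.
layer = Linear(2, 3)   # a function from 2 numbers to 3 numbers
x = torch.randn(2)
y = layer(x)           # call it like any other function
print(y.shape)         # torch.Size([3])
```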
What are some common layers?#
- The Linear layer is everywhere.
- Convolution layers specialize in processing data with local patterns, like images and audio.
- Recurrent layers and Transformers are good at processing sequences and text.
- Padding and Pooling layers are good at reshaping the input data.
- Embedding layers convert tokens (like characters) into vectors (their meanings).
- …and a lot more
Layers in code#
import torch
from torch.nn import Conv2d, Linear, Module, Sequential
Layers in PyTorch are represented by the Module class. All layers, such as Linear, are subclasses of it.
print(issubclass(Linear, Module))
print(issubclass(Conv2d, Module))
True
True
This is how you define a custom Module. It’s easy.
class Identity(Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x
We created an identity module! It does nothing but spit out whatever is passed in. But see how easy that was? You now have a reusable module that can be put into a neural network, and it gets a lot of PyTorch’s machinery for free, such as hooks and pretty printing.
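Here is that module in action (redefined in the snippet so it runs on its own, with the `self` argument that `forward` needs):

```python
import torch
from torch.nn import Module

class Identity(Module):
    def forward(self, x):   # forward methods always take self first
        return x

identity = Identity()
x = torch.randn(4)
print(torch.equal(identity(x), x))  # True: the output is exactly the input
print(identity)                     # PyTorch prints a readable repr for free
```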
Now let’s create a sequential model.
model = Sequential(
    Linear(3, 4),
    Linear(4, 5),
    Linear(5, 6),
)
x = torch.randn(3)
print(x)
print(x.shape)
y = model(x)
print(y)
print(y.shape)
tensor([-1.1426, -0.6195, -0.2858])
torch.Size([3])
tensor([ 0.5437, 0.0750, -0.2696, -0.5904, 0.3968, -0.4805],
grad_fn=<ViewBackward0>)
torch.Size([6])
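One thing worth noticing about the model above: each layer’s output size has to match the next layer’s input size (3 → 4 → 5 → 6). As a small sketch, we can walk the child layers and count the parameters the model holds:

```python
import torch
from torch.nn import Linear, Sequential

model = Sequential(Linear(3, 4), Linear(4, 5), Linear(5, 6))

# Sequential lets us iterate over its child layers.
for name, layer in model.named_children():
    print(name, layer)

# Each Linear(i, o) holds i*o weights plus o biases:
# 3*4+4 = 16, 4*5+5 = 25, 5*6+6 = 36, for 77 in total.
print(sum(p.numel() for p in model.parameters()))  # 77
```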