Introduction to PyTorch
import torch
t1 = torch.tensor(4.) # Scalar - 0D tensor; the decimal point makes it a float
t1
t1.dtype
t2 = torch.tensor([1., 2, 3, 4]) # Vector - 1D tensor
print(t2) # All the tensor elements are of the same data type
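Since one of the elements (1.) is a float, PyTorch promotes the whole tensor to floating point; we can confirm this with .dtype:
t2.dtype # torch.float32 - the integer elements were promoted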
# Matrix - 2D tensor
t3 = torch.tensor([[5, 6],
                   [7, 8],
                   [9, 10]])
print(t3)
# 3D tensor; most of the time we will use floating-point numbers
t4 = torch.tensor([
    [[11, 12, 13],
     [13, 14, 15]],
    [[15, 16, 17],
     [17, 18, 19.]]
])
t4
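We can index into a tensor one dimension at a time, for example:
t4[0, 1, 2] # tensor(15.) - first matrix, second row, third element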
Tensors can have any number of dimensions and a different length along each dimension. We can inspect the lengths using the .shape property:
print(t1)
t1.shape
print(t2)
t2.shape
print(t3)
t3.shape
print(t4)
t4.shape
# To read a shape, start from the outermost bracket and count its elements (here 2 matrices), then go one bracket in and count again (2 rows each), and so on.
We cannot make a tensor with an improper shape, i.e. one whose nested lists have unequal lengths.
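For example, rows of unequal length raise an error (a minimal illustration; the exact message may vary across PyTorch versions):
try:
    torch.tensor([[5., 6, 11], [7, 8]]) # rows of length 3 and 2 - improper shape
except ValueError as e:
    print(e) # e.g. expected sequence of length 3 at dim 1 (got 2)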
x = torch.tensor(3.)
w = torch.tensor(4., requires_grad=True) # track gradients w.r.t. w
b = torch.tensor(5., requires_grad=True)
x,w,b
y = w * x + b # y = 4 * 3 + 5 = 17
y
y.backward()
# Derivatives of y w.r.t. each input tensor are stored in the .grad property of the respective tensor
print('dy/dx', x.grad) # None - x does not track gradients
print('dy/dw', w.grad) # tensor(3.) - dy/dw = x
print('dy/db', b.grad) # tensor(1.) - dy/db = 1
We have not specified requires_grad=True for x; this tells PyTorch that we are not interested in derivatives of any future output w.r.t. x (which is why dy/dx above prints None), but we are interested in those w.r.t. w and b. The requires_grad property is important because it lets PyTorch skip what could be millions of useless derivative computations.
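Gradient tracking can also be switched off temporarily with torch.no_grad(), which is how evaluation code avoids those computations; a minimal sketch (y_eval is just an illustrative name):
with torch.no_grad():
    y_eval = w * x + b # no computation graph is built inside this block
y_eval.requires_grad # False - no derivatives will flow through y_eval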
"grad" in w.grad is short for gradient, which is another term for derivative primarily used while dealing with vectors and matrices
t6 = torch.full((3, 2), 42) # a 3x2 tensor with every element set to 42
t6
t7 = torch.cat((t3, t6)) # concatenate t3 and t6 along dimension 0 -> shape (6, 2)
t7
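By default torch.cat joins tensors along dimension 0 (stacking rows); passing dim=1 joins them side by side instead:
torch.cat((t3, t6), dim=1) # shape (3, 4): each row of t3 followed by a row of 42s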
t8 = torch.sin(t7) # element-wise sine; the integer elements are promoted to float
t8
t9 = t8.reshape(3, 2, 2) # rearrange the 12 elements into shape (3, 2, 2)
t9
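reshape only requires that the sizes multiply to the total number of elements (here 12); one size can be given as -1 and PyTorch infers it:
t8.reshape(2, -1) # PyTorch infers the second size as 6, since 2 * 6 = 12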
import numpy as np
x = np.array([[1, 2], [3, 4.]])
x
y = torch.from_numpy(x) # convert the NumPy array to a torch tensor
y
x.dtype, y.dtype # both float64 - from_numpy preserves the dtype
z = y.numpy() # convert the tensor back to a NumPy array
z
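Note that torch.from_numpy and .numpy() share memory with the source array rather than copying it, so an in-place change to one is visible in the others:
x[0, 0] = 100. # modify the NumPy array in place
y # the change shows up in the tensor too - they share the same buffer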