Understanding Tensors in PyTorch: A Comprehensive Guide
- Nikita Barotkar
- Mar 7
- 3 min read

In machine learning and deep learning, a tensor is a generic n-dimensional array used for arbitrary numeric computation, and it is a fundamental concept for computation and data manipulation. While tensors share similarities with arrays, particularly those found in libraries like NumPy, they have distinct advantages, such as GPU acceleration and automatic differentiation, that make them crucial in frameworks like PyTorch. This blog will explore what tensors are, how they differ from NumPy arrays, and why they are essential in the context of PyTorch.
What is a Tensor?
A tensor is a multi-dimensional matrix containing elements of a single data type. Tensors are the fundamental data structure in the PyTorch framework and serve as its primary container for numerical values. They can store data in any number of dimensions and, like scalars and vectors, can represent physical quantities.
Let’s look at some math operations on PyTorch tensors.
First, you need to install the torch library. You can install it by running the following command in a Jupyter notebook:
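The exact command depends on your platform (see pytorch.org for platform-specific instructions); a common pip-based install from a notebook cell looks like this:

```python
!pip install torch
```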
Then import the torch library:
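```python
import torch
```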

1. torch.abs()
Description: This function computes the absolute value of each element in the tensor. It is useful for ensuring that all values are positive, which can be important in certain mathematical operations or when interpreting results.
Example:
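A minimal snippet (the expected output is shown as a comment):

```python
import torch

x = torch.tensor([-1.5, 2.0, -3.0])
print(torch.abs(x))  # tensor([1.5000, 2.0000, 3.0000])
```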
Use Case: Often used in loss functions or when calculating distances.
2. torch.add()
Description: This function adds two tensors element-wise. It is a basic arithmetic operation essential for many deep learning computations.
Example:
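For instance (torch.add(a, b) is equivalent to a + b):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print(torch.add(a, b))  # tensor([5, 7, 9])
```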
Use Case: It is commonly used in neural network layers for combining inputs or intermediate results.
3. torch.sub()
Description: This function subtracts one tensor from another element-wise. Like addition, it is fundamental for various calculations.
Example:
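A quick example (torch.sub(a, b) is equivalent to a - b):

```python
import torch

a = torch.tensor([10, 20, 30])
b = torch.tensor([1, 2, 3])
print(torch.sub(a, b))  # tensor([ 9, 18, 27])
```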
Use Case: Often used in comparing values or calculating differences.
4. torch.div()
Description: This function divides one tensor by another element-wise. It is crucial for normalization or scaling operations.
Example:
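For example (torch.div(a, b) is equivalent to a / b):

```python
import torch

a = torch.tensor([10.0, 20.0, 30.0])
b = torch.tensor([2.0, 4.0, 5.0])
print(torch.div(a, b))  # tensor([5., 5., 6.])
```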
Use Case: It is frequently used for scaling inputs or outputs.
5. torch.mul()
Description: This function multiplies two tensors element-wise. It is essential for many neural network operations, such as applying weights.
Example:
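For example (torch.mul(a, b) is equivalent to a * b):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print(torch.mul(a, b))  # tensor([ 4, 10, 18])
```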
Use Case: Commonly used for applying weights, masks, or gating values element-wise.
6. torch.neg()
Description: This function computes the negation of each element in the tensor. It is useful for flipping the sign of values.
Example:
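A short example (torch.neg(x) is equivalent to -x):

```python
import torch

x = torch.tensor([1.0, -2.0, 3.0])
print(torch.neg(x))  # tensor([-1.,  2., -3.])
```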
Use Case: Often used in mathematical derivations or when applying certain activation functions.
7. torch.pow()
Description: This function raises each element in the tensor to a specified power. It is versatile and used in various mathematical operations.
Example:
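For example, squaring each element (the exponent can be a scalar or a tensor):

```python
import torch

x = torch.tensor([1, 2, 3])
print(torch.pow(x, 2))  # tensor([1, 4, 9])
```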
Use Case: It is frequently used for computing polynomial terms or calculating norms.
8. torch.reciprocal()
Description: This function computes the reciprocal of each element in the tensor. It is useful for operations involving division.
Example:
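A quick example:

```python
import torch

x = torch.tensor([1.0, 2.0, 4.0])
print(torch.reciprocal(x))  # tensor([1.0000, 0.5000, 0.2500])
```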
Use Case: Often used in normalization or when computing certain mathematical expressions.
9. torch.remainder()
Description: This function computes the element-wise remainder of division, like Python's % operator. It is useful for cyclic operations or when dealing with periodic data.
Example:
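For example, taking each element modulo 3:

```python
import torch

x = torch.tensor([5, 7, 9])
print(torch.remainder(x, 3))  # tensor([2, 1, 0])
```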
Use Case: It is frequently used in periodic data or cyclic pattern tasks.
10. torch.square()
Description: This function computes the square of each element in the tensor. It is commonly used in calculating norms or distances.
Example:
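A short example (torch.square(x) is equivalent to x * x):

```python
import torch

x = torch.tensor([1.0, -2.0, 3.0])
print(torch.square(x))  # tensor([1., 4., 9.])
```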
Use Case: Often used in loss functions or when computing norms.
Tensor Broadcasting
Broadcasting refers to the ability to perform arithmetic operations on tensors of different shapes by automatically expanding the smaller tensor's dimensions to match the larger tensor's dimensions. This concept is borrowed from NumPy and is essential for simplifying operations in deep learning.
Rules for Broadcasting
Two tensors are considered "broadcastable" if they meet the following criteria:
At least one dimension: Each tensor must have at least one dimension.
Dimension compatibility: When comparing dimensions from the trailing end (rightmost side), the sizes must adhere to one of these conditions:
The dimensions are equal.
One of the dimensions is 1.
One of the tensors does not have that dimension (it can be considered as having size 1).
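Here is a small sketch of broadcasting in action: a tensor of shape (3,) is combined with a tensor of shape (4, 3), and its values are reused across every row without any explicit copying:

```python
import torch

a = torch.ones(4, 3)               # shape (4, 3)
b = torch.tensor([1.0, 2.0, 3.0])  # shape (3,), treated as (1, 3)

print(a + b)  # b is broadcast across all 4 rows of a
# tensor([[2., 3., 4.],
#         [2., 3., 4.],
#         [2., 3., 4.],
#         [2., 3., 4.]])
```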
Conclusion
Tensor operations in PyTorch cover the element-wise arithmetic at the heart of deep learning: absolute value, addition, subtraction, multiplication, division, and more. Broadcasting complements them by letting you combine tensors of different shapes without explicitly copying data, which saves both time and memory. Understanding these concepts is essential for using PyTorch effectively in machine learning and scientific computing.