Understanding the Concept of Tensors in TensorFlow
- Sandhya Dwivedi
- Mar 20
- 4 min read
Updated: Mar 26
When I first started learning TensorFlow, one of the biggest concepts I had to wrap my head around was tensors. The name TensorFlow itself comes from the way data flows through tensors in deep learning models. But what exactly is a tensor, and why is it so important?
In this part, I'll break down what tensors are, how they work in TensorFlow, and demonstrate key operations with easy-to-follow examples. If you've ever worked with NumPy arrays, tensors will feel quite familiar—but with superpowers!
What Are Tensors?

A tensor is essentially a multi-dimensional array. Think of it as a generalization of scalars, vectors, and matrices that can hold any kind of data—numbers, images, text, and more.
Unlike regular Python lists or NumPy arrays, tensors are optimized for high-performance computations and can run efficiently on CPUs, GPUs, and TPUs (Tensor Processing Units). This is one of the main reasons why TensorFlow is so powerful for deep learning.
How Do Tensors Work in TensorFlow?

Tensors are the core data structures in TensorFlow, and understanding their properties is essential for working with deep learning models. Every tensor in TensorFlow has three key attributes: rank, shape, and data type (dtype). These properties define the structure of data and how it is processed during computations.
Rank of a Tensor (Number of Dimensions)
The rank of a tensor refers to the number of dimensions (or axes) it has. A tensor with rank 0 is called a scalar, meaning it holds a single value, such as 5. A rank 1 tensor is a vector, which is simply a 1D array like [3, 7, 9]. When a tensor has rank 2, it forms a matrix, which consists of rows and columns, such as [[1, 2], [3, 4]]. Tensors with rank 3 or higher are used to store complex data structures like images, videos, and multi-dimensional data.
For example, in TensorFlow we can create tensors and check their rank as follows:


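A minimal snippet (the values here are just illustrative) creates one tensor of each rank and inspects it with tf.rank:

```python
import tensorflow as tf

scalar = tf.constant(5)                  # rank 0: a single value
vector = tf.constant([3, 7, 9])          # rank 1: a 1D array
matrix = tf.constant([[1, 2], [3, 4]])   # rank 2: rows and columns

print(tf.rank(scalar).numpy())  # 0
print(tf.rank(vector).numpy())  # 1
print(tf.rank(matrix).numpy())  # 2
```

Note that tf.rank returns the number of dimensions, not the number of elements.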
Shape of a Tensor (Size of Each Dimension)
The shape of a tensor specifies the number of elements along each dimension. It provides a structural blueprint for the data contained within the tensor. For instance, a tensor with a shape of (3,) represents a 1D tensor (vector) with three elements, such as [1, 2, 3]. A tensor with a shape of (2, 3) represents a 2D tensor (matrix) with 2 rows and 3 columns, such as [[1, 2, 3], [4, 5, 6]].
We can check the shape of a tensor in TensorFlow using:


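For instance, with a 2×3 tensor (the values are illustrative):

```python
import tensorflow as tf

# A 2D tensor (matrix) with 2 rows and 3 columns
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])

print(tensor.shape)  # (2, 3)
```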
Here, the shape (2,3) tells us that the tensor contains 2 rows and 3 columns. In deep learning, shape plays a crucial role in defining the structure of neural network inputs, outputs, and intermediate layers.
Data Type (dtype) of a Tensor
The data type (dtype) of a tensor determines the kind of values it can store. TensorFlow supports various data types, including integers (tf.int32), floating-point numbers (tf.float32, tf.float64), Booleans (tf.bool), and strings (tf.string). By default, TensorFlow assigns a data type based on the values provided, but we can also specify a dtype manually.
For example, the following code checks the data type of a tensor:


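A short example (values illustrative) that sets the dtype explicitly and then checks it:

```python
import tensorflow as tf

# Explicitly request 64-bit floating-point precision
tensor = tf.constant([1.5, 2.5, 3.5], dtype=tf.float64)

print(tensor.dtype)  # <dtype: 'float64'>
```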
This ensures that the tensor uses 64-bit floating-point precision, which is useful in high-accuracy computations. The correct choice of dtype is essential in deep learning models, as it affects memory usage, training speed, and numerical stability.
Tensors are the foundation of TensorFlow, and understanding their rank, shape, and data type is crucial for working with machine learning models. The rank defines the number of dimensions, the shape determines the structure of data, and the data type controls how values are stored and processed. Mastering these concepts allows for efficient tensor manipulations, which are essential in deep learning workflows. By gaining a strong grasp of tensors, you are laying a solid foundation for building and optimizing AI models.
Examples of Tensor Operations in TensorFlow
Tensors are not just storage units; they can be manipulated efficiently using TensorFlow's built-in operations. Here are some essential tensor operations you’ll frequently use.
Basic Arithmetic Operations
TensorFlow supports addition, subtraction, multiplication, and division just like NumPy.


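A sketch of these four operations on two small example vectors:

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])

print(tf.add(a, b).numpy())       # [5 7 9]
print(tf.subtract(a, b).numpy())  # [-3 -3 -3]
print(tf.multiply(a, b).numpy())  # [ 4 10 18]
print(tf.divide(a, b).numpy())    # element-wise: 0.25, 0.4, 0.5
```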
These operations work element-wise just like NumPy arrays!
Matrix Multiplication (Dot Product)
When working with neural networks, matrix multiplication is a key operation. Instead of element-wise multiplication, this computes the dot product.


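A small illustrative example using tf.matmul on two 2×2 matrices:

```python
import tensorflow as tf

m1 = tf.constant([[1, 2], [3, 4]])
m2 = tf.constant([[5, 6], [7, 8]])

# Matrix (dot) product, not element-wise multiplication
result = tf.matmul(m1, m2)
print(result.numpy())
# [[19 22]
#  [43 50]]
```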
In deep learning, weights and inputs are often multiplied using this operation.
Reshaping a Tensor
Sometimes, we need to reshape tensors to fit model inputs or change the way data is structured.


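For example, a 2×3 tensor can be reshaped into a 3×2 tensor (values illustrative), as long as the total number of elements stays the same:

```python
import tensorflow as tf

tensor = tf.constant([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)

reshaped = tf.reshape(tensor, (3, 2))         # shape (3, 2)
print(reshaped.numpy())
# [[1 2]
#  [3 4]
#  [5 6]]
```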
Reshaping is commonly used when preparing image data for deep learning models.
Slicing a Tensor
TensorFlow allows slicing (extracting parts of a tensor), which is useful for data preprocessing.
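A short illustrative example that slices rows, columns, and sub-blocks out of a 3×3 tensor:

```python
import tensorflow as tf

tensor = tf.constant([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]])

print(tensor[0].numpy())        # first row: [1 2 3]
print(tensor[:, 1].numpy())     # second column: [2 5 8]
print(tensor[1:, :2].numpy())   # last two rows, first two columns
```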
Fig 1.9: Slicing a Tensor (output of the code above)
This is similar to NumPy slicing and is used in data augmentation and feature extraction.
Real-World Applications of Tensors
Tensors are used in various AI and ML applications, including:
Image Recognition – Representing pixel values as 3D tensors (Height × Width × Channels).
Natural Language Processing (NLP) – Converting words into vector embeddings for text processing.
Time-Series Forecasting – Using sequential tensors to analyze patterns over time.
Self-Driving Cars – Processing sensor data as high-dimensional tensors.
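As a small illustration of the image case, a dummy 28×28 RGB image can be represented as a rank-3 tensor (the 28×28 size is just an assumption for the example), and models typically expect an extra batch dimension in front:

```python
import tensorflow as tf

# A dummy 28x28 RGB "image": Height x Width x Channels
image = tf.zeros((28, 28, 3), dtype=tf.float32)
print(image.shape)  # (28, 28, 3)

# Add a batch dimension: (Batch, Height, Width, Channels)
batch = tf.expand_dims(image, axis=0)
print(batch.shape)  # (1, 28, 28, 3)
```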