Overview Guide to TensorFlow 2.x with Examples


The most concise and complete explanation of what TensorFlow is can be found at https://www.tensorflow.org/, and it highlights every important part of the library.

TensorFlow is an open-source software library for high-performance numerical computation.

Its flexible architecture allows easy deployment of computation across a range of platforms (CPUs, GPUs, and TPUs), from desktops to clusters of servers, to mobile and edge devices.

Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is employed across many other scientific domains.

In this blog post, we are going to cover the basics of TensorFlow 2.x. This can be used as a getting-started guide to learn and understand it.

I'm not going to cover the installation/setup of Jupyter itself, as that can easily be found online.





How to Install TF 2.0

We will see how to get the latest version of both the CPU and GPU builds installed on the machine. The exclamation mark (!) lets you run pip from the Jupyter notebook itself (locally).



## Installing tensorflow 2.0
# CPU version 
!pip install tensorflow
# GPU version
!pip install tensorflow-gpu


Importing and Validating the installation by checking the version

It is always important to verify the current versions of the installed packages, since a version mismatch can sometimes make results hard to replicate. We use "__version__" to view the installed TensorFlow version.

Tip:
The %tensorflow_version magic function (available in Google Colab) lets you switch between 1.x and 2.x.
To use 2.x, run a cell with the tensorflow_version magic before you run import tensorflow.



## Importing the prerequisites
## The %tensorflow_version magic (Colab only) switches between 1.x and 2.x;
## keep comments off the magic line or they get parsed as part of the version
%tensorflow_version 2.x
import tensorflow as tf

##Validating the imported tf version
print(f'Tensorflow version currently used is {tf.__version__}')
-------------------------------
Tensorflow version currently used is 2.1.0

What is a tensor?

A tensor is an n-dimensional array, a generalization of vectors and matrices, that can represent all types of data.
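
As a quick illustration, here is a minimal sketch creating constant tensors of rank 0, 1, and 2 with tf.constant (the variable names are just for illustration):

## A scalar (rank 0), a vector (rank 1), and a matrix (rank 2)
scalar = tf.constant(3)
vector = tf.constant([1, 2, 3])
matrix = tf.constant([[1, 2], [3, 4]])
print(scalar, vector, matrix, sep='\n')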

Shape of a tensor

The shape of a tensor is the number of elements along each of its dimensions. The shape of a tensor is accessed via a property (rather than a function):



## Creating a sample constant tensor
tf_constant = tf.constant([[10, 20], [30, 40]])

## Getting the shape of the tensor
print(tf_constant.shape)

## Reshaping the tensor
tf_reshape_constant = tf.reshape(tf_constant, [1, 4])
print(tf_reshape_constant)
------------------------------------
(2, 2)
tf.Tensor([[10 20 30 40]], shape=(1, 4), dtype=int32)

Rank (dimensions) of a tensor

The rank of a tensor is the number of dimensions it has, that is, the number of indices that are required to specify any particular element of that tensor.



print(f'The rank of original tensor is {tf.rank(tf_constant)}')
print(f'The rank of reshaped tensor is {tf.rank(tf_reshape_constant)}')
----------------------------------------
The rank of original tensor is 2
The rank of reshaped tensor is 2

Size of a tensor

The size of a tensor is the total number of elements it contains. You can also obtain it via NumPy, as shown after this example.

print(f'The size of the tensor is {tf.size(tf_constant)}')
-----------------------------------
The size of the tensor is 4
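
As mentioned above, the same count can be read from NumPy; a minimal sketch:

## Size via NumPy: convert to an ndarray and read its size attribute
print(f'The size via NumPy is {tf_constant.numpy().size}')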

Random value generation (similar to np.random)

This generates a normally distributed tensor with mean=0 and standard deviation=1 (the defaults):



tf.random.normal(shape=(2,2),seed=10)
---------------------------
tensor:="" 0.45011964="" 1.0018815="" 2="" dtype="float32)" numpy="array([[1.6368568" shape="(2,">

Convert a tensor to NumPy array and vice versa



print(type(tf_constant.numpy()))
print(tf_constant.numpy())
---------------------------
<class 'numpy.ndarray'>
[[10 20]
 [30 40]]
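
For the reverse direction, a NumPy array can be converted back into a tensor with tf.convert_to_tensor; a minimal sketch reusing the array from above:

## NumPy array -> tensor
np_array = tf_constant.numpy()
tf_from_numpy = tf.convert_to_tensor(np_array)
print(tf_from_numpy)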


TF VARIABLES

A tf.Variable represents a tensor whose value can be changed by running ops on it.
You can read and change the value of the tensor, which is not possible with constants.
The way to declare a TensorFlow eager variable is as follows; let's check it out with an example.



tf_variables = tf.Variable([[1.,2.,3.],[4.,5.,6.]])
print(tf_variables)

## Changing the value of tensor tf_variables at position (0,0)
tf_variables[0,0].assign(100)

-------------------------
<tf.Variable 'Variable:0' shape=(2, 3) dtype=float32, numpy=
array([[1., 2., 3.],
       [4., 5., 6.]], dtype=float32)>
<tf.Variable 'UnreadVariable' shape=(2, 3) dtype=float32, numpy=
array([[100.,   2.,   3.],
       [  4.,   5.,   6.]], dtype=float32)>
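
Variables also support in-place arithmetic updates such as assign_add and assign_sub; a minimal sketch:

## Add 1 to every element of the variable, in place
tf_variables.assign_add(tf.ones(shape=(2, 3)))
print(tf_variables)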

Working with argmax and argmin

We will now look at how to find the indices of the elements with the largest and smallest values, respectively, across the axes of a tensor.



tf_ex_argmx = tf.constant([2, 11, 5, 42, 7, 19, -6, -11, 29])
print(tf_ex_argmx)
i = tf.argmax(input=tf_ex_argmx)
print('index of max: ', i)
print('Max element: ', tf_ex_argmx[i].numpy())

i = tf.argmin(input=tf_ex_argmx, axis=0).numpy()
print('index of min: ', i)
print('Min element: ', tf_ex_argmx[i].numpy())
------------------------
tf.Tensor([  2  11   5  42   7  19  -6 -11  29], shape=(9,), dtype=int32)
index of max:  tf.Tensor(3, shape=(), dtype=int64)
Max element:  42
index of min:  7
Min element:  -11
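
For higher-rank tensors, argmax can also be taken along a chosen axis; a minimal sketch on a 2-D tensor (the example values are arbitrary):

tf_2d = tf.constant([[1, 9, 3], [8, 2, 7]])
print(tf.argmax(tf_2d, axis=0))  ## per-column indices of the max: [1 0 1]
print(tf.argmax(tf_2d, axis=1))  ## per-row indices of the max: [1 0]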

TensorFlow Operations

There is a complete list of all TensorFlow Python modules, classes, and functions at https://www.tensorflow.org/api_docs/python/tf.


All of the maths functions can be found at https://www.tensorflow.org/api_docs/python/tf/math.
We will look at some useful TensorFlow operations.


## Tensorflow OPERATIONS



## Addition between a tensor and a scalar
tf_new_constant = tf.constant([[10,20],[100,200]])

## Add the value 100 to all the values in the tensor
tf_new_constant = tf_new_constant+100 
print(tf_new_constant)

## Multiply all the values in the tensor by 100
tf_new_constant = tf_new_constant*100 
print(tf_new_constant)

## Divide all the values in the tensor by 100
tf_new_constant = tf_new_constant/100 
print(tf_new_constant)


## MATRIX OPERATION

## Transpose a matrix
matrix = tf.constant([[1,2,3,4]])
print(f"The original matrix is :\n {matrix}")

matrix_transpose = tf.transpose(matrix)
print(f"The transpose version of the matrix is :\n {matrix_transpose}")


## We also know that A multiplied by its transpose is always symmetric; let's try that
a=tf.constant([[1,2],[3,4]])
symmetric = tf.matmul(a,tf.transpose(a))
print(f'The symmetric matrix is :\n {symmetric.numpy()}')
--------------------------------

tf.Tensor(
[[110 120]
 [200 300]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[11000 12000]
 [20000 30000]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[110. 120.]
 [200. 300.]], shape=(2, 2), dtype=float64)
The original matrix is :
 [[1 2 3 4]]
The transpose version of the matrix is :
 [[1]
 [2]
 [3]
 [4]]
The symmetric matrix is :
 [[ 5 11]
 [11 25]]

NumPy Examples




import numpy as np

print(f'The square of the tensor : \n {np.square(tf_new_constant)}')

print(f'The sqrt of the tensor using NumPy : \n {np.sqrt(tf_new_constant)}')

print(f'The sqrt of the tensor using tf.math : \n {tf.math.sqrt(tf_new_constant)}')
----------------------------------
The square of the tensor : 
 [[12100. 14400.]
 [40000. 90000.]]
The sqrt of the tensor using NumPy : 
 [[10.48808848 10.95445115]
 [14.14213562 17.32050808]]
The sqrt of the tensor using tf.math : 
 [[10.48808848 10.95445115]
 [14.14213562 17.32050808]]

Glimpse of tf.function

tf.function takes a Python function and returns a callable that runs it as a TensorFlow graph. The advantage of this is that graphs can apply optimizations and exploit parallelism in the Python function (func). tf.function is new to TensorFlow 2.



@tf.function
def linear(m, x, b):
  return m*x + b

m = tf.constant([2, 3])
x = tf.constant([10, 20])
b = 2

linear(m, x, b)
-----------------------
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([22, 62], dtype=int32)>



TensorFlow Datasets



TensorFlow 2.0 provides a collection of datasets which can be downloaded and used for implementing ML models.
This is a super-useful feature when you want to quickly try out new functionality.
It handles downloading and preparing the data and constructing a tf.data.Dataset.
tfds.load is used to load a dataset from tensorflow_datasets.



import tensorflow_datasets as tfds

## After this, the dataset will start to download unless you specify download=False
fashion_mnist_train = tfds.load(name="fashion_mnist", split="train")
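
Once loaded, the dataset can be iterated like any tf.data.Dataset; a minimal sketch (for fashion_mnist, tfds yields dicts with "image" and "label" keys):

## Peek at a single example from the dataset
for example in fashion_mnist_train.take(1):
    print(example["image"].shape, example["label"])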

Keras API

As more and more TensorFlow users started using Keras for its easy-to-use high-level API,
TensorFlow engineers had to seriously consider subsuming the Keras project into TensorFlow as a module called tf.keras.

We can access the Keras APIs which are implemented in Tensorflow 2.0 through tf.keras.

In Keras, we assemble layers to build a model, and the most common type of model is a stack of layers: tf.keras.Sequential.



model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=32, activation='relu', input_shape=(784, )))
model.add(tf.keras.layers.Dropout(0.4))
## A 10-unit softmax output layer, so the model can classify the 10 Fashion-MNIST classes
model.add(tf.keras.layers.Dense(units=10, activation='softmax'))

Train + Feed the model

To start training, call the model.fit method, so called because it "fits" the model to the training data.
Here we take our dataset and fit a hypothesis function to it, so that the function describes the data well.
TensorFlow 2.0 has APIs we can use to train on our dataset, as below; as the model trains, the loss and accuracy metrics are displayed.



## X_train/y_train and X_test/y_test are assumed to be preprocessed
## Fashion-MNIST splits, with each image flattened to a vector of 784 pixels
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy'])
model.fit(X_train, y_train, epochs=10)
test_loss, test_accuracy = model.evaluate(X_test, y_test)


Save the entire model

We can use model.save() to save the model's architecture, weights, and training configuration in a single file/folder in Tensorflow 2.0.

This allows you to export a model so it can be used without access to the original Python code. Since the optimizer state is recovered, you can resume training from exactly where you left off.

# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')

model_json = model.to_json()
with open("fashion_model.json", "w") as json_file:
    json_file.write(model_json)
 
model.save_weights("fashion_model.h5")
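
Loading the saved model back is symmetric; a minimal sketch:

## Recreate the same model, including weights and optimizer state
loaded_model = tf.keras.models.load_model('my_model.h5')
loaded_model.summary()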

Multiple GPU distribution strategy

TensorFlow 2.0 provides the facility to run our models across multiple GPUs in parallel. The problem is usually not getting it to work but using multiple GPUs efficiently. We can use them for data parallelism, model parallelism, or simply for training different models on different GPUs. It is easy to set up and saves a lot of time!

To use multiple GPUs, we need to add the code below:



strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
  model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])


  model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

tf.distribute.MirroredStrategy supports distributed training on multiple GPUs at the same time. It creates one replica on each GPU, and each model variable is mirrored across the replicas and kept in sync. Each replica processes a slice of every input batch. Taking advantage of multiple GPUs is very easy with TensorFlow 2.0.
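
You can verify how many replicas the strategy created, which is useful when scaling the global batch size; a minimal sketch:

## Number of devices the strategy mirrors variables across
print(f'Number of replicas in sync: {strategy.num_replicas_in_sync}')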

Tensorboard

TensorBoard is a tool for providing the measurements and visualizations needed during the machine learning workflow. It enables tracking experiment metrics like loss and accuracy, visualizing the model graph, projecting embeddings to a lower-dimensional space, and much more.


  • Load the TensorBoard notebook extension using %load_ext tensorboard.
  • Add the tf.keras.callbacks.TensorBoard callback to ensure that logs are created and stored.
  • Place the logs in a timestamped subdirectory to allow easy selection of different training runs.
  • Start TensorBoard.

%load_ext tensorboard

import datetime

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

## Timestamped log directory so each run shows up separately in TensorBoard
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

## train_images/train_labels are assumed to be a preprocessed training split
model.fit(train_images, train_labels, epochs=5, callbacks=[tensorboard_callback])

%tensorboard --logdir logs/fit


Customizations

A callback is a class used to provide specific functionality at the time of training or evaluation of a Keras model.
Callbacks are useful for getting a view of the internal states and statistics of the model during training.



from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint

model.compile(optimizer=Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
## ModelCheckpoint needs a filepath to save the model to
model.fit(data, epochs=10, validation_data=val_data,
          callbacks=[EarlyStopping(), TensorBoard(), ModelCheckpoint('model.h5')])

Here, the callbacks are passed to the model.fit() method. The EarlyStopping() class stops training when the model stops improving.
The TensorBoard() class provides a visualization of how the model is being trained.
ModelCheckpoint() saves the model after every epoch.
With 10 epochs, the model will be saved 10 times.
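
You can also write your own callback by subclassing tf.keras.callbacks.Callback and overriding hooks such as on_epoch_end; a minimal sketch reusing the placeholder data from above (the class name and message are just for illustration):

class PrintLossCallback(tf.keras.callbacks.Callback):
    ## Called by Keras at the end of every epoch, with the metrics in `logs`
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"Epoch {epoch + 1} finished, loss = {logs.get('loss')}")

model.fit(data, epochs=10, callbacks=[PrintLossCallback()])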


Summary


In this tutorial, you discovered how to use TensorFlow 2.x and the various changes from older versions.

Specifically, you learned:

  • How to install TensorFlow 2.0 and validate it.
  • How to use tf.constant and tf.Variable, and view shape, rank, etc.
  • How to perform NumPy-like operations using tf.math.
  • How to use tf.function.
  • How to use TensorFlow Datasets.
  • How to use the Keras API.
  • How to train and fit a model in TensorFlow.
  • How to save the model.
  • How to enable a multi-GPU strategy.
  • How to enable TensorBoard.
  • How to work with custom callbacks.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Hey, I'm Venkat.
Developer, blogger, thinker, and data scientist. nintyzeros [at] gmail.com. I love data and problems. An Indian living in the US. If you have any questions, do reach out to me via the social media below.