Import an ONNX Model into TensorFlow for Inference


In the last tutorial, we trained a CNN model in PyTorch and converted that into an ONNX model. In the current tutorial, we will import the model into TensorFlow and use it for inference.

Before proceeding, make sure you have completed the previous tutorial, as this one builds directly on it.

Converting ONNX Model to TensorFlow Model

The output folder has an ONNX model which we will convert into TensorFlow format.

ONNX provides a Python module that loads the model and exports it as a TensorFlow graph.

We are now ready for conversion. Create a Python program with the conversion code and run it.
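A minimal sketch of that program is shown below, assuming the ONNX model from the previous tutorial was saved as output/model.onnx and that the onnx and onnx-tf packages are installed; the file paths are illustrative.

```python
# convert.py - load the ONNX model and export it as a TensorFlow frozen graph.
# Assumes the previous tutorial saved the model to output/model.onnx;
# adjust the paths to match your setup.
import onnx
from onnx_tf.backend import prepare

# Load the ONNX model produced in the previous tutorial
onnx_model = onnx.load("output/model.onnx")

# Prepare the TensorFlow representation and export it as model.pb
tf_rep = prepare(onnx_model)
tf_rep.export_graph("output/model.pb")
```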

The output folder now contains three models: PyTorch, ONNX, and TensorFlow.

We are now ready to use the model in TensorFlow. Note that this workflow works only with TensorFlow 1.x; for this tutorial we are using version 1.15, the last release of the 1.x series.

We start by importing the required modules and then disabling the warnings generated by TensorFlow.
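A sketch of this setup, assuming TensorFlow 1.15 with NumPy and OpenCV (cv2) installed:

```python
import cv2                      # OpenCV, used later to read and preprocess the image
import numpy as np
import tensorflow as tf

# Silence TensorFlow's deprecation and informational warnings
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
```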

The names of the input and output tensors can be found by opening the model.pb file in the Netron tool.

The names and shapes of the input node (input.1) and output node (add_4) are visible in Netron.

The next few lines of code preprocess the image with OpenCV. We then load the TensorFlow model and create a session based on its graph.
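A sketch of this step is shown below. The image file name (digit.png) and the normalization to [0, 1] are assumptions; the tensor names input.1 and add_4 are the ones reported by Netron, and the 1x1x28x28 shape matches the MNIST model exported from PyTorch in the previous tutorial.

```python
# Read the test image as grayscale and shape it the way the model expects
img = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)   # file name is illustrative
img = cv2.resize(img, (28, 28))
img = img.astype(np.float32) / 255.0                   # assumed normalization to [0, 1]
img = img.reshape(1, 1, 28, 28)                        # NCHW layout of the PyTorch-trained model

# Load the frozen TensorFlow graph exported from ONNX
with tf.io.gfile.GFile("output/model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Run inference using the input and output tensor names found in Netron
with tf.compat.v1.Session(graph=graph) as sess:
    input_tensor = graph.get_tensor_by_name("input.1:0")
    output_tensor = graph.get_tensor_by_name("add_4:0")
    predictions = sess.run(output_tensor, feed_dict={input_tensor: img})
```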

Finally, by applying the argmax function, we classify the output into one of the ten classes defined by MNIST.
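Continuing from the snippet above, a sketch of that final step:

```python
# The index of the highest score is the predicted digit class (0-9)
print("Predicted digit:", np.argmax(predictions))
```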

In this tutorial, we imported an ONNX model into TensorFlow and used it for inference. In the next part, we will build a computer vision application that runs at the edge powered by Intel’s Movidius Neural Compute Stick. The model uses an ONNX Runtime execution provider optimized for the OpenVINO Toolkit. Stay tuned.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

Feature image: Second-order Cauchy stress tensor, Wikipedia.






