In the last part of this series, we created the shared PVCs to enable collaboration among data scientists, machine learning engineers, and the DevOps team. Before that, we also built CPU and GPU-based container images for launching Jupyter Notebook Servers in Kubeflow.

Next, we will leverage the storage volumes and container images to build a simple machine learning pipeline based on three independent Notebook Servers, each focused on a specific task: data preparation, training, or inference.

This series aims not to build an extremely complex neural network but to demonstrate how Kubeflow helps organizations with machine learning operations (MLOps).

The current installment of this tutorial series focuses on building a Notebook Server for the data scientists to convert a set of images into a dataset ready to be used by the ML engineers to build and train a model.

We will start by uploading the ZIP file containing the images of cats and dogs from the popular Kaggle competition dataset. By the end of this tutorial, we will have two CSV files containing the paths for the training and validation datasets.

Make sure you have the shared PVCs created and visible in the Kubeflow dashboard. These PVCs will be mounted in the Notebook Server pods to write shared artifacts such as the dataset and models.

Let’s create a Notebook Server based on the Jupyter environment from the CPU-based container image created in the previous part of this tutorial. The custom container image has all the required Python modules to prepare and process the dataset.


From the Notebooks section of the navigation bar, click New Server.

Give the Notebook Server a name of your choice, choose the custom image option, and provide the name of the Docker image built for data preparation. Allocate CPUs and RAM depending on the available resources. We don’t need a GPU for this environment.

Add a volume to create the personal workspace for the Notebook Server; this becomes the home directory of the user. For the data volumes, mount the existing shared volume, datasets, created earlier. Processed data will be stored in the directory backed by this volume.

When you are ready, click the launch button to provision the Notebook Server. It may take a few minutes for the environment to become ready.

Behind the scenes, Kubeflow launches a Kubernetes StatefulSet based on the custom container image in the kubeflow-user-example-com namespace.

Let’s inspect the volumes section of the pod to verify that the volumes are mounted correctly.
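If you have kubectl access to the cluster, you can verify this from a terminal. The pod name below assumes the Notebook Server was named data-preparation, so substitute your own:

```bash
# List the StatefulSet Kubeflow created for the Notebook Server
kubectl get statefulsets -n kubeflow-user-example-com

# Describe the pod and check its Volumes and Mounts sections
# (data-preparation-0 is a placeholder for your pod's name)
kubectl describe pod data-preparation-0 -n kubeflow-user-example-com
```

You should see both the workspace volume and the shared datasets volume in the output.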

Switch back to the Kubeflow dashboard and click Connect to access the Notebook Server. You should see the datasets directory in the environment.

Let’s get the raw dataset into the environment. Download the train.zip file from Kaggle’s Dogs vs. Cats competition.

Create a directory called raw under the datasets directory, and upload the downloaded train.zip into it. Since the file is over 500MB, the upload may take a while.

We are now ready to process the raw data and turn it into a dataset.

Download the Jupyter Notebook from the GitHub repository and upload it to the root directory of the Notebook Server.

Launch the Jupyter Notebook and run each cell to start processing the dataset.

We import the required Python modules already installed in the custom container image.
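The exact import list depends on what the notebook loads, but a minimal sketch covering the steps below might look like this (os, zipfile, Pandas, Matplotlib, and Pillow are assumed to be present in the custom image):

```python
import os           # directory traversal and path handling
import random       # shuffling files before the train/val split
import zipfile      # extracting the raw train.zip archive

import pandas as pd                 # building the CSV datasets
import matplotlib.pyplot as plt     # previewing sample images
from PIL import Image               # loading individual images
```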

Next, we will unzip the raw dataset and inspect it.
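A cell along these lines would do the job. The ~/datasets mount point and the raw subdirectory match the setup above, and the archive expands into a train/ subdirectory, as Kaggle’s train.zip does:

```python
# Paths assume the shared volume is mounted at ~/datasets
# and train.zip was uploaded to ~/datasets/raw
raw_dir = os.path.expanduser("~/datasets/raw")
archive = os.path.join(raw_dir, "train.zip")

# Extract the archive; it expands into a train/ subdirectory
with zipfile.ZipFile(archive, "r") as zf:
    zf.extractall(raw_dir)

train_dir = os.path.join(raw_dir, "train")
files = os.listdir(train_dir)
print(f"{len(files)} files extracted")   # 25,000 images in the Kaggle dataset
print(files[:5])
```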

Let’s inspect the dataset by accessing the first few images from each class — dogs and cats.
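Continuing the sketch, the file names in the Kaggle archive follow the cat.N.jpg / dog.N.jpg convention, so we can pick the first few of each class and plot them:

```python
# Show the first few images from each class in a 2x3 grid
cats = sorted(f for f in files if f.startswith("cat."))[:3]
dogs = sorted(f for f in files if f.startswith("dog."))[:3]

fig, axes = plt.subplots(2, 3, figsize=(9, 6))
for ax, name in zip(axes.flat, cats + dogs):
    ax.imshow(Image.open(os.path.join(train_dir, name)))
    ax.set_title(name)
    ax.axis("off")
plt.show()
```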

We will now parse the files in the directory and generate a list for each category.
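A sketch of this step is below; the 80/20 split ratio and the fixed random seed are illustrative choices, not necessarily what the original notebook uses:

```python
# Label each file from its name prefix ("cat" or "dog")
labeled = [(os.path.join(train_dir, f), f.split(".")[0]) for f in files]

# Shuffle and split 80/20 into train and validation lists
random.seed(42)          # make the split reproducible
random.shuffle(labeled)
split = int(0.8 * len(labeled))
train = labeled[:split]
val = labeled[split:]
print(len(train), len(val))   # 20000 5000 for the full 25,000-image set
```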

We now have two lists – train and val – containing the paths to the files from each category. Let’s use the Pandas library to turn them into CSV files.
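For example (the train.csv/val.csv file names and the path/label column names are illustrative):

```python
# Write each list to a CSV on the shared volume so the training
# environment can read it from the same PVC
datasets_dir = os.path.expanduser("~/datasets")

pd.DataFrame(train, columns=["path", "label"]).to_csv(
    os.path.join(datasets_dir, "train.csv"), index=False)
pd.DataFrame(val, columns=["path", "label"]).to_csv(
    os.path.join(datasets_dir, "val.csv"), index=False)
```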

At this point, the datasets directory has two CSV files that will act as the training and validation datasets for the model we build in the next part.


With the dataset in place, we are all set to launch the training environment to build and train a convolutional neural network to classify the images. Stay tuned for the next part, which focuses on training.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.