Non-Destructive Inspection

General Description

This component runs Non-Destructive Inspection algorithms on images using Traditional Machine Vision and Machine Learning techniques.

The component embraces a set of computer vision techniques to allow the expert user to build image processing algorithms with the purpose of highlighting defects and/or measuring physical quantities of the product, such as dimensional ones. This allows assessing the quality and the conformity of the inspected product.

Resource Location
Repository Location Link
Latest Release Version 2.1.19
Open API Spec Link
Video

NDI - Computer Vision Designer and Runtime

NDI - Computer Vision AI Classifier

Further Guidance
  • None

Related Datasets and Examples Link
Additional Links
  • None

Generation date of this content 14 May 2021

Screenshots

The following images are illustrative screenshots of the component.

Figure 1. Designer

Figure 2. Functions Documentation

Figure 3. Module Development: Code Editor

Component Author(s)

Company Name ZDMP Acronym Website Logo
Video Systems Srl. VSYS videosystems.it

Commercial Information

Resource Location
IPR Link

VSYS AI Image Classifier

VSYS AI Image Labeller

VSYS Features Analysis Suite

VSYS Silhouette Suite

VSYS MVD Machine Vision Designer

Price [For determination at end of project]
Licence [For determination at end of project]
Privacy Policy [For determination at end of project]
Volume license [For determination at end of project]

Architecture Diagram

The following diagram shows the position of this component in the ZDMP architecture.


Figure 4. Position of component in ZDMP Architecture

Benefits

  • Easy development of complex Computer Vision algorithms

  • Visual debugging tools for easier development and testing

  • Modular approach to have flexible, extensible, and optimized functionalities

  • Easy implementation of complex Deep Learning / Neural Networks models

  • Real-time and historical processing of images

Computer Vision solutions:

  • Quality Control

  • Object Recognition

  • Defects Detection

  • Metrology

  • Biometrics

  • Image Classification

  • Counting / Presence of an object

  • Optical Character Recognition (OCR)

  • Barcode / QR / Data Matrix code scanning

  • Thermal Imaging

  • Robot guidance (using 3D sensors for the world mapping)

  • Space Analysis with the help of a robot (robot-mounted 3D sensors)

  • Robot picking (wall- or robot-mounted 3D or 2D sensors)

  • Robot Object Recognition and Quality Control

Features

There are many Computer Vision algorithms that use different techniques, for example traditional Machine Vision, Image Processing, Pattern Recognition and Machine Learning. This component allows a ZDMP Developer to build complex Computer Vision algorithms with the help of Graphic Tools for testing and debugging. The algorithm can then be run at run-time, with real-time or historic data from cameras.

This component offers the following features:

  • Designer

  • Runtime

The Designer allows the developer to visually build an algorithm by connecting multiple functions. Functions are atomic algorithms. Figure 5 shows two functions (Colour Spaces Conversion and Brightness).

Debugging tools, like the Object Dumper also in Figure 5, help the developer to build an algorithm: the output parameters of every Function can be displayed, which gives a clear idea of what each Function is doing. If a developer changes a parameter, the algorithm is automatically executed and the results are instantly shown, allowing fast and reliable development.

Even non-experts can develop complex algorithms with these tools, relying on the extensive documentation of each function.

Figure 5. Use of Image Visualizer block as a debugging tool inside the Designer

The Designer can export the developed algorithm, so that the Runtime, the engine that executes algorithms, will be able to run it at run-time, for example with in-line data from cameras. The results of the algorithm can also be published in the Service & Message Bus, to allow the integration with other ZDMP Components. Multiple algorithms can be created, for multiple products, cameras, and production lines.

The component is fully modular:

  • A module is a package that contains a list of functions

  • Each module runs specific algorithms and can have custom dependencies

  • Modules and their functions are shown in the Designer

There are many Computer Vision algorithms, and a specific use case most probably will not need all of them. That is why there is a “Builder” that lets the developer choose which modules to enable. This is especially important when the component is run on edge PCs or embedded devices with limited hardware performance. Developers can buy modules in the Marketplace, after which they are available for selection in the Builder. Custom modules can also be developed and uploaded when specific new functionalities are needed.

AI Image Classification

A service available in the platform is the AI Image Classification tool that allows the classification of images into different “classes”. A class is a label (cat, dog, zebra, etc) given to an image. Images representing products or details of products can be classified according to the defects present inside the images, the simplest classification being the binary Good-vs-Bad classification. A Neural Network learns to classify images from a labelled dataset and is then used to classify new images without labels. These are the two well-known AI phases:

  • Training: This is performed through the above-mentioned AI Image Classification tool. Its UI allows the user to upload an image dataset and classify its images. The Neural Network will train on that dataset. More advanced parameters can be edited by skilled users. The output is a trained model, which is saved and later used for inference.

Figure 6. AI Image Classification: Dataset Loading


Figure 7. AI Image Classification: Dataset Classification

Figure 8. AI Image Classification: Training Setup


Figure 9. AI Image Classification: Training Results

  • Inference: The trained model is used to classify new images through the Runtime component. A related AI Inference module is present in the Designer (see Figure 10 and Figure 11) and its main functions provide a model parameter for supplying the trained model (Figure 12).

Figure 10. AI Image Classification: Module

Figure 11. AI Image Classification: Module Info

Figure 12. AI Image Inference Module: Classification Inference Function used in the Designer

System Requirements

The component is managed by the Application Runtime. All the component’s software requirements are automatically resolved.

As the component is very flexible and can be used to run a multitude of algorithms, hardware requirements may vary depending on the frequency of process requests, the size of files and images, and the techniques used. Some modules, based on Machine Learning and Neural Networks, may require one or more GPUs if the process time is a crucial factor.

In general, developers should run some tests to judge how many resources are needed. The component will auto-scale as needed to use all the available resources. If the component requests resources which are not available, the hardware must be upgraded.

This is an edge component, which means it can be installed in the edge layer.

Software
Application Runtime
Hardware
8+ CPUs
16GB+ RAM
100GB+ disk
x NVIDIA CUDA-capable GPUs (if any module needs them)

Associated ZDMP services

Required

  • Secure Authentication and Authorisation: This is needed to secure the communication between components and authorize API calls

  • Portal: Needed to access all services within the platform, centralizing the authentication procedure

  • Marketplace: The modules created by developers must be uploaded to the Marketplace. Users can then buy modules based on their needs and those modules will be available in the Builder component

  • Service and Message Bus: Needed for the communication between the Runtime component and a zApp or another component

  • Application Runtime: Needed to build and run the component and any EXTERNAL_IMAGE modules

Optional

  • Data Acquisition: The Data Acquisition component exposes images from cameras inside the platform to be analysed

  • Storage: The Storage component stores images for later processing

Installation

The component is already installed on the ZDMP platform. It is accessible from the Portal.

To manually deploy a new instance of the component, access the Application Runtime and launch a new deployment from the Catalog, as in Figure 13. Additional configuration options can be set there, as in Figure 14.


Figure 13. Deploy the component from the Application Runtime


Figure 14. Edit the component’s configuration from the Application Runtime

How to use

The Component is divided into three phases:

  • Building phase

  • Designing phase

  • Runtime phase

In the Building phase, the Developer chooses which modules to enable in the Designer and Runtime Component. Additional modules must have been previously purchased through the Marketplace to be available in the Builder UI. The Developer, via the Builder UI, will choose which modules are needed for the specific algorithm to be built.

In the Designing phase, the Developer uses the enabled modules to graphically build an algorithm by connecting the modules’ functions via the UI.

In the Runtime phase, real-time or historic images can be processed through the developed algorithm.

Builder

The Builder allows selection of which modules to include in the Runtime Component. The modules can be purchased through the Marketplace and can be enabled here. Once the building procedure is done, the selected modules will be available in the Designer and the Runtime Component for run-time processing.

Figure 15 shows the Builder UI with three modules. The Learn More button gives more detailed information on a module. Check a module version to make it available inside the Runtime Component. Multiple versions of the same module can be checked, in which case the versions will coexist. This is useful, for example, to test the latest version of a module without removing the old version which is running in production. In this example, the Traditional Machine Vision and the PyTorch Inference modules are selected, ready to be built.

Figure 15. The Builder UI

The Build Instance button is used to build the Runtime Component. Once the build completes, the Runtime Component is ready, and the Designer, described below, is accessible from the Designer button.

Designer

The Designer allows a developer to visually build Computer Vision algorithms.

This tool facilitates the development of an algorithm: the user can build a complex, application-specific flow using the graphic debugging tools and the documentation of each module. It is particularly useful for pre-processing, given the vast variety of cameras and sensors on the market, each with different characteristics, and the strong influence of environmental conditions. The tool allows an algorithm to be tested with different images from different cameras, making it easy to check whether the algorithm is stable or needs to be tweaked.

Figure 16. The Designer

On the left side, the list of the built modules is shown; each module’s menu can be expanded to show its functions. To add a Function to the flow, click on Add, and the function block will be inserted in the Designing canvas.

Figure 17. The Functions of the Module Traditional Machine Vision

Every Function has input and output parameters. An output parameter of a Function can be connected to an input parameter of another one. Every parameter has a type: only parameters that have the same type can be connected; for example, an Image parameter cannot be connected to an Integer one.

In Figure 18, the output parameter “image” of the Function Colour Spaces Conversion is of type IMAGE and it is connected to the parameter “image” of the Function Threshold, which is of type IMAGE too.

At the bottom of each Function block at least three ease-of-use icons are displayed; from left to right they allow the user to remove the Function block, open an information window describing the Function, and duplicate the Function block in case more than one instance is needed in the Designer canvas.

Figure 18. Two Functions connected

This is the basic step to create an algorithm. Through multiple steps and/or more sophisticated functions, the complexity of the algorithm increases: this is often the case in Machine Vision analysis addressing specific applications.

Once an algorithm has been completed, and usually tested, it can be exported by clicking on the Export button. This will download a .json file containing a recipe. The recipe contains all the logic needed by the Runtime to process data at run-time. Multiple recipes can be created, for various products, cameras, algorithms, etc. These files should be stored by the ZDMP Developer to be used later at run-time when needed.

There are several functions used to debug a specific algorithm. For example, the “Input Image” Function is used to load an image from a user PC to test the algorithm. The “Object Dumper” prints the connected parameter’s value. In Figure 19 some debugging functions have been added to the flow and an image has been uploaded from the PC to test the algorithm. What each Function is doing becomes clear.

Figure 19. A simple algorithm with some debugging blocks to monitor processing step by step

The input parameters of a Function can be edited. When a Function is selected with a left click, a menu will appear on the left side. This allows the user to change the Selected Function’s Parameters.

Figure 20. Selected Function Parameters

Figure 20 shows an example of Selected Function Parameters. At the top, the Function’s display name can be edited. Below it is the list of input parameters of the selected Function. Click on each parameter to edit its value. The type of the parameter is also shown, and different input methods are available depending on it. If a parameter is connected to another Function’s output, its value cannot be edited here, as it is provided by the connection.

The values of parameters set here are embedded in the algorithm exported via the Export button. When a process is executed, the Runtime will, by default, take the values from the exported algorithm, so they are hardcoded. The param_unique_id property is used to flag a parameter and make it overridable at run-time when a process is executed. Each parameter to be exported for this purpose must have a unique param_unique_id.

For example, in the flow of Figure 19 it makes sense that the input image of the first function block is to be overwritten with new images from a camera. This is not limited to images: the min_value parameter of the Threshold Function can, for example, depend on a light sensor: if the light sensor reports a very bright environment, the threshold value can be lowered, and vice versa.

In the example of Figure 21, the parameter image of the first Function in the flow, Colour Spaces Conversion, has been flagged “input_image”.

Figure 21. The param_unique_id flag of an input parameter

A similar mechanism applies to the output parameters. By default, output parameters of functions are not returned as results, because that would impact performance and most of them are of minor interest for the final application. Again, a parameter is returned only if its param_unique_id property is set. In the example above, the result of the algorithm, which is the output parameter image of the Function Threshold, can be flagged as in Figure 22, so that at run-time the user will receive it within the results.

Figure 22. The param_unique_id flag of an output parameter

Some functions can also have a UI, which can be used to simplify the editing of the input parameters. Henceforth, it will be referred to as the Function UI. If a module’s function has a Function UI, an additional far-right icon will appear at the bottom of the function block, as in Figure 23. Clicking on it will open a dialog.

Figure 23. A Function with a Function UI

Figure 24 shows an example of a Function UI. In this example, the input parameter “roi” is edited graphically.

Figure 24. A Function UI

This can also be achieved by directly editing the Function Parameters, as shown in Figure 25, although the Function UI may help if the parameter is complicated.

Figure 25. Updating an input parameter from the Selected Function Parameters menu

Runtime

The Runtime is the API Server of the Component. It runs algorithms at run-time. The Runtime REST APIs are accessible via the API Gateway of the Service & Message Bus component.

The OpenAPI specification file used to communicate with the component can be found in the API Gateway, or it can be accessed in the technical documentation of the project at https://software.zdmp.eu/docs/openapi/edge-tier/non-destructive-inspection/openapi.yaml/. For more information on OpenAPI, see https://swagger.io/docs/specification/about/.

The Runtime is used to execute processing based on recipes exported from the Designer. The following APIs expose functions to allow this:

  • Set Flow Recipe (/flow/set)

  • Run Flow Recipe (/flow/run)

Flow API – Set Flow Recipe

Recipes must be uploaded to the Runtime before executing a Run Flow command. The Set Flow Recipe API requires:

  • Filename: The filename of the recipe file, for example check_dimensions.json

  • Body: The recipe to be set

This will save the recipe in the internal storage with the provided filename. This way, a recipe only needs to be set once. Figure 26 shows the Designer’s Storage dialog with the recipe file.

Figure 26. The recipe file is saved in the Internal Storage
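As an illustrative sketch of how a recipe could be set programmatically (the gateway base URL, HTTP method, and JSON field names below are assumptions; the OpenAPI specification in the API Gateway is the authoritative schema):

``` python
import json
import requests

# Hypothetical gateway base URL; the real one is exposed via the Service & Message Bus API Gateway
BASE_URL = "https://api-gateway.example/ndi"

# Load a recipe previously exported from the Designer
with open("check_dimensions.json") as f:
    recipe = json.load(f)

# Upload (set) the recipe once, under the given filename
response = requests.post(
    f"{BASE_URL}/flow/set",
    json={"filename": "check_dimensions.json", "body": recipe},
)
response.raise_for_status()
```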

Flow API – Run Flow Recipe

To run processing on some data, the Run Flow Recipe API must be called. This requires:

  • Filename: The filename of the recipe previously uploaded

  • Input Params: List of input parameters (please check the param_unique_id above)

  • ID: String that will be sent back in the response, useful for tracking the inspected product or the processing analysis

It returns:

  • Output Params: List of output parameters (please check the param_unique_id above)

  • Flow Response Times: Object containing how much time each function and the whole algorithm took, useful for debugging purposes

  • ID: The same ID passed in the request
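As a sketch of a Run Flow call (the base URL and exact JSON key names are assumptions; refer to the OpenAPI specification for the real schema), the request below overrides the input parameter flagged with the param_unique_id “input_image” described earlier:

``` python
import requests

BASE_URL = "https://api-gateway.example/ndi"   # hypothetical gateway base URL

payload = {
    "filename": "check_dimensions.json",       # recipe previously set via /flow/set
    "id": "product-0042",                      # echoed back in the response for tracking
    "input_params": [{
        "name": "image",
        "param_unique_id": "input_image",      # flag set in the Designer (see Figure 21)
        "type": "IMAGE",
        "file_type": "FILESYSTEM_STORAGE",
        "value": "camera_frame_0042.png",      # file already present in the internal storage
    }],
}

result = requests.post(f"{BASE_URL}/flow/run", json=payload).json()

# Assumed response keys, mirroring the fields listed above
print(result["id"], result["flow_response_times"])
for output_param in result["output_params"]:
    print(output_param["name"], output_param["value"])
```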

The following diagram shows an example of how to use the Runtime APIs; these steps are taken:

  • Setting up the “check_dimensions” recipe

  • Setting up the “check_cracks” recipe

  • Running a Flow process using “check_dimensions” recipe

  • Running a Flow process using “check_cracks” recipe

  • Running a Flow process using “check_cracks” recipe

Figure 27. Demo of SetFlow and RunFlow

Internal Filesystem Storage

The component is equipped with an internal filesystem storage. It is used to store images, files, and data, so they can be used inside an algorithm.

Figure 28. NDT Inspection - Computer Vision component schema

There are three ways to interact with the Internal Filesystem Storage:

1. Designer

Files can be uploaded from the Designer by clicking on the Storage button.

Figure 29. Designer: Upload files to the internal storage

2. Runtime APIs

Files can also be uploaded via HTTP API (please check the OpenAPI specification):

Figure 30. CRUD files via REST APIs

3. Persistent Volumes

The Internal Filesystem Storage can also be mounted as a Persistent Volume. While this is the fastest way to read and write files, it requires additional configuration from the Application Runtime component. This solution is preferred in low-latency edge scenarios.

Figure 31. CRUD files by mounting the Volume

Parameter Model Description

The request and response objects of the Run Flow Recipe API contain, respectively, a list of input and output parameters. The Parameter model is described here.

Field Mandatory Type Description
name true string The name of the parameter
value true any The value of the parameter
type true enum The type of the parameter: [ALL, NONE, NUMBER, LIST_NUMBERS, STRING, LIST_STRINGS, BOOLEAN, IMAGE, LIST_IMAGES, FILE, LIST_FILES, OBJECT]
min false float The minimum value allowed for the parameter. Used for NUMBER types
max false float The maximum value allowed for the parameter. Used for NUMBER types
step false float The minimum step value allowed for the parameter. Used for NUMBER types
param_unique_id false string ID used to update a parameter when calling the Flow APIs or to export a parameter in a Flow API call. Please read above
file_type false enum The serialization method of the parameter. Used for IMAGE and FILE types: [RAW_B64, FILESYSTEM_STORAGE]

The APIs accept application/json data, so there are no problems when providing JSON serializable values, like integers, floats, strings, etc. Images and files, however, require serialization to be sent via HTTP. The file_type field is used to indicate how images and files are serialized. It can be set to:

  • RAW_B64: the image or the file is encoded to a base64 string. The value field will contain the encoded string. This is recommended when dealing with small images and files

  • FILESYSTEM_STORAGE: the image or the file is present in the internal filesystem. The value field will contain the filename (eg trained_model.onnx). To save a file or image to the internal storage, please read the relevant documentation above. This is recommended when dealing with large images and files
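For illustration, the two serialization options could be expressed as follows, assuming a parameter’s JSON keys mirror the Parameter model fields above:

``` python
import base64

# RAW_B64: embed a small image directly in the request
with open("sample.png", "rb") as f:
    raw_b64_param = {
        "name": "image",
        "param_unique_id": "input_image",
        "type": "IMAGE",
        "file_type": "RAW_B64",
        "value": base64.b64encode(f.read()).decode("ascii"),
    }

# FILESYSTEM_STORAGE: reference a file already uploaded to the internal filesystem storage
storage_param = {
    "name": "image",
    "param_unique_id": "input_image",
    "type": "IMAGE",
    "file_type": "FILESYSTEM_STORAGE",
    "value": "sample.png",  # filename inside the internal storage
}
```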

Figure 32. RunFlow with a RAW_B64 IMAGE parameter

Figure 33. RunFlow with a FILESYSTEM_STORAGE IMAGE parameter

Develop a Module

A module is a package which contains a list of functions. Developers can develop their own module if a feature is missing and then use it inside the Runtime component. Some Computer Vision algorithms are extremely fast and require only a few dependencies. Neural Network based algorithms, however, are heavy and require custom dependencies, OSs, and computational resources such as GPUs. The Component gives the ability to develop new modules with distinct characteristics. That is why there are three types of modules.

  • DIRECT: The first one is the simplest and fastest, as the functions are executed directly on the Runtime. The trade-off is that the Module must be written in Python and custom libraries cannot be installed

  • PYTHON_EXTERNAL_IMAGE: The second one consists of a Container which runs the Python functions. In this case, custom Python libraries can be installed, and the functions will be executed in an isolated environment

  • EXTERNAL_IMAGE: The third one consists of a generic Container used to run more advanced functionalities. This can be written in the programming language of choice and be configured with all the needed hardware resources, so there are no constraints

There is a tool that can help with the creation of a module: the Module Creator utility. With it, new modules can be created from scratch or previously created modules can be edited. Functions can be tested immediately as they are being edited. This allows fast development and debugging of modules.

For the first two types of modules, the Python source code can be edited using the built-in Python editor. For the EXTERNAL_IMAGE module, users develop their own container with their desired algorithms, though this involves a higher level of complexity.

Figure 34. Module Creator page

The module must be exported by clicking on the Download .zip button and then uploaded to the Marketplace, after which it becomes available in the Builder UI.

Module Description

On the left side of the page, there is the Module Description editor, which allows editing of the new module’s basic information, such as the name, the description, etc. In particular:

Field Meaning
Module ID The unique ID of the module. This is the Marketplace module name
Version The version of the module
Type Module Type: [“DIRECT”, “PYTHON_EXTERNAL_IMAGE”, “EXTERNAL_IMAGE”]
Name Name of the module
Description Description of the module
Thumbnail Logo
Company Developer’s Company
Website Company’s Website
Contact Developer’s contact e-mail

Figure 35 shows an example of a filled-out Module Description:

Figure 35. Module Creator - Module Description example

Functions List

In the centre of the page, there is the Functions List editor. It allows adding Functions to the module. Click on Add to add a Function. The user can expand the newly created Function by clicking on the arrow.

Figure 36. Module Creator - Functions List

Field Meaning
Function ID The ID of the Function. It must be unique in the module
Function Name The name of the Function
Function Group (Optional) Graphical Function grouping
Function Description Description of the Function
Input Parameters Definitions of the Function’s input parameters
Output Parameters Definitions of the Function’s output parameters
Edit Source Code Opens the Function Code Editor Dialog
Test Process Opens the Function Test Process Dialog
Load Function UI Files (Optional) Upload a Web Component associated with the function
Download Function UI Downloads a .zip file containing the uploaded Web Component

To add an Input or Output Parameter, click on the corresponding button. Then the newly added Parameter can be edited, as shown in Figure 37.

Figure 37. Module Creator – Edit Function Parameter

Field Meaning
Name The name of the parameter. It must be unique in the Function’s input or output parameters
Select The type of the parameter. allowed_values = [“ALL”, “NONE”, “NUMBER”, “LIST_NUMBERS”, “STRING”, “LIST_STRINGS”, “IMAGE”, “LIST_IMAGES”, “FILE”, “LIST_FILES”, “BOOLEAN”]
Value The parameter’s default value
Step The parameter’s step value (in case of a NUMBER type)
Min The parameter’s min value (in case of a NUMBER type)
Max The parameter’s max value (in case of a NUMBER type)
Parameter Documentation Documentation of the parameter

The Remove button removes the parameter from the list.

Function UI

Functions can also provide a UI to easily configure the input parameters. The Function UI is a Web Component (https://developer.mozilla.org/en-US/docs/Web/Web_Components). Web Components can be created from multiple frameworks (https://github.com/obetomuniz/awesome-webcomponents#libraries-and-frameworks). Angular has Angular Elements (https://angular.io/guide/elements).

Two files need to be uploaded; if the framework of choice generates multiple files, they must be concatenated:

  • index.js

  • style.css

The name of the Custom Element must be function-ui.


``` javascript
customElements.define('function-ui', ....);
```

The Web Component must also define “getter” and “setter” properties to be able to pass and retrieve data:

Input Data Meaning
input_params List of Function’s input parameters
output_params List of Function’s output parameters

The Function UI should be used when a function has many parameters, when they are difficult to set (eg lists), or when the developer wants to give the user custom UI feedback.

For example, the function called ROI is used to crop a part of an image. It has:

  • A parameter called image representing the input image

  • Another called rectangle, of type LIST_NUMBERS, where the values are the vertices of the rectangle used for cropping

While the rectangle parameter value can be set from the Selected Function Parameters menu explained earlier, it is not an easy task. The image size must be known when setting the vertices and it requires some trial and error before finding the correct pixel values.

The Function UI can help in these situations: a Web Component can be developed which displays the input image (from the image parameter) and a rectangular selection box representing the input parameter rectangle. When the rectangle moves or is resized, the input parameter is updated accordingly.

Figure 38. Function UI example

When the user closes the dialog, the Designer reads the input_params property of the Web Component to update the flow.

Function Code Editor

When developing a DIRECT or PYTHON_EXTERNAL_IMAGE module, the Function Code Editor Dialog displays a built-in editor where the processing code can be edited. The buttons on the right are:

  • Set Code confirms the changes made to the code

  • Reset Code clears the code and pastes a template

Figure 39. Module Creator – Function Code Editor
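The template pasted by Reset Code is not reproduced here. Purely as an illustrative sketch of the kind of atomic, self-contained Python function a DIRECT or PYTHON_EXTERNAL_IMAGE module could contain (the exact signature and parameter handling expected by the editor are assumptions):

``` python
import PIL.Image

def threshold(image: PIL.Image.Image, min_value: int = 128) -> PIL.Image.Image:
    # Atomic operation: binarise a greyscale image at min_value
    grey = image.convert("L")
    return grey.point(lambda px: 255 if px >= min_value else 0)
```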

Function Tester

The Function Tester Dialog is used to test the function. This is the final step to check that the function is behaving as desired. The input parameters, on the left, must be set to run the process, which is executed by clicking on the Process button. On the right, the output parameters of the process are shown, with their relative value.

Figure 40. Create Module – Function Tester

Apply Module

On the right side of Figure 34, there is the Apply Module controller.

Figure 41. Module Creator – Apply Module controller

  • Upload .zip: Load a previously developed module for further editing

  • Apply: Saves the configuration

  • Download .zip: Download the module to upload it to the Marketplace

Upload the module to the Marketplace

The .zip file must be uploaded to the Marketplace to appear in the Builder. A new product must then be created. Please check the Marketplace documentation for more information.

Figure 42. Marketplace – Create a new product

The .zip file must then be uploaded by clicking Browse.

Develop an EXTERNAL_IMAGE Module

Instead of developing a module based on the Python programming language, users can develop an EXTERNAL_IMAGE module, which is based on a container to execute the module and its functions. Thanks to the containerization technology, it does not matter what programming language or libraries are used.

The flow to develop an EXTERNAL_IMAGE module can be summarized by the following chart:

Figure 43. EXTERNAL_IMAGE development

To explain these steps, an example module is developed from start to finish: an AI Classification Inference module, which allows classification of an image. It is based on TensorFlow and uses the trained model from this example. A function called “inference” is created, with these input parameters:

  • model: The trained model

  • image: The image to classify

and these output parameters:

  • predicted_class: The predicted class

  • predictions: The probabilities per class

Develop the Web Server

The container must be a Web Server that exposes some endpoints to execute the processing on the module’s functions. The following endpoints must be exposed by the Web Server:

  • /, GET method, used for health check

  • /process, POST method, used to execute a process

The OpenAPI specification for the Web Server can be found here.

The /process endpoint receives this object:

Figure 44. The /process request object

  • module_identity: Identifies the module, it can be ignored

  • function_identity: Contains the ID of the function of the process request

  • input_params: The input parameters of the process request

  • output_params: The default output parameters

  • instance_id: Unique ID of the process request

And it must respond with:

Figure 45. The /process response object

  • output_params: The processed output parameters

  • instance_id: Unique ID of the process request (same as in the request)
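For illustration, a /process request and response for the inference function of this example could look as follows; the key names mirror the field list above, while the structure of module_identity and the concrete values are assumptions:

``` python
# Illustrative /process request body
process_request = {
    "module_identity": {"module_id": "ai-classification-inference"},  # structure assumed
    "function_identity": {"function_id": "inference"},
    "instance_id": "3f2c9a10-0001",
    "input_params": [
        {"name": "model", "type": "FILE", "file_type": "FILESYSTEM_STORAGE",
         "value": "trained_model.h5"},
        {"name": "image", "type": "IMAGE", "file_type": "RAW_B64",
         "value": "<base64-encoded image>"},
    ],
    "output_params": [
        {"name": "predicted_class", "type": "STRING", "value": None},
        {"name": "predictions", "type": "LIST_NUMBERS", "value": None},
    ],
}

# Illustrative /process response body
process_response = {
    "instance_id": "3f2c9a10-0001",
    "output_params": [
        {"name": "predicted_class", "type": "STRING", "value": "good"},
        {"name": "predictions", "type": "LIST_NUMBERS", "value": [0.97, 0.03]},
    ],
}
```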

Users can develop the Web Server on the Human Collaboration Gitlab. This way, it is easier to keep track of the developed modules in a centralized place and use CI/CD to automatically compile the container image. To create a new project in GitLab, create a new blank project.

Figure 46. Human Collaboration Gitlab personal projects

The project name must be set, and the Project URL should point to the user for personal projects, or a group for shared projects.

Figure 47. Human Collaboration GitLab create project

Figure 48. Human Collaboration GitLab project created correctly

Users can now clone the repository and develop the Web Server.


``` shell
git clone [URL]
```

For this example, the Web Server is in Python with the help of the FastAPI library. Here is a link to the source code repository. This example is useful for understanding a typical scenario, and it can be used as a template for future projects.

In the app/main.py file, the HTTP endpoints are defined following the OpenAPI specification above. The models are defined in the models folder.


``` python
from fastapi import FastAPI
from deserializer import deserialize_parameter
from inference import run_inference
from models.process_request import ProcessRequest
from models.process_response import ProcessResponse

app = FastAPI()

@app.get("/")
def health_check():
    return

@app.post("/process", response_model=ProcessResponse)
def process(process_request: ProcessRequest):
    # Deserialize input & output parameters
    for input_param in process_request.input_params:
        deserialize_parameter(input_param, process_request.instance_id)
    for output_param in process_request.output_params:
        deserialize_parameter(output_param, process_request.instance_id)

    if(process_request.function_identity.function_id == "inference"): ...
    elif(process_request.function_identity.function_id == "...."): ...
```

Inside the body of the /process function the actual processing code must be inserted. The parameters, though, must be deserialized before use, just like with the regular component APIs. Images and files therefore require particular attention. In the case of IMAGE and FILE parameters, the file_type field indicates how they have been serialized.

  • RAW_B64: The image or the file is encoded to a base64 string

  • FILESYSTEM_STORAGE: The image or the file is present in the internal filesystem

  • INTERNAL_SHARED_FILESYSTEM: The image or the file is present in a private, internal shared filesystem

Some environment variables have been set to indicate some mount points. The following table indicates the procedure to get the real path of an image or a file.

file_type Real path
FILESYSTEM_STORAGE FILESYSTEM_STORAGE_PATH + parameter.value
INTERNAL_SHARED_FILESYSTEM INTERNAL_FILESYSTEM_SHARED_PATH + instance_id + parameter.value

In the deserializer.py file the parameters are deserialized. Here, images are loaded as PIL Images (PIL is a Python imaging library). A PIL Image is what the TensorFlow code of this example expects, but in general there are no constraints: the library of choice can be used, like OpenCV, Scikit, etc, based on the needs of the application.


``` python
import base64
import io
import os

import PIL
import PIL.Image

from models.parameter import Parameter


def deserialize_image(parameter: Parameter, instance_id: str):
    if(parameter.file_type == "FILESYSTEM_STORAGE"):
        file_path = os.environ["FILESYSTEM_STORAGE_PATH"] + \
            "/" + parameter.value
        parameter.value = PIL.Image.open(file_path)
    elif(parameter.file_type == "INTERNAL_SHARED_FILESYSTEM"):
        file_path = os.environ["INTERNAL_FILESYSTEM_SHARED_PATH"] + \
            instance_id + "/" + parameter.value
        parameter.value = PIL.Image.open(file_path)
    elif(parameter.file_type == "RAW_B64"):
        buff = io.BytesIO(base64.b64decode(parameter.value))
        parameter.value = PIL.Image.open(buff)


def deserialize_file(parameter: Parameter, instance_id: str):
    if(parameter.file_type == "FILESYSTEM_STORAGE"):
        parameter.value = os.environ["FILESYSTEM_STORAGE_PATH"] + \
            "/" + parameter.value
    elif(parameter.file_type == "INTERNAL_SHARED_FILESYSTEM"):
        parameter.value = os.environ["INTERNAL_FILESYSTEM_SHARED_PATH"] + \
            instance_id + "/" + parameter.value


def deserialize_parameter(parameter: Parameter, instance_id: str):
    if(parameter.type == "IMAGE"):
        deserialize_image(parameter, instance_id)
    if(parameter.type == "FILE"):
        deserialize_file(parameter, instance_id)
```

After the deserialization, the processing can be performed: all the input parameters are ready to be used. The function_identity.function_id field indicates which function to execute, and a simple if statement is enough to select the corresponding processing code.

The following code shows the function inference. The model and image parameters are retrieved, and the process is run by calling run_inference, which lives in another file for simplicity. The results of the run_inference function are copied into the output_params structure, and then the ProcessResponse object is returned.


``` python
if(process_request.function_identity.function_id == "inference"):
    # Get model and image parameters
    model_parameter = next(
        (input_param for input_param in process_request.input_params if input_param.name == "model"), None)
    image_parameter = next(
        (input_param for input_param in process_request.input_params if input_param.name == "image"), None)
    if(not model_parameter or not image_parameter):
        raise Exception("Invalid input parameters")

    # Run the inference
    predicted_class, predictions = run_inference(model_filename=model_parameter.value,
                                                    image=image_parameter.value)

    # Update output parameters
    predicted_class_parameter = next(
        (output_param for output_param in process_request.output_params if output_param.name == "predicted_class"), None)
    predictions_parameter = next(
        (output_param for output_param in process_request.output_params if output_param.name == "predictions"), None)
    predicted_class_parameter.value = predicted_class.item()
    predictions_parameter.value = predictions.tolist()[0]

    # Return the results
    return ProcessResponse(output_params=process_request.output_params, instance_id=process_request.instance_id)
```

This is all that is needed to develop a Web Server for an EXTERNAL_IMAGE module. The last step is to write a Dockerfile to create a container image. In the case of the example, this is quite simple.


``` docker
FROM tensorflow/tensorflow

RUN pip install fastapi uvicorn

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY ./app /app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]

Push to the Container Registry

If the Web Server has been committed to the Human Collaboration GitLab, it is easier to use the CI/CD Pipelines to compile and push the container image to the Container Registry. In this case, the .gitlab-ci.yml file of the example repository is used.

If the code is hosted elsewhere, the image must be manually built and pushed to the Container Registry. In this case, a token with the “write_registry” scope must be created to be able to push the image.

The pushed container images can be viewed from the “Packages & Registries” menu on the left of the repository on GitLab.

Figure 49. The GitLab Container Registry

Create the Module

Once the container image has been pushed, a new module must be created via the Module Creator utility. Following the example above, this should be filled in as per Figure 50. There, a function called inference has been added with its input and output parameters.

Figure 50. Module Creator filled with the example info

When the module type is set to EXTERNAL_IMAGE, the following menu will appear on the right.

Figure 51. Deployment Options

Use the Module

Once the module has been uploaded to the Marketplace and chosen in the Builder, it can be used in the Designer. Here the input parameters are set:

  • model: The trained model of the example

  • image: An image to classify

Figure 52. Inference function used in the Designer

Node-RED

The Component exposes APIs to run processing. It can be cumbersome for a zApp, though, to continuously make requests to the Component and manage parameters.

Tools like Node-RED can be used to automate this. For example, Node-RED can automatically call the Run Flow API when a new image arrives from a camera.

Input and output parameters can be mapped so that they are set from, and published to, Message Bus topics.

The Data Acquisition component has a Node-RED instance running.

The following schema shows a demo environment:

  • The Node-RED instance is connected to a camera

  • When new images arrive, the Run Flow API is called, with the image and various other parameters

  • The results parameters of the Run Flow API are sent to the Message Bus, so other components (zApps, Monitoring and Alerting, etc) can use them

  • The input parameters can be overwritten by the zApp using the Message Bus

  • Other ZDMP Components (like the Monitoring & Alerting) are listening to the Message Bus for messages
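As a rough Python analogue of such a Node-RED flow (the broker address, topic names, message format, and Run Flow URL below are all assumptions), the orchestration could look like this:

``` python
import json
import requests
import paho.mqtt.client as mqtt

RUN_FLOW_URL = "https://api-gateway.example/ndi/flow/run"  # hypothetical endpoint URL
CAMERA_TOPIC = "line1/camera/frames"                       # hypothetical topic names
RESULTS_TOPIC = "line1/ndi/results"

def on_message(client, userdata, message):
    # Assume each camera message carries a base64-encoded image
    payload = {
        "filename": "check_dimensions.json",
        "id": str(message.mid),
        "input_params": [{
            "name": "image", "param_unique_id": "input_image",
            "type": "IMAGE", "file_type": "RAW_B64",
            "value": message.payload.decode("ascii"),
        }],
    }
    result = requests.post(RUN_FLOW_URL, json=payload).json()
    # Publish the output parameters so other components can consume them
    client.publish(RESULTS_TOPIC, json.dumps(result["output_params"]))

client = mqtt.Client()
client.on_message = on_message
client.connect("message-bus.example", 1883)  # hypothetical broker address
client.subscribe(CAMERA_TOPIC)
client.loop_forever()
```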

Figure 53. Example of a Node-RED orchestrated system

Figure 54. Example of a Node-RED flow

Last modified November 4, 2021