
Open In Colab

How to import segmentation pre-annotations

In this tutorial, we will show you how to import prediction labels into your Kili project.

You will see these predictions as pre-annotations in your labeling interface, and you will be able to modify or validate them.

We will demonstrate this feature on a segmentation project, which is used to annotate images or videos at the pixel level.

The data used in this tutorial is from the COCO dataset.

Import an image into your Kili project

Let's first inspect what our annotated image looks like in the COCO dataset.

(Figures: the raw image, and the same image with its annotations.)

Before starting, we install the requirements:

!pip install matplotlib Pillow kili opencv-python
%matplotlib inline

import getpass
import os
import urllib.request
from random import randint

import cv2
import matplotlib.pyplot as plt

from kili.client import Kili

Let's authenticate to Kili:

if "KILI_API_KEY" not in os.environ:
    KILI_API_KEY = getpass.getpass("Please enter your API key: ")
else:
    KILI_API_KEY = os.environ["KILI_API_KEY"]

kili = Kili(api_key=KILI_API_KEY)

Let's create an image project in Kili where we can annotate images at the pixel level, using the semantic segmentation tool and two classes: HUMAN and MOTORCYCLE.

We create the image project with its ontology (json_interface):

json_interface = {
    "jobs": {
        "JOB_0": {
            "mlTask": "OBJECT_DETECTION",
            "tools": ["semantic"],
            "instruction": None,
            "required": 1,
            "isChild": False,
            "content": {
                "categories": {
                    "MOTORCYCLE": {"name": "Motorcycle", "children": [], "color": "#0755FF"},
                    "HUMAN": {"name": "Human", "children": [], "color": "#EEBA00"},
                },
                "input": "radio",
            },
        }
    }
}
project = kili.create_project(
    description="COCO dataset",
    input_type="IMAGE",
    json_interface=json_interface,
    title="Motorcycle annotation",
)

project_id = project["id"]
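Before calling `create_project`, it can be useful to sanity-check the ontology dict, since a category missing a name or color will show up oddly in the interface. The helper below is a plain-Python sketch (it is not part of the Kili SDK) that walks the jobs and flags incomplete categories:

```python
def check_ontology(json_interface):
    """Return a list of problems found in the ontology dict (empty list = OK)."""
    problems = []
    for job_name, job in json_interface["jobs"].items():
        categories = job.get("content", {}).get("categories", {})
        if not categories:
            problems.append(f"{job_name}: no categories defined")
        for cat_id, cat in categories.items():
            for field in ("name", "color"):
                if not cat.get(field):
                    problems.append(f"{job_name}/{cat_id}: missing '{field}'")
    return problems

interface = {
    "jobs": {
        "JOB_0": {
            "mlTask": "OBJECT_DETECTION",
            "tools": ["semantic"],
            "content": {
                "categories": {
                    "MOTORCYCLE": {"name": "Motorcycle", "color": "#0755FF"},
                    "HUMAN": {"name": "Human", "color": "#EEBA00"},
                },
                "input": "radio",
            },
        }
    }
}
print(check_ontology(interface))  # [] means the interface looks consistent
```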

Then, we upload the image to the project:

external_id = "moto"
content = "https://farm7.staticflickr.com/6153/6181981748_6a225c275d_z.jpg"

kili.append_many_to_dataset(
    project_id=project_id,
    content_array=[content],
    external_id_array=[external_id],
    json_content_array=None,
);
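Note that `append_many_to_dataset` takes parallel arrays: one content entry (URL or local path) per external ID, in the same order. If your images live in a local folder, a minimal sketch for building both arrays (the paths here are hypothetical) could be:

```python
import os

def build_upload_arrays(paths):
    """Derive external IDs from file names; content is the path/URL itself."""
    content_array = list(paths)
    external_id_array = [os.path.splitext(os.path.basename(p))[0] for p in paths]
    return content_array, external_id_array

contents, external_ids = build_upload_arrays(["./images/moto.jpg", "./images/rider.png"])
print(external_ids)  # ['moto', 'rider']
```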

You should now be able to visualize your asset in the Kili interface.

Import annotations from a mask

Now, imagine you already have your annotation stored as a mask (here, a binary mask of the HUMAN class). You want to insert it into Kili Technology as a prediction.

We can start by downloading the image on disk:

mask_url = "https://raw.githubusercontent.com/kili-technology/kili-python-sdk/master/recipes/img/HUMAN.mask.png"
urllib.request.urlretrieve(mask_url, "mask.png");

The Kili SDK provides a set of utilities for creating labels. See the documentation for more information.

Now, we will use the mask_to_normalized_vertices helper method to create a segmentation label from the mask image.

from kili.utils.labels.image import mask_to_normalized_vertices

mask = cv2.imread("mask.png")[:, :, 0]
contours, _ = mask_to_normalized_vertices(mask)
print(f"Found {len(contours)} contour(s) in the mask.")

Found 1 contour(s) in the mask.

annotations = [
    {
        "boundingPoly": [{"normalizedVertices": contour} for contour in contours],
        "categories": [{"name": "HUMAN", "confidence": 100}],
        "mid": randint(100, 1000),
        "score": None,
        "type": "polygon",
    }
]
json_response = {"JOB_0": {"annotations": annotations}}
kili.create_predictions(
    project_id=project_id,
    external_id_array=[external_id],
    json_response_array=[json_response],
    model_name="original_mask",
)

{'id': 'clfql5ikf01zz0jsxaxohccnx'}
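Under the hood, each point in `normalizedVertices` stores x divided by the image width and y divided by the image height, so the coordinates are resolution-independent. A hand-rolled sketch of that normalization step (pure Python, mirroring what `mask_to_normalized_vertices` produces from the contour pixels):

```python
def normalize_contour(pixel_contour, img_width, img_height):
    """Scale pixel (x, y) points into the [0, 1] range used by Kili."""
    return [
        {"x": x / img_width, "y": y / img_height}
        for x, y in pixel_contour
    ]

# A 100x50 image with a triangle contour in pixel coordinates:
vertices = normalize_contour([(0, 0), (100, 0), (50, 50)], img_width=100, img_height=50)
print(vertices)  # [{'x': 0.0, 'y': 0.0}, {'x': 1.0, 'y': 0.0}, {'x': 0.5, 'y': 1.0}]
```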

That's it! Your mask was just converted to Kili Technology's standard format and uploaded as a prediction to the platform.

Export the annotations as a mask

You may want to perform the reverse operation, that is, download an existing annotation from Kili and convert it to a mask.

Let's see how you can achieve this using the helper method normalized_vertices_to_mask!

from kili.utils.labels.image import normalized_vertices_to_mask

Then, we can retrieve the json response and plot the mask:

labels = kili.labels(
    project_id=project_id, asset_external_id_in=[external_id], fields=["jsonResponse"]
)
label = labels[0]
json_response = label["jsonResponse"]
reconstructed_mask = normalized_vertices_to_mask(
    json_response["JOB_0"]["annotations"][0]["boundingPoly"][0]["normalizedVertices"],
    mask.shape[1],
    mask.shape[0],
)

plt.title("Mask for HUMAN class")
plt.imshow(reconstructed_mask, cmap="gray")
plt.show()

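Conceptually, `normalized_vertices_to_mask` rasterizes the polygon back into pixels. The core idea can be sketched with a plain ray-casting point-in-polygon test; this is a simplified stand-in for the SDK helper (which relies on OpenCV), not its actual implementation:

```python
def point_in_polygon(x, y, vertices):
    """Ray-casting test: count edge crossings of a horizontal ray from (x, y)."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(normalized_vertices, width, height):
    """Build a binary mask (0/255) from vertices normalized to [0, 1]."""
    poly = [(v["x"] * width, v["y"] * height) for v in normalized_vertices]
    return [
        [255 if point_in_polygon(col + 0.5, row + 0.5, poly) else 0 for col in range(width)]
        for row in range(height)
    ]

# A square covering the central quarter of an 8x8 image:
square = [{"x": 0.25, "y": 0.25}, {"x": 0.75, "y": 0.25},
          {"x": 0.75, "y": 0.75}, {"x": 0.25, "y": 0.75}]
mask = rasterize(square, 8, 8)
print(sum(v == 255 for row in mask for v in row))  # → 16
```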

Cleanup

We can remove the project that we created:

kili.delete_project(project_id)

Conclusion

You can now try uploading your own predictions using kili.create_predictions()!