How to upload medical images to Kili, and visualize segmentation labels with matplotlib

In this tutorial, we will learn how to:

  • upload medical images to Kili using pydicom
  • upload dicom tags as metadata to our assets
  • download segmentation labels from Kili, and convert them to NumPy masks for visualization with matplotlib.

Data used in this tutorial comes from the RSNA Pneumonia Detection Challenge hosted on Kaggle.

First of all, let's import the packages, and install pydicom in case you don't have it installed.

%pip install pydicom matplotlib Pillow wget numpy pandas kili
import pickle
from functools import reduce
from pathlib import Path

import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
import numpy as np
import pydicom
import wget
from PIL import Image

from kili.client import Kili

Get data

Let's download some dicom images:

data_folder = Path()
files = list(data_folder.glob("*.dcm"))
assert len(files) == 2, files

Process data

A dicom image contains not only pixels (or voxels), but also dicom tags, which can hold information about the patient, the scanner, etc.

Below, we extract the dicom tags and add them to metadata_array.

We also convert all images to JPEG format.

def extract_dicom_tags(img_dicom):
    metadata = {}
    for key in img_dicom.keys():
        if == 32736:  # skip the tag group containing the image pixels
            continue
        item = img_dicom.get(key)
        if hasattr(item, "description") and hasattr(item, "value"):
            metadata[item.description()] = str(item.value)
    return metadata

metadata_array = []
processed_imgs = []

for file in files:
    sample = pydicom.dcmread(str(file))
    metadata_array.append(extract_dicom_tags(sample))

    im = Image.fromarray(sample.pixel_array)
    fpath = data_folder / f"{file.stem}.jpeg"
    processed_imgs.append(fpath)

{'Specific Character Set': 'ISO_IR 100', 'SOP Class UID': '1.2.840.10008.', 'SOP Instance UID': '', 'Study Date': '19010101', 'Study Time': '000000.00', 'Accession Number': '', 'Modality': 'CR', 'Conversion Type': 'WSD', "Referring Physician's Name": '', 'Series Description': 'view: PA', "Patient's Name": '0005d3cc-3c3f-40b9-93c3-46231c3eb813', 'Patient ID': '0005d3cc-3c3f-40b9-93c3-46231c3eb813', "Patient's Birth Date": '', "Patient's Sex": 'F', "Patient's Age": '22', 'Body Part Examined': 'CHEST', 'View Position': 'PA', 'Study Instance UID': '', 'Series Instance UID': '', 'Study ID': '', 'Series Number': '1', 'Instance Number': '1', 'Patient Orientation': '', 'Samples per Pixel': '1', 'Photometric Interpretation': 'MONOCHROME2', 'Rows': '1024', 'Columns': '1024', 'Pixel Spacing': '[0.14300000000000002, 0.14300000000000002]', 'Bits Allocated': '8', 'Bits Stored': '8', 'High Bit': '7', 'Pixel Representation': '0', 'Lossy Image Compression': '01', 'Lossy Image Compression Method': 'ISO_10918_1'}

Create the Kili project

Next, we need to connect to Kili, create a project, and define the annotation interface (ontology).

kili = Kili(
    # api_endpoint="",
    # the line above can be uncommented and changed if you are working with an on-premise version of Kili
)
json_interface = {
    "jobs": {
        "CLASSIFICATION_JOB": {
            "mlTask": "CLASSIFICATION",
            "content": {
                "categories": {"YES": {"name": "Yes"}, "NO": {"name": "No"}},
                "input": "radio",
            },
            "required": 1,
            "isChild": False,
            "instruction": "Healthy ?",
        },
        "JOB_0": {
            "mlTask": "OBJECT_DETECTION",
            "content": {
                "categories": {
                    "BONE": {"name": "bone"},
                    "TISSUE": {"name": "tissue"},
                    "LUNG": {"name": "lung"},
                    "RIB": {"name": "rib"},
                },
                "input": "radio",
            },
            "required": True,
            "tools": ["polygon"],
            "isChild": False,
            "instruction": "Segmentation",
        },
    }
}

We can now use the Kili SDK to create our project and upload our images to the project.

title = "[Kili SDK Notebook]: Medical Imaging with Kili Technology"
description = "This is a test project"
input_type = "IMAGE"

project = kili.create_project(
    title=title, description=description, input_type=input_type, json_interface=json_interface
)
project_id = project["id"]
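The upload itself can be done with kili.append_many_to_dataset, passing the JPEG paths and the dicom tags as asset metadata. A sketch: the API call is commented out because it needs an authenticated client, and stand-in paths replace the processed_imgs / metadata_array built in the processing step:

```python
# Stand-ins for the `processed_imgs` / `metadata_array` built earlier.
processed_imgs = ["img_a.jpeg", "img_b.jpeg"]
metadata_array = [{"Modality": "CR"}, {"Modality": "CR"}]

# Kili expects string paths (or URLs) and one metadata dict per asset.
content_array = [str(path) for path in processed_imgs]
external_id_array = [str(path) for path in processed_imgs]

# Uncomment to upload (requires an authenticated Kili client):
# kili.append_many_to_dataset(
#     project_id=project_id,
#     content_array=content_array,
#     external_id_array=external_id_array,
#     json_metadata_array=metadata_array,
# )
print(content_array)
```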

Done! Your images and their metadata are in the project:



All that remains is to start labeling! To learn more about how to label images in Kili, check out our documentation.

Convert Kili labels to numpy masks

Once your assets are labeled, you might want to download them and visualize them using matplotlib.

To download your labels, simply use kili.labels(project_id). You can also export your labels to a zip file using kili.export_labels(project_id). For more information, see the documentation.

In this tutorial, we assume that our labels have already been downloaded and stored in a file medical-labels.pkl.
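For reference, here is a hypothetical sketch of how such a pickle file could be produced. The Kili calls are commented out so the sketch runs offline, a stand-in jsonResponse takes their place, and a demo filename is used so the tutorial's real medical-labels.pkl is not overwritten:

```python
import pickle

# With an authenticated client, the latest labels could be fetched like this:
# labels = kili.labels(project_id=project_id)
# json_response = labels[0]["jsonResponse"]
json_response = {"JOB_0": {"annotations": []}}  # stand-in so the sketch runs offline

# Cache the raw jsonResponse locally for later parsing.
with open("medical-labels-demo.pkl", "wb") as f:
    pickle.dump(json_response, f)

with open("medical-labels-demo.pkl", "rb") as f:
    print(pickle.load(f) == json_response)  # True
```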

from kili.utils.labels.parsing import ParsedLabel

with open("medical-labels.pkl", "rb") as f:
    label = pickle.load(f)

label = ParsedLabel(
    label={"jsonResponse": label}, json_interface=json_interface, input_type="IMAGE"
)

annotations =["JOB_0"].annotations

In this example, annotations is a list containing 10 masks.

A mask is represented by a Python list of vertices, each vertex being a list of two coordinates (x, y).

{'x': 0.401891, 'y': 0.024966000000015254}
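A normalized vertex like the one above maps back to pixel coordinates by scaling with the image size (1024×1024 here, per the Rows/Columns dicom tags shown earlier):

```python
# Convert a normalized vertex to pixel coordinates by scaling
# with the image width/height (1024x1024 per the dicom tags above).
vertex = {"x": 0.401891, "y": 0.024966000000015254}
img_width = img_height = 1024

x_px = vertex["x"] * img_width
y_px = vertex["y"] * img_height
print(round(x_px), round(y_px))  # 412 26
```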

We assign a color to each class:

import matplotlib.colors as mcolors

# map each hex color of the default matplotlib color cycle to an RGB tuple in [0, 255]
colors = [
    tuple(int(x * 255) for x in mcolors.hex2color(hex_color))
    for hex_color in plt.rcParams["axes.prop_cycle"].by_key()["color"]
]

CLASS_TO_COLOR = {}
for class_name, color in zip(
    json_interface["jobs"]["JOB_0"]["content"]["categories"].keys(), colors
):
    CLASS_TO_COLOR[class_name] = color
print(CLASS_TO_COLOR)
{'BONE': (31, 119, 180), 'TISSUE': (255, 127, 14), 'LUNG': (44, 160, 44), 'RIB': (214, 39, 40)}

We convert those labels using the kili.utils.labels module, and plot them using matplotlib:

from kili.utils.labels.image import normalized_vertices_to_mask

im =[0])

img_width, img_height = im.size
class_names = []
masks = []
for annotation in annotations:
    class_name =
    normalized_vertices = annotation.bounding_poly[0].normalized_vertices

    # convert the label normalized vertices to a numpy mask
    mask = normalized_vertices_to_mask(normalized_vertices, img_width, img_height)

    # add color to the mask
    mask_rgb = np.zeros((*mask.shape, 3), dtype=np.int32)
    mask_rgb[mask > 0] = CLASS_TO_COLOR[class_name]

    class_names.append(class_name)
    masks.append(mask_rgb)


Let's merge all masks into a single one:

merged_masks = reduce(lambda mask_1, mask_2: np.where(mask_1 != (0, 0, 0), mask_1, mask_2), masks)
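To see what the reduce does, here is a tiny example with two non-overlapping 1×2 RGB masks (stand-in colors): pixels already colored by an earlier mask win, and black pixels fall through to the next mask.

```python
from functools import reduce

import numpy as np

# Two 1x2 RGB masks: mask_a colors the left pixel, mask_b the right one.
mask_a = np.array([[[255, 0, 0], [0, 0, 0]]])
mask_b = np.array([[[0, 0, 0], [0, 0, 255]]])

# np.where keeps mask_1 wherever it is non-black, else falls back to mask_2.
merged = reduce(lambda m1, m2: np.where(m1 != (0, 0, 0), m1, m2), [mask_a, mask_b])
print(merged.tolist())  # [[[255, 0, 0], [0, 0, 255]]]
```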

Plot the image and masks

Finally, we can plot the image as well as the masks converted from our Kili labels:

handles = []
labels = []
for class_name, color in CLASS_TO_COLOR.items():
    patch = mpatches.Patch(color=tuple(x / 255 for x in color), label=class_name)
    handles.append(patch)
    labels.append(class_name)

healthy =["CLASSIFICATION_JOB"].category.name  # classification job defined in the interface

fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im, cmap="gray")
ax.imshow(merged_masks, alpha=0.5)
ax.set_title(f"Healthy: {healthy}")
ax.legend(handles=handles, labels=labels, fontsize=16, loc="upper left")


Congrats! 👏

In this tutorial, we have seen how to upload medical images to Kili, and how to download the segmentation labels and convert them to NumPy masks.

Project cleanup
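If the project was created only for this tutorial, it can be removed via the SDK. The call is commented out so it is not run by accident, and it assumes you have delete rights on the project:

```python
# Permanently delete the tutorial project and its assets:
# kili.delete_project(project_id)
```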