The napari application uses Qt as the backend for its graphical user interface (GUI). A key feature of this framework is the use of widgets, which are composable, basic UI elements. napari not only uses these for its own UI, but also lets you add your own as dockable elements. In fact, the layer controls, layer list, and napari console are all such dockable containers of Qt widgets.
There are a number of ways to go about creating your own widgets; you can find an in-depth overview in the napari documentation. By far the simplest is to rely on napari's support for magicgui, a Python library for quickly and easily building GUIs. A key feature of magicgui is autogeneration of GUIs from functions and dataclasses, by mapping Python type hints to widgets.
In this module, we will implement elements of our previous workflow as functions and then use the magicgui.magicgui decorator on those functions to get compound widgets that make exploring the parameters in the GUI easier. For a nice overview of the magicgui decorators, see the official documentation.
This notebook converts the exploratory analysis of the previous notebook, Exploratory analysis: spot detection, and makes it more reproducible by 1. turning the workflow into functions, and 2. using magicgui to create widgets that can be used in the napari GUI. This can then serve as the basis for a napari plugin!
Nebari and Binder setup¶
# this cell is required to run these notebooks in the cloud. Make sure that you also have a desktop tab open.
import os
if 'BINDER_SERVICE_HOST' in os.environ or 'NEBARI_JUPYTERHUB_SSH_SERVICE_HOST' in os.environ:
    os.environ['DISPLAY'] = ':1.0'
Loading data¶
Let’s get everything set up, loading the remote data as in the Exploratory analysis: spot detection notebook.
from skimage import io
nuclei_url = 'https://raw.githubusercontent.com/kevinyamauchi/napari-spot-detection-tutorial/main/data/nuclei_cropped.tif'
nuclei = io.imread(nuclei_url)
spots_url = 'https://raw.githubusercontent.com/kevinyamauchi/napari-spot-detection-tutorial/main/data/spots_cropped.tif'
spots = io.imread(spots_url)
Or, you can load the data locally, if you cloned the repository:
from skimage import io
from pathlib import Path
# Path of execution is different depending on whether the notebook is run locally or via jupyter-book
if (Path() / 'notebooks' / 'data').exists():
    data_dir = Path() / 'notebooks' / 'data'
else:
    data_dir = Path().resolve() / 'data'
nuclei = io.imread(data_dir / 'nuclei_cropped.tif')
spots = io.imread(data_dir / 'spots_cropped.tif')
And then we set up our viewer and add the data as layers.
import napari
from napari.utils import nbscreenshot
# create the napari viewer
viewer = napari.Viewer()
# add the nuclei image to the viewer
viewer.add_image(nuclei, colormap='I Forest', blending='minimum')
# add the spots image to the viewer
viewer.add_image(spots, colormap='I Orange', blending='minimum')
<Image layer 'spots' at 0x7f3b0777e5d0>
A basic filtering function¶
Now let’s write a function that takes an array and a sigma value and performs the
high-pass operation. The goal is to remove some of the background signal and possible autofluorescence.
import numpy as np
from scipy import ndimage as ndi
def gaussian_high_pass(image, sigma):
    # subtract a gaussian-blurred (low-pass) copy of the image, clipping at zero
    low_pass = ndi.gaussian_filter(image, sigma)
    high_passed_im = (image - low_pass).clip(0)
    return high_passed_im
We can test our function by calling it with the spots array as the input data and a sigma value of 2, followed by adding the result to the viewer as a new image layer.
high_passed_spots = gaussian_high_pass(spots, 2)
viewer.add_image(high_passed_spots, colormap="I Blue", blending="minimum")
<Image layer 'high_passed_spots' at 0x7f3b2a952850>
nbscreenshot(viewer)
To test on other input images or use different sigma values, we would have to repeat this process, which is somewhat annoying. A widget that allows us to select the input image and change the sigma value would be much more convenient.
Obtaining a basic widget using the @magicgui decorator¶
Now let's modify the function slightly, providing type annotations and a docstring, to
leverage napari's magicgui integration.
from magicgui import magicgui
@magicgui
def gaussian_high_pass(
    image: "napari.types.ImageData", sigma: float = 2
) -> "napari.types.ImageData":
    """Apply a gaussian high pass filter to an image.

    Parameters
    ----------
    image : np.ndarray
        The image to be filtered.
    sigma : float
        The sigma (width) of the gaussian filter to be applied.
        The default value is 2.

    Returns
    -------
    high_passed_im : np.ndarray
        The image with the high pass filter applied
    """
    low_pass = ndi.gaussian_filter(image, sigma)
    high_passed_im = (image - low_pass).clip(0)
    return high_passed_im
We have our magicgui-decorated function, and we’ve annotated it with the napari types.
Now, the object gaussian_high_pass is both a (compound) widget and a callable function.
Let’s add it to the viewer.
viewer.window.add_dock_widget(gaussian_high_pass)
<napari._qt.widgets.qt_viewer_dock_widget.QtViewerDockWidget at 0x7f3b05ff05f0>
nbscreenshot(viewer)
Notice that because we told magicgui that our function will use not just any numpy array, but
specifically ImageData---the data of a napari Image layer---and that it will also return that same type, magicgui
generated UI widgets for selecting an Image layer---if you add a different layer type, it won’t show
up in the dropdown!
Press the Run button and you will see that a new Image layer is added with the results of our
function—again thanks to autogeneration from magicgui.
# we'll call the widget to simulate clicking `Run`
gaussian_high_pass(viewer.layers['spots'].data)
Output
array([[0.00084478, 0.00347908, 0.00447733, ..., 0.00014444, 0. ,
0. ],
[0.00049922, 0.00303904, 0.00163708, ..., 0.00112132, 0. ,
0. ],
[0. , 0.00024094, 0. , ..., 0.00195232, 0.00035587,
0. ],
...,
[0.0015178 , 0.0008351 , 0.00010436, ..., 0. , 0.00015427,
0.00047611],
[0.00136203, 0.00138009, 0.00107976, ..., 0. , 0. ,
0.00208043],
[0.00215613, 0.00369759, 0.0032546 , ..., 0. , 0.0006618 ,
0.00307453]], shape=(492, 494), dtype=float32)
For now you will need to manually or programmatically set any colormap/blending settings. (Let’s also hide the previous filtering output.)
viewer.layers[-1].blending = "minimum"
viewer.layers[-1].colormap = "I Blue"
viewer.layers['high_passed_spots'].visible = False
nbscreenshot(viewer)
However, if you press Run again, the data for that layer will be updated in place, so
you can change the sigma value and see the updated result.
Our magicgui decorated gaussian_high_pass object is the widget, so we can easily get the value of the current setting:
gaussian_high_pass.sigma.value
2.0
At the same time, gaussian_high_pass remains a callable function. Let’s call it normally, to check
that the function is still working as expected. Remember, type hints are not enforced by Python at runtime,
so nothing should have changed.
test_output = gaussian_high_pass(spots, 2)
test_output.shape
(492, 494)
This means that if you have a script or module, you can import the function and use it as normal, or use it as a widget in napari.
Let’s make the widget more dynamic and user-friendly by giving magicgui some extra information.
Let’s ask for a slider for the sigma parameter, and let's have the function be auto-called
when the slider is changed.
But first, let's remove the previous widget.
viewer.window.remove_dock_widget("all")
@magicgui(
    auto_call=True,
    sigma={"widget_type": "FloatSlider", "min": 0, "max": 20}
)
def gaussian_high_pass(
    image: "napari.types.ImageData", sigma: float = 2
) -> "napari.types.ImageData":
    """Apply a gaussian high pass filter to an image.

    Parameters
    ----------
    image : np.ndarray
        The image to be filtered.
    sigma : float
        The sigma (width) of the gaussian filter to be applied.
        The default value is 2.

    Returns
    -------
    high_passed_im : np.ndarray
        The image with the high pass filter applied
    """
    low_pass = ndi.gaussian_filter(image, sigma)
    high_passed_im = (image - low_pass).clip(0)
    return high_passed_im
viewer.window.add_dock_widget(gaussian_high_pass)
<napari._qt.widgets.qt_viewer_dock_widget.QtViewerDockWidget at 0x7f3b05f05370>
nbscreenshot(viewer)
Now you can play with the slider until you get the effect you want in the GUI and then retrieve the value:
gaussian_high_pass.sigma.value
2.0
Or you can set the value:
gaussian_high_pass.sigma.value = 3
nbscreenshot(viewer)
A more complex example¶
Finally, let's make a widget for the whole workflow developed in the Exploratory analysis: spot detection
notebook. We will need to write a function and then properly annotate it such that magicgui can generate the widgets.
This time we again start with image layer data, but we want to get back a Points layer containing the detected points. We could again return
just the layer data, using napari.types.PointsData. However, we would like some control over the Points layer visualization, so
we will return a LayerDataTuple.
If detect_spots() returns a LayerDataTuple, napari will add a new layer to
the viewer using the data in the LayerDataTuple. Briefly:
- The layer data tuple should be: (layer_data, layer_metadata, layer_type)
- layer_data: the data to be displayed in the new layer (i.e., the points coordinates)
- layer_metadata: the display options for the layer, stored as a dictionary. Some options to consider: symbol, size, face_color
- layer_type: the name of the layer type as a string, in this case 'Points'
For more information on using the LayerDataTuple type, please see the documentation.
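As a concrete illustration, here is a minimal sketch of assembling such a tuple by hand (the coordinates and sizes are made up for the example); this structure is all napari needs to construct the layer:

```python
import numpy as np

# Hypothetical coordinates for three detected spots, as (row, column).
points_coords = np.array([[10.0, 12.0], [40.0, 7.0], [22.0, 30.0]])
# One diameter per spot, so each point can be drawn at its detected size.
sizes = np.full(len(points_coords), 5.0)

# Assemble (layer_data, layer_metadata, layer_type).
layer_data_tuple = (
    points_coords,                         # the data for the new layer
    {"size": sizes, "face_color": "red"},  # display options as a dict
    "Points",                              # the layer type as a string
)
```

Returning a tuple like this from a function annotated with napari.types.LayerDataTuple is what lets napari add the styled layer for us.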
Also, let’s change the image argument type hint to the Image layer itself, so that we can access more
properties if we’d like, or more easily set the widget value programmatically.
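One detail worth calling out before writing the function: blob_log reports each detection as (row, column, sigma), and for a 2D gaussian blob the radius is approximately sqrt(2) * sigma, so the sigma column is converted to a diameter with 2 * sqrt(2) * sigma. A quick standalone check of that conversion, using made-up detections:

```python
import numpy as np

# Hypothetical blob_log output: one row per detection, columns (row, column, sigma).
blobs_log = np.array([
    [252.0, 329.0, 1.0],
    [220.0, 275.0, 2.0],
])

points_coords = blobs_log[:, 0:2]          # the spot coordinates
sizes = 2 * np.sqrt(2) * blobs_log[:, 2]   # sigma -> approximate diameter

print(sizes)  # sigma of 1 -> ~2.83 px, sigma of 2 -> ~5.66 px
```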
# again let's remove the previous widget
viewer.window.remove_dock_widget("all")
import numpy as np
from skimage.feature import blob_log
@magicgui
def detect_spots(
    image: "napari.layers.Image",
    high_pass_sigma: float = 2,
    spot_threshold: float = 0.2,
    blob_sigma: float = 2
) -> "napari.types.LayerDataTuple":
    """Detect spots in an image using a high pass filter and blob detection.

    Parameters
    ----------
    image : napari.layers.Image
        The image layer in which to detect the spots.
    high_pass_sigma : float
        The sigma (width) of the gaussian filter to be applied.
        The default value is 2.
    spot_threshold : float
        The relative threshold to be passed to the blob detector.
        The default value is 0.2.
    blob_sigma : float
        The expected sigma (width) of the spots. This parameter
        is passed to the "max_sigma" parameter of the blob
        detector.

    Returns
    -------
    layer_data_tuple : napari.types.LayerDataTuple
        A tuple of (points_coords, layer_metadata, 'Points'), where
        points_coords is an NxD array with the coordinate for each
        detected spot (N spots, D dimensions) and layer_metadata holds
        the sizes (spot diameters) and face color for the Points layer.
    """
    # filter the image layer data
    filtered_spots = gaussian_high_pass(image.data, high_pass_sigma)
    # detect the spots on the filtered image
    blobs_log = blob_log(
        filtered_spots,
        max_sigma=blob_sigma,
        threshold=None,
        threshold_rel=spot_threshold
    )
    # convert the output of the blob detector to the
    # desired points_coords and sizes arrays
    # (see the docstring for details)
    points_coords = blobs_log[:, 0:2]
    sizes = 2 * np.sqrt(2) * blobs_log[:, 2]
    return (points_coords, {"size": sizes, "face_color": "red"}, "Points")
viewer.window.add_dock_widget(detect_spots)
<napari._qt.widgets.qt_viewer_dock_widget.QtViewerDockWidget at 0x7f3ad8bbb260>
# let's call the widget/function to simulate pressing run
detect_spots(viewer.layers['spots'])
Output
(array([[252., 329.],
[220., 275.],
[255., 172.],
[195., 253.],
[204., 278.],
[454., 314.],
[263., 157.],
[458., 30.],
[250., 355.],
[ 10., 268.],
[241., 152.],
[244., 148.],
[252., 151.],
[471., 56.],
[290., 178.],
[433., 315.],
[217., 260.],
[253., 335.],
[202., 258.],
[266., 348.],
[455., 308.],
[449., 318.],
[278., 159.],
[259., 166.],
[272., 327.],
[258., 177.],
[464., 287.],
[218., 57.],
[267., 162.],
[188., 252.],
[433., 306.],
[446., 301.],
[268., 165.],
[201., 287.],
[414., 298.],
[203., 254.],
[239., 162.],
[250., 178.],
[204., 272.],
[210., 268.],
[483., 30.],
[198., 249.],
[475., 43.],
[481., 51.],
[234., 60.],
[ 30., 329.],
[228., 36.],
[234., 78.],
[444., 321.],
[235., 329.],
[478., 369.],
[286., 177.],
[248., 340.],
[216., 282.],
[261., 343.],
[460., 281.],
[400., 299.],
[187., 0.],
[477., 258.],
[484., 397.],
[415., 445.],
[392., 291.],
[237., 335.],
[ 8., 256.],
[208., 265.],
[265., 321.],
[277., 179.],
[438., 317.],
[232., 338.],
[245., 162.],
[476., 269.],
[393., 286.],
[462., 41.],
[451., 305.],
[474., 32.],
[484., 62.],
[242., 163.],
[431., 318.],
[459., 38.],
[479., 260.],
[487., 29.],
[454., 232.],
[205., 0.],
[207., 267.],
[212., 280.],
[454., 296.],
[283., 162.],
[211., 255.],
[470., 30.],
[254., 323.],
[473., 367.],
[213., 262.],
[449., 321.],
[212., 265.],
[484., 46.],
[188., 247.],
[243., 333.],
[468., 274.],
[450., 307.],
[455., 287.],
[466., 261.],
[435., 299.],
[435., 288.],
[211., 272.],
[464., 36.],
[477., 263.],
[438., 286.],
[229., 341.],
[205., 269.],
[482., 38.],
[441., 291.],
[249., 175.],
[443., 314.],
[194., 256.],
[235., 346.],
[218., 277.],
[206., 432.],
[284., 180.],
[484., 53.],
[485., 25.],
[399., 313.],
[436., 326.],
[478., 28.],
[195., 245.],
[ 50., 257.],
[ 15., 272.],
[477., 58.],
[478., 47.],
[397., 303.],
[434., 297.],
[451., 287.],
[210., 282.],
[200., 256.],
[ 16., 257.],
[463., 283.]], dtype=float32),
{'size': array([2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 3.45696645, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 4.3997756 ,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712]),
'face_color': 'red'},
'Points')
# let's set the dropdown value for the screenshot
detect_spots.image.value = viewer.layers['spots']
# and let's zoom in a bit
viewer.camera.center = (200, 270)
viewer.camera.zoom = 8
nbscreenshot(viewer)
Custom keybindings¶
napari has extensive keyboard shortcuts that can be customized in the Preferences/Settings GUI.
However, it also enables you to bind key-press events to custom callback functions. Again, the
napari implementation (bind_key) is smart, so arguments like the viewer receiving the key press or the currently
selected layer of a given type will be passed to your function.
Let's try a simple example: printing the number of points returned by our detector when we press
a key binding. For this, we will want the bind_key decorator to pass in the selected Points layer
as an argument to our function, which will print the number of detected spots.
from napari.layers import Points
@Points.bind_key("Shift-D")
def print_number_of_points(points_layer: "napari.layers.Points"):
    print("Detected points: ", len(points_layer.data))
Give it a shot in the viewer: you should get a print statement in the notebook when you press the keybinding with a Points layer selected, but not with any other layer type.
Let’s call the function to trigger it for the notebook:
print_number_of_points(viewer.layers['Points'])Detected points: 135
There are actually a number of other events that you can connect callbacks to, other than just key presses. For more information, see the napari events documentation.
Suggestions for further exploration¶
Try to implement a widget or keybinding on your own! If you have something that interests you, see if you can implement it in napari; feel free to use your own data or a different sample file! Otherwise, working with this notebook, here are a few things you can try to implement that we came up with:
- Replace the default float SpinBox widgets with sliders in the final detect_spots function/widget.
- Convert one of the widgets above to use @magic_factory instead of @magicgui. How does the usage/behavior change?
- Create a widget that lets the user pick from a number of different filters; see skimage.filters for ideas. It might be easiest to use an enum for this, containing multiple different filters, so that magicgui creates a dropdown that allows you to select them.
- Replace the blob-log detector with a different one, e.g., skimage.feature.peak_local_max, skimage.feature.blob_dog, etc. You can find some ideas in the scikit-image feature detection documentation.
- Extend either of the two examples above to the full Layer, rather than just the data array, and then make it so that the returned Layer uses (some of) the visualization parameters of the original layer. This will require using the LayerDataTuple return type.
- Add a keybinding to trigger running either of the two widgets above. Consider what to attach the keybinding to: the viewer? a Layer type? a specific layer? Try to ensure the keybinding is not too “fragile”; in other words, consider what happens if the user has modified the layer list significantly!
Conclusions¶
We’ve now seen how to extend the viewer with custom GUI functionality: widgets and keybindings.
Using these concepts, you can make analyses even more interactive, particularly exploratory/human-in-the-loop
analysis. Additionally, the approach described here, using magicgui, can also be directly used to create
a plugin to share with the world.