Offline Interactive Plotting with Python in 2021

I recently had a small visualization that I wanted to do: Given a 3D stack of images, plot them with the ability to interactively flip through the stack. Also, I wanted to have it work on a static site like GitHub Pages (e.g. this site).

This seemed to me like a reasonable expectation of any good programming language, especially the most popular data science language. There are certainly others asking the same questions as me, and years of talk of making this easy to do. Note that my search was geared specifically towards images and geospatial data; if I just wanted some bar charts with sliders, I would have stuck with the embedding functionality of something like Altair (or one of the other 5 options below).

Turns out that my decision to stick with Python only and avoid Javascript made this into quite the scavenger hunt. You can see below how many Python plotting libraries it took before I found what I wanted. This isn’t trying to be a full comparison of the libraries (I have no desire to do that). I only hope that I’ll get to spare somebody else the tedium of trying 15 plotting demos.

Final choices:

Skipping to the winners, here were my favorites:

- HoloViews (with the Bokeh backend) for flipping through 2D images with a slider
- ipyvolume for the 3D surface animation

The “not-quites”

Here are the notes I made about the other libraries I attempted along the way, and why they didn’t work for my case (if I missed something obvious and easy here, I’d be happy to hear how).

Example Image Stack Demo

For the two winners, I’ll show what I was able to accomplish with just a few lines of code. Hopefully it’s easy enough to translate to any other 3D image stack.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from numpy.fft import fft2, ifft2, fftshift

%matplotlib notebook

Here I was playing around with creating Gaussian random fields. These are used in cosmology to describe the distribution of stars/galaxies in the universe, but also work as a way to simulate turbulence in the atmosphere (my use case).

There’s a relatively simple and fast way to generate an instance of this random field:

def power_law(k, beta):
    # np.where(condition, value_if_true, value_if_false): zero out k = 0,
    # and suppress the divide-by-zero warning from evaluating k ** (-beta) there
    with np.errstate(divide="ignore"):
        return np.where(k > 0, k ** (-beta), 0.0)


def make_gaussian_field(Nsize, beta=3, seed=None):
    if seed is not None:
        np.random.seed(seed)
    white_noise = np.random.uniform(size=(Nsize, Nsize))
    w_hat = fft2(white_noise)
    idxs = np.concatenate((np.arange(Nsize / 2 + 1), np.arange(-Nsize / 2 + 1, 0)))
    ka = 2 * np.pi / Nsize * idxs[:, np.newaxis]
    karr = np.sqrt(ka ** 2 + ka.T ** 2)
    P = np.sqrt(power_law(karr, beta))

    Phi = w_hat * P
    return ifft2(Phi).real
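As a quick sanity check (my own addition, not part of the original notebook), we can verify that the generator really produces the intended spectrum: the power $|\hat\Phi(k)|^2$ of a generated field should fall off like $k^{-\beta}$, so a log-log fit of power against wavenumber magnitude should give a slope near $-\beta$. A self-contained sketch, repeating the generator from above:

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq


def power_law(k, beta):
    with np.errstate(divide="ignore"):
        return np.where(k > 0, k ** (-beta), 0.0)


def make_gaussian_field(Nsize, beta=3, seed=None):
    # Same construction as above: filter white noise by the sqrt of the spectrum
    if seed is not None:
        np.random.seed(seed)
    white_noise = np.random.uniform(size=(Nsize, Nsize))
    w_hat = fft2(white_noise)
    idxs = np.concatenate((np.arange(Nsize / 2 + 1), np.arange(-Nsize / 2 + 1, 0)))
    ka = 2 * np.pi / Nsize * idxs[:, np.newaxis]
    karr = np.sqrt(ka ** 2 + ka.T ** 2)
    P = np.sqrt(power_law(karr, beta))
    return ifft2(w_hat * P).real


def fitted_spectral_slope(field):
    # Unweighted log-log fit of |FFT(field)|^2 against wavenumber magnitude
    N = field.shape[0]
    power = np.abs(fft2(field)) ** 2
    k = 2 * np.pi * fftfreq(N)
    karr = np.sqrt(k[:, np.newaxis] ** 2 + k[np.newaxis, :] ** 2)
    mask = karr > 0  # exclude the DC bin, where the spectrum was zeroed
    slope, _ = np.polyfit(np.log(karr[mask]), np.log(power[mask]), 1)
    return slope


beta = 3.0
slope = fitted_spectral_slope(make_gaussian_field(256, beta, seed=0))
print(f"beta = {beta}, fitted slope = {slope:.2f}")  # slope should come out close to -beta
```

The fit is noisy per-pixel (the white-noise factor adds scatter around the power law), but averaged over all ~65k Fourier bins it lands close to $-\beta$.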

I wanted to try and see how the $\beta$ parameter affected the generated noise visually, so I made a stack of these with increasing $\beta$:

beta_range = np.arange(0, 3.5, 0.5)
phi_stack = np.array([make_gaussian_field(100, beta, seed=0) for beta in beta_range])
print(f"Image stack: {phi_stack.shape = }")
Image stack: phi_stack.shape = (7, 100, 100)

Plotting one of these images is easy enough with Matplotlib:

plt.close()
fig, ax = plt.subplots()
phi = phi_stack[-1]
axim = ax.imshow(phi)
fig.colorbar(axim, ax=ax)
plt.axis("off")
plt.show()

Now here’s the result of my search on how to visualize the entire stack.

2D Images with Sliders

For viewing one image at a time and flipping through with a slider, HoloViews seems to have the nicest interface.

import holoviews as hv
from holoviews import opts
import xarray as xr

hv.extension("bokeh", "matplotlib")

# Example options from their quick start guide
opts.defaults(
    opts.GridSpace(shared_xaxis=True, shared_yaxis=True),
    opts.Image(cmap="viridis", width=400, height=400),
    opts.Labels(
        text_color="white",
        text_font_size="8pt",
        text_align="left",
        text_baseline="bottom",
    ),
    opts.Path(color="white"),
    opts.Spread(width=600),
    opts.Overlay(show_legend=False),
)

I started by labelling my stack so I could convert it into an xarray dataset.

# Make a subset that is uneven so you can distinguish the dimensions:
p1 = phi_stack[:, :-10, :-20]
print(p1.shape)

xrphi = xr.Dataset(
    data_vars={"phi": (["beta", "y", "x"], p1)},
    coords={
        "beta": beta_range,
        "y": np.arange(p1.shape[1]),
        "x": np.arange(p1.shape[2]),
    },
)
xrphi
(7, 90, 80)
<xarray.Dataset>
Dimensions:  (beta: 7, x: 80, y: 90)
Coordinates:
  * beta     (beta) float64 0.0 0.5 1.0 1.5 2.0 2.5 3.0
  * y        (y) int64 0 1 2 3 4 5 6 7 8 9 10 ... 80 81 82 83 84 85 86 87 88 89
  * x        (x) int64 0 1 2 3 4 5 6 7 8 9 10 ... 70 71 72 73 74 75 76 77 78 79
Data variables:
    phi      (beta, y, x) float64 0.05235 0.2187 0.1063 ... 0.9273 0.5682 0.4145
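A side note of mine (not in the original post) on what the labeling buys you: once the dimensions carry coordinates, you can select slices by coordinate value rather than positional index. A minimal sketch with stand-in random data of the same shape:

```python
import numpy as np
import xarray as xr

beta_range = np.arange(0, 3.5, 0.5)
data = np.random.default_rng(0).normal(size=(7, 90, 80))
ds = xr.Dataset(
    data_vars={"phi": (["beta", "y", "x"], data)},
    coords={"beta": beta_range, "y": np.arange(90), "x": np.arange(80)},
)

# Label-based selection: grab the beta = 1.5 slice by value, not by index
frame = ds["phi"].sel(beta=1.5)
print(frame.shape)  # (90, 80) -- the beta dimension is consumed by the selection
```

This is the same mechanism HoloViews leans on below when it turns the non-key dimension into a slider.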

This is an xarray dataset, but HoloViews needs its own hv.Dataset. Luckily, the integration with xarray is good enough that you can just pass it in:

phids = hv.Dataset(xrphi)

To visualize this, you create an Image object. The only extra information you need to pass to the Image constructor is kdims, the “key” dimensions: which dimension is x (horizontal) and which is y (vertical). The other dimensions will get gobbled up and become a slider on the “HoloMap”.

If you’ve got two dimensions named ‘x’ and ‘y’, it will interpret this automatically (but being explicit is nice):

imnoise = phids.to(hv.Image, kdims=["x", "y"])
imnoise

This is what I was looking for! It’s especially nice given how few lines of code it took. Flipping through the slider, we see that the simulated noise goes from pure white noise (when $\beta = 0$) to very spatially correlated, realistic looking tropospheric noise at high $\beta$s.

BUT, the final test was whether it was easy to output an HTML snippet that will work offline, or whether that’s another huge obstacle.

Luckily, it was this simple:

outname = "test_holo.html"

renderer = hv.renderer("bokeh")
# Unclear why the function must append .html? oh well
renderer.save(imnoise, outname.replace(".html", ""))

from IPython.core.display import HTML

# Now we can load it here, or insert it anywhere as an embedding:
with open(outname) as f:
    wt = f.read()
HTML(wt)

Though it took many hours of searching and trying different libraries’ examples, I was sold after I found this. It was exactly what I wanted, looked nice, needed very few lines of code, and the integration with xarray is an added bonus. They even have a further extension for other geospatial data with GeoViews, which I’ll keep my eye on for more complicated map making.

3D Surface Animation

While my use case here is definitely better as a 2D slider, I originally wanted to see it as a 3D surface.

import ipyvolume as ipv
from ipywidgets import VBox

Kx, Ky = np.meshgrid(np.arange(phi_stack.shape[2]), np.arange(phi_stack.shape[1]))

# Make a colormap proportional to height
colormap = cm.coolwarm
znorm = phi - phi.min()
znorm /= np.ptp(znorm)  # scale to [0, 1] (np.ptp(); the ndarray.ptp method is gone in NumPy 2.0)
znorm.min(), znorm.max()  # sanity check: should be (0.0, 1.0)
color = colormap(znorm)

Ipyvolume made this nearly as easy as using Matplotlib’s 3D plotting, but it has a much sleeker interface thanks to the other JS libraries it uses.

ipv.figure()
mesh = ipv.plot_surface(Kx, Ky, phi_stack, color=color[..., :3])
ipv.animation_control(mesh)  # add playback controls for flipping through the stack


vb = VBox([ipv.gcc()])
vb

In a live Jupyter kernel, this is how you would plot it. But, like HoloViews, ipyvolume also makes it easy to save as an HTML embedding:

outname = "widget1.html"
ipv.embed.embed_html(outname, [vb])

# Now load the snippet we just saved to show it works as a standalone
with open(outname) as f:
    wt = f.read()
HTML(wt)

While the animation is pretty sweet, I think I was a bit over eager in thinking this example needed to be 3D… maybe next time.
