Installation

Gradio requires Python 3. Once you have Python, you can install the latest version of gradio using pip, like this:

pip install gradio

You may need to run pip3 install gradio instead if you have multiple installations of Python.
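
To check that the installation succeeded, you can import the package from the same Python interpreter that pip installed into:

import gradio  # a clean import with no errors confirms the installation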

Basic Usage

Creating an interface using gradio involves just adding a few lines to your existing code. For example, here's how to create a gradio interface using a pretrained keras model:

import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
io = gradio.Interface(inputs="imageupload", outputs="label", model_type="keras", model=image_mdl)
io.launch()

Running the code above will open a new browser window with an image upload interface. The user can drag and drop their own image, and the model's predicted labels are displayed alongside it.

Basic Parameters

Running a gradio interface requires creating an Interface(inputs: str, outputs: str, model_type: str, model: Any) object, which takes the following arguments:
inputs – the string representing the input interface to be used, or a subclass of gradio.AbstractInput for additional customization (see below).
outputs – the string representing the output interface to be used, or a subclass of gradio.AbstractOutput for additional customization (see below).
model_type – the string representing the type of model being passed in. Supported types include "sklearn", "keras", "pytorch", and "pyfunc" (see the Model Types section below).
model – the actual model to use for processing.

Instead of providing the string names for inputs and outputs, objects that represent input and output interfaces can be provided directly. For example, the code in the Basic Usage section executes identically as follows:

import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
inp = gradio.inputs.ImageUpload()
out = gradio.outputs.Label()
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
io.launch()

This allows for customization of the interfaces, by passing in arguments to the input and output constructors. The parameters that each interface constructor accepts are described below.

Supported Interfaces

This is the list of currently supported interfaces in gradio. Any input interface can be paired with any output interface.

Input Interfaces

inputs="text"

Use this interface to enter text as your input. Parameters: None
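
For example, here is a minimal sketch that pairs the text input with the text output around a plain Python function, using the pyfunc model type described later in this document:

import gradio

# A toy "model" that reverses the input string.
def reverse(s):
    return s[::-1]

io = gradio.Interface(inputs="text", outputs="text", model_type="pyfunc", model=reverse)
io.launch()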

inputs="imageupload"

Use this interface to upload images to your model. Parameters:
shape – a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: (224, 224, 3)
image_mode – PIL Image mode that is used to convert the image to a numpy array. Typically either 'RGB' (3 channel RGB) or 'L' (1 channel grayscale). Default: 'RGB'
scale – A float used to rescale each pixel value in the image. Default: 1/127.5
shift – A float used to shift each pixel value in the image after scaling. Default: -1
cropper_aspect_ratio – Either None or a float that is the aspect ratio of the cropper. Default: None
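
For example, here is a minimal sketch for a model that expects raw RGB pixel values in [0, 1] rather than the default [-1, 1] scaling:

import gradio

# Keep the default 224x224 RGB shape, but map pixel values from
# [0, 255] into [0, 1] instead of the default [-1, 1].
inp = gradio.inputs.ImageUpload(scale=1/255, shift=0)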

inputs="snapshot"

Use this interface to take snapshots from the user's webcam. Parameters:
shape – a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: (224, 224, 3)
image_mode – PIL Image mode that is used to convert the image to a numpy array. Typically either 'RGB' (3 channel RGB) or 'L' (1 channel grayscale). Default: 'RGB'
scale – A float used to rescale each pixel value in the image. Default: 1/127.5
shift – A float used to shift each pixel value in the image after scaling. Default: -1
cropper_aspect_ratio – Either None or a float that is the aspect ratio of the cropper. Default: None
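
To customize these parameters, the interface object can be constructed directly. The class name gradio.inputs.Snapshot below is an assumption by analogy with ImageUpload and Sketchpad, and is not confirmed by this document:

import gradio, tensorflow as tf

image_mdl = tf.keras.applications.inception_v3.InceptionV3()
# Snapshot is an assumed class name, by analogy with gradio.inputs.ImageUpload.
inp = gradio.inputs.Snapshot(shape=(299, 299, 3))
io = gradio.Interface(inputs=inp, outputs="label", model_type="keras", model=image_mdl)
io.launch()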

inputs="sketchpad"

Use this interface to take simple monochrome sketches as input. Parameters:
shape – a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: (224, 224, 3)
invert_colors – a boolean that designates whether the colors should be inverted before passing into the model. Default: True
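
For example, a sketchpad sized for an MNIST-style digit classifier (the keras example later in this document pairs the sketchpad with such a model):

import gradio

# Resize sketches to 28x28; the default invert_colors=True yields
# white strokes on a black background, as MNIST-style models expect.
inp = gradio.inputs.Sketchpad(shape=(28, 28), invert_colors=True)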

inputs="microphone"

Use this interface to record audio input from the microphone.

inputs="audio_file"

Use this interface to upload audio to your model.

Output Interfaces

outputs="classifier"

Use this interface for classification. It displays the predicted classes along with confidence scores. (The code examples in this document construct this output as gradio.outputs.Label, or with the string "label".)
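
As a minimal sketch, this interface can be driven by a plain Python function that returns a vector of class probabilities, as the TensorFlow session example near the end of this document does (the probability values here are illustrative):

import gradio
import numpy as np

# A toy "model": ignores its input and returns fixed probabilities
# over four classes, rendered by the output as confidence bars.
def predict(inp):
    return np.array([[0.73, 0.20, 0.05, 0.02]])

io = gradio.Interface(inputs="text", outputs="label", model_type="pyfunc", model=predict)
io.launch()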

Example output: the predicted label "happy", with confidence bars showing happy at 73% and shocked at 20%, and near-zero confidence for sad and angry.

outputs="text"

Use this interface to display the text output of your model.

outputs="image"

Use this interface to display the image output of your model.

Output

Customizing Interfaces

In practice, it is fairly typical to customize the input and output interfaces so they preprocess the inputs in the way your model accepts, or postprocess the result of your model so that the output interface can display it appropriately. For example, you may need to adapt the preprocessing of the image upload interface so that the image is resized to the correct dimensions before being fed into your model. This can be done in one of two ways: (1) instantiating gradio.Input / gradio.Output objects with custom parameters, or (2) supplying custom preprocessing/postprocessing functions.

Input/Output Objects with Custom Parameters

For small, common changes to the input and output interfaces, you can often simply change the parameters in the constructors of the input and output objects to affect the preprocessing/postprocessing. Here is an example that resizes the image to a different size before feeding it into the model, and tweaks the output interface to hide the confidence bars and show the top 5 classes rather than the default 3:

import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
inp = gradio.inputs.ImageUpload(shape=(299, 299, 3))
out = gradio.outputs.Label(num_top_classes=5)
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
io.launch()

Custom Preprocessing/Postprocessing Functions

Alternatively, you can completely override the default preprocessing/postprocessing functions by supplying your own. For example, here we modify the preprocessing function of the ImageUpload interface to add some noise to the image before feeding it into the model.

import gradio, base64, numpy as np, tensorflow as tf
from io import BytesIO
from PIL import Image
image_mdl = tf.keras.applications.inception_v3.InceptionV3()

def pre(inp):
    im = gradio.preprocessing_utils.encoding_to_image(inp)
    im = gradio.preprocessing_utils.resize_and_crop(im, (299, 299))
    im = np.array(im).flatten()
    im = im * 1/127.5 - 1
    im = im + np.random.normal(0, 0.1, im.shape)  # Adding the noise
    array = im.reshape(1, 299, 299, 3)
    return array

inp = gradio.inputs.ImageUpload(preprocessing_fn=pre)
io = gradio.Interface(inputs=inp, outputs="label", model_type="keras", model=image_mdl)
io.launch()
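
By symmetry, output interfaces can presumably be customized the same way. The sketch below assumes the Label output accepts a postprocessing_fn parameter mirroring the documented preprocessing_fn on inputs; that parameter name is an assumption, not something this document confirms:

import gradio, tensorflow as tf

image_mdl = tf.keras.applications.inception_v3.InceptionV3()

# Hypothetical hook: postprocessing_fn is assumed by symmetry with
# the documented preprocessing_fn on input interfaces.
def post(prediction):
    prediction[prediction < 0.01] = 0  # hide near-zero classes before display
    return prediction

out = gradio.outputs.Label(postprocessing_fn=post)
io = gradio.Interface(inputs="imageupload", outputs=out, model_type="keras", model=image_mdl)
io.launch()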

Model Types

We currently support the following kinds of models:

model_type="sklearn"

This allows you to pass in scikit-learn models, and get predictions from the model. Here's a complete example of training a sklearn model and creating a gradio interface around it.

from sklearn import datasets, svm
import gradio

digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))  # flatten the images

# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001)
classifier.fit(data, digits.target)

# The sklearn digits dataset is different from MNIST: it is 8x8 and consists of black digits on a white background.
inp = gradio.inputs.Sketchpad(shape=(8, 8), flatten=True, scale=16/255, invert_colors=False)
io = gradio.Interface(inputs=inp, outputs="label", model_type="sklearn", model=classifier)
io.launch()

model_type="keras"

This allows you to pass in keras models, and get predictions from the model. Here's a complete example of training a keras model and creating a gradio interface around it.

import gradio, tensorflow as tf

(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
loss, accuracy = model.evaluate(x_test, y_test)

io = gradio.Interface(inputs="sketchpad", outputs="label", model=model, model_type='keras')
io.launch(inline=True, share=True)

Run this code in a colab notebook to see the interface embedded in the notebook.

model_type="pytorch"

This allows you to pass in pytorch models, and get predictions from the model. Here's a complete example of training a pytorch model and creating a gradio interface around it.

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import gradio

# Device configuration
device = torch.device('cpu')

# Hyper-parameters
input_size = 784
hidden_size = 500
num_classes = 10
num_epochs = 2
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data', train=False, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Fully connected neural network with one hidden layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

inp = gradio.inputs.Sketchpad(flatten=True, scale=1/255, dtype='float32')
io = gradio.Interface(inputs=inp, outputs="label", model_type="pytorch", model=model)
io.launch()

model_type="pyfunc"

This allows you to pass in an arbitrary python function, and get the outputs from the function. Here's a very simple example of a "model" with a gradio interface around it.

import gradio

# A very simplistic function that capitalizes each letter in the given string
def big(x):
    return x.upper()

io = gradio.Interface(inputs="text", outputs="text", model_type='pyfunc', model=big)
io.launch(inline=True, share=True)

A more realistic example of the pyfunc use case is the following, where we would like to use a TensorFlow session with a trained model to make predictions. We wrap the session inside a python function like this:

import tensorflow as tf
import gradio

n_classes = 10
(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train.reshape(-1, 784) / 255.0, x_test.reshape(-1, 784) / 255.0
y_train = tf.keras.utils.to_categorical(y_train, n_classes).astype(float)
y_test = tf.keras.utils.to_categorical(y_test, n_classes).astype(float)

learning_rate = 0.5
epochs = 5
batch_size = 100

x = tf.placeholder(tf.float32, [None, 784], name="x")
y = tf.placeholder(tf.float32, [None, 10], name="y")

W1 = tf.Variable(tf.random_normal([784, 300], stddev=0.03), name='W1')
b1 = tf.Variable(tf.random_normal([300]), name='b1')
W2 = tf.Variable(tf.random_normal([300, 10], stddev=0.03), name='W2')
hidden_out = tf.add(tf.matmul(x, W1), b1)
hidden_out = tf.nn.relu(hidden_out)
y_ = tf.matmul(hidden_out, W2)

probs = tf.nn.softmax(y_)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y_, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
init_op = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess = tf.Session()
sess.run(init_op)
total_batch = int(len(y_train) / batch_size)
for epoch in range(epochs):
    avg_cost = 0
    for start, end in zip(range(0, len(y_train), batch_size), range(batch_size, len(y_train)+1, batch_size)):
        batch_x = x_train[start: end]
        batch_y = y_train[start: end]
        _, c = sess.run([optimizer, cross_entropy], feed_dict={x: batch_x, y: batch_y})
        avg_cost += c / total_batch

def predict(inp):
    return sess.run(probs, feed_dict={x: inp})

inp = gradio.inputs.Sketchpad(flatten=True)
io = gradio.Interface(inputs=inp, outputs="label", model_type="pyfunc", model=predict)
io.launch(inline=True, share=True)

Saliency Maps

The imageupload interface also supports a saliency model, in which a heatmap is overlaid on top of the input image. This can be used to show feature attributions, e.g. as an interpretation method. The user supplies their own saliency function, which should take three arguments: the model object, the input feature, and the input label. Here is an example of a saliency function and what it may produce:

import numpy as np
import tensorflow as tf
from deepexplain.tensorflow import DeepExplain
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential, Model
import gradio

model = tf.keras.applications.MobileNet()

def saliency(model, x, y):
    y = y.reshape(1, 1, 1, 1000)
    with DeepExplain(session=K.get_session()) as de:
        input_tensor = model.layers[0].input
        fModel = Model(inputs=input_tensor, outputs = model.layers[-3].output)
        target_tensor = fModel(input_tensor)

        attributions_gradin = de.explain('grad*input', target_tensor, input_tensor, x, ys=y)
        sal = np.sum(np.abs(attributions_gradin.squeeze()), axis=-1)
        sal = (sal - sal.min()) / (sal.max() - sal.min())
        return sal

inp = gradio.inputs.ImageUpload()
out = gradio.outputs.Label(label_names='imagenet1000', max_label_words=1, word_delimiter=",")

io = gradio.Interface(inputs=inp,
                      outputs=out,
                      model=model,
                      model_type='keras',
                      saliency=saliency)

io.launch()

This produces a saliency heatmap overlaid on the uploaded image, highlighting the pixels that contributed most to the predicted label.

Launch Options

When launching the interface, you have the option to pass in several boolean parameters that determine how the interface is displayed. Here is an example showing all of the possible parameters:

io.launch(inbrowser=True, inline=False, validate=False, share=True)

inbrowser – whether the interface should launch in a new browser window.
inline – whether the interface should launch embedded in an interactive python environment (like jupyter notebooks or colab notebooks).
validate – whether gradio should try to validate the interface-model compatibility before launch.
share – whether a public link to share the interface should be created.
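
For instance, a typical script-based launch, reusing the toy pyfunc model from earlier, might opt out of inline rendering and instead open a browser tab with a public share link:

import gradio

# The same toy "model" from the pyfunc section.
def big(x):
    return x.upper()

io = gradio.Interface(inputs="text", outputs="text", model_type="pyfunc", model=big)
io.launch(inbrowser=True, inline=False, share=True)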