Installation
Gradio requires Python 3. Once you have Python, you can install the latest version of gradio using pip:
pip install gradio
If you have multiple installations of Python, you may need to run pip3 install gradio instead.
Basic Usage
Creating an interface using gradio involves adding just a few lines to your existing code. For example, here's how to create a gradio interface using a pretrained keras model:
import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
io = gradio.Interface(inputs="imageupload", outputs="label", model_type="keras", model=image_mdl)
io.launch()
Running the code above will open a new browser window with the following interface running:
Basic Parameters
Running a Gradio interface requires creating an Interface(inputs : str, outputs : str, model_type : str, model : Any) object, which takes the following input arguments:
inputs
– the string representing the input interface to be used, or a subclass of gradio.AbstractInput for additional customization (see below).
outputs
– the string representing the output interface to be used, or a subclass of gradio.AbstractOutput for additional customization (see below).
model_type
– the string representing the type of model being passed in. Supported types include keras.
model
– the actual model to use for processing.
Instead of providing the string names for inputs and outputs, objects that represent input and output interfaces can be provided. For example, the code in the Basic Usage section executes identically as:
import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
inp = gradio.inputs.ImageUpload()
out = gradio.outputs.Label()
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
io.launch()
This allows for customization of the interfaces by passing in arguments to the input and output constructors. The parameters that each interface constructor accepts are described below.
Supported Interfaces
This is the list of currently supported interfaces in Gradio. All input interfaces can be paired with any output interface.
Input Interfaces
inputs="text"
Use this interface to enter text as your input. Parameters: None
inputs="imageupload"
Use this interface to upload images to your model. Parameters:
shape
– a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: (224, 224, 3)
image_mode
– PIL Image mode that is used to convert the image to a numpy array. Typically either 'RGB' (3 channel RGB) or 'L' (1 channel grayscale). Default: 'RGB'
scale
– A float used to rescale each pixel value in the image. Default: 1/127.5
shift
– A float used to shift each pixel value in the image after scaling. Default: -1
cropper_aspect_ratio
– Either None or a float that is the aspect ratio of the cropper. Default: None
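To see what the scale and shift defaults do to pixel values, here is a minimal sketch of that arithmetic in plain numpy (independent of gradio itself; the sample pixel values are illustrative):

```python
import numpy as np

# A hypothetical 8-bit image patch with pixel values in [0, 255].
pixels = np.array([0.0, 127.5, 255.0])

# The defaults scale=1/127.5 and shift=-1 map pixel values into [-1, 1],
# the range that models such as InceptionV3 expect.
normalized = pixels * (1 / 127.5) + (-1)
print(normalized)  # [-1.  0.  1.]
```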
inputs="snapshot"
Use this interface to take snapshots from the user's webcam. Parameters:
shape
– a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: (224, 224, 3)
image_mode
– PIL Image mode that is used to convert the image to a numpy array. Typically either 'RGB' (3 channel RGB) or 'L' (1 channel grayscale). Default: 'RGB'
scale
– A float used to rescale each pixel value in the image. Default: 1/127.5
shift
– A float used to shift each pixel value in the image after scaling. Default: -1
cropper_aspect_ratio
– Either None or a float that is the aspect ratio of the cropper. Default: None
inputs="sketchpad"
Use this interface to take simple monochrome sketches as input. Parameters:
shape
– a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: (224, 224, 3)
invert_colors
– a boolean that designates whether the colors should be inverted before passing into the model. Default: True
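As a sketch of what invert_colors does, assuming the sketchpad arrives as an 8-bit grayscale array (dark strokes on a white background), inversion flips it to bright strokes on a dark background, which MNIST-style models typically expect:

```python
import numpy as np

# Hypothetical grayscale sketch: 255 = white background, 0 = black stroke.
sketch = np.array([[255, 0],
                   [0, 255]], dtype=np.uint8)

inverted = 255 - sketch  # strokes become bright, background becomes dark
print(inverted)  # [[  0 255]
                 #  [255   0]]
```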
inputs="microphone"
Use this interface to record audio input from the microphone.
inputs="audio_file"
Use this interface to upload audio to your model.
Output Interfaces
outputs="classifier"
Use this interface for classification. Responds with confidences for each class.
outputs="text"
Use this interface to display the text of your output.
outputs="image"
Use this interface to display the image output of your model.
Customizing Interfaces
In practice, it is fairly typical to customize the input and output interfaces so they preprocess the inputs in the way your model accepts, or postprocess the result of your model in the appropriate way so that the output interface can display the result. For example, you may need to adapt the preprocessing of the image upload interface so that the image is resized to the correct dimensions before being fed into your model. This can be done in one of two ways: (1) instantiating gradio.Input / gradio.Output objects with custom parameters, or (2) supplying custom preprocessing/postprocessing functions.
Input/Output Objects with Custom Parameters
For small, common changes to the input and output interfaces, you can often simply change the parameters in the constructors of the input and output objects to affect the preprocessing/postprocessing. Here is an example that resizes the image to a different size before feeding it into the model, and tweaks the output interface to hide the confidence bars and show the top 5 classes rather than the default 3:
import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
inp = gradio.inputs.ImageUpload(shape=(299, 299, 3))
out = gradio.outputs.Label(num_top_classes=5, show_confidences=False)
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
io.launch()
Custom Preprocessing/Postprocessing Functions
Alternatively, you can completely override the default preprocessing/postprocessing functions by supplying your own. For example, here we modify the preprocessing function of the ImageUpload interface to add some noise to the image before feeding it into the model.
import gradio, base64, numpy as np, tensorflow as tf
from io import BytesIO
from PIL import Image
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
def pre(inp):
    im = gradio.preprocessing_utils.encoding_to_image(inp)
    im = gradio.preprocessing_utils.resize_and_crop(im, (299, 299))
    im = np.array(im).flatten()
    im = im * 1/127.5 - 1
    im = im + np.random.normal(0, 0.1, im.shape)  # Adding the noise
    array = im.reshape(1, 299, 299, 3)
    return array
inp = gradio.inputs.ImageUpload(preprocessing_fn=pre)
io = gradio.Interface(inputs=inp, outputs="label", model_type="keras", model=image_mdl)
io.launch()
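Postprocessing functions can be overridden in the same spirit. As an illustration (the function below is hypothetical, not part of gradio), this is the kind of postprocessing you might supply to turn a raw probability vector from the model into a top-k label dictionary for a label output:

```python
import numpy as np

def post(prediction, labels=("cat", "dog", "bird"), k=2):
    # prediction: a 1D array (or list) of class probabilities from the model.
    probs = np.asarray(prediction).flatten()
    top = np.argsort(probs)[::-1][:k]  # indices of the k highest probabilities
    return {labels[i]: float(probs[i]) for i in top}

print(post([0.1, 0.7, 0.2]))  # {'dog': 0.7, 'bird': 0.2}
```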
Model Types
We currently support the following kinds of models:
model_type="sklearn"
This allows you to pass in scikit-learn models, and get predictions from the model. Here's a complete example of training a sklearn model and creating a gradio interface around it.
model_type="keras"
This allows you to pass in keras models, and get predictions from the model. Here's a complete example of training a keras model and creating a gradio interface around it.
model_type="pytorch"
This allows you to pass in pytorch models, and get predictions from the model. Here's a complete example of training a pytorch model and creating a gradio interface around it.
model_type="pyfunc"
This allows you to pass in an arbitrary Python function, and get predictions from it. Here's a complete example of writing a pyfunc model and creating a gradio interface around it.
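With model_type="pyfunc", the model is just a Python function that maps the (preprocessed) input to the output. A minimal sketch of such a function follows; the name and toy logic are illustrative, not part of gradio:

```python
def predict(inp):
    # inp: whatever the input interface's preprocessing produces,
    # e.g. a plain string from the "text" input interface.
    return inp[::-1]  # toy "model": reverse the input text

# This function could then be passed directly to the interface:
# io = gradio.Interface(inputs="text", outputs="text",
#                       model_type="pyfunc", model=predict)
print(predict("hello"))  # olleh
```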
Launch Options
When launching the interface, you have the option to pass in several boolean parameters that determine how the interface is displayed. Here is an example showing all of the possible parameters:
io.launch(inbrowser=True, inline=False, validate=False, share=True)
inbrowser
– whether the interface should open in a new tab in the user's default web browser.
inline
– whether the interface should be embedded inline within the notebook output (e.g. in a Jupyter or Colab notebook).
validate
– whether gradio should validate the interface (by checking the model type and the input/output interfaces) before launching.
share
– whether a public, shareable link should be created for the interface, so that anyone can access it from their browser.