<p>Gradio requires <a href="https://www.python.org/downloads/">Python 3</a>. Once you have Python, you can install the latest version of <code>gradio</code> using pip, like this:</p>
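<p>From a terminal:</p>

```shell
pip install gradio
```

<p>Depending on your setup, you may need <code>pip3</code> instead, or to run the command inside a virtual environment.</p>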
the input interface to be used, or a subclass of <code>gradio.AbstractInput</code> for additional customization (see <a href="#custom-interfaces">below</a>).<br>
the output interface to be used, or a subclass of <code>gradio.AbstractOutput</code> for additional customization (see <a href="#custom-interfaces">below</a>).<br>
<p>Instead of providing the string names for <code><span class="var">inputs</span></code> and <code><span class="var">outputs</span></code>, you can provide objects that represent the input and output interfaces. For example, the code
in the Basic Usage section behaves identically to:</p>
<pre><code class="python">import gradio, tensorflow as tf
<p>This allows the interfaces to be customized by passing arguments to the input and output constructors. The parameters that each interface constructor accepts are described below.</p>
<p>Use this interface to upload images to your model. Parameters: <br>
<code><span class="var">shape</span></code> – a tuple with the shape to which the uploaded image is resized before being passed into the model. Default: <code>(224, 224, 3)</code><br>
<code><span class="var">image_mode</span></code> – the PIL image mode used to convert the image to a numpy array. Typically either 'RGB' (3-channel RGB) or 'L' (1-channel grayscale). Default: <code>'RGB'</code><br>
<code><span class="var">scale</span></code> – a float used to rescale each pixel value in the image. Default: <code>1/127.5</code><br>
<code><span class="var">shift</span></code> – a float used to shift each pixel value in the image after scaling. Default: <code>-1</code><br>
<code><span class="var">cropper_aspect_ratio</span></code> – either None or a float giving the aspect ratio of the cropper. Default: <code>None</code><br>
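<p>To see what the default <code>scale</code> and <code>shift</code> do, assume each pixel value is transformed as <code>pixel * scale + shift</code> (consistent with the parameter descriptions above). The defaults then map 8-bit pixel values in [0, 255] to [-1, 1], which is easy to check with numpy:</p>

```python
import numpy as np

# Defaults from the parameter list above; assumed to be applied as
# pixel * scale + shift.
scale, shift = 1 / 127.5, -1

pixels = np.array([0.0, 127.5, 255.0])
rescaled = pixels * scale + shift
print(rescaled)  # [-1.  0.  1.]
```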
<p>Use this interface to take snapshots from the user's webcam. Parameters: <br>
<code><span class="var">shape</span></code> – a tuple with the shape to which the captured image is resized before being passed into the model. Default: <code>(224, 224, 3)</code><br>
<code><span class="var">image_mode</span></code> – the PIL image mode used to convert the image to a numpy array. Typically either 'RGB' (3-channel RGB) or 'L' (1-channel grayscale). Default: <code>'RGB'</code><br>
<code><span class="var">scale</span></code> – a float used to rescale each pixel value in the image. Default: <code>1/127.5</code><br>
<code><span class="var">shift</span></code> – a float used to shift each pixel value in the image after scaling. Default: <code>-1</code><br>
<code><span class="var">cropper_aspect_ratio</span></code> – either None or a float giving the aspect ratio of the cropper. Default: <code>None</code><br>
<p>Use this interface to take simple monochrome sketches as input. Parameters: <br>
<code><span class="var">shape</span></code> – a tuple with the shape to which the sketch is resized before being passed into the model. Default: <code>(224, 224, 3)</code><br>
<code><span class="var">invert_colors</span></code> – a boolean designating whether the colors should be inverted before being passed into the model. Default: <code>True</code><br>
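<p>Assuming <code>invert_colors</code> flips 8-bit values as <code>255 - pixel</code> (an assumption for illustration; the sketchpad draws dark strokes on a white background, while many models expect bright strokes on a dark background), the effect can be sketched as:</p>

```python
import numpy as np

# Toy 2x2 "sketch": white background (255) with one black stroke pixel (0)
sketch = np.array([[255, 255],
                   [0, 128]], dtype=np.uint8)

# Assumed inversion: 255 - pixel, so the white background maps to 0
inverted = 255 - sketch
print(inverted)
```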
<p>Use this interface to display the text of your output.</p>
<div class="gradio output text">
<div class="role">Output</div>
<textarea readonly class="output_text">Lorem ipsum consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
<p>In practice, it is fairly typical to customize the input and output interfaces so they preprocess the inputs
in the way your model accepts, or postprocess the result of your model so that the output interface
can display it. For example, you may need to adapt the preprocessing of the image upload interface so that
the image is resized to the correct dimensions before being fed into your model. This can be done in one of two ways: (1) instantiating <code>gradio.Input</code> /
<code>gradio.Output</code> objects with custom parameters, or (2) supplying custom preprocessing/postprocessing functions.</p>
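<p>As a preview of option (2), here is a hypothetical standalone preprocessing function (the function name and signature are illustrative assumptions, not the exact hook gradio expects) that resizes an image and rescales its pixels in the same way as the defaults described above:</p>

```python
import numpy as np
from PIL import Image

def preprocess(img, shape=(224, 224, 3), scale=1 / 127.5, shift=-1):
    """Resize `img` to `shape` and rescale pixel values to roughly [-1, 1]."""
    img = img.convert('RGB').resize(shape[:2])
    arr = np.asarray(img, dtype=np.float32) * scale + shift
    return arr[np.newaxis, ...]  # add a batch dimension for the model

# Example: a plain white 640x480 image becomes a (1, 224, 224, 3) batch
sample = Image.new('RGB', (640, 480), color=(255, 255, 255))
batch = preprocess(sample)
print(batch.shape)  # (1, 224, 224, 3)
```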
<h2>Input/Output Objects with Custom Parameters</h2>
<p>For small, common changes to the input and output interfaces, you can often simply change the parameters in
the constructor of the input and output objects to affect the preprocessing/postprocessing. Here is an example that
resizes the image to a different size before feeding it into the model, and tweaks the output interface to
hide the confidence bars and show the top 5 classes rather than the default 3:</p>
<pre><code class="python">import gradio, tensorflow as tf
<p>This allows you to pass in scikit-learn models, and get predictions from the model. Here's a complete example of training a <code>sklearn</code> model and creating a <code>gradio</code> interface around it.
<p>This allows you to pass in Keras models, and get predictions from the model. Here's a complete example of training a <code>keras</code> model and creating a <code>gradio</code> interface around it.
<p><a href="https://colab.research.google.com/drive/1DQSuxGARUZ-v4ZOAuw-Hf-8zqegpmes-">Run this code in a colab notebook</a> to see the interface embedded in the notebook.</p>
<p>This allows you to pass in PyTorch models, and get predictions from the model. Here's a complete example of training a <code>pytorch</code> model and creating a <code>gradio</code> interface around it.
<p>This allows you to pass in an arbitrary Python function, and get the outputs from the function. Here's a very simple example of a "model" with a <code>gradio</code> interface around it.
<p><code><span class="var">inbrowser</span></code> – whether the model should launch in a new browser window.<br>
<code><span class="var">inline</span></code> – whether the model should launch embedded in an interactive Python environment (like Jupyter or Colab notebooks).<br>
<code><span class="var">validate</span></code> – whether gradio should try to validate the interface-model compatibility before launch.<br>