<html>
<head>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-123499302-2"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());

gtag('config', 'UA-123499302-2');
</script>
<title>Gradio</title>
<link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">
<link href="style/style.css" rel="stylesheet">
<link href="style/home.css" rel="stylesheet">
<link href="style/getting_started.css" rel="stylesheet">
<link href="gradio/gradio.css" rel="stylesheet">
<link href="gradio/vendor/cropper.css" rel="stylesheet">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.15.6/styles/github.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.15.6/highlight.min.js"></script>
<script>hljs.initHighlightingOnLoad();</script>
</head>
<body>
<nav>
<img src="img/logo_inline.png" />
<a href="index.html">Gradio</a>
<a class="selected" href="getting_started.html">Getting Started</a>
<a href="sharing.html">Sharing</a>
<a href="blog.html">Blog</a>
<a href="contact.html">Contact</a>
</nav>
<div class="content">
<h1>Installation</h1>
<p>Gradio requires <a href="https://www.python.org/downloads/">Python 3</a>. Once you have Python, you can download the latest version of <code>gradio</code> using pip, like this:</p>
<pre><code class="bash">pip install gradio</code></pre>

<p>Or you may need to do <code>pip3 install gradio</code> if you have multiple installations of Python.</p>
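<p>To check that the installation worked, you can print the installed version (this assumes the package exposes the usual <code>__version__</code> attribute):</p>
<pre><code class="python">import gradio
print(gradio.__version__)  # assumes a standard __version__ attribute is defined</code></pre>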
<h1>Basic Usage</h1>
<p>Creating an interface using gradio involves just adding a few lines to your existing code. For example, here's how to create a <code>gradio</code> interface using a pretrained <code>keras</code> model:</p>

<pre><code class="python">import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
io = gradio.Interface(inputs="imageupload", outputs="label", model_type="keras", model=image_mdl)
io.launch()</code></pre>

<p>Running the code above will open a new browser window with an image upload interface. Users can drag and drop their own image, which produces outputs like this:</p>
<img class="webcam" src="img/cheetah-clean.png" width="80%"/>

<p> </p><p> </p>
<h1>Basic Parameters</h1>
<p>Running a Gradio interface requires creating an <code><span class="func">Interface(</span><span class="var">inputs</span> : str, <span class="var">outputs</span> : str, <span class="var">model_type</span> : str, <span class="var">model</span> : Any<span class="func">)</span></code> object, which takes as input arguments:<br>
<code><span class="var">inputs</span></code> – the string representing the input interface to be used, or a subclass of <code>gradio.AbstractInput</code> for additional customization (see <a href="#custom-interfaces">below</a>).<br>
<code><span class="var">outputs</span></code> – the string representing the output interface to be used, or a subclass of <code>gradio.AbstractOutput</code> for additional customization (see <a href="#custom-interfaces">below</a>).<br>
<code><span class="var">model_type</span></code> – the string representing the type of model being passed in. Supported types include <code>sklearn</code>, <code>keras</code>, <code>pytorch</code>, and <code>pyfunc</code> (see Model Types below).<br>
<code><span class="var">model</span></code> – the actual model to use for processing.</p>
<p>Instead of providing the string names for <code><span class="var">inputs</span></code> and <code><span class="var">outputs</span></code>, you can provide objects that represent input and output interfaces. For example, the code in the Basic Usage section executes identically to the following:</p>

<pre><code class="python">import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
inp = gradio.inputs.ImageUpload()
out = gradio.outputs.Label()
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
io.launch()</code></pre>

<p>This allows for customization of the interfaces by passing in arguments to the input and output constructors. The parameters that each interface constructor accepts are described below.</p>

<h1>Supported Interfaces</h1>
<p id="interfaces_text">This is the list of currently supported interfaces in Gradio. All input interfaces can be paired with any output interface.
</p>
<div class="interfaces_set">
<div class="inputs_set">
<h2>Input Interfaces</h2>
<h2><code><span class="var">inputs</span>="text"</code></h2>
<p>Use this interface to enter text as your input. Parameters: <em>None</em>
</p>
<div class="gradio input text">
<div class="role">Input</div>
<textarea class="input_text" placeholder="Enter text here..."></textarea>
</div>
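<p>For example, since any input interface can be paired with any output interface, the "text" input can drive the "text" output through a plain python function. A minimal sketch, using the <code>pyfunc</code> model type described in Model Types below (the string names are assumed to match the list on this page):</p>
<pre><code class="python">import gradio

# A toy "model" that reverses the input string
def reverse(s):
    return s[::-1]

io = gradio.Interface(inputs="text", outputs="text", model=reverse, model_type="pyfunc")
io.launch()</code></pre>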
<h2><code><span class="var">inputs</span>="imageupload"</code></h2>
<p>Use this interface to upload images to your model. Parameters: <br>
<code><span class="var">shape</span></code> – a tuple with the shape to which the uploaded image is resized before being passed into the model. Default: <code>(224, 224, 3)</code><br>
<code><span class="var">image_mode</span></code> – the PIL Image mode used to convert the image to a numpy array. Typically either 'RGB' (3-channel RGB) or 'L' (1-channel grayscale). Default: <code>'RGB'</code><br>
<code><span class="var">scale</span></code> – a float used to rescale each pixel value in the image. Default: <code>1/127.5</code><br>
<code><span class="var">shift</span></code> – a float used to shift each pixel value in the image after scaling. Default: <code>-1</code><br>
<code><span class="var">cropper_aspect_ratio</span></code> – either None or a float giving the aspect ratio of the cropper. Default: <code>None</code><br>
</p>
<div class="gradio input image_file">
<div class="role">Input</div>
<div class="input_image">
Drop Image Here<br>- or -<br>Click to Upload
</div>
</div>
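<p>For instance, the parameters above can be combined to match a grayscale model. A sketch (the exact shape tuple for single-channel input is illustrative):</p>
<pre><code class="python">import gradio

# 28x28 grayscale input; scale=1 and shift=0 leave pixel values in [0, 255]
inp = gradio.inputs.ImageUpload(shape=(28, 28, 1), image_mode='L', scale=1, shift=0)</code></pre>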
<h2><code><span class="var">inputs</span>="snapshot"</code></h2>
<p>Use this interface to take snapshots from the user's webcam. Parameters: <br>
<code><span class="var">shape</span></code> – a tuple with the shape to which the snapshot is resized before being passed into the model. Default: <code>(224, 224, 3)</code><br>
<code><span class="var">image_mode</span></code> – the PIL Image mode used to convert the image to a numpy array. Typically either 'RGB' (3-channel RGB) or 'L' (1-channel grayscale). Default: <code>'RGB'</code><br>
<code><span class="var">scale</span></code> – a float used to rescale each pixel value in the image. Default: <code>1/127.5</code><br>
<code><span class="var">shift</span></code> – a float used to shift each pixel value in the image after scaling. Default: <code>-1</code><br>
<code><span class="var">cropper_aspect_ratio</span></code> – either None or a float giving the aspect ratio of the cropper. Default: <code>None</code><br>
</p>
<div class="gradio input snapshot">
<div class="role">Input</div>
<div class="input_snapshot">
<img class="webcam" src="img/webcam.png" />
<div class="input_directions">
Click to Upload a Snapshot from the Webcam.
</div>
</div>
</div>
<h2><code><span class="var">inputs</span>="sketchpad"</code></h2>
<p>Use this interface to take simple monochrome sketches as input. Parameters: <br>
<code><span class="var">shape</span></code> – a tuple with the shape to which the sketch is resized before being passed into the model. Default: <code>(224, 224, 3)</code><br>
<code><span class="var">invert_colors</span></code> – a boolean that designates whether the colors should be inverted before passing into the model. Default: <code>True</code><br>
</p>
<div class="input sketchpad">
<div class="role">Input</div>
</div>
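<p>For example, a sketchpad sized for MNIST-style digit models can be built from the parameters above (the shape value is illustrative; the sklearn example in Model Types below does the same for 8x8 digits):</p>
<pre><code class="python">import gradio

# 28x28 input; invert_colors=True (the default) flips black-on-white drawings
# to the white-on-black convention many MNIST-trained models expect
inp = gradio.inputs.Sketchpad(shape=(28, 28), invert_colors=True)</code></pre>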
<h2><code><span class="var">inputs</span>="microphone"</code></h2>
<p>Use this interface to record audio input from the microphone.</p>
<div class="gradio input mic">
<div class="role">Input</div>
<div class="input_mic">
<img class="mic" src="img/mic.png" />
<div class="input_directions">
Click to Upload Audio from the Microphone.
</div>
</div>
</div>
<h2><code><span class="var">inputs</span>="audio_file"</code></h2>
<p>Use this interface to upload audio to your model.</p>
<div class="gradio input audio_file">
<div class="role">Input</div>
<div class="input_audio">
Drop Audio File Here<br>- or -<br>Click to Upload
</div>
</div>
</div><!--
--><div class="outputs_set">
<h2>Output Interfaces</h2>
<h2><code><span class="var">outputs</span>="classifier"</code></h2>
<p>Use this interface for classification. Responds with confidence scores for each class. </p>
<div class="gradio output classifier">
<div class="role">Output</div>
<div class="output_class">happy</div>
<div class="confidence_intervals">
<div class="confidence"><div class="label">happy</div><div class="level" style="width: 219px">73%</div></div>
<div class="confidence"><div class="label">shocked</div><div class="level" style="width: 60px">20%</div></div>
<div class="confidence"><div class="label">sad</div><div class="level" style="width: 15px"> </div></div>
<div class="confidence"><div class="label">angry</div><div class="level" style="width: 6px"> </div></div>
</div>
</div>
<h2><code><span class="var">outputs</span>="text"</code></h2>
<p>Use this interface to display the text output of your model.</p>
<div class="gradio output text">
<div class="role">Output</div>
<textarea readonly class="output_text">Lorem ipsum consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
</textarea>
</div>
<h2><code><span class="var">outputs</span>="image"</code></h2>
<p>Use this interface to display an image output of your model.</p>
<div class="gradio output image">
<div class="role">Output</div>
<div class="output_image">
<img src="img/altered_logo.png" />
</div>
</div>
</div>
</div>
<h1 id="custom-interfaces">Customizing Interfaces</h1>
<p>In practice, it is fairly typical to customize the input and output interfaces so they preprocess the inputs in the way your model accepts, or postprocess the result of your model in the appropriate way so that the output interface can display it. For example, you may need to adapt the preprocessing of the image upload interface so that the image is resized to the correct dimensions before being fed into your model. This can be done in one of two ways: (1) instantiating <code>gradio.Input</code> / <code>gradio.Output</code> objects with custom parameters, or (2) supplying custom preprocessing/postprocessing functions.</p>
<h2>Input/Output Objects with Custom Parameters</h2>
<p>For small, common changes to the input and output interfaces, you can often simply change the parameters in the constructors of the input and output objects to affect the preprocessing/postprocessing. Here is an example that resizes the image to a different size before feeding it into the model, and tweaks the output interface to hide the confidence bars and show the top 5 classes rather than the default 3:</p>

<pre><code class="python">import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
inp = gradio.inputs.ImageUpload(shape=(299, 299, 3))
out = gradio.outputs.Label(num_top_classes=5)
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
io.launch()</code></pre>

<h2>Custom Preprocessing/Postprocessing Functions</h2>
<p>Alternatively, you can completely override the default preprocessing/postprocessing functions by supplying your own. For example, here we modify the preprocessing function of the ImageUpload interface to add some noise to the image before feeding it into the model.</p>

<pre><code class="python">import gradio, numpy as np, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()

def pre(inp):
    # decode the image sent by the browser into a PIL image, then crop/resize it
    im = gradio.preprocessing_utils.encoding_to_image(inp)
    im = gradio.preprocessing_utils.resize_and_crop(im, (299, 299))
    im = np.array(im).flatten()
    im = im * 1/127.5 - 1
    im = im + np.random.normal(0, 0.1, im.shape)  # Adding the noise
    array = im.reshape(1, 299, 299, 3)
    return array

inp = gradio.inputs.ImageUpload(preprocessing_fn=pre)
io = gradio.Interface(inputs=inp, outputs="label", model_type="keras", model=image_mdl)
io.launch()</code></pre>
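<p>Postprocessing can be overridden the same way on the output side. The following is only a sketch: the <code>postprocessing_fn</code> parameter name and the form of the prediction passed to it are assumptions that mirror <code>preprocessing_fn</code> above, not behavior confirmed on this page:</p>
<pre><code class="python">import gradio, tensorflow as tf
image_mdl = tf.keras.applications.inception_v3.InceptionV3()

# Hypothetical hook: assumes the raw model output (a 1x1000 probability array)
# is passed in, and that a {label: confidence} dict can be returned for display.
def post(prediction):
    top = int(prediction.argmax())
    return {str(top): float(prediction.max())}

out = gradio.outputs.Label(postprocessing_fn=post)  # parameter name is an assumption
io = gradio.Interface(inputs="imageupload", outputs=out, model_type="keras", model=image_mdl)
io.launch()</code></pre>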
<h1>Model Types</h1>
We currently support the following kinds of models:
<h3><code><span class="var">model_type</span>="sklearn"</code></h3>
<p>This allows you to pass in scikit-learn models and get predictions from the model. Here's a complete example of training an <code>sklearn</code> model and creating a <code>gradio</code> interface around it.
</p>

<pre><code class="python">from sklearn import datasets, svm
import gradio

digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))  # flatten the images

# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001)
classifier.fit(data, digits.target)

# The sklearn digits dataset is different from MNIST: it is 8x8 and consists of black digits on a white background.
inp = gradio.inputs.Sketchpad(shape=(8, 8), flatten=True, scale=16/255, invert_colors=False)
io = gradio.Interface(inputs=inp, outputs="label", model_type="sklearn", model=classifier)
io.launch()</code></pre>

<h3><code><span class="var">model_type</span>="keras"</code></h3>
<p>This allows you to pass in keras models and get predictions from the model. Here's a complete example of training a <code>keras</code> model and creating a <code>gradio</code> interface around it.
</p>

<pre><code class="python">import gradio, tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
loss, accuracy = model.evaluate(x_test, y_test)

io = gradio.Interface(inputs="sketchpad", outputs="label", model=model, model_type='keras')
io.launch(inline=True, share=True)</code></pre>

<p><a href="https://colab.research.google.com/drive/1DQSuxGARUZ-v4ZOAuw-Hf-8zqegpmes-">Run this code in a colab notebook</a> to see the interface embedded in the notebook.</p>
<h3><code><span class="var">model_type</span>="pytorch"</code></h3>
<p>This allows you to pass in pytorch models and get predictions from the model. Here's a complete example of training a <code>pytorch</code> model and creating a <code>gradio</code> interface around it.
</p>
<pre><code class="python">import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import gradio

# Device configuration
device = torch.device('cpu')

# Hyper-parameters
input_size = 784
hidden_size = 500
num_classes = 10
num_epochs = 2
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data', train=False, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Fully connected neural network with one hidden layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

inp = gradio.inputs.Sketchpad(flatten=True, scale=1/255, dtype='float32')
io = gradio.Interface(inputs=inp, outputs="label", model_type="pytorch", model=model)
io.launch()</code></pre>

<h3><code><span class="var">model_type</span>="pyfunc"</code></h3>
<p>This allows you to pass in an arbitrary python function and get the outputs from the function. Here's a very simple example of a "model" with a <code>gradio</code> interface around it.
</p>

<pre><code class="python">import gradio

# A very simplistic function that capitalizes each letter in the given string
def big(x):
    return x.upper()

io = gradio.Interface(inputs="textbox", outputs="textbox", model=big, model_type='pyfunc')
io.launch(inline=True, share=True)</code></pre>

<p>A more realistic example of the <code>pyfunc</code> use case is the following, where we would like to use a TensorFlow session with a trained model to make predictions. So we wrap the session inside a python function like this:</p>

<pre><code class="python">import tensorflow as tf
import gradio

n_classes = 10
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train.reshape(-1, 784) / 255.0, x_test.reshape(-1, 784) / 255.0
y_train = tf.keras.utils.to_categorical(y_train, n_classes).astype(float)
y_test = tf.keras.utils.to_categorical(y_test, n_classes).astype(float)

learning_rate = 0.5
epochs = 5
batch_size = 100

x = tf.placeholder(tf.float32, [None, 784], name="x")
y = tf.placeholder(tf.float32, [None, 10], name="y")

W1 = tf.Variable(tf.random_normal([784, 300], stddev=0.03), name='W1')
b1 = tf.Variable(tf.random_normal([300]), name='b1')
W2 = tf.Variable(tf.random_normal([300, 10], stddev=0.03), name='W2')
hidden_out = tf.add(tf.matmul(x, W1), b1)
hidden_out = tf.nn.relu(hidden_out)
y_ = tf.matmul(hidden_out, W2)

probs = tf.nn.softmax(y_)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y_, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
init_op = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess = tf.Session()
sess.run(init_op)
total_batch = int(len(y_train) / batch_size)
for epoch in range(epochs):
    avg_cost = 0
    for start, end in zip(range(0, len(y_train), batch_size), range(batch_size, len(y_train)+1, batch_size)):
        batch_x = x_train[start: end]
        batch_y = y_train[start: end]
        _, c = sess.run([optimizer, cross_entropy], feed_dict={x: batch_x, y: batch_y})
        avg_cost += c / total_batch

def predict(inp):
    return sess.run(probs, feed_dict={x: inp})

inp = gradio.inputs.Sketchpad(flatten=True)
io = gradio.Interface(inputs=inp, outputs="label", model_type="pyfunc", model=predict)
io.launch(inline=True, share=True)</code></pre>

<h1>Saliency Maps</h1>
<p>The <code>imageupload</code> interface also supports a saliency model, in which a heatmap is overlaid on top of the input image. This can be used to show feature attributions, e.g. as an interpretation method. The user supplies their own saliency function, which should take in three arguments: the model object, the input feature, and the input label. Here is an example of a saliency function and what it may produce:</p>
<pre><code class="python">import numpy as np
import tensorflow as tf
from deepexplain.tensorflow import DeepExplain
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential, Model
import gradio

model = tf.keras.applications.MobileNet()

def saliency(model, x, y):
    y = y.reshape(1, 1, 1, 1000)
    with DeepExplain(session=K.get_session()) as de:
        input_tensor = model.layers[0].input
        fModel = Model(inputs=input_tensor, outputs=model.layers[-3].output)
        target_tensor = fModel(input_tensor)

        attributions_gradin = de.explain('grad*input', target_tensor, input_tensor, x, ys=y)
    sal = np.sum(np.abs(attributions_gradin.squeeze()), axis=-1)
    sal = (sal - sal.min()) / (sal.max() - sal.min())
    return sal

inp = gradio.inputs.ImageUpload()
out = gradio.outputs.Label(label_names='imagenet1000', max_label_words=1, word_delimiter=",")

io = gradio.Interface(inputs=inp,
                      outputs=out,
                      model=model,
                      model_type='keras',
                      saliency=saliency)

io.launch()</code></pre>
<p>Which produces this:</p>
<img class="webcam" src="img/cheetah-saliency2.jpg" width="80%"/>

<h1>Launch Options</h1>
<p>When launching the interface, you have the option to pass in several boolean parameters that determine how the interface is displayed. Here is an example showing all of the possible parameters:</p>

<pre><code class="python">io.launch(inbrowser=True, inline=False, validate=False, share=True)</code></pre>

<p><code><span class="var">inbrowser</span></code> – whether the model should launch in a new browser window.<br>
<code><span class="var">inline</span></code> – whether the model should launch embedded in an interactive python environment (like jupyter notebooks or colab notebooks).<br>
<code><span class="var">validate</span></code> – whether gradio should try to validate the interface-model compatibility before launch.<br>
<code><span class="var">share</span></code> – whether a public link to share the model should be created.</p>

</div>
<footer>
<img src="img/logo_inline.png" />
</footer>
<script src="gradio/vendor/jquery.min.js"></script>
<script src="gradio/image_upload.js"></script>
<script src="gradio/label.js"></script>
<script src="gradio/vendor/cropper.js"></script>
<script src="https://unpkg.com/ml5@0.1.3/dist/ml5.min.js"></script>
<script src="js/models.js"></script>
</body>
</html>