Mirror of https://github.com/gradio-app/gradio.git, synced 2024-11-27 01:40:20 +08:00. Parent d0a54dba4b, commit 56f1843931. README.md (457)
[![CircleCI](https://circleci.com/gh/gradio-app/gradio.svg?style=svg)](https://circleci.com/gh/gradio-app/gradio) [![PyPI version](https://badge.fury.io/py/gradio.svg)](https://badge.fury.io/py/gradio) [![codecov](https://codecov.io/gh/gradio-app/gradio/branch/master/graph/badge.svg?token=NNVPX9KEGS)](https://codecov.io/gh/gradio-app/gradio) [![PyPI - Downloads](https://img.shields.io/pypi/dm/gradio)](https://pypi.org/project/gradio/) [![Twitter Follow](https://img.shields.io/twitter/follow/gradio.svg?style=social&label=Follow)](https://twitter.com/gradio)
# Gradio: Build Machine Learning Web Apps — in Python
Gradio (pronounced GRAY-dee-oh) is an open-source Python library that has been used to build hundreds of thousands of machine learning and data science demos.
With Gradio, you can quickly create beautiful user interfaces around your machine learning models and let people "try out" what you've built by dragging-and-dropping in their own images, pasting text, recording their own voice, and interacting with your demo through the browser.
![Interface montage](website/homepage/src/assets/img/meta-image-2.png)
Gradio is useful for:
**You can find an interactive version of the following Getting Started at [https://gradio.app/getting_started](https://gradio.app/getting_started).**
## Quickstart
**Prerequisite**: Gradio requires Python 3.7 or above. That's it!
### What Problem is Gradio Solving? 😲
One of the *best ways to share* your machine learning model, API, or data science workflow with others is to create an **interactive demo** that allows your users or colleagues to try out the demo in their browsers.
A web-based demo is great as it allows anyone who can use a browser (not just technical people) to intuitively try their own inputs and understand what you've built.
However, creating such web-based demos has traditionally been difficult, as you needed to know web hosting to serve the web app and web development (HTML, CSS, JavaScript) to build a GUI for your demo.
Gradio allows you to **build demos and share them, directly in Python.** And usually in just a few lines of code! So let's get started.
### Hello, World ⚡
To get Gradio running with a simple "Hello, World" example, follow these three steps:
```python
import gradio as gr


def greet(name):
    return "Hello " + name + "!!"


demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```
<span>3.</span> The demo below will appear automatically within the Python notebook, or pop in a browser on [http://localhost:7860](http://localhost:7860/) if running from a script.
![hello_world interface](demo/hello_world/screenshot.gif)
### The `Interface` class 🧡
You'll notice that in order to create the demo, we defined a `gradio.Interface` class. This `Interface` class can wrap almost any Python function with a user interface. In the example above, we saw a simple text-based function, but the function could be anything from a music generator to a tax calculator to (most commonly) the prediction function of a pretrained machine learning model.
The core `Interface` class is initialized with three required parameters:
- `fn`: the function to wrap a UI around
- `inputs`: which component(s) to use for the input, e.g. `"text"` or `"image"` or `"audio"`
- `outputs`: which component(s) to use for the output, e.g. `"text"`, `"image"`, or `"label"`
Gradio includes more than 20 different components, most of which can be used as inputs or outputs. ([See docs for complete list](/docs))
### Components Attributes 💻
With these three arguments to `Interface`, you can quickly create user interfaces and `launch()` them. But what if you want to change how the UI components look or behave?
Let's say you want to customize the input text field, for example, to make it larger and add a placeholder hint. If you use the actual `Textbox` class instead of the string shortcut, you have access to much more customizability through component attributes.
```python
import gradio as gr


def greet(name):
    return "Hello " + name + "!"


demo = gr.Interface(
    fn=greet,
    inputs=gr.Textbox(lines=2, placeholder="Name Here..."),
    outputs="text",
)

if __name__ == "__main__":
    demo.launch()
```
![hello_world_2 interface](demo/hello_world_2/screenshot.gif)
To see a list of all the components Gradio supports and what attributes you can use to customize them, check out the [Docs](https://gradio.app/docs).
### Multiple Inputs and Outputs 🔥
Let's say you had a much more complex function, with multiple inputs and outputs. In the example below, we define a function that takes a string, boolean, and number, and returns a string and number. Take a look at how you pass a list of input and output components.
```python
import gradio as gr


def greet(name, is_morning, temperature):
    # ... (function body elided in this diff)
    return greeting, round(celsius, 2)


demo = gr.Interface(
    fn=greet,
    inputs=["text", "checkbox", gr.Slider(0, 100)],
    outputs=["text", "number"],
)

if __name__ == "__main__":
    demo.launch()
```
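The body of `greet` is elided in this diff hunk. A hypothetical implementation, consistent with its inputs (string, boolean, number) and outputs (string, rounded number), might look like the following; the greeting text and the Fahrenheit-to-Celsius conversion are assumptions, not the demo's exact code:

```python
def greet(name, is_morning, temperature):
    # Hypothetical body: the original implementation is elided in the diff.
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = f"{salutation} {name}. It is {temperature} degrees today"
    # Assumed Fahrenheit-to-Celsius conversion for the second output.
    celsius = (temperature - 32) * 5 / 9
    return greeting, round(celsius, 2)
```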
![hello_world_3 interface](demo/hello_world_3/screenshot.gif)
You simply wrap the components in a list. Each component in the `inputs` list corresponds to one of the parameters of the function, in order. Each component in the `outputs` list corresponds to one of the values returned by the function, again in order.
### Images 🎨
Let's try an image-to-image function! When using the `Image` component, your function will receive a numpy array of your specified size, with the shape `(width, height, 3)`, where the last dimension represents the RGB values. We'll return an image as well in the form of a numpy array.
```python
import numpy as np

import gradio as gr


def sepia(input_img):
    # ... (sepia filter computation elided in this diff)
    return sepia_img


demo = gr.Interface(sepia, gr.Image(shape=(200, 200)), "image")

if __name__ == "__main__":
    demo.launch()
```
![sepia_filter interface](demo/sepia_filter/screenshot.gif)
Additionally, our `Image` input interface comes with an 'edit' button ✏️ which opens tools for cropping, flipping, rotating, drawing over, and applying filters to images. We've found that manipulating images in this way can help reveal biases or hidden flaws in a machine learning model!
In addition to images, Gradio supports other media types, such as audio or video. Read about these in the [Docs](https://gradio.app/docs).
### DataFrames and Graphs 📈
You can use Gradio to support inputs and outputs from your typical data libraries, such as numpy arrays, pandas dataframes, and plotly graphs. Take a look at the demo below (ignore the complicated data manipulation in the function!).
```python
def sales_projections(employee_data):
    # ... (imports and data manipulation elided in this diff)
    return employee_data, plt.gcf(), regression_values


demo = gr.Interface(
    sales_projections,
    gr.Dataframe(
        headers=["Name", "Jan Sales", "Feb Sales", "Mar Sales"],
        value=[["Jon", 12, 14, 18], ["Alice", 14, 17, 2], ["Sana", 8, 9.5, 12]],
    ),
    ["dataframe", "plot", "numpy"],
    description="Enter sales figures for employees to predict sales trajectory over year.",
)

if __name__ == "__main__":
    demo.launch()
```
![sales_projections interface](demo/sales_projections/screenshot.gif)
### Example Inputs 🦮
You can provide example data that a user can easily load into the model. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs).
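The **nested list** shape described above can be seen with plain data, using the calculator demo's inputs (number, operation string, number). The first row comes from the demo below; the other rows are illustrative:

```python
# Each sublist is one sample; its elements line up, in order,
# with the input components of the Interface.
examples = [
    [5, "add", 3],        # row shown in the calculator demo
    [4, "divide", 2],     # illustrative
    [10, "subtract", 1.5] # illustrative
]
```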
```python
import gradio as gr


def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        return num1 / num2


demo = gr.Interface(
    calculator,
    [gr.Number(4), gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    examples=[
        [5, "add", 3],
        # ... (additional example rows elided in this diff)
    ],
    flagging_options=["this", "or", "that"],
)

if __name__ == "__main__":
    demo.launch()
```
![calculator interface](demo/calculator/screenshot.gif)
You can load a large dataset into the examples to browse and interact with the dataset through Gradio. The examples will be automatically paginated (you can configure this through the `examples_per_page` argument of `Interface`).
### Live Interfaces 🪁
You can make interfaces automatically refresh by setting `live=True` in the interface. Now the interface will recalculate as soon as the user input changes.
```python
import gradio as gr


def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        return num1 / num2


demo = gr.Interface(
    calculator,
    ["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    live=True,
)

if __name__ == "__main__":
    demo.launch()
```
![calculator_live interface](demo/calculator_live/screenshot.gif)
Note there is no submit button, because the interface resubmits automatically on change.
### Flagging 🚩
Underneath the output interfaces, there is a "Flag" button. When a user testing your model sees input with interesting output, such as erroneous or unexpected model behaviour, they can flag the input for the interface creator to review. Within the directory provided by the `flagging_dir=` argument to the Interface constructor, a CSV file will log the flagged inputs. If the interface involves file data, such as for Image and Audio components, folders will be created to store those flagged data as well.
For example, with the calculator interface shown above, we would have the flagged data stored in the flagged directory shown below:
You can review these flagged inputs by manually exploring the flagging directory, or load them into the examples of the Gradio interface by pointing the `examples=` argument to the flagged directory. If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of the strings when flagging, which will be saved as an additional column to the CSV.
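The flag log is an ordinary CSV file, so it can also be read back programmatically with the standard library. A sketch, where the column names and file paths are hypothetical stand-ins for whatever your components produce:

```python
import csv
import io

# Hypothetical log contents: a header row, then one row per flagged sample,
# with file paths for file-based components such as Image.
log_text = "im,Output\nim/0.png,Output/0.png\nim/1.png,Output/1.png\n"

rows = list(csv.DictReader(io.StringIO(log_text)))
# In practice you would open the CSV inside your `flagging_dir` instead.
```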
### Blocks: More Flexibility and Control 🧱
Gradio offers two APIs to users: (1) **Interface**, a high-level abstraction for creating demos (which we've been discussing so far), and (2) **Blocks**, a low-level API for designing web apps with more flexible layouts and data flows. Blocks allows you to do things like: group together related demos, change where components appear on the page, handle complex data flows (e.g. outputs can serve as inputs to other functions), and update the properties or visibility of components based on user interaction, all still in Python.
As an example, Blocks uses nested `with` statements in Python to lay out components on a page, like this:
```python
import numpy as np

import gradio as gr

demo = gr.Blocks()


def flip_text(x):
    return x[::-1]


def flip_image(x):
    return np.fliplr(x)


with demo:
    gr.Markdown("Flip text or image files using this demo.")
    with gr.Tabs():
        with gr.TabItem("Flip Text"):
            text_input = gr.Textbox()
            text_output = gr.Textbox()
            text_button = gr.Button("Flip")
        with gr.TabItem("Flip Image"):
            with gr.Row():
                image_input = gr.Image()
                image_output = gr.Image()
            image_button = gr.Button("Flip")

    text_button.click(flip_text, inputs=text_input, outputs=text_output)
    image_button.click(flip_image, inputs=image_input, outputs=image_output)

if __name__ == "__main__":
    demo.launch()
```
![blocks_flipper interface](demo/blocks_flipper/screenshot.gif)
If you are interested in how Blocks works, [read its dedicated Guide](introduction_to_blocks).
### Sharing Demos 🌎
Gradio demos can be easily shared publicly by setting `share=True` in the `launch()` method. Like this:
```python
gr.Interface(classify_image, "image", "label").launch(share=True)
```
This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on!), you don't have to worry about packaging any dependencies. A share link usually looks something like this: **XXXXX.gradio.app**. Although the link is served through a Gradio URL, we are only a proxy for your local server, and do not store any data sent through the interfaces.
Keep in mind, however, that these links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. If you set `share=False` (the default, except in colab notebooks), only a local link is created, which can be shared by [port-forwarding](https://www.ssh.com/ssh/tunneling/example) with specific users.
Share links expire after 72 hours. For permanent hosting, see the next section.
![Sharing diagram](website/homepage/src/assets/img/sharing.svg)
### Hosting Gradio Demos on Spaces 🤗
If you'd like a permanent link to your Gradio demo on the internet, use Hugging Face Spaces. Hugging Face Spaces provides the infrastructure to permanently host your machine learning model for free!
You can either drag and drop a folder containing your Gradio model and all related files, or you can point Spaces to your Git repository and Spaces will pull the Gradio interface from there. See [Hugging Face Spaces](http://huggingface.co/spaces/) for more information.
![Hosting Demo](website/homepage/src/assets/img/hf_demo.gif)
## Advanced Features
<span id="advanced-features"></span>
Here, we go through several advanced functionalities that your Gradio demo can include without you needing to write much more code!
### Authentication
You may wish to put an authentication page in front of your interface to limit who can open it. Pass the `auth=` keyword argument to the `launch()` method with a list of acceptable username/password tuples. For more complex authentication handling, you can instead pass a function that takes a username and password as arguments and returns `True` to allow access and `False` otherwise. Here's an example that provides password-based authentication for a single user named "admin":
```python
gr.Interface(fn=classify_image, inputs=image, outputs=label).launch(auth=("admin", "pass1234"))
```
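For the function-based variant described above, a minimal sketch; `check_login` is a name chosen here for illustration, and the hard-coded credential check is a stand-in for your own logic:

```python
def check_login(username, password):
    # Stand-in credential check; replace with a real lookup in practice.
    return username == "admin" and password == "pass1234"


# The function is passed in place of the (username, password) tuple:
# gr.Interface(fn=classify_image, inputs=image, outputs=label).launch(auth=check_login)
```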
### Interpreting your Predictions
Most models are black boxes, such that the internal logic of the function is hidden from the end user. To encourage transparency, we've made it very easy to add interpretation to your model by simply setting the `interpretation` keyword in the `Interface` class to `"default"`. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below, which shows an image classifier that also includes interpretation:
```python
import requests
import tensorflow as tf

import gradio as gr

inception_net = tf.keras.applications.MobileNetV2()  # load the model

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")


def classify_image(inp):
    inp = inp.reshape((-1, 224, 224, 3))
    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
    prediction = inception_net.predict(inp).flatten()
    return {labels[i]: float(prediction[i]) for i in range(1000)}


image = gr.Image(shape=(224, 224))
label = gr.Label(num_top_classes=3)

gr.Interface(
    fn=classify_image, inputs=image, outputs=label, interpretation="default"
).launch()
```
In addition to `default`, Gradio also includes [Shapley-based interpretation](https://christophm.github.io/interpretable-ml-book/shap.html), which provides more accurate interpretations, albeit usually with a slower runtime. To use this, simply set the `interpretation` parameter to `"shap"` (note: also make sure the Python package `shap` is installed). Optionally, you can modify the `num_shap` parameter, which controls the tradeoff between accuracy and runtime (increasing this value generally increases accuracy). Here is an example:
```python
gr.Interface(fn=classify_image, inputs=image, outputs=label, interpretation="shap", num_shap=5).launch()
```
This will work for any function, even if internally the model is a complex neural network or some other black box. If you use Gradio's `default` or `shap` interpretation, the output component must be a `Label`. All common input components are supported. Here is an example with text input.
```python
import re

import gradio as gr

male_words, female_words = ["he", "his", "him"], ["she", "hers", "her"]


def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len(
        [word for word in sentence.split() if word.lower() in female_words]
    )
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}


demo = gr.Interface(
    fn=gender_of_sentence,
    inputs=gr.Textbox(value="She went to his house to get her keys."),
    outputs="label",
    interpretation="default",
)

if __name__ == "__main__":
    demo.launch()
```
So what is happening under the hood? With these interpretation methods, Gradio runs the prediction multiple times with modified versions of the input. Based on the results, the interface automatically highlights the parts of the text (or image, etc.) that increased the likelihood of the predicted class in red. The intensity of color corresponds to the importance of that part of the input. The parts that decrease the class confidence are highlighted in blue.
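As an illustration of that idea (a toy sketch, not Gradio's actual implementation): drop each word in turn, re-run the prediction, and score the word by how much the confidence changes. Here `predict_male_score` is a hypothetical stand-in for the wrapped function:

```python
def predict_male_score(sentence):
    # Toy stand-in for a model: fraction of male pronouns in the sentence.
    male_words = ["he", "his", "him"]
    words = sentence.lower().split()
    return sum(w in male_words for w in words) / max(len(words), 1)


def leave_one_out(sentence):
    # Score each word by how much removing it changes the prediction.
    base = predict_male_score(sentence)
    words = sentence.split()
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - predict_male_score(reduced)))
    return scores
```

A positive score means the word supported the prediction (highlighted red); a negative score means it worked against it (highlighted blue).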
You can also write your own interpretation function. The demo below adds custom interpretation to the previous demo. This function takes the same inputs as the main wrapped function. The output of this interpretation function is used to highlight the input of each input interface; therefore, the number of outputs here corresponds to the number of input interfaces. To see the format of the interpretation for each input interface, check the Docs.
```python
import re

import gradio as gr

male_words, female_words = ["he", "his", "him"], ["she", "hers", "her"]


def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len(
        [word for word in sentence.split() if word.lower() in female_words]
    )
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}


def interpret_gender(sentence):
    result = gender_of_sentence(sentence)
    is_male = result["male"] > result["female"]
    interpretation = []
    for word in re.split("( )", sentence):
        score = 0
        token = word.lower()
        if (is_male and token in male_words) or (not is_male and token in female_words):
            score = 1
        elif (is_male and token in female_words) or (
            not is_male and token in male_words
        ):
            score = -1
        interpretation.append((word, score))
    return interpretation


demo = gr.Interface(
    fn=gender_of_sentence,
    inputs=gr.Textbox(value="She went to his house to get her keys."),
    outputs="label",
    interpretation=interpret_gender,
    enable_queue=True,
)

if __name__ == "__main__":
    demo.launch()
```
|
||||
|
||||
### Themes and Custom Styling

If you'd like to change how your interface looks, you can select a different theme by simply passing in the `theme` parameter, like so:

```python
gr.Interface(fn=classify_image, inputs=image, outputs=label, theme="huggingface").launch()
```

Here are the themes we currently support: `"default"`, `"huggingface"`, `"grass"`, `"peach"`, and the dark themes corresponding to each of these: `"darkdefault"`, `"darkhuggingface"`, `"darkgrass"`, `"darkpeach"`.

If you'd like more fine-grained control over any aspect of the app, you can also write your own CSS or pass in a CSS file via the `css` parameter of the `Interface` class.

### Custom Flagging Options

In some cases, you might like to provide your users or testers with *more* than just a binary option to flag a sample. You can provide `flagging_options` that they select from a dropdown each time they click the flag button, letting them give more detailed feedback every time they flag a sample.

Here's an example:

```python
gr.Interface(fn=classify_image, inputs=image, outputs=label, flagging_options=["incorrect", "ambiguous", "offensive", "other"]).launch()
```

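Flagged samples are logged to disk (by default as CSV under a `flagged/` directory), with one column per input and output plus the selected flag option, so you can tally the feedback afterward with the standard library. A sketch, using a hypothetical log whose column names are illustrative and depend on your interface's labels:

```python
import csv
import io
from collections import Counter

# Hypothetical contents of a flagging log; the real columns match
# your interface's input/output labels.
sample_log = """image,label,flag
cat1.png,dog,incorrect
cat2.png,cat or dog,ambiguous
cat3.png,dog,incorrect
"""

rows = list(csv.DictReader(io.StringIO(sample_log)))
counts = Counter(row["flag"] for row in rows)  # e.g. how often each option was chosen
```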
### Loading Hugging Face Models and Spaces

Gradio integrates nicely with the Hugging Face Hub, allowing you to load models and Spaces with just one line of code. To do this, use the `load()` method of the `Interface` class:

- To load any model from the Hugging Face Hub and create an interface around it, you pass `"model/"` or `"huggingface/"` followed by the model name, like these examples:

```python
gr.Interface.load("huggingface/gpt2").launch()
```

```python
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
    inputs=gr.inputs.Textbox(lines=5, label="Input Text")  # customizes the input component
).launch()
```

- To load any Space from the Hugging Face Hub and recreate it locally (so that you can customize the inputs and outputs, for example), you pass `"spaces/"` followed by the Space's name:

```python
gr.Interface.load("spaces/eugenesiow/remove-bg", inputs="webcam", title="Remove your webcam background!").launch()
```

One of the great things about loading Hugging Face models or Spaces using Gradio is that you can then immediately use the resulting `Interface` object just like a function in your Python code (this works for every type of model/Space: text, images, audio, video, and even multimodal models):

```python
io = gr.Interface.load("models/EleutherAI/gpt-neo-2.7B")
io("It was the best of times")  # outputs model completion
```

### Putting Interfaces in Parallel and Series

Gradio also lets you mix interfaces very easily using the `gradio.Parallel` and `gradio.Series` classes. `Parallel` lets you put two or more models side by side (as long as they have the same input type) to compare their predictions:

```python
generator1 = gr.Interface.load("huggingface/gpt2")
generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")

gr.Parallel(generator1, generator2, generator3).launch()
```

`Series` lets you put models and Spaces in series, piping the output of one model into the input of the next.

```python
generator = gr.Interface.load("huggingface/gpt2")
translator = gr.Interface.load("huggingface/t5-small")

gr.Series(generator, translator).launch()  # this demo generates text, then translates it to German, and outputs the final result.
```

And of course, you can also mix `Parallel` and `Series` together whenever that makes sense!

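Conceptually, `Series` is just function composition: each interface's output feeds the next one's input. A plain-Python sketch of the idea, with toy functions standing in for the loaded models:

```python
def series(*fns):
    # Pipe the output of each function into the next, left to right
    def piped(x):
        for fn in fns:
            x = fn(x)
        return x
    return piped


# Toy stand-ins for a generator and a second processing step
generate = lambda prompt: prompt + " ... and so it goes."
shout = lambda text: text.upper()

pipeline = series(generate, shout)
pipeline("hello")  # the prompt is extended first, then uppercased
```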
### Queuing to Manage Long Inference Times

If many people are using your interface, or if the inference time of your function is long (> 1 min), simply set the `enable_queue` parameter in the `launch()` method to `True` to prevent timeouts.

```python
gr.Interface(fn=classify_image, inputs=image, outputs=label).launch(enable_queue=True)
```

This sets up a queue of workers to handle the predictions and return responses to the front end. This is strongly recommended if you are planning on uploading your demo to Hugging Face Spaces (as described above) so that your demo can handle a large number of simultaneous users.

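The idea behind the worker queue is a classic pattern: requests wait in line and a fixed pool of workers runs the predictions. A minimal sketch with the standard library (not Gradio's actual implementation, which queues requests over HTTP):

```python
import queue
import threading


def run_with_queue(fn, requests, num_workers=2):
    tasks = queue.Queue()
    results = {}

    def worker():
        while True:
            item = tasks.get()
            if item is None:  # sentinel: no more work
                break
            index, payload = item
            results[index] = fn(payload)  # run the (possibly slow) prediction

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for item in enumerate(requests):  # enqueue (index, request) pairs
        tasks.put(item)
    for _ in workers:  # one sentinel per worker
        tasks.put(None)
    for w in workers:
        w.join()
    return [results[i] for i in range(len(requests))]
```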
### Next Steps

Now that you're familiar with the basics of Gradio, here are some good next steps:

* Check out [the free Gradio course](https://huggingface.co/course/chapter9/1) for a step-by-step walkthrough of everything Gradio-related with lots of examples of how to build your own machine learning demos 📖
* Gradio offers two APIs to users: **Interface**, a high-level abstraction covered in this guide, and **Blocks**, a more flexible API for designing web apps with custom layouts and data flows. [Read more about Blocks here](/introduction_to_blocks/) 🧱
* If you'd like to stick with **Interface** but want to add more advanced features to your demo (like authentication, interpretation, or state), check out our guide on [advanced features with the Interface class](/advanced_interface_features) 💪
* If you just want to explore what demos other people have built with Gradio, [browse public Hugging Face Spaces](http://hf.space/), view the underlying Python code, and be inspired 🤗