Commit d159f982a3: "merge conflicts"
aliabd, 2021-12-23 14:55:31 +04:00
271 changed files with 37748 additions and 4327 deletions

@@ -18,7 +18,7 @@ jobs:
. venv/bin/activate
pip install --upgrade pip
pip install -r gradio.egg-info/requires.txt
pip install shap IPython
pip install shap IPython comet_ml wandb mlflow tensorflow transformers
pip install selenium==4.0.0a6.post2 coverage scikit-image
- run:
command: |

.dockerignore Normal file
@@ -0,0 +1,39 @@
# Python build
.eggs/
gradio.egg-info/*
!gradio.egg-info/requires.txt
!gradio.egg-info/PKG-INFO
dist/
*.pyc
__pycache__/
*.py[cod]
*$py.class
build/
# JS build
gradio/templates/frontend/static
# Secrets
.env
# Gradio run artifacts
*.db
*.sqlite3
gradio/launches.json
# Tests
.coverage
coverage.xml
test.txt
# Demos
demo/tmp.zip
demo/flagged
demo/files/*.avi
demo/files/*.mp4
# Etc
.idea/*
.DS_Store
*.bak
workspace.code-workspace

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
@@ -0,0 +1,29 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Device information (please complete the following information):**
- OS: [e.g. Windows or iOS]
- Browser [e.g. chrome, safari]
- Gradio version [e.g. 2.5.1]
**Additional context**
Add any other context about the problem here.

@@ -0,0 +1,17 @@
---
name: Feature request
about: Suggest an improvement or new feature for Gradio
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context or screenshots about the feature request here.

.gitignore vendored
@@ -1,40 +1,40 @@
venv
dist/
# Python build
.eggs/
gradio.egg-info/*
!gradio.egg-info/requires.txt
!gradio.egg-info/PKG-INFO
dist/
*.pyc
staticfiles
.env
.coverage
coverage.xml
*.sqlite3
.idea/*
*.ipynb
.ipynb_checkpoints/*
models/*
.models/*
gradio_files/*
ngrok*
examples/ngrok*
gradio-flagged/*
.DS_Store
__pycache__/
*.py[cod]
*$py.class
demo/models/*
*.h5
docs.json
*.bak
demo/tmp.zip
demo/flagged
test.txt
build/
flagged/
gradio/launches.json
# JS build
gradio/templates/frontend/static
workspace.code-workspace
# Secrets
.env
# Gradio run artifacts
*.db
*.sqlite3
gradio/launches.json
flagged
# Tests
.coverage
coverage.xml
test.txt
# Demos
demo/tmp.zip
demo/files/*.avi
demo/files/*.mp4
# Etc
.idea/*
.DS_Store
*.bak
workspace.code-workspace
*.h5

@@ -1,17 +1,27 @@
# Contributing to Gradio UI
You can start by forking or cloning the repo (https://github.com/gradio-app/gradio-UI.git) and creating your own branch to work from. All PRs must pass the continuous integration
tests and receive approval from a member of the Gradio UI development team before they will be merged.
# Contributing to Gradio
You can start by forking or cloning the repo (https://github.com/gradio-app/gradio.git) and creating your own branch to work from. All PRs must pass the continuous integration
tests and receive approval from a member of the Gradio development team before they will be merged.
### Structure of the Repository
It's helpful to know the overall structure of the repository so that you can focus on the part of the source code you'd like to contribute to:
* `/gradio`: contains the source code for the actual Python library
* `/gradio/interface.py`: contains the source code for the core `Interface` class
* `/test`: contains unit tests for the Python library
* `/website`: contains the code for the Gradio website (www.gradio.app). See the README in the `/website` folder for more details
### Continuous Integration and Testing
All PRs must pass the continuous integration tests before merging. To test locally, you can run `python3 -m unittest`.
### Submitting PRs
All PRs should be against `master`. Direct commits to master are blocked, and PRs require an approving review
to merge into master. By convention, the Gradio UI maintainers will review PRs when:
* An initial review has been requested
* A maintainer is tagged in the PR comments and asked to complete a review
to merge into master. By convention, the Gradio maintainers will review PRs when:
* An initial review has been requested, and
* A maintainer (@abidlabs, @aliabid94, @aliabd, @AK391, or @dawoodkhan82) is tagged in the PR comments and asked to complete a review
We ask that you make sure initial CI checks are passing before requesting a review.
One of the Gradio UI maintainers will merge the PR when all the checks are passing.
One of the Gradio maintainers will merge the PR when all the checks are passing.

README.md
@@ -1,22 +1,23 @@
[![CircleCI](https://circleci.com/gh/gradio-app/gradio.svg?style=svg)](https://circleci.com/gh/gradio-app/gradio) [![PyPI version](https://badge.fury.io/py/gradio.svg)](https://badge.fury.io/py/gradio) [![codecov](https://codecov.io/gh/gradio-app/gradio/branch/master/graph/badge.svg?token=NNVPX9KEGS)](https://codecov.io/gh/gradio-app/gradio) [![PyPI - Downloads](https://img.shields.io/pypi/dm/gradio)](https://pypi.org/project/gradio/) [![Twitter Follow](https://img.shields.io/twitter/follow/gradio.svg?style=social&label=Follow)](https://twitter.com/gradio)
# Welcome to Gradio
Quickly create a GUI around your machine learning model, API, or function. Gradio makes it easy for you to "play around" with your model in your browser by dragging-and-dropping in your own images, pasting your own text, recording your own voice, etc. and seeing what the model outputs.
Quickly create customizable UI components around your models. Gradio makes it easy for you to "play around" with your model in your browser by dragging-and-dropping in your own images, pasting your own text, recording your own voice, etc. and seeing what the model outputs.
![Interface montage](demo/screenshots/montage.gif)
![Interface montage](website/homepage/src/assets/img/montage.gif)
Gradio is useful for:
* **Demoing** your machine learning models for clients / collaborators / users / students
* **Deploying** your models quickly with automatic shareable links and getting feedback on model performance
* **Debugging** your model interactively during development using built-in interpretation visualizations for any model
**You can find an interactive version of the following Getting Started at [https://gradio.app/getting_started](https://gradio.app/getting_started).**
## Getting Started
You can find an interactive version of this README at [https://gradio.app/getting_started](https://gradio.app/getting_started).
### Quick Start
@@ -43,11 +44,11 @@ iface.launch()
<span>3.</span> The interface below will appear automatically within the Python notebook, or pop in a browser on [http://localhost:7860](http://localhost:7860/) if running from a script.
![hello_world interface](demo/screenshots/hello_world/1.gif)
![hello_world interface](demo/hello_world/screenshot.gif)
### The Interface
Gradio can wrap almost any Python function with an easy to use interface. That function could be anything from a simple tax calculator to a pretrained model.
Gradio can wrap almost any Python function with an easy-to-use user interface. That function could be anything from a simple tax calculator to a pretrained machine learning model.
The core `Interface` class is initialized with three parameters:
@@ -59,7 +60,7 @@ With these three arguments, we can quickly create interfaces and `launch()` th
### Customizable Components
What if we wanted to customize the input text field - for example, we wanted it to be larger and have a text hint? If we use the actual input class for `Textbox` instead of using the string shortcut, we have access to much more customizability. To see a list of all the components we support and how you can customize them, check out the [Docs](https://gradio.app/docs)
Let's say we want to customize the input text field - for example, we want it to be larger and have a text hint. If we use the actual input class for `Textbox` instead of using the string shortcut, we have access to much more customizability. To see a list of all the components we support and how you can customize them, check out the [Docs](https://gradio.app/docs).
```python
import gradio as gr
@@ -73,7 +74,7 @@ iface = gr.Interface(
outputs="text")
iface.launch()
```
![hello_world_2 interface](demo/screenshots/hello_world_2/1.gif)
![hello_world_2 interface](demo/hello_world_2/screenshot.gif)
### Multiple Inputs and Outputs
@@ -95,13 +96,13 @@ iface = gr.Interface(
outputs=["text", "number"])
iface.launch()
```
![hello_world_3 interface](demo/screenshots/hello_world_3/1.gif)
![hello_world_3 interface](demo/hello_world_3/screenshot.gif)
We simply wrap the components in a list. Furthermore, if we wanted to compare multiple functions that have the same input and return types, we can even pass a list of functions for quick comparison.
We simply wrap the components in a list. Each component in the `inputs` list corresponds to one of the parameters of the function, in order. Each component in the `outputs` list corresponds to one of the values returned by the function, again in order.
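For reference, the example this hunk abbreviates can be reconstructed as the self-contained sketch below; the body of `greet` and the slider range are assumptions, since the diff elides the unchanged lines:

```python
import gradio as gr

def greet(name, is_morning, temperature):
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = "%s %s. It is %s degrees today" % (salutation, name, temperature)
    celsius = (temperature - 32) * 5 / 9
    return greeting, round(celsius, 2)

iface = gr.Interface(
    fn=greet,
    # one input component per function parameter, in order
    inputs=["text", "checkbox", gr.inputs.Slider(0, 100)],
    # one output component per returned value, in order
    outputs=["text", "number"])
iface.launch()
```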
### Working with Images
Let's try an image to image function. When using the `Image` component, your function will receive a numpy array of your specified size, with the shape `(width, height, 3)`, where the last dimension represents the RGB values. We'll return an image as well in the form of a numpy array.
Let's try an image-to-image function. When using the `Image` component, your function will receive a numpy array of your specified size, with the shape `(width, height, 3)`, where the last dimension represents the RGB values. We'll return an image as well in the form of a numpy array.
```python
import gradio as gr
@@ -119,13 +120,13 @@ iface = gr.Interface(sepia, gr.inputs.Image(shape=(200, 200)), "image")
iface.launch()
```
![sepia_filter interface](demo/screenshots/sepia_filter/1.gif)
![sepia_filter interface](demo/sepia_filter/screenshot.gif)
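The hunk above collapses the body of `sepia`; a minimal sketch of such an image-to-image function might look like this (the filter coefficients are the standard sepia matrix, an assumption on our part):

```python
import gradio as gr
import numpy as np

def sepia(input_img):
    # input_img arrives as a (width, height, 3) numpy array, per the prose above
    sepia_filter = np.array([[0.393, 0.769, 0.189],
                             [0.349, 0.686, 0.168],
                             [0.272, 0.534, 0.131]])
    sepia_img = input_img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()  # normalize back into a displayable range
    return sepia_img

iface = gr.Interface(sepia, gr.inputs.Image(shape=(200, 200)), "image")
iface.launch()
```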
Additionally, our `Image` input interface comes with an 'edit' button which opens tools for cropping, flipping, rotating, drawing over, and applying filters to images. We've found that manipulating images in this way will often reveal hidden flaws in a model.
In addition to images, Gradio supports other media input types, such as audio or video uploads. Read about these in the [Docs](https://gradio.app/docs).
In addition to images, Gradio supports other media input types, such as audio or video uploads, as well as many output components. Read about these in the [Docs](https://gradio.app/docs).
### Working with Data
### Working with DataFrames and Graphs
You can use Gradio to support inputs and outputs from your typical data libraries, such as numpy arrays, pandas dataframes, and plotly graphs. Take a look at the demo below (ignore the complicated data manipulation in the function!)
@@ -163,7 +164,7 @@ iface = gr.Interface(sales_projections,
iface.launch()
```
![sales_projections interface](demo/screenshots/sales_projections/1.gif)
![sales_projections interface](demo/sales_projections/screenshot.gif)
### Example Inputs
@@ -199,13 +200,13 @@ iface = gr.Interface(calculator,
iface.launch()
```
![calculator interface](demo/screenshots/calculator/1.gif)
![calculator interface](demo/calculator/screenshot.gif)
You can load a large dataset into the examples to browse and interact with the dataset through Gradio. The examples will be automatically paginated (you can configure this through the `examples_per_page` argument of Interface) and you can use CTRL + arrow keys to navigate through the examples quickly.
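As a sketch of that pagination knob (the calculator body is assumed; only `examples_per_page` being an `Interface` argument comes from the text above):

```python
import gradio as gr

def calculator(num1, operation, num2):
    # assumed toy implementation
    if operation == "add":
        return num1 + num2
    if operation == "subtract":
        return num1 - num2
    if operation == "multiply":
        return num1 * num2
    return num1 / num2

iface = gr.Interface(
    calculator,
    [gr.inputs.Number(), gr.inputs.Radio(["add", "subtract", "multiply", "divide"]), gr.inputs.Number()],
    "number",
    examples=[[5, "add", 3], [4, "divide", 2], [-4, "multiply", 2.5], [0, "subtract", 1.2]],
    examples_per_page=2)  # show two example rows per page
iface.launch()
```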
### Live Interfaces
You can make interfaces automatically responsive by setting `live=True` in the interface. Now the interface will recalculate as soon as the user input.
You can make interfaces automatically refresh by setting `live=True` in the interface. Now the interface will recalculate as soon as the user input changes.
```python
import gradio as gr
@@ -229,7 +230,7 @@ iface = gr.Interface(calculator,
iface.launch()
```
![calculator_live interface](demo/screenshots/calculator_live/1.gif)
![calculator_live interface](demo/calculator_live/screenshot.gif)
Note there is no submit button, because the interface resubmits automatically on change.
@@ -270,7 +271,7 @@ iface = gr.Interface(chat, "text", "html", css="""
""", allow_screenshot=False, allow_flagging=False)
iface.launch()
```
![chatbot interface](demo/screenshots/chatbot/1.gif)
![chatbot interface](demo/chatbot/screenshot.gif)
Notice how the state persists across submits within each page, but the state is not shared between the two pages.
@@ -316,7 +317,7 @@ im/1.png,Output/1.png
You can review these flagged inputs by manually exploring the flagging directory, or load them into the examples of the Gradio interface by pointing the `examples=` argument to the flagged directory. If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of the strings when flagging, which will be saved as an additional column to the CSV.
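A sketch combining the two options just described; the directory path and the option strings are assumptions:

```python
import gradio as gr

gr.Interface(classify_image, "image", "label",
             examples="flagged",  # assumption: path to the directory of flagged samples
             # users must pick one of these reasons when they click "flag"
             flagging_options=["wrong label", "unclear input", "other"]).launch()
```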
### Sharing Interfaces Publicly & Privacy
### Sharing Interfaces Publicly
Interfaces can be easily shared publicly by setting `share=True` in the `launch()` method. Like this:
@@ -324,27 +325,68 @@ Interfaces can be easily shared publicly by setting `share=True` in the `launch(
gr.Interface(classify_image, "image", "label").launch(share=True)
```
This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on!), you don't have to worry about any dependencies. If you're working out of colab notebook, a share link is always automatically created. It usually looks something like this: **XXXXX.gradio.app**. Although the link is served through a gradio link, we are only a proxy for your local server, and do not store any data sent through the interfaces.
This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on!), you don't have to worry about packaging any dependencies. If you're working out of a colab notebook, a share link is always automatically created. It usually looks something like this: **XXXXX.gradio.app**. Although the link is served through a gradio link, we are only a proxy for your local server, and do not store any data sent through the interfaces.
Keep in mind, however, that these links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. If you set `share=False` (the default), only a local link is created, which can be shared by [port-forwarding](https://www.ssh.com/ssh/tunneling/example) with specific users.
Share links expire after 72 hours. For permanent hosting, see below.
Share links expire after 72 hours. For permanent hosting, see Hosting Gradio Apps on Spaces below.
![Sharing diagram](demo/images/sharing.svg)
![Sharing diagram](/assets/img/sharing.svg)
### Hosting Gradio Apps on Spaces
Hugging Face provides the infrastructure to permanently host your Gradio model on the internet, for free! You can either drag and drop a folder containing your Gradio model and all related files, or you can point HF Spaces to your Git repository and HF Spaces will pull the Gradio interface from there. See [Hugging Face Spaces](http://huggingface.co/spaces/) for more information.
![Hosting Demo](/assets/img/hf_demo.gif)
## Advanced Features
<span id="advanced-features"></span>
Here, we go through several advanced functionalities that your Gradio demo can include without you needing to write much more code!
### Authentication
You may wish to put an authentication page in front of your interface to limit access. With the `auth=` keyword argument in the `launch()` method, you can pass a list of acceptable username/password tuples; or, for custom authentication handling, pass a function that takes a username and password as arguments, and returns True to allow authentication, False otherwise.
You may wish to put an authentication page in front of your interface to limit who can open your interface. With the `auth=` keyword argument in the `launch()` method, you can pass a list of acceptable username/password tuples; or, for more complex authentication handling, you can even pass a function that takes a username and password as arguments, and returns True to allow authentication, False otherwise. Here's an example that provides password-based authentication for a single user named "admin":
### Permanent Hosting
```python
gr.Interface(fn=classify_image, inputs=image, outputs=label).launch(auth=("admin", "pass1234"))
```
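And a sketch of the callable form of `auth` described above (the matching rule here is an arbitrary assumption):

```python
def same_auth(username, password):
    # assumption: accept any login where the password equals the username
    return username == password

gr.Interface(fn=classify_image, inputs=image, outputs=label).launch(auth=same_auth)
```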
You can share your interface publicly and permanently by hosting on Gradio's infrastructure. You will need to create a Gradio premium account. First, log into Gradio on [gradio.app](https://gradio.app) and click Sign In at the top. Once you've logged in with your Github account, you can specify which repositories from your Github profile you'd like to have hosted by Gradio. You must also specify the file within the repository that runs the Gradio `launch()` command. Once you've taken these steps, Gradio will launch your interface and provide a public link you can share.
### Interpreting your Predictions
## Advanced Features
Most models are black boxes such that the internal logic of the function is hidden from the end user. To encourage transparency, we've made it very easy to add interpretation to your model by simply setting the `interpretation` keyword in the `Interface` class to `default`. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below which shows an image classifier that also includes interpretation:
### Interpretation
```python
import gradio as gr
import tensorflow as tf
import requests
Most models are black boxes such that the internal logic of the function is hidden from the end user. To encourage transparency, we've added the ability for interpretation so that users can understand what parts of the input are responsible for the output. Take a look at the simple interface below:
inception_net = tf.keras.applications.MobileNetV2() # load the model
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def classify_image(inp):
inp = inp.reshape((-1, 224, 224, 3))
inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
prediction = inception_net.predict(inp).flatten()
return {labels[i]: float(prediction[i]) for i in range(1000)}
image = gr.inputs.Image(shape=(224, 224))
label = gr.outputs.Label(num_top_classes=3)
gr.Interface(fn=classify_image, inputs=image, outputs=label, interpretation="default").launch()
```
In addition to `default`, Gradio also includes [Shapley-based interpretation](https://christophm.github.io/interpretable-ml-book/shap.html), which provides more accurate interpretations, albeit usually with a slower runtime. To use this, simply set the `interpretation` parameter to `"shap"` (note: also make sure the python package `shap` is installed). Optionally, you can modify the `num_shap` parameter, which controls the tradeoff between accuracy and runtime (increasing this value generally increases accuracy). Here is an example:
```python
gr.Interface(fn=classify_image, inputs=image, outputs=label, interpretation="shap", num_shap=5).launch()
```
This will work for any function, even if the model is internally a complex neural network or some other black box. If you use Gradio's `default` or `shap` interpretation, the output component must be a `Label`. All common input components are supported. Here is an example with text input.
```python
import gradio as gr
@@ -363,11 +405,10 @@ iface = gr.Interface(
iface.launch()
```
![gender_sentence_default_interpretation interface](demo/screenshots/gender_sentence_default_interpretation/1.gif)
Notice the `interpretation` keyword argument. We're going to use Gradio's default interpreter here. After you submit and click Interpret, you'll see the interface automatically highlights the parts of the text that contributed to the final output orange! The parts that conflict with the output are highlight blue.
So what is happening under the hood? With these interpretation methods, Gradio runs the prediction multiple times with modified versions of the input. Based on the results, you'll see that the interface automatically highlights the parts of the text (or image, etc.) that increased the likelihood of the class in red. The intensity of color corresponds to the importance of that part of the input. The parts that decrease the class confidence are highlighted blue.
You can also write your own interpretation function. The demo below adds custom interpretation to the previous demo. This function will take the same inputs as the main wrapped function. The output of this interpretation function will be used to highlight the input of each input interface - therefore the number of outputs here corresponds to the number of input interfaces. To see the format for interpretation for each input interface, check the [Docs](https://gradio.app/docs).
You can also write your own interpretation function. The demo below adds custom interpretation to the previous demo. This function will take the same inputs as the main wrapped function. The output of this interpretation function will be used to highlight the input of each input interface - therefore the number of outputs here corresponds to the number of input interfaces. To see the format for interpretation for each input interface, check the Docs.
```python
import gradio as gr
@@ -399,48 +440,98 @@ iface = gr.Interface(
outputs="label", interpretation=interpret_gender, enable_queue=True)
iface.launch()
```
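The hunk elides `interpret_gender` itself; reconstructing it from the gender-demo fragments that appear later in this commit, a complete sketch looks like the following (the exact scoring and the zero-guard on the denominator are assumptions):

```python
import re
import gradio as gr

male_words, female_words = ["he", "his", "him"], ["she", "hers", "her"]

def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len([word for word in sentence.split() if word.lower() in female_words])
    total = max(male_count + female_count, 1)  # assumption: guard against empty input
    return {"male": male_count / total, "female": female_count / total}

def interpret_gender(sentence):
    result = gender_of_sentence(sentence)
    is_male = result["male"] > result["female"]
    interpretation = []
    # split on spaces but keep them, so the highlights cover the whole input
    for word in re.split("( )", sentence):
        score = 0
        token = word.lower()
        if (is_male and token in male_words) or (not is_male and token in female_words):
            score = 1  # this word supported the predicted class
        elif (is_male and token in female_words) or (not is_male and token in male_words):
            score = -1  # this word worked against the predicted class
        interpretation.append((word, score))
    # one list of (substring, score) pairs for the single Textbox input
    return interpretation

iface = gr.Interface(
    fn=gender_of_sentence,
    inputs=gr.inputs.Textbox(default="She went to his house to get her keys."),
    outputs="label", interpretation=interpret_gender, enable_queue=True)
iface.launch()
```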
![gender_sentence_custom_interpretation interface](demo/screenshots/gender_sentence_custom_interpretation/1.gif)
If you use Gradio's default interpretation, the output component must be a label or a number. All input components are supported for default interpretation. Below is an example with image input.
### Themes and Custom Styling
If you'd like to change how your interface looks, you can select a different theme by simply passing in the `theme` parameter, like so:
```python
import gradio as gr
import tensorflow as tf
import numpy as np
import json
from os.path import dirname, realpath, join
# Load human-readable labels for ImageNet.
current_dir = dirname(realpath(__file__))
with open(join(current_dir, "files/imagenet_labels.json")) as labels_file:
labels = json.load(labels_file)
mobile_net = tf.keras.applications.MobileNetV2()
def image_classifier(im):
arr = np.expand_dims(im, axis=0)
arr = tf.keras.applications.mobilenet.preprocess_input(arr)
prediction = mobile_net.predict(arr).flatten()
return {labels[i]: float(prediction[i]) for i in range(1000)}
iface = gr.Interface(
image_classifier,
gr.inputs.Image(shape=(224, 224)),
gr.outputs.Label(num_top_classes=3),
capture_session=True,
interpretation="default",
examples=[
["images/cheetah1.jpg"],
["images/lion.jpg"]
])
iface.launch()
gr.Interface(fn=classify_image, inputs=image, outputs=label, theme="huggingface").launch()
```
![image_classifier interface](demo/screenshots/image_classifier/1.gif)
Here are the themes we currently support: `"default"`, `"huggingface"`, `"grass"`, `"peach"`, and the dark themes corresponding to each of these: `"darkdefault"`, `"darkhuggingface"`, `"darkgrass"`, `"darkpeach"`.
If you'd like to have more fine-grained control over any aspect of the app, you can also write your own css or pass in a css file, with the `css` parameter of the `Interface` class.
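For instance, a sketch of the `css` parameter in use (the rule itself is just a placeholder, and `classify_image`, `image`, and `label` are the objects defined earlier):

```python
custom_css = """
body { background-color: #f5f5f5; }  /* placeholder rule, an assumption */
"""
gr.Interface(fn=classify_image, inputs=image, outputs=label,
             theme="grass", css=custom_css).launch()
```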
### Custom Flagging Options
In some cases, you might like to provide your users or testers with *more* than just a binary option to flag a sample. You can provide `flagging_options` that they select from a dropdown each time they click the flag button. This lets them provide additional feedback every time they flag a sample.
Here's an example:
```python
gr.Interface(fn=classify_image, inputs=image, outputs=label, flagging_options=["incorrect", "ambiguous", "offensive", "other"]).launch()
```
### Loading Hugging Face Models and Spaces
Gradio integrates nicely with the Hugging Face Hub, allowing you to load models and Spaces with just one line of code. To use this, simply call the `load()` method of the `Interface` class:
- To load any model from the Hugging Face Hub and create an interface around it, you pass `"model/"` or `"huggingface/"` followed by the model name, like these examples:
```python
gr.Interface.load("huggingface/gpt-2").launch();
```
```python
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
inputs=gr.inputs.Textbox(lines=5, label="Input Text") # customizes the input component
).launch()
```
- To load any Space from the Hugging Face Hub and recreate it locally (so that you can customize the inputs and outputs, for example), you pass `"spaces/"` followed by the name of the Space:
```python
gr.Interface.load("spaces/eugenesiow/remove-bg", inputs="webcam", title="Remove your webcam background!").launch()
```
One of the great things about loading Hugging Face models or spaces using Gradio is that you can then immediately use the resulting `Interface` object just like a function in your Python code (this works for every type of model/space: text, images, audio, video, and even multimodal models):
```python
io = gr.Interface.load("models/EleutherAI/gpt-neo-2.7B")
io("It was the best of times") # outputs model completion
```
### Putting Interfaces in Parallel and Series
Gradio also lets you mix interfaces very easily using the `gradio.Parallel` and `gradio.Series` classes. `Parallel` lets you put two similar models (if they have the same input type) in parallel to compare model predictions:
```python
generator1 = gr.Interface.load("huggingface/gpt2")
generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
gr.Parallel(generator1, generator2, generator3).launch()
```
`Series` lets you put models and spaces in series, piping the output of one model into the input of the next model.
```python
generator = gr.Interface.load("huggingface/gpt2")
translator = gr.Interface.load("huggingface/t5-small")
gr.Series(generator, translator).launch() # this demo generates text, then translates it to German, and outputs the final result.
```
And of course, you can also mix `Parallel` and `Series` together whenever that makes sense!
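For instance, a sketch nesting the two (this assumes, as the examples above suggest, that a `Series` produces an interface that `Parallel` can consume):

```python
generator = gr.Interface.load("huggingface/gpt2")
translator = gr.Interface.load("huggingface/t5-small")
# compare raw generation with the generate-then-translate pipeline side by side
gr.Parallel(generator, gr.Series(generator, translator)).launch()
```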
### Queuing to Manage Long Inference Times
If many people are using your interface or if the inference time of your function is long (> 1min), simply set the `enable_queue` parameter in the `Interface` class to `True` to prevent timeouts.
```python
gr.Interface(fn=classify_image, inputs=image, outputs=label, enable_queue=True).launch()
```
This sets up a queue of workers to handle the predictions and return the response to the front end. This is strongly recommended if you are planning on uploading your demo to Hugging Face Spaces (as described above) so that you can manage a large number of users simultaneously using your demo.
## Contributing:
If you would like to contribute and your contribution is small, you can directly open a pull request (PR). If you would like to contribute a larger feature, we recommend first creating an issue with a proposed design for discussion. Please see our contributing guidelines for more info.
If you would like to contribute and your contribution is small, you can directly open a pull request (PR). If you would like to contribute a larger feature, we recommend first creating an issue with a proposed design for discussion. Please see our [contributing guidelines](https://github.com/gradio-app/gradio/blob/master/CONTRIBUTING.md) for more info.
## License:
@@ -448,7 +539,7 @@ Gradio is licensed under the Apache License 2.0
## See more:
You can find many more examples (like GPT-2, model comparison, multiple inputs, and numerical interfaces) as well as more info on usage on our website: www.gradio.app
You can find many more examples as well as more info on usage on our website: www.gradio.app
See also the accompanying paper: ["Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild"](https://arxiv.org/pdf/1906.02569.pdf), *ICML HILL 2019*, and please use the citation below.
@@ -459,4 +550,4 @@ author={Abid, Abubakar and Abdalla, Ali and Abid, Ali and Khan, Dawood and Alfoz
journal={arXiv preprint arXiv:1906.02569},
year={2019}
}
```
```

SECURITY.md Normal file
@@ -0,0 +1,5 @@
# Security Policy
## Reporting a Vulnerability
If you discover a security vulnerability, we would be very grateful if you could email us at team@gradio.app. This is the preferred approach instead of opening a public issue. We take all vulnerability reports seriously, and will work to patch the vulnerability immediately. Whenever possible, we will credit the person or people who reported the vulnerability after it has been patched.

@@ -1,51 +0,0 @@
from argparse import ArgumentParser
import gradio
import numpy as np
import signal
import time
parser = ArgumentParser(description='Arguments for Building Interface')
parser.add_argument('-i', '--inputs', type=str, help="name of input interface")
parser.add_argument('-o', '--outputs', type=str, help="name of output interface")
parser.add_argument('-d', '--delay', type=int, help="delay in processing", default=0)
share_parser = parser.add_mutually_exclusive_group(required=False)
share_parser.add_argument('--share', dest='share', action='store_true')
share_parser.add_argument('--no-share', dest='share', action='store_false')
parser.set_defaults(share=False)
args = parser.parse_args()
def mdl(input):
time.sleep(args.delay)
return np.array(1)
def launch_interface(args):
io = gradio.Interface(inputs=args.inputs, outputs=args.outputs, model=mdl, model_type='pyfunc')
httpd, _, _ = io.launch(share=args.share, validate=False)
class ServiceExit(Exception):
"""
Custom exception which is used to trigger the clean exit
of all running threads and the main program.
"""
pass
def service_shutdown(signum, frame):
print('Shutting server down due to signal {}'.format(signum))
httpd.shutdown()
raise ServiceExit
signal.signal(signal.SIGTERM, service_shutdown)
signal.signal(signal.SIGINT, service_shutdown)
try:
# Keep the main thread running, otherwise signals are ignored.
while True:
time.sleep(0.5)
except ServiceExit:
pass
if __name__ == "__main__":
launch_interface(args)

@@ -2,3 +2,4 @@ coverage:
range: 0..100
round: down
precision: 2
comment: false


@@ -0,0 +1 @@
tensorflow


@@ -0,0 +1 @@
fpdf

@@ -35,7 +35,7 @@ iface = gr.Interface(disease_report,
],
title="Disease Report",
description="Upload an Xray and select the diseases to scan for.",
theme="compact",
theme="grass",
flagging_options=["good", "bad", "etc"],
allow_flagging="auto"
)


@@ -1,39 +0,0 @@
import gradio as gr
import os, sys
file_folder = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(file_folder, "utils"))
from FCN8s_keras import FCN
from PIL import Image
import cv2
import tensorflow as tf
from drive import download_file_from_google_drive
import numpy as np
weights = os.path.join(file_folder, "face_seg_model_weights.h5")
if not os.path.exists(weights):
file_id = "1IerDF2DQqmJWqyvxYZOICJT1eThnG8WR"
download_file_from_google_drive(file_id, weights)
model1 = FCN()
model1.load_weights(weights)
def segment_face(inp):
im = Image.fromarray(np.uint8(inp))
im = im.resize((500, 500))
in_ = np.array(im, dtype=np.float32)
in_ = in_[:, :, ::-1]
in_ -= np.array((104.00698793,116.66876762,122.67891434))
in_ = in_[np.newaxis,:]
out = model1.predict(in_)
out_resized = cv2.resize(np.squeeze(out), (inp.shape[1], inp.shape[0]))
out_resized_clipped = np.clip(out_resized.argmax(axis=2), 0, 1).astype(np.float64)
result = (out_resized_clipped[:, :, np.newaxis] + 0.25)/1.25 * inp.astype(np.float64).astype(np.uint8)
return result / 255
iface = gr.Interface(segment_face, gr.inputs.Image(source="webcam", tool=None), "image", capture_session=True)
if __name__ == "__main__":
iface.launch()

@@ -1,27 +0,0 @@
time,retail,food,other
0,0,4,4
1,4,3,7
2,12,34,22
3,25,24,25
4,26,12,45
5,28,4,44
6,39,33,32
7,32,24,24
8,28,34,36
9,64,52,54
10,72,66,67
11,53,45,54
12,24,54,42
13,35,35,107
14,36,34,70
15,48,20,20
16,62,32,60
17,44,56,81
18,44,54,76
19,30,52,72
20,21,66,50
21,134,60,40
22,124,50,40
23,63,55,35
24,24,40,20
25,66,60,22


@@ -0,0 +1 @@
matplotlib

@@ -0,0 +1 @@
pandas


@@ -1,7 +1,7 @@
import gradio as gr
import re
male_words, female_words = ["he", "his", "him"], ["she", "her"]
male_words, female_words = ["he", "his", "him"], ["she", "hers", "her"]
def gender_of_sentence(sentence):
male_count = len([word for word in sentence.split() if word.lower() in male_words])
female_count = len([word for word in sentence.split() if word.lower() in female_words])
@@ -26,4 +26,4 @@ iface = gr.Interface(
fn=gender_of_sentence, inputs=gr.inputs.Textbox(default="She went to his house to get her keys."),
outputs="label", interpretation=interpret_gender, enable_queue=True)
if __name__ == "__main__":
iface.launch()
iface.launch()


@@ -1,7 +1,7 @@
import gradio as gr
import re
male_words, female_words = ["he", "his", "him"], ["she", "her"]
male_words, female_words = ["he", "his", "him"], ["she", "hers", "her"]
def gender_of_sentence(sentence):
male_count = len([word for word in sentence.split() if word.lower() in male_words])
female_count = len([word for word in sentence.split() if word.lower() in female_words])


@@ -0,0 +1 @@
numpy


demo/gpt_j/run.py Normal file
@@ -0,0 +1,13 @@
import gradio as gr
title = "GPT-J-6B"
examples = [
['The tower is 324 metres (1,063 ft) tall,'],
["The Moon's orbit around Earth has"],
["The smooth Borealis basin in the Northern Hemisphere covers 40%"]
]
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
inputs=gr.inputs.Textbox(lines=5, label="Input Text"),
title=title, examples=examples).launch()


@@ -1,31 +0,0 @@
import gradio as gr
import tensorflow as tf
import numpy as np
import json
from os.path import dirname, realpath, join
# Load human-readable labels for ImageNet.
current_dir = dirname(realpath(__file__))
with open(join(current_dir, "files/imagenet_labels.json")) as labels_file:
labels = json.load(labels_file)
mobile_net = tf.keras.applications.MobileNetV2()
def image_classifier(im):
arr = np.expand_dims(im, axis=0)
arr = tf.keras.applications.mobilenet.preprocess_input(arr)
prediction = mobile_net.predict(arr).flatten()
return {labels[i]: float(prediction[i]) for i in range(1000)}
iface = gr.Interface(
image_classifier,
gr.inputs.Image(shape=(224, 224)),
gr.outputs.Label(num_top_classes=3),
capture_session=True,
interpretation="default",
examples=[
["images/cheetah1.jpg"],
["images/lion.jpg"]
])
if __name__ == "__main__":
iface.launch()


@@ -0,0 +1,2 @@
numpy
tensorflow

@@ -0,0 +1,21 @@
import gradio as gr
import tensorflow as tf
import requests
inception_net = tf.keras.applications.MobileNetV2() # load the model
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def classify_image(inp):
inp = inp.reshape((-1, 224, 224, 3))
inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
prediction = inception_net.predict(inp).flatten()
return {labels[i]: float(prediction[i]) for i in range(1000)}
image = gr.inputs.Image(shape=(224, 224))
label = gr.outputs.Label(num_top_classes=3)
gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=[
["images/cheetah1.jpg"], ["images/lion.jpg"]]).launch()


@@ -0,0 +1,3 @@
pillow
torch
torchvision

@@ -0,0 +1,22 @@
import torch
import requests
import gradio as gr
from PIL import Image
from torchvision import transforms
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def predict(inp):
inp = Image.fromarray(inp.astype('uint8'), 'RGB')
inp = transforms.ToTensor()(inp).unsqueeze(0)
with torch.no_grad():
prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
return {labels[i]: float(prediction[i]) for i in range(1000)}
inputs = gr.inputs.Image()
outputs = gr.outputs.Label(num_top_classes=3)
gr.Interface(fn=predict, inputs=inputs, outputs=outputs).launch()


@@ -0,0 +1,2 @@
numpy
tensorflow

@@ -0,0 +1,20 @@
import gradio as gr
import tensorflow as tf
import requests
inception_net = tf.keras.applications.MobileNetV2() # load the model
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def classify_image(inp):
inp = inp.reshape((-1, 224, 224, 3))
inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
prediction = inception_net.predict(inp).flatten()
return {labels[i]: float(prediction[i]) for i in range(1000)}
image = gr.inputs.Image(shape=(224, 224))
label = gr.outputs.Label(num_top_classes=3)
gr.Interface(fn=classify_image, inputs=image, outputs=label, interpretation="default").launch()


@@ -1,13 +1,8 @@
import gradio as gr
def image_mod(image):
if image is None:
return "images/lion.jpg"
return image.rotate(45)
iface = gr.Interface(image_mod, gr.inputs.Image(type="pil"), "image")
if __name__ == "__main__":
iface.launch()


@@ -14,8 +14,8 @@ def fn(text1, text2, num, slider1, slider2, single_checkbox,
"negative": slider1 / (num + slider1 + slider2),
"neutral": slider2 / (num + slider1 + slider2),
}, # Label
(audio1[0], np.flipud(audio1[1])) if audio1 is not None else "audio/cantina.wav", # Audio
np.flipud(im1) if im1 is not None else "images/2.jpg", # Image
(audio1[0], np.flipud(audio1[1])) if audio1 is not None else "files/cantina.wav", # Audio
np.flipud(im1) if im1 is not None else "files/cheetah1.jpg", # Image
video, # Video
[("Height", 70), ("Weight", 150), ("BMI", "22"), (dropdown, 42)], # KeyValues
[("The", "art"), (" ", None), ("quick", "adj"), (" ", None),
@@ -24,7 +24,7 @@
"<button style='background-color: red'>Click Me: " + radio + "</button>", # HTML
"files/titanic.csv",
np.ones((4, 3)), # Dataframe
[im for im in [im1, im2, im3, im4, "images/1.jpg"] if im is not None], # Carousel
[im for im in [im1, im2, im3, im4, "files/cheetah1.jpg"] if im is not None], # Carousel
df2 # Timeseries
)
@@ -59,19 +59,19 @@ iface = gr.Interface(
gr.inputs.Timeseries(x="time", y="value", optional=True),
],
outputs=[
gr.outputs.Textbox(),
gr.outputs.Label(),
gr.outputs.Audio(),
gr.outputs.Image(),
gr.outputs.Video(),
gr.outputs.KeyValues(),
gr.outputs.HighlightedText(),
gr.outputs.JSON(),
gr.outputs.HTML(),
gr.outputs.File(),
gr.outputs.Dataframe(),
gr.outputs.Carousel("image"),
gr.outputs.Timeseries(x="time", y="value")
gr.outputs.Textbox(label="Textbox"),
gr.outputs.Label(label="Label"),
gr.outputs.Audio(label="Audio"),
gr.outputs.Image(label="Image"),
gr.outputs.Video(label="Video"),
gr.outputs.KeyValues(label="KeyValues"),
gr.outputs.HighlightedText(label="HighlightedText"),
gr.outputs.JSON(label="JSON"),
gr.outputs.HTML(label="HTML"),
gr.outputs.File(label="File"),
gr.outputs.Dataframe(label="Dataframe"),
gr.outputs.Carousel("image", label="Carousel"),
gr.outputs.Timeseries(x="time", y="value", label="Timeseries")
],
theme="huggingface",
title="Kitchen Sink",


@@ -0,0 +1,3 @@
scipy
numpy
matplotlib


@@ -0,0 +1 @@
numpy


@@ -0,0 +1,2 @@
numpy
matplotlib

@@ -1,18 +0,0 @@
import gradio as gr
import os, sys
file_folder = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(file_folder, "utils"))
from bert import QA
model = QA('bert-large-uncased-whole-word-masking-finetuned-squad')
def qa_func(paragraph, question):
return model.predict(paragraph, question)["answer"]
iface = gr.Interface(qa_func,
[
gr.inputs.Textbox(lines=7, label="Context", default="Victoria has a written constitution enacted in 1975, but based on the 1855 colonial constitution, passed by the United Kingdom Parliament as the Victoria Constitution Act 1855, which establishes the Parliament as the state's law-making body for matters coming under state responsibility. The Victorian Constitution can be amended by the Parliament of Victoria, except for certain 'entrenched' provisions that require either an absolute majority in both houses, a three-fifths majority in both houses, or the approval of the Victorian people in a referendum, depending on the provision."),
gr.inputs.Textbox(lines=1, label="Question", default="When did Victoria enact its constitution?"),
],
gr.outputs.Textbox(label="Answer"))
if __name__ == "__main__":
iface.launch()

@@ -0,0 +1 @@
pytorch-transformers==1.0.0

@@ -0,0 +1,13 @@
import gradio as gr
examples = [
["The Amazon rainforest is a moist broadleaf forest that covers most of the Amazon basin of South America",
"Which continent is the Amazon rainforest in?"]
]
gr.Interface.load("huggingface/deepset/roberta-base-squad2",
inputs=[gr.inputs.Textbox(lines=5, label="Context", placeholder="Type a sentence or paragraph here."),
gr.inputs.Textbox(lines=2, label="Question", placeholder="Ask a question based on the context.")],
outputs=[gr.outputs.Textbox(label="Answer"),
gr.outputs.Label(label="Probability")],
examples=examples).launch()


@@ -0,0 +1,3 @@
pandas
numpy
matplotlib

Some files were not shown because too many files have changed in this diff.