update demos and readme

This commit is contained in:
Ali Abid 2020-10-26 15:27:28 -07:00
parent d4d768e9be
commit 2a7d9972c9
37 changed files with 1309 additions and 127 deletions

293
README.md
View File

@ -1,158 +1,229 @@
[![CircleCI](https://circleci.com/gh/gradio-app/gradio.svg?style=svg)](https://circleci.com/gh/gradio-app/gradio) [![PyPI version](https://badge.fury.io/py/gradio.svg)](https://badge.fury.io/py/gradio)
# Welcome to `gradio` :rocket:
# Welcome to Gradio
Quickly create customizable UI components around your TensorFlow or PyTorch models, or even arbitrary Python functions. Mix and match components to support any combination of inputs and outputs. Gradio makes it easy for you to "play around" with your model in your browser by dragging-and-dropping in your own images (or pasting your own text, recording your own voice, etc.) and seeing what the model outputs. You can also generate a share link which allows anyone, anywhere to use the interface as the model continues to run on your machine. Our core library is free and open-source! Take a look:
Quickly create customizable UI components around your models. Gradio makes it easy for you to "play around" with your model in your browser by dragging-and-dropping in your own images, pasting your own text, recording your own voice, etc. and seeing what the model outputs.
<p align="center">
<img src="https://i.ibb.co/m0skD0j/bert.gif" alt="drawing"/>
</p>
![Interface montage](demo/screenshots/montage.gif)
Gradio is useful for:
* Creating demos of your machine learning code for clients / collaborators / users
* Getting feedback on model performance from users
* Debugging your model interactively during development
To get a sense of `gradio`, take a look at a few of these examples, and find more on our website: www.gradio.app.
## Getting Started
You can find an interactive version of this README at [https://gradio.app/getting_started](https://gradio.app/getting_started).
### Quick Start
To get Gradio running with a simple example, follow these three steps:
1. Install Gradio from pip.
```bash
pip install gradio
```
(you may need to replace `pip` with `pip3` if you're running `python3`).
## Usage
Gradio is very easy to use with your existing code. Here are a few working examples:
### 0. Hello World [![alt text](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/18ODkJvyxHutTN0P5APWyGFO_xwNcgHDZ?usp=sharing)
Let's start with a basic function (no machine learning yet!) that greets an input name. We'll wrap the function with a `Text` to `Text` interface.
2. Run the code below as a Python script or in a Python notebook (or in a [colab notebook](https://colab.research.google.com/drive/18ODkJvyxHutTN0P5APWyGFO_xwNcgHDZ?usp=sharing)).
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

iface = gr.Interface(fn=greet, inputs="text", outputs="text")
iface.launch()
```
3. The interface below will appear automatically within the Python notebook, or open in a browser at [http://localhost:7860](http://localhost:7860/) if running from a script.
![hello_world interface](demo/screenshots/hello_world/1.gif)
The core Interface class is initialized with three parameters:
- `fn`: the function to wrap
- `inputs`: the name of the input interface
- `outputs`: the name of the output interface
Calling the `launch()` function of the `Interface` object produces the interface shown in the image below. Click on the gif to go to the live interface on our getting started page.
<a href="https://gradio.app/getting_started#interface_4">
<p align="center">
<img src="https://i.ibb.co/T4Rqs5y/hello-name.gif" alt="drawing"/>
</p>
</a>
### The Interface
Gradio can wrap almost any Python function with an easy to use interface. That function could be anything from a simple tax calculator to a pretrained model.
The core `Interface` class is initialized with three parameters:
- `fn`: the function to wrap
- `inputs`: the input component type(s)
- `outputs`: the output component type(s)
### 1. Inception Net [![alt text](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1c6gQiW88wKBwWq96nqEwuQ1Kyt5LejiU?usp=sharing)
Now, let's do a machine learning example. We're going to wrap an interface around the InceptionV3 image classifier, which we'll load using TensorFlow. Since this is an image classification model, we will use the `Image` input interface. We'll output a dictionary of labels and their corresponding confidence scores with the `Label` output interface. (The original Inception Net architecture [can be found here](https://arxiv.org/abs/1409.4842).)
With these three arguments, we can quickly create interfaces and `launch()` them. But what if you want to change how the UI components look or behave?
### Customizable Components
What if we want to customize the input text field - for example, to make it larger and add a text hint? If we use the actual input class for `Textbox` instead of the string shortcut, we have access to much more customizability. To see a list of all the components we support and how you can customize them, check out the [Docs](https://gradio.app/docs).
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

iface = gr.Interface(
    fn=greet,
    inputs=gr.inputs.Textbox(lines=2, placeholder="Name Here..."),
    outputs="text")
iface.launch()
```
![hello_world_2 interface](demo/screenshots/hello_world_2/1.gif)
### Multiple Inputs and Outputs
Let's say we had a much more complex function, with multiple inputs and outputs. In the example below, we have a function that takes a string, boolean, and number, and returns a string and number. Take a look at how we pass a list of input and output components.
```python
import gradio as gr

def greet(name, is_morning, temperature):
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = "%s %s. It is %s degrees today" % (
        salutation, name, temperature)
    celsius = (temperature - 32) * 5 / 9
    return greeting, round(celsius, 2)

iface = gr.Interface(
    fn=greet,
    inputs=["text", "checkbox", gr.inputs.Slider(0, 100)],
    outputs=["text", "number"])
iface.launch()
```
![hello_world_3 interface](demo/screenshots/hello_world_3/1.gif)
We simply wrap the components in a list. Furthermore, if we wanted to compare multiple functions that have the same input and return types, we can even pass a list of functions for quick comparison.
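As a rough sketch of that comparison pattern (assuming `fn` accepts a list of functions as described above; the two greeting functions here are made up for illustration):
```python
import gradio as gr

def greet_formally(name):
    return "Good day, " + name + "."

def greet_casually(name):
    return "Hey " + name + "!"

# Both functions take a string and return a string, so they can be
# compared side by side in a single interface.
iface = gr.Interface(fn=[greet_formally, greet_casually],
                     inputs="text", outputs="text")
iface.launch()
```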
### Working with Images
Let's try an image to image function. When using the `Image` component, your function will receive a numpy array of your specified size, with the shape `(width, height, 3)`, where the last dimension represents the RGB values. We'll return an image as well in the form of a numpy array.
```python
import gradio as gr
import tensorflow as tf
import numpy as np
import requests

inception_net = tf.keras.applications.InceptionV3()  # load the model

# Sepia demo: turn an input image into a sepia-toned output image.
def sepia(img):
    sepia_filter = np.array([[.393, .769, .189],
                             [.349, .686, .168],
                             [.272, .534, .131]])
    sepia_img = img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()
    return sepia_img

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

# Inception Net demo: classify an image against the 1,000 ImageNet labels.
def classify_image(inp):
    inp = inp.reshape((-1, 299, 299, 3))
    inp = tf.keras.applications.inception_v3.preprocess_input(inp)
    prediction = inception_net.predict(inp).flatten()
    return {labels[i]: float(prediction[i]) for i in range(1000)}

image = gr.inputs.Image(shape=(299, 299, 3))
label = gr.outputs.Label(num_top_classes=3)
gr.Interface(fn=classify_image, inputs=image, outputs=label).launch()

iface = gr.Interface(sepia, gr.inputs.Image(shape=(200, 200)), "image")
iface.launch()
```
This code will produce the interface below. The interface gives you a way to test Inception Net by dragging and dropping images, and also allows you to naturally modify the input image using the image editing tools that appear when you click EDIT. Notice here we provided actual `gradio.inputs` and `gradio.outputs` objects to the Interface function instead of using string shortcuts. This lets us use the built-in preprocessing (e.g. image resizing) and postprocessing (e.g. choosing the number of labels to display) provided by these interfaces.
<p align="center">
<img src="https://i.ibb.co/X8KGJqB/inception-net-2.gif" alt="drawing"/>
</p>
![sepia_filter interface](demo/screenshots/sepia_filter/1.gif)
Additionally, our `Image` input interface comes with an 'edit' button which opens tools for cropping, flipping, rotating, drawing over, and applying filters to images. We've found that manipulating images in this way will often reveal hidden flaws in a model.
You can supply your own model instead of the pretrained model above, as well as use different kinds of models or functions. Here's a list of the interfaces we currently support, along with their preprocessing / postprocessing parameters:
**Input Interfaces**:
- `Sketchpad(shape=(28, 28), invert_colors=True, flatten=False, scale=1/255, shift=0, dtype='float64')`
- `Webcam(image_width=224, image_height=224, num_channels=3, label=None)`
- `Textbox(lines=1, placeholder=None, label=None, numeric=False)`
- `Radio(choices, label=None)`
- `Dropdown(choices, label=None)`
- `CheckboxGroup(choices, label=None)`
- `Slider(minimum=0, maximum=100, default=None, label=None)`
- `Image(shape=(224, 224, 3), image_mode='RGB', scale=1/127.5, shift=-1, label=None)`
- `Microphone()`
**Output Interfaces**:
- `Label(num_top_classes=None, label=None)`
- `KeyValues(label=None)`
- `Textbox(lines=1, placeholder=None, label=None)`
- `Image(label=None, plot=False)`
Interfaces can also be combined together, for multiple-input or multiple-output models.
### Example Data
You can provide example data that a user can easily load into the model. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you provide a nested list to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs).
### 2. Real-Time MNIST [![alt text](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LXJqwdkZNkt1J_yfLWQ3FLxbG2cAF8p4?usp=sharing)
Let's wrap a fun `Sketchpad`-to-`Label` UI around MNIST. For this example, we'll take advantage of the `live`
feature in the library. Set `live=True` inside `Interface()`> to have it run continuous predictions.
We've abstracted the model training from the code below, but you can see the full code on the colab link.
```python
import tensorflow as tf
import gradio as gr
from urllib.request import urlretrieve

# MNIST demo: download a pretrained digit classifier and run it live on a sketchpad.
urlretrieve("https://gr-models.s3-us-west-2.amazonaws.com/mnist-model.h5", "mnist-model.h5")
model = tf.keras.models.load_model("mnist-model.h5")

# Calculator demo: a plain Python function with example data attached.
def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        return num1 / num2

def recognize_digit(inp):
    prediction = model.predict(inp.reshape(1, 28, 28, 1)).tolist()[0]
    return {str(i): prediction[i] for i in range(10)}

sketchpad = gr.inputs.Sketchpad()
label = gr.outputs.Label(num_top_classes=3)
gr.Interface(fn=recognize_digit, inputs=sketchpad,
             outputs=label, live=True).launch()

iface = gr.Interface(calculator,
    ["number", gr.inputs.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    examples=[
        [5, "add", 3],
        [12, "divide", -2]
    ]
)
iface.launch()
```
![calculator interface](demo/screenshots/calculator/1.gif)
This code will produce the interface below.
<p align="center">
<img src="https://i.ibb.co/vkgZLcH/gif6.gif" alt="drawing"/>
</p>
### Flagging
Underneath the output interfaces, there is a button marked "Flag". When a user testing your model sees an input with interesting output, such as erroneous or unexpected model behavior, they can flag the input for review. Within the directory provided by the `flagging_dir=` argument to the Interface constructor, a CSV file will log the flagged inputs. If the interface involves file inputs, such as for Image and Audio interfaces, folders will be created to store those flagged inputs as well.
You can review these flagged inputs by manually exploring the flagging directory, or load them into the Gradio interface by pointing the `examples=` argument to the flagged CSV file.
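As a minimal sketch of how flagging might be wired up (the directory name here is just an illustrative placeholder):
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

# Flagged samples are logged to a CSV inside the flagging directory;
# that CSV can later be passed back via examples= to review the flags.
iface = gr.Interface(fn=greet, inputs="text", outputs="text",
                     flagging_dir="flagged_data")
iface.launch()
```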
### Interpretation
Most models are black boxes, meaning the internal logic of the function is hidden from the end user. To encourage transparency, we've added support for interpretation so that users can understand which parts of the input are responsible for the output. Take a look at the simple interface below:
```python
import gradio as gr
import re

male_words, female_words = ["he", "his", "him"], ["she", "her"]

def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len([word for word in sentence.split() if word.lower() in female_words])
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}

iface = gr.Interface(
    fn=gender_of_sentence, inputs=gr.inputs.Textbox(default="She went to his house to get her keys."),
    outputs="label", interpretation="default")
iface.launch()
```
![gender_sentence_default_interpretation interface](demo/screenshots/gender_sentence_default_interpretation/1.gif)
Notice the `interpretation` keyword argument. We're going to use Gradio's default interpreter here. After you submit and click Interpret, you'll see that the interface automatically highlights in orange the parts of the text that contributed to the final output! The parts that conflict with the output are highlighted in blue.
Gradio's default interpretation works with single output type interfaces, where the output is either a Label or Number. We're working on expanding the default interpreter to be much more customizable and support more interfaces.
You can also write your own interpretation function. The demo below adds custom interpretation to the previous demo. This function will take the same inputs as the main wrapped function. The output of this interpretation function will be used to highlight the input of each input interface - therefore the number of outputs here corresponds to the number of input interfaces. To see the format for interpretation for each input interface, check the [Docs](https://gradio.app/docs).
```python
import gradio as gr
import re

male_words, female_words = ["he", "his", "him"], ["she", "her"]

def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len([word for word in sentence.split() if word.lower() in female_words])
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}

def interpret_gender(sentence):
    result = gender_of_sentence(sentence)
    is_male = result["male"] > result["female"]
    interpretation = []
    for word in re.split('( )', sentence):
        score = 0
        token = word.lower()
        if (is_male and token in male_words) or (not is_male and token in female_words):
            score = 1
        elif (is_male and token in female_words) or (not is_male and token in male_words):
            score = -1
        interpretation.append((word, score))
    return interpretation

iface = gr.Interface(
    fn=gender_of_sentence, inputs=gr.inputs.Textbox(default="She went to his house to get her keys."),
    outputs="label", interpretation=interpret_gender)
iface.launch()
```
![gender_sentence_custom_interpretation interface](demo/screenshots/gender_sentence_custom_interpretation/1.gif)
## Contributing:
If you would like to contribute and your contribution is small, you can directly open a pull request (PR). If you would like to contribute a larger feature, we recommend first creating an issue with a proposed design for discussion. Please see our contributing guidelines for more info.
## License:
Gradio is licensed under the Apache License 2.0
## See more:
You can find many more examples (like GPT-2, model comparison, multiple inputs, and numerical interfaces) as well as more info on usage on our website: www.gradio.app
@ -165,6 +236,4 @@ author={Abid, Abubakar and Abdalla, Ali and Abid, Ali and Khan, Dawood and Alfoz
journal={arXiv preprint arXiv:1906.02569},
year={2019}
}
```

View File

@ -20,11 +20,7 @@ import copy
analytics.write_key = "uxIFddIEuuUcFLf9VgH2teTEtPlWdkNy"
analytics_url = 'https://api.gradio.app/'
try:
ip_address = requests.get('https://api.ipify.org').text
except requests.ConnectionError:
ip_address = "No internet connection"
ip_address = networking.get_local_ip_address()
class Interface:
"""
@ -61,6 +57,7 @@ class Interface:
title (str): a title for the interface; if provided, appears above the input and output components.
description (str): a description for the interface; if provided, appears above the input and output components.
thumbnail (str): path to image or src to use as display picture for models listed in gradio.app/hub
server_name (str): to make the app accessible on the local network, set this to "0.0.0.0".
allow_screenshot (bool): if False, users will not see a button to take a screenshot of the interface.
allow_flagging (bool): if False, users will not see a button to flag an input and output.
flagging_dir (str): what to name the dir where flagged data is stored.
@ -306,7 +303,7 @@ class Interface:
except (KeyboardInterrupt, OSError):
print("Keyboard interruption in main thread... closing server.")
thread.keep_running = False
networking.url_ok(path_to_local_server)
networking.url_ok(path_to_local_server) # Hit the server one more time to close it
def test_launch(self):
for predict_fn in self.predict:
@ -341,7 +338,7 @@ class Interface:
networking.set_meta_tags(self.title, self.description, self.thumbnail)
server_port, app, thread = networking.start_server(
self, self.server_port)
self, self.server_name, self.server_port)
path_to_local_server = "http://{}:{}/".format(self.server_name, server_port)
self.status = "RUNNING"
self.server = app
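Going by the `server_name` docstring added above, a minimal sketch of exposing a demo on the local network might look like the following (assuming `server_name` is accepted by the `Interface` constructor, as the docstring and the `start_server` call suggest):
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

# "0.0.0.0" binds the underlying Flask server to all network interfaces,
# so other machines on the local network can reach the demo.
iface = gr.Interface(fn=greet, inputs="text", outputs="text",
                     server_name="0.0.0.0")
iface.launch()
```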

View File

@ -55,6 +55,14 @@ def set_config(config):
app.app_globals["config"] = config
def get_local_ip_address():
try:
ip_address = requests.get('https://api.ipify.org').text
except requests.ConnectionError:
ip_address = "No internet connection"
return ip_address
def get_first_available_port(initial, final):
"""
Gets the first open port in a specified range of port numbers
@ -163,7 +171,7 @@ def interpret():
def file(path):
return send_file(os.path.join(app.cwd, path))
def start_server(interface, server_port=None):
def start_server(interface, server_name, server_port=None):
if server_port is None:
server_port = INITIAL_PORT_VALUE
port = get_first_available_port(
@ -175,10 +183,11 @@ def start_server(interface, server_port=None):
log.setLevel(logging.ERROR)
if interface.save_to is not None:
interface.save_to["port"] = port
process = threading.Thread(target=app.run, kwargs={"port": port})
process.start()
return port, app, process
thread = threading.Thread(target=app.run,
kwargs={"port": port, "host": server_name},
daemon=True)
thread.start()
return port, app, thread
def close_server(process):
process.terminate()
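The body of `get_first_available_port` is elided in this hunk; a hypothetical sketch of the port scan its docstring describes (not the actual implementation) could look like this:
```python
import socket

def get_first_available_port(initial, final):
    # Try each port in [initial, final) and return the first one we can bind to.
    for port in range(initial, final):
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind(("localhost", port))
            s.close()
            return port
        except OSError:
            continue
    raise OSError("All ports from {} to {} are in use.".format(initial, final))
```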

View File

@ -46,7 +46,7 @@ const radio = {
},
load_example: function(data) {
let child = this.choices.indexOf(data) + 1;
this.target.find("input:nth-child("+child+")").prop("checked", true);
this.target.find("label:nth-child("+child+") input").prop("checked", true);
this.target.find("input").button("refresh");
}
}

23
demo/calculator.py Normal file
View File

@ -0,0 +1,23 @@
import gradio as gr
def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        return num1 / num2

iface = gr.Interface(calculator,
    ["number", gr.inputs.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    examples=[
        [5, "add", 3],
        [12, "divide", -2]
    ]
)
if __name__ == "__main__":
    iface.launch()

37
demo/face_segment.py Normal file
View File

@ -0,0 +1,37 @@
import gradio as gr
import os, sys
file_folder = os.path.dirname(os.path.abspath(__file__))  # directory containing this demo script
sys.path.insert(0, os.path.join(file_folder, "utils"))
from FCN8s_keras import FCN
from PIL import Image
import cv2
import tensorflow as tf
from drive import download_file_from_google_drive
import numpy as np

weights = os.path.join(file_folder, "face_seg_model_weights.h5")
if not os.path.exists(weights):
    file_id = "1IerDF2DQqmJWqyvxYZOICJT1eThnG8WR"
    download_file_from_google_drive(file_id, weights)

model1 = FCN()
model1.load_weights(weights)

def segment_face(inp):
    im = Image.fromarray(np.uint8(inp))
    im = im.resize((500, 500))
    in_ = np.array(im, dtype=np.float32)
    in_ = in_[:, :, ::-1]
    in_ -= np.array((104.00698793, 116.66876762, 122.67891434))
    in_ = in_[np.newaxis, :]
    out = model1.predict(in_)
    out_resized = cv2.resize(np.squeeze(out), (inp.shape[1], inp.shape[0]))
    out_resized_clipped = np.clip(out_resized.argmax(axis=2), 0, 1).astype(np.float64)
    result = (out_resized_clipped[:, :, np.newaxis] + 0.25) / 1.25 * inp.astype(np.float64).astype(np.uint8)
    return result / 255

iface = gr.Interface(segment_face, "webcam", "image", capture_session=True)
if __name__ == "__main__":
    iface.launch()

View File

@ -0,0 +1,29 @@
import gradio as gr
import re
male_words, female_words = ["he", "his", "him"], ["she", "her"]
def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len([word for word in sentence.split() if word.lower() in female_words])
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}

def interpret_gender(sentence):
    result = gender_of_sentence(sentence)
    is_male = result["male"] > result["female"]
    interpretation = []
    for word in re.split('( )', sentence):
        score = 0
        token = word.lower()
        if (is_male and token in male_words) or (not is_male and token in female_words):
            score = 1
        elif (is_male and token in female_words) or (not is_male and token in male_words):
            score = -1
        interpretation.append((word, score))
    return interpretation

iface = gr.Interface(
    fn=gender_of_sentence, inputs=gr.inputs.Textbox(default="She went to his house to get her keys."),
    outputs="label", interpretation=interpret_gender)
if __name__ == "__main__":
    iface.launch()

View File

@ -0,0 +1,15 @@
import gradio as gr
import re
male_words, female_words = ["he", "his", "him"], ["she", "her"]
def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len([word for word in sentence.split() if word.lower() in female_words])
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}

iface = gr.Interface(
    fn=gender_of_sentence, inputs=gr.inputs.Textbox(default="She went to his house to get her keys."),
    outputs="label", interpretation="default")
if __name__ == "__main__":
    iface.launch()

8
demo/hello_world.py Normal file
View File

@ -0,0 +1,8 @@
import gradio as gr
def greet(name):
    return "Hello " + name + "!"

iface = gr.Interface(fn=greet, inputs="text", outputs="text")
if __name__ == "__main__":
    iface.launch()

11
demo/hello_world_2.py Normal file
View File

@ -0,0 +1,11 @@
import gradio as gr
def greet(name):
    return "Hello " + name + "!"

iface = gr.Interface(
    fn=greet,
    inputs=gr.inputs.Textbox(lines=2, placeholder="Name Here..."),
    outputs="text")
if __name__ == "__main__":
    iface.launch()

15
demo/hello_world_3.py Normal file
View File

@ -0,0 +1,15 @@
import gradio as gr
def greet(name, is_morning, temperature):
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = "%s %s. It is %s degrees today" % (
        salutation, name, temperature)
    celsius = (temperature - 32) * 5 / 9
    return greeting, round(celsius, 2)

iface = gr.Interface(
    fn=greet,
    inputs=["text", "checkbox", gr.inputs.Slider(0, 100)],
    outputs=["text", "number"])
if __name__ == "__main__":
    iface.launch()

35
demo/outbreak_forecast.py Normal file
View File

@ -0,0 +1,35 @@
import gradio as gr
import matplotlib.pyplot as plt
import numpy as np
from math import sqrt
def outbreak(r, month, countries, social_distancing):
    months = ["January", "February", "March", "April", "May"]
    m = months.index(month)
    start_day = 30 * m
    final_day = 30 * (m + 1)
    x = np.arange(start_day, final_day + 1)
    day_count = x.shape[0]
    pop_count = {"USA": 350, "Canada": 40, "Mexico": 300, "UK": 120}
    r = sqrt(r)
    if social_distancing:
        r = sqrt(r)
    for i, country in enumerate(countries):
        series = x ** (r) * (i + 1)
        plt.plot(x, series)
    plt.title("Outbreak in " + month)
    plt.ylabel("Cases")
    plt.xlabel("Days since Day 0")
    plt.legend(countries)
    return plt

iface = gr.Interface(outbreak,
    [
        gr.inputs.Slider(1, 4, label="R"),
        gr.inputs.Dropdown(["January", "February", "March", "April", "May"], label="Month"),
        gr.inputs.CheckboxGroup(["USA", "Canada", "Mexico", "UK"], label="Countries"),
        gr.inputs.Checkbox(label="Social Distancing?"),
    ],
    "plot")
if __name__ == "__main__":
    iface.launch()

17
demo/question_answer.py Normal file
View File

@ -0,0 +1,17 @@
import gradio as gr
import os, sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "utils"))
from bert import QA

model = QA('bert-large-uncased-whole-word-masking-finetuned-squad')

def qa_func(paragraph, question):
    return model.predict(paragraph, question)["answer"]
iface = gr.Interface(qa_func,
[
gr.inputs.Textbox(lines=7, label="Context", default="Victoria has a written constitution enacted in 1975, but based on the 1855 colonial constitution, passed by the United Kingdom Parliament as the Victoria Constitution Act 1855, which establishes the Parliament as the state's law-making body for matters coming under state responsibility. The Victorian Constitution can be amended by the Parliament of Victoria, except for certain 'entrenched' provisions that require either an absolute majority in both houses, a three-fifths majority in both houses, or the approval of the Victorian people in a referendum, depending on the provision."),
gr.inputs.Textbox(lines=1, label="Question", default="When did Victoria enact its constitution?"),
],
gr.outputs.Textbox(label="Answer"))
if __name__ == "__main__":
    iface.launch()

(Several binary image files added and removed; contents not shown.)

14
demo/sepia_filter.py Normal file
View File

@ -0,0 +1,14 @@
import gradio as gr
import numpy as np
def sepia(img):
    sepia_filter = np.array([[.393, .769, .189],
                             [.349, .686, .168],
                             [.272, .534, .131]])
    sepia_img = img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()
    return sepia_img

iface = gr.Interface(sepia, gr.inputs.Image(shape=(200, 200)), "image")
if __name__ == "__main__":
    iface.launch()

130
demo/utils/FCN8s_keras.py Normal file
View File

@ -0,0 +1,130 @@
from keras.models import Sequential, Model
from keras.layers import *
from keras.activations import relu
from keras.initializers import RandomNormal
from keras.applications import *
import keras.backend as K
def FCN(num_output=21, input_shape=(500, 500, 3)):
"""Instantiate the FCN8s architecture with keras.
# Arguments
basenet: type of basene {'vgg16'}
trainable_base: Bool whether the basenet weights are trainable
num_output: number of classes
input_shape: input image shape
weights: pre-trained weights to load (None for training from scratch)
# Returns
A Keras model instance
"""
ROW_AXIS = 1
COL_AXIS = 2
CHANNEL_AXIS = 3
def _crop(target_layer, offset=(None, None), name=None):
"""Crop the bottom such that it has the same shape as target_layer."""
""" Use _keras_shape to prevent undefined output shape in Conv2DTranspose"""
def f(x):
width = x._keras_shape[ROW_AXIS]
height = x._keras_shape[COL_AXIS]
target_width = target_layer._keras_shape[ROW_AXIS]
target_height = target_layer._keras_shape[COL_AXIS]
cropped = Cropping2D(cropping=((offset[0], width - offset[0] - target_width), (offset[1], height - offset[1] - target_height)), name='{}'.format(name))(x)
return cropped
return f
input_tensor = Input(shape=input_shape)
pad1 = ZeroPadding2D(padding=(100, 100))(input_tensor)
conv1_1 = Conv2D(filters=64, kernel_size=(3, 3), activation='relu',
padding='valid', name='conv1_1')(pad1)
conv1_2 = Conv2D(filters=64, kernel_size=(3, 3), activation='relu',
padding='same', name='conv1_2')(conv1_1)
pool1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2),
padding='same', name='pool1')(conv1_2)
# Block 2
conv2_1 = Conv2D(filters=128, kernel_size=(3, 3),
activation='relu',
padding='same', name='conv2_1')(pool1)
conv2_2 = Conv2D(filters=128, kernel_size=(3, 3), activation='relu',
padding='same', name='conv2_2')(conv2_1)
pool2 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2),
padding='same', name='pool2')(conv2_2)
# Block 3
conv3_1 = Conv2D(filters=256, kernel_size=(3, 3), activation='relu',
padding='same', name='conv3_1')(pool2)
conv3_2 = Conv2D(filters=256, kernel_size=(3, 3), activation='relu',
padding='same', name='conv3_2')(conv3_1)
conv3_3 = Conv2D(filters=256, kernel_size=(3, 3), activation='relu',
padding='same', name='conv3_3')(conv3_2)
pool3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2),
padding='same', name='pool3')(conv3_3)
# Block 4
conv4_1 = Conv2D(filters=512, kernel_size=(3, 3), activation='relu',
padding='same', name='conv4_1')(pool3)
conv4_2 = Conv2D(filters=512, kernel_size=(3, 3), activation='relu',
padding='same', name='conv4_2')(conv4_1)
conv4_3 = Conv2D(filters=512, kernel_size=(3, 3), activation='relu',
padding='same', name='conv4_3')(conv4_2)
pool4 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2),
padding='same', name='pool4')(conv4_3)
# Block 5
conv5_1 = Conv2D(filters=512, kernel_size=(3, 3), activation='relu',
padding='same', name='conv5_1')(pool4)
conv5_2 = Conv2D(filters=512, kernel_size=(3, 3), activation='relu',
padding='same', name='conv5_2')(conv5_1)
conv5_3 = Conv2D(filters=512, kernel_size=(3, 3), activation='relu',
padding='same', name='conv5_3')(conv5_2)
pool5 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2),
padding='same', name='pool5')(conv5_3)
# fully conv
fc6 = Conv2D(filters=4096, kernel_size=(7, 7),
activation='relu', padding='valid',
name='fc6')(pool5)
drop6 = Dropout(0.5)(fc6)
fc7 = Conv2D(filters=4096, kernel_size=(1, 1),
activation='relu', padding='valid',
name='fc7')(drop6)
drop7 = Dropout(0.5)(fc7)
#basenet = VGG16_basenet()
# input
#input_tensor = Input(shape=input_shape)
# Get skip_layers=[drop7, pool4, pool3] from the base net: VGG16
#skip_layers = VGG16_basenet(input_tensor)
#drop7 = skip_layers[0]
score_fr = Conv2D(filters=num_output, kernel_size=(1, 1), padding='valid', name='score_fr')(drop7)
upscore2 = Conv2DTranspose(num_output, kernel_size=4, strides=2, use_bias=False, name='upscore2')(score_fr)
# scale pool4 skip for compatibility
#pool4 = skip_layers[1]
scale_pool4 = Lambda(lambda x: x * 0.01, name='scale_pool4')(pool4)
score_pool4 = Conv2D(filters=num_output, kernel_size=(1, 1),
padding='valid', name='score_pool4')(scale_pool4)
score_pool4c = _crop(upscore2, offset=(5, 5),
name='score_pool4c')(score_pool4)
fuse_pool4 = add([upscore2, score_pool4c])
upscore_pool4 = Conv2DTranspose(filters=num_output, kernel_size=(4, 4),
strides=(2, 2), padding='valid',
use_bias=False,
data_format=K.image_data_format(),
name='upscore_pool4')(fuse_pool4)
# scale pool3 skip for compatibility
#pool3 = skip_layers[2]
scale_pool3 = Lambda(lambda x: x * 0.0001, name='scale_pool3')(pool3)
score_pool3 = Conv2D(filters=num_output, kernel_size=(1, 1),
padding='valid', name='score_pool3')(scale_pool3)
score_pool3c = _crop(upscore_pool4, offset=(9, 9),
name='score_pool3c')(score_pool3)
fuse_pool3 = add([upscore_pool4, score_pool3c])
# score
upscore8 = Conv2DTranspose(filters=num_output, kernel_size=(16, 16),
strides=(8, 8), padding='valid',
use_bias=False,
data_format=K.image_data_format(),
name='upscore8')(fuse_pool3)
score = _crop(input_tensor, offset=(31, 31), name='score')(upscore8)
# model
model = Model(input_tensor, score, name='fcn_vgg16')
return model

75
demo/utils/bert.py Normal file
View File

@ -0,0 +1,75 @@
from __future__ import absolute_import, division, print_function
import collections
import logging
import math
import numpy as np
import torch
from pytorch_transformers import (WEIGHTS_NAME, BertConfig,
BertForQuestionAnswering, BertTokenizer)
from torch.utils.data import DataLoader, SequentialSampler, TensorDataset
from utils import (get_answer, input_to_squad_example,
squad_examples_to_features, to_list)
RawResult = collections.namedtuple("RawResult",
["unique_id", "start_logits", "end_logits"])
class QA:
def __init__(self,model_path: str):
self.max_seq_length = 384
self.doc_stride = 128
self.do_lower_case = True
self.max_query_length = 64
self.n_best_size = 20
self.max_answer_length = 30
self.model, self.tokenizer = self.load_model(model_path)
if torch.cuda.is_available():
self.device = 'cuda'
else:
self.device = 'cpu'
self.model.to(self.device)
self.model.eval()
def load_model(self,model_path: str,do_lower_case=False):
config = BertConfig.from_pretrained(model_path + "/bert_config.json")
tokenizer = BertTokenizer.from_pretrained(model_path, do_lower_case=do_lower_case)
model = BertForQuestionAnswering.from_pretrained(model_path, from_tf=False, config=config)
return model, tokenizer
def predict(self,passage :str,question :str):
example = input_to_squad_example(passage,question)
features = squad_examples_to_features(example,self.tokenizer,self.max_seq_length,self.doc_stride,self.max_query_length)
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long)
dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids,
all_example_index)
eval_sampler = SequentialSampler(dataset)
eval_dataloader = DataLoader(dataset, sampler=eval_sampler, batch_size=1)
all_results = []
for batch in eval_dataloader:
batch = tuple(t.to(self.device) for t in batch)
with torch.no_grad():
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2]
}
example_indices = batch[3]
outputs = self.model(**inputs)
for i, example_index in enumerate(example_indices):
eval_feature = features[example_index.item()]
unique_id = int(eval_feature.unique_id)
result = RawResult(unique_id = unique_id,
start_logits = to_list(outputs[0][i]),
end_logits = to_list(outputs[1][i]))
all_results.append(result)
answer = get_answer(example,features,all_results,self.n_best_size,self.max_answer_length,self.do_lower_case)
return answer

31
demo/utils/drive.py Normal file
View File

@ -0,0 +1,31 @@
import requests
def download_file_from_google_drive(id, destination):
    URL = "https://docs.google.com/uc?export=download"
    session = requests.Session()
    response = session.get(URL, params={'id': id}, stream=True)
    token = get_confirm_token(response)
    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)
    save_response_content(response, destination)

def get_confirm_token(response):
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None

def save_response_content(response, destination):
    CHUNK_SIZE = 32768
    with open(destination, "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
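For reference, a standalone call to this helper might look like the following (the file id and destination below are placeholders, not real values):
```python
from drive import download_file_from_google_drive

# Download a publicly shared Google Drive file to a local path.
download_file_from_google_drive("YOUR_FILE_ID", "model_weights.h5")
```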

View File

@ -0,0 +1,3 @@
pytorch-transformers==1.0.0
Flask==1.1.1
Flask-Cors==3.0.8

514
demo/utils/utils.py Normal file
View File

@ -0,0 +1,514 @@
from __future__ import absolute_import, division, print_function
import collections
import math
import numpy as np
import torch
from pytorch_transformers.tokenization_bert import (BasicTokenizer,
whitespace_tokenize)
from torch.utils.data import DataLoader, SequentialSampler, TensorDataset
class SquadExample(object):
"""
A single training/test example for the Squad dataset.
For examples without an answer, the start and end position are -1.
"""
def __init__(self,
qas_id,
question_text,
doc_tokens,
orig_answer_text=None,
start_position=None,
end_position=None):
self.qas_id = qas_id
self.question_text = question_text
self.doc_tokens = doc_tokens
self.orig_answer_text = orig_answer_text
self.start_position = start_position
self.end_position = end_position
def __str__(self):
return self.__repr__()
def __repr__(self):
s = ""
s += "qas_id: %s" % (self.qas_id)
s += ", question_text: %s" % (
self.question_text)
s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
if self.start_position:
s += ", start_position: %d" % (self.start_position)
if self.end_position:
s += ", end_position: %d" % (self.end_position)
return s
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self,
unique_id,
example_index,
doc_span_index,
tokens,
token_to_orig_map,
token_is_max_context,
input_ids,
input_mask,
segment_ids,
paragraph_len,
start_position=None,
end_position=None,):
self.unique_id = unique_id
self.example_index = example_index
self.doc_span_index = doc_span_index
self.tokens = tokens
self.token_to_orig_map = token_to_orig_map
self.token_is_max_context = token_is_max_context
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.paragraph_len = paragraph_len
self.start_position = start_position
self.end_position = end_position
def input_to_squad_example(passage, question):
"""Convert input passage and question into a SquadExample."""
def is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
paragraph_text = passage
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
char_to_word_offset.append(len(doc_tokens) - 1)
qas_id = 0
question_text = question
start_position = None
end_position = None
orig_answer_text = None
example = SquadExample(
qas_id=qas_id,
question_text=question_text,
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position)
return example
def _check_is_max_context(doc_spans, cur_span_index, position):
"""Check if this is the 'max context' doc span for the token."""
# Because of the sliding window approach taken to scoring documents, a single
# token can appear in multiple documents. E.g.
# Doc: the man went to the store and bought a gallon of milk
# Span A: the man went to the
# Span B: to the store and bought
# Span C: and bought a gallon of
# ...
#
# Now the word 'bought' will have two scores from spans B and C. We only
# want to consider the score with "maximum context", which we define as
# the *minimum* of its left and right context (the *sum* of left and
# right context will always be the same, of course).
#
# In the example the maximum context for 'bought' would be span C since
# it has 1 left context and 3 right context, while span B has 4 left context
# and 0 right context.
best_score = None
best_span_index = None
for (span_index, doc_span) in enumerate(doc_spans):
end = doc_span.start + doc_span.length - 1
if position < doc_span.start:
continue
if position > end:
continue
num_left_context = position - doc_span.start
num_right_context = end - position
score = min(num_left_context, num_right_context) + 0.01 * doc_span.length
if best_score is None or score > best_score:
best_score = score
best_span_index = span_index
return cur_span_index == best_span_index
def squad_examples_to_features(example, tokenizer, max_seq_length,
doc_stride, max_query_length,cls_token_at_end=False,
cls_token='[CLS]', sep_token='[SEP]', pad_token=0,
sequence_a_segment_id=0, sequence_b_segment_id=1,
cls_token_segment_id=0, pad_token_segment_id=0,
mask_padding_with_zero=True):
"""Loads a data file into a list of `InputBatch`s."""
unique_id = 1000000000
# cnt_pos, cnt_neg = 0, 0
# max_N, max_M = 1024, 1024
# f = np.zeros((max_N, max_M), dtype=np.float32)
example_index = 0
features = []
# if example_index % 100 == 0:
# logger.info('Converting %s/%s pos %s neg %s', example_index, len(examples), cnt_pos, cnt_neg)
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > max_query_length:
query_tokens = query_tokens[0:max_query_length]
tok_to_orig_index = []
orig_to_tok_index = []
all_doc_tokens = []
for (i, token) in enumerate(example.doc_tokens):
orig_to_tok_index.append(len(all_doc_tokens))
sub_tokens = tokenizer.tokenize(token)
for sub_token in sub_tokens:
tok_to_orig_index.append(i)
all_doc_tokens.append(sub_token)
# The -3 accounts for [CLS], [SEP] and [SEP]
max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
# We can have documents that are longer than the maximum sequence length.
# To deal with this we do a sliding window approach, where we take chunks
# of the up to our max length with a stride of `doc_stride`.
_DocSpan = collections.namedtuple( # pylint: disable=invalid-name
"DocSpan", ["start", "length"])
doc_spans = []
start_offset = 0
while start_offset < len(all_doc_tokens):
length = len(all_doc_tokens) - start_offset
if length > max_tokens_for_doc:
length = max_tokens_for_doc
doc_spans.append(_DocSpan(start=start_offset, length=length))
if start_offset + length == len(all_doc_tokens):
break
start_offset += min(length, doc_stride)
for (doc_span_index, doc_span) in enumerate(doc_spans):
tokens = []
token_to_orig_map = {}
token_is_max_context = {}
segment_ids = []
# CLS token at the beginning
if not cls_token_at_end:
tokens.append(cls_token)
segment_ids.append(cls_token_segment_id)
# Query
for token in query_tokens:
tokens.append(token)
segment_ids.append(sequence_a_segment_id)
# SEP token
tokens.append(sep_token)
segment_ids.append(sequence_a_segment_id)
# Paragraph
for i in range(doc_span.length):
split_token_index = doc_span.start + i
token_to_orig_map[len(tokens)] = tok_to_orig_index[split_token_index]
is_max_context = _check_is_max_context(doc_spans, doc_span_index,
split_token_index)
token_is_max_context[len(tokens)] = is_max_context
tokens.append(all_doc_tokens[split_token_index])
segment_ids.append(sequence_b_segment_id)
paragraph_len = doc_span.length
# SEP token
tokens.append(sep_token)
segment_ids.append(sequence_b_segment_id)
# CLS token at the end
if cls_token_at_end:
tokens.append(cls_token)
segment_ids.append(cls_token_segment_id)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1 if mask_padding_with_zero else 0] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(pad_token)
input_mask.append(0 if mask_padding_with_zero else 1)
segment_ids.append(pad_token_segment_id)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
start_position = None
end_position = None
features.append(
InputFeatures(
unique_id=unique_id,
example_index=example_index,
doc_span_index=doc_span_index,
tokens=tokens,
token_to_orig_map=token_to_orig_map,
token_is_max_context=token_is_max_context,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
paragraph_len=paragraph_len,
start_position=start_position,
end_position=end_position))
unique_id += 1
return features
def to_list(tensor):
return tensor.detach().cpu().tolist()
def _get_best_indexes(logits, n_best_size):
"""Get the n-best logits from a list."""
index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
best_indexes = []
for i in range(len(index_and_score)):
if i >= n_best_size:
break
best_indexes.append(index_and_score[i][0])
return best_indexes
RawResult = collections.namedtuple("RawResult",["unique_id", "start_logits", "end_logits"])
def get_final_text(pred_text, orig_text, do_lower_case, verbose_logging=False):
"""Project the tokenized prediction back to the original text."""
# When we created the data, we kept track of the alignment between original
# (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
# now `orig_text` contains the span of our original text corresponding to the
# span that we predicted.
#
# However, `orig_text` may contain extra characters that we don't want in
# our prediction.
#
# For example, let's say:
# pred_text = steve smith
# orig_text = Steve Smith's
#
# We don't want to return `orig_text` because it contains the extra "'s".
#
# We don't want to return `pred_text` because it's already been normalized
# (the SQuAD eval script also does punctuation stripping/lower casing but
# our tokenizer does additional normalization like stripping accent
# characters).
#
# What we really want to return is "Steve Smith".
#
# Therefore, we have to apply a semi-complicated alignment heuristic between
# `pred_text` and `orig_text` to get a character-to-character alignment. This
# can fail in certain cases in which case we just return `orig_text`.
def _strip_spaces(text):
ns_chars = []
ns_to_s_map = collections.OrderedDict()
for (i, c) in enumerate(text):
if c == " ":
continue
ns_to_s_map[len(ns_chars)] = i
ns_chars.append(c)
ns_text = "".join(ns_chars)
return (ns_text, ns_to_s_map)
# We first tokenize `orig_text`, strip whitespace from the result
# and `pred_text`, and check if they are the same length. If they are
# NOT the same length, the heuristic has failed. If they are the same
# length, we assume the characters are one-to-one aligned.
tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
tok_text = " ".join(tokenizer.tokenize(orig_text))
start_position = tok_text.find(pred_text)
if start_position == -1:
return orig_text
end_position = start_position + len(pred_text) - 1
(orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
(tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
if len(orig_ns_text) != len(tok_ns_text):
return orig_text
# We then project the characters in `pred_text` back to `orig_text` using
# the character-to-character alignment.
tok_s_to_ns_map = {}
for (i, tok_index) in tok_ns_to_s_map.items():
tok_s_to_ns_map[tok_index] = i
orig_start_position = None
if start_position in tok_s_to_ns_map:
ns_start_position = tok_s_to_ns_map[start_position]
if ns_start_position in orig_ns_to_s_map:
orig_start_position = orig_ns_to_s_map[ns_start_position]
if orig_start_position is None:
return orig_text
orig_end_position = None
if end_position in tok_s_to_ns_map:
ns_end_position = tok_s_to_ns_map[end_position]
if ns_end_position in orig_ns_to_s_map:
orig_end_position = orig_ns_to_s_map[ns_end_position]
if orig_end_position is None:
return orig_text
output_text = orig_text[orig_start_position:(orig_end_position + 1)]
return output_text
def _compute_softmax(scores):
"""Compute softmax probability over raw logits."""
if not scores:
return []
max_score = None
for score in scores:
if max_score is None or score > max_score:
max_score = score
exp_scores = []
total_sum = 0.0
for score in scores:
x = math.exp(score - max_score)
exp_scores.append(x)
total_sum += x
probs = []
for score in exp_scores:
probs.append(score / total_sum)
return probs
def get_answer(example, features, all_results, n_best_size,
max_answer_length, do_lower_case):
example_index_to_features = collections.defaultdict(list)
for feature in features:
example_index_to_features[feature.example_index].append(feature)
unique_id_to_result = {}
for result in all_results:
unique_id_to_result[result.unique_id] = result
_PrelimPrediction = collections.namedtuple( "PrelimPrediction",["feature_index", "start_index", "end_index", "start_logit", "end_logit"])
example_index = 0
features = example_index_to_features[example_index]
prelim_predictions = []
for (feature_index, feature) in enumerate(features):
result = unique_id_to_result[feature.unique_id]
start_indexes = _get_best_indexes(result.start_logits, n_best_size)
end_indexes = _get_best_indexes(result.end_logits, n_best_size)
for start_index in start_indexes:
for end_index in end_indexes:
# We could hypothetically create invalid predictions, e.g., predict
# that the start of the span is in the question. We throw out all
# invalid predictions.
if start_index >= len(feature.tokens):
continue
if end_index >= len(feature.tokens):
continue
if start_index not in feature.token_to_orig_map:
continue
if end_index not in feature.token_to_orig_map:
continue
if not feature.token_is_max_context.get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
feature_index=feature_index,
start_index=start_index,
end_index=end_index,
start_logit=result.start_logits[start_index],
end_logit=result.end_logits[end_index]))
prelim_predictions = sorted(prelim_predictions,key=lambda x: (x.start_logit + x.end_logit),reverse=True)
_NbestPrediction = collections.namedtuple("NbestPrediction",
["text", "start_logit", "end_logit","start_index","end_index"])
seen_predictions = {}
nbest = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
feature = features[pred.feature_index]
orig_doc_start = -1
orig_doc_end = -1
if pred.start_index > 0: # this is a non-null prediction
tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
orig_doc_start = feature.token_to_orig_map[pred.start_index]
orig_doc_end = feature.token_to_orig_map[pred.end_index]
orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
tok_text = " ".join(tok_tokens)
# De-tokenize WordPieces that have been split off.
tok_text = tok_text.replace(" ##", "")
tok_text = tok_text.replace("##", "")
# Clean whitespace
tok_text = tok_text.strip()
tok_text = " ".join(tok_text.split())
orig_text = " ".join(orig_tokens)
final_text = get_final_text(tok_text, orig_text,do_lower_case)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
else:
final_text = ""
seen_predictions[final_text] = True
nbest.append(
_NbestPrediction(
text=final_text,
start_logit=pred.start_logit,
end_logit=pred.end_logit,
start_index=orig_doc_start,
end_index=orig_doc_end))
if not nbest:
nbest.append(_NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0,start_index=-1,
end_index=-1))
assert len(nbest) >= 1
total_scores = []
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
probs = _compute_softmax(total_scores)
answer = {"answer" : nbest[0].text,
"start" : nbest[0].start_index,
"end" : nbest[0].end_index,
"confidence" : probs[0],
"document" : example.doc_tokens
}
return answer

(Two binary image files added; contents not shown.)

View File

@ -0,0 +1,3 @@
input_0,output_0
input_2020-10-26-13-34-49.png,"{'confidences': [{'confidence': 1, 'label': '5'}, {'confidence': 0, 'label': '0'}, {'confidence': 0, 'label': '1'}], 'label': '5'}"
input_2020-10-26-13-41-52.png,"{'confidences': [{'confidence': 1, 'label': '2'}, {'confidence': 0, 'label': '0'}, {'confidence': 0, 'label': '1'}], 'label': '2'}"

View File

@ -120,7 +120,7 @@ gradio/static/js/vendor/wavesurfer.min.js
gradio/static/js/vendor/webcam.min.js
gradio/static/js/vendor/white-theme.js
gradio/templates/index.html
test/test_diff_texts.py
test/test_demos.py
test/test_inputs.py
test/test_interfaces.py
test/test_interpretation.py

View File

@ -46,7 +46,7 @@ const radio = {
},
load_example: function(data) {
let child = this.choices.indexOf(data) + 1;
this.target.find("input:nth-child("+child+")").prop("checked", true);
this.target.find("label:nth-child("+child+") input").prop("checked", true);
this.target.find("input").button("refresh");
}
}

123
readme_template.md Normal file
View File

@ -0,0 +1,123 @@
[![CircleCI](https://circleci.com/gh/gradio-app/gradio.svg?style=svg)](https://circleci.com/gh/gradio-app/gradio) [![PyPI version](https://badge.fury.io/py/gradio.svg)](https://badge.fury.io/py/gradio)
# Welcome to Gradio
Quickly create customizable UI components around your models. Gradio makes it easy for you to "play around" with your model in your browser by dragging-and-dropping in your own images, pasting your own text, recording your own voice, etc. and seeing what the model outputs.
![Interface montage](demo/screenshots/montage.gif)
Gradio is useful for:
* Creating demos of your machine learning code for clients / collaborators / users
* Getting feedback on model performance from users
* Debugging your model interactively during development
## Getting Started
You can find an interactive version of this README at [https://gradio.app/getting_started](https://gradio.app/getting_started).
### Quick Start
To get Gradio running with a simple example, follow these three steps:
1. Install Gradio from pip.
```bash
pip install gradio
```
2. Run the code below as a Python script or in a Python notebook (or in a [colab notebook](https://colab.research.google.com/drive/18ODkJvyxHutTN0P5APWyGFO_xwNcgHDZ?usp=sharing)).
$code_hello_world
3. The interface below will appear automatically within the Python notebook, or open in a browser at [http://localhost:7860](http://localhost:7860/) if running from a script.
$demo_hello_world
### The Interface
Gradio can wrap almost any Python function with an easy to use interface. That function could be anything from a simple tax calculator to a pretrained model.
The core `Interface` class is initialized with three parameters:
- `fn`: the function to wrap
- `inputs`: the input component type(s)
- `outputs`: the output component type(s)
With these three arguments, we can quickly create interfaces and `launch()` them. But what if you want to change how the UI components look or behave?
### Customizable Components
What if we want to customize the input text field - for example, to make it larger and add a text hint? If we use the actual input class for `Textbox` instead of the string shortcut, we have access to much more customizability. To see a list of all the components we support and how you can customize them, check out the [Docs](https://gradio.app/docs).
$code_hello_world_2
$demo_hello_world_2
### Multiple Inputs and Outputs
Let's say we had a much more complex function, with multiple inputs and outputs. In the example below, we have a function that takes a string, boolean, and number, and returns a string and number. Take a look at how we pass a list of input and output components.
$code_hello_world_3
$demo_hello_world_3
We simply wrap the components in a list. Furthermore, if we wanted to compare multiple functions that have the same input and return types, we can even pass a list of functions for quick comparison.
### Working with Images
Let's try an image to image function. When using the `Image` component, your function will receive a numpy array of your specified size, with the shape `(width, height, 3)`, where the last dimension represents the RGB values. We'll return an image as well in the form of a numpy array.
$code_sepia_filter
$demo_sepia_filter
Additionally, our `Image` input interface comes with an 'edit' button which opens tools for cropping, flipping, rotating, drawing over, and applying filters to images. We've found that manipulating images in this way will often reveal hidden flaws in a model.
### Example Data
You can provide example data that a user can easily load into the model. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you provide a nested list to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs).
$code_calculator
$demo_calculator
### Flagging
Underneath the output interfaces, there is a button marked "Flag". When a user testing your model sees an input with interesting output, such as erroneous or unexpected model behavior, they can flag the input for review. Within the directory provided by the `flagging_dir=` argument to the Interface constructor, a CSV file will log the flagged inputs. If the interface involves file inputs, such as for Image and Audio interfaces, folders will be created to store those flagged inputs as well.
You can review these flagged inputs by manually exploring the flagging directory, or load them into the Gradio interface by pointing the `examples=` argument to the flagged CSV file.
### Interpretation
Most models are black boxes, meaning the internal logic of the function is hidden from the end user. To encourage transparency, we've added support for interpretation so that users can understand which parts of the input are responsible for the output. Take a look at the simple interface below:
$code_gender_sentence_default_interpretation
$demo_gender_sentence_default_interpretation
Notice the `interpretation` keyword argument. We're going to use Gradio's default interpreter here. After you submit and click Interpret, you'll see that the interface automatically highlights in orange the parts of the text that contributed to the final output! The parts that conflict with the output are highlighted in blue.
Gradio's default interpretation works with single output type interfaces, where the output is either a Label or Number. We're working on expanding the default interpreter to be much more customizable and support more interfaces.
You can also write your own interpretation function. The demo below adds custom interpretation to the previous demo. This function will take the same inputs as the main wrapped function. The output of this interpretation function will be used to highlight the input of each input interface - therefore the number of outputs here corresponds to the number of input interfaces. To see the format for interpretation for each input interface, check the [Docs](https://gradio.app/docs).
$code_gender_sentence_custom_interpretation
$demo_gender_sentence_custom_interpretation
## Contributing:
If you would like to contribute and your contribution is small, you can directly open a pull request (PR). If you would like to contribute a larger feature, we recommend first creating an issue with a proposed design for discussion. Please see our contributing guidelines for more info.
## License:
Gradio is licensed under the Apache License 2.0
## See more:
You can find many more examples (like GPT-2, model comparison, multiple inputs, and numerical interfaces) as well as more info on usage on our website: www.gradio.app
See, also, the accompanying paper: ["Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild"](https://arxiv.org/pdf/1906.02569.pdf), *ICML HILL 2019*, and please use the citation below.
```
@article{abid2019gradio,
title={Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild},
author={Abid, Abubakar and Abdalla, Ali and Abid, Ali and Khan, Dawood and Alfozan, Abdulrahman and Zou, James},
journal={arXiv preprint arXiv:1906.02569},
year={2019}
}
```

24
render_readme.py Normal file
View File

@ -0,0 +1,24 @@
import re
import os
from string import Template
with open("readme_template.md") as readme_file:
    readme = readme_file.read()

codes = re.findall(r'\$code_([^\s]*)', readme)
demos = re.findall(r'\$demo_([^\s]*)', readme)

template_dict = {}
for code_src in codes:
    with open(os.path.join("demo", code_src + ".py")) as code_file:
        python_code = code_file.read().replace('if __name__ == "__main__":\n    iface.launch()', "iface.launch()")
    template_dict["code_" + code_src] = "```python\n" + python_code + "\n```"
for demo_src in demos:
    template_dict["demo_" + demo_src] = "![" + demo_src + " interface](demo/screenshots/" + demo_src + "/1.gif)"

readme_template = Template(readme)
output_readme = readme_template.substitute(template_dict)
with open("README.md", "w") as readme_md:
    readme_md.write(output_readme)