mirror of https://github.com/gradio-app/gradio.git
synced 2024-11-21 01:01:05 +08:00

SEO improvements to guides (#2915)

* replace underscores with dashes and redirect old URLs
* TLDRs, listicles, and in-site cross-linking
* add canonical tags to all pages
* changelog
* shorten intro

Co-authored-by: Abubakar Abid <abubakar@huggingface.co>

This commit is contained in:
parent db54b7b76a
commit 625ccae34c
@@ -14,6 +14,7 @@ No changes to highlight.

* Fixes issue where markdown support in chatbot breaks older demos by [@dawoodkhan82](https://github.com/dawoodkhan82) in [PR 3006](https://github.com/gradio-app/gradio/pull/3006)

## Documentation Changes:

+* SEO improvements to guides by [@aliabd](https://github.com/aliabd) in [PR 2915](https://github.com/gradio-app/gradio/pull/2915)
* Use `gr.LinePlot` for the `blocks_kinematics` demo by [@freddyaboulton](https://github.com/freddyaboulton) in [PR 2998](https://github.com/gradio-app/gradio/pull/2998)

## Testing and Infrastructure Changes:
@@ -1,19 +1,32 @@

# Key Features

-Let's go through some of the most popular features of Gradio!
+Let's go through some of the most popular features of Gradio! Here are Gradio's key features:

1. [Adding example inputs](#example-inputs)
2. [Passing custom error messages](#errors)
3. [Adding descriptive content](#descriptive-content)
4. [Setting up flagging](#flagging)
5. [Preprocessing and postprocessing](#preprocessing-and-postprocessing)
6. [Styling demos](#styling)
7. [Queuing users](#queuing)
8. [Iterative outputs](#iterative-outputs)
9. [Progress bars](#progress-bars)
10. [Batch functions](#batch-functions)

## Example Inputs

-You can provide example data that a user can easily load into `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs).
+You can provide example data that a user can easily load into `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs#components).

$code_calculator
$demo_calculator

You can load a large dataset into the examples to browse and interact with the dataset through Gradio. The examples will be automatically paginated (you can configure this through the `examples_per_page` argument of `Interface`).
Continue learning about examples in the [More On Examples](https://gradio.app/more-on-examples) guide.

## Errors

-You may wish to pass custom error messages to the user. To do so, raise a `gr.Error("custom message")` to display an error message. If you try to divide by zero in the calculator demo above, a popup modal will display the custom error message.
+You may wish to pass custom error messages to the user. To do so, raise a `gr.Error("custom message")` to display an error message. If you try to divide by zero in the calculator demo above, a popup modal will display the custom error message. Learn more about Error in the [docs](https://gradio.app/docs#errors).
## Descriptive Content
@@ -1,5 +1,16 @@

# Sharing Your App

How to share your Gradio app:

1. [Sharing demos with the share parameter](#sharing-demos)
2. [Hosting on HF Spaces](#hosting-on-hf-spaces)
3. [Embedding hosted spaces](#embedding-hosted-spaces)
4. [Embedding with web components](#embedding-with-web-components)
5. [Using the API page](#api-page)
6. [Adding authentication to the page](#authentication)
7. [Accessing Network Requests](#accessing-the-network-request-directly)
8. [Mounting within FastAPI](#mounting-within-another-fastapi-app)

## Sharing Demos

Gradio demos can be easily shared publicly by setting `share=True` in the `launch()` method. Like this:
@@ -1,5 +1,7 @@

# Interface State

This guide covers how State is handled in Gradio. Learn the difference between Global and Session states, and how to use both.

## Global State

Your function may use data that persists beyond a single function call. If the data is something accessible to all function calls and all users, you can create a variable outside the function call and access it inside the function. For example, you may load a large model outside the function and use it inside the function so that every function call does not need to reload the model.
@@ -1,5 +1,7 @@

# Reactive Interfaces

This guide covers how to get Gradio interfaces to refresh automatically or continuously stream data.

## Live Interfaces

You can make interfaces automatically refresh by setting `live=True` in the interface. Now the interface will recalculate as soon as the user input changes.
@@ -1,8 +1,10 @@

-# More on Examples & Flagging
+# More on Examples

This guide covers what more you can do with Examples: Loading examples from a directory, providing partial examples, and caching. If Examples is new to you, check out the intro in the [Key Features](../key-features/#example-inputs) guide.

## Providing Examples

-As covered in the Quickstart, adding examples to an Interface is as easy as providing a list of lists to the `examples`
+As covered in the [Key Features](../key-features/#example-inputs) guide, adding examples to an Interface is as easy as providing a list of lists to the `examples`
keyword argument.
Each sublist is a data sample, where each element corresponds to an input of the prediction function.
The inputs must be ordered in the same order as the prediction function expects them.
@@ -1,5 +1,7 @@

# Advanced Interface Features

There's more to cover on the [Interface](https://gradio.app/docs#interface) class. This guide covers all the advanced features: Using [Interpretation](https://gradio.app/docs#interpretation), custom styling, loading from the [Hugging Face Hub](https://hf.co), and using [Parallel](https://gradio.app/docs#parallel) and [Series](https://gradio.app/docs#series).

## Interpreting your Predictions

Most models are black boxes such that the internal logic of the function is hidden from the end user. To encourage transparency, we've made it very easy to add interpretation to your model by simply setting the `interpretation` keyword in the `Interface` class to `default`. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below which shows an image classifier that also includes interpretation:
@@ -23,6 +25,8 @@ You can also write your own interpretation function. The demo below adds custom

$code_gender_sentence_custom_interpretation

Learn more about Interpretation in the [docs](https://gradio.app/docs#interpretation).

## Custom Styling

If you'd like to have more fine-grained control over any aspect of your demo, you can also write your own css or pass in a filepath to a css file, with the `css` parameter of the `Interface` class.
@@ -39,7 +43,7 @@ gr.Interface(..., css="body {background-image: url('file=clouds.jpg')}")

## Loading Hugging Face Models and Spaces

-Gradio integrates nicely with the Hugging Face Hub, allowing you to load models and Spaces with just one line of code. To use this, simply use the `load()` method in the `Interface` class. So:
+Gradio integrates nicely with the [Hugging Face Hub](https://hf.co), allowing you to load models and Spaces with just one line of code. To use this, simply use the `load()` method in the `Interface` class. So:

- To load any model from the Hugging Face Hub and create an interface around it, you pass `"model/"` or `"huggingface/"` followed by the model name, like these examples:
@@ -88,3 +92,5 @@ gr.Series(generator, translator).launch() # this demo generates text, then tran
```

And of course, you can also mix `Parallel` and `Series` together whenever that makes sense!

Learn more about Parallel and Series in the [docs](https://gradio.app/docs#parallel).
@@ -1,6 +1,6 @@

# Blocks and Event Listeners

-We took a quick look at Blocks in the Quickstart. Let's dive deeper.
+We took a quick look at Blocks in the [Quickstart](https://gradio.app/quickstart/#blocks-more-flexibility-and-control). Let's dive deeper. This guide will cover how Blocks are structured, event listeners and their types, running events continuously, updating configurations, and using dictionaries vs lists.

## Blocks Structure
@@ -28,7 +28,7 @@ Take a look at the demo below:

$code_blocks_hello
$demo_blocks_hello

-Instead of being triggered by a click, the `welcome` function is triggered by typing in the Textbox `inp`. This is due to the `change()` event listener. Different Components support different event listeners. For example, the `Video` Component supports a `play()` event listener, triggered when a user presses play. See the [Docs](http://gradio.app/docs) for the event listeners for each Component.
+Instead of being triggered by a click, the `welcome` function is triggered by typing in the Textbox `inp`. This is due to the `change()` event listener. Different Components support different event listeners. For example, the `Video` Component supports a `play()` event listener, triggered when a user presses play. See the [Docs](http://gradio.app/docs#components) for the event listeners for each Component.
## Running Events Continuously
@@ -22,6 +22,8 @@ with gr.Blocks() as demo:
    btn2 = gr.Button("Button 2")
```

Learn more about Rows in the [docs](https://gradio.app/docs/#row).
## Columns and Nesting

Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:
@@ -33,6 +35,8 @@ See how the first column has two Textboxes arranged vertically. The second colum

Columns have a `min_width` parameter as well (320 pixels by default). This prevents adjacent columns from becoming too narrow on mobile screens.

Learn more about Columns in the [docs](https://gradio.app/docs/#column).
## Tabs and Accordions

You can also create Tabs using the `with gradio.Tab('tab_name'):` clause. Any component created inside of a `with gradio.Tab('tab_name'):` context appears in that tab. Consecutive Tab clauses are grouped together so that a single tab can be selected at one time, and only the components within that Tab's context are shown.

@@ -44,6 +48,7 @@ $demo_blocks_flipper

Also note the `gradio.Accordion('label')` in this example. The Accordion is a layout that can be toggled open or closed. Like `Tabs`, it is a layout element that can selectively hide or show content. Any components that are defined inside of a `with gradio.Accordion('label'):` will be hidden or shown when the accordion's toggle icon is clicked.

Learn more about [Tabs](https://gradio.app/docs/#tab) and [Accordions](https://gradio.app/docs/#accordion) in the docs.

## Visibility
@@ -1,5 +1,7 @@

# State in Blocks

We covered [State in Interfaces](https://gradio.app/interface-state); this guide takes a look at state in Blocks, which works mostly the same.

## Global State

Global state in Blocks works the same as in Interface. Any variable created outside a function call is a reference shared between all users.
@@ -25,5 +27,7 @@ Let's see how we do each of the 3 steps listed above in this game:

With more complex apps, you will likely have many State variables storing session state in a single Blocks app.

Learn more about `State` in the [docs](https://gradio.app/docs#state).
@@ -1,5 +1,7 @@

# Custom JS and CSS

This guide covers how to style Blocks with more flexibility, as well as adding JavaScript code to event listeners.

## Custom CSS

For additional styling ability, you can pass any CSS to your app using the `css=` kwarg.
@@ -2,7 +2,7 @@

Tags: TRANSLATION, HUB, SPACES

-**Prerequisite**: This Guide builds on the Blocks Introduction. Make sure to [read that guide first](/introduction_to_blocks).
+**Prerequisite**: This Guide builds on the Blocks Introduction. Make sure to [read that guide first](https://gradio.app/quickstart/#blocks-more-flexibility-and-control).

## Introduction
@@ -11,13 +11,17 @@ Such models are perfect to use with Gradio's *sketchpad* input, so in this tutor

<iframe src="https://abidlabs-draw2.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>

-Let's get started!
+Let's get started! This guide covers how to build a pictionary app (step-by-step):

1. [Set up the Sketch Recognition Model](#1-set-up-the-sketch-recognition-model)
2. [Define a `predict` function](#2-define-a-predict-function)
3. [Create a Gradio Interface](#3-create-a-gradio-interface)

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started). To use the pretrained sketchpad model, also install `torch`.
-## Step 1 — Setting up the Sketch Recognition Model
+## 1. Set up the Sketch Recognition Model

First, you will need a sketch recognition model. Since many researchers have already trained their own models on the Quick Draw dataset, we will use a pretrained model in this tutorial. Our model is a light 1.5 MB model trained by Nate Raw, that [you can download here](https://huggingface.co/spaces/nateraw/quickdraw/blob/main/pytorch_model.bin).
@@ -47,7 +51,7 @@ model.load_state_dict(state_dict, strict=False)
model.eval()
```

-## Step 2 — Defining a `predict` function
+## 2. Define a `predict` function

Next, you will need to define a function that takes in the *user input*, which in this case is a sketched image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class names and values are confidence probabilities. We will load the class names from this [text file](https://huggingface.co/spaces/nateraw/quickdraw/blob/main/class_names.txt).
@@ -76,7 +80,7 @@ Then, the function converts the image to a PyTorch `tensor`, passes it through t

* `confidences`: the top five predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities

-## Step 3 — Creating a Gradio Interface
+## 3. Create a Gradio Interface

Now that we have our predictive function set up, we can create a Gradio Interface around it.
@@ -17,9 +17,15 @@ Chatbots are *stateful*, meaning that the model's prediction can change dependin

### Prerequisites

-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers` and `torch`.
+Make sure you have the `gradio` Python package already [installed](/quickstart). To use a pretrained chatbot model, also install `transformers` and `torch`.

-## Step 1 — Setting up the Chatbot Model

Let's get started! Here's how to build your own chatbot:

1. [Set up the Chatbot Model](#1-set-up-the-chatbot-model)
2. [Define a `predict` function](#2-define-a-predict-function)
3. [Create a Gradio Interface](#3-create-a-gradio-interface)

+## 1. Set up the Chatbot Model
First, you will need a chatbot model that you have either trained yourself or downloaded as a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.

@@ -33,7 +39,7 @@ tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
```
-## Step 2 — Defining a `predict` function
+## 2. Define a `predict` function

Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.

@@ -66,7 +72,7 @@ Then, the function tokenizes the input and concatenates it with the tokens corre

* `response`: a list of tuples of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
* `history`: the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-## Step 3 — Creating a Gradio Interface
+## 3. Create a Gradio Interface

Now that we have our predictive function set up, we can create a Gradio Interface around it.

@@ -93,7 +99,7 @@ This produces the following interface, which you can try right here in your brow

And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:

-* Gradio's ["Getting Started" guide](https://gradio.app/getting_started/)
+* Gradio's [Quickstart guide](https://gradio.app/quickstart/)
* The final [chatbot demo](https://huggingface.co/spaces/abidlabs/chatbot-stylized) and [complete code](https://huggingface.co/spaces/abidlabs/chatbot-stylized/tree/main) (on Hugging Face Spaces)
@@ -8,7 +8,13 @@ The purpose of this guide is to illustrate how to add a new component, which you

Make sure you have followed the [CONTRIBUTING.md](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md) guide in order to set up your local development environment (both client and server side).

-## Step 1 - Create a New Python Class and Import it

Here's how to create a new component on Gradio:

1. [Create a New Python Class and Import it](#1-create-a-new-python-class-and-import-it)
2. [Create a New Svelte Component](#2-create-a-new-svelte-component)
3. [Create a New Demo](#3-create-a-new-demo)
## 1. Create a New Python Class and Import it
The first thing to do is to create a new class within the [components.py](https://github.com/gradio-app/gradio/blob/main/gradio/components.py) file. This Python class should inherit from a list of base components and should be placed within the file in the correct section with respect to the type of component you want to add (e.g. input, output or static components).
In general, it is advisable to take an existing component as a reference (e.g. [TextBox](https://github.com/gradio-app/gradio/blob/main/gradio/components.py#L290)), copy its code as a skeleton and then adapt it to the case at hand.

@@ -142,7 +148,7 @@ from gradio.components import (

```
-### Step 1.1 - Writing Unit Test for Python Class
+### 1.1 Writing Unit Tests for the Python Class

When developing a new component, you should also write a suite of unit tests for it. The tests should be placed in the [gradio/test/test_components.py](https://github.com/gradio-app/gradio/blob/main/test/test_components.py) file. Again, as above, take a cue from the tests of other components (e.g. [Textbox](https://github.com/gradio-app/gradio/blob/main/test/test_components.py)) and add as many unit tests as you think are appropriate to test all the different aspects and functionalities of the new component. For example, the following tests were added for the ColorPicker component:

@@ -199,7 +205,7 @@ class TestColorPicker(unittest.TestCase):
        self.assertEqual(component.get_config().get("value"), "#000000")
```
-## Step 2 - Create a New Svelte Component
+## 2. Create a New Svelte Component

Let's see the steps you need to follow to create the frontend of your new component and to map it to its Python code:

- Create a new UI-side Svelte component and figure out where to place it. The options are: create a package for the new component in the [ui/packages folder](https://github.com/gradio-app/gradio/tree/main/ui/packages) if it is completely different from existing components, or add the new component to an existing package, such as the [form package](https://github.com/gradio-app/gradio/tree/main/ui/packages/form). The ColorPicker component, for example, was included in the form package because it is similar to components that already exist.

@@ -367,11 +373,11 @@ colorpicker: () => import("./ColorPicker"),
}
```
-### Step 2.1 . Writing Unit Test for Svelte Component
+### 2.1 Writing Unit Tests for the Svelte Component

When developing a new component, you should also write a suite of unit tests for it. The tests should be placed in the new component's folder in a file named MyAwesomeComponent.test.ts. Again, as above, take a cue from the tests of other components (e.g. [Textbox.test.ts](https://github.com/gradio-app/gradio/blob/main/ui/packages/app/src/components/Textbox/Textbox.test.ts)) and add as many unit tests as you think are appropriate to test all the different aspects and functionalities of the new component.
-### Step 3 - Create a New Demo
+### 3. Create a New Demo

The last step is to create a demo in the [gradio/demo folder](https://github.com/gradio-app/gradio/tree/main/demo), which will use the newly added component. Again, the suggestion is to reference an existing demo. Write the code for the demo in a file called run.py, add the necessary requirements and an image showing the application interface. Finally, add a gif showing its usage.
You can take a look at the [demo](https://github.com/gradio-app/gradio/tree/main/demo/color_picker) created for the ColorPicker, where an icon and a color selected through the new component are taken as input, and the same icon colored with the selected color is returned as output.
@@ -2,8 +2,8 @@

Tags: INTERPRETATION, SENTIMENT ANALYSIS

**Prerequisite**: This Guide requires you to know about Blocks and the interpretation feature of Interfaces.
-Make sure to [read the Guide to Blocks first](/introduction_to_blocks) as well as the
-interpretation section of the [Advanced Interface Features Guide](/advanced_interface_features#interpreting-your-predictions).
+Make sure to [read the Guide to Blocks first](https://gradio.app/quickstart/#blocks-more-flexibility-and-control) as well as the
+interpretation section of the [Advanced Interface Features Guide](/advanced-interface-features#interpreting-your-predictions).

## Introduction
@@ -1,8 +1,8 @@

# Developing Faster with Auto-Reloading

-**Prerequisite**: This Guide requires you to know about Blocks. Make sure to [read the Guide to Blocks first](/introduction_to_blocks).
+**Prerequisite**: This Guide requires you to know about Blocks. Make sure to [read the Guide to Blocks first](https://gradio.app/quickstart/#blocks-more-flexibility-and-control).

<span id="advanced-features"></span>
This guide covers auto reloading, reloading in a Python IDE, and using gradio with Jupyter Notebooks.

## Why Auto-Reloading?
@@ -13,7 +13,7 @@ This guide will show you how to build a demo for your 3D image model in a few li

### Prerequisites

-Make sure you have the `gradio` Python package already [installed](/getting_started).
+Make sure you have the `gradio` Python package already [installed](https://gradio.app/quickstart).

## Taking a Look at the Code
@@ -24,7 +24,15 @@ Make sure you have the `gradio` Python package already [installed](/getting_star

Make sure you have at least one of these installed so that you can follow along with the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.

-## Step 1 — Setting up the Transformers ASR Model

Here's how to build a real-time speech recognition (ASR) app:

1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)
2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)
3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)
4. [Create a Streaming ASR Demo with DeepSpeech](#4-create-a-streaming-asr-demo-with-deepspeech)

+## 1. Set up the Transformers ASR Model
First, you will need an ASR model that you have either trained yourself or downloaded as a pretrained model. In this tutorial, we will start by using a pretrained ASR model from the Hugging Face Hub, `Wav2Vec2`.

@@ -38,7 +46,7 @@ p = pipeline("automatic-speech-recognition")

That's it! By default, the automatic speech recognition model pipeline loads Facebook's `facebook/wav2vec2-base-960h` model.
-## Step 2 — Creating a Full-Context ASR Demo with Transformers
+## 2. Create a Full-Context ASR Demo with Transformers

We will start by creating a *full-context* ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.

@@ -63,7 +71,7 @@ Let's see it in action! (Record a short audio clip and then click submit, or [op

<iframe src="https://abidlabs-full-context-asr.hf.space" frameBorder="0" height="350" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
-## Step 3 — Creating a Streaming ASR Demo with Transformers
+## 3. Create a Streaming ASR Demo with Transformers

Ok great! We've built an ASR model that works well for short audio clips. However, if you are recording longer audio clips, you probably want a *streaming* interface, one that transcribes audio as the user speaks instead of just all-at-once at the end.

@@ -140,7 +148,7 @@ Try the demo below to see the difference (or [open in a new tab](https://hugging

<iframe src="https://abidlabs-streaming-asr-paused.hf.space" frameBorder="0" height="350" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
-## Step 4 — Creating a Streaming ASR Demo with DeepSpeech
+## 4. Create a Streaming ASR Demo with DeepSpeech

You're not restricted to ASR models from the `transformers` library -- you can use your own models or models from other libraries. The `DeepSpeech` library contains models that are specifically designed to handle streaming audio data. These models perform really well with streaming data as they are able to account for previous chunks of audio data when making predictions.
@@ -13,7 +13,7 @@ Gradio simplifies the collection of this data by including a **Flag** button wit

Flagging with Gradio's `Interface` is especially easy. By default, underneath the output components, there is a button marked **Flag**. When a user testing your model sees input with interesting output, they can click the flag button to send the input and output data back to the machine where the demo is running. The sample is saved to a CSV log file (by default). If the demo involves images, audio, video, or other types of files, these are saved separately in a parallel directory and the paths to these files are saved in the CSV file.

-There are [four parameters](/docs/#interface-header) in `gradio.Interface` that control how flagging works. We will go over them in greater detail.
+There are [four parameters](https://gradio.app/docs/#interface-header) in `gradio.Interface` that control how flagging works. We will go over them in greater detail.

* `allow_flagging`: this parameter can be set to either `"manual"` (default), `"auto"`, or `"never"`.
  * `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.
@@ -26,10 +26,165 @@ server {
         return 301 /quickstart;
     }
 
     location /guides.html {
         return 301 /guides;
     }
 
+    location /building_with_blocks {
+        return 301 /building-with-blocks;
+    }
+
+    location /other_tutorials {
+        return 301 /other-tutorials;
+    }
+
+    location /building_interfaces {
+        return 301 /building-interfaces;
+    }
+
+    location /tabular_data_science_and_plots {
+        return 301 /tabular-data-science-and-plots;
+    }
+
+    location /integrating_other_frameworks {
+        return 301 /integrating-other-frameworks;
+    }
+
+    location /controlling_layout.md {
+        return 301 /controlling-layout.md;
+    }
+
+    location /state_in_blocks.md {
+        return 301 /state-in-blocks.md;
+    }
+
+    location /custom_CSS_and_JS.md {
+        return 301 /custom-CSS-and-JS.md;
+    }
+
+    location /blocks_and_event_listeners.md {
+        return 301 /blocks-and-event-listeners.md;
+    }
+
+    location /using_blocks_like_functions.md {
+        return 301 /using-blocks-like-functions.md;
+    }
+
+    location /using_flagging.md {
+        return 301 /using-flagging.md;
+    }
+
+    location /named_entity_recognition.md {
+        return 301 /named-entity-recognition.md;
+    }
+
+    location /real_time_speech_recognition.md {
+        return 301 /real-time-speech-recognition.md;
+    }
+
+    location /developing_faster_with_reload_mode.md {
+        return 301 /developing-faster-with-reload-mode.md;
+    }
+
+    location /create_your_own_friends_with_a_gan.md {
+        return 301 /create-your-own-friends-with-a-gan.md;
+    }
+
+    location /setting_up_a_demo_for_maximum_performance.md {
+        return 301 /setting-up-a-demo-for-maximum-performance.md;
+    }
+
+    location /building_a_pictionary_app.md {
+        return 301 /building-a-pictionary-app.md;
+    }
+
+    location /creating_a_chatbot.md {
+        return 301 /creating-a-chatbot.md;
+    }
+
+    location /how_to_use_3D_model_component.md {
+        return 301 /how-to-use-3D-model-component.md;
+    }
+
+    location /creating_a_new_component.md {
+        return 301 /creating-a-new-component.md;
+    }
+
+    location /running_background_tasks.md {
+        return 301 /running-background-tasks.md;
+    }
+
+    location /custom_interpretations_with_blocks.md {
+        return 301 /custom-interpretations-with-blocks.md;
+    }
+
+    location /reactive_interfaces.md {
+        return 301 /reactive-interfaces.md;
+    }
+
+    location /more_on_examples_and_flagging.md {
+        return 301 /more-on-examples.md;
+    }
+
+    location /interface_state.md {
+        return 301 /interface-state.md;
+    }
+
+    location /advanced_interface_features.md {
+        return 301 /advanced-interface-features.md;
+    }
+
+    location /key_features.md {
+        return 301 /key-features.md;
+    }
+
+    location /quickstart.md {
+        return 301 /quickstart.md;
+    }
+
+    location /sharing_your_app.md {
+        return 301 /sharing-your-app.md;
+    }
+
+    location /connecting_to_a_database.md {
+        return 301 /connecting-to-a-database.md;
+    }
+
+    location /creating_a_realtime_dashboard_from_google_sheets.md {
+        return 301 /creating-a-realtime-dashboard-from-google-sheets.md;
+    }
+
+    location /plot_component_for_maps.md {
+        return 301 /plot-component-for-maps.md;
+    }
+
+    location /creating_a_dashboard_from_bigquery_data.md {
+        return 301 /creating-a-dashboard-from-bigquery-data.md;
+    }
+
+    location /using_gradio_for_tabular_workflows.md {
+        return 301 /using-gradio-for-tabular-workflows.md;
+    }
+
+    location /image_classification_in_pytorch.md {
+        return 301 /image-classification-in-pytorch.md;
+    }
+
+    location /using_hugging_face_integrations.md {
+        return 301 /using-hugging-face-integrations.md;
+    }
+
+    location /Gradio_and_ONNX_on_Hugging_Face.md {
+        return 301 /Gradio-and-ONNX-on-Hugging-Face.md;
+    }
+
+    location /image_classification_with_vision_transformers.md {
+        return 301 /image-classification-with-vision-transformers.md;
+    }
+
+    location /Gradio_and_Wandb_Integration.md {
+        return 301 /Gradio-and-Wandb-Integration.md;
+    }
+
+    location /image_classification_in_tensorflow.md {
+        return 301 /image-classification-in-tensorflow.md;
+    }
+
     error_page 404 /404.html;
     error_page 500 502 503 504 /50x.html;
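Nearly every rule above follows one mechanical pattern: the old underscore-separated guide URL 301-redirects to its dash-separated twin (a couple also rename, e.g. `more_on_examples_and_flagging` becomes `more-on-examples`). A hypothetical helper makes the mapping explicit -- the real config lists each location individually:

```python
def dash_redirect_target(path):
    """Return the dash-separated URL an underscore-separated path redirects to."""
    return path.replace("_", "-")

print(dash_redirect_target("/real_time_speech_recognition"))
# -> /real-time-speech-recognition
```

A 301 (permanent) redirect tells search engines to transfer the old URL's ranking signals to the new dash-separated address, which is the SEO point of this change.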
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
-    {% with title="Changelog", url="https://gradio.app/changelog", image="https://www.gradio.app/assets/img/meta-image.png", description="Gradio Changelog and Release Notes" %}
+    {% with title="Changelog", url="https://gradio.app/changelog", image="https://www.gradio.app/assets/img/meta-image.png", description="Gradio Changelog and Release Notes", canonical="https://gradio.app/changelog" %}
     {% include "templates/meta.html" %}
     {% endwith %}
     <link rel="stylesheet" href="/style.css">
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
-    {% with title="Gradio Demos", url="https://gradio.app/demos", image="/assets/img/meta-image.png", description="Play Around with Gradio Demos" %}
+    {% with title="Gradio Demos", url="https://gradio.app/demos", image="/assets/img/meta-image.png", description="Play Around with Gradio Demos", canonical="https://gradio.app/demos" %}
     {% include "templates/meta.html" %}
     {% endwith %}
     <link rel="stylesheet" href="/style.css">
@@ -148,6 +148,7 @@ def build(output_dir, jinja_env, gradio_wheel_url, gradio_version):
         version="main",
         gradio_version=gradio_version,
         gradio_wheel_url=gradio_wheel_url,
+        canonical_suffix="/main"
     )
     output_folder = os.path.join(output_dir, "docs")
     os.makedirs(output_folder)
@@ -167,7 +168,7 @@ def build_pip_template(version, jinja_env):
     docs_files = os.listdir("src/docs")
     template = jinja_env.get_template("docs/template.html")
     output = template.render(
-        docs=docs, find_cls=find_cls, version="pip", gradio_version=version, ordered_events=ordered_events
+        docs=docs, find_cls=find_cls, version="pip", gradio_version=version, canonical_suffix="", ordered_events=ordered_events
     )
     with open(f"src/docs/v{version}_template.html", "w+") as template_file:
         template_file.write(output)
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
-    {% with title="Gradio Docs", url="https://gradio.app/docs", image="/assets/img/meta-image.png", description="Browse Gradio Documentation and Examples" %}
+    {% with title="Gradio Docs", url="https://gradio.app/docs", image="/assets/img/meta-image.png", description="Browse Gradio Documentation and Examples", canonical="https://gradio.app/docs" + canonical_suffix %}
     {% include "templates/meta.html" %}
     {% endwith %}
     <link rel="stylesheet" href="/style.css">
@@ -32,7 +32,7 @@ def format_name(guide_name):
     guide_name = guide_name[guide_name.index("_") + 1 :]
     if guide_name.lower().endswith(".md"):
         guide_name = guide_name[:-3]
-    pretty_guide_name = " ".join([word[0].upper() + word[1:] for word in guide_name.split("_")])
+    pretty_guide_name = " ".join([word[0].upper() + word[1:] for word in guide_name.split("-")])
     return index, guide_name, pretty_guide_name
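Since guide slugs are now dash-separated, the title-casing step splits on `-` instead of `_`. The renamed behavior can be exercised on its own:

```python
def pretty_guide_name(guide_name):
    """Title-case a dash-separated guide slug, as the updated format_name does."""
    return " ".join(word[0].upper() + word[1:] for word in guide_name.split("-"))

print(pretty_guide_name("real-time-speech-recognition"))
# -> Real Time Speech Recognition
```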
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
-    {% with title="Gradio Guides", url="https://gradio.app/guides", image="/assets/img/meta-image.png", description="Step-by-Step Gradio Tutorials" %}
+    {% with title="Gradio Guides", url="https://gradio.app/guides", image="/assets/img/meta-image.png", description="Step-by-Step Gradio Tutorials", canonical="https://gradio.app/guides" %}
     {% include "templates/meta.html" %}
     {% endwith %}
     <link rel="stylesheet" href="/style.css">
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
-    {% with title=pretty_name, url="https://gradio.app/" + name, image="https://www.gradio.app/assets/img/meta-image.png", description="A Step-by-Step Gradio Tutorial" %}
+    {% with title=pretty_name, url="https://gradio.app/" + name, image="https://www.gradio.app/assets/img/meta-image.png", description="A Step-by-Step Gradio Tutorial", canonical="https://gradio.app/" + name %}
     {% include "templates/meta.html" %}
     {% endwith %}
     <link rel="stylesheet" href="/style.css">
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
-    {% with title="Gradio", url="https://gradio.app/", image="/assets/img/meta-image.png", description="Build & Share Delightful Machine Learning Apps" %}
+    {% with title="Gradio", url="https://gradio.app/", image="/assets/img/meta-image.png", description="Build & Share Delightful Machine Learning Apps", canonical="https://gradio.app/" %}
     {% include "templates/meta.html" %}
     {% endwith %}
     <link rel="stylesheet" href="/style.css">
@@ -20,6 +20,7 @@
     <link rel="preconnect" href="https://fonts.googleapis.com">
     <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
     <link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet">
+    <link rel="canonical" href="{{ canonical }}" />
 
     <script async src="https://www.googletagmanager.com/gtag/js?id=UA-156449732-1"></script>
@@ -16,7 +16,7 @@
     <nav
       class="hidden w-full flex-col gap-3 lg:flex lg:w-auto lg:flex-row lg:gap-8"
     >
-      <a class="thin-link flex items-center gap-3" href="/getting_started"
+      <a class="thin-link flex items-center gap-3" href="/quickstart"
        ><span>⚡</span> <span>Quickstart</span>
      </a>
      <a class="thin-link flex items-center gap-3" href="/docs"