Fix small issues in docs and guides (#5669)

* Keep website guides sidebar width consistent

* add next / prev buttons to chatinterface

* add changeset

* sidebar fixes on docs

* clean iframes from guides

* add changeset

---------

Co-authored-by: gradio-pr-bot <gradio-pr-bot@users.noreply.github.com>
Ali Abdalla 2023-09-25 13:42:32 -07:00 committed by GitHub
parent c57f1b75e2
commit c5e9695596
19 changed files with 201 additions and 197 deletions


@@ -0,0 +1,6 @@
---
"gradio": minor
"website": minor
---
feat: Fix small issues in docs and guides


@@ -709,7 +709,7 @@ class Blocks(BlockContext):
btn.click(fn=update, inputs=inp, outputs=out)
demo.launch()
Demos: blocks_hello, blocks_flipper, blocks_speech_text_sentiment, generate_english_german, sound_alert
Demos: blocks_hello, blocks_flipper, blocks_speech_text_sentiment, generate_english_german
Guides: blocks-and-event-listeners, controlling-layout, state-in-blocks, custom-CSS-and-JS, custom-interpretations-with-blocks, using-blocks-like-functions
"""
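The two demo lines shown in the docstring hunk above are only a fragment. As a point of reference, a minimal self-contained sketch of the kind of Blocks demo they come from might look like this (the component labels and placeholder text are assumptions, not the exact upstream example):

```python
# A minimal Blocks demo along the lines of the docstring fragment above.
def update(name):
    return f"Welcome to Gradio, {name}!"

def build_demo():
    import gradio as gr  # imported here so the pure function above stays dependency-free

    with gr.Blocks() as demo:
        inp = gr.Textbox(placeholder="What is your name?")
        out = gr.Textbox()
        btn = gr.Button("Run")
        # Wire the click event exactly as in the docstring: fn, inputs, outputs.
        btn.click(fn=update, inputs=inp, outputs=out)
    return demo

# build_demo().launch()  # uncomment to serve the demo locally
```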


@@ -12,9 +12,7 @@ In this Guide, we'll walk you through:
- How to setup a Gradio demo for EfficientNet-Lite4
- How to contribute your own Gradio demos for the ONNX organization on Hugging Face
Here's an example of an ONNX model: try out the EfficientNet-Lite4 demo below.
<iframe src="https://onnx-efficientnet-lite4.hf.space" frameBorder="0" height="810" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Here's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model.
## What is the ONNX Model Zoo?


@@ -12,9 +12,6 @@ In this Guide, we'll walk you through:
- How to setup a Gradio demo using the Wandb integration for JoJoGAN
- How to contribute your own Gradio demos after tracking your experiments on wandb to the Wandb organization on Hugging Face
Here's an example of a model trained and experiments tracked on wandb; try out the JoJoGAN demo below.
<iframe src="https://akhaliq-jojogan.hf.space" frameBorder="0" height="810" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
## What is Wandb?


@@ -7,9 +7,7 @@ Tags: VISION, RESNET, PYTORCH
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):
<iframe src="https://abidlabs-pytorch-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo at the bottom of the page.
Let's get started!
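The guides in this commit all wire a `predict` function to a `gr.Interface` with an image input and a `Label` output. The `Label` component expects a `{label: confidence}` dict, so the postprocessing step usually ends with a softmax. Here is a small dependency-free sketch of that shape (the label names are hypothetical, and the commented `gr.Interface` line shows the intended wiring without requiring torch or gradio):

```python
import math

def confidences(logits, labels):
    """Convert raw model logits into the {label: probability} dict
    that Gradio's Label output component expects."""
    peak = max(logits)
    exps = [math.exp(x - peak) for x in logits]  # numerically stable softmax
    total = sum(exps)
    return {label: e / total for label, e in zip(labels, exps)}

# Hypothetical wiring for the guide's demo (requires a trained model + gradio):
# gr.Interface(fn=predict,
#              inputs=gr.Image(type="pil"),
#              outputs=gr.Label(num_top_classes=3)).launch()
```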
@@ -81,7 +79,8 @@ gr.Interface(fn=predict,
This produces the following interface, which you can try right here in your browser (try uploading your own examples!):
<iframe src="https://abidlabs-pytorch-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
<gradio-app space="gradio/pytorch-image-classifier">
---


@@ -7,9 +7,8 @@ Tags: VISION, MOBILENET, TENSORFLOW
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from traffic control systems to satellite imaging.
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo at the bottom of the page.
<iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Let's get started!
@@ -79,7 +78,7 @@ gr.Interface(fn=classify_image,
This produces the following interface, which you can try right here in your browser (try uploading your own examples!):
<iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
<gradio-app space="gradio/keras-image-classifier">
---


@@ -7,9 +7,7 @@ Tags: VISION, TRANSFORMERS, HUB
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.
State-of-the-art image classifiers are based on the _transformers_ architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like this (try one of the examples!):
<iframe src="https://abidlabs-vision-transformer.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
State-of-the-art image classifiers are based on the _transformers_ architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like the demo at the bottom of the page.
Let's get started!
@@ -46,7 +44,7 @@ Notice that we have added one more parameter, the `examples`, which allows us to
This produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!
<iframe src="https://abidlabs-vision-transformer.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
<gradio-app space="gradio/vision-transformer">
---


@@ -7,9 +7,7 @@ Tags: SKETCHPAD, LABELS, LIVE
How well can an algorithm guess what you're drawing? A few years ago, Google released the **Quick Draw** dataset, which contains drawings made by humans of a variety of everyday objects. Researchers have used this dataset to train models to guess Pictionary-style drawings.
Such models are perfect to use with Gradio's _sketchpad_ input, so in this tutorial we will build a Pictionary web application using Gradio. We will be able to build the whole web application in Python, and it will look like this (try drawing something!):
<iframe src="https://abidlabs-draw2.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Such models are perfect to use with Gradio's _sketchpad_ input, so in this tutorial we will build a Pictionary web application using Gradio. We will be able to build the whole web application in Python, and it will look like the demo at the bottom of the page.
Let's get started! This guide covers how to build a pictionary app (step-by-step):
@@ -101,7 +99,7 @@ gr.Interface(fn=predict,
This produces the following interface, which you can try right here in your browser (try drawing something, like a "snake" or a "laptop"):
<iframe src="https://abidlabs-draw2.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
<gradio-app space="gradio/pictionary">
---


@@ -11,9 +11,7 @@ It seems that cryptocurrencies, [NFTs](https://www.nytimes.com/interactive/2022/
Generative Adversarial Networks, often known just as _GANs_, are a specific class of deep-learning models that are designed to learn from an input dataset to create (_generate!_) new material that is convincingly similar to elements of the original training set. Famously, the website [thispersondoesnotexist.com](https://thispersondoesnotexist.com/) went viral with lifelike, yet synthetic, images of people generated with a model called StyleGAN2. GANs have gained traction in the machine learning world, and are now being used to generate all sorts of images, text, and even [music](https://salu133445.github.io/musegan/)!
Today we'll briefly look at the high-level intuition behind GANs, and then we'll build a small demo around a pre-trained GAN to see what all the fuss is about. Here's a peek at what we're going to be putting together:
<iframe src="https://nimaboscarino-cryptopunks.hf.space" frameBorder="0" height="855" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Today we'll briefly look at the high-level intuition behind GANs, and then we'll build a small demo around a pre-trained GAN to see what all the fuss is about. Here's a [peek](https://nimaboscarino-cryptopunks.hf.space) at what we're going to be putting together.
### Prerequisites
@@ -113,9 +111,6 @@ gr.Interface(
).launch()
```
Launching the interface should present you with something like this:
<iframe src="https://nimaboscarino-cryptopunks-1.hf.space" frameBorder="0" height="365" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
## Step 4 — Even more punks!
@@ -163,9 +158,7 @@ The `examples` parameter takes a list of lists, where each item in the sublists
You can also try adding a `title`, `description`, and `article` to the `gr.Interface`. Each of those parameters accepts a string, so try it out and see what happens 👀 `article` will also accept HTML, as [explored in a previous guide](/guides/key-features/#descriptive-content)!
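The `examples` parameter mentioned above is a list of lists: one sublist per example row, with one value per input component. A quick sketch of the shape (the two inputs here, a seed and a punk count, are assumptions about the guide's model, not the upstream code):

```python
# One sublist per example row; each row supplies a value for every input
# component, in order. The (seed, num_punks) names are hypothetical.
examples = [
    [123, 15],
    [42, 29],
    [456, 8],
]

# Hypothetical wiring, for shape only (requires gradio + the GAN model):
# gr.Interface(fn=predict,
#              inputs=[gr.Slider(label="Seed"), gr.Slider(label="Number of punks")],
#              outputs="image",
#              examples=examples)
```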
When you're all done, you may end up with something like this:
<iframe src="https://nimaboscarino-cryptopunks.hf.space" frameBorder="0" height="855" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
When you're all done, you may end up with something like [this](https://nimaboscarino-cryptopunks.hf.space).
For reference, here is our full code:


@@ -152,6 +152,7 @@ def organize_docs(d):
"routes": {},
"events": {},
"py-client": {},
"chatinterface": {}
}
pages = []
for mode in d:
@@ -174,9 +175,10 @@ def organize_docs(d):
if mode == "component":
organized["components"][c["name"].lower()] = c
pages.append(c["name"].lower())
elif mode in ["helpers", "routes", "py-client"]:
elif mode in ["helpers", "routes", "py-client", "chatinterface"]:
organized[mode][c["name"].lower()] = c
pages.append(c["name"].lower())
else:
# if mode not in organized["building"]:
# organized["building"][mode] = {}
@@ -259,6 +261,10 @@ def organize_docs(d):
organized["py-client"][cls]["next_obj"] = organized["py-client"][
c_keys[i + 1]
]["name"]
for cls in organized["chatinterface"]:
organized["chatinterface"][cls]["prev_obj"] = "Block-Layouts"
organized["chatinterface"][cls]["next_obj"] = "Themes"
organized["events_matrix"] = component_events
organized["events"] = events
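The hunks above give each docs page a `prev_obj`/`next_obj` pointer so the site can render next/previous buttons (the new `chatinterface` section gets hardcoded neighbors). The py-client chaining logic can be sketched standalone like this (a simplified illustration, not the exact upstream function):

```python
def link_neighbors(section):
    """Give each page in an ordered section a prev_obj/next_obj pointer,
    mirroring the neighbor chaining done in organize_docs (simplified)."""
    keys = list(section)
    for i, name in enumerate(keys):
        if i > 0:
            section[name]["prev_obj"] = section[keys[i - 1]]["name"]
        if i < len(keys) - 1:
            section[name]["next_obj"] = section[keys[i + 1]]["name"]
    return section

# Dicts preserve insertion order in Python 3.7+, so the page order is stable.
pages = {"client": {"name": "Client"}, "job": {"name": "Job"}}
link_neighbors(pages)
```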


@@ -81,7 +81,7 @@
<svelte:window bind:scrollY={y} />
<main class="container mx-auto px-4 flex gap-4">
<div class="flex">
<div class="flex w-full">
<DocsNav
current_nav_link={obj.name.toLowerCase()}
{components}
@@ -90,7 +90,7 @@
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left lg:ml-10"
@@ -169,14 +169,12 @@
</h3>
</div>
<div class="codeblock bg-gray-50 mx-auto p-3">
{#if obj.override_signature}
<div class="codeblock bg-gray-50 mx-auto p-3">
<pre><code class="code language-python"
>{obj.override_signature}</code
></pre>
</div>
{:else}
<div class="codeblock bg-gray-50 mx-auto p-3">
<pre><code class="code language-python"
>{obj.parent}.<span>{obj.name}&lpar;</span
><!--
@@ -189,8 +187,8 @@
>&rpar;</span
></code
></pre>
</div>
{/if}
</div>
{#if mode === "components"}
<div class="embedded-component">
@@ -539,7 +537,7 @@
</div>
<div
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block"
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block lg:w-2/12"
>
<div class="mx-8">
<a


@@ -75,7 +75,7 @@
<svelte:window bind:scrollY={y} />
<main class="container mx-auto px-4 flex gap-4">
<div class="flex">
<div class="flex w-full">
<DocsNav
current_nav_link={"block-layouts"}
{components}
@@ -84,7 +84,7 @@
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
@@ -489,7 +489,7 @@
</div>
</div>
<div
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block"
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:w-2/12 lg:block"
>
<div class="px-8">
<a


@@ -75,7 +75,7 @@
<svelte:window bind:scrollY={y} />
<main class="container mx-auto px-4 flex gap-4">
<div class="flex">
<div class="flex w-full">
<DocsNav
current_nav_link={"combining-interfaces"}
{components}
@@ -84,7 +84,7 @@
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
@@ -489,7 +489,7 @@
</div>
</div>
<div
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block"
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:w-2/12 lg:block"
>
<div class="px-8">
<a


@@ -36,159 +36,165 @@
/>
<main class="container mx-auto px-4 flex gap-4">
<DocsNav
current_nav_link={"components"}
{components}
{helpers}
{routes}
{py_client}
/>
<div class="flex w-full">
<DocsNav
current_nav_link={"components"}
{components}
{helpers}
{routes}
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
>
New to Gradio? Start here: <a class="link" href="/quickstart"
>Getting Started</a
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
>
</p>
<p
class="bg-gradient-to-r from-green-100 to-green-50 border border-green-200 px-4 py-1 rounded-full text-green-800 mb-1 w-fit float-left sm:float-right"
>
See the <a class="link" href="/changelog">Release History</a>
</p>
</div>
{#if on_main}
<div class="codeblock bg-gray-100 border border-gray-200 text-gray-800 px-3 py-1 mt-4 rounded-lg lg:ml-10">
<p class="my-2">
To install Gradio from main, run the following command:
</p>
<button class="clipboard-button" type="button" on:click={() => copy("pip install " + wheel)}>
{#if !copied}
{@html svgCopy}
{:else}
{@html svgCheck}
{/if}
</button>
<pre class="language-bash" style="padding-right: 25px;"><code class="language-bash text-xs">pip install {wheel}</code></pre>
<p class="float-right text-sm">
*Note: Setting <code style="font-size: 0.85rem">share=True</code> in <code style="font-size: 0.85rem">launch()</code> will not work.
New to Gradio? Start here: <a class="link" href="/quickstart"
>Getting Started</a
>
</p>
<p
class="bg-gradient-to-r from-green-100 to-green-50 border border-green-200 px-4 py-1 rounded-full text-green-800 mb-1 w-fit float-left sm:float-right"
>
See the <a class="link" href="/changelog">Release History</a>
</p>
</div>
{/if}
<div class="lg:ml-10 flex justify-between mt-4">
<a
href="./themes"
class="text-left px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
<span class="text-orange-500">&#8592;</span> Themes
</div>
</a>
<a
href="./audio"
class="text-right px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
Audio <span class="text-orange-500">&#8594;</span>
</div>
</a>
</div>
<div class="flex flex-row">
<div class="lg:w-3/4 lg:ml-10 lg:mr-24">
<div class="obj" id="components">
<h2
id="components-header"
class="text-4xl font-light mb-2 pt-2 text-orange-500"
>
Components
</h2>
<p class="mt-8 mb-2 text-lg">
Gradio includes pre-built components that can be used as inputs or
outputs in your Interface or Blocks with a single line of code.
Components include <em>preprocessing</em> steps that convert user
data submitted through the browser into something that can be used by a
Python function, and <em>postprocessing</em>
steps to convert values returned by a Python function into something
that can be displayed in a browser.
</p>
<p class="mt-2 text-lg">
Consider an example with three inputs &lpar;Textbox, Number, and
Image&rpar; and two outputs &lpar;Number and Gallery&rpar;; below is
a diagram of what our preprocessing will send to the function and
what our postprocessing will require from it.
</p>
<img src={dataflow_svg} class="mt-4" />
<p class="mt-2 text-lg">
Components also come with certain events that they support. These
are methods that are triggered with user actions. Below is a table
showing which events are supported for each component. All events
are also listed &lpar;with parameters&rpar; in the component's docs.
{#if on_main}
<div class="codeblock bg-gray-100 border border-gray-200 text-gray-800 px-3 py-1 mt-4 rounded-lg lg:ml-10">
<p class="my-2">
To install Gradio from main, run the following command:
</p>
<button class="clipboard-button" type="button" on:click={() => copy("pip install " + wheel)}>
{#if !copied}
{@html svgCopy}
{:else}
{@html svgCheck}
{/if}
</button>
<pre class="language-bash" style="padding-right: 25px;"><code class="language-bash text-xs">pip install {wheel}</code></pre>
<p class="float-right text-sm">
*Note: Setting <code style="font-size: 0.85rem">share=True</code> in <code style="font-size: 0.85rem">launch()</code> will not work.
</p>
</div>
{/if}
<div class="max-h-96 overflow-y-scroll my-6">
<table class="table-fixed leading-loose">
<thead class="text-center sticky top-0">
<tr>
<th class="p-3 bg-white w-1/5 sticky left-0" />
{#each events as event}
<th class="p-3 font-normal bg-white border-t border-l"
>{event}</th
>
{/each}
</tr>
</thead>
<tbody
class=" rounded-lg bg-gray-50 border border-gray-100 overflow-hidden text-left align-top divide-y"
<div class="lg:ml-10 flex justify-between mt-4">
<a
href="./themes"
class="text-left px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
<span class="text-orange-500">&#8592;</span> Themes
</div>
</a>
<a
href="./audio"
class="text-right px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
Audio <span class="text-orange-500">&#8594;</span>
</div>
</a>
</div>
<div class="flex flex-row">
<div class="lg:w-3/4 lg:ml-10 lg:mr-24">
<div class="obj" id="components">
<h2
id="components-header"
class="text-4xl font-light mb-2 pt-2 text-orange-500"
>
{#each Object.entries(components) as [name, obj] (name)}
<tr class="group hover:bg-gray-200/60">
<th class="p-3 w-1/5 bg-white sticky z-2 left-0 font-normal">
<a href={obj.name.toLowerCase()} class="thin-link"
>{obj.name}</a
>
</th>
Components
</h2>
<p class="mt-8 mb-2 text-lg">
Gradio includes pre-built components that can be used as inputs or
outputs in your Interface or Blocks with a single line of code.
Components include <em>preprocessing</em> steps that convert user
data submitted through the browser into something that can be used by a
Python function, and <em>postprocessing</em>
steps to convert values returned by a Python function into something
that can be displayed in a browser.
</p>
<p class="mt-2 text-lg">
Consider an example with three inputs &lpar;Textbox, Number, and
Image&rpar; and two outputs &lpar;Number and Gallery&rpar;; below is
a diagram of what our preprocessing will send to the function and
what our postprocessing will require from it.
</p>
<img src={dataflow_svg} class="mt-4" />
<p class="mt-2 text-lg">
Components also come with certain events that they support. These
are methods that are triggered with user actions. Below is a table
showing which events are supported for each component. All events
are also listed &lpar;with parameters&rpar; in the component's docs.
</p>
</div>
<div class="max-h-96 overflow-y-scroll my-6">
<table class="table-fixed leading-loose">
<thead class="text-center sticky top-0">
<tr>
<th class="p-3 bg-white w-1/5 sticky left-0" />
{#each events as event}
<td class="p-3 text-gray-700 break-words text-center">
{#if events_matrix[obj.name].includes(event.toLowerCase())}
<p class="text-orange-500">&#10003;</p>
{:else}
<p class="text-gray-300">&#10005;</p>
{/if}
</td>
<th class="p-3 font-normal bg-white border-t border-l"
>{event}</th
>
{/each}
</tr>
{/each}
</tbody>
</table>
</thead>
<tbody
class=" rounded-lg bg-gray-50 border border-gray-100 overflow-hidden text-left align-top divide-y"
>
{#each Object.entries(components) as [name, obj] (name)}
<tr class="group hover:bg-gray-200/60">
<th class="p-3 w-1/5 bg-white sticky z-2 left-0 font-normal">
<a href={obj.name.toLowerCase()} class="thin-link"
>{obj.name}</a
>
</th>
{#each events as event}
<td class="p-3 text-gray-700 break-words text-center">
{#if events_matrix[obj.name].includes(event.toLowerCase())}
<p class="text-orange-500">&#10003;</p>
{:else}
<p class="text-gray-300">&#10005;</p>
{/if}
</td>
{/each}
</tr>
{/each}
</tbody>
</table>
</div>
</div>
</div>
<div class="flex justify-between my-4">
<a
href="./block-layouts"
class="text-left px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
<span class="text-orange-500">&#8592;</span> Block Layouts
</div>
</a>
<a
href="./audio"
class="text-right px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
Audio <span class="text-orange-500">&#8594;</span>
</div>
</a>
</div>
</div>
<div class="flex justify-between my-4">
<a
href="./block-layouts"
class="text-left px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
<span class="text-orange-500">&#8592;</span> Block Layouts
</div>
</a>
<a
href="./audio"
class="text-right px-4 py-1 bg-gray-50 rounded-full hover:underline"
>
<div class="text-lg">
Audio <span class="text-orange-500">&#8594;</span>
</div>
</a>
<div class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:w-2/12 lg:block">
</div>
</div>
</main>
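The class changes across these templates replace implicit widths with explicit fractions of Tailwind's 12-column grid: `lg:w-8/12` for the main column and `lg:w-2/12` for the sticky right-hand table of contents, which is what keeps the sidebar widths consistent across docs pages. As a quick sanity check — assuming the left `DocsNav` occupies the remaining `2/12`, which these hunks do not show — the fractions fill the row exactly:

```python
from fractions import Fraction

# Tailwind fraction classes used in this PR: lg:w-8/12 (content) and
# lg:w-2/12 (sticky TOC). The 2/12 left nav width is an assumption.
nav, content, toc = Fraction(2, 12), Fraction(8, 12), Fraction(2, 12)
assert nav + content + toc == 1  # the three columns fill the 12-column grid
```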


@@ -76,7 +76,7 @@
<svelte:window bind:scrollY={y} />
<main class="container mx-auto px-4 flex gap-4">
<div class="flex">
<div class="flex w-full">
<DocsNav
current_nav_link={"flagging"}
{components}
@@ -85,7 +85,7 @@
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
@@ -488,7 +488,7 @@
</div>
</div>
<div
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block"
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block lg:w-2/12"
>
<div class="mx-8">
<a


@@ -21,7 +21,7 @@
/>
<main class="container mx-auto px-4 flex gap-4">
<div class="flex">
<div class="flex w-full">
<DocsNav
current_nav_link={"js-client"}
{components}
@@ -30,7 +30,7 @@
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
@@ -64,6 +64,9 @@
</div>
</div>
</div>
<div class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:w-2/12 lg:block">
</div>
</div>
</main>


@@ -33,7 +33,7 @@
/>
<main class="container mx-auto px-4 flex gap-4">
<div class="flex">
<div class="flex w-full">
<DocsNav
current_nav_link={"python-client"}
{components}
@@ -42,7 +42,7 @@
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
@@ -143,6 +143,10 @@
</div>
</div>
</div>
<div class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:w-2/12 lg:block">
</div>
</div>
</main>


@@ -76,7 +76,7 @@
<svelte:window bind:scrollY={y} />
<main class="container mx-auto px-4 flex gap-4">
<div class="flex">
<div class="flex w-full">
<DocsNav
current_nav_link={"themes"}
{components}
@@ -85,7 +85,7 @@
{py_client}
/>
<div class="flex flex-col w-full min-w-full lg:w-10/12 lg:min-w-0">
<div class="flex flex-col w-full min-w-full lg:w-8/12 lg:min-w-0">
<div>
<p
class="lg:ml-10 bg-gradient-to-r from-orange-100 to-orange-50 border border-orange-200 px-4 py-1 mr-2 rounded-full text-orange-800 mb-1 w-fit float-left"
@@ -506,7 +506,7 @@
</div>
</div>
<div
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block"
class="float-right top-8 hidden sticky h-screen overflow-y-auto lg:block lg:w-2/12"
>
<div class="mx-8">
<a


@@ -78,11 +78,10 @@
canonical={$page.url.pathname}
description="A Step-by-Step Gradio Tutorial"
/>
<div class="container mx-auto px-4 flex gap-4 relative">
<div class="container mx-auto px-4 flex relative w-full">
<div
bind:this={sidebar}
class="side-navigation h-screen leading-relaxed sticky top-0 text-md overflow-y-auto overflow-x-hidden hidden lg:block rounded-t-xl bg-gradient-to-r from-white to-gray-50"
style="min-width: 18%"
class="side-navigation h-screen leading-relaxed sticky top-0 text-md overflow-y-auto overflow-x-hidden hidden lg:block rounded-t-xl bg-gradient-to-r from-white to-gray-50 lg:w-3/12"
>
<div class="sticky top-0 pr-2 float-right">
<DropDown></DropDown>
@@ -142,7 +141,7 @@
{/each}
{/each}
</div>
<div class="w-full lg:w-10/12 mx-auto">
<div class="w-full lg:w-8/12 mx-auto">
<div class="w-full flex justify-between my-4">
{#if prev_guide}
<a