Add open in colab buttons to demos in docs and /demos (#2608)
* add generate_notebooks and run it
* add buttons to demos tab
* add buttons to docs
* add github check
* fix erros
* Update run.py
* Update run.py
* fix github action
* add nbformat
* wget files from demo directory
* testing with regex
* typo in github action
* cd first
* correct notebooks
* remove prit
* testing
* regenerate ids in notebooks
* testing action
* testing action
* testing action
* sort files before wget so no git diff
* skip DS store and others as sub files
* example demo change without notebook change
* fixes
* example demo change without notebook change
* example regenerated notebooks
* Update CHANGELOG.md
* Update CHANGELOG.md
* Update .github/workflows/check-demo-notebooks.yml

  Co-authored-by: Abubakar Abid <abubakar@huggingface.co>
* gh action comments
* gh action syntax
* gh action syntax fixes
* test demo change without generating notebooks
* ran the suggested command
* remove unnecessary script
* add notebook for upload button demo
* switch to pull_request_target

Co-authored-by: Abubakar Abid <abubakar@huggingface.co>
parent c99a323ccf
commit 67275ec1d6

38  .github/workflows/check-demo-notebooks.yml  (vendored, new file)
@@ -0,0 +1,38 @@
# This workflow will check if the run.py files in every demo match the run.ipynb notebooks.

name: Check Demos Match Notebooks

on:
  pull_request_target:
    types: [opened, synchronize, reopened]
    paths:
      - 'demo/**'

jobs:
  check-notebooks:
    name: Generate Notebooks and Check
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.ref }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
      - name: Generate Notebooks
        run: |
          pip install nbformat && cd demo && python generate_notebooks.py
      - name: Print Git Status
        run: echo $(git status) && echo $(git diff)
      - name: Assert Notebooks Match
        id: assertNotebooksMatch
        run: git status | grep "nothing to commit, working tree clean"
      - name: Comment On PR
        uses: thollander/actions-comment-pull-request@v1
        if: always() && (steps.assertNotebooksMatch.outcome == 'failure')
        with:
          message: |
            The demo notebooks don't match the run.py files. Please run this command from the root of the repo and then commit the changes:
            ```bash
            pip install nbformat && cd demo && python generate_notebooks.py
            ```
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
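
The workflow only re-runs the generator and asserts a clean working tree; the conversion itself lives in demo/generate_notebooks.py, which is not shown in this diff. As a rough, hypothetical sketch of what such a generator might look like, based only on the cell layout visible in the run.ipynb files added below (a markdown title cell, a `!pip install -q gradio ...` cell, an optional wget cell for supporting files, then the run.py source), and not on the actual script:

```python
# Hypothetical sketch only: the real demo/generate_notebooks.py may differ.
# The cell layout mirrors the generated run.ipynb files in this diff.
import os
import nbformat as nbf

GITHUB_RAW = "https://github.com/gradio-app/gradio/raw/main/demo"  # used by the wget cells

def generate_notebook(demo_dir: str) -> None:
    name = os.path.basename(demo_dir)
    with open(os.path.join(demo_dir, "run.py")) as f:
        source = f.read()

    cells = [
        nbf.v4.new_markdown_cell(f"# Gradio Demo: {name}"),
        nbf.v4.new_code_cell("!pip install -q gradio "),
    ]

    # Supporting files (images, audio, ...) are fetched with wget so the notebook
    # runs standalone on Colab; sorting keeps the generated output stable so the
    # CI check above produces no spurious git diff.
    extra_files = sorted(
        f for f in os.listdir(demo_dir)
        if f not in ("run.py", "run.ipynb", ".DS_Store")
    )
    if extra_files:
        wget_lines = "\n".join(f"!wget -q {GITHUB_RAW}/{name}/{f}" for f in extra_files)
        cells.append(
            nbf.v4.new_code_cell("# Downloading files from the demo repo\nimport os\n" + wget_lines)
        )

    cells.append(nbf.v4.new_code_cell(source))

    nb = nbf.v4.new_notebook()
    nb["cells"] = cells
    nbf.write(nb, os.path.join(demo_dir, "run.ipynb"))
```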
CHANGELOG.md
@@ -54,7 +54,11 @@ No changes to highlight.
[@julien-c](https://github.com/julien-c) in [PR 2698](https://github.com/gradio-app/gradio/pull/2698)
* Updated IFrames in Guides to use the host URL instead of the Space name to be consistent with the new method for embedding Spaces, by
[@julien-c](https://github.com/julien-c) in [PR 2692](https://github.com/gradio-app/gradio/pull/2692)

* Colab buttons on every demo in the website! Just click open in colab, and run the demo there.

https://user-images.githubusercontent.com/9021060/202878400-cb16ed47-f4dd-4cb0-b2f0-102a9ff64135.mov

## Testing and Infrastructure Changes:
No changes to highlight.

@@ -63,7 +67,7 @@ No changes to highlight.
No changes to highlight.

## Full Changelog:
No changes to highlight.
* Add open in colab buttons to demos in docs and /demos by [@aliabd](https://github.com/aliabd) in [PR 2608](https://github.com/gradio-app/gradio/pull/2608)

## Contributors Shoutout:
No changes to highlight.
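
The changelog entry above refers to the new "open in colab" buttons. The site markup that renders the buttons is not part of this diff, but each button presumably points at the demo's generated run.ipynb through Colab's standard GitHub URL scheme. A minimal sketch under that assumption (the helper name and exact link construction are illustrative only):

```python
# Hypothetical helper: build an "Open in Colab" link for a generated demo notebook,
# assuming Colab's public URL scheme for notebooks hosted on GitHub:
#   https://colab.research.google.com/github/<org>/<repo>/blob/<branch>/<path>
def colab_url(demo_name: str, branch: str = "main") -> str:
    return (
        "https://colab.research.google.com/github/gradio-app/gradio/"
        f"blob/{branch}/demo/{demo_name}/run.ipynb"
    )

print(colab_url("animeganv2"))
# https://colab.research.google.com/github/gradio-app/gradio/blob/main/demo/animeganv2/run.ipynb
```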

1  demo/Echocardiogram-Segmentation/run.ipynb  (new file)
File diff suppressed because one or more lines are too long

1  demo/animeganv2/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: animeganv2\n", "### Recreate the viral AnimeGAN image transformation demo.\n", " "]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio torch torchvision Pillow gdown numpy scipy cmake onnxruntime-gpu opencv-python-headless"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/animeganv2/gongyoo.jpeg\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/animeganv2/groot.jpeg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "from PIL import Image\n", "import torch\n", "\n", "model2 = torch.hub.load(\n", " \"AK391/animegan2-pytorch:main\",\n", " \"generator\",\n", " pretrained=True,\n", " progress=False\n", ")\n", "model1 = torch.hub.load(\"AK391/animegan2-pytorch:main\", \"generator\", pretrained=\"face_paint_512_v1\")\n", "face2paint = torch.hub.load(\n", " 'AK391/animegan2-pytorch:main', 'face2paint', \n", " size=512,side_by_side=False\n", ")\n", "\n", "def inference(img, ver):\n", " if ver == 'version 2 (\ud83d\udd3a robustness,\ud83d\udd3b stylization)':\n", " out = face2paint(model2, img)\n", " else:\n", " out = face2paint(model1, img)\n", " return out\n", "\n", "title = \"AnimeGANv2\"\n", "description = \"Gradio Demo for AnimeGanv2 Face Portrait. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please use a cropped portrait picture for best results similar to the examples below.\"\n", "article = \"<p style='text-align: center'><a href='https://github.com/bryandlee/animegan2-pytorch' target='_blank'>Github Repo Pytorch</a></p> <center><img src='https://visitor-badge.glitch.me/badge?page_id=akhaliq_animegan' alt='visitor badge'></center></p>\"\n", "examples=[['groot.jpeg','version 2 (\ud83d\udd3a robustness,\ud83d\udd3b stylization)'],['gongyoo.jpeg','version 1 (\ud83d\udd3a stylization, \ud83d\udd3b robustness)']]\n", "\n", "demo = gr.Interface(\n", " fn=inference, \n", " inputs=[gr.inputs.Image(type=\"pil\"),gr.inputs.Radio(['version 1 (\ud83d\udd3a stylization, \ud83d\udd3b robustness)','version 2 (\ud83d\udd3a robustness,\ud83d\udd3b stylization)'], type=\"value\", default='version 2 (\ud83d\udd3a robustness,\ud83d\udd3b stylization)', label='version')], \n", " outputs=gr.outputs.Image(type=\"pil\"),\n", " title=title,\n", " description=description,\n", " article=article,\n", " examples=examples)\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/audio_component/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: audio_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "with gr.Blocks() as demo:\n", " gr.Audio()\n", "\n", "demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

demo/audio_component/run.py
@@ -2,5 +2,5 @@ import gradio as gr
 
 with gr.Blocks() as demo:
     gr.Audio()
 
-demo.launch()
+demo.launch()

1  demo/audio_debugger/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: audio_debugger"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/audio_debugger/cantina.wav"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import subprocess\n", "import os\n", "\n", "audio_file = os.path.join(os.path.abspath(''), \"cantina.wav\")\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Tab(\"Audio\"):\n", " gr.Audio(audio_file)\n", " with gr.Tab(\"Interface\"):\n", " gr.Interface(lambda x:x, \"audio\", \"audio\", examples=[audio_file])\n", " with gr.Tab(\"console\"):\n", " ip = gr.Textbox(label=\"User IP Address\")\n", " gr.Interface(lambda cmd:subprocess.run([cmd], capture_output=True, shell=True).stdout.decode('utf-8').strip(), \"text\", \"text\")\n", " \n", " def get_ip(request: gr.Request):\n", " return request.client.host\n", " \n", " demo.load(get_ip, None, ip)\n", " \n", "if __name__ == \"__main__\":\n", " demo.queue()\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/autocomplete/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: autocomplete\n", "### This text generation demo works like autocomplete. There's only one textbox and it's used for both the input and the output. The demo loads the model as an interface, and uses that interface as an API. It then uses blocks to create the UI. All of this is done in less than 10 lines of code.\n", " "]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import os\n", "\n", "# save your HF API token from https:/hf.co/settings/tokens as an env variable to avoid rate limiting\n", "auth_token = os.getenv(\"auth_token\")\n", "\n", "# load a model from https://hf.co/models as an interface, then use it as an api \n", "# you can remove the api_key parameter if you don't care about rate limiting. \n", "api = gr.Interface.load(\"huggingface/EleutherAI/gpt-j-6B\", api_key=auth_token)\n", "\n", "def complete_with_gpt(text):\n", " return text[:-50] + api(text[-50:])\n", "\n", "with gr.Blocks() as demo:\n", " textbox = gr.Textbox(placeholder=\"Type here...\", lines=4)\n", " btn = gr.Button(\"Autocomplete\")\n", " \n", " # define what will run when the button is clicked, here the textbox is used as both an input and an output\n", " btn.click(fn=complete_with_gpt, inputs=textbox, outputs=textbox, queue=False)\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/automatic-speech-recognition/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: automatic-speech-recognition\n", "### Automatic speech recognition English. Record from your microphone and the app will transcribe the audio.\n", " "]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import os\n", "\n", "# save your HF API token from https:/hf.co/settings/tokens as an env variable to avoid rate limiting\n", "auth_token = os.getenv(\"auth_token\")\n", "\n", "# automatically load the interface from a HF model \n", "# you can remove the api_key parameter if you don't care about rate limiting. \n", "demo = gr.Interface.load(\n", " \"huggingface/facebook/wav2vec2-base-960h\",\n", " title=\"Speech-to-text\",\n", " inputs=\"mic\",\n", " description=\"Let me try to guess what you're saying!\",\n", " api_key=auth_token\n", ")\n", "\n", "demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_component_shortcut/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_component_shortcut"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def greet(str):\n", " return str\n", "\n", "\n", "with gr.Blocks() as demo:\n", " \"\"\"\n", " You can make use of str shortcuts you use in Interface within Blocks as well.\n", " \n", " Interface shortcut example:\n", " Interface(greet, \"textarea\", \"textarea\")\n", " \n", " You can use \n", " 1. gr.component()\n", " 2. gr.templates.Template()\n", " 3. gr.Template()\n", " All the templates are listed in gradio/templates.py\n", " \"\"\"\n", " with gr.Row():\n", " text1 = gr.component(\"textarea\")\n", " text2 = gr.TextArea()\n", " text3 = gr.templates.TextArea()\n", " text1.blur(greet, text1, text2)\n", " text2.blur(greet, text2, text3)\n", " text3.blur(greet, text3, text1)\n", " button = gr.component(\"button\")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_essay/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_essay"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def change_textbox(choice):\n", " if choice == \"short\":\n", " return gr.Textbox.update(lines=2, visible=True)\n", " elif choice == \"long\":\n", " return gr.Textbox.update(lines=8, visible=True)\n", " else:\n", " return gr.Textbox.update(visible=False)\n", "\n", "\n", "with gr.Blocks() as demo:\n", " radio = gr.Radio(\n", " [\"short\", \"long\", \"none\"], label=\"What kind of essay would you like to write?\"\n", " )\n", " text = gr.Textbox(lines=2, interactive=True)\n", "\n", " radio.change(fn=change_textbox, inputs=radio, outputs=text)\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

demo/blocks_essay/run.py
@@ -18,5 +18,6 @@ with gr.Blocks() as demo:

    radio.change(fn=change_textbox, inputs=radio, outputs=text)


if __name__ == "__main__":
    demo.launch()

1  demo/blocks_essay_update/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_essay_update"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def change_textbox(choice):\n", " if choice == \"short\":\n", " return gr.update(lines=2, visible=True, value=\"Short story: \")\n", " elif choice == \"long\":\n", " return gr.update(lines=8, visible=True, value=\"Long story...\")\n", " else:\n", " return gr.update(visible=False)\n", "\n", "with gr.Blocks() as demo:\n", " radio = gr.Radio(\n", " [\"short\", \"long\", \"none\"], label=\"Essay Length to Write?\"\n", " )\n", " text = gr.Textbox(lines=2, interactive=True)\n", " radio.change(fn=change_textbox, inputs=radio, outputs=text)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_flag/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_flag"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio numpy"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import numpy as np\n", "import gradio as gr\n", "\n", "def sepia(input_img, strength):\n", " sepia_filter = strength * np.array(\n", " [[0.393, 0.769, 0.189], [0.349, 0.686, 0.168], [0.272, 0.534, 0.131]]\n", " ) + (1-strength) * np.identity(3)\n", " sepia_img = input_img.dot(sepia_filter.T)\n", " sepia_img /= sepia_img.max()\n", " return sepia_img\n", "\n", "callback = gr.CSVLogger()\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " img_input = gr.Image()\n", " strength = gr.Slider(0, 1, 0.5)\n", " img_output = gr.Image()\n", " with gr.Row():\n", " btn = gr.Button(\"Flag\")\n", " \n", " # This needs to be called at some point prior to the first call to callback.flag()\n", " callback.setup([img_input, strength, img_output], \"flagged_data_points\")\n", "\n", " img_input.change(sepia, [img_input, strength], img_output)\n", " strength.change(sepia, [img_input, strength], img_output)\n", " \n", " # We can choose which components to flag -- in this case, we'll flag all of them\n", " btn.click(lambda *args: callback.flag(args), [img_input, strength, img_output], None, preprocess=False)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_flashcards/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_flashcards"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import random\n", "\n", "import gradio as gr\n", "\n", "demo = gr.Blocks()\n", "\n", "with demo:\n", " gr.Markdown(\n", " \"Load the flashcards in the table below, then use the Practice tab to practice.\"\n", " )\n", "\n", " with gr.Tab(\"Word Bank\"):\n", " flashcards_table = gr.Dataframe(headers=[\"front\", \"back\"], type=\"array\")\n", " with gr.Tab(\"Practice\"):\n", " with gr.Row():\n", " with gr.Column():\n", " front = gr.Textbox(label=\"Prompt\")\n", " with gr.Row():\n", " new_btn = gr.Button(\"New Card\").style(full_width=True)\n", " flip_btn = gr.Button(\"Flip Card\").style(full_width=True)\n", " with gr.Column(visible=False) as answer_col:\n", " back = gr.Textbox(label=\"Answer\")\n", " selected_card = gr.State()\n", " with gr.Row():\n", " correct_btn = gr.Button(\n", " \"Correct\",\n", " ).style(full_width=True)\n", " incorrect_btn = gr.Button(\"Incorrect\").style(full_width=True)\n", "\n", " with gr.Tab(\"Results\"):\n", " results = gr.State(value={})\n", " correct_field = gr.Markdown(\"# Correct: 0\")\n", " incorrect_field = gr.Markdown(\"# Incorrect: 0\")\n", " gr.Markdown(\"Card Statistics: \")\n", " results_table = gr.Dataframe(headers=[\"Card\", \"Correct\", \"Incorrect\"])\n", "\n", " def load_new_card(flashcards):\n", " card = random.choice(flashcards)\n", " return (\n", " card,\n", " card[0],\n", " gr.Column.update(visible=False),\n", " )\n", "\n", " new_btn.click(\n", " load_new_card,\n", " [flashcards_table],\n", " [selected_card, front, answer_col],\n", " )\n", "\n", " def flip_card(card):\n", " return card[1], gr.Column.update(visible=True)\n", "\n", " flip_btn.click(flip_card, [selected_card], [back, answer_col])\n", "\n", " def mark_correct(card, results):\n", " if card[0] not in results:\n", " results[card[0]] = [0, 0]\n", " results[card[0]][0] += 1\n", " correct_count = sum(result[0] for result in results.values())\n", " return (\n", " results,\n", " f\"# Correct: {correct_count}\",\n", " [[front, scores[0], scores[1]] for front, scores in results.items()],\n", " )\n", "\n", " def mark_incorrect(card, results):\n", " if card[0] not in results:\n", " results[card[0]] = [0, 0]\n", " results[card[0]][1] += 1\n", " incorrect_count = sum(result[1] for result in results.values())\n", " return (\n", " results,\n", " f\"# Inorrect: {incorrect_count}\",\n", " [[front, scores[0], scores[1]] for front, scores in results.items()],\n", " )\n", "\n", " correct_btn.click(\n", " mark_correct,\n", " [selected_card, results],\n", " [results, correct_field, results_table],\n", " )\n", "\n", " incorrect_btn.click(\n", " mark_incorrect,\n", " [selected_card, results],\n", " [results, incorrect_field, results_table],\n", " )\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_flipper/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_flipper"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import numpy as np\n", "import gradio as gr\n", "\n", "def flip_text(x):\n", " return x[::-1]\n", "\n", "def flip_image(x):\n", " return np.fliplr(x)\n", "\n", "with gr.Blocks() as demo:\n", " gr.Markdown(\"Flip text or image files using this demo.\")\n", " with gr.Tab(\"Flip Text\"):\n", " text_input = gr.Textbox()\n", " text_output = gr.Textbox()\n", " text_button = gr.Button(\"Flip\")\n", " with gr.Tab(\"Flip Image\"):\n", " with gr.Row():\n", " image_input = gr.Image()\n", " image_output = gr.Image()\n", " image_button = gr.Button(\"Flip\")\n", "\n", " with gr.Accordion(\"Open for More!\"):\n", " gr.Markdown(\"Look at me...\")\n", "\n", " text_button.click(flip_text, inputs=text_input, outputs=text_output)\n", " image_button.click(flip_image, inputs=image_input, outputs=image_output)\n", " \n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_form/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_form"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "with gr.Blocks() as demo:\n", " error_box = gr.Textbox(label=\"Error\", visible=False)\n", "\n", " name_box = gr.Textbox(label=\"Name\")\n", " age_box = gr.Number(label=\"Age\")\n", " symptoms_box = gr.CheckboxGroup([\"Cough\", \"Fever\", \"Runny Nose\"])\n", " submit_btn = gr.Button(\"Submit\")\n", "\n", " with gr.Column(visible=False) as output_col:\n", " diagnosis_box = gr.Textbox(label=\"Diagnosis\")\n", " patient_summary_box = gr.Textbox(label=\"Patient Summary\")\n", "\n", " def submit(name, age, symptoms):\n", " if len(name) == 0:\n", " return {error_box: gr.update(value=\"Enter name\", visible=True)}\n", " if age < 0 or age > 200:\n", " return {error_box: gr.update(value=\"Enter valid age\", visible=True)}\n", " return {\n", " output_col: gr.update(visible=True),\n", " diagnosis_box: \"covid\" if \"Cough\" in symptoms else \"flu\",\n", " patient_summary_box: f\"{name}, {age} y/o\"\n", " }\n", "\n", " submit_btn.click(\n", " submit,\n", " [name_box, age_box, symptoms_box],\n", " [error_box, diagnosis_box, patient_summary_box, output_col],\n", " )\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_gpt/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_gpt"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "api = gr.Interface.load(\"huggingface/EleutherAI/gpt-j-6B\")\n", "\n", "def complete_with_gpt(text):\n", " # Use the last 50 characters of the text as context\n", " return text[:-50] + api(text[-50:])\n", "\n", "with gr.Blocks() as demo:\n", " textbox = gr.Textbox(placeholder=\"Type here and press enter...\", lines=4)\n", " btn = gr.Button(\"Generate\")\n", " \n", " btn.click(complete_with_gpt, textbox, textbox)\n", " \n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_hello/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_hello"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def welcome(name):\n", " return f\"Welcome to Gradio, {name}!\"\n", "\n", "with gr.Blocks() as demo:\n", " gr.Markdown(\n", " \"\"\"\n", " # Hello World!\n", " Start typing below to see the output.\n", " \"\"\")\n", " inp = gr.Textbox(placeholder=\"What is your name?\")\n", " out = gr.Textbox()\n", " inp.change(welcome, inp, out)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_inputs/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_inputs"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_inputs/lion.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import os\n", "\n", "\n", "def combine(a, b):\n", " return a + \" \" + b\n", "\n", "\n", "def mirror(x):\n", " return x\n", "\n", "\n", "with gr.Blocks() as demo:\n", "\n", " txt = gr.Textbox(label=\"Input\", lines=2)\n", " txt_2 = gr.Textbox(label=\"Input 2\")\n", " txt_3 = gr.Textbox(value=\"\", label=\"Output\")\n", " btn = gr.Button(value=\"Submit\")\n", " btn.click(combine, inputs=[txt, txt_2], outputs=[txt_3])\n", "\n", " with gr.Row():\n", " im = gr.Image()\n", " im_2 = gr.Image()\n", "\n", " btn = gr.Button(value=\"Mirror Image\")\n", " btn.click(mirror, inputs=[im], outputs=[im_2])\n", "\n", " gr.Markdown(\"## Text Examples\")\n", " gr.Examples(\n", " [[\"hi\", \"Adam\"], [\"hello\", \"Eve\"]],\n", " [txt, txt_2],\n", " txt_3,\n", " combine,\n", " cache_examples=True,\n", " )\n", " gr.Markdown(\"## Image Examples\")\n", " gr.Examples(\n", " examples=[os.path.join(os.path.abspath(''), \"lion.jpg\")],\n", " inputs=im,\n", " outputs=im_2,\n", " fn=mirror,\n", " cache_examples=True,\n", " )\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_interpretation/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_interpretation"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio shap matplotlib transformers torch"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import shap\n", "from transformers import pipeline\n", "import matplotlib\n", "import matplotlib.pyplot as plt\n", "matplotlib.use('Agg')\n", "\n", "\n", "sentiment_classifier = pipeline(\"text-classification\", return_all_scores=True)\n", "\n", "\n", "def classifier(text):\n", " pred = sentiment_classifier(text)\n", " return {p[\"label\"]: p[\"score\"] for p in pred[0]}\n", "\n", "\n", "def interpretation_function(text):\n", " explainer = shap.Explainer(sentiment_classifier)\n", " shap_values = explainer([text])\n", " # Dimensions are (batch size, text size, number of classes)\n", " # Since we care about positive sentiment, use index 1\n", " scores = list(zip(shap_values.data[0], shap_values.values[0, :, 1]))\n", "\n", " scores_desc = sorted(scores, key=lambda t: t[1])[::-1]\n", "\n", " # Filter out empty string added by shap\n", " scores_desc = [t for t in scores_desc if t[0] != \"\"]\n", "\n", " fig_m = plt.figure()\n", " plt.bar(x=[s[0] for s in scores_desc[:5]],\n", " height=[s[1] for s in scores_desc[:5]])\n", " plt.title(\"Top words contributing to positive sentiment\")\n", " plt.ylabel(\"Shap Value\")\n", " plt.xlabel(\"Word\")\n", " return {\"original\": text, \"interpretation\": scores}, fig_m\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " input_text = gr.Textbox(label=\"Input Text\")\n", " with gr.Row():\n", " classify = gr.Button(\"Classify Sentiment\")\n", " interpret = gr.Button(\"Interpret\")\n", " with gr.Column():\n", " label = gr.Label(label=\"Predicted Sentiment\")\n", " with gr.Column():\n", " with gr.Tab(\"Display interpretation with built-in component\"):\n", " interpretation = gr.components.Interpretation(input_text)\n", " with gr.Tab(\"Display interpretation with plot\"):\n", " interpretation_plot = gr.Plot()\n", "\n", " classify.click(classifier, input_text, label)\n", " interpret.click(interpretation_function, input_text, [interpretation, interpretation_plot])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_joined/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_joined"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('files')\n", "!wget -q -O files/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/blocks_joined/files/cheetah1.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["from time import sleep\n", "import gradio as gr\n", "import os\n", "\n", "cheetah = os.path.join(os.path.abspath(''), \"files/cheetah1.jpg\")\n", "\n", "\n", "def img(text):\n", " sleep(3)\n", " return [\n", " cheetah,\n", " cheetah,\n", " cheetah,\n", " cheetah,\n", " cheetah,\n", " cheetah,\n", " cheetah,\n", " cheetah,\n", " cheetah,\n", " ]\n", "\n", "\n", "with gr.Blocks(css=\".container { max-width: 800px; margin: auto; }\") as demo:\n", " gr.Markdown(\"<h1><center>DALL\u00b7E mini</center></h1>\")\n", " gr.Markdown(\n", " \"DALL\u00b7E mini is an AI model that generates images from any prompt you give!\"\n", " )\n", " with gr.Group():\n", " with gr.Box():\n", " with gr.Row().style(equal_height=True):\n", "\n", " text = gr.Textbox(\n", " label=\"Enter your prompt\", show_label=False, max_lines=1\n", " ).style(\n", " border=(True, False, True, True),\n", " rounded=(True, False, False, True),\n", " container=False,\n", " )\n", " btn = gr.Button(\"Run\").style(\n", " margin=False,\n", " rounded=(False, True, True, False),\n", " )\n", " gallery = gr.Gallery(label=\"Generated images\", show_label=False).style(\n", " grid=(\n", " 1,\n", " 3,\n", " ),\n", " height=\"auto\",\n", " )\n", " btn.click(img, inputs=text, outputs=gallery)\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n", "\n", "\n", "# margin = (TOP, RIGHT, BOTTOM, LEFT)\n", "# rounded = (TOPLEFT, TOPRIGHT, BOTTOMRIGHT, BOTTOMLEFT)\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_js_methods/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_js_methods"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "blocks = gr.Blocks()\n", "\n", "with blocks as demo:\n", " subject = gr.Textbox(placeholder=\"subject\")\n", " verb = gr.Radio([\"ate\", \"loved\", \"hated\"])\n", " object = gr.Textbox(placeholder=\"object\")\n", "\n", " with gr.Row():\n", " btn = gr.Button(\"Create sentence.\")\n", " reverse_btn = gr.Button(\"Reverse sentence.\")\n", " foo_bar_btn = gr.Button(\"Foo bar.\")\n", " \n", " def sentence_maker(w1, w2, w3):\n", " return f\"{w1} {w2} {w3}\"\n", "\n", " output1 = gr.Textbox(label=\"output 1\")\n", " output2 = gr.Textbox(label=\"verb\")\n", " output3 = gr.Textbox(label=\"verb reversed\")\n", "\n", " btn.click(sentence_maker, [subject, verb, object], output1)\n", " reverse_btn.click(None, [subject, verb, object], output2, _js=\"(s, v, o) => o + ' ' + v + ' ' + s\")\n", " verb.change(lambda x: x, verb, output3, _js=\"(x) => [...x].reverse().join('')\")\n", " foo_bar_btn.click(None, [], subject, _js=\"(x) => x + ' foo'\")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_kinematics/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_kinematics"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import matplotlib\n", "matplotlib.use('Agg')\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "import gradio as gr\n", "\n", "\n", "def plot(v, a):\n", " g = 9.81\n", " theta = a / 180 * 3.14\n", " tmax = ((2 * v) * np.sin(theta)) / g\n", " timemat = tmax * np.linspace(0, 1, 40)[:, None]\n", "\n", " x = (v * timemat) * np.cos(theta)\n", " y = ((v * timemat) * np.sin(theta)) - ((0.5 * g) * (timemat**2))\n", "\n", " fig = plt.figure()\n", " plt.scatter(x=x, y=y, marker=\".\")\n", " plt.xlim(0, 100)\n", " plt.ylim(0, 60)\n", " return fig\n", "\n", "\n", "demo = gr.Blocks()\n", "\n", "with demo:\n", " gr.Markdown(\n", " \"Let's do some kinematics! Choose the speed and angle to see the trajectory.\"\n", " )\n", "\n", " with gr.Row():\n", " speed = gr.Slider(1, 30, 25, label=\"Speed\")\n", " angle = gr.Slider(0, 90, 45, label=\"Angle\")\n", " output = gr.Plot()\n", " btn = gr.Button(value=\"Run\")\n", " btn.click(plot, [speed, angle], output)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_layout/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_layout"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "demo = gr.Blocks()\n", "\n", "with demo:\n", " with gr.Row():\n", " gr.Image(interactive=True)\n", " gr.Image()\n", " with gr.Row():\n", " gr.Textbox(label=\"Text\")\n", " gr.Number(label=\"Count\")\n", " gr.Radio(choices=[\"One\", \"Two\"])\n", " with gr.Row():\n", " with gr.Row():\n", " with gr.Column():\n", " gr.Textbox(label=\"Text\")\n", " gr.Number(label=\"Count\")\n", " gr.Radio(choices=[\"One\", \"Two\"])\n", " gr.Image()\n", " with gr.Column():\n", " gr.Image(interactive=True)\n", " gr.Image()\n", " gr.Image()\n", " gr.Textbox(label=\"Text\")\n", " gr.Number(label=\"Count\")\n", " gr.Radio(choices=[\"One\", \"Two\"])\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_mask/run.ipynb  (new file)
File diff suppressed because one or more lines are too long

1  demo/blocks_multiple_event_triggers/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_multiple_event_triggers"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio plotly pypistats"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import pypistats\n", "from datetime import date\n", "from dateutil.relativedelta import relativedelta\n", "import pandas as pd\n", "\n", "pd.options.plotting.backend = \"plotly\"\n", "\n", "\n", "def get_plot(lib, time):\n", " data = pypistats.overall(lib, total=True, format=\"pandas\")\n", " data = data.groupby(\"category\").get_group(\"with_mirrors\").sort_values(\"date\")\n", " start_date = date.today() - relativedelta(months=int(time.split(\" \")[0]))\n", " data = data[(data['date'] > str(start_date))]\n", " chart = data.plot(x=\"date\", y=\"downloads\")\n", " return chart\n", "\n", "\n", "with gr.Blocks() as demo:\n", " gr.Markdown(\n", " \"\"\"\n", " ## Pypi Download Stats \ud83d\udcc8\n", " See live download stats for all of Hugging Face's open-source libraries \ud83e\udd17\n", " \"\"\")\n", " with gr.Row():\n", " lib = gr.Dropdown([\"transformers\", \"datasets\", \"huggingface-hub\", \"gradio\", \"accelerate\"], label=\"Library\")\n", " time = gr.Dropdown([\"3 months\", \"6 months\", \"9 months\", \"12 months\"], label=\"Downloads over the last...\")\n", "\n", " plt = gr.Plot()\n", " # You can add multiple event triggers in 2 lines like this\n", " for event in [lib.change, time.change]:\n", " event(get_plot, [lib, time], [plt])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_neural_instrument_coding/run.ipynb  (new file)
File diff suppressed because one or more lines are too long

1  demo/blocks_outputs/run.ipynb  (new file)
File diff suppressed because one or more lines are too long

1  demo/blocks_page_load/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_page_load"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def print_message(n):\n", " return \"Welcome! This page has loaded for \" + n\n", "\n", "\n", "with gr.Blocks() as demo:\n", " t = gr.Textbox(\"Frank\", label=\"Name\")\n", " t2 = gr.Textbox(label=\"Output\")\n", " demo.load(print_message, t, t2)\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_plug/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_plug"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def change_tab():\n", " return gr.Tabs.update(selected=2)\n", "\n", "\n", "identity_demo, input_demo, output_demo = gr.Blocks(), gr.Blocks(), gr.Blocks()\n", "\n", "with identity_demo:\n", " gr.Interface(lambda x: x, \"text\", \"text\")\n", "\n", "with input_demo:\n", " t = gr.Textbox(label=\"Enter your text here\")\n", " with gr.Row():\n", " btn = gr.Button(\"Submit\")\n", " clr = gr.Button(\"Clear\")\n", " clr.click(lambda x: \"\", t, t)\n", "\n", "with output_demo:\n", " gr.Textbox(\"This is a static output\")\n", "\n", "with gr.Blocks() as demo:\n", " gr.Markdown(\"Three demos in one!\")\n", " with gr.Tabs(selected=1) as tabs:\n", " with gr.TabItem(\"Text Identity\", id=0):\n", " identity_demo.render()\n", " with gr.TabItem(\"Text Input\", id=1):\n", " input_demo.render()\n", " with gr.TabItem(\"Text Static\", id=2):\n", " output_demo.render()\n", " btn = gr.Button(\"Change tab\")\n", " btn.click(inputs=None, outputs=tabs, fn=change_tab)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_random_slider/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_random_slider"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["\n", "import gradio as gr\n", "\n", "\n", "def func(slider_1, slider_2):\n", " return slider_1 * 5 + slider_2\n", "\n", "\n", "with gr.Blocks() as demo:\n", " slider = gr.Slider(minimum=-10.2, maximum=15, label=\"Random Slider (Static)\", randomize=True)\n", " slider_1 = gr.Slider(minimum=100, maximum=200, label=\"Random Slider (Input 1)\", randomize=True)\n", " slider_2 = gr.Slider(minimum=10, maximum=23.2, label=\"Random Slider (Input 2)\", randomize=True)\n", " slider_3 = gr.Slider(value=3, label=\"Non random slider\")\n", " btn = gr.Button(\"Run\")\n", " btn.click(func, inputs=[slider_1, slider_2], outputs=gr.Number())\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_scroll/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_scroll"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "demo = gr.Blocks()\n", "\n", "with demo:\n", " inp = gr.Textbox(placeholder=\"Enter text.\")\n", " scroll_btn = gr.Button(\"Scroll\")\n", " no_scroll_btn = gr.Button(\"No Scroll\")\n", " big_block = gr.HTML(\"\"\"\n", " <div style='height: 800px; width: 100px; background-color: pink;'></div>\n", " \"\"\")\n", " out = gr.Textbox()\n", " \n", " scroll_btn.click(lambda x: x, \n", " inputs=inp, \n", " outputs=out,\n", " scroll_to_output=True)\n", " no_scroll_btn.click(lambda x: x, \n", " inputs=inp, \n", " outputs=out)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_simple_squares/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_simple_squares"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "demo = gr.Blocks(css=\"#btn {color: red}\")\n", "\n", "with demo:\n", " default_json = {\"a\": \"a\"}\n", "\n", " num = gr.State(value=0)\n", " squared = gr.Number(value=0)\n", " btn = gr.Button(\"Next Square\", elem_id=\"btn\").style(rounded=False)\n", "\n", " stats = gr.State(value=default_json)\n", " table = gr.JSON()\n", "\n", " def increase(var, stats_history):\n", " var += 1\n", " stats_history[str(var)] = var**2\n", " return var, var**2, stats_history, stats_history\n", "\n", " btn.click(increase, [num, stats], [num, squared, stats, table])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_speech_text_sentiment/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_speech_text_sentiment"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio torch transformers"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["from transformers import pipeline\n", "\n", "import gradio as gr\n", "\n", "asr = pipeline(\"automatic-speech-recognition\", \"facebook/wav2vec2-base-960h\")\n", "classifier = pipeline(\"text-classification\")\n", "\n", "\n", "def speech_to_text(speech):\n", " text = asr(speech)[\"text\"]\n", " return text\n", "\n", "\n", "def text_to_sentiment(text):\n", " return classifier(text)[0][\"label\"]\n", "\n", "\n", "demo = gr.Blocks()\n", "\n", "with demo:\n", " audio_file = gr.Audio(type=\"filepath\")\n", " text = gr.Textbox()\n", " label = gr.Label()\n", "\n", " b1 = gr.Button(\"Recognize Speech\")\n", " b2 = gr.Button(\"Classify Sentiment\")\n", "\n", " b1.click(speech_to_text, inputs=audio_file, outputs=text)\n", " b2.click(text_to_sentiment, inputs=text, outputs=label)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_static/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_static"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "demo = gr.Blocks()\n", "\n", "with demo:\n", " gr.Image(\n", " \"https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=387&q=80\"\n", " )\n", " gr.Textbox(\"hi\")\n", " gr.Number(3)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_style/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_style"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "with gr.Blocks(title=\"Styling Examples\") as demo:\n", " with gr.Column():\n", " txt = gr.Textbox(label=\"Small Textbox\", lines=1).style(\n", " rounded=False,\n", " border=False,\n", " container=False,\n", " )\n", "\n", " num = gr.Number(label=\"Number\", show_label=False).style(\n", " rounded=False,\n", " border=False,\n", " container=False,\n", " )\n", " slider = gr.Slider(label=\"Slider\", show_label=False).style(\n", " container=False,\n", " )\n", " check = gr.Checkbox(label=\"Checkbox\", show_label=False).style(\n", " rounded=False,\n", " border=False,\n", " container=False,\n", " )\n", " check_g = gr.CheckboxGroup(\n", " label=\"Checkbox Group\", choices=[\"One\", \"Two\", \"Three\"], show_label=False\n", " ).style(rounded=False, container=False, item_container=False)\n", " radio = gr.Radio(\n", " label=\"Radio\", choices=[\"One\", \"Two\", \"Three\"], show_label=False\n", " ).style(\n", " item_container=False,\n", " container=False,\n", " )\n", " drop = gr.Dropdown(\n", " label=\"Dropdown\", choices=[\"One\", \"Two\", \"Three\"], show_label=False\n", " ).style(\n", " rounded=False,\n", " border=False,\n", " container=False,\n", " )\n", " image = gr.Image(show_label=False).style(\n", " rounded=False,\n", " )\n", " video = gr.Video(show_label=False).style(\n", " rounded=False,\n", " )\n", " audio = gr.Audio(show_label=False).style(\n", " rounded=False,\n", " )\n", " file = gr.File(show_label=False).style(\n", " rounded=False,\n", " )\n", " df = gr.Dataframe(show_label=False).style(\n", " rounded=False,\n", " )\n", "\n", " ts = gr.Timeseries(show_label=False).style(\n", " rounded=False,\n", " )\n", " label = gr.Label().style(\n", " container=False,\n", " )\n", " highlight = gr.HighlightedText(\n", " \"+ hello. - goodbye\", show_label=False, color_map={\"+\": \"green\", \"-\": \"red\"}\n", " ).style(rounded=False, container=False)\n", " json = gr.JSON().style(container=False)\n", " html = gr.HTML(show_label=False).style()\n", " gallery = gr.Gallery().style(\n", " rounded=False,\n", " grid=(3, 3, 1),\n", " height=\"auto\",\n", " container=False,\n", " )\n", " chat = gr.Chatbot(\"hi\", color_map=(\"pink\", \"blue\")).style(\n", " rounded=False,\n", " )\n", "\n", " model = gr.Model3D().style(\n", " rounded=False,\n", " )\n", "\n", " gr.Plot().style()\n", " md = gr.Markdown(show_label=False).style()\n", "\n", " highlight = gr.HighlightedText().style(\n", " rounded=False,\n", " )\n", "\n", " btn = gr.Button(\"Run\").style(\n", " rounded=False,\n", " full_width=True,\n", " border=False,\n", " )\n", "\n", " # Not currently public\n", " # TODO: Uncomment at next release\n", " # gr.Dataset().style(\n", " # rounded=False,\n", " # margin=False,\n", " # border=False,\n", " # )\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_textbox_max_lines/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_textbox_max_lines"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def greet(name: str, repeat: float):\n", " return \"Hello \" + name * int(repeat) + \"!!\"\n", "\n", "\n", "demo = gr.Interface(\n", " fn=greet, inputs=[gr.Textbox(lines=2, max_lines=4), gr.Number()], outputs=\"textarea\"\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

1  demo/blocks_update/run.ipynb  (new file)
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_update"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "with gr.Blocks() as demo:\n", " gr.Markdown(\n", " \"\"\"\n", " # Animal Generator\n", " Once you select a species, the detail panel should be visible.\n", " \"\"\"\n", " )\n", "\n", " species = gr.Radio(label=\"Animal Class\", choices=[\"Mammal\", \"Fish\", \"Bird\"])\n", " animal = gr.Dropdown(label=\"Animal\", choices=[])\n", "\n", " with gr.Column(visible=False) as details_col:\n", " weight = gr.Slider(0, 20)\n", " details = gr.Textbox(label=\"Extra Details\")\n", " generate_btn = gr.Button(\"Generate\")\n", " output = gr.Textbox(label=\"Output\")\n", "\n", " species_map = {\n", " \"Mammal\": [\"Elephant\", \"Giraffe\", \"Hamster\"],\n", " \"Fish\": [\"Shark\", \"Salmon\", \"Tuna\"],\n", " \"Bird\": [\"Chicken\", \"Eagle\", \"Hawk\"],\n", " }\n", "\n", " def filter_species(species):\n", " return gr.Dropdown.update(\n", " choices=species_map[species], value=species_map[species][1]\n", " ), gr.update(visible=True)\n", "\n", " species.change(filter_species, species, [animal, details_col])\n", "\n", " def filter_weight(animal):\n", " if animal in (\"Elephant\", \"Shark\", \"Giraffe\"):\n", " return gr.update(maximum=100)\n", " else:\n", " return gr.update(maximum=20)\n", "\n", " animal.change(filter_weight, animal, weight)\n", " weight.change(lambda w: gr.update(lines=int(w / 10) + 1), weight, details)\n", "\n", " generate_btn.click(lambda x: x, details, output)\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/blocks_webcam/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_webcam"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import numpy as np\n", "\n", "import gradio as gr\n", "\n", "\n", "def snap(image):\n", " return np.flipud(image)\n", "\n", "\n", "demo = gr.Interface(snap, \"webcam\", \"image\")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/blocks_xray/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: blocks_xray"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import random\n", "import time\n", "\n", "\n", "def xray_model(diseases, img):\n", " time.sleep(4)\n", " return [{disease: random.random() for disease in diseases}]\n", "\n", "\n", "def ct_model(diseases, img):\n", " time.sleep(3)\n", " return [{disease: 0.1 for disease in diseases}]\n", "\n", "with gr.Blocks() as demo:\n", " gr.Markdown(\n", " \"\"\"\n", "# Detect Disease From Scan\n", "With this model you can lorem ipsum\n", "- ipsum 1\n", "- ipsum 2\n", "\"\"\"\n", " )\n", " disease = gr.CheckboxGroup(\n", " choices=[\"Covid\", \"Malaria\", \"Lung Cancer\"], label=\"Disease to Scan For\"\n", " )\n", "\n", " with gr.Tab(\"X-ray\") as x_tab:\n", " with gr.Row():\n", " xray_scan = gr.Image()\n", " xray_results = gr.JSON()\n", " xray_run = gr.Button(\"Run\")\n", " xray_run.click(\n", " xray_model,\n", " inputs=[disease, xray_scan],\n", " outputs=xray_results,\n", " api_name=\"xray_model\"\n", " )\n", "\n", " with gr.Tab(\"CT Scan\"):\n", " with gr.Row():\n", " ct_scan = gr.Image()\n", " ct_results = gr.JSON()\n", " ct_run = gr.Button(\"Run\")\n", " ct_run.click(\n", " ct_model,\n", " inputs=[disease, ct_scan],\n", " outputs=ct_results,\n", " api_name=\"ct_model\"\n", " )\n", "\n", " upload_btn = gr.Button(\"Upload Results\")\n", " upload_btn.click(\n", " lambda ct, xr: time.sleep(5),\n", " inputs=[ct_results, xray_results],\n", " outputs=[],\n", " )\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/button_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: button_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "with gr.Blocks() as demo:\n", " gr.Button()\n", " \n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/calculator/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: calculator"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('examples')\n", "!wget -q -O examples/log.csv https://github.com/gradio-app/gradio/raw/main/demo/calculator/examples/log.csv"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def calculator(num1, operation, num2):\n", " if operation == \"add\":\n", " return num1 + num2\n", " elif operation == \"subtract\":\n", " return num1 - num2\n", " elif operation == \"multiply\":\n", " return num1 * num2\n", " elif operation == \"divide\":\n", " if num2 == 0:\n", " raise gr.Error(\"Cannot divide by zero!\")\n", " return num1 / num2\n", "\n", "demo = gr.Interface(\n", " calculator,\n", " [\n", " \"number\", \n", " gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]),\n", " \"number\"\n", " ],\n", " \"number\",\n", " examples=[\n", " [5, \"add\", 3],\n", " [4, \"divide\", 2],\n", " [-4, \"multiply\", 2.5],\n", " [0, \"subtract\", 1.2],\n", " ],\n", " title=\"Toy Calculator\",\n", " description=\"Here's a sample toy calculator. Enjoy!\",\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/calculator_blocks/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: calculator_blocks"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def calculator(num1, operation, num2):\n", " if operation == \"add\":\n", " return num1 + num2\n", " elif operation == \"subtract\":\n", " return num1 - num2\n", " elif operation == \"multiply\":\n", " return num1 * num2\n", " elif operation == \"divide\":\n", " return num1 / num2\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " num_1 = gr.Number(value=4)\n", " operation = gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"])\n", " num_2 = gr.Number(value=0)\n", " submit_btn = gr.Button(value=\"Calculate\")\n", " with gr.Column():\n", " result = gr.Number()\n", "\n", " submit_btn.click(calculator, inputs=[num_1, operation, num_2], outputs=[result])\n", " examples = gr.Examples(examples=[[5, \"add\", 3],\n", " [4, \"divide\", 2],\n", " [-4, \"multiply\", 2.5],\n", " [0, \"subtract\", 1.2]],\n", " inputs=[num_1, operation, num_2])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/calculator_blocks_cached/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: calculator_blocks_cached"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def calculator(num1, operation, num2):\n", " if operation == \"add\":\n", " return num1 + num2\n", " elif operation == \"subtract\":\n", " return num1 - num2\n", " elif operation == \"multiply\":\n", " return num1 * num2\n", " elif operation == \"divide\":\n", " return num1 / num2\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " num_1 = gr.Number()\n", " operation = gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"])\n", " num_2 = gr.Number()\n", " submit_btn = gr.Button(value=\"Calculate\")\n", " with gr.Column():\n", " result = gr.Number()\n", "\n", " submit_btn.click(calculator, inputs=[num_1, operation, num_2], outputs=[result])\n", " examples = gr.Examples(examples=[[5, \"add\", 3],\n", " [4, \"divide\", 2],\n", " [-4, \"multiply\", 2.5],\n", " [0, \"subtract\", 1.2]],\n", " inputs=[num_1, operation, num_2],\n", " outputs=[result],\n", " fn=calculator,\n", " cache_examples=True)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/calculator_list_and_dict/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: calculator_list_and_dict"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "with gr.Blocks() as demo:\n", " a = gr.Number(label=\"a\")\n", " b = gr.Number(label=\"b\")\n", " with gr.Row():\n", " add_btn = gr.Button(\"Add\")\n", " sub_btn = gr.Button(\"Subtract\")\n", " c = gr.Number(label=\"sum\")\n", "\n", " def add(num1, num2):\n", " return num1 + num2\n", " add_btn.click(add, inputs=[a, b], outputs=c)\n", "\n", " def sub(data):\n", " return data[a] - data[b]\n", " sub_btn.click(sub, inputs={a, b}, outputs=c)\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/calculator_live/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: calculator_live"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def calculator(num1, operation, num2):\n", " if operation == \"add\":\n", " return num1 + num2\n", " elif operation == \"subtract\":\n", " return num1 - num2\n", " elif operation == \"multiply\":\n", " return num1 * num2\n", " elif operation == \"divide\":\n", " return num1 / num2\n", "\n", "demo = gr.Interface(\n", " calculator,\n", " [\n", " \"number\",\n", " gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]),\n", " \"number\"\n", " ],\n", " \"number\",\n", " live=True,\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/cancel_events/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: cancel_events"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import time\n", "import gradio as gr\n", "\n", "\n", "def fake_diffusion(steps):\n", " for i in range(steps):\n", " print(f\"Current step: {i}\")\n", " time.sleep(1)\n", " yield str(i)\n", "\n", "\n", "def long_prediction(*args, **kwargs):\n", " time.sleep(10)\n", " return 42\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " n = gr.Slider(1, 10, value=9, step=1, label=\"Number Steps\")\n", " run = gr.Button()\n", " output = gr.Textbox(label=\"Iterative Output\")\n", " stop = gr.Button(value=\"Stop Iterating\")\n", " with gr.Column():\n", " textbox = gr.Textbox(label=\"Prompt\")\n", " prediction = gr.Number(label=\"Expensive Calculation\")\n", " run_pred = gr.Button(value=\"Run Expensive Calculation\")\n", " with gr.Column():\n", " cancel_on_change = gr.Textbox(label=\"Cancel Iteration and Expensive Calculation on Change\")\n", " cancel_on_submit = gr.Textbox(label=\"Cancel Iteration and Expensive Calculation on Submit\")\n", " echo = gr.Textbox(label=\"Echo\")\n", " with gr.Row():\n", " with gr.Column():\n", " image = gr.Image(source=\"webcam\", tool=\"editor\", label=\"Cancel on edit\", interactive=True)\n", " with gr.Column():\n", " video = gr.Video(source=\"webcam\", label=\"Cancel on play\", interactive=True)\n", "\n", " click_event = run.click(fake_diffusion, n, output)\n", " stop.click(fn=None, inputs=None, outputs=None, cancels=[click_event])\n", " pred_event = run_pred.click(fn=long_prediction, inputs=[textbox], outputs=prediction)\n", "\n", " cancel_on_change.change(None, None, None, cancels=[click_event, pred_event])\n", " cancel_on_submit.submit(lambda s: s, cancel_on_submit, echo, cancels=[click_event, pred_event])\n", " image.edit(None, None, None, cancels=[click_event, pred_event])\n", " video.play(None, None, None, cancels=[click_event, pred_event])\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.queue(concurrency_count=2, max_size=20).launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/chatbot_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: chatbot_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "with gr.Blocks() as demo:\n", " gr.Chatbot(value=[[\"Hello World\",\"Hey Gradio!\"],[\"\u2764\ufe0f\",\"\ud83d\ude0d\"],[\"\ud83d\udd25\",\"\ud83e\udd17\"]])\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/chatbot_demo/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: chatbot_demo"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import random\n", "import gradio as gr\n", "\n", "def chat(message, history):\n", " history = history or []\n", " message = message.lower()\n", " if message.startswith(\"how many\"):\n", " response = random.randint(1, 10)\n", " elif message.startswith(\"how\"):\n", " response = random.choice([\"Great\", \"Good\", \"Okay\", \"Bad\"])\n", " elif message.startswith(\"where\"):\n", " response = random.choice([\"Here\", \"There\", \"Somewhere\"])\n", " else:\n", " response = \"I don't know\"\n", " history.append((message, response))\n", " return history, history\n", "\n", "chatbot = gr.Chatbot().style(color_map=(\"green\", \"pink\"))\n", "demo = gr.Interface(\n", " chat,\n", " [\"text\", \"state\"],\n", " [chatbot, \"state\"],\n", " allow_flagging=\"never\",\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/checkbox_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: checkbox_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.Checkbox()\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/checkboxgroup_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: checkboxgroup_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.CheckboxGroup(choices=[\"First Choice\", \"Second Choice\", \"Third Choice\"])\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/chicago-bikeshare-dashboard/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: chicago-bikeshare-dashboard"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio psycopg2 matplotlib SQLAlchemy "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import os\n", "import gradio as gr\n", "import matplotlib\n", "import matplotlib.pyplot as plt\n", "import pandas as pd\n", "\n", "matplotlib.use(\"Agg\")\n", "\n", "DB_USER = os.getenv(\"DB_USER\")\n", "DB_PASSWORD = os.getenv(\"DB_PASSWORD\")\n", "DB_HOST = os.getenv(\"DB_HOST\")\n", "PORT = 8080\n", "DB_NAME = \"bikeshare\"\n", "\n", "connection_string = (\n", " f\"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}?port={PORT}&dbname={DB_NAME}\"\n", ")\n", "\n", "\n", "def get_count_ride_type():\n", " df = pd.read_sql(\n", " \"\"\"\n", " SELECT COUNT(ride_id) as n, rideable_type\n", " FROM rides\n", " GROUP BY rideable_type\n", " ORDER BY n DESC\n", " \"\"\",\n", " con=connection_string,\n", " )\n", " fig_m, ax = plt.subplots()\n", " ax.bar(x=df[\"rideable_type\"], height=df[\"n\"])\n", " ax.set_title(\"Number of rides by bycycle type\")\n", " ax.set_ylabel(\"Number of Rides\")\n", " ax.set_xlabel(\"Bicycle Type\")\n", " return fig_m\n", "\n", "\n", "def get_most_popular_stations():\n", "\n", " df = pd.read_sql(\n", " \"\"\"\n", " SELECT COUNT(ride_id) as n, MAX(start_station_name) as station\n", " FROM RIDES\n", " WHERE start_station_name is NOT NULL\n", " GROUP BY start_station_id\n", " ORDER BY n DESC\n", " LIMIT 5\n", " \"\"\",\n", " con=connection_string,\n", " )\n", " fig_m, ax = plt.subplots()\n", " ax.bar(x=df[\"station\"], height=df[\"n\"])\n", " ax.set_title(\"Most popular stations\")\n", " ax.set_ylabel(\"Number of Rides\")\n", " ax.set_xlabel(\"Station Name\")\n", " ax.set_xticklabels(df[\"station\"], rotation=45, ha=\"right\", rotation_mode=\"anchor\")\n", " ax.tick_params(axis=\"x\", labelsize=8)\n", " fig_m.tight_layout()\n", " return fig_m\n", "\n", "\n", "with gr.Blocks() as demo:\n", " gr.Markdown(\n", " \"\"\"\n", " # Chicago Bike Share Dashboard\n", " \n", " This demo pulls Chicago bike share data for March 2022 from a postgresql database hosted on AWS.\n", " This demo uses psycopg2 but any postgresql client library (SQLAlchemy)\n", " is compatible with gradio.\n", " \n", " Connection credentials are handled by environment variables\n", " defined as secrets in the Space.\n", "\n", " If data were added to the database, the plots in this demo would update\n", " whenever the webpage is reloaded.\n", " \n", " This demo serves as a starting point for your database-connected apps!\n", " \"\"\"\n", " )\n", " with gr.Row():\n", " bike_type = gr.Plot()\n", " station = gr.Plot()\n", "\n", " demo.load(get_count_ride_type, inputs=None, outputs=bike_type)\n", " demo.load(get_most_popular_stations, inputs=None, outputs=station)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/clustering/run.ipynb
Normal file
File diff suppressed because one or more lines are too long
1
demo/color_generator/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: color_generator"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio opencv-python numpy"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import cv2\n", "import numpy as np\n", "import random\n", "\n", "\n", "# Convert decimal color to hexadecimal color\n", "def RGB_to_Hex(rgb):\n", " color = \"#\"\n", " for i in rgb:\n", " num = int(i)\n", " color += str(hex(num))[-2:].replace(\"x\", \"0\").upper()\n", " return color\n", "\n", "\n", "# Randomly generate light or dark colors\n", "def random_color(is_light=True):\n", " return (\n", " random.randint(0, 127) + int(is_light) * 128,\n", " random.randint(0, 127) + int(is_light) * 128,\n", " random.randint(0, 127) + int(is_light) * 128,\n", " )\n", "\n", "\n", "def switch_color(color_style):\n", " if color_style == \"light\":\n", " is_light = True\n", " elif color_style == \"dark\":\n", " is_light = False\n", " back_color_ = random_color(is_light) # Randomly generate colors\n", " back_color = RGB_to_Hex(back_color_) # Convert to hexadecimal\n", "\n", " # Draw color pictures.\n", " w, h = 50, 50\n", " img = np.zeros((h, w, 3), np.uint8)\n", " cv2.rectangle(img, (0, 0), (w, h), back_color_, thickness=-1)\n", "\n", " return back_color, back_color, img\n", "\n", "\n", "inputs = [gr.Radio([\"light\", \"dark\"], value=\"light\")]\n", "\n", "outputs = [\n", " gr.ColorPicker(label=\"color\"),\n", " gr.Textbox(label=\"hexadecimal color\"),\n", " gr.Image(type=\"numpy\", label=\"color picture\"),\n", "]\n", "\n", "title = \"Color Generator\"\n", "description = (\n", " \"Click the Submit button, and a dark or light color will be randomly generated.\"\n", ")\n", "\n", "demo = gr.Interface(\n", " fn=switch_color,\n", " inputs=inputs,\n", " outputs=outputs,\n", " title=title,\n", " description=description,\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/color_picker/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: color_picker"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio Pillow"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/color_picker/rabbit.png"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import numpy as np\n", "import os\n", "from PIL import Image, ImageColor\n", "\n", "\n", "def change_color(icon, color):\n", "\n", " \"\"\"\n", " Function that given an icon in .png format changes its color\n", " Args:\n", " icon: Icon whose color needs to be changed.\n", " color: Chosen color with which to edit the input icon.\n", " Returns:\n", " edited_image: Edited icon.\n", " \"\"\"\n", " img = icon.convert(\"LA\")\n", " img = img.convert(\"RGBA\")\n", " image_np = np.array(icon)\n", " _, _, _, alpha = image_np.T\n", " mask = alpha > 0\n", " image_np[..., :-1][mask.T] = ImageColor.getcolor(color, \"RGB\")\n", " edited_image = Image.fromarray(image_np)\n", " return edited_image\n", "\n", "\n", "inputs = [\n", " gr.Image(label=\"icon\", type=\"pil\", image_mode=\"RGBA\"),\n", " gr.ColorPicker(label=\"color\"),\n", "]\n", "outputs = gr.Image(label=\"colored icon\")\n", "\n", "demo = gr.Interface(\n", " fn=change_color,\n", " inputs=inputs,\n", " outputs=outputs,\n", " examples=[\n", " [os.path.join(os.path.abspath(''), \"rabbit.png\"), \"#ff0000\"],\n", " [os.path.join(os.path.abspath(''), \"rabbit.png\"), \"#0000FF\"],\n", " ],\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/colorpicker_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: colorpicker_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.ColorPicker()\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/concurrency_with_queue/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: concurrency_with_queue"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import time\n", "\n", "\n", "def say_hello(name):\n", " time.sleep(5)\n", " return f\"Hello {name}!\"\n", "\n", "\n", "with gr.Blocks() as demo:\n", " inp = gr.Textbox()\n", " outp = gr.Textbox()\n", " button = gr.Button()\n", " button.click(say_hello, inp, outp)\n", "\n", " demo.queue(concurrency_count=41).launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/concurrency_without_queue/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: concurrency_without_queue"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import time\n", "\n", "\n", "def say_hello(name):\n", " time.sleep(5)\n", " return f\"Hello {name}!\"\n", "\n", "\n", "with gr.Blocks() as demo:\n", " inp = gr.Textbox()\n", " outp = gr.Textbox()\n", " button = gr.Button()\n", " button.click(say_hello, inp, outp)\n", "\n", " demo.launch(max_threads=41)\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/count_generator/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: count_generator"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import time\n", "\n", "def count(n):\n", " for i in range(int(n)):\n", " time.sleep(0.5)\n", " yield i\n", "\n", "def show(n):\n", " return str(list(range(int(n))))\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Column():\n", " num = gr.Number(value=10)\n", " with gr.Row():\n", " count_btn = gr.Button(\"Count\")\n", " list_btn = gr.Button(\"List\")\n", " with gr.Column():\n", " out = gr.Textbox()\n", " \n", " count_btn.click(count, num, out)\n", " list_btn.click(show, num, out)\n", " \n", "demo.queue()\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/dashboard/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: dashboard\n", "### This demo shows how you can build an interactive dashboard with gradio. Click on a python library on the left hand side and then on the right hand side click on the metric you'd like to see plot over time. Data is pulled from HuggingFace Hub datasets.\n", " "]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio plotly"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/dashboard/helpers.py"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import pandas as pd\n", "import plotly.express as px\n", "from helpers import *\n", "\n", "\n", "LIBRARIES = [\"accelerate\", \"datasets\", \"diffusers\", \"evaluate\", \"gradio\", \"hub_docs\",\n", " \"huggingface_hub\", \"optimum\", \"pytorch_image_models\", \"tokenizers\", \"transformers\"]\n", "\n", "\n", "def create_pip_plot(libraries, pip_choices):\n", " if \"Pip\" not in pip_choices:\n", " return gr.update(visible=False)\n", " output = retrieve_pip_installs(libraries, \"Cumulated\" in pip_choices)\n", " df = pd.DataFrame(output).melt(id_vars=\"day\")\n", " plot = px.line(df, x=\"day\", y=\"value\", color=\"variable\",\n", " title=\"Pip installs\")\n", " plot.update_layout(legend=dict(x=0.5, y=0.99), title_x=0.5, legend_title_text=\"\")\n", " return gr.update(value=plot, visible=True)\n", "\n", "\n", "def create_star_plot(libraries, star_choices):\n", " if \"Stars\" not in star_choices:\n", " return gr.update(visible=False)\n", " output = retrieve_stars(libraries, \"Week over Week\" in star_choices)\n", " df = pd.DataFrame(output).melt(id_vars=\"day\")\n", " plot = px.line(df, x=\"day\", y=\"value\", color=\"variable\",\n", " title=\"Number of stargazers\")\n", " plot.update_layout(legend=dict(x=0.5, y=0.99), title_x=0.5, legend_title_text=\"\")\n", " return gr.update(value=plot, visible=True)\n", "\n", "\n", "def create_issue_plot(libraries, issue_choices):\n", " if \"Issue\" not in issue_choices:\n", " return gr.update(visible=False)\n", " output = retrieve_issues(libraries,\n", " exclude_org_members=\"Exclude org members\" in issue_choices,\n", " week_over_week=\"Week over Week\" in issue_choices)\n", " df = pd.DataFrame(output).melt(id_vars=\"day\")\n", " plot = px.line(df, x=\"day\", y=\"value\", color=\"variable\",\n", " title=\"Cumulated number of issues, PRs, and comments\",\n", " )\n", " plot.update_layout(legend=dict(x=0.5, y=0.99), title_x=0.5, legend_title_text=\"\")\n", " return gr.update(value=plot, visible=True)\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " with gr.Box():\n", " gr.Markdown(\"## Select libraries to display\")\n", " libraries = gr.CheckboxGroup(choices=LIBRARIES, label=\"\")\n", " with gr.Column():\n", " with gr.Box():\n", " gr.Markdown(\"## Select graphs to display\")\n", " pip = gr.CheckboxGroup(choices=[\"Pip\", \"Cumulated\"], label=\"\")\n", " stars = gr.CheckboxGroup(choices=[\"Stars\", \"Week over Week\"], label=\"\")\n", " issues = gr.CheckboxGroup(choices=[\"Issue\", \"Exclude org members\", 
\"week over week\"], label=\"\")\n", " with gr.Row():\n", " fetch = gr.Button(value=\"Fetch\")\n", " with gr.Row():\n", " with gr.Column():\n", " pip_plot = gr.Plot(visible=False)\n", " star_plot = gr.Plot(visible=False)\n", " issue_plot = gr.Plot(visible=False)\n", "\n", " fetch.click(create_pip_plot, inputs=[libraries, pip], outputs=pip_plot)\n", " fetch.click(create_star_plot, inputs=[libraries, stars], outputs=star_plot)\n", " fetch.click(create_issue_plot, inputs=[libraries, issues], outputs=issue_plot)\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/dataframe_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: dataframe_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.Dataframe(interactive=True)\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/dataframe_datatype/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: dataframe_datatype"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import pandas as pd\n", "import numpy as np\n", "\n", "\n", "def make_dataframe(n_periods):\n", " return pd.DataFrame({\"date_1\": pd.date_range(\"2021-01-01\", periods=n_periods),\n", " \"date_2\": pd.date_range(\"2022-02-15\", periods=n_periods).strftime('%B %d, %Y, %r'),\n", " \"number\": np.random.random(n_periods).astype(np.float64),\n", " \"number_2\": np.random.randint(0, 100, n_periods).astype(np.int32),\n", " \"bool\": [True] * n_periods,\n", " \"markdown\": [\"# Hello\"] * n_periods})\n", "\n", "\n", "demo = gr.Interface(make_dataframe,\n", " gr.Number(precision=0),\n", " gr.Dataframe(datatype=[\"date\", \"date\", \"number\", \"number\", \"bool\", \"markdown\"]))\n", "\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/depth_estimation/run.ipynb
Normal file
File diff suppressed because one or more lines are too long
1
demo/diff_texts/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: diff_texts"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["from difflib import Differ\n", "\n", "import gradio as gr\n", "\n", "\n", "def diff_texts(text1, text2):\n", " d = Differ()\n", " return [\n", " (token[2:], token[0] if token[0] != \" \" else None)\n", " for token in d.compare(text1, text2)\n", " ]\n", "\n", "\n", "demo = gr.Interface(\n", " diff_texts,\n", " [\n", " gr.Textbox(\n", " label=\"Initial text\",\n", " lines=3,\n", " value=\"The quick brown fox jumped over the lazy dogs.\",\n", " ),\n", " gr.Textbox(\n", " label=\"Text to compare\",\n", " lines=3,\n", " value=\"The fast brown fox jumps over lazy dogs.\",\n", " ),\n", " ],\n", " gr.HighlightedText(\n", " label=\"Diff\",\n", " combine_adjacent=True,\n", " ).style(color_map={\"+\": \"red\", \"-\": \"green\"}),\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/digit_classifier/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: digit_classifier"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio tensorflow"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import os\n", "from urllib.request import urlretrieve\n", "\n", "import tensorflow as tf\n", "\n", "import gradio\n", "import gradio as gr\n", "\n", "urlretrieve(\n", " \"https://gr-models.s3-us-west-2.amazonaws.com/mnist-model.h5\", \"mnist-model.h5\"\n", ")\n", "model = tf.keras.models.load_model(\"mnist-model.h5\")\n", "\n", "\n", "def recognize_digit(image):\n", " image = image.reshape(1, -1)\n", " prediction = model.predict(image).tolist()[0]\n", " return {str(i): prediction[i] for i in range(10)}\n", "\n", "\n", "im = gradio.Image(shape=(28, 28), image_mode=\"L\", invert_colors=False, source=\"canvas\")\n", "\n", "demo = gr.Interface(\n", " recognize_digit,\n", " im,\n", " gradio.Label(num_top_classes=3),\n", " live=True,\n", " interpretation=\"default\",\n", " capture_session=True,\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/dropdown_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: dropdown_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.Dropdown(choices=[\"First Choice\", \"Second Choice\", \"Third Choice\"])\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/english_translator/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: english_translator"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio transformers torch"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "from transformers import pipeline\n", "\n", "pipe = pipeline(\"translation\", model=\"t5-base\")\n", "\n", "\n", "def translate(text):\n", " return pipe(text)[0][\"translation_text\"]\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " english = gr.Textbox(label=\"English text\")\n", " translate_btn = gr.Button(value=\"Translate\")\n", " with gr.Column():\n", " german = gr.Textbox(label=\"German Text\")\n", "\n", " translate_btn.click(translate, inputs=english, outputs=german, api_name=\"translate-to-german\")\n", " examples = gr.Examples(examples=[\"I went to the supermarket yesterday.\", \"Helen is a good swimmer.\"],\n", " inputs=[english])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/fake_diffusion/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: fake_diffusion\n", "### This demo uses a fake model to showcase iterative output. The Image output will update every time a generator is returned until the final image.\n", " "]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio numpy "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import numpy as np\n", "import time\n", "\n", "# define core fn, which returns a generator {steps} times before returning the image\n", "def fake_diffusion(steps):\n", " for _ in range(steps):\n", " time.sleep(1)\n", " image = np.random.random((600, 600, 3))\n", " yield image\n", "\n", " image = \"https://i.picsum.photos/id/867/600/600.jpg?hmac=qE7QFJwLmlE_WKI7zMH6SgH5iY5fx8ec6ZJQBwKRT44\" \n", " yield image\n", "\n", "demo = gr.Interface(fake_diffusion, \n", " inputs=gr.Slider(1, 10, 3), \n", " outputs=\"image\")\n", "\n", "# define queue - required for generators\n", "demo.queue()\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/fake_diffusion_with_gif/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: fake_diffusion_with_gif"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/fake_diffusion_with_gif/image.gif"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import numpy as np\n", "import time\n", "import os\n", "from PIL import Image\n", "import requests\n", "from io import BytesIO\n", "\n", "\n", "def create_gif(images):\n", " pil_images = []\n", " for image in images:\n", " if isinstance(image, str):\n", " response = requests.get(image)\n", " image = Image.open(BytesIO(response.content))\n", " else:\n", " image = Image.fromarray((image * 255).astype(np.uint8))\n", " pil_images.append(image)\n", " fp_out = os.path.join(os.path.abspath(''), \"image.gif\")\n", " img = pil_images.pop(0)\n", " img.save(fp=fp_out, format='GIF', append_images=pil_images,\n", " save_all=True, duration=400, loop=0)\n", " return fp_out\n", "\n", "\n", "def fake_diffusion(steps):\n", " images = []\n", " for _ in range(steps):\n", " time.sleep(1)\n", " image = np.random.random((600, 600, 3))\n", " images.append(image)\n", " yield image, gr.Image.update(visible=False)\n", " \n", " time.sleep(1)\n", " image = \"https://i.picsum.photos/id/867/600/600.jpg?hmac=qE7QFJwLmlE_WKI7zMH6SgH5iY5fx8ec6ZJQBwKRT44\" \n", " images.append(image)\n", " gif_path = create_gif(images)\n", " \n", " yield image, gr.Image.update(value=gif_path, visible=True)\n", "\n", "\n", "demo = gr.Interface(fake_diffusion, \n", " inputs=gr.Slider(1, 10, 3), \n", " outputs=[\"image\", gr.Image(label=\"All Images\", visible=False)])\n", "demo.queue()\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/fake_gan/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: fake_gan"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('files')\n", "!wget -q -O files/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/fake_gan/files/cheetah1.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["# This demo needs to be run from the repo folder.\n", "# python demo/fake_gan/run.py\n", "import os\n", "import random\n", "\n", "import gradio as gr\n", "\n", "\n", "def fake_gan():\n", " images = [\n", " (random.choice(\n", " [\n", " \"https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=387&q=80\",\n", " \"https://images.unsplash.com/photo-1554151228-14d9def656e4?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=386&q=80\",\n", " \"https://images.unsplash.com/photo-1542909168-82c3e7fdca5c?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8aHVtYW4lMjBmYWNlfGVufDB8fDB8fA%3D%3D&w=1000&q=80\",\n", " \"https://images.unsplash.com/photo-1546456073-92b9f0a8d413?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=387&q=80\",\n", " \"https://images.unsplash.com/photo-1601412436009-d964bd02edbc?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=464&q=80\",\n", " ]\n", " ), f\"label {i}\" if i != 0 else \"label\" * 50)\n", " for i in range(3)\n", " ]\n", " return images\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Column(variant=\"panel\"):\n", " with gr.Row(variant=\"compact\"):\n", " text = gr.Textbox(\n", " label=\"Enter your prompt\",\n", " show_label=False,\n", " max_lines=1,\n", " placeholder=\"Enter your prompt\",\n", " ).style(\n", " container=False,\n", " )\n", " btn = gr.Button(\"Generate image\").style(full_width=False)\n", "\n", " gallery = gr.Gallery(\n", " label=\"Generated images\", show_label=False, elem_id=\"gallery\"\n", " ).style(grid=[2], height=\"auto\")\n", "\n", " btn.click(fake_gan, None, gallery)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/fake_gan_2/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: fake_gan_2"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('files')\n", "!wget -q -O files/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/fake_gan_2/files/cheetah1.jpg\n", "!wget -q -O files/elephant.jpg https://github.com/gradio-app/gradio/raw/main/demo/fake_gan_2/files/elephant.jpg\n", "!wget -q -O files/tiger.jpg https://github.com/gradio-app/gradio/raw/main/demo/fake_gan_2/files/tiger.jpg\n", "!wget -q -O files/zebra.jpg https://github.com/gradio-app/gradio/raw/main/demo/fake_gan_2/files/zebra.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["# This demo needs to be run from the repo folder.\n", "# python demo/fake_gan/run.py\n", "import os\n", "import random\n", "import time\n", "\n", "import gradio as gr\n", "\n", "\n", "def fake_gan(desc):\n", " if desc == \"NSFW\":\n", " raise gr.Error(\"NSFW - banned content.\")\n", " if desc == \"error\":\n", " raise ValueError(\"error\")\n", " time.sleep(2)\n", " image = random.choice(\n", " [\n", " \"files/cheetah1.jpg\",\n", " \"files/elephant.jpg\",\n", " \"files/tiger.jpg\",\n", " \"files/zebra.jpg\",\n", " ]\n", " )\n", " return image\n", "\n", "\n", "demo = gr.Interface(\n", " fn=fake_gan,\n", " inputs=gr.Textbox(),\n", " outputs=gr.Image(label=\"Generated Image\"),\n", " title=\"FD-GAN\",\n", " description=\"This is a fake demo of a GAN. In reality, the images are randomly chosen from Unsplash.\",\n", ")\n", "demo.queue(max_size=3)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/fake_gan_no_input/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: fake_gan_no_input"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# This demo needs to be run from the repo folder.\n", "# python demo/fake_gan/run.py\n", "import random\n", "import time\n", "\n", "import gradio as gr\n", "\n", "\n", "def fake_gan():\n", " time.sleep(1)\n", " image = random.choice(\n", " [\n", " \"https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=387&q=80\",\n", " \"https://images.unsplash.com/photo-1554151228-14d9def656e4?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=386&q=80\",\n", " \"https://images.unsplash.com/photo-1542909168-82c3e7fdca5c?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8aHVtYW4lMjBmYWNlfGVufDB8fDB8fA%3D%3D&w=1000&q=80\",\n", " \"https://images.unsplash.com/photo-1546456073-92b9f0a8d413?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=387&q=80\",\n", " \"https://images.unsplash.com/photo-1601412436009-d964bd02edbc?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=464&q=80\",\n", " ]\n", " )\n", " return image\n", "\n", "\n", "demo = gr.Interface(\n", " fn=fake_gan,\n", " inputs=None,\n", " outputs=gr.Image(label=\"Generated Image\"),\n", " title=\"FD-GAN\",\n", " description=\"This is a fake demo of a GAN. In reality, the images are randomly chosen from Unsplash.\",\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/file_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: file_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.File()\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/filter_records/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: filter_records"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "\n", "def filter_records(records, gender):\n", " return records[records[\"gender\"] == gender]\n", "\n", "\n", "demo = gr.Interface(\n", " filter_records,\n", " [\n", " gr.Dataframe(\n", " headers=[\"name\", \"age\", \"gender\"],\n", " datatype=[\"str\", \"number\", \"str\"],\n", " row_count=5,\n", " col_count=(3, \"fixed\"),\n", " ),\n", " gr.Dropdown([\"M\", \"F\", \"O\"]),\n", " ],\n", " \"dataframe\",\n", " description=\"Enter gender as 'M', 'F', or 'O' for other.\",\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/fraud_detector/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: fraud_detector"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio pandas"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/fraud_detector/fraud.csv"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import random\n", "import os\n", "import gradio as gr\n", "\n", "\n", "def fraud_detector(card_activity, categories, sensitivity):\n", " activity_range = random.randint(0, 100)\n", " drop_columns = [\n", " column for column in [\"retail\", \"food\", \"other\"] if column not in categories\n", " ]\n", " if len(drop_columns):\n", " card_activity.drop(columns=drop_columns, inplace=True)\n", " return (\n", " card_activity,\n", " card_activity,\n", " {\"fraud\": activity_range / 100.0, \"not fraud\": 1 - activity_range / 100.0},\n", " )\n", "\n", "\n", "demo = gr.Interface(\n", " fraud_detector,\n", " [\n", " gr.Timeseries(x=\"time\", y=[\"retail\", \"food\", \"other\"]),\n", " gr.CheckboxGroup(\n", " [\"retail\", \"food\", \"other\"], value=[\"retail\", \"food\", \"other\"]\n", " ),\n", " gr.Slider(1, 3),\n", " ],\n", " [\n", " \"dataframe\",\n", " gr.Timeseries(x=\"time\", y=[\"retail\", \"food\", \"other\"]),\n", " gr.Label(label=\"Fraud Level\"),\n", " ],\n", " examples=[\n", " [os.path.join(os.path.abspath(''), \"fraud.csv\"), [\"retail\", \"food\", \"other\"], 1.0],\n", " ],\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/gallery_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: gallery_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " cheetahs = [\n", " \"https://upload.wikimedia.org/wikipedia/commons/0/09/TheCheethcat.jpg\",\n", " \"https://nationalzoo.si.edu/sites/default/files/animals/cheetah-003.jpg\",\n", " \"https://img.etimg.com/thumb/msid-50159822,width-650,imgsize-129520,,resizemode-4,quality-100/.jpg\",\n", " \"https://nationalzoo.si.edu/sites/default/files/animals/cheetah-002.jpg\",\n", " \"https://images.theconversation.com/files/375893/original/file-20201218-13-a8h8uq.jpg?ixlib=rb-1.1.0&rect=16%2C407%2C5515%2C2924&q=45&auto=format&w=496&fit=clip\",\n", " \"https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQeSdQE5kHykTdB970YGSW3AsF6MHHZzY4QiQ&usqp=CAU\",\n", " \"https://www.lifegate.com/app/uploads/ghepardo-primo-piano.jpg\",\n", " \"https://i.natgeofe.com/n/60004bcc-cd85-4401-8bfa-6f96551557db/cheetah-extinction-3_3x4.jpg\",\n", " \"https://qph.cf2.quoracdn.net/main-qimg-0bbf31c18a22178cb7a8dd53640a3d05-lq\"\n", " ]\n", " gr.Gallery(value=cheetahs)\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/gender_sentence_custom_interpretation/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: gender_sentence_custom_interpretation"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import re\n", "\n", "import gradio as gr\n", "\n", "male_words, female_words = [\"he\", \"his\", \"him\"], [\"she\", \"hers\", \"her\"]\n", "\n", "\n", "def gender_of_sentence(sentence):\n", " male_count = len([word for word in sentence.split() if word.lower() in male_words])\n", " female_count = len(\n", " [word for word in sentence.split() if word.lower() in female_words]\n", " )\n", " total = max(male_count + female_count, 1)\n", " return {\"male\": male_count / total, \"female\": female_count / total}\n", "\n", "\n", "# Number of arguments to interpretation function must\n", "# match number of inputs to prediction function\n", "def interpret_gender(sentence):\n", " result = gender_of_sentence(sentence)\n", " is_male = result[\"male\"] > result[\"female\"]\n", " interpretation = []\n", " for word in re.split(\"( )\", sentence):\n", " score = 0\n", " token = word.lower()\n", " if (is_male and token in male_words) or (not is_male and token in female_words):\n", " score = 1\n", " elif (is_male and token in female_words) or (\n", " not is_male and token in male_words\n", " ):\n", " score = -1\n", " interpretation.append((word, score))\n", " # Output must be a list of lists containing the same number of elements as inputs\n", " # Each element corresponds to the interpretation scores for the given input\n", " return [interpretation]\n", "\n", "\n", "demo = gr.Interface(\n", " fn=gender_of_sentence,\n", " inputs=gr.Textbox(value=\"She went to his house to get her keys.\"),\n", " outputs=\"label\",\n", " interpretation=interpret_gender,\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/gender_sentence_default_interpretation/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: gender_sentence_default_interpretation"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "male_words, female_words = [\"he\", \"his\", \"him\"], [\"she\", \"hers\", \"her\"]\n", "\n", "\n", "def gender_of_sentence(sentence):\n", " male_count = len([word for word in sentence.split() if word.lower() in male_words])\n", " female_count = len(\n", " [word for word in sentence.split() if word.lower() in female_words]\n", " )\n", " total = max(male_count + female_count, 1)\n", " return {\"male\": male_count / total, \"female\": female_count / total}\n", "\n", "\n", "demo = gr.Interface(\n", " fn=gender_of_sentence,\n", " inputs=gr.Textbox(value=\"She went to his house to get her keys.\"),\n", " outputs=\"label\",\n", " interpretation=\"default\",\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/generate_english_german/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: generate_english_german"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio transformers torch"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "from transformers import pipeline\n", "\n", "english_translator = gr.Blocks.load(name=\"spaces/gradio/english_translator\")\n", "english_generator = pipeline(\"text-generation\", model=\"distilgpt2\")\n", "\n", "\n", "def generate_text(text):\n", " english_text = english_generator(text)[0][\"generated_text\"]\n", " german_text = english_translator(english_text)\n", " return english_text, german_text\n", "\n", "\n", "with gr.Blocks() as demo:\n", " with gr.Row():\n", " with gr.Column():\n", " seed = gr.Text(label=\"Input Phrase\")\n", " with gr.Column():\n", " english = gr.Text(label=\"Generated English Text\")\n", " german = gr.Text(label=\"Generated German Text\")\n", " btn = gr.Button(\"Generate\")\n", " btn.click(generate_text, inputs=[seed], outputs=[english, german])\n", " gr.Examples([\"My name is Clara and I am\"], inputs=[seed])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
75
demo/generate_notebooks.py
Normal file
@ -0,0 +1,75 @@
import nbformat as nbf
import os
import json
import random

GRADIO_DEMO_DIR = os.getcwd()
DEMOS_TO_SKIP = {"all_demos", "reset_components", "custom_path", "kitchen_sink_random"}

demos = os.listdir(GRADIO_DEMO_DIR)
demos = [demo for demo in demos if demo not in DEMOS_TO_SKIP and os.path.isdir(os.path.join(GRADIO_DEMO_DIR, demo)) and os.path.exists(os.path.join(GRADIO_DEMO_DIR, demo, "run.py"))]

for demo in demos:
    nb = nbf.v4.new_notebook()
    text = f"# Gradio Demo: {demo}"

    if os.path.exists(os.path.join(GRADIO_DEMO_DIR, demo, "DESCRIPTION.md")):
        with open(os.path.join(GRADIO_DEMO_DIR, demo, "DESCRIPTION.md"), "r") as f:
            description = f.read()
        text += f"""\n### {description}
    """

    files = os.listdir(os.path.join(GRADIO_DEMO_DIR, demo))
    skip = ["run.py", "run.ipynb", "setup.sh", ".gitignore", "requirements.txt", "DESCRIPTION.md", "screenshot.png", "screenshot.gif", ".DS_Store", "flagged", "__pycache__"]
    files = [file for file in files if file not in skip]
    files.sort()
    if files:
        get_files = "# Downloading files from the demo repo\nimport os"
        for file in files:
            if os.path.isdir(os.path.join(GRADIO_DEMO_DIR, demo, file)):
                get_files += f"\nos.mkdir('{file}')"
                sub_files = os.listdir(os.path.join(GRADIO_DEMO_DIR, demo, file))
                sub_files = [sub for sub in sub_files if sub not in skip]
                sub_files.sort()
                for sub_file in sub_files:
                    get_files += f"\n!wget -q -O {file}/{sub_file} https://github.com/gradio-app/gradio/raw/main/demo/{demo}/{file}/{sub_file}"
            else:
                get_files += f"\n!wget -q https://github.com/gradio-app/gradio/raw/main/demo/{demo}/{file}"

    requirements = ""
    if os.path.exists(os.path.join(GRADIO_DEMO_DIR, demo, "requirements.txt")):
        with open(os.path.join(GRADIO_DEMO_DIR, demo, "requirements.txt"), "r") as f:
            requirements = f.read().split("\n")
        requirements = " ".join(requirements)

    installs = f"!pip install -q gradio {requirements}"

    with open(os.path.join(GRADIO_DEMO_DIR, demo, "run.py"), "r") as f:
        code = f.read()
    code = code.replace("os.path.dirname(__file__)", "os.path.abspath('')")

    if files:
        nb['cells'] = [nbf.v4.new_markdown_cell(text),
                       nbf.v4.new_code_cell(installs),
                       nbf.v4.new_code_cell(get_files),
                       nbf.v4.new_code_cell(code)]
    else:
        nb['cells'] = [nbf.v4.new_markdown_cell(text),
                       nbf.v4.new_code_cell(installs),
                       nbf.v4.new_code_cell(code)]

    output_notebook = os.path.join(GRADIO_DEMO_DIR, demo, "run.ipynb")

    with open(output_notebook, 'w') as f:
        nbf.write(nb, f)

    with open(output_notebook, "r") as f:
        content = f.read()

    content = json.loads(content)
    for i, cell in enumerate(content["cells"]):
        random.seed(i)
        cell["id"] = random.getrandbits(128)

    with open(output_notebook, "w") as f:
        f.write(json.dumps(content))
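One detail worth noting about the id-regeneration step above: because the loop reseeds random with the cell index before calling getrandbits(128), the generated cell ids are deterministic, which is why the same ids (302934307671667531413257853548643485645, 272996653310673477252411125948039410165, and so on) appear in every run.ipynb added in this diff. A minimal standalone sketch of that behaviour (not part of the committed script):

import random

# Reseeding with the cell index makes the ids deterministic, so regenerating
# a notebook reuses the same ids for each cell position.
for i in range(3):
    random.seed(i)
    print(random.getrandbits(128))
# The printed values should match the cell ids seen in the notebooks above.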
1
demo/generate_tone/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: generate_tone"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio numpy"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import numpy as np\n", "import gradio as gr\n", "\n", "notes = [\"C\", \"C#\", \"D\", \"D#\", \"E\", \"F\", \"F#\", \"G\", \"G#\", \"A\", \"A#\", \"B\"]\n", "\n", "def generate_tone(note, octave, duration):\n", " sr = 48000\n", " a4_freq, tones_from_a4 = 440, 12 * (octave - 4) + (note - 9)\n", " frequency = a4_freq * 2 ** (tones_from_a4 / 12)\n", " duration = int(duration)\n", " audio = np.linspace(0, duration, duration * sr)\n", " audio = (20000 * np.sin(audio * (2 * np.pi * frequency))).astype(np.int16)\n", " return sr, audio\n", "\n", "demo = gr.Interface(\n", " generate_tone,\n", " [\n", " gr.Dropdown(notes, type=\"index\"),\n", " gr.Slider(4, 6, step=1),\n", " gr.Textbox(value=1, label=\"Duration in seconds\"),\n", " ],\n", " \"audio\",\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/gif_maker/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: gif_maker"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio opencv-python"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import cv2\n", "import gradio as gr\n", "\n", "def gif_maker(img_files):\n", " img_array = []\n", " for filename in img_files:\n", " img = cv2.imread(filename.name)\n", " height, width, _ = img.shape\n", " size = (width,height)\n", " img_array.append(img)\n", " output_file = \"test.mp4\"\n", " out = cv2.VideoWriter(output_file,cv2.VideoWriter_fourcc(*'h264'), 15, size) \n", " for i in range(len(img_array)):\n", " out.write(img_array[i])\n", " out.release()\n", " return output_file\n", "\n", "demo = gr.Interface(gif_maker, inputs=gr.File(file_count=\"multiple\"), outputs=gr.Video())\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/gpt_j/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: gpt_j"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "title = \"GPT-J-6B\"\n", "\n", "examples = [\n", " [\"The tower is 324 metres (1,063 ft) tall,\"],\n", " [\"The Moon's orbit around Earth has\"],\n", " [\"The smooth Borealis basin in the Northern Hemisphere covers 40%\"],\n", "]\n", "\n", "demo = gr.Interface.load(\n", " \"huggingface/EleutherAI/gpt-j-6B\",\n", " inputs=gr.Textbox(lines=5, max_lines=6, label=\"Input Text\"),\n", " title=title,\n", " examples=examples,\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/gpt_j_unified/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: gpt_j_unified"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "component = gr.Textbox(lines=5, label=\"Text\")\n", "api = gr.Interface.load(\"huggingface/EleutherAI/gpt-j-6B\")\n", "\n", "demo = gr.Interface(\n", " fn=lambda x: x[:-50] + api(x[-50:]),\n", " inputs=component,\n", " outputs=component,\n", " title=\"GPT-J-6B\",\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/hangman/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: hangman"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import random\n", "\n", "secret_word = \"gradio\"\n", "\n", "with gr.Blocks() as demo: \n", " used_letters_var = gr.State([])\n", " with gr.Row() as row:\n", " with gr.Column():\n", " input_letter = gr.Textbox(label=\"Enter letter\")\n", " btn = gr.Button(\"Guess Letter\")\n", " with gr.Column():\n", " hangman = gr.Textbox(\n", " label=\"Hangman\",\n", " value=\"_\"*len(secret_word)\n", " )\n", " used_letters_box = gr.Textbox(label=\"Used Letters\")\n", "\n", " def guess_letter(letter, used_letters):\n", " used_letters.append(letter)\n", " answer = \"\".join([\n", " (letter if letter in used_letters else \"_\")\n", " for letter in secret_word\n", " ])\n", " return {\n", " used_letters_var: used_letters,\n", " used_letters_box: \", \".join(used_letters),\n", " hangman: answer\n", " }\n", " btn.click(\n", " guess_letter, \n", " [input_letter, used_letters_var],\n", " [used_letters_var, used_letters_box, hangman]\n", " )\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/hello_blocks/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: hello_blocks"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def greet(name):\n", " return \"Hello \" + name + \"!\"\n", "\n", "with gr.Blocks() as demo:\n", " name = gr.Textbox(label=\"Name\")\n", " output = gr.Textbox(label=\"Output Box\")\n", " greet_btn = gr.Button(\"Greet\")\n", " greet_btn.click(fn=greet, inputs=name, outputs=output)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/hello_login/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: hello_login"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "user_db = {\"admin\": \"admin\", \"foo\": \"bar\"}\n", "\n", "\n", "def greet(name):\n", " return \"Hello \" + name + \"!!\"\n", "\n", "\n", "demo = gr.Interface(fn=greet, inputs=\"text\", outputs=\"text\")\n", "if __name__ == \"__main__\":\n", " demo.launch(enable_queue=False,\n", " auth=lambda u, p: user_db.get(u) == p,\n", " auth_message=\"This is a welcome message\")\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/hello_world/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: hello_world\n", "### The simplest possible Gradio demo. It wraps a 'Hello {name}!' function in an Interface that accepts and returns text.\n", " "]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def greet(name):\n", " return \"Hello \" + name + \"!\"\n", "\n", "demo = gr.Interface(fn=greet, inputs=\"text\", outputs=\"text\")\n", " \n", "demo.launch() "]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/hello_world_2/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: hello_world_2"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def greet(name):\n", " return \"Hello \" + name + \"!\"\n", "\n", "demo = gr.Interface(\n", " fn=greet,\n", " inputs=gr.Textbox(lines=2, placeholder=\"Name Here...\"),\n", " outputs=\"text\",\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/hello_world_3/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: hello_world_3"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "\n", "def greet(name, is_morning, temperature):\n", " salutation = \"Good morning\" if is_morning else \"Good evening\"\n", " greeting = f\"{salutation} {name}. It is {temperature} degrees today\"\n", " celsius = (temperature - 32) * 5 / 9\n", " return greeting, round(celsius, 2)\n", "\n", "demo = gr.Interface(\n", " fn=greet,\n", " inputs=[\"text\", \"checkbox\", gr.Slider(0, 100)],\n", " outputs=[\"text\", \"number\"],\n", ")\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/highlightedtext_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: highlightedtext_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.HighlightedText(value=[(\"Text\",\"Label 1\"),(\"to be\",\"Label 2\"),(\"highlighted\",\"Label 3\")])\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/html_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: html_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.HTML(value=\"<p style='margin-top: 1rem, margin-bottom: 1rem'>Gradio Docs Readers: <img src='https://visitor-badge.glitch.me/badge?page_id=gradio-docs-visitor-badge' alt='visitor badge' style='display: inline-block'/></p>\")\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/image_classification/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: image_classification\n", "### Simple image classification in Pytorch with Gradio's Image input and Label output.\n", " "]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio torch torchvision"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/image_classification/cheetah.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import torch\n", "import requests\n", "from torchvision import transforms\n", "\n", "model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()\n", "response = requests.get(\"https://git.io/JJkYN\")\n", "labels = response.text.split(\"\\n\")\n", "\n", "def predict(inp):\n", " inp = transforms.ToTensor()(inp).unsqueeze(0)\n", " with torch.no_grad():\n", " prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)\n", " confidences = {labels[i]: float(prediction[i]) for i in range(1000)} \n", " return confidences\n", "\n", "demo = gr.Interface(fn=predict, \n", " inputs=gr.inputs.Image(type=\"pil\"),\n", " outputs=gr.outputs.Label(num_top_classes=3),\n", " examples=[[\"cheetah.jpg\"]],\n", " )\n", " \n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/image_classifier/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: image_classifier"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio numpy tensorflow"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('files')\n", "!wget -q -O files/imagenet_labels.json https://github.com/gradio-app/gradio/raw/main/demo/image_classifier/files/imagenet_labels.json\n", "os.mkdir('images')\n", "!wget -q -O images/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_classifier/images/cheetah1.jpg\n", "!wget -q -O images/lion.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_classifier/images/lion.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import os\n", "import requests\n", "import tensorflow as tf\n", "\n", "import gradio as gr\n", "\n", "inception_net = tf.keras.applications.MobileNetV2() # load the model\n", "\n", "# Download human-readable labels for ImageNet.\n", "response = requests.get(\"https://git.io/JJkYN\")\n", "labels = response.text.split(\"\\n\")\n", "\n", "\n", "def classify_image(inp):\n", " inp = inp.reshape((-1, 224, 224, 3))\n", " inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)\n", " prediction = inception_net.predict(inp).flatten()\n", " return {labels[i]: float(prediction[i]) for i in range(1000)}\n", "\n", "\n", "image = gr.Image(shape=(224, 224))\n", "label = gr.Label(num_top_classes=3)\n", "\n", "demo = gr.Interface(\n", " fn=classify_image,\n", " inputs=image,\n", " outputs=label,\n", " examples=[\n", " os.path.join(os.path.abspath(''), \"images/cheetah1.jpg\"),\n", " os.path.join(os.path.abspath(''), \"images/lion.jpg\")\n", " ]\n", " )\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n", "\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/image_classifier_2/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: image_classifier_2"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio pillow torch torchvision"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('files')\n", "!wget -q -O files/imagenet_labels.json https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_2/files/imagenet_labels.json"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import requests\n", "import torch\n", "from PIL import Image\n", "from torchvision import transforms\n", "\n", "import gradio as gr\n", "\n", "model = torch.hub.load(\"pytorch/vision:v0.6.0\", \"resnet18\", pretrained=True).eval()\n", "\n", "# Download human-readable labels for ImageNet.\n", "response = requests.get(\"https://git.io/JJkYN\")\n", "labels = response.text.split(\"\\n\")\n", "\n", "\n", "def predict(inp):\n", " inp = Image.fromarray(inp.astype(\"uint8\"), \"RGB\")\n", " inp = transforms.ToTensor()(inp).unsqueeze(0)\n", " with torch.no_grad():\n", " prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)\n", " return {labels[i]: float(prediction[i]) for i in range(1000)}\n", "\n", "\n", "inputs = gr.Image()\n", "outputs = gr.Label(num_top_classes=3)\n", "\n", "demo = gr.Interface(fn=predict, inputs=inputs, outputs=outputs)\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/image_classifier_interface_load/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: image_classifier_interface_load"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interface_load/cheetah1.jpeg\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interface_load/cheetah1.jpg\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interface_load/lion.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import pathlib\n", "\n", "current_dir = pathlib.Path(__file__).parent\n", "\n", "images = [str(current_dir / \"cheetah1.jpeg\"), str(current_dir / \"cheetah1.jpg\"), str(current_dir / \"lion.jpg\")]\n", "\n", "\n", "img_classifier = gr.Interface.load(\n", " \"models/google/vit-base-patch16-224\", examples=images, cache_examples=False\n", ")\n", "\n", "\n", "def func(img, text):\n", " return img_classifier(img), text\n", "\n", "\n", "using_img_classifier_as_function = gr.Interface(\n", " func,\n", " [gr.Image(type=\"filepath\"), \"text\"],\n", " [\"label\", \"text\"],\n", " examples=[\n", " [str(current_dir / \"cheetah1.jpeg\"), None],\n", " [str(current_dir / \"cheetah1.jpg\"), \"cheetah\"],\n", " [str(current_dir / \"lion.jpg\"), \"lion\"],\n", " ],\n", " cache_examples=False,\n", ")\n", "demo = gr.TabbedInterface([using_img_classifier_as_function, img_classifier])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/image_classifier_interpretation/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: image_classifier_interpretation"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio numpy tensorflow"]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('files')\n", "!wget -q -O files/imagenet_labels.json https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interpretation/files/imagenet_labels.json\n", "os.mkdir('images')\n", "!wget -q -O images/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interpretation/images/cheetah1.jpg\n", "!wget -q -O images/lion.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interpretation/images/lion.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import requests\n", "import tensorflow as tf\n", "\n", "import gradio as gr\n", "\n", "inception_net = tf.keras.applications.MobileNetV2() # load the model\n", "\n", "# Download human-readable labels for ImageNet.\n", "response = requests.get(\"https://git.io/JJkYN\")\n", "labels = response.text.split(\"\\n\")\n", "\n", "\n", "def classify_image(inp):\n", " inp = inp.reshape((-1, 224, 224, 3))\n", " inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)\n", " prediction = inception_net.predict(inp).flatten()\n", " return {labels[i]: float(prediction[i]) for i in range(1000)}\n", "\n", "\n", "image = gr.Image(shape=(224, 224))\n", "label = gr.Label(num_top_classes=3)\n", "\n", "demo = gr.Interface(\n", " fn=classify_image, inputs=image, outputs=label, interpretation=\"default\"\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/image_component/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: image_component"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["import gradio as gr \n", "\n", "with gr.Blocks() as demo:\n", " gr.Image()\n", "\n", "demo.launch()"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
1
demo/image_mod/run.ipynb
Normal file
@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: image_mod"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "os.mkdir('images')\n", "!wget -q -O images/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_mod/images/cheetah1.jpg\n", "!wget -q -O images/lion.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_mod/images/lion.jpg\n", "!wget -q -O images/logo.png https://github.com/gradio-app/gradio/raw/main/demo/image_mod/images/logo.png"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import os\n", "\n", "\n", "def image_mod(image):\n", " return image.rotate(45)\n", "\n", "\n", "demo = gr.Interface(image_mod, gr.Image(type=\"pil\"), \"image\",\n", " flagging_options=[\"blurry\", \"incorrect\", \"other\"], examples=[\n", " os.path.join(os.path.abspath(''), \"images/cheetah1.jpg\"),\n", " os.path.join(os.path.abspath(''), \"images/lion.jpg\"),\n", " os.path.join(os.path.abspath(''), \"images/logo.png\")\n", " ])\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
Some files were not shown because too many files have changed in this diff.