generate docs json in ci, reimplement main vs release (#5092)

* fixup site

* fix docs versions

* test ci

* test ci some more

* test ci some more

* test ci some more

* asd

* asd

* asd

* asd

* asd

* asd

* asd

* asd

* asd

* test

* fix

* add changeset

* fix

* fix

* fix

* test ci

* test ci

* test ci

* test ci

* test ci

* test ci

* test ci

* test ci

* test ci

* notebook ci

* notebook ci

* more ci

* more ci

* update changeset

* update changeset

* update changeset

* fix site

* fix

* fix

* fix

* fix

* fix ci

* render missing pages

* remove changeset

* fix path

* fix workflows

* fix workflows

* fix workflows

* fix comment

* tweaks

* tweaks

---------

Co-authored-by: gradio-pr-bot <gradio-pr-bot@users.noreply.github.com>
pngwn 2023-08-11 15:54:56 +01:00 committed by GitHub
parent 3c00f0fbfb
commit 643442e1a5
374 changed files with 2868 additions and 10326 deletions

View File

@ -0,0 +1,5 @@
---
"website": minor
---
feat:generate docs json in ci, reimplement main vs release

View File

@ -9,11 +9,10 @@
**/.svelte-kit/**
**/demo/**
**/gradio/**
**/website/**
**/.pnpm-store/**
**/.venv/**
**/.github/**
**/guides/**
/guides/**
**/.mypy_cache/**
!test-strategy.md
**/js/_space-test/**
@ -22,4 +21,5 @@
**/gradio_cached_examples/**
**/storybook-static/**
**/.vscode/**
sweep.yaml
sweep.yaml
**/.vercel/**

View File

@ -3,5 +3,5 @@
"singleQuote": false,
"trailingComma": "none",
"printWidth": 80,
"plugins": ["prettier-plugin-svelte", "prettier-plugin-css-order"]
"plugins": ["prettier-plugin-svelte"]
}

View File

@ -53,3 +53,8 @@ runs:
node_auth_token: ${{ inputs.node_auth_token }}
npm_token: ${{ inputs.npm_token }}
skip_build: ${{ inputs.skip_build }}
- name: generate json
shell: bash
run: |
. venv/bin/activate
python js/_website/generate_jsons/generate.py

View File

@ -40,7 +40,7 @@ runs:
- name: Install deps
if: steps.frontend-cache.outputs.cache-hit != 'true' || inputs.always-install-pnpm == 'true'
shell: bash
run: pnpm i --frozen-lockfile
run: pnpm i --frozen-lockfile --ignore-scripts
- name: Build Css
if: inputs.always-install-pnpm == 'true'
shell: bash

View File

@ -90,7 +90,7 @@ jobs:
- name: Build frontend
if: steps.frontend-cache.outputs.cache-hit != 'true'
run: |
pnpm i --frozen-lockfile
pnpm i --frozen-lockfile --ignore-scripts
pnpm build
- name: Install Test Requirements (Linux)
if: runner.os == 'Linux'
@ -180,7 +180,7 @@ jobs:
- name: Build frontend
if: steps.frontend-cache.outputs.cache-hit != 'true'
run: |
pnpm i --frozen-lockfile
pnpm i --frozen-lockfile --ignore-scripts
pnpm build
- name: Install Gradio and Client Libraries Locally (Linux)
if: runner.os == 'Linux'

View File

@ -6,6 +6,13 @@ on:
- main
jobs:
comment-spaces-start:
uses: "./.github/workflows/comment-queue.yml"
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ github.event.pull_request.number }}
message: spaces~pending~null
build_pr:
runs-on: ubuntu-latest
steps:
@ -26,14 +33,15 @@ jobs:
- name: Install pip
run: python -m pip install build requests
- name: Get PR Number
id: get_pr_number
run: |
python -c "import os;print(os.environ['GITHUB_REF'].split('/')[2])" > pr_number.txt
echo "PR_NUMBER=$(cat pr_number.txt)" >> $GITHUB_ENV
echo "GRADIO_VERSION=$(python -c 'import requests;print(requests.get("https://pypi.org/pypi/gradio/json").json()["info"]["version"])')" >> $GITHUB_ENV
echo "PR_NUMBER=$(cat pr_number.txt)" >> $GITHUB_OUTPUT
echo "GRADIO_VERSION=$(python -c 'import requests;print(requests.get("https://pypi.org/pypi/gradio/json").json()["info"]["version"])')" >> $GITHUB_OUTPUT
- name: Build pr package
run: |
echo ${{ env.GRADIO_VERSION }} > gradio/version.txt
pnpm i --frozen-lockfile
echo ${{ steps.get_pr_number.outputs.GRADIO_VERSION }} > gradio/version.txt
pnpm i --frozen-lockfile --ignore-scripts
pnpm build
python3 -m build -w
env:
@ -41,11 +49,11 @@ jobs:
- name: Upload wheel
uses: actions/upload-artifact@v3
with:
name: gradio-${{ env.GRADIO_VERSION }}-py3-none-any.whl
path: dist/gradio-${{ env.GRADIO_VERSION }}-py3-none-any.whl
name: gradio-${{ steps.get_pr_number.outputs.GRADIO_VERSION }}-py3-none-any.whl
path: dist/gradio-${{ steps.get_pr_number.outputs.GRADIO_VERSION }}-py3-none-any.whl
- name: Set up Demos
run: |
python scripts/copy_demos.py https://gradio-builds.s3.amazonaws.com/${{ github.sha }}/gradio-${{ env.GRADIO_VERSION }}-py3-none-any.whl \
python scripts/copy_demos.py https://gradio-builds.s3.amazonaws.com/${{ github.sha }}/gradio-${{ steps.get_pr_number.outputs.GRADIO_VERSION }}-py3-none-any.whl \
"gradio-client @ git+https://github.com/gradio-app/gradio@${{ github.sha }}#subdirectory=client/python"
- name: Upload all_demos
uses: actions/upload-artifact@v3
@ -54,7 +62,7 @@ jobs:
path: demo/all_demos
- name: Create metadata artifact
run: |
python -c "import json; json.dump({'gh_sha': '${{ github.sha }}', 'pr_number': ${{ env.PR_NUMBER }}, 'version': '${{ env.GRADIO_VERSION }}', 'wheel': 'gradio-${{ env.GRADIO_VERSION }}-py3-none-any.whl'}, open('metadata.json', 'w'))"
python -c "import json; json.dump({'gh_sha': '${{ github.sha }}', 'pr_number': ${{ steps.get_pr_number.outputs.pr_number }}, 'version': '${{ steps.get_pr_number.outputs.GRADIO_VERSION }}', 'wheel': 'gradio-${{ steps.get_pr_number.outputs.GRADIO_VERSION }}-py3-none-any.whl'}, open('metadata.json', 'w'))"
- name: Upload metadata
uses: actions/upload-artifact@v3
with:

View File

@ -9,6 +9,13 @@ on:
- 'demo/**'
jobs:
comment-notebook-start:
uses: "./.github/workflows/comment-queue.yml"
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ github.event.pull_request.number }}
message: notebooks~pending~null
check-notebooks:
name: Generate Notebooks and Check
runs-on: ubuntu-latest

View File

@ -1,44 +0,0 @@
on:
workflow_run:
workflows: [Check Demos Match Notebooks]
types: [completed]
jobs:
comment-on-failure:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: Install pip
run: python -m pip install requests
- name: Download metadata
run: python scripts/download_artifacts.py ${{github.event.workflow_run.id }} metadata.json ${{ secrets.COMMENT_TOKEN }} --owner ${{ github.repository_owner }}
- run: unzip metadata.json.zip
- name: Pipe metadata to env
run: echo "pr_number=$(python -c 'import json; print(json.load(open("metadata.json"))["pr_number"])')" >> $GITHUB_ENV
- name: Comment On Notebook check fail
if: ${{ github.event.workflow_run.conclusion == 'failure' && github.event.workflow_run.name == 'Check Demos Match Notebooks'}}
uses: thollander/actions-comment-pull-request@v2
with:
message: |
The demo notebooks don't match the run.py files. Please run this command from the root of the repo and then commit the changes:
```bash
pip install nbformat && cd demo && python generate_notebooks.py
```
comment_includes: The demo notebooks don't match the run.py files
GITHUB_TOKEN: ${{ secrets.COMMENT_TOKEN }}
pr_number: ${{ env.pr_number }}
comment_tag: notebook-check
- name: Comment On Notebook check fail
if: ${{ github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.name == 'Check Demos Match Notebooks'}}
uses: thollander/actions-comment-pull-request@v2
with:
message: |
🎉 The demo notebooks match the run.py files! 🎉
comment_includes: The demo notebooks match the run.py files!
GITHUB_TOKEN: ${{ secrets.COMMENT_TOKEN }}
pr_number: ${{ env.pr_number }}
comment_tag: notebook-check

39
.github/workflows/comment-queue.yml vendored Normal file
View File

@ -0,0 +1,39 @@
name: Comment on pull request without race conditions
on:
workflow_call:
inputs:
pr_number:
type: string
message:
required: true
type: string
tag:
required: false
type: string
default: "previews"
additional_text:
required: false
type: string
default: ""
secrets:
gh_token:
required: true
concurrency:
group: 1
jobs:
comment:
concurrency:
group: ${{inputs.pr_number || inputs.tag}}
runs-on: ubuntu-latest
steps:
- name: comment on pr
uses: "gradio-app/github/actions/comment-pr@main"
with:
gh_token: ${{ secrets.gh_token }}
tag: ${{ inputs.tag }}
pr_number: ${{ inputs.pr_number}}
message: ${{ inputs.message }}
additional_text: ${{ inputs.additional_text }}

View File

@ -12,27 +12,33 @@ on:
jobs:
get-current-pr:
runs-on: ubuntu-latest
steps:
- uses: 8BitJonny/gh-get-current-pr@2.2.0
id: get-pr
outputs:
pr_found: ${{ steps.get-pr.outputs.pr_found }}
pr_number: ${{ steps.get-pr.outputs.number }}
pr_labels: ${{ steps.get-pr.outputs.pr_labels }}
steps:
- uses: 8BitJonny/gh-get-current-pr@2.2.0
id: get-pr
comment-chromatic-start:
uses: "./.github/workflows/comment-queue.yml"
needs: get-current-pr
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.get-current-pr.outputs.pr_number }}
message: |
storybook~pending~null
visual~pending~0~0~null
chromatic-deployment:
needs: get-current-pr
runs-on: ubuntu-latest
outputs:
changes: ${{ steps.publish-chromatic.outputs.changeCount }}
errors: ${{ steps.publish-chromatic.outputs.errorCount }}
storybook_url: ${{ steps.publish-chromatic.outputs.storybookUrl }}
build_url: ${{ steps.publish-chromatic.outputs.buildUrl }}
if: ${{ github.repository == 'gradio-app/gradio' && !contains(needs.get-current-pr.outputs.pr_labels, 'no-visual-update') }}
steps:
- name: post pending deployment comment to PR
if: ${{ needs.get-current-pr.outputs.pr_found }} == 'true'
uses: thollander/actions-comment-pull-request@v2
with:
message: |
Chromatic build pending :hourglass:
comment_tag: chromatic-build
GITHUB_TOKEN: ${{ secrets.COMMENT_TOKEN }}
pr_number: ${{ needs.get-current-pr.outputs.pr_number }}
- uses: actions/checkout@v3
with:
fetch-depth: 0
@ -54,17 +60,24 @@ jobs:
projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
token: ${{ secrets.GITHUB_TOKEN }}
exitOnceUploaded: true
- name: post deployment link to PR
if: ${{ needs.get-current-pr.outputs.pr_found }} == 'true'
uses: thollander/actions-comment-pull-request@v2
with:
message: |
:tada: Chromatic build completed!
There are ${{ steps.publish-chromatic.outputs.changeCount }} visual changes to review.
There are ${{ steps.publish-chromatic.outputs.errorCount }} failed tests to fix.
* [Storybook Preview](${{ steps.publish-chromatic.outputs.storybookUrl }})
* [Build Review](${{ steps.publish-chromatic.outputs.buildUrl }})
GITHUB_TOKEN: ${{ secrets.COMMENT_TOKEN }}
comment_tag: chromatic-build
pr_number: ${{ needs.get-current-pr.outputs.pr_number }}
comment-chromatic-end:
uses: "./.github/workflows/comment-queue.yml"
needs: [chromatic-deployment, get-current-pr]
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.get-current-pr.outputs.pr_number }}
message: |
storybook~success~${{ needs.chromatic-deployment.outputs.storybook_url }}
visual~success~${{ needs.chromatic-deployment.outputs.changes }}~${{ needs.chromatic-deployment.outputs.errors }}~${{ needs.chromatic-deployment.outputs.build_url }}
comment-chromatic-fail:
uses: "./.github/workflows/comment-queue.yml"
needs: [chromatic-deployment, get-current-pr]
if: always() && needs.chromatic-deployment.result == 'failure'
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.get-current-pr.outputs.pr_number }}
message: |
storybook~failure~https://github.com/${{github.action_repository}}/actions/runs/${{github.run_id}}/
visual~failure~0~0~https://github.com/${{github.action_repository}}/actions/runs/${{github.run_id}}/

View File

@ -8,6 +8,11 @@ on:
jobs:
deploy-current-pr:
outputs:
pr_number: ${{ steps.set-outputs.outputs.pr_number }}
space_url: ${{ steps.upload-demo.outputs.SPACE_URL }}
sha: ${{ steps.set-outputs.outputs.gh_sha }}
gradio_version: ${{ steps.set-outputs.outputs.gradio_version }}
runs-on: ubuntu-latest
if: >
github.event.workflow_run.event == 'pull_request' &&
@ -23,21 +28,22 @@ jobs:
- name: Download metadata
run: python scripts/download_artifacts.py ${{github.event.workflow_run.id }} metadata.json ${{ secrets.COMMENT_TOKEN }} --owner ${{ github.repository_owner }}
- run: unzip metadata.json.zip
- name: Pipe metadata to env
- name: set outputs
id: set-outputs
run: |
echo "wheel_name=$(python -c 'import json; print(json.load(open("metadata.json"))["wheel"])')" >> $GITHUB_ENV
echo "gh_sha=$(python -c 'import json; print(json.load(open("metadata.json"))["gh_sha"])')" >> $GITHUB_ENV
echo "gradio_version=$(python -c 'import json; print(json.load(open("metadata.json"))["version"])')" >> $GITHUB_ENV
echo "pr_number=$(python -c 'import json; print(json.load(open("metadata.json"))["pr_number"])')" >> $GITHUB_ENV
echo "wheel_name=$(python -c 'import json; print(json.load(open("metadata.json"))["wheel"])')" >> $GITHUB_OUTPUT
echo "gh_sha=$(python -c 'import json; print(json.load(open("metadata.json"))["gh_sha"])')" >> $GITHUB_OUTPUT
echo "gradio_version=$(python -c 'import json; print(json.load(open("metadata.json"))["version"])')" >> $GITHUB_OUTPUT
echo "pr_number=$(python -c 'import json; print(json.load(open("metadata.json"))["pr_number"])')" >> $GITHUB_OUTPUT
- name: 'Download wheel'
run: python scripts/download_artifacts.py ${{ github.event.workflow_run.id }} ${{ env.wheel_name }} ${{ secrets.COMMENT_TOKEN }} --owner ${{ github.repository_owner }}
- run: unzip ${{ env.wheel_name }}.zip
run: python scripts/download_artifacts.py ${{ github.event.workflow_run.id }} ${{ steps.set-outputs.outputs.wheel_name }} ${{ secrets.COMMENT_TOKEN }} --owner ${{ github.repository_owner }}
- run: unzip ${{ steps.set-outputs.outputs.wheel_name }}.zip
- name: Upload wheel
run: |
export AWS_ACCESS_KEY_ID=${{ secrets.PR_DEPLOY_KEY }}
export AWS_SECRET_ACCESS_KEY=${{ secrets.PR_DEPLOY_SECRET }}
export AWS_DEFAULT_REGION=us-east-1
aws s3 cp ${{ env.wheel_name }} s3://gradio-builds/${{ env.gh_sha }}/
aws s3 cp ${{ steps.set-outputs.outputs.wheel_name }} s3://gradio-builds/${{ steps.set-outputs.outputs.gh_sha }}/
- name: Install Hub Client Library
run: pip install huggingface-hub
- name: 'Download all_demos'
@ -45,29 +51,41 @@ jobs:
- run: unzip all_demos.zip -d all_demos
- run: cp -R all_demos/* demo/all_demos
- name: Upload demo to spaces
id: upload-demo
run: |
python scripts/upload_demo_to_space.py all_demos \
gradio-pr-deploys/pr-${{ env.pr_number }}-all-demos \
gradio-pr-deploys/pr-${{ steps.set-outputs.outputs.pr_number }}-all-demos \
${{ secrets.SPACES_DEPLOY_TOKEN }} \
--gradio-version ${{ env.gradio_version }} > url.txt
echo "SPACE_URL=$(cat url.txt)" >> $GITHUB_ENV
- name: Comment On Release PR
uses: thollander/actions-comment-pull-request@v2
with:
message: |
All the demos for this PR have been deployed at ${{ env.SPACE_URL }}
--gradio-version ${{ steps.set-outputs.outputs.gradio_version }} > url.txt
echo "SPACE_URL=$(cat url.txt)" >> $GITHUB_OUTPUT
comment-spaces-success:
uses: "./.github/workflows/comment-queue.yml"
needs: [deploy-current-pr]
if: >
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
needs.deploy-current-pr.result == 'success'
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.deploy-current-pr.outputs.pr_number }}
message: spaces~success~${{ needs.deploy-current-pr.outputs.space_url }}
additional_text: |
**Install Gradio from this PR**
```bash
pip install https://gradio-builds.s3.amazonaws.com/${{ needs.deploy-current-pr.outputs.sha }}/gradio-${{ needs.deploy-current-pr.outputs.gradio_version }}-py3-none-any.whl
```
---
### Install Gradio from this PR:
```bash
pip install https://gradio-builds.s3.amazonaws.com/${{ env.gh_sha }}/gradio-${{ env.gradio_version }}-py3-none-any.whl
```
---
### Install Gradio Python Client from this PR
```bash
pip install "gradio-client @ git+https://github.com/gradio-app/gradio@${{ github.sha }}#subdirectory=client/python"
```
comment_tag: All the demos for this PR have been deployed at
GITHUB_TOKEN: ${{ secrets.COMMENT_TOKEN }}
pr_number: ${{ env.pr_number }}
**Install Gradio Python Client from this PR**
```bash
pip install "gradio-client @ git+https://github.com/gradio-app/gradio@${{ needs.deploy-current-pr.outputs.sha }}#subdirectory=client/python"
```
comment-spaces-failure:
uses: "./.github/workflows/comment-queue.yml"
needs: [deploy-current-pr]
if: always() && needs.deploy-current-pr.result == 'failure'
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.deploy-current-pr.outputs.pr_number }}
message: spaces~failure~https://github.com/${{github.action_repository}}/actions/runs/${{github.run_id}}/

109
.github/workflows/deploy-website.yml vendored Normal file
View File

@ -0,0 +1,109 @@
name: "deploy website"
on:
workflow_call:
inputs:
branch_name:
description: "The branch name"
type: string
pr_number:
description: "The PR number"
type: string
secrets:
vercel_token:
description: "Vercel API token"
gh_token:
description: "Github token"
required: true
vercel_org_id:
description: "Vercel organization ID"
required: true
vercel_project_id:
description: "Vercel project ID"
required: true
env:
VERCEL_ORG_ID: ${{ secrets.vercel_org_id }}
VERCEL_PROJECT_ID: ${{ secrets.vercel_project_id }}
jobs:
comment-deploy-start:
uses: "./.github/workflows/comment-queue.yml"
secrets:
gh_token: ${{ secrets.gh_token }}
with:
pr_number: ${{ inputs.pr_number }}
message: website~pending~null
deploy:
name: "Deploy website"
runs-on: ubuntu-latest
outputs:
vercel_url: ${{ steps.output_url.outputs.vercel_url }}
steps:
- uses: actions/checkout@v3
- name: install dependencies
uses: "./.github/actions/install-frontend-deps"
with:
always-install-pnpm: true
skip_build: true
- name: download artifacts
uses: actions/download-artifact@v2
with:
name: website-json-${{ inputs.pr_number }}
path: |
./js/_website/src/lib/json
- name: echo artifact path
shell: bash
run: ls ./js/_website/src/lib/json
- name: Install Vercel CLI
shell: bash
run: pnpm install --global vercel@latest
# preview
- name: Pull Vercel Environment Information
shell: bash
if: github.event_name == 'pull_request'
run: vercel pull --yes --environment=preview --token=${{ secrets.vercel_token }}
- name: Build Project Artifacts
if: github.event_name == 'pull_request'
shell: bash
run: vercel build --token=${{ secrets.vercel_token }}
- name: Deploy Project Artifacts to Vercel
if: github.event_name == 'pull_request'
id: output_url
shell: bash
run: echo "vercel_url=$(vercel deploy --prebuilt --token=${{ secrets.vercel_token }})" >> $GITHUB_OUTPUT
# production
- name: Pull Vercel Environment Information
if: github.event_name == 'push' && inputs.branch_name == 'main'
shell: bash
run: vercel pull --yes --environment=production --token=${{ secrets.vercel_token }}
- name: Build Project Artifacts
if: github.event_name == 'push' && inputs.branch_name == 'main'
shell: bash
run: vercel build --prod --token=${{ secrets.vercel_token }}
- name: Deploy Project Artifacts to Vercel
if: github.event_name == 'push' && inputs.branch_name == 'main'
shell: bash
run: echo "deploying production"
# run: echo "VERCEL_URL=$(vercel deploy --prebuilt --prod --token=${{ inputs.vercel_token }})" >> $GITHUB_ENV
- name: echo vercel url
shell: bash
run: echo $VERCEL_URL #add to comment
comment-deploy-success:
uses: "./.github/workflows/comment-queue.yml"
needs: deploy
if: needs.deploy.result == 'success'
secrets:
gh_token: ${{ secrets.gh_token }}
with:
pr_number: ${{ inputs.pr_number }}
message: website~success~${{needs.deploy.outputs.vercel_url}}
comment-deploy-failure:
uses: "./.github/workflows/comment-queue.yml"
needs: deploy
if: always() && needs.deploy.result == 'failure'
secrets:
gh_token: ${{ secrets.gh_token }}
with:
pr_number: ${{ inputs.pr_number }}
message: website~failure~https://github.com/${{github.action_repository}}/actions/runs/${{github.run_id}}/

View File

@ -13,27 +13,74 @@ concurrency:
group: ${{ github.event.workflow_run.head_repository.full_name }}::${{ github.event.workflow_run.head_branch }}
jobs:
get-pr:
runs-on: ubuntu-latest
outputs:
found_pr: ${{ steps.pr_details.outputs.found_pr }}
pr_number: ${{ steps.pr_details.outputs.pr_number }}
source_repo: ${{ steps.pr_details.outputs.source_repo }}
source_branch: ${{ steps.pr_details.outputs.source_branch }}
steps:
- name: get pr details
id: pr_details
uses: gradio-app/github/actions/find-pr@main
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
comment-chromatic-start:
uses: "./.github/workflows/comment-queue.yml"
needs: get-pr
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.get-pr.outputs.pr_number }}
message: changes~pending~null
version:
permissions: write-all
name: static checks
needs: get-pr
runs-on: ubuntu-22.04
if: (github.event.workflow_run.head_repository.full_name == 'gradio-app/gradio' && github.event.workflow_run.head_branch != 'main') || github.event.workflow_run.head_repository.full_name != 'gradio-app/gradio'
if: needs.get-pr.outputs.found_pr == 'true'
outputs:
skipped: ${{ steps.version.outputs.skipper }}
comment_url: ${{ steps.version.outputs.comment_url }}
steps:
- id: 'get-pr'
uses: "gradio-app/github/actions/get-pr-branch@main"
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: get pr number
run: echo "PR number is ${{ steps.get-pr.outputs.pr_number }}"
- uses: actions/checkout@v3
with:
repository: ${{ github.event.workflow_run.head_repository.full_name }}
ref: ${{ github.event.workflow_run.head_branch }}
repository: ${{ needs.get-pr.outputs.source_repo }}
ref: ${{ needs.get-pr.outputs.source_branch }}
fetch-depth: 0
token: ${{ secrets.COMMENT_TOKEN }}
- name: generate changeset
id: version
uses: "gradio-app/github/actions/generate-changeset@main"
with:
github_token: ${{ secrets.COMMENT_TOKEN }}
main_pkg: gradio
pr_number: ${{ steps.get-pr.outputs.pr_number }}
pr_number: ${{ needs.get-pr.outputs.pr_number }}
comment-changes-skipper:
uses: "./.github/workflows/comment-queue.yml"
needs: [get-pr, version]
if: needs.version.result == 'success' && needs.version.outputs.skipped == 'true'
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.get-pr.outputs.pr_number }}
message: changes~warning~https://github.com/${{github.action_repository}}/actions/runs/${{github.run_id}}/
comment-changes-success:
uses: "./.github/workflows/comment-queue.yml"
needs: [get-pr, version]
if: needs.version.result == 'success' && needs.version.outputs.skipped == 'false'
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.get-pr.outputs.pr_number }}
message: changes~success~${{ needs.version.outputs.comment_url }}
comment-changes-failure:
uses: "./.github/workflows/comment-queue.yml"
needs: [get-pr, version]
if: always() && needs.version.result == 'failure'
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
with:
pr_number: ${{ needs.get-pr.outputs.pr_number }}
message: changes~failure~https://github.com/${{github.action_repository}}/actions/runs/${{github.run_id}}/

View File

@ -0,0 +1,47 @@
on:
workflow_run:
workflows: [Check Demos Match Notebooks]
types: [completed]
jobs:
get-pr-number:
runs-on: ubuntu-latest
outputs:
pr_number: ${{ steps.pr_number.outputs.pr_number }}
steps:
- uses: actions/checkout@v3
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: Install pip
run: python -m pip install requests
- name: Download metadata
run: python scripts/download_artifacts.py ${{github.event.workflow_run.id }} metadata.json ${{ secrets.COMMENT_TOKEN }} --owner ${{ github.repository_owner }}
- run: unzip metadata.json.zip
- name: Pipe metadata to env
id: pr_number
run: echo "pr_number=$(python -c 'import json; print(json.load(open("metadata.json"))["pr_number"])')" >> $GITHUB_OUTPUT
comment-success:
uses: "./.github/workflows/comment-queue.yml"
if: ${{ github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.name == 'Check Demos Match Notebooks'}}
needs: get-pr-number
secrets:
gh_token: ${{ secrets.GITHUB_TOKEN }}
with:
pr_number: ${{ needs.get-pr-number.outputs.pr_number }}
message: notebooks~success~null
comment-failure:
uses: "./.github/workflows/comment-queue.yml"
if: ${{ github.event.workflow_run.conclusion == 'failure' && github.event.workflow_run.name == 'Check Demos Match Notebooks'}}
needs: get-pr-number
secrets:
gh_token: ${{ secrets.GITHUB_TOKEN }}
with:
pr_number: ${{ needs.get-pr-number.outputs.pr_number }}
message: notebooks~failure~https://github.com/${{github.action_repository}}/actions/runs/${{github.run_id}}/
additional_text: |
The demo notebooks don't match the run.py files. Please run this command from the root of the repo and then commit the changes:
```bash
pip install nbformat && cd demo && python generate_notebooks.py
```

View File

@ -10,7 +10,8 @@ env:
CI: true
PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD: "1"
NODE_OPTIONS: "--max-old-space-size=4096"
VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
concurrency:
group: deploy-${{ github.ref }}-${{ github.event_name == 'push' || github.event.inputs.fire != null }}
cancel-in-progress: true
@ -37,12 +38,29 @@ jobs:
run: pnpm test:run
functional-test:
runs-on: ubuntu-latest
outputs:
source_branch: ${{ steps.pr_details.outputs.source_branch }}
pr_number: ${{ steps.pr_details.outputs.pr_number }}
steps:
- uses: actions/checkout@v3
- name: install dependencies
id: install_deps
uses: "./.github/actions/install-all-deps"
with:
always-install-pnpm: true
- name: get pr details
id: pr_details
uses: gradio-app/github/actions/find-pr@main
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: deploy json to aws
if: steps.pr_details.outputs.source_branch == 'changeset-release/main'
run: |
export AWS_ACCESS_KEY_ID=${{ secrets.AWSACCESSKEYID }}
export AWS_SECRET_ACCESS_KEY=${{ secrets.AWSSECRETKEY }}
export AWS_DEFAULT_REGION=us-west-2
version=$(sed -nr 's/{ "version": "([0-9\.]+)" }/\1/p' ./js/_website/src/lib/json/version.json)
aws s3 cp ./js/_website/src/lib/json/ s3://gradio-docs-json/$version/ --recursive
- name: install outbreak_forecast dependencies
run: |
. venv/bin/activate
@ -63,3 +81,21 @@ jobs:
run: |
. venv/bin/activate
pnpm run test:ct
- name: save artifacts
uses: actions/upload-artifact@v2
with:
name: website-json-${{ steps.pr_details.outputs.pr_number }}
path: |
./js/_website/src/lib/json
deploy_to_vercel:
uses: "./.github/workflows/deploy-website.yml"
needs: functional-test
if: always()
secrets:
gh_token: ${{ secrets.COMMENT_TOKEN }}
vercel_token: ${{ secrets.VERCEL_TOKEN }}
vercel_org_id: ${{ secrets.VERCEL_ORG_ID }}
vercel_project_id: ${{ secrets.VERCEL_PROJECT_ID }}
with:
branch_name: ${{ needs.functional-test.outputs.source_branch }}
pr_number: ${{ needs.functional-test.outputs.pr_number }}

View File

@ -207,7 +207,7 @@ export function api_factory(fetch_implementation: typeof fetch): Client {
const chunkSize = 1000;
const uploadResponses = [];
for (let i = 0; i < files.length; i += chunkSize) {
const chunk = files.slice(i, i + chunkSize);
const chunk = files.slice(i, i + chunkSize);
const formData = new FormData();
chunk.forEach((file) => {
formData.append("files", file);
@ -222,7 +222,7 @@ export function api_factory(fetch_implementation: typeof fetch): Client {
return { error: BROKEN_CONNECTION_MSG };
}
const output: UploadResponse["files"] = await response.json();
uploadResponses.push(...output);
uploadResponses.push(...output);
}
return { files: uploadResponses };
}

View File

@ -3,6 +3,7 @@ declare global {
__gradio_mode__: "app" | "website";
gradio_config: Config;
__is_colab__: boolean;
__gradio_space__: string | null;
}
}

View File

@ -1,31 +0,0 @@
"""Writes the config file for any of the demos to an output file
Usage: python write_config.py <demo_name> <output_file>
Example: python write_config.py calculator output.json
Assumes:
- The demo_name is a folder in this directory
- The demo_name folder contains a run.py file
- The run.py file defines a Gradio Interface/Blocks instance called `demo`
"""
from __future__ import annotations
import argparse
import importlib
import json
import gradio as gr
parser = argparse.ArgumentParser()
parser.add_argument("demo_name", help="the name of the demo whose config to write")
parser.add_argument("file_path", help="the path at which to write the config file")
args = parser.parse_args()
# import the run.py file from inside the directory specified by args.demo_name
run = importlib.import_module(f"{args.demo_name}.run")
demo: gr.Blocks = run.demo
config = demo.get_config_file()
json.dump(config, open(args.file_path, "w"), indent=2)

1
globals.d.ts vendored
View File

@ -1,6 +1,7 @@
declare global {
interface Window {
__gradio_mode__: "app" | "website";
__gradio_space__: string | null;
launchGradio: Function;
launchGradioFromSpaces: Function;
gradio_config: Config;

View File

@ -4,7 +4,7 @@
## What Does Gradio Do?
One of the *best ways to share* your machine learning model, API, or data science workflow with others is to create an **interactive app** that allows your users or colleagues to try out the demo in their browsers.
One of the _best ways to share_ your machine learning model, API, or data science workflow with others is to create an **interactive app** that allows your users or colleagues to try out the demo in their browsers.
Gradio allows you to **build demos and share them, all in Python.** And usually in just a few lines of code! So let's get started.
@ -89,8 +89,8 @@ You can read more about the many components and how to use them in the [Gradio d
Gradio includes a high-level class, `gr.ChatInterface`, which is similar to `gr.Interface`, but is specifically designed for chatbot UIs. The `gr.ChatInterface` class also wraps a function but this function must have a specific signature. The function should take two arguments: `message` and then `history` (the arguments can be named anything, but must be in this order)
* `message`: a `str` representing the user's input
* `history`: a `list` of `list` representing the conversations up until that point. Each inner list consists of two `str` representing a pair: `[user input, bot response]`.
- `message`: a `str` representing the user's input
- `history`: a `list` of `list` representing the conversations up until that point. Each inner list consists of two `str` representing a pair: `[user input, bot response]`.
Your function should return a single string response, which is the bot's response to the particular user input `message`.
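
As a rough illustration of that signature, here is a minimal echo-style sketch (the `respond` function name and its behaviour are invented for this example, not taken from the guide):

```python
import gradio as gr

def respond(message, history):
    # `message` is the user's latest input as a string.
    # `history` is a list of [user_message, bot_response] pairs from earlier turns.
    return f"You said: {message} (after {len(history)} earlier exchanges)"

demo = gr.ChatInterface(respond)

if __name__ == "__main__":
    demo.launch()
```

Passing the function to `gr.ChatInterface` is all that is needed to get the chatbot UI.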

View File

@ -16,7 +16,7 @@ Let's go through some of the most popular features of Gradio! Here are Gradio's
## Example Inputs
You can provide example data that a user can easily load into `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs#components).
You can provide example data that a user can easily load into `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs#components).
$code_calculator
$demo_calculator
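
For a rough sense of how `examples=` lines up with the input components, a simplified sketch (not the full calculator demo pulled in by the placeholders above; the component choices are illustrative):

```python
import gradio as gr

def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    return num1 - num2

demo = gr.Interface(
    calculator,
    [gr.Number(), gr.Radio(["add", "subtract"]), gr.Number()],
    "number",
    # Outer list = one entry per sample; inner list = one value per input component.
    examples=[[5, "add", 3], [10, "subtract", 4]],
)
```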
@ -27,9 +27,9 @@ Continue learning about examples in the [More On Examples](https://gradio.app/mo
## Alerts
You wish to pass custom error messages to the user. To do so, raise a `gr.Error("custom message")` to display an error message. If you try to divide by zero in the calculator demo above, a popup modal will display the custom error message. Learn more about Error in the [docs](https://gradio.app/docs#error).
You may wish to pass custom error messages to the user. To do so, raise a `gr.Error("custom message")` to display an error message. If you try to divide by zero in the calculator demo above, a popup modal will display the custom error message. Learn more about Error in the [docs](https://gradio.app/docs#error).
You can also issue `gr.Warning("message")` and `gr.Info("message")` by having them as standalone lines in your function, which will immediately display modals while continuing the execution of your function. Queueing needs to be enabled for this to work.
You can also issue `gr.Warning("message")` and `gr.Info("message")` by having them as standalone lines in your function, which will immediately display modals while continuing the execution of your function. Queueing needs to be enabled for this to work.
Note below how the `gr.Error` has to be raised, while the `gr.Warning` and `gr.Info` are single lines.
@ -42,16 +42,16 @@ def start_process(name):
if success == False:
raise gr.Error("Process failed")
```
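
A minimal sketch putting `gr.Info`, `gr.Warning`, and `gr.Error` together (the `start_process` logic here is invented for illustration):

```python
import gradio as gr

def start_process(name):
    gr.Info("Starting process")            # info modal; execution continues
    if not name:
        gr.Warning("No name provided, using a default")  # warning modal; execution continues
        name = "default"
    if name == "fail":
        raise gr.Error("Process failed")   # error modal; execution stops here
    return f"Processed {name}"

demo = gr.Interface(start_process, "text", "text")
demo.queue()  # queueing must be enabled for the Warning/Info modals to appear
```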
## Descriptive Content
In the previous example, you may have noticed the `title=` and `description=` keyword arguments in the `Interface` constructor that helps users understand your app.
There are three arguments in the `Interface` constructor to specify where this content should go:
* `title`: which accepts text and can display it at the very top of interface, and also becomes the page title.
* `description`: which accepts text, markdown or HTML and places it right under the title.
* `article`: which also accepts text, markdown or HTML and places it below the interface.
- `title`: which accepts text and can display it at the very top of interface, and also becomes the page title.
- `description`: which accepts text, markdown or HTML and places it right under the title.
- `article`: which also accepts text, markdown or HTML and places it below the interface.
![annotated](https://github.com/gradio-app/gradio/blob/main/guides/assets/annotated.png?raw=true)
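
A short, hypothetical sketch of how these three arguments might be passed (the names and text are illustrative, not from the guide):

```python
import gradio as gr

def greet(name):
    return f"Hello {name}!"

demo = gr.Interface(
    fn=greet,
    inputs="text",
    outputs="text",
    title="Greeter",  # shown at the very top and used as the page title
    description="Enter a name to receive a greeting.",  # rendered under the title
    article="Made with [Gradio](https://gradio.app).",  # rendered below the interface
)

if __name__ == "__main__":
    demo.launch()
```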
@ -65,7 +65,7 @@ gr.Number(label='Age', info='In years, must be greater than 0')
## Flagging
By default, an `Interface` will have "Flag" button. When a user testing your `Interface` sees input with interesting output, such as erroneous or unexpected model behaviour, they can flag the input for you to review. Within the directory provided by the `flagging_dir=` argument to the `Interface` constructor, a CSV file will log the flagged inputs. If the interface involves file data, such as for Image and Audio components, folders will be created to store those flagged data as well.
By default, an `Interface` will have a "Flag" button. When a user testing your `Interface` sees input with interesting output, such as erroneous or unexpected model behaviour, they can flag the input for you to review. Within the directory provided by the `flagging_dir=` argument to the `Interface` constructor, a CSV file will log the flagged inputs. If the interface involves file data, such as for Image and Audio components, folders will be created to store that flagged data as well.
For example, with the calculator interface shown above, we would have the flagged data stored in the flagged directory shown below:
@ -75,7 +75,7 @@ For example, with the calculator interface shown above, we would have the flagge
| +-- logs.csv
```
*flagged/logs.csv*
_flagged/logs.csv_
```csv
num1,operation,num2,Output
@ -97,7 +97,7 @@ With the sepia interface shown earlier, we would have the flagged data stored in
| | +-- 1.png
```
*flagged/logs.csv*
_flagged/logs.csv_
```csv
im,Output
@ -113,11 +113,11 @@ If you wish for the user to provide a reason for flagging, you can pass a list o
As you've seen, Gradio includes components that can handle a variety of different data types, such as images, audio, and video. Most components can be used both as inputs or outputs.
When a component is used as an input, Gradio automatically handles the *preprocessing* needed to convert the data from a type sent by the user's browser (such as a base64 representation of a webcam snapshot) to a form that can be accepted by your function (such as a `numpy` array).
When a component is used as an input, Gradio automatically handles the _preprocessing_ needed to convert the data from a type sent by the user's browser (such as a base64 representation of a webcam snapshot) to a form that can be accepted by your function (such as a `numpy` array).
Similarly, when a component is used as an output, Gradio automatically handles the *postprocessing* needed to convert the data from what is returned by your function (such as a list of image paths) to a form that can be displayed in the user's browser (such as a `Gallery` of images in base64 format).
Similarly, when a component is used as an output, Gradio automatically handles the _postprocessing_ needed to convert the data from what is returned by your function (such as a list of image paths) to a form that can be displayed in the user's browser (such as a `Gallery` of images in base64 format).
You can control the *preprocessing* using the parameters when constructing the image component. For example, here if you instantiate the `Image` component with the following parameters, it will convert the image to the `PIL` type and reshape it to be `(100, 100)` no matter the original size that it was submitted as:
You can control the _preprocessing_ using the parameters when constructing the image component. For example, here if you instantiate the `Image` component with the following parameters, it will convert the image to the `PIL` type and reshape it to be `(100, 100)` no matter the original size that it was submitted as:
```py
img = gr.Image(shape=(100, 100), type="pil")
@ -229,11 +229,11 @@ Gradio supports the ability to create a custom Progress Bars so that you have cu
$code_progress_simple
$demo_progress_simple
If you use the `tqdm` library, you can even report progress updates automatically from any `tqdm.tqdm` that already exists within your function by setting the default argument as `gr.Progress(track_tqdm=True)`!
If you use the `tqdm` library, you can even report progress updates automatically from any `tqdm.tqdm` that already exists within your function by setting the default argument as `gr.Progress(track_tqdm=True)`!
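
A minimal sketch of `track_tqdm` in action, assuming the `tqdm` package is installed (the function name and sleep timings are illustrative):

```python
import time
import gradio as gr
from tqdm import tqdm

def slow_reverse(text, progress=gr.Progress(track_tqdm=True)):
    # The tqdm loop below is mirrored automatically in the Gradio progress bar.
    for _ in tqdm(range(10), desc="reversing"):
        time.sleep(0.1)
    return text[::-1]

demo = gr.Interface(slow_reverse, "text", "text")
demo.queue()  # progress updates rely on queueing being enabled
```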
## Batch Functions
Gradio supports the ability to pass *batch* functions. Batch functions are just
Gradio supports the ability to pass _batch_ functions. Batch functions are just
functions which take in a list of inputs and return a list of predictions.
For example, here is a batched function that takes in two lists of inputs (a list of
@ -246,12 +246,12 @@ def trim_words(words, lens):
trimmed_words = []
time.sleep(5)
for w, l in zip(words, lens):
trimmed_words.append(w[:int(l)])
trimmed_words.append(w[:int(l)])
return [trimmed_words]
```
The advantage of using batched functions is that if you enable queuing, the Gradio
server can automatically *batch* incoming requests and process them in parallel,
server can automatically _batch_ incoming requests and process them in parallel,
potentially speeding up your demo. Here's what the Gradio code looks like (notice
the `batch=True` and `max_batch_size=16` -- both of these parameters can be passed
into event triggers or into the `Interface` class)
@ -259,7 +259,7 @@ into event triggers or into the `Interface` class)
With `Interface`:
```python
demo = gr.Interface(trim_words, ["textbox", "number"], ["output"],
demo = gr.Interface(trim_words, ["textbox", "number"], ["output"],
batch=True, max_batch_size=16)
demo.queue()
demo.launch()
@ -292,8 +292,6 @@ generate images in batches](https://github.com/gradio-app/gradio/blob/main/demo/
Note: using batch functions with Gradio **requires** you to enable queuing in the underlying Interface or Blocks (see the queuing section above).
## Colab Notebooks
Gradio is able to run anywhere you run Python, including local jupyter notebooks as well as collaborative notebooks, such as [Google Colab](https://colab.research.google.com/). In the case of local jupyter notebooks and Google Colab notbooks, Gradio runs on a local server which you can interact with in your browser. (Note: for Google Colab, this is accomplished by [service worker tunneling](https://github.com/tensorflow/tensorboard/blob/master/docs/design/colab_integration.md), which requires cookies to be enabled in your browser.) For other remote notebooks, Gradio will also run on a server, but you will need to use [SSH tunneling](https://coderwall.com/p/ohk6cg/remote-access-to-ipython-notebooks-via-ssh) to view the app in your local browser. Often a simpler options is to use Gradio's built-in public links, [discussed in the next Guide](https://gradio.app/guides/sharing-your-app/#sharing-demos).
Gradio is able to run anywhere you run Python, including local Jupyter notebooks as well as collaborative notebooks, such as [Google Colab](https://colab.research.google.com/). In the case of local Jupyter notebooks and Google Colab notebooks, Gradio runs on a local server which you can interact with in your browser. (Note: for Google Colab, this is accomplished by [service worker tunneling](https://github.com/tensorflow/tensorboard/blob/master/docs/design/colab_integration.md), which requires cookies to be enabled in your browser.) For other remote notebooks, Gradio will also run on a server, but you will need to use [SSH tunneling](https://coderwall.com/p/ohk6cg/remote-access-to-ipython-notebooks-via-ssh) to view the app in your local browser. Often a simpler option is to use Gradio's built-in public links, [discussed in the next Guide](https://gradio.app/guides/sharing-your-app/#sharing-demos).

View File

@ -1,6 +1,6 @@
# Sharing Your App
How to share your Gradio app:
How to share your Gradio app:
1. [Sharing demos with the share parameter](#sharing-demos)
2. [Hosting on HF Spaces](#hosting-on-hf-spaces)
@ -20,9 +20,9 @@ Gradio demos can be easily shared publicly by setting `share=True` in the `launc
demo.launch(share=True)
```
This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on!), you don't have to worry about any packaging any dependencies. A share link usually looks something like this: **XXXXX.gradio.app**. Although the link is served through a Gradio URL, we are only a proxy for your local server, and do not store any data sent through your app.
This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on!), you don't have to worry about packaging any dependencies. A share link usually looks something like this: **XXXXX.gradio.app**. Although the link is served through a Gradio URL, we are only a proxy for your local server, and do not store any data sent through your app.
Keep in mind, however, that these links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. If you set `share=False` (the default, except in colab notebooks), only a local link is created, which can be shared by [port-forwarding](https://www.ssh.com/ssh/tunneling/example) with specific users.
Keep in mind, however, that these links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. If you set `share=False` (the default, except in colab notebooks), only a local link is created, which can be shared by [port-forwarding](https://www.ssh.com/ssh/tunneling/example) with specific users.
![sharing](https://github.com/gradio-app/gradio/blob/main/guides/assets/sharing.svg?raw=true)
@ -30,7 +30,7 @@ Share links expire after 72 hours.
## Hosting on HF Spaces
If you'd like to have a permanent link to your Gradio demo on the internet, use Hugging Face Spaces. [Hugging Face Spaces](http://huggingface.co/spaces/) provides the infrastructure to permanently host your machine learning model for free!
If you'd like to have a permanent link to your Gradio demo on the internet, use Hugging Face Spaces. [Hugging Face Spaces](http://huggingface.co/spaces/) provides the infrastructure to permanently host your machine learning model for free!
After you have [created a free Hugging Face account](https://huggingface.co/join), you have three methods to deploy your Gradio app to Hugging Face Spaces:
@ -38,13 +38,13 @@ After you have [created a free Hugging Face account](https://huggingface.co/join
2. From your browser: Drag and drop a folder containing your Gradio model and all related files [here](https://huggingface.co/new-space).
3. Connect Spaces with your Git repository and Spaces will pull the Gradio app from there. See [this guide how to host on Hugging Face Spaces](https://huggingface.co/blog/gradio-spaces) for more information.
3. Connect Spaces with your Git repository and Spaces will pull the Gradio app from there. See [this guide how to host on Hugging Face Spaces](https://huggingface.co/blog/gradio-spaces) for more information.
<video autoplay muted loop>
<source src="https://github.com/gradio-app/gradio/blob/main/guides/assets/hf_demo.mp4?raw=true" type="video/mp4" />
</video>
Note: Some components, like `gr.Image`, will display a "Share" button only on Spaces, so that users can share the generated output to the Discussions page of the Space easily. You can disable this with `show_share_button`, such as `gr.Image(show_share_button=False)`.
Note: Some components, like `gr.Image`, will display a "Share" button only on Spaces, so that users can share the generated output to the Discussions page of the Space easily. You can disable this with `show_share_button`, such as `gr.Image(show_share_button=False)`.
![Image with show_share_button=True](https://github.com/gradio-app/gradio/blob/main/guides/assets/share_icon.png?raw=true)
@ -58,28 +58,31 @@ There are two ways to embed your Gradio demos. You can find quick links to both
### Embedding with Web Components
Web components typically offer a better experience to users than IFrames. Web components load lazily, meaning that they won't slow down the loading time of your website, and they automatically adjust their height based on the size of the Gradio app.
Web components typically offer a better experience to users than IFrames. Web components load lazily, meaning that they won't slow down the loading time of your website, and they automatically adjust their height based on the size of the Gradio app.
To embed with Web Components:
1. Import the gradio JS library into into your site by adding the script below in your site (replace {GRADIO_VERSION} in the URL with the library version of Gradio you are using).
1. Import the gradio JS library into your site by adding the script below (replace {GRADIO_VERSION} in the URL with the library version of Gradio you are using).
```html
<script type="module"
src="https://gradio.s3-us-west-2.amazonaws.com/{GRADIO_VERSION}/gradio.js">
</script>
<script
type="module"
src="https://gradio.s3-us-west-2.amazonaws.com/{GRADIO_VERSION}/gradio.js"
></script>
```
2. Add
2. Add
```html
<gradio-app src="https://$your_space_host.hf.space"></gradio-app>
```
element where you want to place the app. Set the `src=` attribute to your Space's embed URL, which you can find in the "Embed this Space" button. For example:
```html
<gradio-app src="https://abidlabs-pytorch-image-classifier.hf.space"></gradio-app>
<gradio-app
src="https://abidlabs-pytorch-image-classifier.hf.space"
></gradio-app>
```
<script>
@ -96,21 +99,24 @@ You can see examples of how web components look <a href="https://www.gradio.app"
You can also customize the appearance and behavior of your web component with attributes that you pass into the `<gradio-app>` tag:
* `src`: as we've seen, the `src` attributes links to the URL of the hosted Gradio demo that you would like to embed
* `space`: an optional shorthand if your Gradio demo is hosted on Hugging Face Space. Accepts a `username/space_name` instead of a full URL. Example: `gradio/Echocardiogram-Segmentation`. If this attribute attribute is provided, then `src` does not need to be provided.
* `control_page_title`: a boolean designating whether the html title of the page should be set to the title of the Gradio app (by default `"false"`)
* `initial_height`: the initial height of the web component while it is loading the Gradio app, (by default `"300px"`). Note that the final height is set based on the size of the Gradio app.
* `container`: whether to show the border frame and information about where the Space is hosted (by default `"true"`)
* `info`: whether to show just the information about where the Space is hosted underneath the embedded app (by default `"true"`)
* `autoscroll`: whether to autoscroll to the output when prediction has finished (by default `"false"`)
* `eager`: whether to load the Gradio app as soon as the page loads (by default `"false"`)
* `theme_mode`: whether to use the `dark`, `light`, or default `system` theme mode (by default `"system"`)
- `src`: as we've seen, the `src` attributes links to the URL of the hosted Gradio demo that you would like to embed
- `space`: an optional shorthand if your Gradio demo is hosted on Hugging Face Spaces. Accepts a `username/space_name` instead of a full URL. Example: `gradio/Echocardiogram-Segmentation`. If this attribute is provided, then `src` does not need to be provided.
- `control_page_title`: a boolean designating whether the html title of the page should be set to the title of the Gradio app (by default `"false"`)
- `initial_height`: the initial height of the web component while it is loading the Gradio app, (by default `"300px"`). Note that the final height is set based on the size of the Gradio app.
- `container`: whether to show the border frame and information about where the Space is hosted (by default `"true"`)
- `info`: whether to show just the information about where the Space is hosted underneath the embedded app (by default `"true"`)
- `autoscroll`: whether to autoscroll to the output when prediction has finished (by default `"false"`)
- `eager`: whether to load the Gradio app as soon as the page loads (by default `"false"`)
- `theme_mode`: whether to use the `dark`, `light`, or default `system` theme mode (by default `"system"`)
Here's an example of how to use these attributes to create a Gradio app that does not lazy load and has an initial height of 0px.
Here's an example of how to use these attributes to create a Gradio app that does not lazy load and has an initial height of 0px.
```html
<gradio-app space="gradio/Echocardiogram-Segmentation" eager="true"
initial_height="0px"></gradio-app>
<gradio-app
space="gradio/Echocardiogram-Segmentation"
eager="true"
initial_height="0px"
></gradio-app>
```
_Note: While Gradio's CSS will never impact the embedding page, the embedding page can affect the style of the embedded Gradio app. Make sure that any CSS in the parent page isn't so general that it could also apply to the embedded Gradio app and cause the styling to break. Element selectors such as `header { ... }` and `footer { ... }` will be the most likely to cause issues._
@ -129,7 +135,7 @@ Note: if you use IFrames, you'll probably want to add a fixed `height` attribute
## API Page
You can use almost any Gradio app as an API! In the footer of a Gradio app [like this one](https://huggingface.co/spaces/gradio/hello_world), you'll see a "Use via API" link.
You can use almost any Gradio app as an API! In the footer of a Gradio app [like this one](https://huggingface.co/spaces/gradio/hello_world), you'll see a "Use via API" link.
![Use via API](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/api3.gif)
@ -141,15 +147,15 @@ The endpoints are automatically created when you launch a Gradio `Interface`. If
btn.click(add, [num1, num2], output, api_name="addition")
```
This will add and document the endpoint `/api/addition/` to the automatically generated API page. Otherwise, your API endpoints will appear as "unnamed" endpoints.
This will add and document the endpoint `/api/addition/` to the automatically generated API page. Otherwise, your API endpoints will appear as "unnamed" endpoints.
*Note*: For Gradio apps in which [queueing is enabled](https://gradio.app/guides/key-features#queuing), users can bypass the queue if they make a POST request to your API endpoint. To disable this behavior, set `api_open=False` in the `queue()` method. To disable the API page altogether, set `show_api=False` in `.launch()`.
_Note_: For Gradio apps in which [queueing is enabled](https://gradio.app/guides/key-features#queuing), users can bypass the queue if they make a POST request to your API endpoint. To disable this behavior, set `api_open=False` in the `queue()` method. To disable the API page altogether, set `show_api=False` in `.launch()`.
## Authentication
### Password-protected app
You may wish to put an authentication page in front of your app to limit who can open your app. With the `auth=` keyword argument in the `launch()` method, you can provide a tuple with a username and password, or a list of acceptable username/password tuples; Here's an example that provides password-based authentication for a single user named "admin":
You may wish to put an authentication page in front of your app to limit who can open your app. With the `auth=` keyword argument in the `launch()` method, you can provide a tuple with a username and password, or a list of acceptable username/password tuples; Here's an example that provides password-based authentication for a single user named "admin":
```python
demo.launch(auth=("admin", "pass1234"))
@ -171,7 +177,7 @@ This is not the case by default for Safari, Chrome Incognito Mode.
### OAuth (Login via Hugging Face)
Gradio supports OAuth login via Hugging Face. This feature is currently **experimental** and only available on Spaces.
If allows to add a *"Sign in with Hugging Face"* button to your demo. Check out [this Space](https://huggingface.co/spaces/Wauplin/gradio-oauth-demo)
It allows you to add a _"Sign in with Hugging Face"_ button to your demo. Check out [this Space](https://huggingface.co/spaces/Wauplin/gradio-oauth-demo)
for a live demo.
To enable OAuth, you must set `hf_oauth: true` as a Space metadata in your README.md file. This will register your Space
@ -225,7 +231,7 @@ def echo(name, request: gr.Request):
io = gr.Interface(echo, "textbox", "textbox").launch()
```
Note: if your function is called directly instead of through the UI (this happens, for
Note: if your function is called directly instead of through the UI (this happens, for
example, when examples are cached), then `request` will be `None`. You should handle
this case explicitly to ensure that your app does not throw any errors. That is why
we have the explicit check `if request`.
@ -243,22 +249,22 @@ Note that this approach also allows you run your Gradio apps on custom paths (`h
## Security and File Access
Sharing your Gradio app with others (by hosting it on Spaces, on your own server, or through temporary share links) **exposes** certain files on the host machine to users of your Gradio app.
Sharing your Gradio app with others (by hosting it on Spaces, on your own server, or through temporary share links) **exposes** certain files on the host machine to users of your Gradio app.
In particular, Gradio apps ALLOW users to access three kinds of files:
* **Files in the same directory (or a subdirectory) of where the Gradio script is launched from.** For example, if the path to your gradio scripts is `/home/usr/scripts/project/app.py` and you launch it from `/home/usr/scripts/project/`, then users of your shared Gradio app will be able to access any files inside `/home/usr/scripts/project/`. This is done so that you can easily reference these files in your Gradio app (e.g. for your app's `examples`).
- **Files in the same directory (or a subdirectory) of where the Gradio script is launched from.** For example, if the path to your gradio scripts is `/home/usr/scripts/project/app.py` and you launch it from `/home/usr/scripts/project/`, then users of your shared Gradio app will be able to access any files inside `/home/usr/scripts/project/`. This is done so that you can easily reference these files in your Gradio app (e.g. for your app's `examples`).
- **Temporary files created by Gradio.** These are files that are created by Gradio as part of running your prediction function. For example, if your prediction function returns a video file, then Gradio will save that video to a temporary file and then send the path to the temporary file to the front end. You can customize the location of temporary files created by Gradio by setting the environment variable `GRADIO_TEMP_DIR` to an absolute path, such as `/home/usr/scripts/project/temp/`.
- **Files that you explicitly allow via the `allowed_paths` parameter in `launch()`**. This parameter allows you to pass in a list of additional directories or exact filepaths you'd like to allow users to have access to. (By default, this parameter is an empty list).
Gradio DOES NOT ALLOW access to:
- **Dotfiles** (any files whose name begins with `'.'`) or any files that are contained in any directory whose name begins with `'.'`
- **Files that you explicitly block via the `blocked_paths` parameter in `launch()`**. You can pass in a list of additional directories or exact filepaths to the `blocked_paths` parameter in `launch()`. This parameter takes precedence over the files that Gradio exposes by default or via `allowed_paths`.
- **Any other paths on the host machine**. Users should NOT be able to access other arbitrary paths on the host.
Please make sure you are running the latest version of `gradio` for these security settings to apply.
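A hedged sketch of combining both launch parameters described above (the paths are illustrative):
```python
import gradio as gr

demo = gr.Interface(lambda text: text, "text", "text")
demo.launch(
    allowed_paths=["/home/usr/shared_assets/"],            # extra locations users may access
    blocked_paths=["/home/usr/scripts/project/secrets/"],  # takes precedence over allowed paths
)
```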

View File

@ -4,25 +4,25 @@ This guide covers how State is handled in Gradio. Learn the difference between G
## Global State
Your function may use data that persists beyond a single function call. If the data is something accessible to all function calls and all users, you can create a variable outside the function call and access it inside the function. For example, you may load a large model outside the function and use it inside the function so that every function call does not need to reload the model.
$code_score_tracker
In the code above, the `scores` array is shared between all users. If multiple users are accessing this demo, their scores will all be added to the same list, and the returned top 3 scores will be collected from this shared reference.
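A minimal illustration of the same pattern (a stand-in sketch, not the guide's exact `$code_score_tracker` demo):
```python
import gradio as gr

scores = []  # defined outside the function, so it is shared by every user and every call

def track_score(score):
    scores.append(score)
    return sorted(scores, reverse=True)[:3]  # top three scores across all sessions

gr.Interface(track_score, gr.Number(label="Score"), gr.JSON(label="Top Scores")).launch()
```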
## Session State
Another type of data persistence Gradio supports is session **state**, where data persists across multiple submits within a page session. However, data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:
1. Pass in an extra parameter into your function, which represents the state of the interface.
2. At the end of the function, return the updated value of the state as an extra return value.
3. Add the `'state'` input and `'state'` output components when creating your `Interface`
A chatbot is an example where you would need session state - you want access to a user's previous submissions, but you cannot store chat history in a global variable, because then chat history would get jumbled between different users.
$code_chatbot_dialogpt
$demo_chatbot_dialogpt
Notice how the state persists across submits within each page, but if you load this demo in another tab (or refresh the page), the demos will not share chat history.
The default value of `state` is None. If you pass a default value to the state parameter of the function, it is used as the default value of the state instead. The `Interface` class only supports a single input and a single output state variable, though it can be a list with multiple elements. For more complex use cases, you can use Blocks, [which supports multiple `State` variables](/guides/state-in-blocks/).
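As a hedged sketch of the three steps with `Interface` (a simple per-session counter rather than the chatbot demo above):
```python
import gradio as gr

def count(message, session_count):
    session_count = (session_count or 0) + 1  # state arrives as the extra argument; its default is None
    return f"Messages this session: {session_count}", session_count  # return the updated state last

gr.Interface(count, inputs=["text", "state"], outputs=["text", "state"]).launch()
```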

View File

@ -13,9 +13,9 @@ Note there is no submit button, because the interface resubmits automatically on
## Streaming Components
Some components have a "streaming" mode, such as the `Audio` component in microphone mode, or the `Image` component in webcam mode. Streaming means data is sent continuously to the backend and the `Interface` function is continuously being rerun.
The difference between `gr.Audio(source='microphone')` and `gr.Audio(source='microphone', streaming=True)`, when both are used in `gr.Interface(live=True)`, is that the first `Component` will automatically submit data and run the `Interface` function when the user stops recording, whereas the second `Component` will continuously send data and run the `Interface` function _during_ recording.
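A hedged sketch of the streaming variant described above (the function body is illustrative; the exact format of the audio chunks depends on the component's settings):
```python
import gradio as gr

def process_chunk(audio_chunk):
    # called repeatedly while the user is still recording
    return "received a chunk" if audio_chunk is not None else ""

gr.Interface(
    process_chunk,
    gr.Audio(source="microphone", streaming=True),
    "text",
    live=True,
).launch()
```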
Here is example code of streaming images from the webcam.
@ -23,4 +23,4 @@ $code_stream_frames
Streaming can also be done in an output component. A `gr.Audio(streaming=True)` output component can take a stream of audio data yielded piece-wise by a generator function and combine it into a single audio file.
$code_stream_audio_out

View File

@ -1,11 +1,11 @@
# More on Examples
This guide covers what more you can do with Examples: Loading examples from a directory, providing partial examples, and caching. If Examples is new to you, check out the intro in the [Key Features](/guides/key-features/#example-inputs) guide.
## Providing Examples
As covered in the [Key Features](/guides/key-features/#example-inputs) guide, adding examples to an Interface is as easy as providing a list of lists to the `examples`
keyword argument.
Each sublist is a data sample, where each element corresponds to an input of the prediction function.
The inputs must be ordered in the same order as the prediction function expects them.
@ -13,10 +13,11 @@ If your interface only has one input component, then you can provide your exampl
### Loading Examples from a Directory
You can also specify a path to a directory containing your examples. If your Interface takes only a single file-type input, e.g. an image classifier, you can simply pass a directory filepath to the `examples=` argument, and the `Interface` will load the images in the directory as examples.
In the case of multiple inputs, this directory must
contain a log.csv file with the example values.
In the context of the calculator demo, we can set `examples='/demo/calculator/examples'` and in that directory we include the following `log.csv` file:
```csv
num,operation,num2
5,"add",3
@ -33,9 +34,8 @@ Sometimes your app has many input components, but you would only like to provide
## Caching examples
You may wish to provide some cached examples of your model for users to quickly try out, in case your model takes a while to run normally.
If `cache_examples=True`, the `Interface` will run all of your examples through your app and save the outputs when you call the `launch()` method. This data will be saved in a directory called `gradio_cached_examples`.
Whenever a user clicks on an example, the output will now automatically be populated in the app, using data from this cached directory instead of actually running the function. This is useful so users can quickly try out your model without adding any load!
Keep in mind once the cache is generated, it will not be updated in future launches. If the examples or function logic change, delete the cache folder to clear the cache and rebuild it with another `launch()`.
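A hedged sketch of enabling the cache (example values are illustrative):
```python
import gradio as gr

def reverse(text):
    return text[::-1]

gr.Interface(
    reverse,
    "text",
    "text",
    examples=[["hello"], ["gradio"]],
    cache_examples=True,  # outputs are computed once at launch() and stored in gradio_cached_examples
).launch()
```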

View File

@ -1,21 +1,20 @@
# Advanced Interface Features
There's more to cover on the [Interface](https://gradio.app/docs#interface) class. This guide covers all the advanced features: Using [Interpretation](https://gradio.app/docs#interpretation), custom styling, loading from the [Hugging Face Hub](https://hf.co), and using [Parallel](https://gradio.app/docs#parallel) and [Series](https://gradio.app/docs#series).
## Interpreting your Predictions
Most models are black boxes such that the internal logic of the function is hidden from the end user. To encourage transparency, we've made it very easy to add interpretation to your model by simply setting the `interpretation` keyword in the `Interface` class to `default`. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below which shows an image classifier that also includes interpretation:
$code_image_classifier_interpretation
In addition to `default`, Gradio also includes [Shapley-based interpretation](https://christophm.github.io/interpretable-ml-book/shap.html), which provides more accurate interpretations, albeit usually with a slower runtime. To use this, simply set the `interpretation` parameter to `"shap"` (note: also make sure the python package `shap` is installed). Optionally, you can modify the `num_shap` parameter, which controls the tradeoff between accuracy and runtime (increasing this value generally increases accuracy). Here is an example:
```python
gr.Interface(fn=classify_image,
inputs=image,
outputs=label,
interpretation="shap",
num_shap=5).launch()
```
@ -29,7 +28,7 @@ You can also write your own interpretation function. The demo below adds custom
$code_gender_sentence_custom_interpretation
Learn more about Interpretation in the [docs](https://gradio.app/docs#interpretation).
## Custom Styling
@ -45,7 +44,7 @@ If you'd like to reference external files in your css, preface the file path (wh
gr.Interface(..., css="body {background-image: url('file=clouds.jpg')}")
```
**Warning**: Custom CSS is _not_ guaranteed to work across Gradio versions as the Gradio HTML DOM may change. We recommend using custom CSS sparingly and instead using [Themes](/guides/theming-guide/) whenever possible.
## Loading Hugging Face Models and Spaces
@ -58,7 +57,7 @@ gr.Interface.load("huggingface/gpt2").launch();
```
```python
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
inputs=gr.Textbox(lines=5, label="Input Text") # customizes the input component
).launch()
```
@ -66,8 +65,8 @@ gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
- To load any Space from the Hugging Face Hub and recreate it locally (so that you can customize the inputs and outputs for example), you pass `"spaces/"` followed by the model name:
```python
gr.Interface.load("spaces/eugenesiow/remove-bg",
inputs="webcam",
gr.Interface.load("spaces/eugenesiow/remove-bg",
inputs="webcam",
title="Remove your webcam background!").launch()
```
@ -90,16 +89,16 @@ generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
gr.Parallel(generator1, generator2, generator3).launch()
```
`Series` lets you put models and spaces in series, piping the output of one model into the input of the next model.
```python
generator = gr.Interface.load("huggingface/gpt2")
translator = gr.Interface.load("huggingface/t5-small")
gr.Series(generator, translator).launch()
# this demo generates text, then translates it to German, and outputs the final result.
```
And of course, you can also mix `Parallel` and `Series` together whenever that makes sense!
Learn more about Parallel and Series in the [docs](https://gradio.app/docs#parallel).

View File

@ -1,22 +1,20 @@
# The 4 Kinds of Gradio Interfaces
So far, we've always assumed that in order to build a Gradio demo, you need both inputs and outputs. But this isn't always the case for machine learning demos: for example, _unconditional image generation models_ don't take any input but produce an image as the output.
It turns out that the `gradio.Interface` class can actually handle 4 different kinds of demos:
1. **Standard demos**: which have both separate inputs and outputs (e.g. an image classifier or speech-to-text model)
2. **Output-only demos**: which don't take any input but produce an output (e.g. an unconditional image generation model)
3. **Input-only demos**: which don't produce any output but do take in some sort of input (e.g. a demo that saves images that you upload to a persistent external database)
4. **Unified demos**: which have both input and output components, but the input and output components _are the same_. This means that the output produced overrides the input (e.g. a text autocomplete model)
Depending on the kind of demo, the user interface (UI) looks slightly different:
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/interfaces4.png)
Let's see how to build each kind of demo using the `Interface` class, along with examples:
## Standard demos
To create a demo that has both the input and the output components, you simply need to set the values of the `inputs` and `outputs` parameter in `Interface()`. Here's an example demo of a simple image filter:
@ -24,7 +22,6 @@ To create a demo that has both the input and the output components, you simply n
$code_sepia_filter
$demo_sepia_filter
## Output-only demos
What about demos that only contain outputs? In order to build such a demo, you simply set the value of the `inputs` parameter in `Interface()` to `None`. Here's an example demo of a mock image generation model:
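A hedged stand-in for such a mock generator (the random image is purely illustrative):
```python
import numpy as np
import gradio as gr

def generate():
    # stand-in for an unconditional image generation model
    return np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)

gr.Interface(fn=generate, inputs=None, outputs="image").launch()
```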

View File

@ -1,6 +1,6 @@
# Blocks and Event Listeners
We took a quick look at Blocks in the [Quickstart](https://gradio.app/guides/quickstart/#blocks-more-flexibility-and-control). Let's dive deeper. This guide will cover how Blocks are structured, event listeners and their types, running events continuously, updating configurations, and using dictionaries vs lists.
## Blocks Structure
@ -15,7 +15,7 @@ $demo_hello_blocks
## Event Listeners and Interactivity
In the example above, you'll notice that you are able to edit Textbox `name`, but not Textbox `output`. This is because any Component that acts as an input to an event listener is made interactive. However, since Textbox `output` acts only as an output, Gradio determines that it should not be made interactive. You can override the default behavior and directly configure the interactivity of a Component with the boolean `interactive` keyword argument.
```python
output = gr.Textbox(label="Output", interactive=True)
@ -39,7 +39,7 @@ A Blocks app is not limited to a single data flow the way Interfaces are. Take a
$code_reversible_flow
$demo_reversible_flow
Note that `num1` can act as input to `num2`, and also vice-versa! As your apps get more complex, you will have many data flows connecting various Components.
Here's an example of a "multi-step" demo, where the output of one model (a speech-to-text model) gets fed into the next model (a sentiment classifier).
@ -56,7 +56,7 @@ The event listeners you've seen so far have a single input component. If you'd l
Let's see an example of each:
$code_calculator_list_and_dict
Both `add()` and `sub()` take `a` and `b` as inputs. However, the syntax is different between these listeners.
1. To the `add_btn` listener, we pass the inputs as a list. The function `add()` takes each of these inputs as arguments. The value of `a` maps to the argument `num1`, and the value of `b` maps to the argument `num2`.
2. To the `sub_btn` listener, we pass the inputs as a set (note the curly brackets!). The function `sub()` takes a single dictionary argument `data`, where the keys are the input components, and the values are the values of those components.
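A hedged sketch of the two syntaxes side by side (a simplified stand-in for `$code_calculator_list_and_dict`):
```python
import gradio as gr

with gr.Blocks() as demo:
    a = gr.Number(label="a")
    b = gr.Number(label="b")
    result = gr.Number(label="result")

    def add(num1, num2):     # list inputs: values arrive as positional arguments
        return num1 + num2

    def sub(data):           # set inputs: values arrive as a dict keyed by component
        return data[a] - data[b]

    gr.Button("Add").click(add, inputs=[a, b], outputs=result)
    gr.Button("Subtract").click(sub, inputs={a, b}, outputs=result)

demo.launch()
```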
@ -84,7 +84,7 @@ with gr.Blocks() as demo:
else:
return 0, "hungry"
gr.Button("EAT").click(
fn=eat,
inputs=food_box,
outputs=[food_box, status_box]
)
@ -92,7 +92,7 @@ with gr.Blocks() as demo:
Above, each return statement returns two values corresponding to `food_box` and `status_box`, respectively.
Instead of returning a list of values corresponding to each output component in order, you can also return a dictionary, with the key corresponding to the output component and the value as the new value. This also allows you to skip updating some output components.
```python
with gr.Blocks() as demo:
@ -104,7 +104,7 @@ with gr.Blocks() as demo:
else:
return {status_box: "hungry"}
gr.Button("EAT").click(
fn=eat,
inputs=food_box,
outputs=[food_box, status_box]
)
@ -127,14 +127,14 @@ See how we can configure the Textbox itself through the `gr.update()` method. Th
## Running Events Consecutively
You can also run events consecutively by using the `then` method of an event listener. This will run an event after the previous event has finished running. This is useful for running events that update components in multiple steps.
For example, in the chatbot example below, we first update the chatbot with the user message immediately, and then update the chatbot with the computer response after a simulated delay.
$code_chatbot_consecutive
$demo_chatbot_consecutive
The `.then()` method of an event listener executes the subsequent event regardless of whether the previous event raised any errors. If you'd like to only run subsequent events if the previous event executed successfully, use the `.success()` method, which takes the same arguments as `.then()`.
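A hedged sketch of chaining two steps with `.then()` (component names and functions are illustrative):
```python
import gradio as gr

with gr.Blocks() as demo:
    msg = gr.Textbox(label="Message")
    out = gr.Textbox(label="Response")

    def show_user(message):
        return f"You said: {message}"

    def add_reply(current):
        return current + "\n(bot reply goes here)"

    # add_reply runs only after show_user has finished
    msg.submit(show_user, msg, out).then(add_reply, out, out)

demo.launch()
```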
## Running Events Continuously
@ -150,11 +150,11 @@ $demo_sine_curve
## Gathering Event Data
You can gather specific data about an event by adding the associated event data class as a type hint to an argument in the event listener function.
For example, event data for `.select()` can be type hinted by a `gradio.SelectData` argument. This event is triggered when a user selects some part of the triggering component, and the event data includes information about what the user specifically selected. If a user selected a specific word in a `Textbox`, a specific image in a `Gallery`, or a specific cell in a `DataFrame`, the event data argument would contain information about the specific selection.
In the 2 player tic-tac-toe demo below, a user can select a cell in the `DataFrame` to make a move. The event data argument contains information about the specific cell that was selected. We can first check to see if the cell is empty, and then update the cell with the user's move.
$code_tictactoe
$demo_tictactoe

View File

@ -40,7 +40,7 @@ Learn more about Rows in the [docs](https://gradio.app/docs/#row).
## Columns and Nesting
Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:
$code_rows_and_columns
$demo_rows_and_columns
@ -72,7 +72,7 @@ $demo_blocks_form
## Variable Number of Outputs
By adjusting the visibility of components in a dynamic way, it is possible to create
demos with Gradio that support a _variable number of outputs_. Here's a very simple example
where the number of output textboxes is controlled by an input slider:
$code_variable_outputs

View File

@ -1,6 +1,6 @@
# State in Blocks
We covered [State in Interfaces](https://gradio.app/interface-state); this guide takes a look at state in Blocks, which works mostly the same.
## Global State
@ -8,26 +8,23 @@ Global state in Blocks works the same as in Interface. Any variable created outs
## Session State
Gradio supports session **state**, where data persists across multiple submits within a page session, in Blocks apps as well. To reiterate, session data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:
1. Create a `gr.State()` object. If there is a default value to this stateful object, pass that into the constructor.
2. In the event listener, put the `State` object as an input and output.
3. In the event listener function, add the variable to the input parameters and the return value.
Let's take a look at a game of hangman.
$code_hangman
$demo_hangman
Let's see how we do each of the 3 steps listed above in this game:
1. We store the used letters in `used_letters_var`. In the constructor of `State`, we set the initial value of this to `[]`, an empty list.
2. In `btn.click()`, we have a reference to `used_letters_var` in both the inputs and outputs.
3. In `guess_letter`, we pass the value of this `State` to `used_letters`, and then return an updated value of this `State` in the return statement.
With more complex apps, you will likely have many State variables storing session state in a single Blocks app.
Learn more about `State` in the [docs](https://gradio.app/docs#state).

View File

@ -1,8 +1,8 @@
# Custom JS and CSS
This guide covers how to style Blocks with more flexibility, as well as adding Javascript code to event listeners.
**Warning**: The use of query selectors in custom JS and CSS is _not_ guaranteed to work across Gradio versions as the Gradio HTML DOM may change. We recommend using query selectors sparingly.
## Custom CSS
@ -18,6 +18,7 @@ Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`
For additional styling ability, you can pass any CSS to your app using the `css=` kwarg.
The base class for the Gradio app is `gradio-container`, so here's an example that changes the background color of the Gradio app:
```python
with gr.Blocks(css=".gradio-container {background-color: red}") as demo:
...
@ -30,7 +31,7 @@ with gr.Blocks(css=".gradio-container {background: url('file=clouds.jpg')}") as
...
```
You can also pass the filepath to a CSS file to the `css` argument.
## The `elem_id` and `elem_classes` Arguments
@ -38,7 +39,7 @@ You can `elem_id` to add an HTML element `id` to any component, and `elem_classe
```python
css = """
#warning {background-color: #FFCCCB}
.feedback textarea {font-size: 24px !important}
"""
@ -54,4 +55,4 @@ The CSS `#warning` ruleset will only target the second Textbox, while the `.feed
Event listeners have a `_js` argument that can take a Javascript function as a string and treat it just like a Python event listener function. You can pass both a Javascript function and a Python function (in which case the Javascript function is run first) or only Javascript (and set the Python `fn` to `None`). Take a look at the code below:
$code_blocks_js_methods
$demo_blocks_js_methods

View File

@ -1,6 +1,6 @@
# Using Gradio Blocks Like Functions
Tags: TRANSLATION, HUB, SPACES
**Prerequisite**: This Guide builds on the Blocks Introduction. Make sure to [read that guide first](https://gradio.app/guides/quickstart/#blocks-more-flexibility-and-control).
@ -18,7 +18,7 @@ The following section will show how.
## Treating Blocks like functions
Let's say we have the following demo that translates English text to German text.
$code_english_translator
@ -78,12 +78,12 @@ english_generator(text, fn_index=1)[0]["generated_text"]
```
Functions in Gradio Spaces are zero-indexed, so since the Spanish translator would be the second function in my space,
you would use index 1.
## Parting Remarks
We showed how treating a Blocks app like a regular Python function helps you compose functionality across different apps.
Any Blocks app can be treated like a function, but a powerful pattern is to `load` an app hosted on
[Hugging Face Spaces](https://huggingface.co/spaces) prior to treating it like a function in your own app.
You can also load models hosted on the [Hugging Face Model Hub](https://huggingface.co/models) - see the [Using Hugging Face Integrations](/using_hugging_face_integrations) guide for an example.

View File

@ -10,9 +10,9 @@ This tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that
$demo_chatinterface_streaming_echo
We'll start with a couple of simple examples, and then show how to use `gr.ChatInterface()` with real language models from several popular APIs and libraries, including `langchain`, `openai`, and Hugging Face.
**Prerequisites**: please make sure you are using the **latest version** of Gradio:
```bash
$ pip install --upgrade gradio
@ -22,8 +22,8 @@ $ pip install --upgrade gradio
When working with `gr.ChatInterface()`, the first thing you should do is define your chat function. Your chat function should take two arguments: `message` and then `history` (the arguments can be named anything, but must be in this order).
- `message`: a `str` representing the user's input.
- `history`: a `list` of `list` representing the conversations up until that point. Each inner list consists of two `str` representing a pair: `[user input, bot response]`.
Your function should return a single string response, which is the bot's response to the particular user input `message`. Your function can take into account the `history` of messages, as well as the current message.
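For instance, a minimal chat function following that signature (a hedged sketch, not the guide's exact example):
```python
import gradio as gr

def respond(message, history):
    # history is a list of [user_message, bot_response] pairs from earlier turns
    return f"You sent: {message} (message #{len(history) + 1} this session)"

gr.ChatInterface(respond).launch()
```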
@ -71,7 +71,7 @@ def alternatingly_agree(message, history):
gr.ChatInterface(alternatingly_agree).launch()
```
## Streaming chatbots
If in your chat function, you use `yield` to generate a sequence of responses, you'll end up with a streaming chatbot. It's that simple!
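A hedged sketch of such a generator-based chat function:
```python
import time
import gradio as gr

def slow_echo(message, history):
    partial = ""
    for character in message:
        partial += character
        time.sleep(0.05)
        yield partial  # each yield replaces the currently displayed response

gr.ChatInterface(slow_echo).queue().launch()
```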
@ -93,14 +93,13 @@ Notice that we've [enabled queuing](/guides/key-features#queuing), which is requ
If you're familiar with Gradio's `Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:
- add a title and description above your chatbot using `title` and `description` arguments.
- add a theme or custom css using `theme` and `css` arguments respectively.
- add `examples` and even enable `cache_examples`, which make it easier for users to try it out.
- change the text of, or disable, each of the buttons that appear in the chatbot interface: `submit_btn`, `retry_btn`, `undo_btn`, `clear_btn`.
If you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox as well. Here's an example of how we can use these parameters:
```python
import gradio as gr
@ -129,13 +128,13 @@ gr.ChatInterface(
You may want to add additional parameters to your chatbot and expose them to your users through the Chatbot UI. For example, suppose you want to add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The `ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.
The `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `"textbox"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot (and any examples) within a `gr.Accordion()`. You can set the label of this accordion using the `additional_inputs_accordion_name` parameter.
Here's a complete example:
$code_chatinterface_system_prompt
If the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.
```python
import gradio as gr
@ -150,7 +149,7 @@ def echo(message, history, system_prompt, tokens):
with gr.Blocks() as demo:
system_prompt = gr.Textbox("You are helpful AI.", label="System Prompt")
slider = gr.Slider(10, 100, render=False)
gr.ChatInterface(
echo, additional_inputs=[system_prompt, slider]
)
@ -191,14 +190,13 @@ def predict(message, history):
gpt_response = llm(history_langchain_format)
return gpt_response.content
gr.ChatInterface(predict).launch()
```
## A streaming example using `openai`
Of course, we could also use the `openai` library directly. Here is a similar example, but this time with streaming results as well:
```python
import openai
import gradio as gr
@ -214,18 +212,18 @@ def predict(message, history):
response = openai.ChatCompletion.create(
model='gpt-3.5-turbo',
messages= history_openai_format,
temperature=1.0,
stream=True
)
partial_message = ""
for chunk in response:
if len(chunk['choices'][0]['delta']) != 0:
partial_message = partial_message + chunk['choices'][0]['delta']['content']
yield partial_message
gr.ChatInterface(predict).queue().launch()
```
## Example using a local, open-source LLM with Hugging Face
@ -250,14 +248,14 @@ class StopOnTokens(StoppingCriteria):
return True
return False
def predict(message, history):
history_transformer_format = history + [[message, ""]]
stop = StopOnTokens()
messages = "".join(["".join(["\n<human>:"+item[0], "\n<bot>:"+item[1]]) #curr_system_message +
messages = "".join(["".join(["\n<human>:"+item[0], "\n<bot>:"+item[1]]) #curr_system_message +
for item in history_transformer_format])
model_inputs = tokenizer([messages], return_tensors="pt").to("cuda")
streamer = TextIteratorStreamer(tokenizer, timeout=10., skip_prompt=True, skip_special_tokens=True)
generate_kwargs = dict(
@ -278,10 +276,10 @@ def predict(message, history):
for new_token in streamer:
if new_token != '<':
partial_message += new_token
yield partial_message
gr.ChatInterface(predict).queue().launch()
```
With those examples, you should be all set to create your own Gradio Chatbot demos soon! For building even more custom Chatbot applications, check out [a dedicated guide](/guides/creating-a-custom-chatbot-with-blocks) using the low-level `gr.Blocks()` API.

View File

@ -1,7 +1,7 @@
# How to Create a Custom Chatbot with Gradio Blocks
Tags: NLP, TEXT, CHAT
Related spaces: https://huggingface.co/spaces/gradio/chatbot_streaming, https://huggingface.co/spaces/project-baize/Baize-7B
## Introduction
@ -12,7 +12,7 @@ This tutorial will show how to make chatbot UIs from scratch with Gradio's low-l
$demo_chatbot_streaming
**Prerequisite**: We'll be using the `gradio.Blocks` class to build our Chatbot demo.
You can [read the Guide to Blocks first](https://gradio.app/quickstart/#blocks-more-flexibility-and-control) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.
## A Simple Chatbot Demo
@ -22,25 +22,23 @@ $code_chatbot_simple
There are three Gradio components here:
- A `Chatbot`, whose value stores the entire history of the conversation, as a list of response pairs between the user and bot.
- A `Textbox` where the user can type their message, and then hit enter/submit to trigger the chatbot response
- A `ClearButton` to clear the Textbox and entire Chatbot history
We have a single function, `respond()`, which takes in the entire history of the chatbot, appends a random message, waits 1 second, and then returns the updated chat history. The `respond()` function also clears the textbox when it returns.
Of course, in practice, you would replace `respond()` with your own more complex function, which might call a pretrained model or an API, to generate a response.
$demo_chatbot_simple
## Add Streaming to your Chatbot
There are several ways we can improve the user experience of the chatbot above. First, we can stream responses so the user doesn't have to wait as long for a message to be generated. Second, we can have the user message appear immediately in the chat history, while the chatbot's response is being generated. Here's the code to achieve that:
$code_chatbot_streaming
You'll notice that when a user submits their message, we now _chain_ three events with `.then()`:
1. The first method `user()` updates the chatbot with the user message and clears the input field. This method also makes the input field non interactive so that the user can't send another message while the chatbot is responding. Because we want this to happen instantly, we set `queue=False`, which would skip any queue had it been enabled. The chatbot's history is appended with `(user_message, None)`, the `None` signifying that the bot has not responded.
@ -71,12 +69,12 @@ def add_file(history, file):
return history
```
Putting this together, we can create a _multimodal_ chatbot with a textbox for a user to submit text and a file upload button to submit images / audio / video files. The rest of the code looks pretty much the same as before:
$code_chatbot_multimodal
$demo_chatbot_multimodal
And you're done! That's all the code you need to build an interface for your chatbot model. Finally, we'll end our Guide with some links to Chatbots that are running on Spaces so that you can get an idea of what else is possible:
- [project-baize/Baize-7B](https://huggingface.co/spaces/project-baize/Baize-7B): A stylized chatbot that allows you to stop generation as well as regenerate responses.
- [MAGAer13/mPLUG-Owl](https://huggingface.co/spaces/MAGAer13/mPLUG-Owl): A multimodal chatbot that allows you to upvote and downvote responses.

View File

@ -2,7 +2,7 @@
Tags: NLP, TEXT, CHAT
We're excited to announce that Gradio can now automatically create a discord bot from a deployed app! 🤖
Discord is a popular communication platform that allows users to chat and interact with each other in real-time. By turning your Gradio app into a Discord bot, you can bring cutting edge AI to your discord server and give your community a whole new way to interact.
@ -27,6 +27,7 @@ Also, make sure you have a [Hugging Face account](https://huggingface.co/) and a
## 🏃‍♀️ Quickstart 🏃‍♀️
### Step 1: Implementing our chatbot
Let's build a very simple Chatbot using `ChatInterface` that simply repeats the user message. Write the following code into an `app.py`
```python
@ -39,6 +40,7 @@ demo = gr.ChatInterface(slow_echo).queue().launch()
```
### Step 2: Deploying our App
In order to create a discord bot for our app, it must be accessible over the internet. In this guide, we will use the `gradio deploy` command to deploy our chatbot to Hugging Face spaces from the command line. Run the following command.
```bash
@ -49,6 +51,7 @@ This command will ask you some questions, e.g. requested hardware, requirements,
Note the URL of the space that was created. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot
### Step 3: Creating a Discord Bot
Turning our space into a discord bot is also a one-liner thanks to the `gradio deploy-discord`. Run the following command:
```bash
@ -64,25 +67,28 @@ gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <tok
Note the URL that gets printed out to the console. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot-gradio-discord-bot
### Step 4: Getting a Discord Bot Token
If you didn't have a discord bot token for step 3, go to the URL that got printed in the console and follow the instructions there.
Once you obtain a token, run the command again but this time pass in the token:
```bash
gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <token>
```
### Step 5: Add the bot to your server
Visit the space of your discord bot. You should see "Add this bot to your server by clicking this link:" followed by a URL. Go to that URL and add the bot to your server!
### Step 6: Use your bot!
By default the bot can be called by starting a message with `/chat`, e.g. `/chat <your prompt here>`.
⚠️ Tip ⚠️: If either of the deployed spaces goes to sleep, the bot will stop working. By default, spaces go to sleep after 48 hours of inactivity. You can upgrade the hardware of your space to prevent it from going to sleep. See this [guide](https://huggingface.co/docs/hub/spaces-gpus#using-gpu-spaces) for more information.
<img src="https://gradio-builds.s3.amazonaws.com/demo-files/discordbots/guide/echo_slash.gif">
### Using the `gradio_client.Client` Class
You can also create a discord bot from a deployed gradio app with python.
```python
@ -94,7 +100,7 @@ grc.Client("freddyaboulton/echo-chatbot").deploy_discord()
We have created an organization on Hugging Face called [gradio-discord-bots](https://huggingface.co/gradio-discord-bots) containing several template spaces that explain how to deploy state of the art LLMs powered by gradio as discord bots.
The easiest way to get started is by deploying Meta's Llama 2 LLM with 70 billion parameters. Simply go to this [space](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-70b-chat-hf) and follow the instructions.
The deployment can be done in one line! 🤯
@ -107,16 +113,16 @@ grc.Client("ysharma/Explore_llamav2_with_TGI").deploy_discord(to_id="llama2-70b-
In addion to Meta's 70 billion Llama 2 model, we have prepared template spaces for the following LLMs and deployment options:
- [gpt-3.5-turbo](https://huggingface.co/spaces/gradio-discord-bots/gpt-35-turbo), powered by OpenAI. Requires an OpenAI key.
- [falcon-7b-instruct](https://huggingface.co/spaces/gradio-discord-bots/falcon-7b-instruct) powered by Hugging Face Inference Endpoints.
- [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-13b-chat-hf) powered by Hugging Face Inference Endpoints.
- [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/llama-2-13b-chat-transformers) powered by Hugging Face transformers.
To deploy any of these models to discord, simply follow the instructions in the linked space for that model.
## Deploying non-chat gradio apps to discord
As mentioned above, you don't need a `gr.ChatInterface` if you want to deploy your gradio app to discord. All that's needed is an API route that takes in a single string and outputs a single string.
The following code will deploy a space that translates english to german as a discord bot.
@ -128,4 +134,4 @@ client.deploy_discord(api_names=['german'])
## Conclusion
That's it for this guide! We're really excited about this feature. Tag [@Gradio](https://twitter.com/Gradio) on twitter and show us how your discord community interacts with your discord bots.

View File

@ -3,7 +3,7 @@
Related spaces: https://huggingface.co/spaces/gradio/helsinki_translation_en_es
Tags: HUB, SPACES, EMBED
Contributed by <a href="https://huggingface.co/osanseviero">Omar Sanseviero</a> 🦙
## Introduction
@ -26,9 +26,9 @@ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
def predict(text):
return pipe(text)[0]["translation_text"]
demo = gr.Interface(
fn=predict,
inputs='text',
outputs='text',
)
@ -48,12 +48,10 @@ demo = gr.Interface.from_pipeline(pipe)
demo.launch()
```
The previous code produces the following interface, which you can try right here in your browser:
<gradio-app space="Helsinki-NLP/opus-mt-en-es"></gradio-app>
## Using Hugging Face Inference API
Hugging Face has a free service called the [Inference API](https://huggingface.co/inference-api), which allows you to send HTTP requests to models in the Hub. For transformers or diffusers-based models, the API can be 2 to 10 times faster than running the inference yourself. The API is free (rate limited), and you can switch to dedicated [Inference Endpoints](https://huggingface.co/pricing) when you want to use it in production.
@ -72,15 +70,14 @@ Notice that we just put specify the model name and state that the `src` should b
You might notice that the first inference takes about 20 seconds. This happens since the Inference API is loading the model in the server. You get some benefits afterward:
- The inference will be much faster.
- The server caches your requests.
- You get built-in automatic scaling.
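As a hedged sketch of what this looks like in code, assuming the `src="models"` form described in this guide (the model name is the guide's translation model):
```python
import gradio as gr

# load the model through the Hugging Face Inference API rather than running it locally
demo = gr.Interface.load("Helsinki-NLP/opus-mt-en-es", src="models")
demo.launch()
```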
## Hosting your Gradio demos
[Hugging Face Spaces](https://hf.co/spaces) allows anyone to host their Gradio demos freely, and uploading your Gradio demo takes a couple of minutes. You can head to [hf.co/new-space](https://huggingface.co/new-space), select the Gradio SDK, create an `app.py` file, and voila! You have a demo you can share with anyone else. To learn more, read [this guide on how to host on Hugging Face Spaces using the website](https://huggingface.co/blog/gradio-spaces).
Alternatively, you can create a Space programmatically, making use of the [huggingface_hub](https://huggingface.co/docs/huggingface_hub/index) client library. Here's an example:
```python
@ -99,15 +96,13 @@ file_url = upload_file(
token=hf_token,
)
```
Here, `create_repo` creates a gradio repo with the target name under a specific account using that account's Write Token. `repo_name` gets the full repo name of the related repo. Finally `upload_file` uploads a file inside the repo with the name `app.py`.
## Embedding your Space demo on other websites
Throughout this guide, you've seen many embedded Gradio demos. You can also do this on your own website! The first step is to create a Hugging Face Space with the demo you want to showcase. Then, [follow the steps here to embed the Space on your website](/guides/sharing-your-app/#embedding-hosted-spaces).
## Loading demos from Spaces
You can also use and remix existing Gradio demos on Hugging Face Spaces. For example, you could take two existing Gradio demos and put them as separate tabs and create a new demo. You can run this new demo locally, or upload it to Spaces, allowing endless possibilities to remix and create new demos!
@ -138,5 +133,4 @@ That's it! Let's recap the various ways Gradio and Hugging Face work together:
4. You can embed Gradio demos that are hosted on Hugging Face Spaces onto your own website.
5. You can load demos from Hugging Face Spaces to remix and create new Gradio demos using `gr.load()`.
🤗

View File

@ -14,12 +14,10 @@ Here is a list of the topics covered in this guide.
3. Embedding Hugging Face Spaces directly into your Comet Projects
4. Logging Model Inferences from your Gradio Application to Comet
## What is Comet?
[Comet](https://www.comet.com?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) is an MLOps Platform that is designed to help Data Scientists and Teams build better models faster! Comet provides tooling to Track, Explain, Manage, and Monitor your models in a single place! It works with Jupyter Notebooks and Scripts and most importantly it's 100% free!
## Setup
First, install the dependencies needed to run these examples
@ -119,7 +117,6 @@ Add the Gradio Panel to your Experiment to interact with your application.
<source src="https://user-images.githubusercontent.com/7529846/214328194-95987f83-c180-4929-9bed-c8a0d3563ed7.mp4"></source>
</video>
## 2. Embedding Gradio Applications directly into your Comet Projects
<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=9" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
@ -140,7 +137,6 @@ Once you have added your Panel, click `Edit` to access to the Panel Options page
<img width="560" alt="Edit-Gradio-Panel-URL" src="https://user-images.githubusercontent.com/7529846/214334843-870fe726-0aa1-4b21-bbc6-0c48f56c48d8.png">
## 3. Embedding Hugging Face Spaces directly into your Comet Projects
<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=107" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
@ -161,8 +157,7 @@ Once you have added your Panel, click Edit to access to the Panel Options page a
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Logging_Model_Inferences_with_Comet_and_Gradio.ipynb)
In the previous examples, we demonstrated the various ways in which you can interact with a Gradio application through the Comet UI. Additionally, you can also log model inferences, such as SHAP plots, from your Gradio application to Comet.
In the previous examples, we demonstrated the various ways in which you can interact with a Gradio application through the Comet UI. Additionally, you can also log model inferences, such as SHAP plots, from your Gradio application to Comet.
In the following snippet, we're going to log inferences from a Text Generation model. We can persist an Experiment across multiple inference calls using Gradio's [State](https://www.gradio.app/docs/#state) object. This will allow you to log multiple inferences from a model to a single Experiment.
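A minimal sketch of that pattern is shown below; the text-generation call is a trivial stand-in for whatever model you are serving, and the Comet project name is a placeholder. The key idea is keeping the `Experiment` object in `gr.State` so every inference call logs to the same run.

```python
import comet_ml
import gradio as gr

def fake_generate(prompt):
    # Stand-in for a real text-generation model
    return prompt + " ..."

def predict(text, experiment):
    # Create the Experiment once, then keep reusing it via gr.State
    if experiment is None:
        experiment = comet_ml.Experiment(project_name="gradio-inferences")  # placeholder project
    output = fake_generate(text)
    experiment.log_text(f"input: {text} | output: {output}")
    return output, experiment

with gr.Blocks() as demo:
    experiment_state = gr.State()
    inp = gr.Textbox(label="Prompt")
    out = gr.Textbox(label="Generated text")
    inp.submit(predict, [inp, experiment_state], [out, experiment_state])

demo.launch()
```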
@ -266,10 +261,10 @@ We hope you found this guide useful and that it provides some inspiration to hel
## How to contribute Gradio demos on HF spaces on the Comet organization
* Create an account on Hugging Face [here](https://huggingface.co/join).
* Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.
* Request to join the Comet organization [here](https://huggingface.co/Comet).
- Create an account on Hugging Face [here](https://huggingface.co/join).
- Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.
- Request to join the Comet organization [here](https://huggingface.co/Comet).
## Additional Resources
* [Comet Documentation](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)
- [Comet Documentation](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)

View File

@ -8,20 +8,20 @@ Contributed by Gradio and the <a href="https://onnx.ai/">ONNX</a> team
In this Guide, we'll walk you through:
* Introduction of ONNX, ONNX model zoo, Gradio, and Hugging Face Spaces
* How to set up a Gradio demo for EfficientNet-Lite4
* How to contribute your own Gradio demos for the ONNX organization on Hugging Face
- Introduction of ONNX, ONNX model zoo, Gradio, and Hugging Face Spaces
- How to set up a Gradio demo for EfficientNet-Lite4
- How to contribute your own Gradio demos for the ONNX organization on Hugging Face
Here's an example of an ONNX model: try out the EfficientNet-Lite4 demo below.
<iframe src="https://onnx-efficientnet-lite4.hf.space" frameBorder="0" height="810" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
## What is the ONNX Model Zoo?
Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.
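For instance, exporting a trained PyTorch model to ONNX only takes a call to `torch.onnx.export` with a dummy input of the right shape; the model choice and input size below are illustrative, not part of this guide's demo.

```python
import torch
import torchvision

# Export a torchvision ResNet-18 to ONNX (example model and input shape)
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
)
```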
The [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.
## What are Hugging Face Spaces & Gradio?
### Gradio
@ -39,9 +39,11 @@ Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with
Hugging Face Model Hub also supports ONNX models and ONNX models can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads)
## How did Hugging Face help the ONNX Model Zoo?
There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try a given ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all in the cloud without downloading anything locally. Note, there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime), [MXNet](https://github.com/apache/incubator-mxnet).
## What is the role of ONNX Runtime?
ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo models on Hugging Face possible.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).
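As a quick illustration of what ONNX Runtime looks like in code, the sketch below loads an exported model and runs a dummy input through it; the file name and input shape are placeholders that depend on the model you exported.

```python
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model (placeholder path) and run one dummy inference
sess = ort.InferenceSession("resnet18.onnx")
input_name = sess.get_inputs()[0].name

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print(outputs[0].shape)
```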
@ -54,7 +56,6 @@ Here we walk through setting up a example demo for EfficientNet-Lite4 using Grad
First, we import our dependencies and download and load the efficientnet-lite4 model from the onnx model zoo. Then we load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a gradio interface for a user to interact with. See the full code below.
```python
import numpy as np
import math
@ -112,9 +113,9 @@ sess = ort.InferenceSession(model)
def inference(img):
img = cv2.imread(img)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = pre_process_edgetpu(img, (224, 224, 3))
img_batch = np.expand_dims(img, axis=0)
results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
@ -123,20 +124,19 @@ def inference(img):
for r in result:
resultdic[labels[str(r)]] = float(results[0][r])
return resultdic
title = "EfficientNet-Lite4"
description = "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite model. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU."
examples = [['catonnx.jpg']]
gr.Interface(inference, gr.Image(type="filepath"), "label", title=title, description=description, examples=examples).launch()
```
## How to contribute Gradio demos on HF spaces using ONNX models
* Add model to the [onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
* Create an account on Hugging Face [here](https://huggingface.co/join).
* See list of models left to add to ONNX organization, please refer to the table with the [Models list](https://github.com/onnx/models#models)
* Add Gradio Demo under your username, see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up Gradio Demo on Hugging Face.
* Request to join ONNX Organization [here](https://huggingface.co/onnx).
* Once approved transfer model from your username to ONNX organization
* Add a badge for model in model table, see examples in [Models list](https://github.com/onnx/models#models)
- Add model to the [onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
- Create an account on Hugging Face [here](https://huggingface.co/join).
- See list of models left to add to ONNX organization, please refer to the table with the [Models list](https://github.com/onnx/models#models)
- Add Gradio Demo under your username, see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up Gradio Demo on Hugging Face.
- Request to join ONNX Organization [here](https://huggingface.co/onnx).
- Once approved transfer model from your username to ONNX organization
- Add a badge for model in model table, see examples in [Models list](https://github.com/onnx/models#models)

View File

@ -8,9 +8,9 @@ Contributed by Gradio team
In this Guide, we'll walk you through:
* Introduction of Gradio, Hugging Face Spaces, and Wandb
* How to set up a Gradio demo using the Wandb integration for JoJoGAN
* How to contribute your own Gradio demos after tracking your experiments on wandb to the Wandb organization on Hugging Face
- Introduction of Gradio, Hugging Face Spaces, and Wandb
- How to set up a Gradio demo using the Wandb integration for JoJoGAN
- How to contribute your own Gradio demos after tracking your experiments on wandb to the Wandb organization on Hugging Face
Here's an example of a model trained and experiments tracked on wandb; try out the JoJoGAN demo below.
@ -22,7 +22,6 @@ Weights and Biases (W&B) allows data scientists and machine learning scientists
<img alt="Screen Shot 2022-08-01 at 5 54 59 PM" src="https://user-images.githubusercontent.com/81195143/182252755-4a0e1ca8-fd25-40ff-8c91-c9da38aaa9ec.png">
## What are Hugging Face Spaces & Gradio?
### Gradio
@ -35,24 +34,23 @@ Get started [here](https://gradio.app/getting_started)
Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to github repos. There are over 2000+ spaces currently on Hugging Face. Learn more about spaces [here](https://huggingface.co/spaces/launch).
## Setting up a Gradio Demo for JoJoGAN
Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.
Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.
Let's get started!
1. Create a W&B account
Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don't have one already. It shouldn't take more than a couple of minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.
Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don't have one already. It shouldn't take more than a couple of minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.
2. Open Colab Install Gradio and W&B
We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.
We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)
Install Gradio and Wandb at the top:
Install Gradio and Wandb at the top:
```sh
@ -61,91 +59,90 @@ pip install gradio wandb
3. Finetune StyleGAN and W&B experiment tracking
This next step will open a W&B dashboard to track your experiments, and a Gradio panel with a drop-down menu of pretrained models to choose from, served from a Gradio Demo hosted on Hugging Face Spaces. Here's the code you need for that:
This next step will open a W&B dashboard to track your experiments, and a Gradio panel with a drop-down menu of pretrained models to choose from, served from a Gradio Demo hosted on Hugging Face Spaces. Here's the code you need for that:
```python
alpha = 1.0
alpha = 1-alpha
```python
preserve_color = True
num_iter = 100
log_interval = 50
alpha = 1.0
alpha = 1-alpha
preserve_color = True
num_iter = 100
log_interval = 50
samples = []
column_names = ["Reference (y)", "Style Code(w)", "Real Face Image(x)"]
samples = []
column_names = ["Reference (y)", "Style Code(w)", "Real Face Image(x)"]
wandb.init(project="JoJoGAN")
config = wandb.config
config.num_iter = num_iter
config.preserve_color = preserve_color
wandb.log(
{"Style reference": [wandb.Image(transforms.ToPILImage()(target_im))]},
step=0)
wandb.init(project="JoJoGAN")
config = wandb.config
config.num_iter = num_iter
config.preserve_color = preserve_color
wandb.log(
{"Style reference": [wandb.Image(transforms.ToPILImage()(target_im))]},
step=0)
# load discriminator for perceptual loss
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)
# load discriminator for perceptual loss
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)
# reset generator
del generator
generator = deepcopy(original_generator)
# reset generator
del generator
generator = deepcopy(original_generator)
g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))
g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))
# Which layers to swap for generating a family of plausible real images -> fake image
if preserve_color:
id_swap = [9,11,15,16,17]
else:
id_swap = list(range(7, generator.n_latent))
# Which layers to swap for generating a family of plausible real images -> fake image
if preserve_color:
id_swap = [9,11,15,16,17]
else:
id_swap = list(range(7, generator.n_latent))
for idx in tqdm(range(num_iter)):
mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
in_latent = latents.clone()
in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]
for idx in tqdm(range(num_iter)):
mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
in_latent = latents.clone()
in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]
img = generator(in_latent, input_is_latent=True)
img = generator(in_latent, input_is_latent=True)
with torch.no_grad():
real_feat = discriminator(targets)
fake_feat = discriminator(img)
with torch.no_grad():
real_feat = discriminator(targets)
fake_feat = discriminator(img)
loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)
loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)
wandb.log({"loss": loss}, step=idx)
if idx % log_interval == 0:
generator.eval()
my_sample = generator(my_w, input_is_latent=True)
generator.train()
my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))
wandb.log(
{"Current stylization": [wandb.Image(my_sample)]},
step=idx)
table_data = [
wandb.Image(transforms.ToPILImage()(target_im)),
wandb.Image(img),
wandb.Image(my_sample),
]
samples.append(table_data)
g_optim.zero_grad()
loss.backward()
g_optim.step()
wandb.log({"loss": loss}, step=idx)
if idx % log_interval == 0:
generator.eval()
my_sample = generator(my_w, input_is_latent=True)
generator.train()
my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))
wandb.log(
{"Current stylization": [wandb.Image(my_sample)]},
step=idx)
table_data = [
wandb.Image(transforms.ToPILImage()(target_im)),
wandb.Image(img),
wandb.Image(my_sample),
]
samples.append(table_data)
out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({"Current Samples": out_table})
```
g_optim.zero_grad()
loss.backward()
g_optim.step()
alpha = 1.0
out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({"Current Samples": out_table})
```
alpha = 1.0
alpha = 1-alpha
preserve_color = True
num_iter = 100
log_interval = 50
preserve_color = True
num_iter = 100
log_interval = 50
samples = []
column_names = ["Referece (y)", "Style Code(w)", "Real Face Image(x)"]
@ -159,26 +156,29 @@ wandb.log(
step=0)
# load discriminator for perceptual loss
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)
# reset generator
del generator
generator = deepcopy(original_generator)
g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))
# Which layers to swap for generating a family of plausible real images -> fake image
if preserve_color:
id_swap = [9,11,15,16,17]
id_swap = [9,11,15,16,17]
else:
id_swap = list(range(7, generator.n_latent))
id_swap = list(range(7, generator.n_latent))
for idx in tqdm(range(num_iter)):
mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
in_latent = latents.clone()
in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]
mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
in_latent = latents.clone()
in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]
img = generator(in_latent, input_is_latent=True)
@ -187,7 +187,7 @@ for idx in tqdm(range(num_iter)):
fake_feat = discriminator(img)
loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)
wandb.log({"loss": loss}, step=idx)
if idx % log_interval == 0:
@ -211,7 +211,8 @@ for idx in tqdm(range(num_iter)):
out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({"Current Samples": out_table})
```
````
4. Save, Download, and Load Model
@ -248,7 +249,7 @@ from google.colab import files
torch.save({"g": generator.state_dict()}, "your-model-name.pt")
files.download('your-model-name.pt')
files.download('your-model-name.pt')
latent_dim = 512
device="cuda"
@ -277,19 +278,19 @@ transform = transforms.Compose(
)
def inference(img):
img.save('out.jpg')
def inference(img):
img.save('out.jpg')
aligned_face = align_face('out.jpg')
my_w = e4e_projection(aligned_face, "out.pt", device).unsqueeze(0)
my_w = e4e_projection(aligned_face, "out.pt", device).unsqueeze(0)
with torch.no_grad():
my_sample = generator(my_w, input_is_latent=True)
npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()
imageio.imwrite('filename.jpeg', npimage)
return 'filename.jpeg'
```
````
5. Build a Gradio Demo
@ -301,8 +302,8 @@ title = "JoJoGAN"
description = "Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below."
demo = gr.Interface(
inference,
gr.Image(type="pil"),
inference,
gr.Image(type="pil"),
gr.Image(type="file"),
title=title,
description=description
@ -313,7 +314,7 @@ demo.launch(share=True)
6. Integrate Gradio into your W&B Dashboard
The last step—integrating your Gradio demo with your W&B dashboard—is just one extra line:
The last step—integrating your Gradio demo with your W&B dashboard—is just one extra line:
```python
@ -325,17 +326,16 @@ demo.integrate(wandb=wandb)
Outside of W&B, web components using the gradio-app tags allow anyone to embed Gradio demos hosted on HF Spaces directly into their blogs, websites, documentation, etc.:
```html
<gradio-app space="akhaliq/JoJoGAN"> </gradio-app>
```
7. (Optional) Embed W&B plots in your Gradio App
It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and
embed them within your Gradio app within a `gr.HTML` block.
It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and
embed them within your Gradio app within a `gr.HTML` block.
The Report will need to be public and you will need to wrap the URL within an iFrame like this:
The Report will need to be public and you will need to wrap the URL within an iFrame like this:
```python
import gradio as gr
@ -351,20 +351,19 @@ with gr.Blocks() as demo:
demo.launch(share=True)
```
## Conclusion
We hope you enjoyed this brief demo of embedding a Gradio demo into a W&B report! Thanks for making it to the end. To recap:
* Only one single reference image is needed for fine-tuning JoJoGAN which usually takes about 1 minute on a GPU in colab. After training, style can be applied to any input image. Read more in the paper.
- Only one single reference image is needed for fine-tuning JoJoGAN which usually takes about 1 minute on a GPU in colab. After training, style can be applied to any input image. Read more in the paper.
* W&B tracks experiments with just a few lines of code added to a colab and you can visualize, sort, and understand your experiments in a single, centralized dashboard.
- W&B tracks experiments with just a few lines of code added to a colab and you can visualize, sort, and understand your experiments in a single, centralized dashboard.
* Gradio, meanwhile, demos the model in a user friendly interface to share anywhere on the web.
- Gradio, meanwhile, demos the model in a user friendly interface to share anywhere on the web.
## How to contribute Gradio demos on HF spaces on the Wandb organization
* Create an account on Hugging Face [here](https://huggingface.co/join).
* Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.
* Request to join wandb organization [here](https://huggingface.co/wandb).
* Once approved transfer model from your username to Wandb organization
- Create an account on Hugging Face [here](https://huggingface.co/join).
- Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.
- Request to join wandb organization [here](https://huggingface.co/wandb).
- Once approved transfer model from your username to Wandb organization

View File

@ -5,13 +5,12 @@ Tags: VISION, RESNET, PYTORCH
## Introduction
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.
Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):
<iframe src="https://abidlabs-pytorch-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Let's get started!
### Prerequisites
@ -20,7 +19,7 @@ Make sure you have the `gradio` Python package already [installed](/getting_star
## Step 1 — Setting up the Image Classification Model
First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.
First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.
```python
import torch
@ -32,7 +31,7 @@ Because we will be using the model for inference, we have called the `.eval()` m
## Step 2 — Defining a `predict` function
Next, we will need to define a function that takes in the *user input*, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
In the case of our pretrained model, it will look like this:
@ -49,23 +48,23 @@ def predict(inp):
inp = transforms.ToTensor()(inp).unsqueeze(0)
with torch.no_grad():
prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
return confidences
```
Let's break this down. The function takes one parameter:
* `inp`: the input image as a `PIL` image
- `inp`: the input image as a `PIL` image
Then, the function converts the image to a PIL Image and then eventually a PyTorch `tensor`, passes it through the model, and returns:
* `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
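Putting those pieces together, the model setup plus `predict` function looks roughly like the sketch below; the PyTorch Hub tag is illustrative, and the label file is the one linked above.

```python
import requests
import torch
from torchvision import transforms

# Load a pretrained ResNet-18 from PyTorch Hub and the ImageNet class labels
model = torch.hub.load("pytorch/vision:v0.6.0", "resnet18", pretrained=True).eval()
labels = requests.get("https://git.io/JJkYN").text.split("\n")

def predict(inp):
    # inp arrives as a PIL image; add a batch dimension and run the model
    inp = transforms.ToTensor()(inp).unsqueeze(0)
    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
    confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
    return confidences
```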
## Step 3 — Creating a Gradio Interface
Now that we have our predictive function set up, we can create a Gradio Interface around it.
Now that we have our predictive function set up, we can create a Gradio Interface around it.
In this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type="pil")` which creates the component and handles the preprocessing to convert that to a `PIL` image.
In this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type="pil")` which creates the component and handles the preprocessing to convert that to a `PIL` image.
The output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 classes by constructing it as `Label(num_top_classes=3)`.
@ -74,7 +73,7 @@ Finally, we'll add one more parameter, the `examples`, which allows us to prepop
```python
import gradio as gr
gr.Interface(fn=predict,
gr.Interface(fn=predict,
inputs=gr.Image(type="pil"),
outputs=gr.Label(num_top_classes=3),
examples=["lion.jpg", "cheetah.jpg"]).launch()
@ -84,7 +83,6 @@ This produces the following interface, which you can try right here in your brow
<iframe src="https://abidlabs-pytorch-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
----------
---
And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!

View File

@ -5,13 +5,12 @@ Tags: VISION, MOBILENET, TENSORFLOW
## Introduction
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from traffic control systems to satellite imaging.
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from traffic control systems to satellite imaging.
Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):
<iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Let's get started!
### Prerequisites
@ -20,7 +19,7 @@ Make sure you have the `gradio` Python package already [installed](/getting_star
## Step 1 — Setting up the Image Classification Model
First, we will need an image classification model. For this tutorial, we will use a pretrained Mobile Net model, as it is easily downloadable from [Keras](https://keras.io/api/applications/mobilenet/). You can use a different pretrained model or train your own.
First, we will need an image classification model. For this tutorial, we will use a pretrained Mobile Net model, as it is easily downloadable from [Keras](https://keras.io/api/applications/mobilenet/). You can use a different pretrained model or train your own.
```python
import tensorflow as tf
@ -28,11 +27,11 @@ import tensorflow as tf
inception_net = tf.keras.applications.MobileNetV2()
```
This line automatically downloads the MobileNet model and weights using the Keras library.
This line automatically downloads the MobileNet model and weights using the Keras library.
## Step 2 — Defining a `predict` function
Next, we will need to define a function that takes in the *user input*, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
In the case of our pretrained model, it will look like this:
@ -53,15 +52,15 @@ def classify_image(inp):
Let's break this down. The function takes one parameter:
* `inp`: the input image as a `numpy` array
- `inp`: the input image as a `numpy` array
Then, the function adds a batch dimension, passes it through the model, and returns:
* `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
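For completeness, here is roughly what the full `classify_image` function can look like, assuming the MobileNetV2 model loaded in Step 1 and the same label file linked above:

```python
import requests
import tensorflow as tf

inception_net = tf.keras.applications.MobileNetV2()
labels = requests.get("https://git.io/JJkYN").text.split("\n")

def classify_image(inp):
    # Add a batch dimension and apply MobileNetV2's expected preprocessing
    inp = inp.reshape((-1, 224, 224, 3))
    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
    prediction = inception_net.predict(inp).flatten()
    confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
    return confidences
```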
## Step 3 — Creating a Gradio Interface
Now that we have our predictive function set up, we can create a Gradio Interface around it.
Now that we have our predictive function set up, we can create a Gradio Interface around it.
In this case, the input component is a drag-and-drop image component. To create this input, we can use the `gradio.inputs.Image` class, which creates the component and handles the preprocessing to convert that to a numpy array. We will instantiate the class with a parameter that automatically preprocesses the input image to be 224 pixels by 224 pixels, which is the size that MobileNet expects.
@ -72,7 +71,7 @@ Finally, we'll add one more parameter, the `examples`, which allows us to prepop
```python
import gradio as gr
gr.Interface(fn=classify_image,
gr.Interface(fn=classify_image,
inputs=gr.Image(shape=(224, 224)),
outputs=gr.Label(num_top_classes=3),
examples=["banana.jpg", "car.jpg"]).launch()
@ -82,7 +81,6 @@ This produces the following interface, which you can try right here in your brow
<iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
----------
---
And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!

View File

@ -5,13 +5,12 @@ Tags: VISION, TRANSFORMERS, HUB
## Introduction
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.
State-of-the-art image classifiers are based on the *transformers* architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like this (try one of the examples!):
State-of-the-art image classifiers are based on the _transformers_ architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like this (try one of the examples!):
<iframe src="https://abidlabs-vision-transformer.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Let's get started!
### Prerequisites
@ -20,20 +19,20 @@ Make sure you have the `gradio` Python package already [installed](/getting_star
## Step 1 — Choosing a Vision Image Classification Model
First, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.
First, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.
Expand the Tasks category on the left sidebar and select "Image Classification" as our task of interest. You will then see all of the models on the Hub that are designed to classify images.
At the time of writing, the most popular one is `google/vit-base-patch16-224`, which has been trained on ImageNet images at a resolution of 224x224 pixels. We will use this model for our demo.
At the time of writing, the most popular one is `google/vit-base-patch16-224`, which has been trained on ImageNet images at a resolution of 224x224 pixels. We will use this model for our demo.
## Step 2 — Loading the Vision Transformer Model with Gradio
When using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.
When using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.
All of these are automatically inferred from the model tags.
Besides the import statement, it only takes a single line of Python to load and launch the demo.
Besides the import statement, it only takes a single line of Python to load and launch the demo.
We use the `gr.Interface.load()` method and pass in the path to the model, including the `huggingface/` prefix to designate that it is from the Hugging Face Hub.
We use the `gr.Interface.load()` method and pass in the path to the model, including the `huggingface/` prefix to designate that it is from the Hugging Face Hub.
```python
import gradio as gr
@ -43,13 +42,12 @@ gr.Interface.load(
examples=["alligator.jpg", "laptop.jpg"]).launch()
```
Notice that we have added one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples.
Notice that we have added one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples.
This produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!
This produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!
<iframe src="https://abidlabs-vision-transformer.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
----------
---
And you're done! In one line of code, you have built a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!

View File

@ -1,7 +1,7 @@
# Connecting to a Database
Related spaces: https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard
Tags: TABULAR, PLOTS
Tags: TABULAR, PLOTS
## Introduction
@ -10,8 +10,8 @@ connecting to a PostgreSQL database hosted on AWS but gradio is completely agnos
database you are connecting to and where it's hosted. So as long as you can write python code to connect
to your data, you can display it in a web UI with gradio 💪
## Overview
## Overview
We will be analyzing bike share data from Chicago. The data is hosted on kaggle [here](https://www.kaggle.com/datasets/evangower/cyclistic-bike-share?select=202203-divvy-tripdata.csv).
Our goal is to create a dashboard that will enable our business stakeholders to answer the following questions:
@ -22,11 +22,10 @@ At the end of this guide, we will have a functioning application that looks like
<gradio-app space="gradio/chicago-bikeshare-dashboard"> </gradio-app>
## Step 1 - Creating your database
We will be storing our data on a PostgreSQL hosted on Amazon's RDS service. Create an AWS account if you don't already have one
and create a PostgreSQL database on the free tier.
and create a PostgreSQL database on the free tier.
**Important**: If you plan to host this demo on HuggingFace Spaces, make sure the database is on port **8080**. Spaces will
block all outgoing connections unless they are made to port 80, 443, or 8080 as noted [here](https://huggingface.co/docs/hub/spaces-overview#networking).
@ -35,15 +34,15 @@ RDS will not let you create a postgreSQL instance on ports 80 or 443.
Once your database is created, download the dataset from Kaggle and upload it to your database.
For the sake of this demo, we will only upload March 2022 data.
## Step 2.a - Write your ETL code
We will be querying our database for the total count of rides split by the type of bicycle (electric, standard, or docked).
We will also query for the total count of rides that depart from each station and take the top 5.
We will also query for the total count of rides that depart from each station and take the top 5.
We will then take the results of our queries and visualize them with matplotlib.
We will use the pandas [read_sql](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html)
method to connect to the database. This requires the `psycopg2` library to be installed.
method to connect to the database. This requires the `psycopg2` library to be installed.
In order to connect to our database, we will specify the database username, password, and host as environment variables.
This will make our app more secure by avoiding storing sensitive information as plain text in our application files.
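A sketch of one such query function is shown below; the database name, port, and the `rides` table and column names are assumptions based on the bike share schema, and the connection-string form of `pd.read_sql` requires SQLAlchemy alongside `psycopg2`.

```python
import os

import matplotlib.pyplot as plt
import pandas as pd

# Read credentials from the environment instead of hard-coding them
DB_USER = os.getenv("DB_USER")
DB_PASSWORD = os.getenv("DB_PASSWORD")
DB_HOST = os.getenv("DB_HOST")
PORT = 8080
DB_NAME = "bikeshare"  # assumed database name

connection_string = f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{PORT}/{DB_NAME}"

def get_count_ride_type():
    # Count rides grouped by bicycle type (table and column names are assumptions)
    df = pd.read_sql(
        """
        SELECT COUNT(ride_id) AS n, rideable_type
        FROM rides
        GROUP BY rideable_type
        ORDER BY n DESC
        """,
        con=connection_string,
    )
    # Turn the query result into a matplotlib bar chart for gr.Plot
    fig, ax = plt.subplots()
    ax.bar(x=df["rideable_type"], height=df["n"])
    ax.set_title("Number of rides by bicycle type")
    return fig
```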
@ -80,7 +79,7 @@ def get_count_ride_type():
def get_most_popular_stations():
df = pd.read_sql(
"""
SELECT COUNT(ride_id) as n, MAX(start_station_name) as station
@ -111,8 +110,8 @@ If you were to run our script locally, you could pass in your credentials as env
DB_USER='username' DB_PASSWORD='password' DB_HOST='host' python app.py
```
## Step 2.c - Write your gradio app
We will display our matplotlib plots in two separate `gr.Plot` components displayed side by side using `gr.Row()`.
Because we have wrapped our function to fetch the data in a `demo.load()` event trigger,
our demo will fetch the latest data **dynamically** from the database each time the web page loads. 🪄
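A sketch of that layout is below, reusing the `get_count_ride_type` and `get_most_popular_stations` functions from the previous step; the important part is wiring each plot to a `demo.load` event so the queries re-run on every page load.

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        bike_type = gr.Plot()
        station = gr.Plot()

    # Re-run the queries on every page load so the dashboard stays fresh
    demo.load(get_count_ride_type, inputs=None, outputs=bike_type)
    demo.load(get_most_popular_stations, inputs=None, outputs=station)

demo.launch()
```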
@ -132,6 +131,7 @@ demo.launch()
```
## Step 3 - Deployment
If you run the code above, your app will start running locally.
You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.
@ -144,9 +144,10 @@ You will have to add the `DB_USER`, `DB_PASSWORD`, and `DB_HOST` variables as "R
![secrets](https://github.com/gradio-app/gradio/blob/main/guides/assets/secrets.png?raw=true)
## Conclusion
Congratulations! You know how to connect your gradio app to a database hosted on the cloud! ☁️
Our dashboard is now running on [Spaces](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard).
The complete code is [here](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard/blob/main/app.py)
As you can see, gradio gives you the power to connect to your data wherever it lives and display it however you want! 🔥
As you can see, gradio gives you the power to connect to your data wherever it lives and display it however you want! 🔥

View File

@ -1,7 +1,6 @@
# Creating a Real-Time Dashboard from BigQuery Data
Tags: TABULAR, DASHBOARD, PLOTS
Tags: TABULAR, DASHBOARD, PLOTS
[Google BigQuery](https://cloud.google.com/bigquery) is a cloud-based service for processing very large data sets. It is a serverless and highly scalable data warehousing solution that enables users to analyze data [using SQL-like queries](https://www.oreilly.com/library/view/google-bigquery-the/9781492044451/ch01.html).
@ -13,11 +12,11 @@ We'll cover the following steps in this Guide:
1. Setting up your BigQuery credentials
2. Using the BigQuery client
3. Building the real-time dashboard (in just *7 lines of Python*)
3. Building the real-time dashboard (in just _7 lines of Python_)
We'll be working with the [New York Times' COVID dataset](https://www.nytimes.com/interactive/2021/us/covid-cases.html) that is available as a public dataset on BigQuery. The dataset, named `covid19_nyt.us_counties`, contains the latest information about the number of confirmed cases and deaths from COVID across US counties.
We'll be working with the [New York Times' COVID dataset](https://www.nytimes.com/interactive/2021/us/covid-cases.html) that is available as a public dataset on BigQuery. The dataset, named `covid19_nyt.us_counties`, contains the latest information about the number of confirmed cases and deaths from COVID across US counties.
**Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make sure you are familiar with the Blocks class.
**Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make sure you are familiar with the Blocks class.
## Setting up your BigQuery Credentials
@ -27,7 +26,7 @@ To use Gradio with BigQuery, you will need to obtain your BigQuery credentials a
2. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.
3. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "BigQuery API", click on it, and click the "Enable" button. If you see the "Manage" button, then BigQuery is already enabled, and you're all set.
3. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "BigQuery API", click on it, and click the "Enable" button. If you see the "Manage" button, then BigQuery is already enabled, and you're all set.
4. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.
@ -37,16 +36,16 @@ To use Gradio with BigQuery, you will need to obtain your BigQuery credentials a
```json
{
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
@ -66,7 +65,7 @@ from google.cloud import bigquery
client = bigquery.Client.from_service_account_json("path/to/key.json")
```
With your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.
With your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.
Here is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:
@ -74,15 +73,15 @@ Here is an example of a function which queries the `covid19_nyt.us_counties` dat
import numpy as np
QUERY = (
'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
'ORDER BY date DESC,confirmed_cases DESC '
'LIMIT 20')
def run_query():
query_job = client.query(QUERY)
query_result = query_job.result()
query_job = client.query(QUERY)
query_result = query_job.result()
df = query_result.to_dataframe()
# Select a subset of columns
# Select a subset of columns
df = df[["confirmed_cases", "deaths", "county", "state_name"]]
# Convert numeric columns to standard numpy types
df = df.astype({"deaths": np.int64, "confirmed_cases": np.int64})
@ -93,7 +92,7 @@ def run_query():
Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly.
Here is an example of how to use the `gr.DataFrame` component to display the results. By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, you also pass in the keyword `every` to tell the dashboard to refresh every hour (60*60 seconds).
Here is an example of how to use the `gr.DataFrame` component to display the results. By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, you also pass in the keyword `every` to tell the dashboard to refresh every hour (60\*60 seconds).
```py
import gradio as gr
@ -105,7 +104,7 @@ demo.queue().launch() # Run the demo using queuing
```
Perhaps you'd like to add a visualization to our dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables such as case count and case deaths in the dataset and can be useful for exploring the data and gaining insights. Again, we can do this in real-time
by passing in the `every` parameter.
by passing in the `every` parameter.
Here is a complete example showing how to use the `gr.ScatterPlot` component to visualize the data in addition to displaying it with the `gr.DataFrame` component.
@ -116,8 +115,8 @@ with gr.Blocks() as demo:
gr.Markdown("# 💉 Covid Dashboard (Updated Hourly)")
with gr.Row():
gr.DataFrame(run_query, every=60*60)
gr.ScatterPlot(run_query, every=60*60, x="confirmed_cases",
gr.ScatterPlot(run_query, every=60*60, x="confirmed_cases",
y="deaths", tooltip="county", width=500, height=500)
demo.queue().launch() # Run the demo with queuing enabled
```
```

View File

@ -1,6 +1,6 @@
# Create a Dashboard from Supabase Data
Tags: TABULAR, DASHBOARD, PLOTS
Tags: TABULAR, DASHBOARD, PLOTS
[Supabase](https://supabase.com/) is a cloud-based open-source backend that provides a PostgreSQL database, authentication, and other useful features for building web and mobile applications. In this tutorial, you will learn how to read data from Supabase and plot it in **real-time** on a Gradio Dashboard.
@ -8,21 +8,21 @@ Tags: TABULAR, DASHBOARD, PLOTS
In this end-to-end guide, you will learn how to:
* Create tables in Supabase
* Write data to Supabase using the Supabase Python Client
* Visualize the data in a real-time dashboard using Gradio
- Create tables in Supabase
- Write data to Supabase using the Supabase Python Client
- Visualize the data in a real-time dashboard using Gradio
If you already have data on Supabase that you'd like to visualize in a dashboard, you can skip the first two sections and go directly to [visualizing the data](#visualize-the-data-in-a-real-time-gradio-dashboard)!
## Create a table in Supabase
First of all, we need some data to visualize. Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.
First of all, we need some data to visualize. Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.
1\. Start by creating a new project in Supabase. Once you're logged in, click the "New Project" button
2\. Give your project a name and database password. You can also choose a pricing plan (for our purposes, the Free Tier is sufficient!)
3\. You'll be presented with your API keys while the database spins up (can take up to 2 minutes).
3\. You'll be presented with your API keys while the database spins up (can take up to 2 minutes).
4\. Click on "Table Editor" (the table icon) in the left pane to create a new table. We'll create a single table called `Product`, with the following schema:
@ -35,15 +35,13 @@ First of all, we need some data to visualize. Following this [excellent guide](h
</table>
</center>
5\. Click Save to save the table schema.
5\. Click Save to save the table schema.
Our table is now ready!
## Write data to Supabase
The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.
The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.
6\. Install `supabase` by running the following command in your terminal:
@ -53,7 +51,7 @@ pip install supabase
7\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`)
8\. Now, run the following Python script to write some fake data to the table (note you have to put the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):
8\. Now, run the following Python script to write some fake data to the table (note you have to put the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):
```python
import supabase
@ -66,9 +64,9 @@ import random
main_list = []
for i in range(10):
value = {'product_id': i,
value = {'product_id': i,
'product_name': f"Item {i}",
'inventory_count': random.randint(1, 100),
'inventory_count': random.randint(1, 100),
'price': random.random()*100
}
main_list.append(value)
@ -81,13 +79,12 @@ Return to your Supabase dashboard and refresh the page, you should now see 10 ro
## Visualize the Data in a Real-Time Gradio Dashboard
Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a realtime dashboard using `gradio`.
Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a realtime dashboard using `gradio`.
Note: We repeat certain steps in this section (like creating the Supabase client) in case you did not go through the previous sections. As described in Step 7, you will need the project URL and API Key for your database.
9\. Write a function that loads the data from the `Product` table and returns it as a pandas Dataframe:
```python
import supabase
import pandas as pd
@ -117,9 +114,8 @@ Notice that by passing in a function to `gr.BarPlot()`, we have the BarPlot quer
<gradio-app space="abidlabs/supabase"></gradio-app>
## Conclusion
That's it! In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.
That's it! In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.
Try adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!
Try adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!

View File

@ -1,14 +1,14 @@
# Creating a Real-Time Dashboard from Google Sheets
Tags: TABULAR, DASHBOARD, PLOTS
Tags: TABULAR, DASHBOARD, PLOTS
[Google Sheets](https://www.google.com/sheets/about/) are an easy way to store tabular data in the form of spreadsheets. With Gradio and pandas, it's easy to read data from public or private Google Sheets and then display the data or plot it. In this blog post, we'll build a small *real-time* dashboard, one that updates when the data in the Google Sheets updates.
[Google Sheets](https://www.google.com/sheets/about/) are an easy way to store tabular data in the form of spreadsheets. With Gradio and pandas, it's easy to read data from public or private Google Sheets and then display the data or plot it. In this blog post, we'll build a small _real-time_ dashboard, one that updates when the data in the Google Sheets updates.
Building the dashboard itself will just be 9 lines of Python code using Gradio, and our final dashboard will look like this:
<gradio-app space="gradio/line-plot"></gradio-app>
**Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make you are familiar with the Blocks class.
**Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make you are familiar with the Blocks class.
The process is a little different depending on if you are working with a publicly accessible or a private Google Sheet. We'll cover both, so let's get started!
@ -49,7 +49,7 @@ with gr.Blocks() as demo:
demo.queue().launch() # Run the demo with queuing enabled
```
And that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
## Private Google Sheets
@ -64,7 +64,7 @@ To authenticate yourself, obtain credentials from Google Cloud. Here's [how to s
2\. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.
3\. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "Google Sheets API", click on it, and click the "Enable" button. If you see the "Manage" button, then Google Sheets is already enabled, and you're all set.
3\. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "Google Sheets API", click on it, and click the "Enable" button. If you see the "Manage" button, then Google Sheets is already enabled, and you're all set.
4\. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.
@ -74,16 +74,16 @@ To authenticate yourself, obtain credentials from Google Cloud. Here's [how to s
```json
{
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
@ -97,7 +97,6 @@ Once you have the credentials `.json` file, you can use the following steps to q
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```
2\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python by running in the terminal: `pip install gspread`
3\. Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):
@ -111,7 +110,7 @@ URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375s
gc = gspread.service_account("path/to/key.json")
sh = gc.open_by_url(URL)
worksheet = sh.sheet1
worksheet = sh.sheet1
def get_data():
values = worksheet.get_all_values()
@ -135,13 +134,9 @@ with gr.Blocks() as demo:
demo.queue().launch() # Run the demo with queuing enabled
```
You now have a Dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
You now have a Dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
## Conclusion
And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard.

View File

@ -23,7 +23,7 @@ dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()
def filter_map(min_price, max_price, boroughs):
new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
(df['price'] > min_price) & (df['price'] < max_price)]
names = new_df["name"].tolist()
prices = new_df["price"].tolist()
@ -66,7 +66,7 @@ fig.update_layout(
)
```
Above, we create a scatter plot on mapbox by passing it our list of latitudes and longitudes to plot markers. We also pass in our custom data of names and prices for additional info to appear on every marker we hover over. Next we use `update_layout` to specify other map settings such as zoom, and centering.
Above, we create a scatter plot on mapbox by passing it our list of latitudes and longitudes to plot markers. We also pass in our custom data of names and prices for additional info to appear on every marker we hover over. Next we use `update_layout` to specify other map settings such as zoom, and centering.
More info [here](https://plotly.com/python/scattermapbox/) on scatter plots using Mapbox and Plotly.

View File

@ -2,7 +2,6 @@
Related spaces: https://huggingface.co/spaces/scikit-learn/gradio-skops-integration, https://huggingface.co/spaces/scikit-learn/tabular-playground, https://huggingface.co/spaces/merve/gradio-analysis-dashboard
## Introduction
Tabular data science is the most widely used domain of machine learning, with problems ranging from customer segmentation to churn prediction. Throughout various stages of the tabular data science workflow, communicating your work to stakeholders or clients can be cumbersome; which prevents data scientists from focusing on what matters, such as data analysis and model building. Data scientists can end up spending hours building a dashboard that takes in dataframe and returning plots, or returning a prediction or plot of clusters in a dataset. In this guide, we'll go through how to use `gradio` to improve your data science workflows. We will also talk about how to use `gradio` and [skops](https://skops.readthedocs.io/en/stable/) to build interfaces with only one line of code!
@ -13,7 +12,7 @@ Make sure you have the `gradio` Python package already [installed](/getting_star
## Let's Create a Simple Interface!
We will take a look at how we can create a simple UI that predicts failures based on product information.
We will take a look at how we can create a simple UI that predicts failures based on product information.
```python
import gradio as gr
@ -34,16 +33,16 @@ df = df["train"].to_pandas()
def infer(input_dataframe):
return pd.DataFrame(model.predict(input_dataframe))
gr.Interface(fn = infer, inputs = inputs, outputs = outputs, examples = [[df.head(2)]]).launch()
```
Let's break down above code.
* `fn`: the inference function that takes input dataframe and returns predictions.
* `inputs`: the component we take our input with. We define our input as dataframe with 2 rows and 4 columns, which initially will look like an empty dataframe with the aforementioned shape. When the `row_count` is set to `dynamic`, you don't have to rely on the dataset you're inputting to pre-defined component.
* `outputs`: The dataframe component that stores outputs. This UI can take single or multiple samples to infer, and returns 0 or 1 for each sample in one column, so we give `row_count` as 2 and `col_count` as 1 above. `headers` is a list made of header names for dataframe.
* `examples`: You can either pass the input by dragging and dropping a CSV file, or a pandas DataFrame through examples, which headers will be automatically taken by the interface.
- `fn`: the inference function that takes input dataframe and returns predictions.
- `inputs`: the component we take our input with. We define our input as dataframe with 2 rows and 4 columns, which initially will look like an empty dataframe with the aforementioned shape. When the `row_count` is set to `dynamic`, you don't have to rely on the dataset you're inputting to pre-defined component.
- `outputs`: The dataframe component that stores outputs. This UI can take single or multiple samples to infer, and returns 0 or 1 for each sample in one column, so we give `row_count` as 2 and `col_count` as 1 above. `headers` is a list made of header names for dataframe.
- `examples`: You can either pass the input by dragging and dropping a CSV file, or a pandas DataFrame through examples, which headers will be automatically taken by the interface.
We will now create an example for a minimal data visualization dashboard. You can find a more comprehensive version in the related Spaces.
@ -69,7 +68,7 @@ def plot(df):
plt.savefig("corr.png")
plots = ["corr.png","scatter.png", "bar.png"]
return plots
inputs = [gr.Dataframe(label="Supersoaker Production Data")]
outputs = [gr.Gallery(label="Profiling Dashboard").style(grid=(1,3))]
@ -78,12 +77,12 @@ gr.Interface(plot, inputs=inputs, outputs=outputs, examples=[df.head(100)], titl
<gradio-app space="gradio/gradio-analysis-dashboard-minimal"></gradio-app>
We will use the same dataset we used to train our model, but we will make a dashboard to visualize it this time.
We will use the same dataset we used to train our model, but we will make a dashboard to visualize it this time.
* `fn`: The function that will create plots based on data.
* `inputs`: We use the same `Dataframe` component we used above.
* `outputs`: The `Gallery` component is used to keep our visualizations.
* `examples`: We will have the dataset itself as the example.
- `fn`: The function that will create plots based on data.
- `inputs`: We use the same `Dataframe` component we used above.
- `outputs`: The `Gallery` component is used to keep our visualizations.
- `examples`: We will have the dataset itself as the example.
## Easily load tabular data interfaces with one line of code using skops
@ -101,4 +100,4 @@ gr.Interface.load("huggingface/scikit-learn/tabular-playground", title=title, de
<gradio-app space="gradio/gradio-skops-integration"></gradio-app>
`sklearn` models pushed to Hugging Face Hub using `skops` include a `config.json` file that contains an example input with column names, the task being solved (that can either be `tabular-classification` or `tabular-regression`). From the task type, `gradio` constructs the `Interface` and consumes column names and the example input to build it. You can [refer to skops documentation on hosting models on Hub](https://skops.readthedocs.io/en/latest/auto_examples/plot_hf_hub.html#sphx-glr-auto-examples-plot-hf-hub-py) to learn how to push your models to Hub using `skops`.
`sklearn` models pushed to Hugging Face Hub using `skops` include a `config.json` file that contains an example input with column names, the task being solved (that can either be `tabular-classification` or `tabular-regression`). From the task type, `gradio` constructs the `Interface` and consumes column names and the example input to build it. You can [refer to skops documentation on hosting models on Hub](https://skops.readthedocs.io/en/latest/auto_examples/plot_hf_hub.html#sphx-glr-auto-examples-plot-hf-hub-py) to learn how to push your models to Hub using `skops`.

View File

@ -1,8 +1,7 @@
# Getting Started with the Gradio Python client
# Getting Started with the Gradio Python client
Tags: CLIENT, API, SPACES
The Gradio Python client makes it very easy to use any Gradio app as an API. As an example, consider this [Hugging Face Space that transcribes audio files](https://huggingface.co/spaces/abidlabs/whisper) that are recorded from the microphone.
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/whisper-screenshot.jpg)
@ -14,19 +13,19 @@ Here's the entire code to do it:
```python
from gradio_client import Client
client = Client("abidlabs/whisper")
client.predict("audio_sample.wav")
client = Client("abidlabs/whisper")
client.predict("audio_sample.wav")
>> "This is a test of the whisper speech recognition model."
```
The Gradio client works with any hosted Gradio app, whether it be an image generator, a text summarizer, a stateful chatbot, a tax calculator, or anything else! The Gradio Client is mostly used with apps hosted on [Hugging Face Spaces](https://hf.space), but your app can be hosted anywhere, such as your own server.
**Prerequisites**: To use the Gradio client, you do *not* need to know the `gradio` library in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.
**Prerequisites**: To use the Gradio client, you do _not_ need to know the `gradio` library in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.
## Installation
If you already have a recent version of `gradio`, then the `gradio_client` is included as a dependency.
If you already have a recent version of `gradio`, then the `gradio_client` is included as a dependency.
Otherwise, the lightweight `gradio_client` package can be installed from pip (or pip3) and is tested to work with Python versions 3.9 or higher:
@ -34,7 +33,6 @@ Otherwise, the lightweight `gradio_client` package can be installed from pip (or
$ pip install gradio_client
```
## Connecting to a running Gradio App
Start by connecting instantiating a `Client` object and connecting it to a Gradio app that is running on Hugging Face Spaces or generally anywhere on the web.
@ -52,14 +50,13 @@ You can also connect to private Spaces by passing in your HF token with the `hf_
```python
from gradio_client import Client
client = Client("abidlabs/my-private-space", hf_token="...")
client = Client("abidlabs/my-private-space", hf_token="...")
```
## Duplicating a Space for private use
While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,
and then use it to make as many requests as you'd like!
and then use it to make as many requests as you'd like!
The `gradio_client` includes a class method: `Client.duplicate()` to make this process simple (you'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens) or be logged in using the Hugging Face CLI):
@ -69,17 +66,16 @@ from gradio_client import Client
HF_TOKEN = os.environ.get("HF_TOKEN")
client = Client.duplicate("abidlabs/whisper", hf_token=HF_TOKEN)
client.predict("audio_sample.wav")
client = Client.duplicate("abidlabs/whisper", hf_token=HF_TOKEN)
client.predict("audio_sample.wav")
>> "This is a test of the whisper speech recognition model."
```
If you have previously duplicated a Space, re-running `duplicate()` will *not* create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.
If you have previously duplicated a Space, re-running `duplicate()` will _not_ create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.
**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 1 hour of inactivity. You can also set the hardware using the `hardware` parameter of `duplicate()`.
## Connecting a general Gradio app
If your app is running somewhere else, just provide the full URL instead, including the "http://" or "https://". Here's an example of making predictions to a Gradio app that is running on a share URL:
@ -106,11 +102,10 @@ Named API endpoints: 1
- [Textbox] value_0: str (value)
```
This shows us that we have 1 API endpoint in this space, and shows us how to use the API endpoint to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `input_audio` of type `str`, which is a `filepath or URL`.
This shows us that we have 1 API endpoint in this space, and shows us how to use the API endpoint to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `input_audio` of type `str`, which is a `filepath or URL`.
We should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available. If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=True)`.
## Making a prediction
The simplest way to make a prediction is simply to call the `.predict()` function with the appropriate arguments:
@ -126,7 +121,6 @@ client.predict("Hello")
If there are multiple parameters, then you should pass them as separate arguments to `.predict()`, like this:
```python
from gradio_client import Client
@ -136,7 +130,7 @@ client.predict(4, "add", 5)
>> 9.0
```
For certain inputs, such as images, you should pass in the filepath or URL to the file. Likewise, for the corresponding output types, you will get a filepath or URL returned.
For certain inputs, such as images, you should pass in the filepath or URL to the file. Likewise, for the corresponding output types, you will get a filepath or URL returned.
```python
from gradio_client import Client
@ -147,10 +141,9 @@ client.predict("https://audio-samples.github.io/samples/mp3/blizzard_uncondition
>> "My thought I have nobody by a beauty and will as you poured. Mr. Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r—"
```
## Running jobs asynchronously
Oe should note that `.predict()` is a *blocking* operation as it waits for the operation to complete before returning the prediction.
Oe should note that `.predict()` is a _blocking_ operation as it waits for the operation to complete before returning the prediction.
In many cases, you may be better off letting the job run in the background until you need the results of the prediction. You can do this by creating a `Job` instance using the `.submit()` method, and then later calling `.result()` on the job to get the result. For example:
@ -189,7 +182,7 @@ job = client.submit("Hello", api_name="/predict", result_callbacks=[print_result
## Status
The `Job` object also allows you to get the status of the running job by calling the `.status()` method. This returns a `StatusUpdate` object with the following attributes: `code` (the status code, one of a set of defined strings representing the status. See the `utils.Status` class), `rank` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (the time that the status was generated).
The `Job` object also allows you to get the status of the running job by calling the `.status()` method. This returns a `StatusUpdate` object with the following attributes: `code` (the status code, one of a set of defined strings representing the status. See the `utils.Status` class), `rank` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (the time that the status was generated).
```py
from gradio_client import Client
@ -201,23 +194,22 @@ job.status()
>> <Status.STARTING: 'STARTING'>
```
*Note*: The `Job` class also has a `.done()` instance method which returns a boolean indicating whether the job has completed.
_Note_: The `Job` class also has a `.done()` instance method which returns a boolean indicating whether the job has completed.
## Cancelling Jobs
The `Job` class also has a `.cancel()` instance method that cancels jobs that have been queued but not started. For example, if you run:
```py
client = Client("abidlabs/whisper")
job1 = client.submit("audio_sample1.wav")
job2 = client.submit("audio_sample2.wav")
client = Client("abidlabs/whisper")
job1 = client.submit("audio_sample1.wav")
job2 = client.submit("audio_sample2.wav")
job1.cancel() # will return False, assuming the job has started
job2.cancel() # will return True, indicating that the job has been canceled
```
If the first job has started processing, then it will not be canceled. If the second job
has not yet started, it will be successfully canceled and removed from the queue.
has not yet started, it will be successfully canceled and removed from the queue.
## Generator Endpoints
@ -235,7 +227,7 @@ job.outputs()
>> ['0', '1', '2']
```
Note that running `job.result()` on a generator endpoint only gives you the *first* value returned by the endpoint.
Note that running `job.result()` on a generator endpoint only gives you the _first_ value returned by the endpoint.
The `Job` object is also iterable, which means you can use it to display the results of a generator function as they are returned from the endpoint. Here's the equivalent example using the `Job` as a generator:
@ -263,4 +255,4 @@ client = Client("abidlabs/test-yield")
job = client.submit("abcdef")
time.sleep(3)
job.cancel() # job cancels after 2 iterations
```
```

View File

@ -14,7 +14,7 @@ Here's the entire code to do it:
import { client } from "@gradio/client";
const response = await fetch(
"https://github.com/audio-samples/audio-samples.github.io/raw/master/samples/wav/ted_speakers/SalmanKhan/sample-1.wav"
"https://github.com/audio-samples/audio-samples.github.io/raw/master/samples/wav/ted_speakers/SalmanKhan/sample-1.wav"
);
const audio_file = await response.blob();
@ -69,7 +69,7 @@ The `@gradio/client` exports another function, `duplicate`, to make this process
import { client } from "@gradio/client";
const response = await fetch(
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
);
const audio_file = await response.blob();
@ -85,9 +85,9 @@ If you have previously duplicated a Space, re-running `duplicate` will _not_ cre
import { client } from "@gradio/client";
const app = await duplicate("abidlabs/whisper", {
hf_token: "hf_...",
timeout: 60,
hardware: "a10g-small",
hf_token: "hf_...",
timeout: 60,
hardware: "a10g-small"
});
```
@ -121,25 +121,25 @@ And we will see the following:
```json
{
"named_endpoints": {
"/predict": {
"parameters": [
{
"label": "text",
"component": "Textbox",
"type": "string"
}
],
"returns": [
{
"label": "output",
"component": "Textbox",
"type": "string"
}
]
}
},
"unnamed_endpoints": {}
"named_endpoints": {
"/predict": {
"parameters": [
{
"label": "text",
"component": "Textbox",
"type": "string"
}
],
"returns": [
{
"label": "output",
"component": "Textbox",
"type": "string"
}
]
}
},
"unnamed_endpoints": {}
}
```
@ -173,7 +173,7 @@ For certain inputs, such as images, you should pass in a `Buffer`, `Blob` or `Fi
import { client } from "@gradio/client";
const response = await fetch(
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
);
const audio_file = await response.blob();
@ -189,11 +189,11 @@ If the API you are working with can return results over time, or you wish to acc
import { client } from "@gradio/client";
function log_result(payload) {
const {
data: [translation],
} = payload;
const {
data: [translation]
} = payload;
console.log(`The translated result is: ${translation}`);
console.log(`The translated result is: ${translation}`);
}
const app = await client("abidlabs/en2fr");
@ -210,9 +210,9 @@ The event interface also allows you to get the status of the running job by list
import { client } from "@gradio/client";
function log_status(status) {
console.log(
`The current status for this job is: ${JSON.stringify(status, null, 2)}.`
);
console.log(
`The current status for this job is: ${JSON.stringify(status, null, 2)}.`
);
}
const app = await client("abidlabs/en2fr");
@ -264,6 +264,6 @@ const job = app.submit(0, [9]);
job.on("data", (data) => console.log(data));
setTimeout(() => {
job.cancel();
job.cancel();
}, 3000);
```

View File

@ -4,16 +4,15 @@ Tags: CLIENT, API, WEB APP
In this blog post, we will demonstrate how to use the `gradio_client` [Python library](getting-started-with-the-python-client/), which enables developers to make requests to a Gradio app programmatically, by creating an example FastAPI web app. The web app we will be building is called "Acapellify," and it will allow users to upload video files as input and return a version of that video without instrumental music. It will also display a gallery of generated videos.
**Prerequisites**
Before we begin, make sure you are running Python 3.9 or later, and have the following libraries installed:
* `gradio_client`
* `fastapi`
* `uvicorn`
- `gradio_client`
- `fastapi`
- `uvicorn`
You can install these libraries from `pip`:
You can install these libraries from `pip`:
```bash
$ pip install gradio_client fastapi uvicorn
@ -29,9 +28,9 @@ Otherwise, install ffmpeg [by following these instructions](https://www.hostinge
## Step 1: Write the Video Processing Function
Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.
Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.
Luckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. Perfect to use with our client!
Luckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. Perfect to use with our client!
Open a new Python file, say `main.py`, and start by importing the `Client` class from `gradio_client` and connecting it to this Space:
@ -45,11 +44,11 @@ def acapellify(audio_path):
return result[0]
```
That's all the code that's needed -- notice that the API endpoints returns two audio files (one without the music, and one with just the music) in a list, and so we just return the first element of the list.
That's all the code that's needed -- notice that the API endpoints returns two audio files (one without the music, and one with just the music) in a list, and so we just return the first element of the list.
---
**Note**: since this is a public Space, there might be other users using this Space as well, which might result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) and create a private Space that only you have will have access to and bypass the queue. To do that, simply replace the first two lines above with:
**Note**: since this is a public Space, there might be other users using this Space as well, which might result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) and create a private Space that only you have will have access to and bypass the queue. To do that, simply replace the first two lines above with:
```py
from gradio_client import Client
@ -63,11 +62,11 @@ Everything else remains the same!
Now, of course, we are working with video files, so we first need to extract the audio from the video files. For this, we will be using the `ffmpeg` library, which does a lot of heavy lifting when it comes to working with audio and video files. The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module:
Our video processing workflow will consist of three steps:
Our video processing workflow will consist of three steps:
1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.
1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.
2. Then, we pass in the audio file through the `acapellify()` function above.
3. Finally, we combine the new audio with the original video to produce a final acapellified video.
3. Finally, we combine the new audio with the original video to produce a final acapellified video.
Here's the complete code in Python, which you can add to your `main.py` file:
@ -77,9 +76,9 @@ import subprocess
def process_video(video_path):
old_audio = os.path.basename(video_path).split(".")[0] + ".m4a"
subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio])
new_audio = acapellify(old_audio)
new_video = f"acap_{video_path}"
subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f"static/{new_video}"])
return new_video
@ -119,11 +118,11 @@ async def upload_video(video: UploadFile = File(...)):
In this example, the FastAPI app has two routes: `/` and `/uploadvideo/`.
The `/` route returns an HTML template that displays a gallery of all uploaded videos.
The `/` route returns an HTML template that displays a gallery of all uploaded videos.
The `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object, which represents the uploaded video file. The video file is "acapellified" via the `process_video()` method, and the output video is stored in a list which stores all of the uploaded videos in memory.
Note that this is a very basic example and if this were a production app, you will need to add more logic to handle file storage, user authentication, and security considerations.
Note that this is a very basic example and if this were a production app, you will need to add more logic to handle file storage, user authentication, and security considerations.
## Step 3: Create a FastAPI app (Frontend Template)
@ -138,103 +137,32 @@ Finally, we create the frontend of our web application. First, we create a folde
Write the following as the contents of `home.html`:
```html
&lt;!DOCTYPE html>
&lt;html>
&lt;head>
&lt;title>Video Gallery&lt;/title>
&lt;style>
body {
font-family: sans-serif;
margin: 0;
padding: 0;
background-color: #f5f5f5;
}
h1 {
text-align: center;
margin-top: 30px;
margin-bottom: 20px;
}
.gallery {
display: flex;
flex-wrap: wrap;
justify-content: center;
gap: 20px;
padding: 20px;
}
.video {
border: 2px solid #ccc;
box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2);
border-radius: 5px;
overflow: hidden;
width: 300px;
margin-bottom: 20px;
}
.video video {
width: 100%;
height: 200px;
}
.video p {
text-align: center;
margin: 10px 0;
}
form {
margin-top: 20px;
text-align: center;
}
input[type="file"] {
display: none;
}
.upload-btn {
display: inline-block;
background-color: #3498db;
color: #fff;
padding: 10px 20px;
font-size: 16px;
border: none;
border-radius: 5px;
cursor: pointer;
}
.upload-btn:hover {
background-color: #2980b9;
}
.file-name {
margin-left: 10px;
}
&lt;/style>
&lt;/head>
&lt;body>
&lt;h1>Video Gallery&lt;/h1>
{% if videos %}
&lt;div class="gallery">
{% for video in videos %}
&lt;div class="video">
&lt;video controls>
&lt;source src="{{ url_for('static', path=video) }}" type="video/mp4">
Your browser does not support the video tag.
&lt;/video>
&lt;p>{{ video }}&lt;/p>
&lt;/div>
{% endfor %}
&lt;/div>
{% else %}
&lt;p>No videos uploaded yet.&lt;/p>
{% endif %}
&lt;form action="/uploadvideo/" method="post" enctype="multipart/form-data">
&lt;label for="video-upload" class="upload-btn">Choose video file&lt;/label>
&lt;input type="file" name="video" id="video-upload">
&lt;span class="file-name">&lt;/span>
&lt;button type="submit" class="upload-btn">Upload&lt;/button>
&lt;/form>
&lt;script>
// Display selected file name in the form
const fileUpload = document.getElementById("video-upload");
const fileName = document.querySelector(".file-name");
fileUpload.addEventListener("change", (e) => {
fileName.textContent = e.target.files[0].name;
});
&lt;/script>
&lt;/body>
&lt;!DOCTYPE html> &lt;html> &lt;head> &lt;title>Video Gallery&lt;/title>
&lt;style> body { font-family: sans-serif; margin: 0; padding: 0;
background-color: #f5f5f5; } h1 { text-align: center; margin-top: 30px;
margin-bottom: 20px; } .gallery { display: flex; flex-wrap: wrap;
justify-content: center; gap: 20px; padding: 20px; } .video { border: 2px solid
#ccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow:
hidden; width: 300px; margin-bottom: 20px; } .video video { width: 100%; height:
200px; } .video p { text-align: center; margin: 10px 0; } form { margin-top:
20px; text-align: center; } input[type="file"] { display: none; } .upload-btn {
display: inline-block; background-color: #3498db; color: #fff; padding: 10px
20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; }
.upload-btn:hover { background-color: #2980b9; } .file-name { margin-left: 10px;
} &lt;/style> &lt;/head> &lt;body> &lt;h1>Video Gallery&lt;/h1> {% if videos %}
&lt;div class="gallery"> {% for video in videos %} &lt;div class="video">
&lt;video controls> &lt;source src="{{ url_for('static', path=video) }}"
type="video/mp4"> Your browser does not support the video tag. &lt;/video>
&lt;p>{{ video }}&lt;/p> &lt;/div> {% endfor %} &lt;/div> {% else %} &lt;p>No
videos uploaded yet.&lt;/p> {% endif %} &lt;form action="/uploadvideo/"
method="post" enctype="multipart/form-data"> &lt;label for="video-upload"
class="upload-btn">Choose video file&lt;/label> &lt;input type="file"
name="video" id="video-upload"> &lt;span class="file-name">&lt;/span> &lt;button
type="submit" class="upload-btn">Upload&lt;/button> &lt;/form> &lt;script> //
Display selected file name in the form const fileUpload =
document.getElementById("video-upload"); const fileName =
document.querySelector(".file-name"); fileUpload.addEventListener("change", (e)
=> { fileName.textContent = e.target.files[0].name; }); &lt;/script> &lt;/body>
&lt;/html>
```
@ -262,5 +190,4 @@ And that's it! Start uploading videos and you'll get some "acapellified" videos
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png)
If you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).
If you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).

View File

@ -13,6 +13,7 @@ This guide will show how you can use `gradio_tools` to grant your LLM Agent acce
A [LangChain agent](https://docs.langchain.com/docs/components/agents/agent) is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.
### What is Gradio?
[Gradio](https://github.com/gradio-app/gradio) is the defacto standard framework for building Machine Learning Web Applications and sharing them with the world - all with just python! 🐍
## gradio_tools - An end-to-end example
@ -21,7 +22,7 @@ To get started with `gradio_tools`, all you need to do is import and initialize
In the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the
`StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and
the `TextToVideoTool` to create a video from a prompt.
the `TextToVideoTool` to create a video from a prompt.
We then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask
it to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.
@ -71,15 +72,17 @@ class GradioTool(BaseTool):
def postprocess(self, output: Tuple[Any] | Any) -> str:
pass
```
The requirements are:
1. The name for your tool
2. The description for your tool. This is crucial! Agents decide which tool to use based on their description. Be precise and be sure to include example of what the input and the output of the tool should look like.
3. The url or space id, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tool` will create a [gradio client](https://github.com/gradio-app/gradio/blob/main/client/python/README.md) instance to query the upstream application via API. Be sure to click the link and learn more about the gradio client library if you are not familiar with it.
4. create_job - Given a string, this method should parse that string and return a job from the client. Most times, this is as simple as passing the string to the `submit` function of the client. More info on creating jobs [here](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)
5. postprocess - Given the result of the job, convert it to a string the LLM can display to the user.
6. *Optional* - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return gr.Textbox() but
if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be
automatically imported by the `GradiTool` parent class and passed to the `_block_input` and `_block_output` methods.
6. _Optional_ - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return gr.Textbox() but
if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be
automatically imported by the `GradiTool` parent class and passed to the `_block_input` and `_block_output` methods.
And that's it!
@ -123,8 +126,9 @@ class StableDiffusionTool(GradioTool):
```
Some notes on this implementation:
1. All instances of `GradioTool` have an attribute called `client` that is a pointed to the underlying [gradio client](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python). That is what you should use
in the `create_job` method.
in the `create_job` method.
2. `create_job` just passes the query string to the `submit` function of the client with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version.
3. The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.
@ -133,4 +137,3 @@ in the `create_job` method.
You now know how to extend the abilities of your LLM with the 1000s of gradio spaces running in the wild!
Again, we welcome any contributions to the [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library.
We're excited to see the tools you all build!

View File

@ -5,13 +5,13 @@ Tags: SKETCHPAD, LABELS, LIVE
## Introduction
How well can an algorithm guess what you're drawing? A few years ago, Google released the **Quick Draw** dataset, which contains drawings made by humans of a variety of every objects. Researchers have used this dataset to train models to guess Pictionary-style drawings.
How well can an algorithm guess what you're drawing? A few years ago, Google released the **Quick Draw** dataset, which contains drawings made by humans of a variety of every objects. Researchers have used this dataset to train models to guess Pictionary-style drawings.
Such models are perfect to use with Gradio's *sketchpad* input, so in this tutorial we will build a Pictionary web application using Gradio. We will be able to build the whole web application in Python, and will look like this (try drawing something!):
Such models are perfect to use with Gradio's _sketchpad_ input, so in this tutorial we will build a Pictionary web application using Gradio. We will be able to build the whole web application in Python, and will look like this (try drawing something!):
<iframe src="https://abidlabs-draw2.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Let's get started! This guide covers how to build a pictionary app (step-by-step):
Let's get started! This guide covers how to build a pictionary app (step-by-step):
1. [Set up the Sketch Recognition Model](#1-set-up-the-sketch-recognition-model)
2. [Define a `predict` function](#2-define-a-predict-function)
@ -23,7 +23,7 @@ Make sure you have the `gradio` Python package already [installed](/getting_star
## 1. Set up the Sketch Recognition Model
First, you will need a sketch recognition model. Since many researchers have already trained their own models on the Quick Draw dataset, we will use a pretrained model in this tutorial. Our model is a light 1.5 MB model trained by Nate Raw, that [you can download here](https://huggingface.co/spaces/nateraw/quickdraw/blob/main/pytorch_model.bin).
First, you will need a sketch recognition model. Since many researchers have already trained their own models on the Quick Draw dataset, we will use a pretrained model in this tutorial. Our model is a light 1.5 MB model trained by Nate Raw, that [you can download here](https://huggingface.co/spaces/nateraw/quickdraw/blob/main/pytorch_model.bin).
If you are interested, here [is the code](https://github.com/nateraw/quickdraw-pytorch) that was used to train the model. We will simply load the pretrained model in PyTorch, as follows:
@ -53,7 +53,7 @@ model.eval()
## 2. Define a `predict` function
Next, you will need to define a function that takes in the *user input*, which in this case is a sketched image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://huggingface.co/spaces/nateraw/quickdraw/blob/main/class_names.txt).
Next, you will need to define a function that takes in the _user input_, which in this case is a sketched image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://huggingface.co/spaces/nateraw/quickdraw/blob/main/class_names.txt).
In the case of our pretrained model, it will look like this:
@ -74,17 +74,17 @@ def predict(img):
Let's break this down. The function takes one parameters:
* `img`: the input image as a `numpy` array
- `img`: the input image as a `numpy` array
Then, the function converts the image to a PyTorch `tensor`, passes it through the model, and returns:
* `confidences`: the top five predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
- `confidences`: the top five predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
## 3. Create a Gradio Interface
Now that we have our predictive function set up, we can create a Gradio Interface around it.
Now that we have our predictive function set up, we can create a Gradio Interface around it.
In this case, the input component is a sketchpad. To create a sketchpad input, we can use the convenient string shortcut, `"sketchpad"` which creates a canvas for a user to draw on and handles the preprocessing to convert that to a numpy array.
In this case, the input component is a sketchpad. To create a sketchpad input, we can use the convenient string shortcut, `"sketchpad"` which creates a canvas for a user to draw on and handles the preprocessing to convert that to a numpy array.
The output component will be a `"label"`, which displays the top labels in a nice form.
@ -93,7 +93,7 @@ Finally, we'll add one more parameter, setting `live=True`, which allows our int
```python
import gradio as gr
gr.Interface(fn=predict,
gr.Interface(fn=predict,
inputs="sketchpad",
outputs="label",
live=True).launch()
@ -103,7 +103,6 @@ This produces the following interface, which you can try right here in your brow
<iframe src="https://abidlabs-draw2.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
----------
---
And you're done! That's all the code you need to build a Pictionary-style guessing app. Have fun and try to find some edge cases 🧐

View File

@ -5,12 +5,11 @@ Tags: GAN, IMAGE, HUB
Contributed by <a href="https://huggingface.co/NimaBoscarino">Nima Boscarino</a> and <a href="https://huggingface.co/nateraw">Nate Raw</a>
## Introduction
It seems that cryptocurrencies, [NFTs](https://www.nytimes.com/interactive/2022/03/18/technology/nft-guide.html), and the web3 movement are all the rage these days! Digital assets are being listed on marketplaces for astounding amounts of money, and just about every celebrity is debuting their own NFT collection. While your crypto assets [may be taxable, such as in Canada](https://www.canada.ca/en/revenue-agency/programs/about-canada-revenue-agency-cra/compliance/digital-currency/cryptocurrency-guide.html), today we'll explore some fun and tax-free ways to generate your own assortment of procedurally generated [CryptoPunks](https://www.larvalabs.com/cryptopunks).
Generative Adversarial Networks, often known just as *GANs*, are a specific class of deep-learning models that are designed to learn from an input dataset to create (*generate!*) new material that is convincingly similar to elements of the original training set. Famously, the website [thispersondoesnotexist.com](https://thispersondoesnotexist.com/) went viral with lifelike, yet synthetic, images of people generated with a model called StyleGAN2. GANs have gained traction in the machine learning world, and are now being used to generate all sorts of images, text, and even [music](https://salu133445.github.io/musegan/)!
Generative Adversarial Networks, often known just as _GANs_, are a specific class of deep-learning models that are designed to learn from an input dataset to create (_generate!_) new material that is convincingly similar to elements of the original training set. Famously, the website [thispersondoesnotexist.com](https://thispersondoesnotexist.com/) went viral with lifelike, yet synthetic, images of people generated with a model called StyleGAN2. GANs have gained traction in the machine learning world, and are now being used to generate all sorts of images, text, and even [music](https://salu133445.github.io/musegan/)!
Today we'll briefly look at the high-level intuition behind GANs, and then we'll build a small demo around a pre-trained GAN to see what all the fuss is about. Here's a peek at what we're going to be putting together:
@ -22,9 +21,9 @@ Make sure you have the `gradio` Python package already [installed](/getting_star
## GANs: a very brief introduction
Originally proposed in [Goodfellow et al. 2014](https://arxiv.org/abs/1406.2661), GANs are made up of neural networks which compete with the intention of outsmarting each other. One network, known as the *generator*, is responsible for generating images. The other network, the *discriminator*, receives an image at a time from the generator along with a **real** image from the training data set. The discriminator then has to guess: which image is the fake?
Originally proposed in [Goodfellow et al. 2014](https://arxiv.org/abs/1406.2661), GANs are made up of neural networks which compete with the intention of outsmarting each other. One network, known as the _generator_, is responsible for generating images. The other network, the _discriminator_, receives an image at a time from the generator along with a **real** image from the training data set. The discriminator then has to guess: which image is the fake?
The generator is constantly training to create images which are trickier for the discriminator to identify, while the discriminator raises the bar for the generator every time it correctly detects a fake. As the networks engage in this competitive (*adversarial!*) relationship, the images that get generated improve to the point where they become indistinguishable to human eyes!
The generator is constantly training to create images which are trickier for the discriminator to identify, while the discriminator raises the bar for the generator every time it correctly detects a fake. As the networks engage in this competitive (_adversarial!_) relationship, the images that get generated improve to the point where they become indistinguishable to human eyes!
For a more in-depth look at GANs, you can take a look at [this excellent post on Analytics Vidhya](https://www.analyticsvidhya.com/blog/2021/06/a-detailed-explanation-of-gan-with-implementation-using-tensorflow-and-keras/) or this [PyTorch tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html). For now, though, we'll dive into a demo!
@ -90,15 +89,15 @@ def predict(seed):
We're giving our `predict` function a `seed` parameter, so that we can fix the random tensor generation with a seed. We'll then be able to reproduce punks if we want to see them again by passing in the same seed.
*Note!* Our model needs an input tensor of dimensions 100x1x1 to do a single inference, or (BatchSize)x100x1x1 for generating a batch of images. In this demo we'll start by generating 4 punks at a time.
_Note!_ Our model needs an input tensor of dimensions 100x1x1 to do a single inference, or (BatchSize)x100x1x1 for generating a batch of images. In this demo we'll start by generating 4 punks at a time.
## Step 3 — Creating a Gradio interface
At this point you can even run the code you have with `predict(<SOME_NUMBER>)`, and you'll find your freshly generated punks in your file system at `./punks.png`. To make a truly interactive demo, though, we'll build out a simple interface with Gradio. Our goals here are to:
* Set a slider input so users can choose the "seed" value
* Use an image component for our output to showcase the generated punks
* Use our `predict()` to take the seed and generate the images
- Set a slider input so users can choose the "seed" value
- Use an image component for our output to showcase the generated punks
- Use our `predict()` to take the seed and generate the images
With `gr.Interface()`, we can define all of that with a single function call:
@ -221,6 +220,7 @@ gr.Interface(
examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
).launch(cache_examples=True)
```
----------
Congratulations! You've built out your very own GAN-powered CryptoPunks generator, with a fancy Gradio interface that makes it easy for anyone to use. Now you can [scour the Hub for more GANs](https://huggingface.co/models?other=gan) (or train your own) and continue making even more awesome demos 🤗
---
Congratulations! You've built out your very own GAN-powered CryptoPunks generator, with a fancy Gradio interface that makes it easy for anyone to use. Now you can [scour the Hub for more GANs](https://huggingface.co/models?other=gan) (or train your own) and continue making even more awesome demos 🤗

View File

@ -1,4 +1,5 @@
# Custom Machine Learning Interpretations with Blocks
Tags: INTERPRETATION, SENTIMENT ANALYSIS
**Prerequisite**: This Guide requires you to know about Blocks and the interpretation feature of Interfaces.
@ -29,7 +30,7 @@ We'll have a single input `Textbox` and a single output `Label` component.
Below is the code for the app as well as the app itself.
```python
import gradio as gr
import gradio as gr
from transformers import pipeline
sentiment_classifier = pipeline("text-classification", return_all_scores=True)
@ -71,13 +72,13 @@ The following code computes the `(word, score)` pairs:
def interpretation_function(text):
explainer = shap.Explainer(sentiment_classifier)
shap_values = explainer([text])
# Dimensions are (batch size, text size, number of classes)
# Since we care about positive sentiment, use index 1
scores = list(zip(shap_values.data[0], shap_values.values[0, :, 1]))
# Scores contains (word, score) pairs
# Format expected by gr.components.Interpretation
return {"original": text, "interpretation": scores}
```
@ -108,7 +109,6 @@ demo.launch()
<gradio-app space="freddyaboulton/sentiment-classification-interpretation"> </gradio-app>
## Customizing how the interpretation is displayed
The `gr.components.Interpretation` component does a good job of showing how individual words contribute to the sentiment prediction,
@ -121,6 +121,7 @@ We can do this by modifying our `interpretation_function` to additionally return
We will display it with the `gr.Plot` component in a separate tab.
This is how the interpretation function will look:
```python
def interpretation_function(text):
explainer = shap.Explainer(sentiment_classifier)
@ -135,7 +136,7 @@ def interpretation_function(text):
scores_desc = [t for t in scores_desc if t[0] != ""]
fig_m = plt.figure()
# Select top 5 words that contribute to positive sentiment
plt.bar(x=[s[0] for s in scores_desc[:5]],
height=[s[1] for s in scores_desc[:5]])
@ -146,6 +147,7 @@ def interpretation_function(text):
```
And this is how the app code will look:
```python
with gr.Blocks() as demo:
with gr.Row():
@ -174,6 +176,7 @@ You can see the demo below!
<gradio-app space="freddyaboulton/sentiment-classification-interpretation-tabs"> </gradio-app>
## Beyond Sentiment Classification
Although we have focused on sentiment classification so far, you can add interpretations to almost any machine learning model.
The output must be an `gr.Image` or `gr.Label` but the input can be almost anything (`gr.Number`, `gr.Slider`, `gr.Radio`, `gr.Image`).
@ -181,7 +184,6 @@ Here is a demo built with blocks of interpretations for an image classification
<gradio-app space="freddyaboulton/image-classification-interpretation-blocks"> </gradio-app>
## Closing remarks
We did a deep dive 🤿 into how interpretations work and how you can add them to your Blocks app.

View File

@ -14,7 +14,7 @@ This short Guide will cover both of these methods, so no matter how you write Py
## Python IDE Reload 🔥
If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:
If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:
```python
import gradio as gr
@ -24,19 +24,19 @@ with gr.Blocks() as demo:
inp = gr.Textbox(placeholder="What is your name?")
out = gr.Textbox()
inp.change(fn=lambda x: f"Welcome, {x}!",
inputs=inp,
inp.change(fn=lambda x: f"Welcome, {x}!",
inputs=inp,
outputs=out)
if __name__ == "__main__":
demo.launch()
demo.launch()
```
The problem is that anytime that you want to make a change to your layout, events, or components, you have to close and rerun your app by writing `python run.py`.
Instead of doing this, you can run your code in **reload mode** by changing 1 word: `python` to `gradio`:
In the terminal, run `gradio run.py`. That's it!
In the terminal, run `gradio run.py`. That's it!
Now, you'll see that after you'll see something like this:
@ -62,15 +62,15 @@ with gr.Blocks() as my_demo:
inp = gr.Textbox(placeholder="What is your name?")
out = gr.Textbox()
inp.change(fn=lambda x: f"Welcome, {x}!",
inputs=inp,
inp.change(fn=lambda x: f"Welcome, {x}!",
inputs=inp,
outputs=out)
if __name__ == "__main__":
my_demo.launch()
my_demo.launch()
```
Then you would launch it in reload mode like this: `gradio run.py my_demo.app`.
Then you would launch it in reload mode like this: `gradio run.py my_demo.app`.
🔥 If your application accepts command line arguments, you can pass them in as well. Here's an example:
@ -101,14 +101,14 @@ As a small aside, this auto-reloading happens if you change your `run.py` source
What about if you use Jupyter Notebooks (or Colab Notebooks, etc.) to develop code? We got something for you too!
We've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:
We've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:
`%load_ext gradio`
Then, in the cell that you are developing your Gradio demo, simply write the magic command **`%%blocks`** at the top, and then write the layout and components like you would normally:
```py
%%blocks
%%blocks
import gradio as gr
@ -116,30 +116,29 @@ gr.Markdown("# Greetings from Gradio!")
inp = gr.Textbox(placeholder="What is your name?")
out = gr.Textbox()
inp.change(fn=lambda x: f"Welcome, {x}!",
inputs=inp,
inp.change(fn=lambda x: f"Welcome, {x}!",
inputs=inp,
outputs=out)
```
Notice that:
* You do not need to put the boiler plate `with gr.Blocks() as demo:` and `demo.launch()` code — Gradio does that for you automatically!
- You do not need to put the boiler plate `with gr.Blocks() as demo:` and `demo.launch()` code — Gradio does that for you automatically!
* Every time you rerun the cell, Gradio will re-launch your app on the same port and using the same underlying web server. This means you'll see your changes *much, much faster* than if you were rerunning the cell normally.
- Every time you rerun the cell, Gradio will re-launch your app on the same port and using the same underlying web server. This means you'll see your changes _much, much faster_ than if you were rerunning the cell normally.
Here's what it looks like in a jupyter notebook:
![](https://i.ibb.co/nrszFws/Blocks.gif)
🪄 This works in colab notebooks too! [Here's a colab notebook](https://colab.research.google.com/drive/1jUlX1w7JqckRHVE-nbDyMPyZ7fYD8488?authuser=1#scrollTo=zxHYjbCTTz_5) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!
🪄 This works in colab notebooks too! [Here's a colab notebook](https://colab.research.google.com/drive/1jUlX1w7JqckRHVE-nbDyMPyZ7fYD8488?authuser=1#scrollTo=zxHYjbCTTz_5) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!
The Notebook Magic is now the author's preferred way of building Gradio demos. Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.
The Notebook Magic is now the author's preferred way of building Gradio demos. Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.
--------
---
## Next Steps
Now that you know how to develop quickly using Gradio, start building your own!
Now that you know how to develop quickly using Gradio, start building your own!
If you are looking for inspiration, try exploring demos other people have built with Gradio, [browse public Hugging Face Spaces](http://hf.space/) 🤗

View File

@ -5,7 +5,7 @@ Tags: VISION, IMAGE
## Introduction
3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types including: *.obj*, *.glb*, & *.gltf*.
3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types including: _.obj_, _.glb_, & _.gltf_.
This guide will show you how to build a demo for your 3D image model in a few lines of code; like the one below. Play around with 3D object by clicking around, dragging and zooming:
@ -15,7 +15,6 @@ This guide will show you how to build a demo for your 3D image model in a few li
Make sure you have the `gradio` Python package already [installed](https://gradio.app/guides/quickstart).
## Taking a Look at the Code
Let's take a look at how to create the minimal interface above. The prediction function in this case will just return the original 3D model mesh, but you can change this function to run inference on your machine learning model. We'll take a look at more complex examples below.
@ -48,14 +47,13 @@ Let's break down the code above:
Creating the Interface:
* `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function.
* `inputs`: create a model3D input component. The input expects an uploaded file as a {str} filepath.
* `outputs`: create a model3D output component. The output component also expects a file as a {str} filepath.
* `clear_color`: this is the background color of the 3D model canvas. Expects RGBa values.
* `label`: the label that appears on the top left of the component.
* `examples`: list of 3D model files. The 3D model component can accept *.obj*, *.glb*, & *.gltf* file types.
* `cache_examples`: saves the predicted output for the examples, to save time on inference.
- `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function.
- `inputs`: create a model3D input component. The input expects an uploaded file as a {str} filepath.
- `outputs`: create a model3D output component. The output component also expects a file as a {str} filepath.
- `clear_color`: this is the background color of the 3D model canvas. Expects RGBa values.
- `label`: the label that appears on the top left of the component.
- `examples`: list of 3D model files. The 3D model component can accept _.obj_, _.glb_, & _.gltf_ file types.
- `cache_examples`: saves the predicted output for the examples, to save time on inference.
## Exploring mode complex Model3D Demos:
@ -66,9 +64,9 @@ Below is a demo that uses the PIFu model to convert an image of a clothed human
<gradio-app space="radames/PIFu-Clothed-Human-Digitization"> </gradio-app>
----------
---
And you're done! That's all the code you need to build an interface for your Model3D model. Here are some references that you may find useful:
* Gradio's ["Getting Started" guide](https://gradio.app/getting_started/)
* The first [3D Model Demo](https://huggingface.co/spaces/dawood/Model3D) and [complete code](https://huggingface.co/spaces/dawood/Model3D/tree/main) (on Hugging Face Spaces)
- Gradio's ["Getting Started" guide](https://gradio.app/getting_started/)
- The first [3D Model Demo](https://huggingface.co/spaces/dawood/Model3D) and [complete code](https://huggingface.co/spaces/dawood/Model3D/tree/main) (on Hugging Face Spaces)

View File

@ -1,23 +1,22 @@
# Named-Entity Recognition
# Named-Entity Recognition
Related spaces: https://huggingface.co/spaces/rajistics/biobert_ner_demo, https://huggingface.co/spaces/abidlabs/ner, https://huggingface.co/spaces/rajistics/Financial_Analyst_AI
Tags: NER, TEXT, HIGHLIGHT
## Introduction
Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or "token") into different categories, such as names of people or names of locations, or different parts of speech.
Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or "token") into different categories, such as names of people or names of locations, or different parts of speech.
For example, given the sentence:
> Does Chicago have any Pakistani restaurants?
A named-entity recognition algorithm may identify:
A named-entity recognition algorithm may identify:
* "Chicago" as a **location**
* "Pakistani" as an **ethnicity**
- "Chicago" as a **location**
- "Pakistani" as an **ethnicity**
and so on.
and so on.
Using `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team.
@ -25,7 +24,7 @@ Here is an example of a demo that you'll be able to build:
$demo_ner_pipeline
This tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn!
This tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn!
### Prerequisites
@ -33,10 +32,10 @@ Make sure you have the `gradio` Python package already [installed](/getting_star
### Approach 1: List of Entity Dictionaries
Many named-entity recognition models output a list of dictionaries. Each dictionary consists of an *entity*, a "start" index, and an "end" index. This is, for example, how NER models in the `transformers` library operate:
Many named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a "start" index, and an "end" index. This is, for example, how NER models in the `transformers` library operate:
```py
from transformers import pipeline
from transformers import pipeline
ner_pipeline = pipeline("ner")
ner_pipeline("Does Chicago have any Pakistani restaurants")
```
@ -74,12 +73,8 @@ In some cases, this can be easier than the first approach. Here is a demo showin
$code_text_analysis
$demo_text_analysis
---
--------------------------------------------
And you're done! That's all you need to know to build a web-based GUI for your NER model.
Fun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.
And you're done! That's all you need to know to build a web-based GUI for your NER model.
Fun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.

View File

@ -1,4 +1,4 @@
# Real Time Speech Recognition
# Real Time Speech Recognition
Related spaces: https://huggingface.co/spaces/abidlabs/streaming-asr-paused, https://huggingface.co/spaces/abidlabs/full-context-asr
Tags: ASR, SPEECH, STREAMING
@ -9,32 +9,31 @@ Automatic speech recognition (ASR), the conversion of spoken speech to text, is
Using `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.
This tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a ***full-context*** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it ***streaming***, meaning that the audio model will convert speech as you speak. The streaming demo that we create will look something like this (try it below or [in a new tab](https://huggingface.co/spaces/abidlabs/streaming-asr-paused)!):
This tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak. The streaming demo that we create will look something like this (try it below or [in a new tab](https://huggingface.co/spaces/abidlabs/streaming-asr-paused)!):
<iframe src="https://abidlabs-streaming-asr-paused.hf.space" frameBorder="0" height="350" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
Real-time ASR is inherently *stateful*, meaning that the model's predictions change depending on what words the user previously spoke. So, in this tutorial, we will also cover how to use **state** with Gradio demos.
Real-time ASR is inherently _stateful_, meaning that the model's predictions change depending on what words the user previously spoke. So, in this tutorial, we will also cover how to use **state** with Gradio demos.
### Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build demos from 2 ASR libraries:
* Transformers (for this, `pip install transformers` and `pip install torch`)
* DeepSpeech (`pip install deepspeech==0.8.2`)
- Transformers (for this, `pip install transformers` and `pip install torch`)
- DeepSpeech (`pip install deepspeech==0.8.2`)
Make sure you have at least one of these installed so that you can follow along the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.
Here's how to build a real time speech recognition (ASR) app:
Here's how to build a real time speech recognition (ASR) app:
1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)
2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)
3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)
2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)
3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)
4. [Create a Streaming ASR Demo with DeepSpeech](#4-create-a-streaming-asr-demo-with-deep-speech)
## 1. Set up the Transformers ASR Model
First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will start by using a pretrained ASR model from the Hugging Face model, `Wav2Vec2`.
First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will start by using a pretrained ASR model from the Hugging Face model, `Wav2Vec2`.
Here is the code to load `Wav2Vec2` from Hugging Face `transformers`.
@ -46,9 +45,9 @@ p = pipeline("automatic-speech-recognition")
That's it! By default, the automatic speech recognition model pipeline loads Facebook's `facebook/wav2vec2-base-960h` model.
## 2. Create a Full-Context ASR Demo with Transformers
## 2. Create a Full-Context ASR Demo with Transformers
We will start by creating a *full-context* ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.
We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.
We will use `gradio`'s built in `Audio` component, configured to take input from the user's microphone and return a filepath for the recorded audio. The output component will be a plain `Textbox`.
@ -60,30 +59,30 @@ def transcribe(audio):
return text
gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text").launch()
```
So what's happening here? The `transcribe` function takes a single parameter, `audio`, which is a filepath to the audio file that the user has recorded. The `pipeline` object expects a filepath and converts it to text, which is returned to the frontend and displayed in a textbox.
So what's happening here? The `transcribe` function takes a single parameter, `audio`, which is a filepath to the audio file that the user has recorded. The `pipeline` object expects a filepath and converts it to text, which is returned to the frontend and displayed in a textbox.
Let's see it in action! (Record a short audio clip and then click submit, or [open in a new tab](https://huggingface.co/spaces/abidlabs/full-context-asr)):
<iframe src="https://abidlabs-full-context-asr.hf.space" frameBorder="0" height="350" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
## 3. Create a Streaming ASR Demo with Transformers
## 3. Create a Streaming ASR Demo with Transformers
Ok great! We've built an ASR model that works well for short audio clips. However, if you are recording longer audio clips, you probably want a *streaming* interface, one that transcribes audio as the user speaks instead of just all-at-once at the end.
Ok great! We've built an ASR model that works well for short audio clips. However, if you are recording longer audio clips, you probably want a _streaming_ interface, one that transcribes audio as the user speaks instead of just all-at-once at the end.
The good news is that it's not too difficult to adapt the demo we just made to make it streaming, using the same `Wav2Vec2` model.
The good news is that it's not too difficult to adapt the demo we just made to make it streaming, using the same `Wav2Vec2` model.
The biggest change is that we must now introduce a `state` parameter, which holds the audio that has been *transcribed so far*. This allows us to only the latest chunk of audio and simply append it to the audio we previously transcribed.
The biggest change is that we must now introduce a `state` parameter, which holds the audio that has been _transcribed so far_. This allows us to only the latest chunk of audio and simply append it to the audio we previously transcribed.
When adding state to a Gradio demo, you need to do a total of 3 things:
* Add a `state` parameter to the function
* Return the updated `state` at the end of the function
* Add the `"state"` components to the `inputs` and `outputs` in `Interface`
- Add a `state` parameter to the function
- Return the updated `state` at the end of the function
- Add the `"state"` components to the `inputs` and `outputs` in `Interface`
Here's what the code looks like:
@ -96,10 +95,10 @@ def transcribe(audio, state=""):
# Set the starting state to an empty string
gr.Interface(
fn=transcribe,
fn=transcribe,
inputs=[
gr.Audio(source="microphone", type="filepath", streaming=True),
"state"
gr.Audio(source="microphone", type="filepath", streaming=True),
"state"
],
outputs=[
"textbox",
@ -114,8 +113,7 @@ Let's see how it does (try below or [in a new tab](https://huggingface.co/spaces
<iframe src="https://abidlabs-streaming-asr.hf.space" frameBorder="0" height="350" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
One thing that you may notice is that the transcription quality has dropped since the chunks of audio are so small, they lack the context to properly be transcribed. A "hacky" fix to this is to simply increase the runtime of the `transcribe()` function so that longer audio chunks are processed. We can do this by adding a `time.sleep()` inside the function, as shown below (we'll see a proper fix next)
One thing that you may notice is that the transcription quality has dropped since the chunks of audio are so small, they lack the context to properly be transcribed. A "hacky" fix to this is to simply increase the runtime of the `transcribe()` function so that longer audio chunks are processed. We can do this by adding a `time.sleep()` inside the function, as shown below (we'll see a proper fix next)
```python
from transformers import pipeline
@ -131,9 +129,9 @@ def transcribe(audio, state=""):
return state, state
gr.Interface(
fn=transcribe,
fn=transcribe,
inputs=[
gr.Audio(source="microphone", type="filepath", streaming=True),
gr.Audio(source="microphone", type="filepath", streaming=True),
"state"
],
outputs=[
@ -147,12 +145,11 @@ Try the demo below to see the difference (or [open in a new tab](https://hugging
<iframe src="https://abidlabs-streaming-asr-paused.hf.space" frameBorder="0" height="350" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
## 4. Create a Streaming ASR Demo with DeepSpeech
You're not restricted to ASR models from the `transformers` library -- you can use your own models or models from other libraries. The `DeepSpeech` library contains models that are specifically designed to handle streaming audio data. These models perform really well with streaming data as they are able to account for previous chunks of audio data when making predictions.
You're not restricted to ASR models from the `transformers` library -- you can use your own models or models from other libraries. The `DeepSpeech` library contains models that are specifically designed to handle streaming audio data. These models perform really well with streaming data as they are able to account for previous chunks of audio data when making predictions.
Going through the DeepSpeech library is beyond the scope of this Guide (check out their [excellent documentation here](https://deepspeech.readthedocs.io/en/r0.9/)), but you can use Gradio very similarly with a DeepSpeech ASR model as with a Transformers ASR model.
Going through the DeepSpeech library is beyond the scope of this Guide (check out their [excellent documentation here](https://deepspeech.readthedocs.io/en/r0.9/)), but you can use Gradio very similarly with a DeepSpeech ASR model as with a Transformers ASR model.
Here's a complete example (on Linux):
@ -216,25 +213,22 @@ Then, create a Gradio Interface as before (the only difference being that the re
import gradio as gr
gr.Interface(
fn=transcribe,
fn=transcribe,
inputs=[
gr.Audio(source="microphone", type="numpy"),
"state"
],
outputs= [
"text",
gr.Audio(source="microphone", type="numpy"),
"state"
],
],
outputs= [
"text",
"state"
],
live=True).launch()
```
Running all of this should allow you to deploy your realtime ASR model with a nice GUI. Try it out and see how well it works for you.
--------------------------------------------
And you're done! That's all the code you need to build a web-based GUI for your ASR model.
Fun tip: you can share your ASR model instantly with others simply by setting `share=True` in `launch()`.
---
And you're done! That's all the code you need to build a web-based GUI for your ASR model.
Fun tip: you can share your ASR model instantly with others simply by setting `share=True` in `launch()`.

View File

@ -1,18 +1,18 @@
# Running Background Tasks
# Running Background Tasks
Related spaces: https://huggingface.co/spaces/freddyaboulton/gradio-google-forms
Tags: TASKS, SCHEDULED, TABULAR, DATA
Tags: TASKS, SCHEDULED, TABULAR, DATA
## Introduction
This guide explains how you can run background tasks from your gradio app.
Background tasks are operations that you'd like to perform outside the request-response
lifecycle of your app either once or on a periodic schedule.
Examples of background tasks include periodically synchronizing data to an external database or
Examples of background tasks include periodically synchronizing data to an external database or
sending a report of model predictions via email.
## Overview
## Overview
We will be creating a simple "Google-forms-style" application to gather feedback from users of the gradio library.
We will use a local sqlite database to store our data, but we will periodically synchronize the state of the database
with a [HuggingFace Dataset](https://huggingface.co/datasets) so that our user reviews are always backed up.
@ -22,8 +22,8 @@ At the end of the demo, you'll have a fully working application like this one:
<gradio-app space="freddyaboulton/gradio-google-forms"> </gradio-app>
## Step 1 - Write your database logic 💾
Our application will store the name of the reviewer, their rating of gradio on a scale of 1 to 5, as well as
any comments they want to share about the library. Let's write some code that creates a database table to
store this data. We'll also write some functions to insert a review into that table and fetch the latest 10 reviews.
@ -68,6 +68,7 @@ def add_review(name: str, review: int, comments: str):
```
Let's also write a function to load the latest reviews when the gradio application loads:
```python
def load_data():
db = sqlite3.connect(DB_FILE)
@ -77,7 +78,8 @@ def load_data():
```
## Step 2 - Create a gradio app ⚡
Now that we have our database logic defined, we can use gradio create a dynamic web page to ask our users for feedback!
Now that we have our database logic defined, we can use gradio create a dynamic web page to ask our users for feedback!
```python
with gr.Blocks() as demo:
@ -146,15 +148,16 @@ scheduler.add_job(func=backup_db, trigger="interval", seconds=60)
scheduler.start()
```
## Step 4 (Bonus) - Deployment to HuggingFace Spaces
You can use the HuggingFace [Spaces](https://huggingface.co/spaces) platform to deploy this application for free ✨
If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
You will have to use the `HUB_TOKEN` environment variable as a secret in the Guides.
## Conclusion
Congratulations! You know how to run background tasks from your gradio app on a schedule ⏲️.
Congratulations! You know how to run background tasks from your gradio app on a schedule ⏲️.
Checkout the application running on Spaces [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms).
The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py)
The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py)

View File

@ -4,16 +4,15 @@ Tags: DEPLOYMENT, WEB SERVER, NGINX
## Introduction
Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. Gradio apps can be deployed on [Hugging Face Spaces](https://hf.space) for free.
Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. Gradio apps can be deployed on [Hugging Face Spaces](https://hf.space) for free.
In some cases though, you might want to deploy a Gradio app on your own web server. You might already be using [Nginx](https://www.nginx.com/), a highly performant web server, to serve your website (say `https://www.example.com`), and you want to attach Gradio to a specific subpath on your website (e.g. `https://www.example.com/gradio-demo`).
In some cases though, you might want to deploy a Gradio app on your own web server. You might already be using [Nginx](https://www.nginx.com/), a highly performant web server, to serve your website (say `https://www.example.com`), and you want to attach Gradio to a specific subpath on your website (e.g. `https://www.example.com/gradio-demo`).
In this Guide, we will guide you through the process of running a Gradio app behind Nginx on your own web server to achieve this.
**Prerequisites**
1. A Linux web server with [Nginx installed](https://www.nginx.com/blog/setting-up-nginx/) and [Gradio installed](/quickstart)
1. A Linux web server with [Nginx installed](https://www.nginx.com/blog/setting-up-nginx/) and [Gradio installed](/quickstart)
2. A working Gradio app saved as a python file on your web server
## Editing your Nginx configuration file
@ -33,7 +32,7 @@ include /etc/nginx/sites-enabled/*;
```bash
server {
listen 80;
server_name example.com www.example.com; # Change this to your domain name
server_name example.com www.example.com; # Change this to your domain name
location /gradio-demo/ { # Change this if you'd like to server your Gradio app on a different path
proxy_pass http://127.0.0.1:7860/; # Change this if your Gradio app will be running on a different port
@ -63,7 +62,7 @@ return x
gr.Interface(test, "textbox", "textbox").queue().launch(root_path="/gradio-demo")
```
2. Start a `tmux` session by typing `tmux` and pressing enter (optional)
2. Start a `tmux` session by typing `tmux` and pressing enter (optional)
It's recommended that you run your Gradio app in a `tmux` session so that you can keep it running in the background easily
@ -73,7 +72,6 @@ It's recommended that you run your Gradio app in a `tmux` session so that you ca
1. If you are in a tmux session, exit by typing CTRL+B (or CMD+B), followed by the "D" key.
2. Finally, restart nginx by running `sudo systemctl restart nginx`.
2. Finally, restart nginx by running `sudo systemctl restart nginx`.
And that's it! If you visit `https://example.com/gradio-demo` on your browser, you should see your Gradio app running there

View File

@ -2,8 +2,7 @@
Tags: QUEUE, PERFORMANCE
Let's say that your Gradio demo goes *viral* on social media -- you have lots of users trying it out simultaneously, and you want to provide your users with the best possible experience or, in other words, minimize the amount of time that each user has to wait in the queue to see their prediction.
Let's say that your Gradio demo goes _viral_ on social media -- you have lots of users trying it out simultaneously, and you want to provide your users with the best possible experience or, in other words, minimize the amount of time that each user has to wait in the queue to see their prediction.
How can you configure your Gradio demo to handle the most traffic? In this Guide, we dive into some of the parameters of Gradio's `.queue()` method as well as some other related configurations, and discuss how to set these parameters in a way that allows you to serve lots of users simultaneously with minimal latency.
@ -29,7 +28,7 @@ app.launch()
```
In the demo `app` above, predictions will now be sent over a websocket instead.
Unlike POST requests, websockets do not timeout and they allow bidirectional traffic. On the Gradio server, a **queue** is set up, which adds each request that comes to a list. When a worker is free, the first available request is passed into the worker for inference. When the inference is complete, the queue sends the prediction back through the websocket to the particular Gradio user who called that prediction.
Unlike POST requests, websockets do not timeout and they allow bidirectional traffic. On the Gradio server, a **queue** is set up, which adds each request that comes to a list. When a worker is free, the first available request is passed into the worker for inference. When the inference is complete, the queue sends the prediction back through the websocket to the particular Gradio user who called that prediction.
Note: If you host your Gradio app on [Hugging Face Spaces](https://hf.space), the queue is already **enabled by default**. You can still call the `.queue()` method manually in order to configure the queue parameters described below.
@ -43,27 +42,27 @@ The first parameter we will explore is the `concurrency_count` parameter of `que
So why not set this parameter much higher? Keep in mind that since requests are processed in parallel, each request will consume memory to store the data and weights for processing. This means that you might get out-of-memory errors if you increase the `concurrency_count` too high. You may also start to get diminishing returns if the `concurrency_count` is too high because of costs of switching between different worker threads.
**Recommendation**: Increase the `concurrency_count` parameter as high as you can while you continue to see performance gains or until you hit memory limits on your machine. You can [read about Hugging Face Spaces machine specs here](https://huggingface.co/docs/hub/spaces-overview).
**Recommendation**: Increase the `concurrency_count` parameter as high as you can while you continue to see performance gains or until you hit memory limits on your machine. You can [read about Hugging Face Spaces machine specs here](https://huggingface.co/docs/hub/spaces-overview).
*Note*: there is a second parameter which controls the *total* number of threads that Gradio can generate, whether or not queuing is enabled. This is the `max_threads` parameter in the `launch()` method. When you increase the `concurrency_count` parameter in `queue()`, this is automatically increased as well. However, in some cases, you may want to manually increase this, e.g. if queuing is not enabled.
_Note_: there is a second parameter which controls the _total_ number of threads that Gradio can generate, whether or not queuing is enabled. This is the `max_threads` parameter in the `launch()` method. When you increase the `concurrency_count` parameter in `queue()`, this is automatically increased as well. However, in some cases, you may want to manually increase this, e.g. if queuing is not enabled.
### The `max_size` parameter
A more blunt way to reduce the wait times is simply to prevent too many people from joining the queue in the first place. You can set the maximum number of requests that the queue processes using the `max_size` parameter of `queue()`. If a request arrives when the queue is already of the maximum size, it will not be allowed to join the queue and instead, the user will receive an error saying that the queue is full and to try again. By default, `max_size=None`, meaning that there is no limit to the number of users that can join the queue.
Paradoxically, setting a `max_size` can often improve user experience because it prevents users from being dissuaded by very long queue wait times. Users who are more interested and invested in your demo will keep trying to join the queue, and will be able to get their results faster.
Paradoxically, setting a `max_size` can often improve user experience because it prevents users from being dissuaded by very long queue wait times. Users who are more interested and invested in your demo will keep trying to join the queue, and will be able to get their results faster.
**Recommendation**: For a better user experience, set a `max_size` that is reasonable given your expectations of how long users might be willing to wait for a prediction.
**Recommendation**: For a better user experience, set a `max_size` that is reasonable given your expectations of how long users might be willing to wait for a prediction.
### The `max_batch_size` parameter
Another way to increase the parallelism of your Gradio demo is to write your function so that it can accept **batches** of inputs. Most deep learning models can process batches of samples more efficiently than processing individual samples.
Another way to increase the parallelism of your Gradio demo is to write your function so that it can accept **batches** of inputs. Most deep learning models can process batches of samples more efficiently than processing individual samples.
If you write your function to process a batch of samples, Gradio will automatically batch incoming requests together and pass them into your function as a batch of samples. You need to set `batch` to `True` (by default it is `False`) and set a `max_batch_size` (by default it is `4`) based on the maximum number of samples your function is able to handle. These two parameters can be passed into `gr.Interface()` or to an event in Blocks such as `.click()`.
If you write your function to process a batch of samples, Gradio will automatically batch incoming requests together and pass them into your function as a batch of samples. You need to set `batch` to `True` (by default it is `False`) and set a `max_batch_size` (by default it is `4`) based on the maximum number of samples your function is able to handle. These two parameters can be passed into `gr.Interface()` or to an event in Blocks such as `.click()`.
While setting a batch is conceptually similar to having workers process requests in parallel, it is often *faster* than setting the `concurrency_count` for deep learning models. The downside is that you might need to adapt your function a little bit to accept batches of samples instead of individual samples.
While setting a batch is conceptually similar to having workers process requests in parallel, it is often _faster_ than setting the `concurrency_count` for deep learning models. The downside is that you might need to adapt your function a little bit to accept batches of samples instead of individual samples.
Here's an example of a function that does *not* accept a batch of inputs -- it processes a single input at a time:
Here's an example of a function that does _not_ accept a batch of inputs -- it processes a single input at a time:
```py
import time
@ -81,23 +80,21 @@ import time
def trim_words(words, lengths):
trimmed_words = []
for w, l in zip(words, lengths):
trimmed_words.append(w[:int(l)])
trimmed_words.append(w[:int(l)])
return [trimmed_words]
```
The second function can be used with `batch=True` and an appropriate `max_batch_size` parameter.
**Recommendation**: If possible, write your function to accept batches of samples, and then set `batch` to `True` and the `max_batch_size` as high as possible based on your machine's memory limits. If you set `max_batch_size` as high as possible, you will most likely need to set `concurrency_count` back to `1` since you will no longer have the memory to have multiple workers running in parallel.
**Recommendation**: If possible, write your function to accept batches of samples, and then set `batch` to `True` and the `max_batch_size` as high as possible based on your machine's memory limits. If you set `max_batch_size` as high as possible, you will most likely need to set `concurrency_count` back to `1` since you will no longer have the memory to have multiple workers running in parallel.
### The `api_open` parameter
When creating a Gradio demo, you may want to restrict all traffic to happen through the user interface as opposed to the [programmatic API](/guides/sharing-your-app/#api-page) that is automatically created for your Gradio demo. This is important because when people make requests through the programmatic API, they can potentially bypass users who are waiting in the queue and degrade the experience of these users.
When creating a Gradio demo, you may want to restrict all traffic to happen through the user interface as opposed to the [programmatic API](/guides/sharing-your-app/#api-page) that is automatically created for your Gradio demo. This is important because when people make requests through the programmatic API, they can potentially bypass users who are waiting in the queue and degrade the experience of these users.
**Recommendation**: set the `api_open` parameter in `queue()` to `False` in your demo to prevent programmatic requests.
### Upgrading your Hardware (GPUs, TPUs, etc.)
If you have done everything above, and your demo is still not fast enough, you can upgrade the hardware that your model is running on. Changing the model from running on CPUs to running on GPUs will usually provide a 10x-50x increase in inference time for deep learning models.
@ -113,5 +110,4 @@ you might need to adjust the value of the `concurrency_count` parameter describe
## Conclusion
Congratulations! You know how to set up a Gradio demo for maximum performance. Good luck on your next viral demo!
Congratulations! You know how to set up a Gradio demo for maximum performance. Good luck on your next viral demo!

View File

@ -1,4 +1,5 @@
# Theming
Tags: THEMES
## Introduction
@ -19,11 +20,11 @@ with gr.Blocks(theme=gr.themes.Soft()) as demo:
Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:
* `gr.themes.Base()`
* `gr.themes.Default()`
* `gr.themes.Glass()`
* `gr.themes.Monochrome()`
* `gr.themes.Soft()`
- `gr.themes.Base()`
- `gr.themes.Default()`
- `gr.themes.Glass()`
- `gr.themes.Monochrome()`
- `gr.themes.Soft()`
Each of these themes set values for hundreds of CSS variables. You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach.
@ -39,7 +40,7 @@ gr.themes.builder()
$demo_theme_builder
You can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.
You can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.
As you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.
@ -72,6 +73,7 @@ or you could use the `Color` objects directly, like this:
with gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink)) as demo:
...
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-1.hf.space?__theme=light"
@ -81,28 +83,28 @@ with gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, seconda
Predefined colors are:
* `slate`
* `gray`
* `zinc`
* `neutral`
* `stone`
* `red`
* `orange`
* `amber`
* `yellow`
* `lime`
* `green`
* `emerald`
* `teal`
* `cyan`
* `sky`
* `blue`
* `indigo`
* `violet`
* `purple`
* `fuchsia`
* `pink`
* `rose`
- `slate`
- `gray`
- `zinc`
- `neutral`
- `stone`
- `red`
- `orange`
- `amber`
- `yellow`
- `lime`
- `green`
- `emerald`
- `teal`
- `cyan`
- `sky`
- `blue`
- `indigo`
- `violet`
- `purple`
- `fuchsia`
- `pink`
- `rose`
You could also create your own custom `Color` objects and pass them in.
@ -127,6 +129,7 @@ or you could use the `Size` objects directly, like this:
with gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none)) as demo:
...
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-2.hf.space?__theme=light"
@ -136,16 +139,16 @@ with gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm,
The predefined size objects are:
* `radius_none`
* `radius_sm`
* `radius_md`
* `radius_lg`
* `spacing_sm`
* `spacing_md`
* `spacing_lg`
* `text_sm`
* `text_md`
* `text_lg`
- `radius_none`
- `radius_sm`
- `radius_md`
- `radius_lg`
- `spacing_sm`
- `spacing_md`
- `spacing_lg`
- `text_sm`
- `text_md`
- `text_lg`
You could also create your own custom `Size` objects and pass them in.
@ -170,7 +173,6 @@ with gr.Blocks(theme=gr.themes.Default(font=[gr.themes.GoogleFont("Inconsolata")
></iframe>
</div>
## Extending Themes via `.set()`
You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:
@ -185,7 +187,7 @@ with gr.Blocks(theme=theme) as demo:
...
```
In the example above, we've set the `loader_color` and `slider_color` variables to `#FF0000`, despite the overall `primary_color` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.
In the example above, we've set the `loader_color` and `slider_color` variables to `#FF0000`, despite the overall `primary_color` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.
Your IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized.
@ -199,7 +201,7 @@ CSS variable names can get quite long, like `button_primary_background_fill_hove
4. Any relevant state, such as `button_primary_background_fill_hover`.
5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.
Of course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.
Of course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.
### CSS Variable Organization
@ -240,7 +242,7 @@ theme = gr.themes.Default().set(
)
```
Having to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.
Having to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.
```python
theme = gr.themes.Default().set(
@ -282,10 +284,9 @@ $code_theme_new_step_1
></iframe>
</div>
The Base theme is very barebones, and uses `gr.themes.Blue` as it primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the defaults core arguments of our app. We'll overwrite the constructor and pass new defaults for the core constructor arguments.
We'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.
We'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.
$code_theme_new_step_2
@ -296,11 +297,12 @@ $code_theme_new_step_2
></iframe>
</div>
See how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.
See how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.
Let's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.
$code_theme_new_step_3
<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-3.hf.space?__theme=light"
@ -308,19 +310,19 @@ $code_theme_new_step_3
></iframe>
</div>
Look how fun our theme looks now! With just a few variable changes, our theme looks completely different.
You may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.
You may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.
## Sharing Themes
Once you have created a theme, you can upload it to the HuggingFace Hub to let others view it, use it, and build off of it!
### Uploading a Theme
There are two ways to upload a theme, via the theme class instance or the command line. We will cover both of them with the previously created `seafoam` theme.
* Via the class instance
- Via the class instance
Each theme instance has a method called `push_to_hub` we can use to upload a theme to the HuggingFace hub.
@ -330,9 +332,10 @@ seafoam.push_to_hub(repo_name="seafoam",
hf_token="<token>")
```
* Via the command line
- Via the command line
First save the theme to disk
```python
seafoam.dump(filename="seafoam.json")
```
@ -371,7 +374,7 @@ The theme preview for our seafoam theme is here: [seafoam preview](https://huggi
### Discovering Themes
The [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all the public gradio themes. After publishing your theme,
it will automatically show up in the theme gallery after a couple of minutes.
it will automatically show up in the theme gallery after a couple of minutes.
You can sort the themes by the number of likes on the space and from most to least recently created as well as toggling themes between light and dark mode.
@ -383,6 +386,7 @@ You can sort the themes by the number of likes on the space and from most to lea
</div>
### Downloading
To use a theme from the hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:
```python
@ -403,8 +407,8 @@ with gr.Blocks(theme="gradio/seafoam@>=0.0.1,<0.1.0") as demo:
....
```
Enjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!
If you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!
Enjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!
If you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!
<style>
.wrapper {

View File

@ -15,24 +15,24 @@ Flagging with Gradio's `Interface` is especially easy. By default, underneath th
There are [four parameters](https://gradio.app/docs/#interface-header) in `gradio.Interface` that control how flagging works. We will go over them in greater detail.
* `allow_flagging`: this parameter can be set to either `"manual"` (default), `"auto"`, or `"never"`.
* `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.
* `auto`: users will not see a button to flag, but every sample will be flagged automatically.
* `never`: users will not see a button to flag, and no sample will be flagged.
* `flagging_options`: this parameter can be either `None` (default) or a list of strings.
* If `None`, then the user simply clicks on the **Flag** button and no additional options are shown.
* If a list of strings are provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `["Incorrect", "Ambiguous"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. This only applies if `allow_flagging` is `"manual"`.
* The chosen option is then logged along with the input and output.
* `flagging_dir`: this parameter takes a string.
* It represents what to name the directory where flagged data is stored.
* `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class
* Using this parameter allows you to write custom code that gets run when the flag button is clicked
* By default, this is set to an instance of `gr.CSVLogger`
* One example is setting it to an instance of `gr.HuggingFaceDatasetSaver` which can allow you to pipe any flagged data into a HuggingFace Dataset. (See more below.)
- `allow_flagging`: this parameter can be set to either `"manual"` (default), `"auto"`, or `"never"`.
- `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.
- `auto`: users will not see a button to flag, but every sample will be flagged automatically.
- `never`: users will not see a button to flag, and no sample will be flagged.
- `flagging_options`: this parameter can be either `None` (default) or a list of strings.
- If `None`, then the user simply clicks on the **Flag** button and no additional options are shown.
- If a list of strings is provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `["Incorrect", "Ambiguous"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. This only applies if `allow_flagging` is `"manual"`.
- The chosen option is then logged along with the input and output.
- `flagging_dir`: this parameter takes a string.
- It represents what to name the directory where flagged data is stored.
- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class
- Using this parameter allows you to write custom code that gets run when the flag button is clicked
- By default, this is set to an instance of `gr.CSVLogger`
- One example is setting it to an instance of `gr.HuggingFaceDatasetSaver` which can allow you to pipe any flagged data into a HuggingFace Dataset. (See more below.)
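Putting these parameters together, a minimal sketch of an `Interface` with flagging configured might look like this (the function and the option labels are illustrative, not taken from this guide):

```python
import gradio as gr

# a minimal sketch; the function and option labels are illustrative assumptions
def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(
    fn=greet,
    inputs="text",
    outputs="text",
    allow_flagging="manual",
    flagging_options=["Incorrect", "Ambiguous"],
    flagging_dir="flagged_greetings",
)
demo.launch()
```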
## What happens to flagged data?
Within the directory provided by the `flagging_dir` argument, a CSV file will log the flagged data.
Within the directory provided by the `flagging_dir` argument, a CSV file will log the flagged data.
Here's an example: The code below creates the calculator interface embedded below it:
@ -63,13 +63,15 @@ iface.launch()
<gradio-app space="gradio/calculator-flag-basic/"></gradio-app>
When you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a csv file inside it. This csv file includes all the data that was flagged.
When you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a csv file inside it. This csv file includes all the data that was flagged.
```directory
+-- flagged/
| +-- logs.csv
```
_flagged/logs.csv_
```csv
num1,operation,num2,Output,timestamp
5,add,7,12,2022-01-31 11:40:51.093412
@ -88,7 +90,9 @@ If the interface involves file data, such as for Image and Audio components, fol
| | +-- 0.png
| | +-- 1.png
```
_flagged/logs.csv_
```csv
im,Output,timestamp
im/0.png,Output/0.png,2022-02-04 19:49:58.026963
@ -97,7 +101,8 @@ im/1.png,Output/1.png,2022-02-02 10:40:51.093412
If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.
If we go back to the calculator example, the following code will create the interface embedded below it.
If we go back to the calculator example, the following code will create the interface embedded below it.
```python
iface = gr.Interface(
calculator,
@ -109,11 +114,13 @@ iface = gr.Interface(
iface.launch()
```
<gradio-app space="gradio/calculator-flagging-options/"></gradio-app>
When users click the flag button, the csv file will now include a column indicating the selected option.
_flagged/logs.csv_
```csv
num1,operation,num2,Output,flag,timestamp
5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412
@ -131,7 +138,6 @@ We've made this super easy with the `flagging_callback` parameter.
For example, below we're going to pipe flagged data from our calculator example into a Hugging Face Dataset, e.g. so that we can build a "crowd-sourced" dataset:
```python
import os
@ -151,8 +157,8 @@ iface = gr.Interface(
iface.launch()
```
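Only fragments of that code appear in the hunk above, so here is a fuller sketch of the setup; the dataset name and the way the token is read from the environment are assumptions for illustration:

```python
import os
import gradio as gr

# a minimal sketch; `calculator` stands in for the function defined earlier in this guide
def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        return num1 / num2

HF_TOKEN = os.environ.get("HF_TOKEN")  # assumed to hold your Hugging Face token
hf_writer = gr.HuggingFaceDatasetSaver(HF_TOKEN, "crowdsourced-calculator-demo")

iface = gr.Interface(
    calculator,
    ["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    allow_flagging="manual",
    flagging_callback=hf_writer,
)
iface.launch()
```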
Notice that we define our own
instance of `gradio.HuggingFaceDatasetSaver` using our Hugging Face token and
Notice that we define our own
instance of `gradio.HuggingFaceDatasetSaver` using our Hugging Face token and
the name of a dataset we'd like to save samples to. In addition, we also set `allow_flagging="manual"`
because on Hugging Face Spaces, `allow_flagging` is set to `"never"` by default. Here's our demo:
@ -162,7 +168,7 @@ You can now see all the examples flagged above in this [public Hugging Face data
![flagging callback hf](https://github.com/gradio-app/gradio/blob/main/guides/assets/flagging-callback-hf.png?raw=true)
We created the `gradio.HuggingFaceDatasetSaver` class, but you can pass your own custom class as long as it inherits from `FlaggingCallback` defined in [this file](https://github.com/gradio-app/gradio/blob/master/gradio/flagging.py). If you create a cool callback, contribute it to the repo!
We created the `gradio.HuggingFaceDatasetSaver` class, but you can pass your own custom class as long as it inherits from `FlaggingCallback` defined in [this file](https://github.com/gradio-app/gradio/blob/master/gradio/flagging.py). If you create a cool callback, contribute it to the repo!
## Flagging with Blocks
@ -173,10 +179,10 @@ and assign that using the built-in events in Blocks.
At the same time, you might want to use an existing `FlaggingCallback` to avoid writing extra code.
This requires two steps:
1. You have to run your callback's `.setup()` somewhere in the code prior to the
first time you flag data
1. You have to run your callback's `.setup()` somewhere in the code prior to the
first time you flag data
2. When the flagging button is clicked, then you trigger the callback's `.flag()` method,
making sure to collect the arguments correctly and disabling the typical preprocessing.
making sure to collect the arguments correctly and disabling the typical preprocessing.
Here is an example with an image sepia filter Blocks demo that lets you flag
data using the default `CSVLogger`:
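A minimal sketch of those two steps wired into a Blocks app might look like this (the sepia function itself is only illustrative):

```python
import gradio as gr
import numpy as np

def sepia(input_img):
    # simple sepia transform, for illustration only
    sepia_filter = np.array([
        [0.393, 0.769, 0.189],
        [0.349, 0.686, 0.168],
        [0.272, 0.534, 0.131],
    ])
    sepia_img = input_img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()
    return sepia_img

callback = gr.CSVLogger()

with gr.Blocks() as demo:
    with gr.Row():
        img_input = gr.Image()
        img_output = gr.Image()
    btn = gr.Button("Sepia")
    flag_btn = gr.Button("Flag")

    btn.click(sepia, img_input, img_output)

    # 1. run setup() once before the first flag
    callback.setup([img_input, img_output], "flagged_data_points")
    # 2. on click, call flag() directly; preprocess=False passes the raw component payloads
    flag_btn.click(lambda *args: callback.flag(args), [img_input, img_output], None, preprocess=False)

demo.launch()
```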
@ -188,4 +194,4 @@ $demo_blocks_flag
Important Note: please make sure your users understand when the data they submit is being saved, and what you plan on doing with it. This is especially important when you use `allow_flagging=auto` (when all of the data submitted through the demo is being flagged)
### That's all! Happy building :)
### That's all! Happy building :)

View File

@ -1,22 +1,22 @@
# Contributing a Guide
Want to help teach Gradio? Consider contributing a Guide! 🤗
Want to help teach Gradio? Consider contributing a Guide! 🤗
Broadly speaking, there are two types of guides:
* **Use cases**: guides that cover step-by-step how to build a particular type of machine learning demo or app using Gradio. Here's an example: [_Creating a Chatbot_](https://github.com/gradio-app/gradio/blob/master/guides/creating_a_chatbot.md)
* **Feature explanation**: guides that describe in detail a particular feature of Gradio. Here's an example: [_Using Flagging_](https://github.com/gradio-app/gradio/blob/master/guides/using_flagging.md)
- **Use cases**: guides that cover step-by-step how to build a particular type of machine learning demo or app using Gradio. Here's an example: [_Creating a Chatbot_](https://github.com/gradio-app/gradio/blob/master/guides/creating_a_chatbot.md)
- **Feature explanation**: guides that describe in detail a particular feature of Gradio. Here's an example: [_Using Flagging_](https://github.com/gradio-app/gradio/blob/master/guides/using_flagging.md)
We encourage you to submit either type of Guide! (Looking for ideas? We may also have open [issues](https://github.com/gradio-app/gradio/issues?q=is%3Aopen+is%3Aissue+label%3Aguides) where users have asked for guides on particular topics)
## Guide Structure
As you can see with the previous examples, Guides are standard markdown documents. They usually:
* start with an Introduction section describing the topic
* include subheadings to make articles easy to navigate
* include real code snippets that make it easy to follow along and implement the Guide
* include embedded Gradio demos to make them more interactive and provide immediate demonstrations of the topic being discussed. These Gradio demos are hosted on [Hugging Face Spaces](https://huggingface.co/spaces) and are embedded using the standard \<iframe\> tag.
- start with an Introduction section describing the topic
- include subheadings to make articles easy to navigate
- include real code snippets that make it easy to follow along and implement the Guide
- include embedded Gradio demos to make them more interactive and provide immediate demonstrations of the topic being discussed. These Gradio demos are hosted on [Hugging Face Spaces](https://huggingface.co/spaces) and are embedded using the standard \<iframe\> tag.
## How to Contribute a Guide
@ -27,4 +27,4 @@ As you can see with the previous examples, Guides are standard markdown document
5. Add 3 `tags` at the top of the markdown document to help users find your guide (again, see the previously linked Guides for how to do this)
6. Open a PR to have your guide reviewed
That's it! We're looking forward to reading your Guide 🥳
That's it! We're looking forward to reading your Guide 🥳

View File

@ -115,4 +115,4 @@ $demo_blocks_flipper
这里有更多的东西!在[building with blocks](https://gradio.app/building_with_blocks)部分中,我们将介绍如何创建像这样的复杂的 `Blocks` 应用程序。
恭喜,您已经熟悉了 Gradio 的基础知识! 🥳 转到我们的[下一个指南](https://gradio.app/key_features)了解更多关于 Gradio 的主要功能。
恭喜,您已经熟悉了 Gradio 的基础知识! 🥳 转到我们的[下一个指南](https://gradio.app/key_features)了解更多关于 Gradio 的主要功能。

View File

@ -35,9 +35,9 @@ $demo_calculator
Interface 构造函数中有三个参数用于指定此内容应放置在哪里:
* `title`:接受文本,并可以将其显示在界面的顶部,也将成为页面标题。
* `description`接受文本、Markdown 或 HTML并将其放置在标题正下方。
* `article`也接受文本、Markdown 或 HTML并将其放置在界面下方。
- `title`:接受文本,并可以将其显示在界面的顶部,也将成为页面标题。
- `description`接受文本、Markdown 或 HTML并将其放置在标题正下方。
- `article`也接受文本、Markdown 或 HTML并将其放置在界面下方。
![annotated](/assets/guides/annotated.png)
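下面是一个简要的示意代码(文本内容均为假设的示例),演示这三个参数的用法:

```python
import gradio as gr

demo = gr.Interface(
    fn=lambda name: "你好," + name,
    inputs="text",
    outputs="text",
    title="问候示例",                         # 显示在界面顶部,并作为页面标题
    description="在下方输入姓名,然后点击提交。",  # 显示在标题正下方
    article="本演示由一个简单的 Python 函数构建。",  # 显示在界面下方
)
demo.launch()
```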
@ -61,7 +61,7 @@ gr.Number(label='年龄', info='以年为单位必须大于0')
| +-- logs.csv
```
*flagged/logs.csv*
_flagged/logs.csv_
```csv
num1,operation,num2,Output
@ -83,7 +83,7 @@ num1,operation,num2,Output
| | +-- 1.png
```
*flagged/logs.csv*
_flagged/logs.csv_
```csv
im,Output
@ -234,12 +234,13 @@ def trim_words(words, lens):
return [trimmed_words]
for w, l in zip(words, lens):
```
使用批处理函数的优点是如果启用了队列Gradio 服务器可以自动*批处理*传入的请求并并行处理它们,从而可能加快演示速度。以下是 Gradio 代码的示例(请注意 `batch=True``max_batch_size=16` - 这两个参数都可以传递给事件触发器或 `Interface` 类)
with `Interface`
```python
demo = gr.Interface(trim_words, ["textbox", "number"], ["output"],
demo = gr.Interface(trim_words, ["textbox", "number"], ["output"],
batch=True, max_batch_size=16)
demo.queue()
demo.launch()
@ -270,4 +271,4 @@ demo.launch()
## Gradio 笔记本 (Colab Notebooks)
Gradio 可以在任何运行 Python 的地方运行,包括本地 Jupyter 笔记本和协作笔记本,如[Google Colab](https://colab.research.google.com/)。对于本地 Jupyter 笔记本和 Google Colab 笔记本Gradio 在本地服务器上运行,您可以在浏览器中与之交互。(注意:对于 Google Colab这是通过[服务工作器隧道](https://github.com/tensorflow/tensorboard/blob/master/docs/design/colab_integration.md)实现的,您的浏览器需要启用 cookies。对于其他远程笔记本Gradio 也将在服务器上运行,但您需要使用[SSH 隧道](https://coderwall.com/p/ohk6cg/remote-access-to-ipython-notebooks-via-ssh)在本地浏览器中查看应用程序。通常,更简单的选择是使用 Gradio 内置的公共链接,[在下一篇指南中讨论](/sharing-your-app/#sharing-demos)。
Gradio 可以在任何运行 Python 的地方运行,包括本地 Jupyter 笔记本和协作笔记本,如[Google Colab](https://colab.research.google.com/)。对于本地 Jupyter 笔记本和 Google Colab 笔记本Gradio 在本地服务器上运行,您可以在浏览器中与之交互。(注意:对于 Google Colab这是通过[服务工作器隧道](https://github.com/tensorflow/tensorboard/blob/master/docs/design/colab_integration.md)实现的,您的浏览器需要启用 cookies。对于其他远程笔记本Gradio 也将在服务器上运行,但您需要使用[SSH 隧道](https://coderwall.com/p/ohk6cg/remote-access-to-ipython-notebooks-via-ssh)在本地浏览器中查看应用程序。通常,更简单的选择是使用 Gradio 内置的公共链接,[在下一篇指南中讨论](/sharing-your-app/#sharing-demos)。

View File

@ -33,6 +33,7 @@ demo.launch(share=True)
如果您想在互联网上获得您的 Gradio 演示的永久链接,请使用 Hugging Face Spaces。 [Hugging Face Spaces](http://huggingface.co/spaces/) 提供了免费托管您的机器学习模型的基础设施!
在您创建了一个免费的 Hugging Face 账户后,有三种方法可以将您的 Gradio 应用部署到 Hugging Face Spaces
1. 从终端:在应用目录中运行 `gradio deploy`。CLI 将收集一些基本元数据,然后启动您的应用。要更新您的空间,可以重新运行此命令或启用 Github Actions 选项,在 `git push` 时自动更新 Spaces。
2. 从浏览器:将包含 Gradio 模型和所有相关文件的文件夹拖放到 [此处](https://huggingface.co/new-space)。
3. 将 Spaces 与您的 Git 存储库连接Spaces 将从那里拉取 Gradio 应用。有关更多信息,请参阅 [此指南如何在 Hugging Face Spaces 上托管](https://huggingface.co/blog/gradio-spaces)。
@ -55,22 +56,24 @@ demo.launch(share=True)
要使用 Web 组件嵌入:
1. 通过在您的网站中添加以下脚本来导入 gradio JS 库(在 URL 中替换{GRADIO_VERSION}为您使用的 Gradio 库的版本)。
1. 通过在您的网站中添加以下脚本来导入 gradio JS 库(在 URL 中替换{GRADIO_VERSION}为您使用的 Gradio 库的版本)。
```html
&lt;script type="module"
src="https://gradio.s3-us-west-2.amazonaws.com/{GRADIO_VERSION}/gradio.js">
&lt;/script>
```html
&lt;script type="module"
src="https://gradio.s3-us-west-2.amazonaws.com/{GRADIO_VERSION}/gradio.js">
&lt;/script>
```
2. 在您想放置应用的位置添加
```html
2. 在您想放置应用的位置添加
`html
&lt;gradio-app src="https://$your_space_host.hf.space">&lt;/gradio-app>
```
元素。将 `src=` 属性设置为您的 Space 的嵌入 URL您可以在“嵌入此空间”按钮中找到。例如
`
元素。将 `src=` 属性设置为您的 Space 的嵌入 URL您可以在“嵌入此空间”按钮中找到。例如
```html
&lt;gradio-app src="https://abidlabs-pytorch-image-classifier.hf.space">&lt;/gradio-app>
```html
&lt;gradio-app src="https://abidlabs-pytorch-image-classifier.hf.space">&lt;/gradio-app>
```
<script>
@ -87,20 +90,20 @@ fetch("https://pypi.org/pypi/gradio/json"
您还可以使用传递给 `<gradio-app>` 标签的属性来自定义 Web 组件的外观和行为:
* `src`:如前所述,`src` 属性链接到您想要嵌入的托管 Gradio 演示的 URL
* `space`:一个可选的缩写,如果您的 Gradio 演示托管在 Hugging Face Space 上。接受 `username/space_name` 而不是完整的 URL。示例`gradio/Echocardiogram-Segmentation`。如果提供了此属性,则不需要提供 `src`
* `control_page_title`:一个布尔值,指定是否将 html 标题设置为 Gradio 应用的标题(默认为 `"false"`
* `initial_height`:加载 Gradio 应用时 Web 组件的初始高度(默认为 `"300px"`)。请注意,最终高度是根据 Gradio 应用的大小设置的。
* `container`:是否显示边框框架和有关 Space 托管位置的信息(默认为 `"true"`
* `info`:是否仅显示有关 Space 托管位置的信息在嵌入的应用程序下方(默认为 `"true"`
* `autoscroll`:在预测完成后是否自动滚动到输出(默认为 `"false"`
* `eager`:在页面加载时是否立即加载 Gradio 应用(默认为 `"false"`
* `theme_mode`:是否使用 `dark``light` 或默认的 `system` 主题模式(默认为 `"system"`
- `src`:如前所述,`src` 属性链接到您想要嵌入的托管 Gradio 演示的 URL
- `space`:一个可选的缩写,如果您的 Gradio 演示托管在 Hugging Face Space 上。接受 `username/space_name` 而不是完整的 URL。示例`gradio/Echocardiogram-Segmentation`。如果提供了此属性,则不需要提供 `src`
- `control_page_title`:一个布尔值,指定是否将 html 标题设置为 Gradio 应用的标题(默认为 `"false"`
- `initial_height`:加载 Gradio 应用时 Web 组件的初始高度(默认为 `"300px"`)。请注意,最终高度是根据 Gradio 应用的大小设置的。
- `container`:是否显示边框框架和有关 Space 托管位置的信息(默认为 `"true"`
- `info`:是否仅显示有关 Space 托管位置的信息在嵌入的应用程序下方(默认为 `"true"`
- `autoscroll`:在预测完成后是否自动滚动到输出(默认为 `"false"`
- `eager`:在页面加载时是否立即加载 Gradio 应用(默认为 `"false"`
- `theme_mode`:是否使用 `dark``light` 或默认的 `system` 主题模式(默认为 `"system"`
以下是使用这些属性创建一个懒加载且初始高度为 0px 的 Gradio 应用的示例。
```html
&lt;gradio-app space="gradio/Echocardiogram-Segmentation" eager="true"
&lt;gradio-app space="gradio/Echocardiogram-Segmentation" eager="true"
initial_height="0px">&lt;/gradio-app>
```
@ -134,7 +137,7 @@ btn.click(add, [num1, num2], output, api_name="addition")
这将记录自动生成的 API 页面的端点 `/api/addition/`
*注意*:对于启用了[队列功能](https://gradio.app/key-features#queuing)的 Gradio 应用程序,如果用户向您的 API 端点发出 POST 请求,他们可以绕过队列。要禁用此行为,请在 `queue()` 方法中设置 `api_open=False`
_注意_:对于启用了[队列功能](https://gradio.app/key-features#queuing)的 Gradio 应用程序,如果用户向您的 API 端点发出 POST 请求,他们可以绕过队列。要禁用此行为,请在 `queue()` 方法中设置 `api_open=False`
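下面是一个简单的示意写法(仅作说明):

```python
demo.queue(api_open=False)  # 禁止通过 API 请求绕过队列
demo.launch()
```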
## 鉴权
@ -192,18 +195,18 @@ $code_custom_path
特别是Gradio 应用程序允许用户访问以下三类文件:
* **与 Gradio 脚本所在目录(或子目录)中的文件相同。** 例如,如果您的 Gradio 脚本的路径是 `/home/usr/scripts/project/app.py`,并且您从 `/home/usr/scripts/project/` 启动它,则共享 Gradio 应用程序的用户将能够访问 `/home/usr/scripts/project/` 中的任何文件。这样做是为了您可以在 Gradio 应用程序中轻松引用这些文件(例如应用程序的“示例”)。
- **与 Gradio 脚本所在目录(或子目录)中的文件相同。** 例如,如果您的 Gradio 脚本的路径是 `/home/usr/scripts/project/app.py`,并且您从 `/home/usr/scripts/project/` 启动它,则共享 Gradio 应用程序的用户将能够访问 `/home/usr/scripts/project/` 中的任何文件。这样做是为了您可以在 Gradio 应用程序中轻松引用这些文件(例如应用程序的“示例”)。
* **Gradio 创建的临时文件。** 这些是由 Gradio 作为运行您的预测函数的一部分创建的文件。例如,如果您的预测函数返回一个视频文件,则 Gradio 将该视频保存到临时文件中,然后将临时文件的路径发送到前端。您可以通过设置环境变量 `GRADIO_TEMP_DIR` 为绝对路径(例如 `/home/usr/scripts/project/temp/`)来自定义 Gradio 创建的临时文件的位置。
- **Gradio 创建的临时文件。** 这些是由 Gradio 作为运行您的预测函数的一部分创建的文件。例如,如果您的预测函数返回一个视频文件,则 Gradio 将该视频保存到临时文件中,然后将临时文件的路径发送到前端。您可以通过设置环境变量 `GRADIO_TEMP_DIR` 为绝对路径(例如 `/home/usr/scripts/project/temp/`)来自定义 Gradio 创建的临时文件的位置。
* **通过 `launch()` 中的 `allowed_paths` 参数允许的文件。** 此参数允许您传递一个包含其他目录或确切文件路径的列表,以允许用户访问它们。(默认情况下,此参数为空列表)。
- **通过 `launch()` 中的 `allowed_paths` 参数允许的文件。** 此参数允许您传递一个包含其他目录或确切文件路径的列表,以允许用户访问它们。(默认情况下,此参数为空列表)。
Gradio**不允许**访问以下内容:
* **点文件**(其名称以 '.' 开头的任何文件)或其名称以 '.' 开头的任何目录中的任何文件。
- **点文件**(其名称以 '.' 开头的任何文件)或其名称以 '.' 开头的任何目录中的任何文件。
* **通过 `launch()` 中的 `blocked_paths` 参数允许的文件。** 您可以将其他目录或确切文件路径的列表传递给 `launch()` 中的 `blocked_paths` 参数。此参数优先于 Gradio 默认或 `allowed_paths` 允许的文件。
- **通过 `launch()` 中的 `blocked_paths` 参数允许的文件。** 您可以将其他目录或确切文件路径的列表传递给 `launch()` 中的 `blocked_paths` 参数。此参数优先于 Gradio 默认或 `allowed_paths` 允许的文件。
* **主机机器上的任何其他路径**。用户不应能够访问主机上的其他任意路径。
- **主机机器上的任何其他路径**。用户不应能够访问主机上的其他任意路径。
请确保您正在运行最新版本的 `gradio`,以使这些安全设置生效。
请确保您正在运行最新版本的 `gradio`,以使这些安全设置生效。
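下面给出一个示意性的例子(路径均为假设),演示如何通过 `launch()` 的这些参数控制文件访问:

```python
import gradio as gr

with gr.Blocks() as demo:
    ...

demo.launch(
    allowed_paths=["/home/usr/scripts/project/assets"],  # 额外允许访问的目录(假设路径)
    blocked_paths=["/home/usr/scripts/project/.env"],    # 明确禁止访问的文件(假设路径)
)
```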

View File

@ -19,4 +19,4 @@ $demo_calculator_live
以下是从网络摄像头实时流式传输图像的示例代码。
$code_stream_frames
$code_stream_frames

View File

@ -16,7 +16,8 @@
对于多个输入,该目录必须包含一个带有示例值的 log.csv 文件。
在计算器演示的上下文中,我们可以设置 `examples='/demo/calculator/examples'` ,在该目录中包含以下 `log.csv` 文件:
contain a log.csv file with the example values.
In the context of the calculator demo, we can set `examples='/demo/calculator/examples'` and in that directory we include the following `log.csv` file:
In the context of the calculator demo, we can set `examples='/demo/calculator/examples'` and in that directory we include the following `log.csv` file:
```csv
num,operation,num2
5,"add",3

View File

@ -53,7 +53,7 @@ gr.Interface.load("huggingface/gpt2").launch();
```
```python
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",
inputs=gr.Textbox(lines=5, label="Input Text") # customizes the input component
).launch()
```
@ -94,4 +94,4 @@ gr.Series(generator, translator).launch() # this demo generates text, then tran
当然,您还可以在适当的情况下同时使用 `Parallel``Series`
在[文档](https://gradio.app/docs#parallel)中了解有关并行和串行 (`Parallel``Series`) 的更多信息。
在[文档](https://gradio.app/docs#parallel)中了解有关并行和串行 (`Parallel``Series`) 的更多信息。
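作为补充,下面是一个使用 `Parallel` 的简要示意(模型名称取自上文,仅作说明):

```python
import gradio as gr

# 并行比较两个文本生成模型的输出
gpt2 = gr.Interface.load("huggingface/gpt2")
gptj = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")

gr.Parallel(gpt2, gptj).launch()
```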

View File

@ -82,7 +82,7 @@ with gr.Blocks() as demo:
else:
return 0, "hungry"
gr.Button("EAT").click(
fn=eat,
fn=eat,
inputs=food_box,
outputs=[food_box, status_box]
)
@ -102,7 +102,7 @@ with gr.Blocks() as demo:
else:
return {status_box: "hungry"}
gr.Button("EAT").click(
fn=eat,
fn=eat,
inputs=food_box,
outputs=[food_box, status_box]
)
@ -154,4 +154,4 @@ $demo_sine_curve
$code_tictactoe
$demo_tictactoe
$demo_tictactoe

View File

@ -71,7 +71,7 @@ $demo_blocks_form
## 可变数量的输出 (Variable Number of Outputs)
通过以动态方式调整组件的可见性,可以创建支持 *可变数量输出* 的 Gradio 演示。这是一个非常简单的例子,其中输出文本框的数量由输入滑块控制:
通过以动态方式调整组件的可见性,可以创建支持 _可变数量输出_ 的 Gradio 演示。这是一个非常简单的例子,其中输出文本框的数量由输入滑块控制:
例如:
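下面是一个示意性的写法(数量上限与组件布局均为假设),通过 `gr.update(visible=...)` 控制可见的文本框数量:

```python
import gradio as gr

MAX_BOXES = 5

def variable_outputs(k):
    k = int(k)
    # 前 k 个文本框可见,其余隐藏
    return [gr.update(visible=True)] * k + [gr.update(visible=False)] * (MAX_BOXES - k)

with gr.Blocks() as demo:
    slider = gr.Slider(1, MAX_BOXES, value=1, step=1, label="输出文本框数量")
    textboxes = [gr.Textbox(visible=(i == 0)) for i in range(MAX_BOXES)]
    slider.change(variable_outputs, slider, textboxes)

demo.launch()
```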

View File

@ -18,6 +18,7 @@ Gradio 自带一套预构建的主题,您可以从 `gr.themes.*` 中加载这
要增加附加的样式能力,您可以使用 `css=` kwarg 将任何 CSS 传递给您的应用程序。
Gradio 应用程序的基类是 `gradio-container`,因此下面是一个示例,用于更改 Gradio 应用程序的背景颜色:
```python
with gr.Blocks(css=".gradio-container {background-color: red}") as demo:
...
@ -38,7 +39,7 @@ with gr.Blocks(css=".gradio-container {background: url('file=clouds.jpg')}") as
```python
css = """
#warning {background-color: #FFCCCB}
#warning {background-color: #FFCCCB}
.feedback textarea {font-size: 24px !important}
"""
@ -54,4 +55,4 @@ CSS `#warning` 规则集仅针对第二个文本框,而 `.feedback` 规则集
事件监听器具有 `_js` 参数,可以接受 JavaScript 函数作为字符串,并像 Python 事件监听器函数一样处理它。您可以传递 JavaScript 函数和 Python 函数(在这种情况下,先运行 JavaScript 函数),或者仅传递 JavaScript并将 Python 的 `fn` 设置为 `None`)。请查看下面的代码:
$code_blocks_js_methods
$demo_blocks_js_methods
$demo_blocks_js_methods
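作为示意(JavaScript 函数内容为假设),`_js` 的用法大致如下:

```python
import gradio as gr

with gr.Blocks() as demo:
    inp = gr.Textbox(label="输入")
    out = gr.Textbox(label="输出")
    btn = gr.Button("转为大写")
    # 仅传入 JavaScript此时 Python 的 fn 设为 None转换在浏览器端完成
    btn.click(None, inp, out, _js="(x) => x.toUpperCase()")

demo.launch()
```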

View File

@ -1,4 +1,5 @@
# 使用 Gradio 块像函数一样
Tags: TRANSLATION, HUB, SPACES
**先决条件**: 本指南是在块介绍的基础上构建的。请确保[先阅读该指南](https://gradio.app/quickstart/#blocks-more-flexibility-and-control)。

View File

@ -26,9 +26,9 @@ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
def predict(text):
return pipe(text)[0]["translation_text"]
demo = gr.Interface(
fn=predict,
fn=predict,
inputs='text',
outputs='text',
)
@ -70,9 +70,9 @@ demo.launch()
您可能会注意到,第一次推理大约需要 20 秒。这是因为推理 API 正在服务器中加载模型。之后您会获得一些好处:
* 推理速度更快。
* 服务器缓存您的请求。
* 您获得内置的自动缩放功能。
- 推理速度更快。
- 服务器缓存您的请求。
- 您获得内置的自动缩放功能。
## 托管您的 Gradio 演示
@ -96,6 +96,7 @@ file_url = upload_file(
token=hf_token,
)
```
在这里,`create_repo` 使用特定帐户的 Write Token 在特定帐户下创建一个带有目标名称的 gradio repo。`repo_name` 获取相关存储库的完整存储库名称。最后,`upload_file` 将文件上传到存储库中,并将其命名为 `app.py`
## 在其他网站上嵌入您的 Space 演示

View File

@ -260,10 +260,10 @@ with gr.Blocks() as demo:
## 如何在 Comet 组织上贡献 Gradio 演示
* 在 Hugging Face 上创建帐号[此处](https://huggingface.co/join)。
* 在用户名下添加 Gradio 演示,请参阅[此处](https://huggingface.co/course/chapter9/4?fw=pt)以设置 Gradio 演示。
* 请求加入 Comet 组织[此处](https://huggingface.co/Comet)。
- 在 Hugging Face 上创建帐号[此处](https://huggingface.co/join)。
- 在用户名下添加 Gradio 演示,请参阅[此处](https://huggingface.co/course/chapter9/4?fw=pt)以设置 Gradio 演示。
- 请求加入 Comet 组织[此处](https://huggingface.co/Comet)。
## 更多资源
* [Comet 文档](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)
- [Comet 文档](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)

View File

@ -8,15 +8,16 @@ Tags: ONNXSPACES
在这个指南中,我们将为您介绍以下内容:
* ONNX、ONNX 模型仓库、Gradio 和 Hugging Face Spaces 的介绍
* 如何为 EfficientNet-Lite4 设置 Gradio 演示
* 如何为 Hugging Face 上的 ONNX 组织贡献自己的 Gradio 演示
- ONNX、ONNX 模型仓库、Gradio 和 Hugging Face Spaces 的介绍
- 如何为 EfficientNet-Lite4 设置 Gradio 演示
- 如何为 Hugging Face 上的 ONNX 组织贡献自己的 Gradio 演示
下面是一个 ONNX 模型的示例:在下面尝试 EfficientNet-Lite4 演示。
<iframe src="https://onnx-efficientnet-lite4.hf.space" frameBorder="0" height="810" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
## ONNX 模型仓库是什么?
Open Neural Network Exchange[ONNX](https://onnx.ai/)是一种表示机器学习模型的开放标准格式。ONNX 由一个实现了该格式的合作伙伴社区支持,该社区将其实施到许多框架和工具中。例如,如果您在 TensorFlow 或 PyTorch 中训练了一个模型,您可以轻松地将其转换为 ONNX然后使用类似 ONNX Runtime 的引擎 / 编译器在各种设备上运行它。
[ONNX 模型仓库](https://github.com/onnx/models)是由社区成员贡献的一组预训练的先进模型,格式为 ONNX。每个模型都附带了用于模型训练和运行推理的 Jupyter 笔记本。这些笔记本以 Python 编写,并包含到训练数据集的链接,以及描述模型架构的原始论文的参考文献。
@ -38,9 +39,11 @@ Hugging Face Spaces 是 Gradio 演示的免费托管选项。Spaces 提供了 3
Hugging Face 模型中心还支持 ONNX 模型,并且可以通过[ONNX 标签](https://huggingface.co/models?library=onnx&sort=downloads)对 ONNX 模型进行筛选
## Hugging Face 是如何帮助 ONNX 模型仓库的?
ONNX 模型仓库中有许多 Jupyter 笔记本供用户测试模型。以前,用户需要自己下载模型并在本地运行这些笔记本测试。有了 Hugging Face测试过程可以更简单和用户友好。用户可以在 Hugging Face Spaces 上轻松尝试 ONNX 模型仓库中的某个模型,并使用 ONNX Runtime 运行由 Gradio 提供支持的快速演示全部在云端进行无需在本地下载任何内容。请注意ONNX 有各种运行时,例如[ONNX Runtime](https://github.com/microsoft/onnxruntime)、[MXNet](https://github.com/apache/incubator-mxnet)等
## ONNX Runtime 的作用是什么?
ONNX Runtime 是一个跨平台的推理和训练机器学习加速器。它使得在 Hugging Face 上使用 ONNX 模型仓库中的模型进行实时 Gradio 演示成为可能。
ONNX Runtime 可以实现更快的客户体验和更低的成本,支持来自 PyTorch 和 TensorFlow/Keras 等深度学习框架以及 scikit-learn、LightGBM、XGBoost 等传统机器学习库的模型。ONNX Runtime 与不同的硬件、驱动程序和操作系统兼容,并通过利用适用的硬件加速器以及图形优化和转换提供最佳性能。有关更多信息,请参阅[官方网站](https://onnxruntime.ai/)。
@ -110,9 +113,9 @@ sess = ort.InferenceSession(model)
def inference(img):
img = cv2.imread(img)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = pre_process_edgetpu(img, (224, 224, 3))
img_batch = np.expand_dims(img, axis=0)
results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
@ -121,7 +124,7 @@ def inference(img):
for r in result:
resultdic[labels[str(r)]] = float(results[0][r])
return resultdic
title = "EfficientNet-Lite4"
description = "EfficientNet-Lite 4是最大的变体也是EfficientNet-Lite模型集合中最准确的。它是一个仅包含整数的量化模型具有所有EfficientNet模型中最高的准确度。在Pixel 4 CPU上它实现了80.4的ImageNet top-1准确度同时仍然可以实时运行例如30ms/图像)。"
examples = [['catonnx.jpg']]
@ -130,10 +133,10 @@ gr.Interface(inference, gr.Image(type="filepath"), "label", title=title, descrip
## 如何使用 ONNX 模型在 HF Spaces 上贡献 Gradio 演示
* 将模型添加到[onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
* 在 Hugging Face 上创建一个账号[here](https://huggingface.co/join).
* 要查看还有哪些模型需要添加到 ONNX 组织中,请参阅[Models list](https://github.com/onnx/models#models)中的列表
* 在您的用户名下添加 Gradio Demo请参阅此[博文](https://huggingface.co/blog/gradio-spaces)以在 Hugging Face 上设置 Gradio Demo。
* 请求加入 ONNX 组织[here](https://huggingface.co/onnx).
* 一旦获准,将模型从您的用户名下转移到 ONNX 组织
* 在模型表中为模型添加徽章,在[Models list](https://github.com/onnx/models#models)中查看示例
- 将模型添加到[onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
- 在 Hugging Face 上创建一个账号[here](https://huggingface.co/join).
- 要查看还有哪些模型需要添加到 ONNX 组织中,请参阅[Models list](https://github.com/onnx/models#models)中的列表
- 在您的用户名下添加 Gradio Demo请参阅此[博文](https://huggingface.co/blog/gradio-spaces)以在 Hugging Face 上设置 Gradio Demo。
- 请求加入 ONNX 组织[here](https://huggingface.co/onnx).
- 一旦获准,将模型从您的用户名下转移到 ONNX 组织
- 在模型表中为模型添加徽章,在[Models list](https://github.com/onnx/models#models)中查看示例

View File

@ -8,9 +8,9 @@
在本指南中,我们将引导您完成以下内容:
* Gradio、Hugging Face Spaces 和 Wandb 的介绍
* 如何使用 Wandb 集成为 JoJoGAN 设置 Gradio 演示
* 如何在 Hugging Face 的 Wandb 组织中追踪实验并贡献您自己的 Gradio 演示
- Gradio、Hugging Face Spaces 和 Wandb 的介绍
- 如何使用 Wandb 集成为 JoJoGAN 设置 Gradio 演示
- 如何在 Hugging Face 的 Wandb 组织中追踪实验并贡献您自己的 Gradio 演示
下面是一个使用 Wandb 跟踪训练和实验的模型示例,请在下方尝试 JoJoGAN 演示。
@ -42,216 +42,215 @@ Hugging Face Spaces 是 Gradio 演示的免费托管选项。Spaces 有 3 个 SD
1. 创建 W&B 账号
如果您还没有 W&B 账号,请按照[这些快速说明](https://app.wandb.ai/login)创建免费账号。这不应该超过几分钟的时间。一旦完成(或者如果您已经有一个账户),接下来,我们将运行一个快速的 colab。
如果您还没有 W&B 账号,请按照[这些快速说明](https://app.wandb.ai/login)创建免费账号。这不应该超过几分钟的时间。一旦完成(或者如果您已经有一个账户),接下来,我们将运行一个快速的 colab。
2. 打开 Colab 安装 Gradio 和 W&B
我们将按照 JoJoGAN 存储库中提供的 colab 进行操作,稍作修改以更有效地使用 Wandb 和 Gradio。
我们将按照 JoJoGAN 存储库中提供的 colab 进行操作,稍作修改以更有效地使用 Wandb 和 Gradio。
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)
在顶部安装 Gradio 和 Wandb:
在顶部安装 Gradio 和 Wandb:
```sh
```sh
pip install gradio wandb
```
pip install gradio wandb
```
3. 微调 StyleGAN 和 W&B 实验跟踪
下一步将打开一个 W&B 仪表板,以跟踪实验,并显示一个 Gradio 演示提供的预训练模型,您可以从下拉菜单中选择。这是您需要的代码:
下一步将打开一个 W&B 仪表板,以跟踪实验,并显示一个 Gradio 演示提供的预训练模型,您可以从下拉菜单中选择。这是您需要的代码:
```python
```python
alpha = 1.0
alpha = 1-alpha
alpha = 1.0
alpha = 1-alpha
preserve_color = True
num_iter = 100
log_interval = 50
preserve_color = True
num_iter = 100
log_interval = 50
samples = []
column_names = ["Reference (y)", "Style Code(w)", "Real Face Image(x)"]
samples = []
column_names = ["Reference (y)", "Style Code(w)", "Real Face Image(x)"]
wandb.init(project="JoJoGAN")
config = wandb.config
config.num_iter = num_iter
config.preserve_color = preserve_color
wandb.log(
{"Style reference": [wandb.Image(transforms.ToPILImage()(target_im))]},
step=0)
wandb.init(project="JoJoGAN")
config = wandb.config
config.num_iter = num_iter
config.preserve_color = preserve_color
wandb.log(
{"Style reference": [wandb.Image(transforms.ToPILImage()(target_im))]},
step=0)
# 加载判别器用于感知损失
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)
# 加载判别器用于感知损失
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)
# 重置生成器
del generator
generator = deepcopy(original_generator)
# 重置生成器
del generator
generator = deepcopy(original_generator)
g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))
g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))
# 用于生成一族合理真实图像-> 假图像的更换图层
if preserve_color:
id_swap = [9,11,15,16,17]
else:
id_swap = list(range(7, generator.n_latent))
# 用于生成一族合理真实图像-> 假图像的更换图层
if preserve_color:
id_swap = [9,11,15,16,17]
else:
id_swap = list(range(7, generator.n_latent))
for idx in tqdm(range(num_iter)):
mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
in_latent = latents.clone()
in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]
for idx in tqdm(range(num_iter)):
mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
in_latent = latents.clone()
in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]
img = generator(in_latent, input_is_latent=True)
img = generator(in_latent, input_is_latent=True)
with torch.no_grad():
real_feat = discriminator(targets)
fake_feat = discriminator(img)
with torch.no_grad():
real_feat = discriminator(targets)
fake_feat = discriminator(img)
loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)
loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)
wandb.log({"loss": loss}, step=idx)
if idx % log_interval == 0:
generator.eval()
my_sample = generator(my_w, input_is_latent=True)
generator.train()
my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))
wandb.log(
{"Current stylization": [wandb.Image(my_sample)]},
step=idx)
table_data = [
wandb.Image(transforms.ToPILImage()(target_im)),
wandb.Image(img),
wandb.Image(my_sample),
]
samples.append(table_data)
wandb.log({"loss": loss}, step=idx)
if idx % log_interval == 0:
generator.eval()
my_sample = generator(my_w, input_is_latent=True)
generator.train()
my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))
wandb.log(
{"Current stylization": [wandb.Image(my_sample)]},
step=idx)
table_data = [
wandb.Image(transforms.ToPILImage()(target_im)),
wandb.Image(img),
wandb.Image(my_sample),
]
samples.append(table_data)
g_optim.zero_grad()
loss.backward()
g_optim.step()
g_optim.zero_grad()
loss.backward()
g_optim.step()
out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({" 当前样本数 ": out_table})
```
out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({" 当前样本数 ": out_table})
```
4. 保存、下载和加载模型
以下是如何保存和下载您的模型。
以下是如何保存和下载您的模型。
```python
```python
from PIL import Image
import torch
torch.backends.cudnn.benchmark = True
from torchvision import transforms, utils
from util import *
import math
import random
import numpy as np
from torch import nn, autograd, optim
from torch.nn import functional as F
from tqdm import tqdm
import lpips
from model import *
from e4e_projection import projection as e4e_projection
from PIL import Image
import torch
torch.backends.cudnn.benchmark = True
from torchvision import transforms, utils
from util import *
import math
import random
import numpy as np
from torch import nn, autograd, optim
from torch.nn import functional as F
from tqdm import tqdm
import lpips
from model import *
from e4e_projection import projection as e4e_projection
from copy import deepcopy
import imageio
from copy import deepcopy
import imageio
import os
import sys
import torchvision.transforms as transforms
from argparse import Namespace
from e4e.models.psp import pSp
from util import *
from huggingface_hub import hf_hub_download
from google.colab import files
torch.save({"g": generator.state_dict()}, "your-model-name.pt")
import os
import sys
import torchvision.transforms as transforms
from argparse import Namespace
from e4e.models.psp import pSp
from util import *
from huggingface_hub import hf_hub_download
from google.colab import files
torch.save({"g": generator.state_dict()}, "your-model-name.pt")
files.download('your-model-name.pt')
files.download('your-model-name.pt')
latent_dim = 512
device="cuda"
model_path_s = hf_hub_download(repo_id="akhaliq/jojogan-stylegan2-ffhq-config-f", filename="stylegan2-ffhq-config-f.pt")
original_generator = Generator(1024, latent_dim, 8, 2).to(device)
ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)
original_generator.load_state_dict(ckpt["g_ema"], strict=False)
mean_latent = original_generator.mean_latent(10000)
latent_dim = 512
device="cuda"
model_path_s = hf_hub_download(repo_id="akhaliq/jojogan-stylegan2-ffhq-config-f", filename="stylegan2-ffhq-config-f.pt")
original_generator = Generator(1024, latent_dim, 8, 2).to(device)
ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)
original_generator.load_state_dict(ckpt["g_ema"], strict=False)
mean_latent = original_generator.mean_latent(10000)
generator = deepcopy(original_generator)
generator = deepcopy(original_generator)
ckpt = torch.load("/content/JoJoGAN/your-model-name.pt", map_location=lambda storage, loc: storage)
generator.load_state_dict(ckpt["g"], strict=False)
generator.eval()
ckpt = torch.load("/content/JoJoGAN/your-model-name.pt", map_location=lambda storage, loc: storage)
generator.load_state_dict(ckpt["g"], strict=False)
generator.eval()
plt.rcParams['figure.dpi'] = 150
plt.rcParams['figure.dpi'] = 150
transform = transforms.Compose(
[
transforms.Resize((1024, 1024)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]
)
transform = transforms.Compose(
[
transforms.Resize((1024, 1024)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]
)
def inference(img):
img.save('out.jpg')
aligned_face = align_face('out.jpg')
def inference(img):
img.save('out.jpg')
aligned_face = align_face('out.jpg')
my_w = e4e_projection(aligned_face, "out.pt", device).unsqueeze(0)
with torch.no_grad():
my_sample = generator(my_w, input_is_latent=True)
my_w = e4e_projection(aligned_face, "out.pt", device).unsqueeze(0)
with torch.no_grad():
my_sample = generator(my_w, input_is_latent=True)
npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()
imageio.imwrite('filename.jpeg', npimage)
return 'filename.jpeg'
```
npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()
imageio.imwrite('filename.jpeg', npimage)
return 'filename.jpeg'
```
5. 构建 Gradio 演示
```python
```python
import gradio as gr
import gradio as gr
title = "JoJoGAN"
description = "JoJoGAN 的 Gradio 演示:一次性面部风格化。要使用它,只需上传您的图像,或单击示例之一加载它们。在下面的链接中阅读更多信息。"
title = "JoJoGAN"
description = "JoJoGAN 的 Gradio 演示:一次性面部风格化。要使用它,只需上传您的图像,或单击示例之一加载它们。在下面的链接中阅读更多信息。"
demo = gr.Interface(
inference,
gr.Image(type="pil"),
gr.Image(type="filepath"),
title=title,
description=description
)
demo = gr.Interface(
inference,
gr.Image(type="pil"),
gr.Image(type="filepath"),
title=title,
description=description
)
demo.launch(share=True)
```
demo.launch(share=True)
```
6. 将 Gradio 集成到 W&B 仪表板
最后一步——将 Gradio 演示与 W&B 仪表板集成,只需要一行额外的代码 :
最后一步——将 Gradio 演示与 W&B 仪表板集成,只需要一行额外的代码 :
```python
```python
demo.integrate(wandb=wandb)
```
demo.integrate(wandb=wandb)
```
调用集成之后,将创建一个演示,您可以将其集成到仪表板或报告中
调用集成之后,将创建一个演示,您可以将其集成到仪表板或报告中
在 W&B 之外,使用 gradio-app 标记允许任何人直接将 Gradio 演示嵌入到其博客、网站、文档等中的 HF spaces 上 :
在 W&B 之外,使用 gradio-app 标记允许任何人直接将 Gradio 演示嵌入到其博客、网站、文档等中的 HF spaces 上 :
```html
&lt;gradio-app space="akhaliq/JoJoGAN"&gt;&lt;/gradio-app&gt;
```
```html
&lt;gradio-app space="akhaliq/JoJoGAN"&gt;&lt;/gradio-app&gt;
```
7.(可选)在 Gradio 应用程序中嵌入 W&B 图
也可以在 Gradio 应用程序中嵌入 W&B 图。为此,您可以创建一个 W&B 报告,并在一个 `gr.HTML` 块中将其嵌入到 Gradio 应用程序中。
报告需要是公开的,您需要在 iFrame 中包装 URL如下所示 :
The Report will need to be public and you will need to wrap the URL within an iFrame like this:
The Report will need to be public and you will need to wrap the URL within an iFrame like this:
```python
import gradio as gr
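# 示意性补全(非原文)report_url 为假设的公开 W&B 报告地址
report_url = "https://wandb.ai/your-entity/your-project/reports/your-report"

with gr.Blocks() as demo:
    # 用 gr.HTML 在应用中以 iFrame 形式嵌入报告
    gr.HTML(
        f'<iframe src="{report_url}" width="100%" height="500" frameborder="0"></iframe>'
    )

demo.launch()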
@ -271,15 +270,15 @@ Hugging Face Spaces 是 Gradio 演示的免费托管选项。Spaces 有 3 个 SD
希望您喜欢此嵌入 Gradio 演示到 W&B 报告的简短演示!感谢您一直阅读到最后。回顾一下 :
* 仅需要一个单一参考图像即可对 JoJoGAN 进行微调,通常在 GPU 上需要约 1 分钟。训练完成后,可以将样式应用于任何输入图像。在论文中阅读更多内容。
- 仅需要一个单一参考图像即可对 JoJoGAN 进行微调,通常在 GPU 上需要约 1 分钟。训练完成后,可以将样式应用于任何输入图像。在论文中阅读更多内容。
* W&B 可以通过添加几行代码来跟踪实验,您可以在单个集中的仪表板中可视化、排序和理解您的实验。
- W&B 可以通过添加几行代码来跟踪实验,您可以在单个集中的仪表板中可视化、排序和理解您的实验。
* Gradio 则在用户友好的界面中演示模型,可以在网络上任何地方共享。
- Gradio 则在用户友好的界面中演示模型,可以在网络上任何地方共享。
## 如何在 Wandb 组织的 HF spaces 上 贡献 Gradio 演示
* 在 Hugging Face 上创建一个帐户[此处](https://huggingface.co/join)。
* 在您的用户名下添加 Gradio 演示,请参阅[此教程](https://huggingface.co/course/chapter9/4?fw=pt) 以在 Hugging Face 上设置 Gradio 演示。
* 申请加入 wandb 组织[此处](https://huggingface.co/wandb)。
* 批准后,将模型从自己的用户名转移到 Wandb 组织中。
- 在 Hugging Face 上创建一个帐户[此处](https://huggingface.co/join)。
- 在您的用户名下添加 Gradio 演示,请参阅[此教程](https://huggingface.co/course/chapter9/4?fw=pt) 以在 Hugging Face 上设置 Gradio 演示。
- 申请加入 wandb 组织[此处](https://huggingface.co/wandb)。
- 批准后,将模型从自己的用户名转移到 Wandb 组织中。

View File

@ -7,7 +7,7 @@ Tags: VISION, RESNET, PYTORCH
图像分类是计算机视觉中的一个核心任务。构建更好的分类器以区分图片中存在的物体是当前研究的一个热点领域,因为它的应用范围从自动驾驶车辆到医学成像等领域都很广泛。
这样的模型非常适合 Gradio 的 *image* 输入组件,因此在本教程中,我们将使用 Gradio 构建一个用于图像分类的 Web 演示。我们将能够在 Python 中构建整个 Web 应用程序,效果如下(试试其中一个示例!):
这样的模型非常适合 Gradio 的 _image_ 输入组件,因此在本教程中,我们将使用 Gradio 构建一个用于图像分类的 Web 演示。我们将能够在 Python 中构建整个 Web 应用程序,效果如下(试试其中一个示例!):
<iframe src="https://abidlabs-pytorch-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
@ -48,17 +48,17 @@ def predict(inp):
inp = transforms.ToTensor()(inp).unsqueeze(0)
with torch.no_grad():
prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
return confidences
```
让我们逐步来看一下这段代码。该函数接受一个参数:
* `inp`:输入图片,类型为 `PIL` 图像
- `inp`:输入图片,类型为 `PIL` 图像
然后,该函数将图像转换为 PIL 图像,最终转换为 PyTorch 的 `tensor`,将其输入模型,并返回:
* `confidences`:预测结果,以字典形式表示,其中键是类别标签,值是置信度概率
- `confidences`:预测结果,以字典形式表示,其中键是类别标签,值是置信度概率
## 第三步 - 创建 Gradio 界面
@ -73,7 +73,7 @@ def predict(inp):
```python
import gradio as gr
gr.Interface(fn=predict,
gr.Interface(fn=predict,
inputs=gr.Image(type="pil"),
outputs=gr.Label(num_top_classes=3),
examples=["lion.jpg", "cheetah.jpg"]).launch()
@ -83,6 +83,6 @@ gr.Interface(fn=predict,
<iframe src="https://abidlabs-pytorch-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
----------
---
完成了!这就是构建图像分类器 Web 演示所需的所有代码。如果您想与他人共享,请在 `launch()` 接口时设置 `share=True`

View File

@ -7,7 +7,7 @@
图像分类是计算机视觉中的一项核心任务。构建更好的分类器来识别图像中的物体是一个研究的热点领域,因为它在交通控制系统到卫星成像等应用中都有广泛的应用。
这样的模型非常适合与 Gradio 的 *image* 输入组件一起使用,因此在本教程中,我们将使用 Gradio 构建一个用于图像分类的 Web 演示。我们可以在 Python 中构建整个 Web 应用程序,它的界面将如下所示(试试其中一个例子!):
这样的模型非常适合与 Gradio 的 _image_ 输入组件一起使用,因此在本教程中,我们将使用 Gradio 构建一个用于图像分类的 Web 演示。我们可以在 Python 中构建整个 Web 应用程序,它的界面将如下所示(试试其中一个例子!):
<iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
@ -52,11 +52,11 @@ def classify_image(inp):
让我们来详细了解一下。该函数接受一个参数:
* `inp`:输入图像的 `numpy` 数组
- `inp`:输入图像的 `numpy` 数组
然后,函数添加一个批次维度,通过模型进行处理,并返回:
* `confidences`:预测结果,以字典形式表示,其中键是类标签,值是置信概率
- `confidences`:预测结果,以字典形式表示,其中键是类标签,值是置信概率
## 第三步 —— 创建 Gradio 界面
@ -71,7 +71,7 @@ def classify_image(inp):
```python
import gradio as gr
gr.Interface(fn=classify_image,
gr.Interface(fn=classify_image,
inputs=gr.Image(shape=(224, 224)),
outputs=gr.Label(num_top_classes=3),
examples=["banana.jpg", "car.jpg"]).launch()
@ -81,6 +81,6 @@ gr.Interface(fn=classify_image,
<iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
----------
---
完成!这就是构建图像分类器的 Web 演示所需的所有代码。如果您想与他人分享,请尝试在启动接口时设置 `share=True`

View File

@ -7,7 +7,7 @@
图像分类是计算机视觉中的重要任务。构建更好的分类器以确定图像中存在的对象是当前研究的热点领域,因为它在从人脸识别到制造质量控制等方面都有应用。
最先进的图像分类器基于 *transformers* 架构,该架构最初在自然语言处理任务中很受欢迎。这种架构通常被称为 vision transformers (ViT)。这些模型非常适合与 Gradio 的*图像*输入组件一起使用,因此在本教程中,我们将构建一个使用 Gradio 进行图像分类的 Web 演示。我们只需用**一行 Python 代码**即可构建整个 Web 应用程序,其效果如下(试用一下示例之一!):
最先进的图像分类器基于 _transformers_ 架构,该架构最初在自然语言处理任务中很受欢迎。这种架构通常被称为 vision transformers (ViT)。这些模型非常适合与 Gradio 的*图像*输入组件一起使用,因此在本教程中,我们将构建一个使用 Gradio 进行图像分类的 Web 演示。我们只需用**一行 Python 代码**即可构建整个 Web 应用程序,其效果如下(试用一下示例之一!):
<iframe src="https://abidlabs-vision-transformer.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
@ -47,6 +47,6 @@ gr.Interface.load(
<iframe src="https://abidlabs-vision-transformer.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
----------
---
完成!只需一行代码,您就建立了一个图像分类器的 Web 演示。如果您想与他人分享,请在 `launch()` 接口时设置 `share=True`

View File

@ -1,7 +1,7 @@
# 连接到数据库
相关空间https://huggingface.co/spaces/gradio/chicago-bike-share-dashboard
标签TABULAR, PLOTS
标签TABULAR, PLOTS
## 介绍
@ -24,7 +24,7 @@
## 步骤 1 - 创建数据库
我们将在 Amazon 的 RDS 服务上托管我们的数据。如果还没有 AWS 账号,请创建一个
并在免费层级上创建一个 PostgreSQL 数据库。
并在免费层级上创建一个 PostgreSQL 数据库。
**重要提示**:如果您计划在 HuggingFace Spaces 上托管此演示,请确保数据库在 **8080** 端口上。Spaces
将阻止除端口 80、443 或 8080 之外的所有外部连接,如此[处所示](https://huggingface.co/docs/hub/spaces-overview#networking)。
@ -34,13 +34,14 @@ RDS 不允许您在 80 或 443 端口上创建 postgreSQL 实例。
为了演示的目的,我们只会上传 2022 年 3 月的数据。
## 步骤 2.a - 编写 ETL 代码
我们将查询数据库,按自行车类型(电动、标准或有码)进行分组,并获取总骑行次数。
我们还将查询每个站点的出发骑行次数,并获取前 5 个。
我们还将查询每个站点的出发骑行次数,并获取前 5 个。
然后,我们将使用 matplotlib 将查询结果可视化。
我们将使用 pandas 的[read_sql](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html)
方法来连接数据库。这需要安装 `psycopg2` 库。
方法来连接数据库。这需要安装 `psycopg2` 库。
为了连接到数据库,我们将指定数据库的用户名、密码和主机作为环境变量。
这样可以通过避免将敏感信息以明文形式存储在应用程序文件中,使我们的应用程序更安全。
@ -77,7 +78,7 @@ def get_count_ride_type():
def get_most_popular_stations():
df = pd.read_sql(
"""
SELECT COUNT(ride_id) as n, MAX(start_station_name) as station
@ -109,6 +110,7 @@ DB_USER='username' DB_PASSWORD='password' DB_HOST='host' python app.py
```
## 步骤 2.c - 编写您的 gradio 应用程序
我们将使用两个单独的 `gr.Plot` 组件将我们的 matplotlib 图表并排显示在一起,使用 `gr.Row()`
因为我们已经在 `demo.load()` 事件触发器中封装了获取数据的函数,
我们的演示将在每次网页加载时从数据库**动态**获取最新数据。🪄
@ -128,6 +130,7 @@ demo.launch()
```
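下面是该步骤的一个示意性草图(组件变量名为假设),完整代码以上文链接的 Space 为准:

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        bike_type_plot = gr.Plot()
        station_plot = gr.Plot()
    # 每次页面加载时触发,从数据库动态获取最新数据
    demo.load(get_count_ride_type, inputs=None, outputs=bike_type_plot)
    demo.load(get_most_popular_stations, inputs=None, outputs=station_plot)

demo.launch()
```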
## 步骤 3 - 部署
如果您运行上述代码,您的应用程序将在本地运行。
您甚至可以通过将 `share=True` 参数传递给 `launch` 来获得一个临时共享链接。
@ -140,9 +143,10 @@ demo.launch()
![secrets](/assets/guides/secrets.png)
## 结论
恭喜你!您知道如何将您的 Gradio 应用程序连接到云端托管的数据库!☁️
我们的仪表板现在正在[Spaces](https://huggingface.co/spaces/gradio/chicago-bike-share-dashboard)上运行。
完整代码在[这里](https://huggingface.co/spaces/gradio/chicago-bike-share-dashboard/blob/main/app.py)
正如您所见Gradio 使您可以连接到您的数据并以您想要的方式显示!🔥
正如您所见Gradio 使您可以连接到您的数据并以您想要的方式显示!🔥

View File

@ -1,6 +1,6 @@
# 从 BigQuery 数据创建实时仪表盘
Tags: 表格 , 仪表盘 , 绘图
Tags: 表格 , 仪表盘 , 绘图
[Google BigQuery](https://cloud.google.com/bigquery) 是一个基于云的用于处理大规模数据集的服务。它是一个无服务器且高度可扩展的数据仓库解决方案,使用户能够使用类似 SQL 的查询分析数据。
@ -12,7 +12,7 @@ Tags: 表格 , 仪表盘 , 绘图
1. 设置 BigQuery 凭据
2. 使用 BigQuery 客户端
3. 构建实时仪表盘(仅需 *7 行 Python 代码*
3. 构建实时仪表盘(仅需 _7 行 Python 代码_
我们将使用[纽约时报的 COVID 数据集](https://www.nytimes.com/interactive/2021/us/covid-cases.html),该数据集作为一个公共数据集可在 BigQuery 上使用。数据集名为 `covid19_nyt.us_counties`,其中包含有关美国各县 COVID 确诊病例和死亡人数的最新信息。
@ -36,16 +36,16 @@ Tags: 表格 , 仪表盘 , 绘图
```json
{
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
@ -73,15 +73,15 @@ client = bigquery.Client.from_service_account_json("path/to/key.json")
import numpy as np
QUERY = (
'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
'ORDER BY date DESC,confirmed_cases DESC '
'LIMIT 20')
def run_query():
query_job = client.query(QUERY)
query_result = query_job.result()
query_job = client.query(QUERY)
query_result = query_job.result()
df = query_result.to_dataframe()
# Select a subset of columns
# Select a subset of columns
df = df[["confirmed_cases", "deaths", "county", "state_name"]]
# Convert numeric columns to standard numpy types
df = df.astype({"deaths": np.int64, "confirmed_cases": np.int64})
@ -92,7 +92,7 @@ def run_query():
一旦您有了查询数据的函数,您可以使用 Gradio 库的 `gr.DataFrame` 组件以表格形式显示结果。这是一种检查数据并确保查询正确的有用方式。
以下是如何使用 `gr.DataFrame` 组件显示结果的示例。通过将 `run_query` 函数传递给 `gr.DataFrame`,我们指示 Gradio 在页面加载时立即运行该函数并显示结果。此外,您还可以传递关键字 `every`以告知仪表板每小时刷新一次60*60 秒)。
以下是如何使用 `gr.DataFrame` 组件显示结果的示例。通过将 `run_query` 函数传递给 `gr.DataFrame`,我们指示 Gradio 在页面加载时立即运行该函数并显示结果。此外,您还可以传递关键字 `every`以告知仪表板每小时刷新一次60\*60 秒)。
```py
import gradio as gr
@ -115,8 +115,8 @@ with gr.Blocks() as demo:
gr.Markdown("# 💉 Covid Dashboard (Updated Hourly)")
with gr.Row():
gr.DataFrame(run_query, every=60*60)
gr.ScatterPlot(run_query, every=60*60, x="confirmed_cases",
gr.ScatterPlot(run_query, every=60*60, x="confirmed_cases",
y="deaths", tooltip="county", width=500, height=500)
demo.queue().launch() # Run the demo with queuing enabled
```
```

View File

@ -1,5 +1,6 @@
# 从 Supabase 数据创建仪表盘
Tags: TABULAR, DASHBOARD, PLOTS
Tags: TABULAR, DASHBOARD, PLOTS
[Supabase](https://supabase.com/) 是一个基于云的开源后端,提供了 PostgreSQL 数据库、身份验证和其他有用的功能,用于构建 Web 和移动应用程序。在本教程中,您将学习如何从 Supabase 读取数据,并在 Gradio 仪表盘上以**实时**方式绘制数据。
@ -7,9 +8,9 @@ Tags: TABULAR, DASHBOARD, PLOTS
在这个端到端指南中,您将学习如何:
* 在 Supabase 中创建表
* 使用 Supabase Python 客户端向 Supabase 写入数据
* 使用 Gradio 在实时仪表盘中可视化数据
- 在 Supabase 中创建表
- 使用 Supabase Python 客户端向 Supabase 写入数据
- 使用 Gradio 在实时仪表盘中可视化数据
如果您已经在 Supabase 上有数据想要在仪表盘中可视化,您可以跳过前两个部分,直接到[可视化数据](#visualize-the-data-in-a-real-time-gradio-dashboard)
@ -63,9 +64,9 @@ import random
main_list = []
for i in range(10):
value = {'product_id': i,
value = {'product_id': i,
'product_name': f"Item {i}",
'inventory_count': random.randint(1, 100),
'inventory_count': random.randint(1, 100),
'price': random.random()*100
}
main_list.append(value)
@ -85,6 +86,7 @@ data = client.table('Product').insert(main_list).execute()
9\. 编写一个函数,从 `Product` 表加载数据并将其作为 pandas DataFrame 返回:
import supabase
```python
import supabase
import pandas as pd
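# 示意性补全(非原文):连接 Supabase 并读取 Product 表
# 此处假设 SUPABASE_URL 与 SUPABASE_SECRET_KEY 与前文保持一致
client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')

def read_data():
    response = client.table('Product').select("*").execute()
    return pd.DataFrame(response.data)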
@ -118,4 +120,4 @@ dashboard.queue().launch()
就是这样!在本教程中,您学习了如何将数据写入 Supabase 数据集,然后读取该数据并将结果绘制为条形图。如果您更新 Supabase 数据库中的数据,您会注意到 Gradio 仪表盘将在一分钟内更新。
尝试在此示例中添加更多绘图和可视化(或使用不同的数据集),以构建一个更复杂的仪表盘!
尝试在此示例中添加更多绘图和可视化(或使用不同的数据集),以构建一个更复杂的仪表盘!

View File

@ -1,19 +1,25 @@
# 从 Google Sheets 创建实时仪表盘
Tags: TABULAR, DASHBOARD, PLOTS
[Google Sheets](https://www.google.com/sheets/about/) 是一种以电子表格形式存储表格数据的简便方法。借助 Gradio 和 pandas可以轻松从公共或私有 Google Sheets 读取数据,然后显示数据或绘制数据。在本博文中,我们将构建一个小型 *real-time* 仪表盘,该仪表盘在 Google Sheets 中的数据更新时进行更新。
Tags: TABULAR, DASHBOARD, PLOTS
[Google Sheets](https://www.google.com/sheets/about/) 是一种以电子表格形式存储表格数据的简便方法。借助 Gradio 和 pandas可以轻松从公共或私有 Google Sheets 读取数据,然后显示数据或绘制数据。在本博文中,我们将构建一个小型 _real-time_ 仪表盘,该仪表盘在 Google Sheets 中的数据更新时进行更新。
构建仪表盘本身只需要使用 Gradio 的 9 行 Python 代码,我们的最终仪表盘如下所示:
<gradio-app space="gradio/line-plot"></gradio-app>
**先决条件**:本指南使用[Gradio Blocks](../quickstart/#blocks-more-flexibility-and-control),因此请确保您熟悉 Blocks 类。
具体步骤略有不同,具体取决于您是使用公开访问还是私有 Google Sheet。我们将分别介绍这两种情况所以让我们开始吧
## Public Google Sheets
由于[`pandas` 库](https://pandas.pydata.org/)的存在,从公共 Google Sheet 构建仪表盘非常简单:
1. 获取要使用的 Google Sheets 的网址。为此,只需进入 Google Sheets单击右上角的“共享”按钮然后单击“获取可共享链接”按钮。这将给您一个类似于以下示例的网址
```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```
2. 现在,修改此网址并使用它从 Google Sheets 读取数据到 Pandas DataFrame 中。 (在下面的代码中,用您的公开 Google Sheet 的网址替换 `URL` 变量)
```python
import pandas as pd
URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0"csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')
@ -22,6 +28,7 @@ def get_data():
```
3. 数据查询是一个函数,这意味着可以使用 `gr.DataFrame` 组件实时显示或使用 `gr.LinePlot` 组件实时绘制数据(当然,根据数据的不同,可能需要不同的绘图方法)。只需将函数传递给相应的组件,并根据组件刷新的频率(以秒为单位)设置 `every` 参数。以下是 Gradio 代码:
```python
import gradio as gr
@ -37,41 +44,50 @@ demo.queue().launch() # Run the demo with queuing enabled
```
到此为止!您现在拥有一个仪表盘,每 5 秒刷新一次,从 Google Sheets 中获取数据。
## 私有 Google Sheets
对于私有 Google Sheets流程需要更多的工作量但并不多关键区别在于现在您必须经过身份验证以授权访问私有 Google Sheets。
### 身份验证
要进行身份验证,需从 Google Cloud 获取凭据。以下是[如何设置 Google Cloud 凭据](https://developers.google.com/workspace/guides/create-credentials)
1. 首先,登录您的 Google Cloud 帐户并转到 Google Cloud 控制台https://console.cloud.google.com/
2. 在 Cloud 控制台中单击左上角的汉堡菜单然后从菜单中选择“API 和服务”。如果您没有现有项目,则需要创建一个。
3. 然后,点击“+ 启用的 API 和服务”按钮允许您为项目启用特定的服务。搜索“Google Sheets API”点击它然后单击“启用”按钮。如果看到“管理”按钮则表示 Google Sheets 已启用,并且您已准备就绪。
4. 在 API 和服务菜单中,点击“凭据”选项卡,然后点击“创建凭据”按钮。
5. 在“创建凭据”对话框中,选择“服务帐号密钥”作为要创建的凭据类型,并为其命名。**记下服务帐号的电子邮件地址**
6. 在选择服务帐号之后选择“JSON”密钥类型然后点击“创建”按钮。这将下载包含您凭据的 JSON 密钥文件到您的计算机。文件类似于以下示例:
```json
{
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
### 查询
在获得凭据的 `.json` 文件后,可以按照以下步骤查询您的 Google Sheet
1. 单击 Google Sheet 右上角的“共享”按钮。使用身份验证子部分第 5 步的服务的电子邮件地址共享 Google Sheets此步骤很重要。然后单击“获取可共享链接”按钮。这将给您一个类似于以下示例的网址
```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```
2. 安装 [`gspread` 库](https://docs.gspread.org/en/v5.7.0/),通过在终端运行以下命令使 Python 中使用 [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) 更加简单:`pip install gspread`
3. 编写一个函数来从 Google Sheet 中加载数据,如下所示(用您的私有 Google Sheet 的 URL 替换 `URL` 变量):
```python
import gspread
import pandas as pd

# 与 Google 进行身份验证并获取表格
URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/edit#gid=0'

View File

@ -1,4 +1,5 @@
# 如何使用地图组件绘制图表
Related spaces:
Tags: PLOTS, MAPS
@ -23,7 +24,7 @@ dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()
def filter_map(min_price, max_price, boroughs):
new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
(df['price'] > min_price) & (df['price'] < max_price)]
names = new_df["name"].tolist()
prices = new_df["price"].tolist()

View File

@ -33,16 +33,16 @@ df = df["train"].to_pandas()
def infer(input_dataframe):
return pd.DataFrame(model.predict(input_dataframe))
gr.Interface(fn = infer, inputs = inputs, outputs = outputs, examples = [[df.head(2)]]).launch()
```
让我们来解析上述代码。
* `fn`:推理函数,接受输入数据帧并返回预测结果。
* `inputs`:我们使用 `Dataframe` 组件作为输入。我们将输入定义为具有 2 行 4 列的数据帧,最初的数据帧将呈现出上述形状的空数据帧。当将 `row_count` 设置为 `dynamic` 时,不必依赖于正在输入的数据集来预定义组件。
* `outputs`:用于存储输出的数据帧组件。该界面可以接受单个或多个样本进行推断,并在一列中为每个样本返回 0 或 1因此我们将 `row_count` 设置为 2`col_count` 设置为 1。`headers` 是由数据帧的列名组成的列表。
* `examples`:您可以通过拖放 CSV 文件或通过示例传递 pandas DataFrame界面会自动获取其标题。
- `fn`:推理函数,接受输入数据帧并返回预测结果。
- `inputs`:我们使用 `Dataframe` 组件作为输入。我们将输入定义为具有 2 行 4 列的数据帧,最初的数据帧将呈现出上述形状的空数据帧。当将 `row_count` 设置为 `dynamic` 时,不必依赖于正在输入的数据集来预定义组件。
- `outputs`:用于存储输出的数据帧组件。该界面可以接受单个或多个样本进行推断,并在一列中为每个样本返回 0 或 1因此我们将 `row_count` 设置为 2`col_count` 设置为 1。`headers` 是由数据帧的列名组成的列表。
- `examples`:您可以通过拖放 CSV 文件或通过示例传递 pandas DataFrame界面会自动获取其标题。
现在我们将为简化版数据可视化仪表板创建一个示例。您可以在相关空间中找到更全面的版本。
@ -68,7 +68,7 @@ def plot(df):
plt.savefig("corr.png")
plots = ["corr.png","scatter.png", "bar.png"]
return plots
inputs = [gr.Dataframe(label="Supersoaker Production Data")]
outputs = [gr.Gallery(label="Profiling Dashboard").style(grid=(1,3))]
@ -79,10 +79,10 @@ gr.Interface(plot, inputs=inputs, outputs=outputs, examples=[df.head(100)], titl
我们将使用与训练模型相同的数据集,但这次我们将创建一个可视化仪表板以展示它。
* `fn`:根据数据创建图表的函数。
* `inputs`:我们使用了与上述相同的 `Dataframe` 组件。
* `outputs`:我们使用 `Gallery` 组件来存放我们的可视化结果。
* `examples`:我们将数据集本身作为示例。
- `fn`:根据数据创建图表的函数。
- `inputs`:我们使用了与上述相同的 `Dataframe` 组件。
- `outputs`:我们使用 `Gallery` 组件来存放我们的可视化结果。
- `examples`:我们将数据集本身作为示例。
## 使用 skops 一行代码轻松加载表格数据界面

View File

@ -13,8 +13,8 @@ Gradio Python 客户端使得将任何 Gradio 应用程序作为 API 使用变
```python
from gradio_client import Client
client = Client("abidlabs/whisper")
client.predict("audio_sample.wav")
client = Client("abidlabs/whisper")
client.predict("audio_sample.wav")
>> "这是Whisper语音识别模型的测试。"
```
@ -50,7 +50,7 @@ client = Client("abidlabs/en2fr") # 一个将英文翻译为法文的Space
```python
from gradio_client import Client
client = Client("abidlabs/my-private-space", hf_token="...")
client = Client("abidlabs/my-private-space", hf_token="...")
```
## 复制空间以供私人使用
@ -65,13 +65,13 @@ from gradio_client import Client
HF_TOKEN = os.environ.get("HF_TOKEN")
client = Client.duplicate("abidlabs/whisper", hf_token=HF_TOKEN)
client.predict("audio_sample.wav")
>> "This is a test of the whisper speech recognition model."
```
>> " 这是 Whisper 语音识别模型的测试。"
如果之前已复制了一个空间,重新运行 `duplicate()` 将*不会*创建一个新的空间。相反,客户端将连接到之前创建的空间。因此,多次运行 `Client.duplicate()` 方法是安全的。
@ -90,6 +90,7 @@ client = Client("https://bec81a83-5b5c-471e.gradio.live")
## 检查 API 端点
一旦连接到 Gradio 应用程序,可以通过调用 `Client.view_api()` 方法查看可用的 API 端点。对于 Whisper 空间,我们可以看到以下信息:
```bash
Client.predict() Usage Info
---------------------------
@ -101,6 +102,7 @@ Named API endpoints: 1
Returns:
- [Textbox] value_0: str (value)
```
这显示了在此空间中有 1 个 API 端点,并显示了如何使用 API 端点进行预测:我们应该调用 `.predict()` 方法(我们将在下面探讨),提供类型为 `str` 的参数 `input_audio`,它是一个`文件路径或 URL`
我们还应该提供 `api_name='/predict'` 参数给 `predict()` 方法。虽然如果一个 Gradio 应用程序只有一个命名的端点,这不是必需的,但它允许我们在单个应用程序中调用不同的端点(如果它们可用)。如果一个应用程序有无名的 API 端点,可以通过运行 `.view_api(all_endpoints=True)` 来显示它们。
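结合上述信息,一个最小的调用示意如下(音频文件名仅为示例):

```python
from gradio_client import Client

client = Client("abidlabs/whisper")
# 显式指定端点名称;当应用只有一个命名端点时也可以省略 api_name
client.predict("audio_sample.wav", api_name="/predict")
```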
@ -117,9 +119,10 @@ client.predict("Hello")
>> Bonjour
```
如果有多个参数,那么你应该将它们作为单独的参数传递给 `.predict()`,就像这样:
```python
from gradio_client import Client
client = Client("gradio/calculator")
@ -139,13 +142,12 @@ client.predict("https://audio-samples.github.io/samples/mp3/blizzard_uncondition
>> "My thought I have nobody by a beauty and will as you poured. Mr. Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r—"
```
## 异步运行任务 Running jobs asynchronously
应注意`.predict()`是一个*阻塞*操作,因为它在返回预测之前等待操作完成。
在许多情况下,直到你需要预测结果之前,你最好让作业在后台运行。你可以通过使用`.submit()`方法创建一个`Job`实例,然后稍后调用`.result()`在作业上获取结果。例如:
```python
@ -195,24 +197,22 @@ job.status()
>> <Status.STARTING: 'STARTING'>
```
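下面是一个将 `.submit()` 与 `.result()` 结合使用的示意(沿用上文的翻译 Space 示例,端点名称以 `view_api()` 的输出为准):

```python
from gradio_client import Client

client = Client("abidlabs/en2fr")
job = client.submit("Hello", api_name="/predict")  # 立即返回,作业在后台运行

# ……可以先去做别的事情……

job.result()  # 阻塞直到作业完成,返回 "Bonjour"
```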
_注意_`Job`类还有一个`.done()`实例方法,返回一个布尔值,指示作业是否已完成。
## 取消作业 Cancelling Jobs
`Job`类还有一个`.cancel()`实例方法,取消已排队但尚未开始的作业。例如,如果你运行:
```py
client = Client("abidlabs/whisper")
job1 = client.submit("audio_sample1.wav")
job2 = client.submit("audio_sample2.wav")
job1.cancel() # 将返回 False假设作业已开始
job2.cancel() # 将返回 True表示作业已取消
```
如果第一个作业已开始处理,则它将不会被取消。如果第二个作业尚未开始,则它将成功取消并从队列中删除。
## 生成器端点 Generator Endpoints
某些Gradio API端点不返回单个值而是返回一系列值。你可以随时从这样的生成器端点获取返回的一系列值方法是运行`job.outputs()`
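例如(此处的 Space 名称与端点名称仅作示意,请替换为实际的生成器端点):

```python
import time
from gradio_client import Client

client = Client(src="gradio/count_generator")
job = client.submit(3, api_name="/count")

time.sleep(3)  # 等待一会儿,让端点产生若干中间值
job.outputs()  # 返回到目前为止已产生的所有值,例如 ['0', '1', '2']
```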
@ -257,4 +257,4 @@ client = Client("abidlabs/test-yield")
job = client.submit("abcdef")
time.sleep(3)
job.cancel() # 作业在运行 2 个迭代后取消
```

View File

@ -14,7 +14,7 @@ Gradio JavaScript客户端使得使用任何Gradio应用作为API非常简单。
import { client } from "@gradio/client";
const response = await fetch(
"https://github.com/audio-samples/audio-samples.github.io/raw/master/samples/wav/ted_speakers/SalmanKhan/sample-1.wav"
"https://github.com/audio-samples/audio-samples.github.io/raw/master/samples/wav/ted_speakers/SalmanKhan/sample-1.wav"
);
const audio_file = await response.blob();
@ -69,7 +69,7 @@ const app = client("abidlabs/my-private-space", { hf_token="hf_..." })
import { client } from "@gradio/client";
const response = await fetch(
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
);
const audio_file = await response.blob();
@ -85,9 +85,9 @@ const transcription = app.predict("/predict", [audio_file]);
import { client } from "@gradio/client";
const app = await duplicate("abidlabs/whisper", {
hf_token: "hf_...",
timeout: 60,
hardware: "a10g-small"
});
```
@ -121,25 +121,25 @@ console.log(app_info);
```json
{
"named_endpoints": {
"/predict": {
"parameters": [
{
"label": "text",
"component": "Textbox",
"type": "string"
}
],
"returns": [
{
"label": "output",
"component": "Textbox",
"type": "string"
}
]
}
},
"unnamed_endpoints": {}
"named_endpoints": {
"/predict": {
"parameters": [
{
"label": "text",
"component": "Textbox",
"type": "string"
}
],
"returns": [
{
"label": "output",
"component": "Textbox",
"type": "string"
}
]
}
},
"unnamed_endpoints": {}
}
```
@ -173,7 +173,7 @@ const result = await app.predict("/predict", [4, "add", 5]);
import { client } from "@gradio/client";
const response = await fetch(
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
);
const audio_file = await response.blob();
@ -189,11 +189,11 @@ const result = await client.predict("/predict", [audio_file]);
import { client } from "@gradio/client";
function log_result(payload) {
const {
data: [translation]
} = payload;
console.log(`翻译结果为:${translation}`);
}
const app = await client("abidlabs/en2fr");
@ -210,9 +210,7 @@ job.on("data", log_result);
import { client } from "@gradio/client";
function log_status(status) {
console.log(`此作业的当前状态为:${JSON.stringify(status, null, 2)}`);
}
const app = await client("abidlabs/en2fr");
@ -264,6 +262,6 @@ const job = app.submit(0, [9]);
job.on("data", (data) => console.log(data));
setTimeout(() => {
job.cancel();
}, 3000);
```

View File

@ -4,14 +4,13 @@ Tags: CLIENT, API, WEB APP
在本博客文章中,我们将演示如何使用 `gradio_client` [Python库](getting-started-with-the-python-client/) 来以编程方式创建Gradio应用的请求通过创建一个示例FastAPI Web应用。我们将构建的 Web 应用名为“Acappellify”它允许用户上传视频文件作为输入并返回一个没有伴奏音乐的视频版本。它还会显示生成的视频库。
**先决条件**
在开始之前请确保您正在运行Python 3.9或更高版本,并已安装以下库:
- `gradio_client`
- `fastapi`
- `uvicorn`
您可以使用`pip`安装这些库:
@ -138,103 +137,32 @@ async def upload_video(video: UploadFile = File(...)):
将以下内容写入`home.html`文件中:
```html
&lt;!DOCTYPE html>
&lt;html>
&lt;head>
&lt;title> 视频库 &lt;/title>
&lt;style>
body {
font-family: sans-serif;
margin: 0;
padding: 0;
background-color: #f5f5f5;
}
h1 {
text-align: center;
margin-top: 30px;
margin-bottom: 20px;
}
.gallery {
display: flex;
flex-wrap: wrap;
justify-content: center;
gap: 20px;
padding: 20px;
}
.video {
border: 2px solid #ccc;
box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2);
border-radius: 5px;
overflow: hidden;
width: 300px;
margin-bottom: 20px;
}
.video video {
width: 100%;
height: 200px;
}
.video p {
text-align: center;
margin: 10px 0;
}
form {
margin-top: 20px;
text-align: center;
}
input[type="file"] {
display: none;
}
.upload-btn {
display: inline-block;
background-color: #3498db;
color: #fff;
padding: 10px 20px;
font-size: 16px;
border: none;
border-radius: 5px;
cursor: pointer;
}
.upload-btn:hover {
background-color: #2980b9;
}
.file-name {
margin-left: 10px;
}
&lt;/style>
&lt;/head>
&lt;body>
&lt;h1> 视频库 &lt;/h1>
{% if videos %}
&lt;div class="gallery">
{% for video in videos %}
&lt;div class="video">
&lt;video controls>
&lt;source src="{{ url_for('static', path=video) }}" type="video/mp4">
您的浏览器不支持视频标签。
&lt;/video>
&lt;p>{{ video }}&lt;/p>
&lt;/div>
{% endfor %}
&lt;/div>
{% else %}
&lt;p> 尚未上传任何视频。&lt;/p>
{% endif %}
&lt;form action="/uploadvideo/" method="post" enctype="multipart/form-data">
&lt;label for="video-upload" class="upload-btn"> 选择视频文件 &lt;/label>
&lt;input type="file" name="video" id="video-upload">
&lt;span class="file-name">&lt;/span>
&lt;button type="submit" class="upload-btn"> 上传 &lt;/button>
&lt;/form>
&lt;script>
// 在表单中显示所选文件名
const fileUpload = document.getElementById("video-upload");
const fileName = document.querySelector(".file-name");
fileUpload.addEventListener("change", (e) => {
fileName.textContent = e.target.files[0].name;
});
&lt;/script>
&lt;/body>
&lt;/html>
```
@ -263,4 +191,3 @@ INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png)
如果您想了解如何在项目中使用 Gradio Python 客户端的更多信息,请[阅读专门的指南](/getting-started-with-the-python-client/)。

View File

@ -13,6 +13,7 @@
[LangChain代理](https://docs.langchain.com/docs/components/agents/agent)是一个大型语言模型LLM它根据使用其众多工具之一的输入来生成输出。
### Gradio是什么
[Gradio](https://github.com/gradio-app/gradio)是用于构建机器学习Web应用程序并与全球共享的事实上的标准框架-完全由Python驱动🐍
## gradio_tools - 一个端到端的示例
@ -70,13 +71,15 @@ class GradioTool(BaseTool):
def postprocess(self, output: Tuple[Any] | Any) -> str:
pass
```
需要满足的要求是:
1. 工具的名称
2. 工具的描述。这非常关键!代理根据其描述决定使用哪个工具。请确切描述输入和输出应该是什么样的,最好包括示例。
3. Gradio应用程序的url或space id例如`freddyaboulton/calculator`。基于该值,`gradio_tool`将通过API创建一个[gradio客户端](https://github.com/gradio-app/gradio/blob/main/client/python/README.md)实例来查询上游应用程序。如果您不熟悉gradio客户端库请确保点击链接了解更多信息。
4. create_job - 给定一个字符串该方法应该解析该字符串并从客户端返回一个job。大多数情况下这只需将字符串传递给客户端的`submit`函数即可。有关创建job的更多信息请参阅[这里](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)
5. postprocess - 给定作业的结果将其转换为LLM可以向用户显示的字符串。
6. _Optional可选_ - 某些库,例如[MiniChain](https://github.com/srush/MiniChain/tree/main)可能需要一些关于工具使用的底层gradio输入和输出类型的信息。默认情况下这将返回gr.Textbox(),但如果您想提供更准确的信息,请实现工具的`_block_input(self, gr)`和`_block_output(self, gr)`方法。`gr`变量是gradio模块通过`import gradio as gr`获得的结果)。`GradioTool`父类将自动引入`gr`并将其传递给`_block_input`和`_block_output`方法。
就是这样!
@ -90,7 +93,7 @@ from gradio_tool import GradioTool
import os
class StableDiffusionTool(GradioTool):
"""Tool for calling stable diffusion from llm"""
"""Tool for calling stable diffusion from llm"""
def __init__(
self,
@ -116,6 +119,7 @@ class StableDiffusionTool(GradioTool):
def _block_output(self, gr) -> "gr.components.Component":
return gr.Image()
```
关于此实现的一些注意事项:
1. 所有的 `GradioTool` 实例都有一个名为 `client` 的属性,它指向底层的 [gradio 客户端](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python),这就是您在 `create_job` 方法中应该使用的内容。
@ -128,3 +132,4 @@ class StableDiffusionTool(GradioTool):
现在,您已经知道如何通过数千个运行在野外的 gradio 空间来扩展您的 LLM 的能力了!
同样,我们欢迎对 [gradio_tools](https://github.com/freddyaboulton/gradio-tools) 库的任何贡献。我们很兴奋看到大家构建的工具!
```

View File

@ -7,7 +7,7 @@
一个算法能够有多好地猜出你在画什么几年前Google 发布了 **Quick Draw** 数据集,其中包含人类绘制的各种物体的图画。研究人员使用这个数据集训练模型来猜测 Pictionary 风格的图画。
这样的模型非常适合与 Gradio 的 _sketchpad_ 输入一起使用,因此在本教程中,我们将使用 Gradio 构建一个 Pictionary 网络应用程序。我们将能够完全使用 Python 构建整个网络应用程序,并且将如下所示(尝试画点什么!):
<iframe src="https://abidlabs-draw2.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
@ -74,11 +74,11 @@ def predict(img):
让我们分解一下。该函数接受一个参数:
- `img`:输入图像,作为一个 `numpy` 数组
然后,函数将图像转换为 PyTorch 的 `tensor`,将其通过模型,并返回:
- `confidences`:前五个预测的字典,其中键是类别标签,值是置信度概率(完整的 `predict` 函数示意见下方代码)
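下面是与上述输入 / 输出描述对应的 `predict` 函数的一个最小示意(假设前文已加载模型 `model` 和类别标签列表 `LABELS`,预处理细节以实际模型为准):

```python
import torch
import torch.nn.functional as F

def predict(img):
    # 将 numpy 数组转换为 (1, 1, H, W) 的浮点张量并归一化到 [0, 1]
    x = torch.tensor(img, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255.0
    with torch.no_grad():
        out = model(x)
    probabilities = F.softmax(out[0], dim=0)
    # 取置信度最高的 5 个类别,组成 {标签: 概率} 的字典
    values, indices = torch.topk(probabilities, 5)
    return {LABELS[int(i)]: v.item() for i, v in zip(indices, values)}
```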
## 3. 创建一个 Gradio 界面
@ -93,7 +93,7 @@ def predict(img):
```python
import gradio as gr
gr.Interface(fn=predict,
inputs="sketchpad",
outputs="label",
live=True).launch()
@ -103,6 +103,6 @@ gr.Interface(fn=predict,
<iframe src="https://abidlabs-draw2.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
---
完成!这就是构建一个 Pictionary 风格的猜词游戏所需的所有代码。玩得开心,并尝试找到一些边缘情况🧐

View File

@ -89,15 +89,15 @@ def predict(seed):
我们给 `predict` 函数一个 `seed` 参数,这样我们就可以使用一个种子固定随机张量生成。然后,我们可以通过传入相同的种子再次查看生成的 punks。
_注意_ 我们的模型需要一个 100x1x1 的输入张量进行单次推理,或者 (BatchSize)x100x1x1 来生成一批图像。在这个演示中,我们每次生成 4 个 punk。
## 第三步—创建一个 Gradio 接口
此时,您甚至可以运行您拥有的代码 `predict(<SOME_NUMBER>)`,并在您的文件系统中找到新生成的 punk 在 `./punks.png`。然而,为了制作一个真正的交互演示,我们将用 Gradio 构建一个简单的界面。我们的目标是:
- 设置一个滑块输入以便用户可以选择“seed”值
- 使用图像组件作为输出,展示生成的 punk
- 使用我们的 `predict()` 函数来接受种子并生成图像
通过 `gr.Interface()`,我们只需一个函数调用就能定义所有这些:
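下面是一个实现上述三点的最小示意(滑块范围与默认种子为假设值,可按需调整):

```python
import gradio as gr

gr.Interface(
    fn=predict,
    inputs=[gr.Slider(0, 1000, value=42, step=1, label="Seed")],
    outputs="image",
).launch()
```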
@ -220,6 +220,7 @@ gr.Interface(
examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
).launch(cache_examples=True)
```
---
恭喜!你已经成功构建了自己的基于 GAN 的 CryptoPunks 生成器,配备了一个时尚的 Gradio 界面,使任何人都能轻松使用。现在你可以在 Hub 上[寻找更多的 GANs](https://huggingface.co/models?other=gan)(或者自己训练)并继续制作更多令人赞叹的演示项目。🤗

View File

@ -1,7 +1,8 @@
# 如何创建一个聊天机器人
Tags: NLP, TEXT, CHAT
Related spaces: https://huggingface.co/spaces/gradio/chatbot_streaming, https://huggingface.co/spaces/project-baize/Baize-7B,
## 简介
聊天机器人在自然语言处理 (NLP) 研究和工业界被广泛使用。由于聊天机器人是直接由客户和最终用户使用的,因此验证聊天机器人在面对各种输入提示时的行为是否符合预期至关重要。
@ -19,19 +20,19 @@ $ 演示 _ 聊天机器人 _ 流式
让我们从重新创建上面的简单演示开始。正如您可能已经注意到的,我们的机器人只是随机对任何输入回复 " 你好吗?"、" 我爱你 " 或 " 我非常饿 "。这是使用 Gradio 创建此演示的代码:
$code_chatbot_simple
这里有三个 Gradio 组件:
- 一个 `Chatbot`,其值将整个对话的历史记录作为用户和机器人之间的响应对列表存储。
- 一个文本框,用户可以在其中键入他们的消息,然后按下 Enter/ 提交以触发聊天机器人的响应
- 一个 `ClearButton` 按钮,用于清除文本框和整个聊天机器人的历史记录
我们有一个名为 `respond()` 的函数,它接收聊天机器人的整个历史记录,附加一个随机消息,等待 1 秒,然后返回更新后的聊天历史记录。`respond()` 函数在返回时还清除了文本框。
当然,实际上,您会用自己更复杂的函数替换 `respond()`,该函数可能调用预训练模型或 API 来生成响应。
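作为参考,下面给出一个与上述描述一致的最小示意(回复内容与等待时间沿用上文的随机示例,与官方演示的细节可能略有出入):

```python
import random
import time
import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.ClearButton([msg, chatbot])

    def respond(message, chat_history):
        bot_message = random.choice(["你好吗?", "我爱你", "我非常饿"])
        chat_history.append((message, bot_message))
        time.sleep(1)
        # 返回空字符串以清空文本框,同时返回更新后的聊天历史
        return "", chat_history

    msg.submit(respond, [msg, chatbot], [msg, chatbot])

demo.launch()
```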
$demo_chatbot_simple
## 为聊天机器人添加流式响应
@ -39,7 +40,7 @@ $ 演示 _ 简单聊天机器人
$code_chatbot_streaming
当用户提交他们的消息时,您会注意到我们现在使用 `.then()` 将三个事件 _链_ 在一起:
1. 第一个方法 `user()` 用用户消息更新聊天机器人并清除输入字段。此方法还使输入字段处于非交互状态,以防聊天机器人正在响应时用户发送另一条消息。由于我们希望此操作立即发生,因此我们设置 `queue=False`,以跳过任何可能的队列。聊天机器人的历史记录附加了`(user_message, None)`,其中的 `None` 表示机器人未作出响应。
@ -77,5 +78,5 @@ $demo_chatbot_multimodal
你完成了!这就是构建聊天机器人模型界面所需的所有代码。最后,我们将结束我们的指南,并提供一些在 Spaces 上运行的聊天机器人的链接,以让你了解其他可能性:
- [project-baize/Baize-7B](https://huggingface.co/spaces/project-baize/Baize-7B):一个带有停止生成和重新生成响应功能的样式化聊天机器人。
- [MAGAer13/mPLUG-Owl](https://huggingface.co/spaces/MAGAer13/mPLUG-Owl):一个多模态聊天机器人,允许您对响应进行投票。

View File

@ -22,7 +22,7 @@ Tags: INTERPRETATION, SENTIMENT ANALYSIS
让我们使用 Blocks API 构建一款情感分类应用程序。该应用程序将以文本作为输入,并输出此文本表达负面或正面情感的概率。我们会有一个单独的输入 `Textbox` 和一个单独的输出 `Label` 组件。以下是应用程序的代码以及应用程序本身。
```python
import gradio as gr
from transformers import pipeline
sentiment_classifier = pipeline("text-classification", return_all_scores=True)
@ -45,6 +45,7 @@ demo.launch()
```
<gradio-app space="freddyaboulton/sentiment-classification"> </gradio-app>
## 向应用程序添加解释
我们的目标是向用户展示输入中的各个单词对模型预测的贡献。
@ -62,13 +63,13 @@ demo.launch()
def interpretation_function(text):
explainer = shap.Explainer(sentiment_classifier)
shap_values = explainer([text])
# Dimensions are (batch size, text size, number of classes)
# Since we care about positive sentiment, use index 1
scores = list(zip(shap_values.data[0], shap_values.values[0, :, 1]))
# Scores contains (word, score) pairs
# Format expected by gr.components.Interpretation
return {"original": text, "interpretation": scores}
```
@ -78,6 +79,7 @@ def interpretation_function(text):
这将使输入中的每个单词变成红色或蓝色。
如果它有助于积极情感,则为红色,如果它有助于负面情感,则为蓝色。
这就是界面如何显示文本的解释输出。
```python
with gr.Blocks() as demo:
with gr.Row():
@ -107,6 +109,7 @@ demo.launch()
我们可以通过修改我们的 `interpretation_function` 来执行此操作,以同时返回一个 matplotlib 条形图。我们将在单独的选项卡中使用 'gr.Plot' 组件显示它。
这是解释函数的外观:
```python
def interpretation_function(text):
explainer = shap.Explainer(sentiment_classifier)
@ -121,7 +124,7 @@ def interpretation_function(text):
scores_desc = [t for t in scores_desc if t[0] != ""]
fig_m = plt.figure()
# Select top 5 words that contribute to positive sentiment
plt.bar(x=[s[0] for s in scores_desc[:5]],
height=[s[1] for s in scores_desc[:5]])
@ -175,4 +178,4 @@ demo 在这里 !
我们还展示了 Blocks API 如何让您控制解释在应用程序中的可视化方式。
添加解释是使您的用户了解和信任您的模型的有用方式。现在,您拥有了将其添加到所有应用程序所需的所有工具!

View File

@ -123,9 +123,9 @@ inp.change(fn=lambda x: f"欢迎,{x}",
请注意:
- 您不需要放置样板代码 `with gr.Blocks() as demo:``demo.launch()` — Gradio 会自动为您完成!
- 每次重新运行单元格时Gradio 都将在相同的端口上重新启动您的应用程序,并使用相同的底层网络服务器。这意味着您将比正常重新运行单元格更快地看到变化。
下面是在 Jupyter Notebook 中的示例:
@ -135,7 +135,7 @@ inp.change(fn=lambda x: f"欢迎,{x}",
Notebook Magic 现在是作者构建 Gradio 演示的首选方式。无论您如何编写 Python 代码,我们都希望这两种方法都能为您提供更好的 Gradio 开发体验。
---
## 下一步

View File

@ -5,7 +5,7 @@
## 介绍
机器学习中的 3D 模型越来越受欢迎,并且是一些最有趣的演示实验。使用 `gradio`,您可以轻松构建您的 3D 图像模型的演示并与任何人分享。Gradio 3D 模型组件接受 3 种文件类型,包括:_.obj__.glb_ 和 _.gltf_
本指南将向您展示如何使用几行代码构建您的 3D 图像模型的演示;像下面这个示例一样。点击、拖拽和缩放来玩转 3D 对象:
@ -47,13 +47,13 @@ demo.launch()
创建界面:
- `fn`:当用户点击提交时使用的预测函数。在我们的例子中,它是 `load_mesh` 函数。
- `inputs`:创建一个 model3D 输入组件。输入是一个上传的文件,作为{str}文件路径。
- `outputs`:创建一个 model3D 输出组件。输出组件也期望一个文件作为{str}文件路径。
- `clear_color`:这是 3D 模型画布的背景颜色。期望 RGBa 值。
- `label`:出现在组件左上角的标签。
- `examples`3D 模型文件的列表。3D 模型组件可以接受*.obj**.glb*和*.gltf*文件类型。
- `cache_examples`:保存示例的预测输出,以节省推理时间。
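下面是一个与上述参数说明对应的界面定义示意(示例文件路径为假设,请替换为本地的 3D 模型文件):

```python
import gradio as gr

def load_mesh(mesh_file_name):
    return mesh_file_name

demo = gr.Interface(
    fn=load_mesh,
    inputs=gr.Model3D(),
    outputs=gr.Model3D(clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"),
    examples=[["files/Bunny.obj"], ["files/Duck.glb"]],
    cache_examples=True,
)

demo.launch()
```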
## 探索更复杂的 Model3D 演示
@ -64,9 +64,9 @@ demo.launch()
<gradio-app space="radames/PIFu-Clothed-Human-Digitization"> </gradio-app>
---
搞定!这就是构建 Model3D 模型界面所需的所有代码。以下是一些您可能会发现有用的参考资料:
- Gradio 的[“入门指南”](https://gradio.app/getting_started/)
- 第一个[3D 模型演示](https://huggingface.co/spaces/dawood/Model3D)和[完整代码](https://huggingface.co/spaces/dawood/Model3D/tree/main)(在 Hugging Face Spaces 上)

View File

@ -12,8 +12,9 @@
> 芝加哥有巴基斯坦餐厅吗?
命名实体识别算法可以识别出:
* "Chicago" as a **location**
* "Pakistani" as an **ethnicity**
- "Chicago" as a **location**
- "Pakistani" as an **ethnicity**
等等。
@ -34,7 +35,7 @@ $demo_ner_pipeline
许多命名实体识别模型输出的是一个字典列表。每个字典包含一个*实体*,一个 " 起始 " 索引和一个 " 结束 " 索引。这就是 `transformers` 库中的 NER 模型的操作方式。
```py
from transformers import pipeline
ner_pipeline = pipeline("ner")
ner_pipeline("芝加哥有巴基斯坦餐厅吗?")
```
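在 Gradio 中,可以把这样的输出直接交给 `HighlightedText` 组件进行高亮展示。下面是一个最小示意(占位提示文字等细节可自行调整):

```python
import gradio as gr
from transformers import pipeline

ner_pipeline = pipeline("ner")

def ner(text):
    # HighlightedText 接受 {"text": ..., "entities": [...]} 形式的字典,
    # 其中每个实体字典包含 entity、start、end 等字段
    return {"text": text, "entities": ner_pipeline(text)}

demo = gr.Interface(
    fn=ner,
    inputs=gr.Textbox(placeholder="在此输入句子..."),
    outputs=gr.HighlightedText(),
)

demo.launch()
```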
@ -72,7 +73,7 @@ $demo_ner_pipeline
$code_text_analysis
$demo_text_analysis
---
到此为止!您已经了解了为您的 NER 模型构建基于 Web 的图形用户界面所需的全部内容。

Some files were not shown because too many files have changed in this diff.