mirror of https://github.com/gradio-app/gradio.git
fixed comments; added launch website script
This commit is contained in: parent 6bf776c52c, commit 5ec886710e
@@ -1,15 +1,15 @@
# Image Classification in PyTorch

-related_spaces: abidlabs/pytorch-image-classifier
+related_spaces: https://huggingface.co/spaces/abidlabs/pytorch-image-classifier, https://huggingface.co/spaces/pytorch/ResNet, https://huggingface.co/spaces/pytorch/ResNext, https://huggingface.co/spaces/pytorch/SqueezeNet
tags: VISION, RESNET, PYTORCH

## Introduction

Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.

-Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and will look like this (try one of the examples!):
+Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):

-<iframe src="https://hf.space/gradioiframe/abidlabs/pytorch-image-classifier/+" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
+<iframe src="https://hf.space/gradioiframe/abidlabs/pytorch-image-classifier/+" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>

Let's get started!
@@ -20,7 +20,7 @@ Make sure you have the `gradio` Python package already [installed](/getting_star

## Step 1 — Setting up the Image Classification Model

-First, you will need a image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.
+First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.

```python
import torch
@@ -32,7 +32,7 @@ Because we will be using the model for inference, we have called the `.eval()` m

## Step 2 — Defining a `predict` function

-Next, you will need to define a function that takes in the *user input*, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
+Next, we will need to define a function that takes in the *user input*, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).

In the case of our pretrained model, it will look like this:
@@ -46,7 +46,6 @@ response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

def predict(inp):
-  inp = Image.fromarray(inp.astype('uint8'), 'RGB')
  inp = transforms.ToTensor()(inp).unsqueeze(0)
  with torch.no_grad():
    prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
@@ -54,9 +53,9 @@ def predict(inp):
  return confidences
```

-Let's break this down. The function takes one parameters:
+Let's break this down. The function takes one parameter:

-* `inp`: the input image as a `numpy` array
+* `inp`: the input image as a `PIL` image

Then, the function converts the image to a PIL Image and then eventually a PyTorch `tensor`, passes it through the model, and returns:
@@ -66,9 +65,9 @@ Then, the function converts the image to a PIL Image and then eventually a PyTor

Now that we have our predictive function set up, we can create a Gradio Interface around it.

-In this case, the input component is a drag-and-drop image component. To create this input, we can use the convenient string shortcut, `"image"` which creates the component and handles the preprocessing to convert that to a numpy array.
+In this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type="pil")` which creates the component and handles the preprocessing to convert that to a `PIL` image.

-The output component will be a `"label"`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 images.
+The output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 images by constructing it as `Label(num_top_classes=3)`.

Finally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:
@@ -76,14 +75,14 @@ Finally, we'll add one more parameter, the `examples`, which allows us to prepop
import gradio as gr

gr.Interface(fn=predict,
-             inputs="image",
+             inputs=gr.inputs.Image(type="pil"),
             outputs=gr.outputs.Label(num_top_classes=3),
             examples=["lion.jpg", "cheetah.jpg"]).launch()
```

This produces the following interface, which you can try right here in your browser (try uploading your own examples!):

-<iframe src="https://hf.space/gradioiframe/abidlabs/pytorch-image-classifier/+" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
+<iframe src="https://hf.space/gradioiframe/abidlabs/pytorch-image-classifier/+" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
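For reference, here is a minimal, self-contained sketch of the updated PyTorch demo that this diff describes. The diff only shows fragments of the guide's code, so the PyTorch Hub tag (`pytorch/vision:v0.6.0`) and the line that builds the confidence dictionary are assumptions filled in for illustration; the example filenames `lion.jpg` and `cheetah.jpg` come from the diff and are expected to exist next to the script.

```python
import requests
import torch
from torchvision import transforms
import gradio as gr

# Pretrained ResNet-18 from PyTorch Hub, switched to inference mode (hub tag assumed).
model = torch.hub.load("pytorch/vision:v0.6.0", "resnet18", pretrained=True).eval()

# The 1,000 ImageNet class names, one per line.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

def predict(inp):
    # `inp` arrives as a PIL image because the input component is Image(type="pil").
    inp = transforms.ToTensor()(inp).unsqueeze(0)
    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
        # Assumed reconstruction of the line elided between hunks:
        confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
    return confidences

gr.Interface(fn=predict,
             inputs=gr.inputs.Image(type="pil"),
             outputs=gr.outputs.Label(num_top_classes=3),
             examples=["lion.jpg", "cheetah.jpg"]).launch()
```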
----------
@@ -1,15 +1,15 @@
# Image Classification in TensorFlow and Keras

-related_spaces: abidlabs/tensorflow-image-classifier
+related_spaces: https://huggingface.co/spaces/abidlabs/keras-image-classifier
tags: VISION, MOBILENET, TENSORFLOW

## Introduction

Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from traffic control systems to satellite imaging.

-Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and will look like this (try one of the examples!):
+Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like this (try one of the examples!):

-<iframe src="https://hf.space/gradioiframe/abidlabs/tensorflow-image-classifier/+" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
+<iframe src="https://hf.space/gradioiframe/abidlabs/tensorflow-image-classifier/+" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>

Let's get started!
@@ -20,7 +20,7 @@ Make sure you have the `gradio` Python package already [installed](/getting_star

## Step 1 — Setting up the Image Classification Model

-First, you will need a image classification model. For this tutorial, we will use a pretrained Mobile Net model, as it is easily downloadable from [Keras](https://keras.io/api/applications/mobilenet/). You can use a different pretrained model or train your own.
+First, we will need an image classification model. For this tutorial, we will use a pretrained Mobile Net model, as it is easily downloadable from [Keras](https://keras.io/api/applications/mobilenet/). You can use a different pretrained model or train your own.

```python
import tensorflow as tf
@@ -32,7 +32,7 @@ This line automatically downloads the MobileNet model and weights using the Kera

## Step 2 — Defining a `predict` function

-Next, you will need to define a function that takes in the *user input*, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
+Next, we will need to define a function that takes in the *user input*, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).

In the case of our pretrained model, it will look like this:
@@ -51,7 +51,7 @@ def classify_image(inp):
  return confidences
```

-Let's break this down. The function takes one parameters:
+Let's break this down. The function takes one parameter:

* `inp`: the input image as a `numpy` array

@@ -80,7 +80,7 @@ gr.Interface(fn=classify_image,

This produces the following interface, which you can try right here in your browser (try uploading your own examples!):

-<iframe src="https://hf.space/gradioiframe/abidlabs/tensorflow-image-classifier/+" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
+<iframe src="https://hf.space/gradioiframe/abidlabs/tensorflow-image-classifier/+" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
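For reference, a minimal sketch of the Keras demo described by this diff might look like the following. The diff shows only fragments, so the specific model constructor (`tf.keras.applications.MobileNetV2()`), the preprocessing inside `classify_image`, the `shape=(224, 224)` input setting, and the example filenames are all assumptions based on the surrounding guide text.

```python
import requests
import tensorflow as tf
import gradio as gr

# "Mobile Net" model from Keras Applications with pretrained ImageNet weights (assumed variant).
model = tf.keras.applications.MobileNetV2()

# The 1,000 ImageNet class names, one per line.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

def classify_image(inp):
    # `inp` is a 224x224x3 numpy array because of the assumed Image(shape=(224, 224)) input.
    inp = inp.reshape((-1, 224, 224, 3)).astype("float32")
    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
    prediction = model.predict(inp).flatten()
    return {labels[i]: float(prediction[i]) for i in range(1000)}

gr.Interface(fn=classify_image,
             inputs=gr.inputs.Image(shape=(224, 224)),
             outputs=gr.outputs.Label(num_top_classes=3),
             examples=["banana.jpg", "car.jpg"]).launch()  # example filenames are placeholders
```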
----------
@@ -1,15 +1,15 @@
# Image Classification with Vision Transformers

-related_spaces: abidlabs/vision-transformer
+related_spaces: https://huggingface.co/spaces/abidlabs/vision-transformer
tags: VISION, TRANSFORMERS, HUB

## Introduction

Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.

-State-of-the-art image classifiers are based on the *transformers* architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and will look like this (try one of the examples!):
+State-of-the-art image classifiers are based on the *transformers* architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's *image* input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like this (try one of the examples!):

-<iframe src="https://hf.space/gradioiframe/abidlabs/vision-transformer/+" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
+<iframe src="https://hf.space/gradioiframe/abidlabs/vision-transformer/+" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>

Let's get started!
@@ -20,7 +20,7 @@ Make sure you have the `gradio` Python package already [installed](/getting_star

## Step 1 — Choosing a Vision Image Classification Model

-First, you will need a image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models). The Hub contains thousands of models covering dozens of different machine learning tasks.
+First, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.

Expand the Tasks category on the left sidebar and select "Image Classification" as our task of interest. You will then see all of the models on the Hub that are designed to classify images.

@@ -28,7 +28,7 @@ At the time of writing, the most popular one is `google/vit-base-patch16-224`, w

## Step 2 — Loading the Vision Transformer Model with Gradio

-When using a model from the Hugging Face Hub, you do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.
+When using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.
All of these are automatically inferred from the model tags.

Besides the import statement, it only takes a single line of Python to load and launch the demo.
@@ -47,7 +47,7 @@ Notice that we have added one more parameter, the `examples`, which allows us to

This produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!

-<iframe src="https://hf.space/gradioiframe/abidlabs/vision-transformer/+" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
+<iframe src="https://hf.space/gradioiframe/abidlabs/vision-transformer/+" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
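The "single line of Python" that loads and launches this demo is not shown in the hunks above, so here is a hedged sketch of what it plausibly looks like with the gradio API of that era; the model id `google/vit-base-patch16-224` comes from the guide, while the example filenames are placeholders.

```python
import gradio as gr

# Load the ViT classifier directly from the Hugging Face Hub; input/output components
# and pre/post-processing are inferred automatically from the model tags.
gr.Interface.load("huggingface/google/vit-base-patch16-224",
                  examples=["alligator.jpg", "laptop.jpg"]).launch()  # example filenames are placeholders
```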
----------
scripts/launch_website.sh (new file, 12 lines)
@@ -0,0 +1,12 @@
+#!/bin/bash
+if [ -z "$(ls | grep CONTRIBUTING.md)" ]; then
+    echo "Please run the script from repo directory"
+    exit -1
+else
+    echo "Building the website"
+    cd website/homepage
+    npm install
+    npm run build
+    cd dist
+    python -m http.server
+fi
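Usage note: assuming npm and Python 3 are available, running `bash scripts/launch_website.sh` from the repository root (where `CONTRIBUTING.md` lives) should build the homepage under `website/homepage/dist` and then serve it locally with `python -m http.server`, which listens on port 8000 by default.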