mirror of
https://github.com/gradio-app/gradio.git
synced 2024-11-27 01:40:20 +08:00
Embedded Lite example apps in the docs (#8278)
* Disable MDsveX's smartypants option to preserve the Python code embedded in the doc as Lite apps unchanged
* Add Lite embedded apps to 06_gradio-lite-and-transformers-js.md
* add changeset
* Add comments
* add changeset

---------

Co-authored-by: gradio-pr-bot <gradio-pr-bot@users.noreply.github.com>
This commit is contained in:
parent 719d5962bb
commit 4ae17a4653

5  .changeset/khaki-toys-hammer.md  Normal file
@@ -0,0 +1,5 @@
---
"website": patch
---

feat:Embedded Lite example apps in the docs
|
@@ -49,7 +49,23 @@ transformers-js-py
</html>
```
You can open your HTML file in a browser to see the Gradio app running!

Here is a running example of the code above (after the app has loaded, you can disconnect your Internet connection and the app will keep working, since it runs entirely in your browser):
<gradio-lite shared-worker>
import gradio as gr
from transformers_js_py import pipeline
<!-- --->
pipe = await pipeline('sentiment-analysis')
<!-- --->
demo = gr.Interface.from_pipeline(pipe)
<!-- --->
demo.launch()
<gradio-requirements>
transformers-js-py
</gradio-requirements>
</gradio-lite>
The Python code inside the `<gradio-lite>` tag is the Gradio application code. For more details on this part, please refer to [this article](./gradio-lite).

The `<gradio-requirements>` tag is used to specify packages to be installed in addition to Gradio-Lite and its dependencies. In this case, we are using Transformers.js.py (`transformers-js-py`), so it is specified here.
@@ -71,9 +87,37 @@ You can modify the line `pipe = await pipeline('sentiment-analysis')` in the sam

For example, if you change it to `pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')`, you can test the same sentiment analysis task but with a different model. The second argument of the `pipeline` function specifies the model name.
If it's not specified like in the first example, the default model is used. For more details on these specs, refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).
<gradio-lite shared-worker>
import gradio as gr
from transformers_js_py import pipeline
<!-- --->
pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')
<!-- --->
demo = gr.Interface.from_pipeline(pipe)
<!-- --->
demo.launch()
<gradio-requirements>
transformers-js-py
</gradio-requirements>
</gradio-lite>
As another example, changing it to `pipe = await pipeline('image-classification')` creates a pipeline for image classification instead of sentiment analysis.
In this case, the interface created with `demo = gr.Interface.from_pipeline(pipe)` will have a UI for uploading an image and displaying the classification result. The `gr.Interface.from_pipeline` function automatically creates an appropriate UI based on the type of pipeline.
<gradio-lite shared-worker>
import gradio as gr
from transformers_js_py import pipeline
<!-- --->
pipe = await pipeline('image-classification')
<!-- --->
demo = gr.Interface.from_pipeline(pipe)
<!-- --->
demo.launch()
<gradio-requirements>
transformers-js-py
</gradio-requirements>
</gradio-lite>
<br>

**Note**: If you use an audio pipeline, such as `automatic-speech-recognition`, you will need to put `transformers-js-py[audio]` in your `<gradio-requirements>` as there are additional requirements needed to process audio files.
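As a sketch of how this could look (a hypothetical variant following the same pattern as the examples above, with the `automatic-speech-recognition` task and the `[audio]` extra in the requirements):

<gradio-lite shared-worker>
import gradio as gr
from transformers_js_py import pipeline
<!-- --->
pipe = await pipeline('automatic-speech-recognition')
<!-- --->
demo = gr.Interface.from_pipeline(pipe)
<!-- --->
demo.launch()
<gradio-requirements>
transformers-js-py[audio]
</gradio-requirements>
</gradio-lite>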
@@ -118,6 +162,28 @@ transformers-js-py

In this example, we modified the code to construct the Gradio user interface manually so that we could output the result as JSON.
<gradio-lite shared-worker>
import gradio as gr
from transformers_js_py import pipeline
<!-- --->
pipe = await pipeline('sentiment-analysis')
<!-- --->
async def fn(text):
    result = await pipe(text)
    return result
<!-- --->
demo = gr.Interface(
    fn=fn,
    inputs=gr.Textbox(),
    outputs=gr.JSON(),
)
<!-- --->
demo.launch()
<gradio-requirements>
transformers-js-py
</gradio-requirements>
</gradio-lite>
## Conclusion

By combining Gradio-Lite and Transformers.js (and Transformers.js.py), you can create serverless machine learning applications that run entirely in the browser.
@@ -115,7 +115,8 @@ export async function load({ params, url }) {
);
return `<div class="codeblock">${h}</div>`;
}
}
},
smartypants: false // This option converts `"` to `“` and `”` which breaks the code inside `<gradio-lite>` tags, so we disable it.
});
guide.new_html = compiled?.code;