diff --git a/guides/adding_content_with_article.md b/guides/adding_content_with_article.md
index bd0fe5cf43..c590099135 100644
--- a/guides/adding_content_with_article.md
+++ b/guides/adding_content_with_article.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Adding Content With Article
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/building_a_live_GAN_Interface.md b/guides/building_a_live_GAN_Interface.md
index bd0fe5cf43..2c9ab040a8 100644
--- a/guides/building_a_live_GAN_Interface.md
+++ b/guides/building_a_live_GAN_Interface.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Building a Live GAN Interface
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 03 January 2022
\ No newline at end of file
diff --git a/guides/building_an_image_classifier.md b/guides/building_an_image_classifier.md
index bd0fe5cf43..f9aa7bb3da 100644
--- a/guides/building_an_image_classifier.md
+++ b/guides/building_an_image_classifier.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Building an Image Classifier
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 15 January 2022
\ No newline at end of file
diff --git a/guides/designing_your_interfaces.md b/guides/designing_your_interfaces.md
index bd0fe5cf43..93850d0301 100644
--- a/guides/designing_your_interfaces.md
+++ b/guides/designing_your_interfaces.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Designing Your Interfaces
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/dropdowns_radio_buttons_and_checkboxes.md b/guides/dropdowns_radio_buttons_and_checkboxes.md
index bd0fe5cf43..5c300251fb 100644
--- a/guides/dropdowns_radio_buttons_and_checkboxes.md
+++ b/guides/dropdowns_radio_buttons_and_checkboxes.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Dropdowns, Radio Buttons, and Checkboxes
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 05 January 2022
\ No newline at end of file
diff --git a/guides/faster_interfaces_with_queuing_and_caching.md b/guides/faster_interfaces_with_queuing_and_caching.md
index bd0fe5cf43..459be55132 100644
--- a/guides/faster_interfaces_with_queuing_and_caching.md
+++ b/guides/faster_interfaces_with_queuing_and_caching.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Faster Interfaces with Queuing and Caching
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/flagging_and storing_feedback.md b/guides/flagging_and storing_feedback.md
index bd0fe5cf43..6844236c70 100644
--- a/guides/flagging_and storing_feedback.md
+++ b/guides/flagging_and storing_feedback.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Flagging and Storing Feedback
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/image_to_labels.md b/guides/image_to_labels.md
index bd0fe5cf43..a65c080d0e 100644
--- a/guides/image_to_labels.md
+++ b/guides/image_to_labels.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
-## Step 4 — Stylizing Your Interface
-
-The problem is that the output of the chatbot looks pretty ugly. No problem, we can make it prettier by using a little bit of CSS. We modify our function to return an HTML list instead:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["html", "state"]).launch()
-```
-
-Notice that we have also passed in a little bit of custom css using the `css` parameter, and we are good to go! Try it out below:
-
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide]()
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Image To Labels
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/securing_with_authentication.md b/guides/securing_with_authentication.md
index bd0fe5cf43..2ae601560a 100644
--- a/guides/securing_with_authentication.md
+++ b/guides/securing_with_authentication.md
@@ -1,128 +1,4 @@
-## 💬 How to Create a Chatbot with Gradio
-
-By [Abubakar Abid](https://huggingface.co/abidlabs)
-Published: 20 January 2022
-Tested with: `gradio>=2.7.5`
-
-## Introduction
-
-Chatbots are widely studied in natural language processing (NLP) research and are one of the common applications of NLP in industry. Because chatbots are designed to be used directly by customers and end users, it is important to validate that chatbots are behaving as expected when confronted with a wide variety of input prompts. Using `gradio`, you can easily build a demo of your chatbot model and share that with a testing team, or test it yourself using an intuitive chatbot GUI.
-
-This tutorial will show how to take a pretrained chatbot model and deploy it with a Gradio interface in 4 steps. The live chatbot interface that we create will look something like this:
-
-
-Chatbots are *stateful*, meaning that the model's prediction can change depending on how the user has previously interacted with the model. Our tutorial will also describe how to use **state** with a Gradio demos.
-
-### Prerequisites
-
-Make sure you have the `gradio` Python package already [installed](/getting_started). To use a pretrained chatbot model, also install `transformers`.
-
-## Step 1 — Setting up the Chatbot Model
-
-First, you will need to have a chatbot model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will use a pretrained chatbot model, `DialoGPT`, and its tokenizer from the [Hugging Face Hub](https://huggingface.co/microsoft/DialoGPT-medium), but you can replace this with your own model.
-
-Here is the code to load `DialoGPT` from Hugging Face `transformers`.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
-model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-```
-
-## Step 2 — Defining a `predict` function
-
-Next, you will need to define a function that takes in the *user input* as well as the previous *chat history* to generate a response.
-
-In the case of our pretrained model, it will look like this:
-
-```python
-def predict(input, history=[]):
- # tokenize the new input sentence
- new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
- # generate a response
- history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
- # convert the tokens to text, and then split the responses into a list
- response.remove("")
-
- return response, history
-```
-
-Let's break this down. The function takes two parameters:
-* `user_input`: which is what the user enters (through the Gradio GUI) in a particular step of the conversation.
-* `history`: which represents the **state**, consisting of the list of user and bot responses. To create a stateful Gradio demo, we *must* pass in a parameter to represent the state, and we set the default value of this parameter to be the initial value of the state (in this case, the empty list since this is what we would like the chat history to be at the start).
-
-Then, the function tokenizes the input and concatenates it with the tokens corresponding to the previous user and bot responses. Then, this is fed into the pretrained model to get a prediction. Finally, we do some cleaning up so that we can return two values from our function:
-
-* `response`: which is a list of strings corresponding to all of the user and bot responses. This will be rendered as the output in the Gradio demo.
-* `history` variable, which is the token representation of all of the user and bot responses. In stateful Gradio demos, we *must* return the updated state at the end of the function.
-
-## Step 3 — Creating a Gradio Interface
-
-Now that we have our predictive function set up, we can create a Gradio Interface around it.
-
-In this case, our function takes in two values, a text input and a state input. The corresponding input components in `gradio` are `"text"` and `"state"`.
-
-The function also returns two values. For now, we will display the list of responses as `"text"` and use the `"state"` output component type for the second return value.
-
-Note that the `"state"` input and output components are not displayed.
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
- inputs=["text", "state"],
- outputs=["text", "state"]).launch()
-```
-
-This produces the following interface, which you can try right here in your browser:
-
-
-
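-The same input-state/output-state pattern works for any demo, not just chatbots. As a minimal, self-contained sketch of state (a hypothetical running word counter, purely for illustration):
-
-```python
-import gradio as gr
-
-def count_words(text, total=0):
-    # `total` is the state; its default value (0) is the initial state
-    total = total + len(text.split())
-    # return the visible output and the updated state
-    return f"{total} words so far", total
-
-gr.Interface(fn=count_words,
-             inputs=["text", "state"],
-             outputs=["text", "state"]).launch()
-```
-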
-## Step 4 — Stylizing Your Interface
-
-The problem is that the raw output of the chatbot looks pretty plain. No problem: we can make it prettier with a little bit of CSS. First, we modify our function to return a string of HTML that marks up the user and bot messages separately:
-
-```python
-def predict(input, history=[]):
-    # tokenize the new input sentence
-    new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
-
-    # append the new user input tokens to the chat history
-    bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)
-
-    # generate a response, keeping the full token sequence as a plain list
-    history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id).tolist()
-
-    # convert the tokens to text, and then split the responses into a list
-    response = tokenizer.decode(history[0]).split(tokenizer.eos_token)
-    response.remove("")
-
-    # wrap each message in a div, alternating the user and bot classes
-    html = "".join(
-        f'<div class="{"user" if i % 2 == 0 else "bot"}">{message}</div>'
-        for i, message in enumerate(response)
-    )
-
-    return html, history
-```
-
-We change the first output component to be `"html"` instead, since now we are returning a string of HTML code.
-
-```python
-import gradio as gr
-
-# illustrative CSS for the two message classes; adjust the styling to taste
-css = ".user {text-align: right; background: #e1f0ff;} .bot {text-align: left; background: #f0f0f0;}"
-
-gr.Interface(fn=predict,
-             inputs=["text", "state"],
-             outputs=["html", "state"],
-             css=css).launch()
-```
-
-Notice that we have also passed in a little bit of custom CSS through the `css` parameter to style the `user` and `bot` message classes, and we are good to go! Try it out below:
-
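-If you also want placeholder text and labels, you can swap the `"text"` and `"html"` shorthands for component objects. A sketch, assuming the `gr.inputs` and `gr.outputs` component classes available in gradio 2.x, and reusing the `css` string from above:
-
-```python
-import gradio as gr
-
-gr.Interface(fn=predict,
-             inputs=[gr.inputs.Textbox(placeholder="Type a message...", label="You"),
-                     "state"],
-             outputs=[gr.outputs.HTML(label="Conversation"),
-                      "state"],
-             css=css).launch()
-```
-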
-----------
-
-And you're done! That's all the code you need to build an interface for your chatbot model. Here are some references that you may find useful:
-
-* Gradio's ["Getting Started" guide](/getting_started)
-* The [chatbot demo]() and [complete code]() (on Hugging Face Spaces)
-
+## Securing With Authentication
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/working_with_audio.md b/guides/working_with_audio.md
index bd0fe5cf43..d19f313c99 100644
--- a/guides/working_with_audio.md
+++ b/guides/working_with_audio.md
@@ -1,128 +1,4 @@
+## Working With Audio
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/working_with_dataframes.md b/guides/working_with_dataframes.md
index bd0fe5cf43..f89549e812 100644
--- a/guides/working_with_dataframes.md
+++ b/guides/working_with_dataframes.md
@@ -1,128 +1,4 @@
+## Working With Dataframes
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/working_with_html_outputs.md b/guides/working_with_html_outputs.md
index bd0fe5cf43..5a57b25074 100644
--- a/guides/working_with_html_outputs.md
+++ b/guides/working_with_html_outputs.md
@@ -1,128 +1,4 @@
+## Working With HTML Outputs
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/working_with_images.md b/guides/working_with_images.md
index bd0fe5cf43..d378949abd 100644
--- a/guides/working_with_images.md
+++ b/guides/working_with_images.md
@@ -1,128 +1,4 @@
+## Working With Images
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/working_with_text.md b/guides/working_with_text.md
index bd0fe5cf43..36b255299d 100644
--- a/guides/working_with_text.md
+++ b/guides/working_with_text.md
@@ -1,128 +1,4 @@
+## Working With Text
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/guides/working_with_timeseries.md b/guides/working_with_timeseries.md
index bd0fe5cf43..e578eb9d40 100644
--- a/guides/working_with_timeseries.md
+++ b/guides/working_with_timeseries.md
@@ -1,128 +1,4 @@
+## Working With Timeseries
+By [Ali Abdalla](https://huggingface.co/aliabd)
+Published: 06 January 2022
\ No newline at end of file
diff --git a/website/homepage/render_html.py b/website/homepage/render_html.py
index 136010d401..240ca5d54b 100644
--- a/website/homepage/render_html.py
+++ b/website/homepage/render_html.py
@@ -18,7 +18,6 @@ GRADIO_DEMO_DIR = os.path.join(GRADIO_DIR, "demo")
guides = []
-counter = 0
for guide in sorted(os.listdir(GRADIO_GUIDES_DIR)):
if "template" in guide or "getting_started" in guide:
continue
@@ -28,10 +27,21 @@ for guide in sorted(os.listdir(GRADIO_GUIDES_DIR)):
)
with open(os.path.join(GRADIO_GUIDES_DIR, guide),"r") as f:
guide_content = f.read()
+
+    guide_author, guide_date = "", ""
+    if "By [" in guide_content:
+        guide_author = guide_content.split("By [")[1].split("]")[0]
+    elif "By " in guide_content:
+        guide_author = guide_content.split("By ")[1].split("\n")[0]
+    if "Published: " in guide_content:
+        guide_date = guide_content.split("Published: ")[1].split("\n")[0]
+
guide_dict = {
"guide_name": guide_name,
"pretty_guide_name": pretty_guide_name,
"guide_content": guide_content,
+ "guide_author": guide_author,
+ "guide_date": guide_date
}
guides.append(guide_dict)
diff --git a/website/homepage/src/guides_main_template.html b/website/homepage/src/guides_main_template.html
index 6e6988c9a4..3fe9aa2b11 100644
--- a/website/homepage/src/guides_main_template.html
+++ b/website/homepage/src/guides_main_template.html
@@ -176,6 +176,10 @@
height: min-content;">
 By {{ guide.guide_author }}
+<br>
+{{ guide.guide_date }}
{% endfor %}