Merge branch 'master' of https://github.com/abidlabs/gradio into compartmentalize_interfaces

README.md (33 changes)
@@ -2,7 +2,7 @@
 `Gradio` is a Python library that allows you to place input and output interfaces over trained models to make it easy for you to "play around" with your model. Gradio runs entirely locally using your browser.
 
-To get a sense of `gradio`, take a look at the Python notebooks in the `examples` folder, or read on below!
+To get a sense of `gradio`, take a look at the Python notebooks in the `examples` folder, or read on below! And be sure to visit the gradio website: www.gradio.app.
 
 ## Installation
 ```
@@ -12,21 +12,19 @@ pip install gradio
 ```
 
 ## Usage
 
-Gradio is very easy to use with your existing code. The general way it's used is something like this:
+Gradio is very easy to use with your existing code. Here is a minimal working example:
 
 ```python
-import tensorflow as tf
 import gradio
+import tensorflow as tf
-mdl = tf.keras.models.Sequential()
-# ... define and train the model as you would normally
-iface = gradio.Interface(input="sketchpad", output="class", model_type="keras", model=mdl)
-iface.launch()
+image_mdl = tf.keras.applications.inception_v3.InceptionV3()
+io = gradio.Interface(inputs="imageupload", outputs="label", model_type="keras", model=image_mdl)
+io.launch()
 ```
 
-Changing the `input` and `output` parameters in the `Interface` object allows you to create different interfaces, depending on the needs of your model. Take a look at the Python notebooks for more examples. The currently supported interfaces are as follows:
+You can supply your own model instead of the pretrained model above, and you can use other kinds of models, not just Keras models. Changing the `inputs` and `outputs` parameters in the `Interface` object allows you to create different interfaces, depending on the needs of your model. Take a look at the Python notebooks for more examples. The currently supported interfaces are as follows:
 
 **Input interfaces**:
 * Sketchpad
@@ -35,39 +33,42 @@ Changing the `input` and `output` parameters in the `Interface` object
 * Textbox
 
 **Output interfaces**:
-* Class
+* Label
 * Textbox
 
 ## Screenshots
 
 Here are a few screenshots that show examples of gradio interfaces.
 
-#### MNIST Digit Recognition (Input: Sketchpad, Output: Class)
+#### MNIST Digit Recognition (Input: Sketchpad, Output: Label)
 
 ```python
-iface = gradio.Interface(input='sketchpad', output='class', model=model, model_type='keras')
+iface = gradio.Interface(inputs='sketchpad', outputs='label', model=model, model_type='keras')
 iface.launch()
 ```
 
 ![alt text](https://raw.githubusercontent.com/abidlabs/gradio/master/screenshots/mnist4.png)
 
-#### Facial Emotion Detector (Input: Webcam, Output: Class)
+#### Facial Emotion Detector (Input: Webcam, Output: Label)
 
 ```python
-iface = gradio.Interface(input='webcam', output='class', model=model, model_type='keras')
+iface = gradio.Interface(inputs='webcam', outputs='label', model=model, model_type='keras')
 iface.launch()
 ```
 
 ![alt text](https://raw.githubusercontent.com/abidlabs/gradio/master/screenshots/webcam_happy.png)
 
-#### Sentiment Analysis (Input: Textbox, Output: Class)
+#### Sentiment Analysis (Input: Textbox, Output: Label)
 
 ```python
-iface = gradio.Interface(input='textbox', output='class', model=model, model_type='keras')
+iface = gradio.Interface(inputs='textbox', outputs='label', model=model, model_type='keras')
 iface.launch()
 ```
 
 ![alt text](https://raw.githubusercontent.com/abidlabs/gradio/master/screenshots/sentiment_positive.png)
 
 ### More Documentation
 
 More detailed and up-to-date documentation can be found on the gradio website: www.gradio.app.
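The README passes plain strings such as `inputs="imageupload"` and `outputs="label"` to `Interface`, while the notebooks below construct objects like `gradio.inputs.ImageUpload()` directly, which suggests the string shorthand is resolved to an interface class internally. A minimal, hypothetical sketch of such string-to-class dispatch (the registry, class names, and `resolve_input` helper here are illustrative, not gradio's actual internals):

```python
# Hypothetical sketch of shorthand-string dispatch; not gradio's actual code.

class ImageUpload:
    """Stand-in for an image-upload input interface."""
    def __init__(self):
        self.name = "imageupload"

class Sketchpad:
    """Stand-in for a drawing-canvas input interface."""
    def __init__(self):
        self.name = "sketchpad"

# Registry mapping shorthand strings to interface classes
INPUT_INTERFACES = {
    "imageupload": ImageUpload,
    "sketchpad": Sketchpad,
}

def resolve_input(shorthand):
    """Instantiate the interface class registered under `shorthand`."""
    return INPUT_INTERFACES[shorthand.lower()]()

print(type(resolve_input("imageupload")).__name__)  # ImageUpload
```

A registry like this would let `Interface(inputs="sketchpad", ...)` and `Interface(inputs=Sketchpad(), ...)` coexist, with the string form as a convenience for default settings.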
Test Keras.ipynb (new file, 405 lines)
@@ -0,0 +1,405 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Load the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The autoreload extension is already loaded. To reload it, use:\n",
      "  %reload_ext autoreload\n"
     ]
    }
   ],
   "source": [
    "%load_ext autoreload\n",
    "%autoreload 2\n",
    "\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import gradio"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = tf.keras.applications.inception_v3.InceptionV3()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Feed An Image Into the Model Manually"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from PIL import Image\n",
    "import requests\n",
    "from io import BytesIO\n",
    "\n",
    "url = 'https://nationalzoo.si.edu/sites/default/files/animals/cheetah-004.jpg'\n",
    "\n",
    "response = requests.get(url)\n",
    "img = Image.open(BytesIO(response.content))\n",
    "\n",
    "# resize the image into an array that the model can accept\n",
    "img = np.array(img.resize((299, 299))).reshape((1, 299, 299, 3))\n",
    "\n",
    "# scale the image and do other preprocessing\n",
    "img = img/255"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[2.87348394e-05, 3.31625670e-05, 2.48761153e-05, 2.91672295e-05,\n",
       "        ...\n",
       "        4.68313883e-05, 9.54939544e-01, 3.10988798e-05, 3.05654467e-05,\n",
       "        ...\n",
       "        5.70021366e-05, 1.14346831e-05, 5.19260393e-05, 6.01843822e-05]],\n",
       "      dtype=float32)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.predict(img)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Use Gradio Instead"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "inp = gradio.inputs.ImageUpload()\n",
    "out = gradio.outputs.Label()\n",
    "\n",
    "io = gradio.Interface(inputs=inp, \n",
    "                      outputs=out,\n",
    "                      model=model, \n",
    "                      model_type='keras')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "NOTE: Gradio is in beta stage, please report all bugs to: a12d@stanford.edu\n",
      "Model is running locally at: http://localhost:7878/interface.html\n"
     ]
    }
   ],
   "source": [
    "io.launch(inline=True, inbrowser=True, share=True, validate=False);"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.6 (tensorflow)",
   "language": "python",
   "name": "tensorflow"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
|
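The manual-preprocessing cell in the notebook above resizes the downloaded photo to 299×299, adds a batch dimension to get the `(1, 299, 299, 3)` shape InceptionV3 expects, and scales pixel values to [0, 1]. A minimal numpy sketch of that shaping step, using a synthetic image in place of the downloaded cheetah photo (the real notebook uses PIL's `Image.resize`):

```python
import numpy as np

# Synthetic stand-in for the downloaded 299x299 RGB photo
img = np.random.randint(0, 256, size=(299, 299, 3), dtype=np.uint8)

# Add the batch dimension the model expects: (1, 299, 299, 3)
batch = img.reshape((1, 299, 299, 3))

# Scale pixel values from [0, 255] to [0, 1]
batch = batch / 255

print(batch.shape)  # (1, 299, 299, 3)
print(float(batch.min()) >= 0.0 and float(batch.max()) <= 1.0)  # True
```

The same shaping is what lets `model.predict(img)` return the `(1, 1000)` probability array shown above, where the ImageNet cheetah class dominates at roughly 0.955.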
(deleted file)
@@ -1,84 +0,0 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext autoreload\n",
    "%autoreload 2\n",
    "\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import gradio"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = tf.keras.applications.inception_v3.InceptionV3()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "inp = gradio.inputs.ImageUpload(image_width=299, image_height=299, num_channels=3, aspect_ratio=False)\n",
    "out = gradio.outputs.Label(label_names='imagenet1000', max_label_length=8, num_top_classes=5)\n",
    "\n",
    "io = gradio.Interface(inputs=inp, \n",
    "                      outputs=out,\n",
    "                      model=model, \n",
    "                      model_type='keras')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "NOTE: Gradio is in beta stage, please report all bugs to: a12d@stanford.edu\n",
      "Model is running locally at: http://localhost:7860/interface.html\n",
      "To create a public link, set `share=True` in the argument to `launch()`\n"
     ]
    }
   ],
   "source": [
    "io.launch(inline=False, inbrowser=True, share=False, validate=False);"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.6 (tensorflow)",
   "language": "python",
   "name": "tensorflow"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
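The deleted notebook above configured `gradio.outputs.Label(..., num_top_classes=5)`, i.e. the interface displays only the five most probable classes. Gradio's internals aren't shown here, but selecting the top-k entries from a probability vector can be sketched as follows (the `top_k_labels` helper and the toy class names are illustrative, not gradio's actual API):

```python
import numpy as np

def top_k_labels(probs, class_names, k=5):
    """Return the k (name, probability) pairs with the highest probability."""
    idx = np.argsort(probs)[::-1][:k]  # indices of the k largest entries
    return [(class_names[i], float(probs[i])) for i in idx]

# Toy 4-class example standing in for the 1000-class ImageNet vector
probs = np.array([0.02, 0.90, 0.05, 0.03])
names = ["cat", "cheetah", "dog", "fox"]

print(top_k_labels(probs, names, k=2))  # [('cheetah', 0.9), ('dog', 0.05)]
```

Applied to the 1000-entry `model.predict` output shown earlier, this kind of selection is what surfaces the ~0.955 cheetah entry at the top of the Label display.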
Test Pytorch.ipynb (new file, 746 lines)
@@ -0,0 +1,746 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%load_ext autoreload\n",
|
||||
"%autoreload 2\n",
|
||||
"\n",
|
||||
"import torch\n",
|
||||
"import torch.nn as nn\n",
|
||||
"import torchvision\n",
|
||||
"import torchvision.transforms as transforms\n",
|
||||
"import gradio"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Device configuration\n",
|
||||
"device = torch.device('cpu')\n",
|
||||
"\n",
|
||||
"# Hyper-parameters \n",
|
||||
"input_size = 784\n",
|
||||
"hidden_size = 500\n",
|
||||
"num_classes = 10\n",
|
||||
"num_epochs = 2\n",
|
||||
"batch_size = 100\n",
|
||||
"learning_rate = 0.001\n",
|
||||
"\n",
|
||||
"# MNIST dataset \n",
|
||||
"train_dataset = torchvision.datasets.MNIST(root='../../data', train=True, transform=transforms.ToTensor(), download=True)\n",
|
||||
"test_dataset = torchvision.datasets.MNIST(root='../../data',train=False, transform=transforms.ToTensor())\n",
|
||||
"train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size,shuffle=True)\n",
|
||||
"test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)"
|
||||
]
|
||||
},
|
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch [1/2], Step [100/600], Loss: 0.4317\n",
"Epoch [1/2], Step [200/600], Loss: 0.2267\n",
"Epoch [1/2], Step [300/600], Loss: 0.2052\n",
"Epoch [1/2], Step [400/600], Loss: 0.1179\n",
"Epoch [1/2], Step [500/600], Loss: 0.1108\n",
"Epoch [1/2], Step [600/600], Loss: 0.1830\n",
"Epoch [2/2], Step [100/600], Loss: 0.0972\n",
"Epoch [2/2], Step [200/600], Loss: 0.0662\n",
"Epoch [2/2], Step [300/600], Loss: 0.1487\n",
"Epoch [2/2], Step [400/600], Loss: 0.0640\n",
"Epoch [2/2], Step [500/600], Loss: 0.0425\n",
"Epoch [2/2], Step [600/600], Loss: 0.0979\n"
]
}
],
"source": [
"# Fully connected neural network with one hidden layer\n",
"class NeuralNet(nn.Module):\n",
"    def __init__(self, input_size, hidden_size, num_classes):\n",
"        super(NeuralNet, self).__init__()\n",
"        self.fc1 = nn.Linear(input_size, hidden_size)\n",
"        self.relu = nn.ReLU()\n",
"        self.fc2 = nn.Linear(hidden_size, num_classes)\n",
"\n",
"    def forward(self, x):\n",
"        out = self.fc1(x)\n",
"        out = self.relu(out)\n",
"        out = self.fc2(out)\n",
"        return out\n",
"\n",
"model = NeuralNet(input_size, hidden_size, num_classes).to(device)\n",
"\n",
"# Loss and optimizer\n",
"criterion = nn.CrossEntropyLoss()\n",
"optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n",
"\n",
"# Train the model\n",
"total_step = len(train_loader)\n",
"for epoch in range(num_epochs):\n",
"    for i, (images, labels) in enumerate(train_loader):\n",
"        # Move tensors to the configured device\n",
"        images = images.reshape(-1, 28*28).to(device)\n",
"        labels = labels.to(device)\n",
"\n",
"        # Forward pass\n",
"        outputs = model(images)\n",
"        loss = criterion(outputs, labels)\n",
"\n",
"        # Backward and optimize\n",
"        optimizer.zero_grad()\n",
"        loss.backward()\n",
"        optimizer.step()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy of the network on the 10000 test images: 97.04 %\n"
]
}
],
"source": [
"# Test the model\n",
"# In test phase, we don't need to compute gradients (for memory efficiency)\n",
"with torch.no_grad():\n",
"    correct = 0\n",
"    total = 0\n",
"    for images, labels in test_loader:\n",
"        images = images.reshape(-1, 28*28).to(device)\n",
"        labels = labels.to(device)\n",
"        outputs = model(images)\n",
"        _, predicted = torch.max(outputs.data, 1)\n",
"        total += labels.size(0)\n",
"        correct += (predicted == labels).sum().item()\n",
"\n",
"    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.float64\n",
"torch.float64\n"
]
},
{
"ename": "RuntimeError",
"evalue": "Expected object of type torch.FloatTensor but found type torch.DoubleTensor for argument #4 'mat1'",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mRuntimeError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32m<ipython-input-39-d6583191b5ef>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[0;32m 3\u001b[0m \u001b[0mvalue\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mautograd\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mVariable\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mvalue\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 4\u001b[0m \u001b[0mprint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mvalue\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdtype\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 5\u001b[1;33m \u001b[0mprediction\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmodel\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mvalue\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[1;32m~\\Anaconda3\\lib\\site-packages\\torch\\nn\\modules\\module.py\u001b[0m in \u001b[0;36m__call__\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m 475\u001b[0m \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 476\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 477\u001b[1;33m \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 478\u001b[0m \u001b[1;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 479\u001b[0m \u001b[0mhook_result\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m<ipython-input-9-abba6ac73cbf>\u001b[0m in \u001b[0;36mforward\u001b[1;34m(self, x)\u001b[0m\n\u001b[0;32m 8\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 9\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 10\u001b[1;33m \u001b[0mout\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfc1\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 11\u001b[0m \u001b[0mout\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mrelu\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mout\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 12\u001b[0m \u001b[0mout\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfc2\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mout\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\Anaconda3\\lib\\site-packages\\torch\\nn\\modules\\module.py\u001b[0m in \u001b[0;36m__call__\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m 475\u001b[0m \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 476\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 477\u001b[1;33m \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 478\u001b[0m \u001b[1;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 479\u001b[0m \u001b[0mhook_result\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\Anaconda3\\lib\\site-packages\\torch\\nn\\modules\\linear.py\u001b[0m in \u001b[0;36mforward\u001b[1;34m(self, input)\u001b[0m\n\u001b[0;32m 53\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 54\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0minput\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 55\u001b[1;33m \u001b[1;32mreturn\u001b[0m \u001b[0mF\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlinear\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mweight\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mbias\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 56\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 57\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mextra_repr\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\Anaconda3\\lib\\site-packages\\torch\\nn\\functional.py\u001b[0m in \u001b[0;36mlinear\u001b[1;34m(input, weight, bias)\u001b[0m\n\u001b[0;32m 1022\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0minput\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdim\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;33m==\u001b[0m \u001b[1;36m2\u001b[0m \u001b[1;32mand\u001b[0m \u001b[0mbias\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1023\u001b[0m \u001b[1;31m# fused op is marginally faster\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m-> 1024\u001b[1;33m \u001b[1;32mreturn\u001b[0m \u001b[0mtorch\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0maddmm\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mbias\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mt\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 1025\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1026\u001b[0m \u001b[0moutput\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0minput\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmatmul\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mweight\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mt\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;31mRuntimeError\u001b[0m: Expected object of type torch.FloatTensor but found type torch.DoubleTensor for argument #4 'mat1'"
]
}
],
"source": [
"value = torch.from_numpy(images.numpy())\n",
"print(value.dtype)\n",
"value = torch.autograd.Variable(value)\n",
"print(value.dtype)\n",
"prediction = model(value)"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"dtype('float64')"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"images.numpy().astype('float64').dtype"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(100, 10)"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prediction.data.numpy().shape"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
||||
" -4.25031662e+00, 5.37574291e-01, -5.09637237e-01,\n",
|
||||
" 1.14183540e+01, -9.18782711e+00, -6.76804829e+00,\n",
|
||||
" -9.76811504e+00]], dtype=float32)"
|
||||
]
|
||||
},
|
||||
"execution_count": 42,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"float32\n",
|
||||
"torch.float32\n",
|
||||
"float32\n",
|
||||
"torch.float32\n",
|
||||
"float32\n",
|
||||
"torch.float32\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prediction.data.numpy()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 40,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"inp = gradio.inputs.Sketchpad(flatten=True, scale=1/255, dtype='float32')\n",
|
||||
"io = gradio.Interface(inputs=inp, outputs=\"label\", model_type=\"pytorch\", model=model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 41,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"No validation samples for this interface... skipping validation.\n",
|
||||
"NOTE: Gradio is in beta stage, please report all bugs to: a12d@stanford.edu\n",
|
||||
"Model is running locally at: http://localhost:7874/interface.html\n",
|
||||
"To create a public link, set `share=True` in the argument to `launch()`\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/html": [
|
||||
"\n",
|
||||
" <iframe\n",
|
||||
" width=\"1000\"\n",
|
||||
" height=\"500\"\n",
|
||||
" src=\"http://localhost:7874/interface.html\"\n",
|
||||
" frameborder=\"0\"\n",
|
||||
" allowfullscreen\n",
|
||||
" ></iframe>\n",
|
||||
" "
|
||||
],
|
||||
"text/plain": [
|
||||
"<IPython.lib.display.IFrame at 0x14509666898>"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"(<gradio.networking.serve_files_in_background.<locals>.HTTPServer at 0x1450966be48>,\n",
|
||||
" 'http://localhost:7874/',\n",
|
||||
" None)"
|
||||
]
|
||||
},
|
||||
"execution_count": 41,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"float32\n",
|
||||
"torch.float32\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"io.launch()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
211
Test Sklearn.ipynb
Normal file
@ -0,0 +1,211 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The autoreload extension is already loaded. To reload it, use:\n",
|
||||
" %reload_ext autoreload\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%load_ext autoreload\n",
|
||||
"%autoreload 2\n",
|
||||
"\n",
|
||||
"from sklearn import datasets, svm\n",
|
||||
"import gradio\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"\n",
|
||||
"# The digits dataset\n",
|
||||
"digits = datasets.load_digits()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,\n",
|
||||
" decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf',\n",
|
||||
" max_iter=-1, probability=False, random_state=None, shrinking=True,\n",
|
||||
" tol=0.001, verbose=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# To apply a classifier on this data, we need to flatten the image, to\n",
|
||||
"# turn the data in a (samples, feature) matrix:\n",
|
||||
"n_samples = len(digits.images)\n",
|
||||
"data = digits.images.reshape((n_samples, -1))\n",
|
||||
"\n",
|
||||
"# Create a classifier: a support vector classifier\n",
|
||||
"classifier = svm.SVC(gamma=0.001)\n",
|
||||
"\n",
|
||||
"# We learn the digits on the first half of the digits\n",
|
||||
"classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"16.0"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"data.max()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAWoAAAB4CAYAAADbsbjHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4xLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvDW2N/gAACUZJREFUeJzt3V2MXVUZxvHnkYrEFDptlAsQMq1cYIy2aQkJ0UgbaYJB7RClJkJiMdIm3thoSHuBBJTENkEtmmgGvxqDGlovaCAx2BpahQjS6jQRjZq2E6x8JFCmfDVo7evFPpUJlNnrTPc55z27/1/SZE7nPXuteTvznD377NXliBAAIK+3DXoCAICZEdQAkBxBDQDJEdQAkBxBDQDJEdQAkNxQBrXts2y/bPviJmtBb3uJ3vZO23vbl6DuNOXknxO2j017fH23x4uI/0bE3Ih4ssnaJti+2fYzto/a/qHts3s83hnRW9uLbf/a9vO2j/d6vM6YZ0pvP2/7j7ZftH3Y9jdsn9XjMc+U3l5v+2+dPHjW9k9sz+36OP1e8GJ7UtIXImLXDDVzIqIvP4xNsn2NpB9JWiHpWUk7JO2JiFv6NP6k2tvb90m6QtKUpG0RMafP40+qvb39oqT9kh6XdL6kByTdExF39mn8SbW3txdLejUinrN9rqQfSHoqIr7czXFSXPqwfYfte23/wvZLkm6wfYXtR21P2X7a9ndsv71TP8d22B7tPL6n8/lf2X7J9u9tL+y2tvP5j9n+e+cV8Lu2H7G9pvBL+ZykuyPirxFxRNIdkkqf2xNt6W2npz+W9JcG23NaWtTb70XEIxHx74g4LOnnkj7UXKe616LePhkRz037qxOSLum2HymCuuNaVd8g8yTdK+m4pC9Jepeqb5qrJa2b4fmflfRVSQskPSnp693W2j5f0jZJN3fGPSTp8pNPsr2w801ywVsc9/2qzkxO2i/pQtvzZphLP7Sht1m1sbcfkfREYW0vtaK3tq+0fVTSi5I+KWnLDPM4pUxB/XBE3B8RJyLiWEQ8HhGPRcTxiDgo6W5JV87w/F9GxN6I+I+kn0laMovaj0uaiIgdnc99W9L/Xw0j4lBEjETEU29x3LmSjk57fPLjc2eYSz+0obdZtaq3tm+S9EFJ36qr7YNW9DYi9kTEPEkXSbpT1QtBV/p6na/GP6c/sH2ppG9KWibpnarm+tgMz39m2sevqgrNbmsvmD6PiAjbh2tn/rqXJZ037fF50/5+kNrQ26xa01vbn1J1JvnRzqW7QWtNbzvPPWx7l6rfEi6vq58u0xn1G9/VHJf0Z0mXRMR5km6V5B7P4WlJ7zn5wLYlXdjF85+QtHja48WS/hURU81Mb9ba0NusWtFbV2+Ef1/SNRGR4bKH1JLevsEcSe/t9kmZgvqNzlV16eAVV+/4z3QtqikPSFpq+xO256i6HvbuLp7/U0k32b7U9gJJt0ja2vw0T9vQ9daVcySd3Xl8jnt86+MsDWNvV6r63r02Ivb1aI5NGMbe3mD7os7Ho6p+Y/lNt5PIHNRfUXUXxUuqXknv7fWAEfGspM+ouj73vKpXvj9Jek2SbC9ydZ/nKd84iIgHVF3D+q2kSUn/kPS1Xs97Foaut536Y6reoD2r83GaO0CmGcbe3qrqDbsH/fq9zPf3et6zMIy9/YCkR22/IulhVb91d/0C0/f7qIeJq5v+n5L06Yj43aDn0yb0tnfobe8MqreZz6gHwvbVtufZfoeq23WOS/rDgKfVCvS2d+ht72ToLUH9Zh+WdFDVLThXSxqLiNcGO6XWoLe9Q297Z+C95dIHACTHGTUAJEdQA0ByvVqZ2Mj1lO3bt9fWbNiwobZm5cqVReNt2rSptmb+/PlFxyow2xv1+3atavny5bU1U1Nla3luv/322ppVq1YVHatA+t7u3r27tmZsbKzoWEuWzLQyuny8QqezwKSR/m7evLm2ZuPGjb
U1CxcurK2RpH376m8t73UucEYNAMkR1ACQHEENAMkR1ACQHEENAMkR1ACQHEENAMkR1ACQXKatuN6kZDHLoUOHamteeOGFovEWLFhQW7Nt27bamuuuu65ovOxGRkZqa/bs2VN0rIceeqi2psEFLwM1MTFRW7NixYramnnzyvZEnpycLKobBiULVUp+BsfHx2tr1q0r+2+hSxa8XHXVVUXHmi3OqAEgOYIaAJIjqAEgOYIaAJIjqAEgOYIaAJIjqAEgOYIaAJIb2IKXkpvISxazHDhwoLZm0aJFRXMq2QmmZN7DsOClZFFGg7uCFO1C0hb33Xdfbc3ixYtra0p3eCnZPWdYrF27tramZCHcsmXLamtKd3jp9WKWEpxRA0ByBDUAJEdQA0ByBDUAJEdQA0ByBDUAJEdQA0ByBDUAJDewBS8lu64sXbq0tqZ0MUuJkpvkh8GWLVtqa2677bbamqNHjzYwm8ry5csbO1Z269evr60ZHR1t5DhSe3bGkcp+ng8ePFhbU7JYrnQhS0lWzZ8/v+hYs8UZNQAkR1ADQHIENQAkR1ADQHIENQAkR1ADQHIENQAkR1ADQHKpF7yU7LjSpAw3tjehZKHEmjVramua/FqnpqYaO9YglXwdJQuOSnaBKbV169bGjjUMShbFHDlypLamdMFLSd2uXbtqa07n54kzagBIjqAGgOQIagBIjqAGgOQIagBIjqAGgOQIagBIjqAGgOQIagBIbmArE0tW6ezbt6+RsUpWHErS3r17a2tWr159utM5I01MTNTWLFmypA8zOT0lW5jdddddjYxVunpxZGSkkfHapCRfSlYTStK6detqazZv3lxbs2nTpqLxToUzagBIjqAGgOQIagBIjqAGgOQIagBIjqAGgOQIagBIjqAGgOQGtuClZDudkgUo27dvb6Sm1IYNGxo7FoZPyRZmu3fvrq3Zv39/bc3Y2FjBjKRVq1bV1tx4442NHCeDjRs31taUbJ9VuhBu586dtTW9XgjHGTUAJEdQA0ByBDUAJEdQA0ByBDUAJEdQA0ByBDUAJEdQA0ByqRe8lOyaULIA5bLLLiuaU1M7ygyDkl1BShZA7Nixo2i8kkUgJYtJBq1kF5qS3WxKakp2k5HK/g1GR0dra4ZlwUvJ7i1r165tbLySxSzj4+ONjXcqnFEDQHIENQAkR1ADQHIENQAkR1ADQHIENQAkR1ADQHIENQAk54gY9BwAADPgjBoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkiOoASA5ghoAkvsf2PN/nyaodHgAAAAASUVORK5CYII=\n",
|
||||
"text/plain": [
|
||||
"<Figure size 432x288 with 4 Axes>"
|
||||
]
|
||||
},
|
||||
"metadata": {
|
||||
"needs_background": "light"
|
||||
},
|
||||
"output_type": "display_data"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"images_and_labels = list(zip(digits.images, digits.target))\n",
|
||||
"for index, (image, label) in enumerate(images_and_labels[:4]):\n",
|
||||
" plt.subplot(2, 4, index + 1)\n",
|
||||
" plt.axis('off')\n",
|
||||
" plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n",
|
||||
" plt.title('Training: %i' % label)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"classifier.predict()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"expected = digits.target[n_samples // 2:]\n",
|
||||
"predicted = classifier.predict(data[n_samples // 2:])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"inp = gradio.inputs.Sketchpad(shape=(8, 8), flatten=True, scale=16/255, invert_colors=False)\n",
|
||||
"io = gradio.Interface(inputs=inp, outputs=\"label\", model_type=\"sklearn\", model=classifier)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"No validation samples for this interface... skipping validation.\n",
|
||||
"NOTE: Gradio is in beta stage, please report all bugs to: a12d@stanford.edu\n",
|
||||
"Model is running locally at: http://localhost:7865/interface.html\n",
|
||||
"To create a public link, set `share=True` in the argument to `launch()`\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/html": [
|
||||
"\n",
|
||||
" <iframe\n",
|
||||
" width=\"1000\"\n",
|
||||
" height=\"500\"\n",
|
||||
" src=\"http://localhost:7865/interface.html\"\n",
|
||||
" frameborder=\"0\"\n",
|
||||
" allowfullscreen\n",
|
||||
" ></iframe>\n",
|
||||
" "
|
||||
],
|
||||
"text/plain": [
|
||||
"<IPython.lib.display.IFrame at 0x2a051defdd8>"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"(<gradio.networking.serve_files_in_background.<locals>.HTTPServer at 0x2a051e271d0>,\n",
|
||||
" 'http://localhost:7865/',\n",
|
||||
" None)"
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"io.launch()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6 (tensorflow)",
|
||||
"language": "python",
|
||||
"name": "tensorflow"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.7"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
196
Test Tensorflow.ipynb
Normal file
@ -0,0 +1,196 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%load_ext autoreload\n",
|
||||
"%autoreload 2\n",
|
||||
"\n",
|
||||
"import tensorflow as tf\n",
|
||||
"import gradio"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"n_classes = 10\n",
|
||||
"(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()\n",
|
||||
"x_train, x_test = x_train.reshape(-1, 784) / 255.0, x_test.reshape(-1, 784) / 255.0\n",
|
||||
"y_train = tf.keras.utils.to_categorical(y_train, n_classes).astype(float)\n",
|
||||
"y_test = tf.keras.utils.to_categorical(y_test, n_classes).astype(float)\n",
|
||||
"\n",
|
||||
"learning_rate = 0.5\n",
|
||||
"epochs = 5\n",
|
||||
"batch_size = 100"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"x = tf.placeholder(tf.float32, [None, 784], name=\"x\")\n",
|
||||
"y = tf.placeholder(tf.float32, [None, 10], name=\"y\")\n",
|
||||
"\n",
|
||||
"W1 = tf.Variable(tf.random_normal([784, 300], stddev=0.03), name='W1')\n",
|
||||
"b1 = tf.Variable(tf.random_normal([300]), name='b1')\n",
|
||||
"W2 = tf.Variable(tf.random_normal([300, 10], stddev=0.03), name='W2')\n",
|
||||
"hidden_out = tf.add(tf.matmul(x, W1), b1)\n",
|
||||
"hidden_out = tf.nn.relu(hidden_out)\n",
|
||||
"y_ = tf.matmul(hidden_out, W2)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"probs = tf.nn.softmax(y_)\n",
|
||||
"cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y_, labels=y))\n",
|
||||
"optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"init_op = tf.global_variables_initializer()\n",
|
||||
"correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
|
||||
"accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Epoch: 1 cost = 0.317\n",
|
||||
"Epoch: 2 cost = 0.123\n",
|
||||
"Epoch: 3 cost = 0.086\n",
|
||||
"Epoch: 4 cost = 0.066\n",
|
||||
"Epoch: 5 cost = 0.052\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"sess = tf.Session()\n",
|
||||
"sess.run(init_op)\n",
|
||||
"total_batch = int(len(y_train) / batch_size)\n",
|
||||
"for epoch in range(epochs):\n",
|
||||
" avg_cost = 0\n",
|
||||
" for start, end in zip(range(0, len(y_train), batch_size), range(batch_size, len(y_train)+1, batch_size)): \n",
|
||||
" batch_x = x_train[start: end]\n",
|
||||
" batch_y = y_train[start: end]\n",
|
||||
" _, c = sess.run([optimizer, cross_entropy], feed_dict={x: batch_x, y: batch_y})\n",
|
||||
" avg_cost += c / total_batch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def predict(inp):\n",
|
||||
" return sess.run(probs, feed_dict={x:inp})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"inp = gradio.inputs.Sketchpad(flatten=True)\n",
|
||||
"io = gradio.Interface(inputs=inp, outputs=\"label\", model_type=\"pyfunc\", model=predict)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"No validation samples for this interface... skipping validation.\n",
|
||||
"NOTE: Gradio is in beta stage, please report all bugs to: a12d@stanford.edu\n",
|
||||
"Model is running locally at: http://localhost:7868/interface.html\n",
|
||||
"To create a public link, set `share=True` in the argument to `launch()`\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/html": [
|
||||
"\n",
|
||||
" <iframe\n",
|
||||
" width=\"1000\"\n",
|
||||
" height=\"500\"\n",
|
||||
" src=\"http://localhost:7868/interface.html\"\n",
|
||||
" frameborder=\"0\"\n",
|
||||
" allowfullscreen\n",
|
||||
" ></iframe>\n",
|
||||
" "
|
||||
],
|
||||
"text/plain": [
|
||||
"<IPython.lib.display.IFrame at 0x2a126711048>"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"(<gradio.networking.serve_files_in_background.<locals>.HTTPServer at 0x2a1266b6b38>,\n",
|
||||
" 'http://localhost:7868/',\n",
|
||||
" None)"
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"io.launch()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6 (tensorflow)",
|
||||
"language": "python",
|
||||
"name": "tensorflow"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.7"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
@ -5,9 +5,7 @@ automatically added to a registry, which allows them to be easily referenced in
|
||||
"""
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
import base64
|
||||
from gradio import preprocessing_utils, validation_data
|
||||
from io import BytesIO
|
||||
import numpy as np
|
||||
from PIL import Image, ImageOps
|
||||
|
||||
@ -67,11 +65,15 @@ class AbstractInput(ABC):
|
||||
|
||||
|
||||
class Sketchpad(AbstractInput):
|
||||
def __init__(self, preprocessing_fn=None, image_width=28, image_height=28,
|
||||
invert_colors=True):
|
||||
self.image_width = image_width
|
||||
self.image_height = image_height
|
||||
def __init__(self, preprocessing_fn=None, shape=(28, 28), invert_colors=True, flatten=False, scale=1, shift=0,
|
||||
dtype='float64'):
|
||||
self.image_width = shape[0]
|
||||
self.image_height = shape[1]
|
||||
self.invert_colors = invert_colors
|
||||
self.flatten = flatten
|
||||
self.scale = scale
|
||||
self.shift = shift
|
||||
self.dtype = dtype
|
||||
super().__init__(preprocessing_fn=preprocessing_fn)
|
||||
|
||||
def get_name(self):
|
||||
@ -81,13 +83,17 @@ class Sketchpad(AbstractInput):
|
||||
"""
|
||||
Default preprocessing method for the Sketchpad is to convert the sketch to grayscale and resize it to 28x28
|
||||
"""
|
||||
content = inp.split(';')[1]
|
||||
image_encoded = content.split(',')[1]
|
||||
im = Image.open(BytesIO(base64.b64decode(image_encoded))).convert('L')
|
||||
im = preprocessing_utils.encoding_to_image(inp)
|
||||
im = im.convert('L')
|
||||
if self.invert_colors:
|
||||
im = ImageOps.invert(im)
|
||||
im = preprocessing_utils.resize_and_crop(im, (self.image_width, self.image_height))
|
||||
array = np.array(im).flatten().reshape(1, self.image_width, self.image_height)
|
||||
if self.flatten:
|
||||
array = np.array(im).flatten().reshape(1, self.image_width * self.image_height)
|
||||
else:
|
||||
array = np.array(im).flatten().reshape(1, self.image_width, self.image_height)
|
||||
array = array * self.scale + self.shift
|
||||
array = array.astype(self.dtype)
|
||||
return array
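The new keyword arguments above compose a small preprocessing pipeline: invert, flatten, rescale, cast. A minimal sketch of that pipeline in plain NumPy (the function name `sketchpad_preprocess` is illustrative, not part of gradio):

```python
import numpy as np

def sketchpad_preprocess(im_array, invert_colors=True, flatten=False,
                         scale=1, shift=0, dtype="float64"):
    # Mirrors the Sketchpad options: optional color inversion (as
    # ImageOps.invert does for grayscale), optional flattening to a
    # (1, W*H) vector, then an affine rescale and a dtype cast so the
    # array matches what the model expects.
    arr = im_array.astype("float64")
    if invert_colors:
        arr = 255.0 - arr
    if flatten:
        arr = arr.reshape(1, -1)
    else:
        arr = arr.reshape((1,) + im_array.shape)
    arr = arr * scale + shift
    return arr.astype(dtype)

digit = np.zeros((28, 28))  # a blank 28x28 sketch
x = sketchpad_preprocess(digit, flatten=True, scale=1/255, dtype="float32")
```

With `flatten=True` the result has shape `(1, 784)`, which is the layout the PyTorch and scikit-learn notebooks above rely on.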
|
||||
|
||||
|
||||
@ -108,9 +114,8 @@ class Webcam(AbstractInput):
|
||||
"""
|
||||
Default preprocessing method is to convert the picture to RGB and resize it to 48x48
|
||||
"""
|
||||
content = inp.split(';')[1]
|
||||
image_encoded = content.split(',')[1]
|
||||
im = Image.open(BytesIO(base64.b64decode(image_encoded))).convert('RGB')
|
||||
im = preprocessing_utils.encoding_to_image(inp)
|
||||
im = im.convert('RGB')
|
||||
im = preprocessing_utils.resize_and_crop(im, (self.image_width, self.image_height))
|
||||
array = np.array(im).flatten().reshape(1, self.image_width, self.image_height, self.num_channels)
|
||||
return array
|
||||
@ -131,15 +136,15 @@ class Textbox(AbstractInput):
|
||||
|
||||
|
||||
class ImageUpload(AbstractInput):
|
||||
def __init__(self, preprocessing_fn=None, image_width=224, image_height=224, num_channels=3, image_mode='RGB',
|
||||
scale=1/127.5, shift=-1, aspect_ratio="false"):
|
||||
self.image_width = image_width
|
||||
self.image_height = image_height
|
||||
self.num_channels = num_channels
|
||||
def __init__(self, preprocessing_fn=None, shape=(224, 224, 3), image_mode='RGB',
|
||||
scale=1/127.5, shift=-1, cropper_aspect_ratio=None):
|
||||
self.image_width = shape[0]
|
||||
self.image_height = shape[1]
|
||||
self.num_channels = shape[2]
|
||||
self.image_mode = image_mode
|
||||
self.scale = scale
|
||||
self.shift = shift
|
||||
self.aspect_ratio = aspect_ratio
|
||||
self.cropper_aspect_ratio = "false" if cropper_aspect_ratio is None else cropper_aspect_ratio
|
||||
super().__init__(preprocessing_fn=preprocessing_fn)
|
||||
|
||||
def get_validation_inputs(self):
|
||||
@ -149,15 +154,14 @@ class ImageUpload(AbstractInput):
|
||||
return 'image_upload'
|
||||
|
||||
def get_js_context(self):
|
||||
return {'aspect_ratio': self.aspect_ratio}
|
||||
return {'aspect_ratio': self.cropper_aspect_ratio}
|
||||
|
||||
def preprocess(self, inp):
|
||||
"""
|
||||
Default preprocessing method is to convert the picture to the configured image mode and resize it to the given shape
|
||||
"""
|
||||
content = inp.split(';')[1]
|
||||
image_encoded = content.split(',')[1]
|
||||
im = Image.open(BytesIO(base64.b64decode(image_encoded))).convert(self.image_mode)
|
||||
im = preprocessing_utils.encoding_to_image(inp)
|
||||
im = im.convert(self.image_mode)
|
||||
im = preprocessing_utils.resize_and_crop(im, (self.image_width, self.image_height))
|
||||
im = np.array(im).flatten()
|
||||
im = im * self.scale + self.shift
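The defaults `scale=1/127.5, shift=-1` implement the common Inception-style normalization: a pixel value p in [0, 255] maps to p/127.5 - 1 in [-1, 1]. A quick check:

```python
import numpy as np

# p * (1/127.5) + (-1) maps 0 -> -1, 127.5 -> 0, 255 -> 1
pixels = np.array([0.0, 127.5, 255.0])
normalized = pixels * (1 / 127.5) + (-1)
```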
|
||||
|
@ -13,10 +13,12 @@ from gradio import networking
|
||||
import tempfile
|
||||
import threading
|
||||
import traceback
|
||||
import urllib
|
||||
|
||||
nest_asyncio.apply()
|
||||
|
||||
LOCALHOST_IP = '127.0.0.1'
|
||||
SHARE_LINK_FORMAT = 'https://share.gradio.app/{}'
|
||||
INITIAL_WEBSOCKET_PORT = 9200
|
||||
TRY_NUM_PORTS = 100
|
||||
|
||||
@ -28,7 +30,7 @@ class Interface:
|
||||
"""
|
||||
|
||||
# Dictionary mapping each valid `model_type` argument to a description of that model type.
|
||||
VALID_MODEL_TYPES = {'sklearn': 'sklearn model', 'keras': 'Keras model', 'function': 'python function',
|
||||
VALID_MODEL_TYPES = {'sklearn': 'sklearn model', 'keras': 'Keras model', 'pyfunc': 'python function',
|
||||
'pytorch': 'PyTorch model'}
|
||||
STATUS_TYPES = {'OFF': 'off', 'RUNNING': 'running'}
|
||||
|
||||
@ -94,7 +96,7 @@ class Interface:
|
||||
pass
|
||||
|
||||
if callable(model):
|
||||
return 'function'
|
||||
return 'pyfunc'
|
||||
|
||||
raise ValueError("model_type could not be inferred, please specify parameter `model_type`")
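The renamed `'function'` -> `'pyfunc'` branch is the fallback of a duck-typing chain. A hedged, simplified sketch of that inference order (the real method also checks the PyTorch base class; `infer_model_type` is an illustrative name):

```python
def infer_model_type(model):
    # Try framework-specific base classes first; an ImportError just means
    # that framework isn't installed, so its check is skipped.
    try:
        import sklearn.base
        if isinstance(model, sklearn.base.BaseEstimator):
            return "sklearn"
    except ImportError:
        pass
    try:
        import tensorflow.keras
        if isinstance(model, tensorflow.keras.Model):
            return "keras"
    except ImportError:
        pass
    # Any plain callable is treated as a python function.
    if callable(model):
        return "pyfunc"
    raise ValueError("model_type could not be inferred, please specify parameter `model_type`")

infer_model_type(lambda x: x)  # -> "pyfunc"
```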
|
||||
|
||||
@ -127,11 +129,13 @@ class Interface:
|
||||
return self.model_obj.predict(preprocessed_input)
|
||||
elif self.model_type=='keras':
|
||||
return self.model_obj.predict(preprocessed_input)
|
||||
elif self.model_type=='function':
|
||||
elif self.model_type=='pyfunc':
|
||||
return self.model_obj(preprocessed_input)
|
||||
elif self.model_type=='pytorch':
|
||||
import torch
|
||||
print(preprocessed_input.dtype)
|
||||
value = torch.from_numpy(preprocessed_input)
|
||||
print(value.dtype)
|
||||
value = torch.autograd.Variable(value)
|
||||
prediction = self.model_obj(value)
|
||||
return prediction.data.numpy()
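The branch above is one arm of a dispatch on `model_type`. A condensed sketch of the whole dispatcher (the torch import is kept local so PyTorch is only required when a pytorch model is actually used; `run_model` is an illustrative name):

```python
def run_model(model_obj, model_type, preprocessed_input):
    # sklearn and keras models share a .predict() interface.
    if model_type in ("sklearn", "keras"):
        return model_obj.predict(preprocessed_input)
    # A pyfunc model is just a callable applied to the preprocessed input.
    if model_type == "pyfunc":
        return model_obj(preprocessed_input)
    # PyTorch: numpy -> tensor -> model -> numpy.
    if model_type == "pytorch":
        import torch
        value = torch.from_numpy(preprocessed_input)
        prediction = model_obj(torch.autograd.Variable(value))
        return prediction.data.numpy()
    raise ValueError("unknown model_type: {}".format(model_type))

# the pyfunc path just calls the function directly:
run_model(lambda x: x * 2, "pyfunc", 21)  # -> 42
```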
|
||||
@ -233,22 +237,28 @@ class Interface:
|
||||
if share:
|
||||
try:
|
||||
path_to_ngrok_server = networking.setup_ngrok(server_port, websocket_port, output_directory)
|
||||
path_to_ngrok_interface_page = path_to_ngrok_server + '/' + networking.TEMPLATE_TEMP
|
||||
if self.verbose:
|
||||
print(f"Model available publicly for 8 hours at: {path_to_ngrok_interface_page}")
|
||||
except RuntimeError:
|
||||
path_to_ngrok_server = None
|
||||
if self.verbose:
|
||||
print("Unable to create public link for interface, please check internet connection.")
|
||||
else:
|
||||
if self.verbose:
|
||||
print("To create a public link, set `share=True` in the argument to `launch()`")
|
||||
path_to_ngrok_server = None
|
||||
if is_colab: # for a colab notebook, create a public link even if share is False.
|
||||
if is_colab: # For a colab notebook, create a public link even if share is False.
|
||||
path_to_ngrok_server = networking.setup_ngrok(server_port, websocket_port, output_directory)
|
||||
path_to_ngrok_interface_page = path_to_ngrok_server + '/' + networking.TEMPLATE_TEMP
|
||||
print(f"Cannot display local interface on google colab, public link created at:"
|
||||
f"{path_to_ngrok_interface_page} and displayed below.")
|
||||
if self.verbose:
|
||||
print(f"Cannot display local interface on google colab, public link created.")
|
||||
else: # If it's not a colab notebook and share=False, print a message telling them about the share option.
|
||||
if self.verbose:
|
||||
print("To create a public link, set `share=True` in the argument to `launch()`")
|
||||
path_to_ngrok_server = None
|
||||
|
||||
if path_to_ngrok_server is not None:
|
||||
# path_to_ngrok_interface_page = path_to_ngrok_server + '/' + networking.TEMPLATE_TEMP
|
||||
url = urllib.parse.urlparse(path_to_ngrok_server)
|
||||
subdomain = url.hostname.split('.')[0]
|
||||
path_to_ngrok_interface_page = SHARE_LINK_FORMAT.format(subdomain)
|
||||
if self.verbose:
|
||||
print(f"Model available publicly for 8 hours at: {path_to_ngrok_interface_page}")
|
||||
|
||||
# Keep the server running in the background.
|
||||
asyncio.get_event_loop().run_until_complete(start_server)
|
||||
try:
|
||||
@ -270,6 +280,7 @@ class Interface:
|
||||
else:
|
||||
if inbrowser is None:
|
||||
inbrowser = False
|
||||
|
||||
if inbrowser and not is_colab:
|
||||
webbrowser.open(path_to_local_interface_page) # Open a browser tab with the interface.
|
||||
if inline:
|
||||
|
@ -1,6 +1,13 @@
|
||||
from PIL import Image
|
||||
from io import BytesIO
|
||||
import base64
|
||||
|
||||
|
||||
def encoding_to_image(encoding):
|
||||
content = encoding.split(';')[1]
|
||||
image_encoded = content.split(',')[1]
|
||||
return Image.open(BytesIO(base64.b64decode(image_encoded)))
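This helper centralizes the data-URL parsing that the input classes previously repeated inline. The parsing step on its own, without the PIL decode, looks like this (`data_url_to_bytes` is an illustrative name):

```python
import base64

def data_url_to_bytes(encoding):
    # 'data:image/png;base64,<payload>' -> raw image bytes; this is the
    # split logic encoding_to_image applies before handing off to PIL.
    content = encoding.split(';')[1]        # 'base64,<payload>'
    image_encoded = content.split(',')[1]   # '<payload>'
    return base64.b64decode(image_encoded)

payload = base64.b64encode(b'not really a png').decode()
data_url_to_bytes('data:image/png;base64,' + payload)  # -> b'not really a png'
```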
|
||||
|
||||
def resize_and_crop(img, size, crop_type='top'):
|
||||
"""
|
||||
Resize and crop an image to fit the specified size.
|
||||
|
Before Width: | Height: | Size: 12 KiB After Width: | Height: | Size: 16 KiB |
Before Width: | Height: | Size: 9.5 KiB After Width: | Height: | Size: 7.0 KiB |
@ -1,80 +1,131 @@
|
||||
<html>
|
||||
<head>
|
||||
<title>GradIO</title>
|
||||
<title>Gradio</title>
|
||||
<link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">
|
||||
<link href="style/style.css" rel="stylesheet">
|
||||
<link href="style/home.css" rel="stylesheet">
|
||||
<link href="style/getting_started.css" rel="stylesheet">
|
||||
<link href="style/gradio.css" rel="stylesheet">
|
||||
<link href="gradio/gradio.css" rel="stylesheet">
|
||||
<link href="gradio/vendor/cropper.css" rel="stylesheet">
|
||||
<link rel="stylesheet"
|
||||
href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.15.6/styles/github.min.css">
|
||||
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.15.6/highlight.min.js"></script>
|
||||
<script>hljs.initHighlightingOnLoad();</script>
|
||||
</head>
|
||||
<body>
|
||||
<nav>
|
||||
<img src="img/logo_inline.png" />
|
||||
<a href="home.html">GradIO</a>
|
||||
<a href="index.html">Gradio</a>
|
||||
<a class="selected" href="getting_started.html">Getting Started</a>
|
||||
<a href="sharing.html">Sharing</a>
|
||||
</nav>
|
||||
<div class="content">
|
||||
<h1>Getting Started</h1>
|
||||
<p>GradIO is a python library that allows you to place input and output
|
||||
interfaces over trained models to make it easy for you to "play around"
|
||||
with your model. GradIO runs entirely locally using your browser.</p>
|
||||
<p>To get a sense of GradIO, take a look at the examples on the home
|
||||
page. Setting up GradIO is as easy as <code>pip install gradio</code> or
|
||||
<code>pip3 install gradio</code> for Python3.</p>
|
||||
<p>Running a GradIO interface requires calling <code><span
|
||||
class="func">Interface(</span><span class="var">input</span> : str,
|
||||
<span class="var">output</span> : str, <span class="var">model_type</span>
|
||||
<h1>Installation</h1>
|
||||
<p>Gradio requires <a href="https://www.python.org/downloads/">Python 3</a>. Once you have Python, you can download the latest version of <code>gradio</code> using pip, like this:</p>
|
||||
<pre><code class="bash">pip install gradio</code></pre>
|
||||
|
||||
<p>Or you may need to do <code>pip3 install gradio</code> if you have multiple installations of Python.</p>
|
||||
<h1>Basic Usage</h1>
|
||||
<p>Creating an interface using gradio involves just adding a few lines to your existing code. For example, here's
how to create a <code>gradio</code> interface using a pretrained <code>keras</code> model:</p>
|
||||
|
||||
<pre><code class="python">import gradio, tensorflow as tf
|
||||
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
|
||||
io = gradio.Interface(inputs="imageupload", outputs="label", model_type="keras", model=image_mdl)
|
||||
io.launch()</code></pre>
|
||||
|
||||
<p>Running the code above will open a new browser window with the following interface running:</p>
|
||||
<div id="gradio">
|
||||
<div class="panel">
|
||||
<div class="gradio input image_file">
|
||||
<div class="role">Input</div>
|
||||
<div class="input_image drop_mode">
|
||||
<div class="input_caption">Drop Image Here<br>- or -<br>Click to Upload</div>
|
||||
<img />
|
||||
</div>
|
||||
<input class="hidden_upload" type="file" accept="image/x-png,image/gif,image/jpeg" />
|
||||
</div>
|
||||
<input class="submit" type="submit" value="Submit"/><!--
|
||||
--><input class="clear" type="reset" value="Clear">
|
||||
</div><!--
|
||||
--><div class="panel">
|
||||
<div class="gradio output classifier">
|
||||
<div class="panel_head">
|
||||
<div class="role">Output</div>
|
||||
</div>
|
||||
<div class="output_class"></div>
|
||||
<div class="confidence_intervals">
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<p> </p><p> </p>
|
||||
<h1>Basic Parameters</h1>
|
||||
<p>Running a gradio interface requires creating an <code><span
class="func">Interface(</span><span class="var">inputs</span> : str,
<span class="var">outputs</span> : str, <span class="var">model_type</span>
: str, <span class="var">model</span> : Any<span
class="func">)</span></code> object, which takes the following input
arguments:<br>
|
||||
|
||||
<code><span class="var">inputs</span></code> – the string representing
|
||||
the input interface to be used, or a subclass of <code>gradio.AbstractInput</code> for additional customization (see <a href="#custom-interfaces">below</a>).<br>
|
||||
<code><span class="var">outputs</span></code> – the string representing
|
||||
the output interface to be used, or a subclass of <code>gradio.AbstractOutput</code> for additional customization (see <a href="#custom-interfaces">below</a>).<br>
|
||||
<code><span class="var">model_type</span></code> – the string
representing the type of model being passed in. Supported types include
sklearn, keras, pytorch, and pyfunc.<br>
|
||||
<code><span class="var">model</span></code> – the actual model to use
|
||||
for processing.</p>
|
||||
<p>For example:</p>
|
||||
<div class="codeblock"><code>
import <span class="var">gradio</span>, tensorflow as <span
class="var">tf</span><br>
<span class="var">mdl</span> = tf.keras.models.<span
class="func">Sequential()</span><br>
<span class="comm"># ...
define and train the model as you would normally</span><br>
<span class="var">io</span> = gradio.<span
class="func">Interface(</span><span
class="var">inputs</span>="sketchpad", <span
class="var">outputs</span>="label", <span
class="var">model_type</span>="keras", <span
class="var">model</span>=mdl<span class="func">)</span><br>
io.<span class="func">launch()</span>
</code></div>
|
||||
<p>Instead of providing the string names for <code><span class="var">inputs</span></code> and <code><span class="var">outputs</span></code>, objects that represent input and output interfaces can be provided. For example, the code
in the Basic Usage section can be rewritten equivalently as:</p>
|
||||
|
||||
<pre><code class="python">import gradio, tensorflow as tf
|
||||
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
|
||||
inp = gradio.inputs.ImageUpload()
|
||||
out = gradio.outputs.Label()
|
||||
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
|
||||
io.launch()</code></pre>
|
||||
|
||||
<p>This allows for customization of the interfaces, by passing in arguments to the input and output constructors. The parameters that each interface constructor accepts are described below.</p>
|
||||
|
||||
<h1>Supported Interfaces</h1>
|
||||
<p id="interfaces_text">This is the list of currently supported interfaces
|
||||
in GradIO. All input interfaces can be paired with any output interface.
|
||||
</p>
|
||||
<div class="interfaces_set">
|
||||
<div class="inputs_set">
|
||||
<h2>Input Interfaces</h2>
<h2><code><span class="var">inputs</span>="text"</code></h2>
|
||||
<p>Use this interface to enter text as your input. Parameters: <em>None</em>
|
||||
</p>
|
||||
<div class="gradio input text">
|
||||
<div class="role">Input</div>
|
||||
<textarea class="input_text"
|
||||
placeholder="Enter text here..."></textarea>
|
||||
</div>
|
||||
<h2><code><span class="var">inputs</span>="imageupload"</code></h2>
|
||||
<p>Use this interface to upload images to your model. Parameters: <br>
|
||||
<code><span class="var">shape</span></code> – a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: <code>(224, 224, 3)</code><br>
|
||||
<code><span class="var">image_mode</span></code> – PIL Image mode that is used to convert the image to a numpy array. Typically either 'RGB' (3 channel RGB) or 'L' (1 channel grayscale). Default: <code>'RGB'</code><br>
|
||||
<code><span class="var">scale</span></code> – A float used to rescale each pixel value in the image. Default: <code>1/127.5</code><br>
|
||||
<code><span class="var">shift</span></code> – A float used to shift each pixel value in the image after scaling. Default: <code>-1</code><br>
|
||||
<code><span class="var">cropper_aspect_ratio</span></code> – Either None or a float that is the aspect ratio of the cropper. Default: <code>None</code><br>
|
||||
</p>
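<p>The default <code>scale</code> and <code>shift</code> values are chosen so that 8-bit pixel values in [0, 255] land in [-1, 1], which is what Inception-style keras models expect. A minimal numpy sketch of that normalization (an assumption about the numeric intent, not gradio's internal code):</p>
<pre><code class="python">import numpy as np

# Sketch of the default ImageUpload normalization (scale=1/127.5, shift=-1):
# an 8-bit pixel value in [0, 255] is mapped into [-1, 1].
pixels = np.array([0, 127.5, 255])
normalized = pixels * (1 / 127.5) + (-1)
print(normalized)  # [-1.  0.  1.]
</code></pre>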
|
||||
<div class="gradio input image_file">
|
||||
<div class="role">Input</div>
|
||||
<div class="input_image">
|
||||
Drop Image Here<br>- or -<br>Click to Upload
|
||||
</div>
|
||||
</div>
|
||||
<h2><code><span class="var">inputs</span>="snapshot"</code></h2>
|
||||
<p>Use this interface to take snapshots from the user's webcam. Parameters: <br>
|
||||
<code><span class="var">shape</span></code> – a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: <code>(224, 224, 3)</code><br>
|
||||
<code><span class="var">image_mode</span></code> – PIL Image mode that is used to convert the image to a numpy array. Typically either 'RGB' (3 channel RGB) or 'L' (1 channel grayscale). Default: <code>'RGB'</code><br>
|
||||
<code><span class="var">scale</span></code> – A float used to rescale each pixel value in the image. Default: <code>1/127.5</code><br>
|
||||
<code><span class="var">shift</span></code> – A float used to shift each pixel value in the image after scaling. Default: <code>-1</code><br>
|
||||
<code><span class="var">cropper_aspect_ratio</span></code> – Either None or a float that is the aspect ratio of the cropper. Default: <code>None</code><br>
|
||||
</p>
|
||||
<div class="gradio input snapshot">
|
||||
<div class="role">Input</div>
|
||||
<div class="input_snapshot">
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<h2><code><span class="var">inputs</span>="sketchpad"</code></h2>
<p>Use this interface to take simple monochrome sketches as input. Parameters: <br>
|
||||
<code><span class="var">shape</span></code> – a tuple with the shape which the uploaded image should be resized to before passing into the model. Default: <code>(224, 224, 3)</code><br>
|
||||
<code><span class="var">invert_colors</span></code> – a boolean that designates whether the colors should be inverted before passing into the model. Default: <code>True</code><br>
|
||||
</p>
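<p>Why does <code>invert_colors</code> default to <code>True</code>? A sketchpad draws black strokes on a white background, while datasets like MNIST store white digits on black. A minimal sketch of the inversion on 8-bit values (gradio's exact implementation may differ):</p>
<pre><code class="python">import numpy as np

# Minimal sketch of invert_colors=True on an 8-bit sketch: the white
# background (255) becomes 0 and black strokes (0) become 255.
sketch = np.array([[255, 0], [0, 255]], dtype=np.uint8)
inverted = 255 - sketch
print(inverted)  # background and strokes are swapped
</code></pre>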
|
||||
<div class="input sketchpad">
|
||||
<div class="role">Input</div>
|
||||
</div>
|
||||
<h2><code><span class="var">inputs</span>="microphone"</code></h2>
<p>Use this interface to record audio input from the microphone.</p>
|
||||
<div class="gradio input mic">
|
||||
<div class="role">Input</div>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<h2><code><span class="var">inputs</span>="audio_file"</code></h2>
|
||||
<p>Use this interface to upload audio to your model.</p>
|
||||
<div class="gradio input audio_file">
|
||||
<div class="role">Input</div>
|
||||
|
||||
</div>
|
||||
</div><!--
|
||||
--><div class="outputs_set">
|
||||
<h2>Output Interfaces</h2>
<h2><code><span class="var">outputs</span>="classifier"</code></h2>
|
||||
<p>Use this interface for classification. Responds with confidence
|
||||
intervals. </p>
|
||||
<div class="gradio output classifier">
|
||||
|
||||
<div class="confidence"><div class="label">angry</div><div class="level" style="width: 6px"> </div></div>
|
||||
</div>
|
||||
</div>
|
||||
<h2><code><span class="var">outputs</span>="text"</code></h2>
|
||||
<p>Use this interface to display the text of your output.</p>
|
||||
<div class="gradio output text">
|
||||
<div class="role">Output</div>
|
||||
<textarea readonly class="output_text">Lorem ipsum consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
|
||||
</textarea>
|
||||
</div>
|
||||
<h2><code><span class="var">outputs</span>="image"</code></h2>
<p>Use this interface to display the image output of your model.</p>
|
||||
<div class="gradio output image">
|
||||
<div class="role">Output</div>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<h1 id="custom-interfaces">Customizing Interfaces</h1>
|
||||
<p>In practice, it is fairly typical to customize the input and output interfaces so they preprocess the inputs
in the way your model accepts, or postprocess the result of your model in the appropriate way so that the output interface
can display it. For example, you may need to adapt the preprocessing of the image upload interface so that
the image is resized to the correct dimensions before being fed into your model. This can be done in one of two ways: (1) instantiating <code>gradio.Input</code> /
<code>gradio.Output</code> objects with custom parameters, or (2) supplying custom preprocessing/postprocessing functions.</p>
|
||||
<h2>Input/Output Objects with Custom Parameters</h2>
|
||||
<p>For small, common changes to the input and output interfaces, you can often simply change the parameters in
|
||||
the constructor of the input and output objects to affect the preprocessing/postprocessing. Here is an example that
resizes the image to a different size before feeding it into the model, and tweaks the output interface to
hide the confidence bars and show the top 5 classes rather than the default 3:</p>
|
||||
|
||||
<pre><code class="python">import gradio, tensorflow as tf
|
||||
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
|
||||
inp = gradio.inputs.ImageUpload(shape=(299, 299, 3))
|
||||
out = gradio.outputs.Label(num_top_classes=5)
|
||||
io = gradio.Interface(inputs=inp, outputs=out, model_type="keras", model=image_mdl)
|
||||
io.launch()</code></pre>
|
||||
|
||||
<h2>Custom Preprocessing/Postprocessing Functions</h2>
|
||||
<p>Alternatively, you can completely override the default preprocessing/postprocessing functions by supplying
|
||||
your own. For example, here we modify the preprocessing function of the ImageUpload interface to add some
|
||||
noise to the image before feeding it into the model.</p>
|
||||
|
||||
<pre><code class="python">import gradio, base64, numpy as np, tensorflow as tf
|
||||
from io import BytesIO
|
||||
from PIL import Image
|
||||
image_mdl = tf.keras.applications.inception_v3.InceptionV3()
|
||||
|
||||
def pre(inp):
|
||||
im = gradio.preprocessing_utils.encoding_to_image(inp)
|
||||
im = gradio.preprocessing_utils.resize_and_crop(im, (299, 299))
|
||||
im = np.array(im).flatten()
|
||||
im = im * 1/127.5 - 1
|
||||
im = im + np.random.normal(0, 0.1, im.shape) # Adding the noise
|
||||
array = im.reshape(1, 299, 299, 3)
|
||||
return array
|
||||
|
||||
inp = gradio.inputs.ImageUpload(preprocessing_fn=pre)
|
||||
io = gradio.Interface(inputs=inp, outputs="label", model_type="keras", model=image_mdl)
|
||||
io.launch()</code></pre>
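<p>Setting the gradio-specific helpers aside, the numeric part of <code>pre()</code> above is plain numpy. A self-contained sketch of those steps, using a random dummy image in place of an actual upload:</p>
<pre><code class="python">import numpy as np

# Reproduce the numeric steps of pre() on a dummy 299x299 RGB image:
# flatten, rescale to [-1, 1], add gaussian noise, reshape to a batch of one.
im = np.random.randint(0, 256, size=(299, 299, 3))
im = np.array(im).flatten()
im = im * 1 / 127.5 - 1
im = im + np.random.normal(0, 0.1, im.shape)  # the added noise
array = im.reshape(1, 299, 299, 3)
print(array.shape)  # (1, 299, 299, 3)
</code></pre>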
|
||||
|
||||
<h1>Model Types</h1>
|
||||
<p>We currently support the following kinds of models:</p>
|
||||
<h3><code><span class="var">model_type</span>="sklearn"</code></h3>
|
||||
<p>This allows you to pass in scikit-learn models, and get predictions from the model. Here's a complete example of training a <code>sklearn</code> model and creating a <code>gradio</code> interface around it.
|
||||
</p>
|
||||
|
||||
<pre><code class="python">from sklearn import datasets, svm
|
||||
import gradio
|
||||
|
||||
digits = datasets.load_digits()
|
||||
n_samples = len(digits.images)
|
||||
data = digits.images.reshape((n_samples, -1)) # flatten the images
|
||||
|
||||
# Create a classifier: a support vector classifier
|
||||
classifier = svm.SVC(gamma=0.001)
|
||||
classifier.fit(data, digits.target)
|
||||
|
||||
# The sklearn digits dataset is different from MNIST: it is 8x8 and consists of black digits on a white background.
|
||||
inp = gradio.inputs.Sketchpad(shape=(8, 8), flatten=True, scale=16/255, invert_colors=False)
|
||||
io = gradio.Interface(inputs=inp, outputs="label", model_type="sklearn", model=classifier)
|
||||
io.launch()</code></pre>
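<p>The <code>scale=16/255</code> argument above deserves a note: sklearn's digits dataset stores pixel intensities in [0, 16], while a sketchpad canvas produces 8-bit values in [0, 255]. A quick sketch of the range mapping (the exact pipeline inside gradio is an assumption here):</p>
<pre><code class="python">import numpy as np

# The sklearn digits pixels lie in [0, 16], while a sketchpad canvas
# produces values in [0, 255]; scale=16/255 maps one range onto the other.
canvas_pixels = np.array([0, 255])
digits_range = canvas_pixels * (16 / 255)
print(digits_range)  # [ 0. 16.]
</code></pre>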
|
||||
|
||||
<h3><code><span class="var">model_type</span>="keras"</code></h3>
|
||||
<p>This allows you to pass in keras models, and get predictions from the model. Here's a complete example of training a <code>keras</code> model and creating a <code>gradio</code> interface around it.
|
||||
</p>
|
||||
|
||||
<pre><code class="python">import gradio, tensorflow as tf
|
||||
|
||||
(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()
|
||||
x_train, x_test = x_train / 255.0, x_test / 255.0
|
||||
|
||||
model = tf.keras.models.Sequential([
|
||||
tf.keras.layers.Flatten(),
|
||||
tf.keras.layers.Dense(512, activation=tf.nn.relu),
|
||||
tf.keras.layers.Dropout(0.2),
|
||||
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
|
||||
])
|
||||
|
||||
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
|
||||
model.fit(x_train, y_train, epochs=5)
|
||||
loss, accuracy = model.evaluate(x_test, y_test)
|
||||
|
||||
io = gradio.Interface(inputs="sketchpad", outputs="label", model=model, model_type='keras')
|
||||
io.launch(inline=True, share=True)</code></pre>
|
||||
|
||||
<p><a href="https://colab.research.google.com/drive/1DQSuxGARUZ-v4ZOAuw-Hf-8zqegpmes-">Run this code in a colab notebook</a> to see the interface embedded in the notebook.</p>
|
||||
<h3><code><span class="var">model_type</span>="pytorch"</code></h3>
|
||||
<p>This allows you to pass in pytorch models, and get predictions from the model. Here's a complete example of training a <code>pytorch</code> model and creating a <code>gradio</code> interface around it.
|
||||
</p>
|
||||
<pre><code class="python">import torch
|
||||
import torch.nn as nn
|
||||
import torchvision
|
||||
import torchvision.transforms as transforms
|
||||
import gradio
|
||||
|
||||
# Device configuration
|
||||
device = torch.device('cpu')
|
||||
|
||||
# Hyper-parameters
|
||||
input_size = 784
|
||||
hidden_size = 500
|
||||
num_classes = 10
|
||||
num_epochs = 2
|
||||
batch_size = 100
|
||||
learning_rate = 0.001
|
||||
|
||||
# MNIST dataset
|
||||
train_dataset = torchvision.datasets.MNIST(root='../../data', train=True, transform=transforms.ToTensor(), download=True)
|
||||
test_dataset = torchvision.datasets.MNIST(root='../../data',train=False, transform=transforms.ToTensor())
|
||||
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size,shuffle=True)
|
||||
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
|
||||
|
||||
# Fully connected neural network with one hidden layer
|
||||
class NeuralNet(nn.Module):
|
||||
def __init__(self, input_size, hidden_size, num_classes):
|
||||
super(NeuralNet, self).__init__()
|
||||
self.fc1 = nn.Linear(input_size, hidden_size)
|
||||
self.relu = nn.ReLU()
|
||||
self.fc2 = nn.Linear(hidden_size, num_classes)
|
||||
|
||||
def forward(self, x):
|
||||
out = self.fc1(x)
|
||||
out = self.relu(out)
|
||||
out = self.fc2(out)
|
||||
return out
|
||||
|
||||
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
|
||||
|
||||
# Loss and optimizer
|
||||
criterion = nn.CrossEntropyLoss()
|
||||
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
|
||||
|
||||
# Train the model
|
||||
total_step = len(train_loader)
|
||||
for epoch in range(num_epochs):
|
||||
for i, (images, labels) in enumerate(train_loader):
|
||||
# Move tensors to the configured device
|
||||
images = images.reshape(-1, 28*28).to(device)
|
||||
labels = labels.to(device)
|
||||
|
||||
# Forward pass
|
||||
outputs = model(images)
|
||||
loss = criterion(outputs, labels)
|
||||
|
||||
# Backward and optimize
|
||||
optimizer.zero_grad()
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
|
||||
inp = gradio.inputs.Sketchpad(flatten=True, scale=1/255, dtype='float32')
|
||||
io = gradio.Interface(inputs=inp, outputs="label", model_type="pytorch", model=model)
|
||||
io.launch()
|
||||
</code></pre>
|
||||
|
||||
<h3><code><span class="var">model_type</span>="pyfunc"</code></h3>
|
||||
<p>This allows you to pass in an arbitrary python function, and get the outputs from the function. Here's a very simple example of a "model" with a <code>gradio</code> interface around it.
|
||||
</p>
|
||||
|
||||
<pre><code class="python">import gradio
|
||||
|
||||
# A very simplistic function that capitalizes each letter in the given string
|
||||
def big(x):
|
||||
return x.upper()
|
||||
|
||||
io = gradio.Interface(inputs="textbox", outputs="textbox", model=big, model_type='pyfunc')
|
||||
io.launch(inline=True, share=True)</code></pre>
|
||||
|
||||
<p>A more realistic example of the <code>pyfunc</code> use case is the following, where we would like to
use a TensorFlow session with a trained model to make predictions, so we wrap the session inside a python function
like this:</p>
|
||||
|
||||
<pre><code class="python">import tensorflow as tf
|
||||
import gradio
|
||||
|
||||
n_classes = 10
|
||||
(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()
|
||||
x_train, x_test = x_train.reshape(-1, 784) / 255.0, x_test.reshape(-1, 784) / 255.0
|
||||
y_train = tf.keras.utils.to_categorical(y_train, n_classes).astype(float)
|
||||
y_test = tf.keras.utils.to_categorical(y_test, n_classes).astype(float)
|
||||
|
||||
learning_rate = 0.5
|
||||
epochs = 5
|
||||
batch_size = 100
|
||||
|
||||
x = tf.placeholder(tf.float32, [None, 784], name="x")
|
||||
y = tf.placeholder(tf.float32, [None, 10], name="y")
|
||||
|
||||
W1 = tf.Variable(tf.random_normal([784, 300], stddev=0.03), name='W1')
|
||||
b1 = tf.Variable(tf.random_normal([300]), name='b1')
|
||||
W2 = tf.Variable(tf.random_normal([300, 10], stddev=0.03), name='W2')
|
||||
hidden_out = tf.add(tf.matmul(x, W1), b1)
|
||||
hidden_out = tf.nn.relu(hidden_out)
|
||||
y_ = tf.matmul(hidden_out, W2)
|
||||
|
||||
probs = tf.nn.softmax(y_)
|
||||
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y_, labels=y))
|
||||
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
|
||||
init_op = tf.global_variables_initializer()
|
||||
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
|
||||
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
|
||||
|
||||
sess = tf.Session()
|
||||
sess.run(init_op)
|
||||
total_batch = int(len(y_train) / batch_size)
|
||||
for epoch in range(epochs):
|
||||
avg_cost = 0
|
||||
for start, end in zip(range(0, len(y_train), batch_size), range(batch_size, len(y_train)+1, batch_size)):
|
||||
batch_x = x_train[start: end]
|
||||
batch_y = y_train[start: end]
|
||||
_, c = sess.run([optimizer, cross_entropy], feed_dict={x: batch_x, y: batch_y})
|
||||
avg_cost += c / total_batch
|
||||
|
||||
def predict(inp):
|
||||
return sess.run(probs, feed_dict={x:inp})
|
||||
|
||||
inp = gradio.inputs.Sketchpad(flatten=True)
|
||||
io = gradio.Interface(inputs=inp, outputs="label", model_type="pyfunc", model=predict)
|
||||
io.launch(inline=True, share=True)</code></pre>
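<p>The <code>pyfunc</code> pattern above boils down to closing over model state inside a plain function. A framework-free sketch of the same idea, with a numpy softmax standing in for the TensorFlow session (all names here are illustrative):</p>
<pre><code class="python">import numpy as np

def make_predict(weights):
    # Close over the "trained" weights, mirroring how predict() above
    # closes over the TensorFlow session.
    def predict(inp):
        logits = inp @ weights
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)  # softmax probabilities
    return predict

predict = make_predict(np.random.randn(784, 10))
probs = predict(np.random.randn(1, 784))
print(probs.shape)  # (1, 10), rows summing to 1
</code></pre>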
|
||||
|
||||
<h1>Launch Options</h1>
|
||||
<p>When launching the interface, you have the option to pass in several boolean parameters that determine how the interface is displayed. Here
|
||||
is an example showing all of the possible parameters:</p>
|
||||
|
||||
<pre><code class="python">io.launch(inbrowser=True, inline=False, validate=False, share=True)
|
||||
</code></pre>
|
||||
|
||||
|
||||
<p><code><span class="var">inbrowser</span></code> – whether the model should launch in a new browser window.<br>
|
||||
<code><span class="var">inline</span></code> – whether the model should launch embedded in an interactive python environment (like jupyter notebooks or colab notebooks).<br>
|
||||
<code><span class="var">validate</span></code> – whether gradio should try to validate the interface-model compatibility before launch.<br>
|
||||
<code><span class="var">share</span></code> – whether a public link to share the model should be created.</p>
|
||||
|
||||
</div>
|
||||
<footer>
|
||||
<img src="img/logo_inline.png" />
|
||||
</footer>
|
||||
<script src="gradio/vendor/jquery.min.js"></script>
|
||||
<script src="gradio/image_upload.js"></script>
|
||||
<script src="gradio/label.js"></script>
|
||||
<script src="gradio/vendor/cropper.js"></script>
|
||||
<script src="https://unpkg.com/ml5@0.1.3/dist/ml5.min.js"></script>
|
||||
<script src="js/models.js"></script>
|
||||
</body>
|
||||
</html>
|
||||
|
web/gradio/gradio.css
|
||||
#gradio {
|
||||
display: flex;
|
||||
}
|
||||
.panel {
|
||||
max-width: 50%;
|
||||
min-width: 300px;
|
||||
max-height: 360px;
|
||||
flex-grow: 1;
|
||||
}
|
||||
.panel:first-child {
|
||||
margin-right: 20px;
|
||||
}
|
||||
.instructions {
|
||||
margin-bottom: 10px;
|
||||
}
|
||||
.input, .output {
|
||||
width: 100%;
|
||||
height: 360px;
|
||||
max-height: 360px;
|
||||
background-color: #F6F6F6;
|
||||
margin-bottom: 16px;
|
||||
box-sizing: border-box;
|
||||
padding: 6px;
|
||||
display: flex;
|
||||
flex-flow: column;
|
||||
}
|
||||
.loading {
|
||||
margin-left: auto;
|
||||
}
|
||||
.loading img {
|
||||
display: none;
|
||||
height: 22px;
|
||||
}
|
||||
.panel_head {
|
||||
display: flex
|
||||
}
|
||||
.role {
|
||||
text-transform: uppercase;
|
||||
font-family: Arial;
|
||||
color: #BBB;
|
||||
margin-bottom: 6px;
|
||||
font-size: 14px;
|
||||
font-weight: bold;
|
||||
}
|
||||
.input.text .role, .output.text .role {
|
||||
margin-left: 1px;
|
||||
}
|
||||
.submit, .clear, .flag, .message, .send-message {
|
||||
background-color: #F6F6F6 !important;
|
||||
padding: 8px !important;
|
||||
box-sizing: border-box;
|
||||
width: calc(50% - 8px);
|
||||
text-transform: uppercase;
|
||||
font-weight: bold;
|
||||
border: 0 none;
|
||||
}
|
||||
.clear {
|
||||
background-color: #F6F6F6 !important;
|
||||
}
|
||||
.submit {
|
||||
background-color: #EEA45D !important;
|
||||
color: white !important;
|
||||
}
|
||||
|
||||
.submit {
|
||||
margin-right: 8px;
|
||||
}
|
||||
.clear {
|
||||
margin-left: 8px;
|
||||
}
|
||||
|
||||
/*.flag:focus {
|
||||
background-color: pink !important;
|
||||
}
|
||||
*/
|
||||
.input_text, .output_text {
|
||||
background: transparent;
|
||||
resize: none;
|
||||
border: 0 none;
|
||||
width: 100%;
|
||||
font-size: 18px;
|
||||
outline: none;
|
||||
height: 100%;
|
||||
padding: 0;
|
||||
}
|
||||
.input_image, .input_audio, .input_snapshot, .input_mic, .input_csv, .output_class,
|
||||
.output_image, .output_csv {
|
||||
flex: 1 1 auto;
|
||||
display: flex;
|
||||
justify-content: center;
|
||||
align-items: center;
|
||||
text-align: center;
|
||||
}
|
||||
.input_image, .input_audio, .input_snapshot, .input_mic, .input_csv {
|
||||
font-weight: bold;
|
||||
font-size: 24px;
|
||||
color: #BBB;
|
||||
cursor: pointer;
|
||||
}
|
||||
.input_image img {
|
||||
max-height: 100%;
|
||||
max-width: 100%;
|
||||
}
|
||||
.hidden_upload {
|
||||
display: none;
|
||||
}
|
||||
.output_class {
|
||||
font-weight: bold;
|
||||
font-size: 36px;
|
||||
}
|
||||
.drop_mode {
|
||||
border: dashed 8px #DDD;
|
||||
}
|
||||
.input_image, .input_audio {
|
||||
line-height: 1.5em;
|
||||
}
|
||||
.input_snapshot, .input_mic {
|
||||
flex-direction: column;
|
||||
}
|
||||
.input_snapshot .webcam, .input_mic .mic {
|
||||
height: 80px;
|
||||
}
|
||||
.output_image img {
|
||||
width: 100%; /* or any custom size */
|
||||
height: 100%;
|
||||
object-fit: contain;
|
||||
}
|
||||
.confidence_intervals {
|
||||
font-size: 16px;
|
||||
}
|
||||
.confidence {
|
||||
padding: 3px;
|
||||
display: flex;
|
||||
}
|
||||
.level, .label {
|
||||
display: inline-block;
|
||||
}
|
||||
.label {
|
||||
width: 80px;
|
||||
display: block;
|
||||
white-space: nowrap;
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
}
|
||||
.confidence_intervals .level {
|
||||
font-size: 14px;
|
||||
margin-left: 8px;
|
||||
margin-right: 8px;
|
||||
background-color: #AAA;
|
||||
padding: 2px 4px;
|
||||
text-align: right;
|
||||
font-family: monospace;
|
||||
color: white;
|
||||
font-weight: bold;
|
||||
}
|
||||
.confidence_intervals > * {
|
||||
vertical-align: bottom;
|
||||
}
|
||||
.flag.flagged {
|
||||
background-color: pink !important;
|
||||
}
|
||||
|
||||
.sketchpad canvas {
|
||||
background-color: white;
|
||||
}
|
||||
.sketch_tools {
|
||||
flex: 0 1 auto;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
margin-bottom: 16px;
|
||||
}
|
||||
.brush {
|
||||
border-radius: 50%;
|
||||
background-color: #AAA;
|
||||
margin: 0px 20px;
|
||||
cursor: pointer;
|
||||
}
|
||||
.brush.selected, .brush:hover {
|
||||
background-color: black;
|
||||
}
|
||||
#brush_1 {
|
||||
height: 8px;
|
||||
width: 8px;
|
||||
}
|
||||
#brush_2 {
|
||||
height: 16px;
|
||||
width: 16px;
|
||||
}
|
||||
#brush_3 {
|
||||
height: 24px;
|
||||
width: 24px;
|
||||
}
|
||||
.canvas_holder {
|
||||
flex: 1 1 auto;
|
||||
text-align: center;
|
||||
}
|
||||
canvas {
|
||||
border: solid 1px black;
|
||||
}
|
||||
|
||||
.text textarea {
|
||||
resize: none;
|
||||
background-color: white;
|
||||
border: none;
|
||||
box-sizing: border-box;
|
||||
padding: 4px;
|
||||
}
|
||||
|
||||
.output_image img {
|
||||
display: none
|
||||
}
|
||||
|
||||
.table_holder {
|
||||
max-width: 100%;
|
||||
max-height: 100%;
|
||||
overflow: scroll;
|
||||
display: none;
|
||||
}
|
||||
.csv_preview {
|
||||
background-color: white;
|
||||
max-width: 100%;
|
||||
max-height: 100%;
|
||||
font-size: 12px;
|
||||
font-family: monospace;
|
||||
}
|
||||
.csv_preview tr {
|
||||
border-bottom: solid 1px black;
|
||||
}
|
||||
.csv_preview tr.header td {
|
||||
background-color: #EEA45D;
|
||||
font-weight: bold;
|
||||
}
|
||||
.csv_preview td {
|
||||
padding: 2px 4px;
|
||||
}
|
||||
.csv_preview td:nth-child(even) {
|
||||
background-color: #EEEEEE;
|
||||
}
|
web/gradio/image_upload.js
|
||||
var cropper;
|
||||
var aspectRatio = "{{aspect_ratio}}"
|
||||
|
||||
$('body').on('click', ".input_image.drop_mode", function (e) {
|
||||
$(this).parent().find(".hidden_upload").click();
|
||||
})
|
||||
|
||||
$('body').on('drag dragstart dragend dragover dragenter dragleave drop', ".input_image.drop_mode", function(e) {
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
})
|
||||
|
||||
function loadPreviewFromFiles(files) {
|
||||
var ReaderObj = new FileReader()
|
||||
ReaderObj.readAsDataURL(files[0])
|
||||
ReaderObj.onloadend = function() {
|
||||
$(".input_caption").hide()
|
||||
$(".input_image").removeClass("drop_mode")
|
||||
var image = $(".input_image img")
|
||||
image.attr("src", this.result)
|
||||
image.cropper({
|
||||
aspectRatio : aspectRatio,
|
||||
background: false
|
||||
});
|
||||
if (!cropper) {
|
||||
cropper = image.data('cropper');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
$(".input_image").on('drop', function(e) {
|
||||
files = e.originalEvent.dataTransfer.files;
|
||||
loadPreviewFromFiles(files)
|
||||
});
|
||||
|
||||
$(".hidden_upload").on("change", function() {
|
||||
var files = !!this.files ? this.files : []
|
||||
if (!files.length || !window.FileReader) {
|
||||
return
|
||||
}
|
||||
if (/^image/.test(files[0].type)) {
|
||||
loadPreviewFromFiles(files)
|
||||
} else {
|
||||
alert("Invalid input")
|
||||
}
|
||||
})
|
||||
|
||||
$('body').on('click', '.clear', function(e) {
|
||||
if (cropper) {
|
||||
cropper.destroy();
|
||||
cropper = null
|
||||
$(".input_image img").remove()
|
||||
$(".input_image").append("<img>")
|
||||
}
|
||||
$(".hidden_upload").prop("value", "")
|
||||
$(".input_caption").show()
|
||||
$(".input_image img").removeAttr("src");
|
||||
$(".input_image").addClass("drop_mode")
|
||||
})
|
||||
$('body').on('click', '.submit', function(e) {
|
||||
src = cropper.getCroppedCanvas({
|
||||
maxWidth: 360,
|
||||
maxHeight: 360,
|
||||
fillColor: "white"
|
||||
}).toDataURL();
|
||||
$("#invisible_img").attr("src", src);
|
||||
upload();
|
||||
})
|
web/gradio/label.js
|
||||
function loadData(data) {
|
||||
|
||||
// data = {
|
||||
// label: "happy",
|
||||
// confidences : [
|
||||
// {
|
||||
// label : "happy",
|
||||
// confidence: 0.7
|
||||
// },
|
||||
// {
|
||||
// label : "sad",
|
||||
// confidence: 0.3
|
||||
// },
|
||||
// ]
|
||||
// }
|
||||
$(".output_class").text(data["label"])
|
||||
$(".confidence_intervals").empty()
|
||||
if ("confidences" in data) {
|
||||
data["confidences"].forEach(function (c) {
|
||||
var confidence = c["confidence"]
|
||||
$(".confidence_intervals").append(`<div class="confidence"><div class=
|
||||
"label" title="${c["label"]}">${c["label"]}</div><div class="level" style="flex-grow:
|
||||
${confidence}">${Math.round(confidence * 100)}%</div></div>`)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
$('body').on('click', '.clear', function(e) {
|
||||
$(".output_class").text("")
|
||||
$(".confidence_intervals").empty()
|
||||
})
|
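`loadData` expects an object with a top label plus per-class confidences. A sketch of how a backend might assemble that payload from raw class probabilities — the helper name and ranking logic are assumptions for illustration, not gradio API:

```python
def label_payload(labels, probs, top_k=3):
    # Rank classes by probability and keep the top_k for display.
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)[:top_k]
    return {
        "label": ranked[0][0],
        "confidences": [{"label": l, "confidence": p} for l, p in ranked],
    }

payload = label_payload(["sad", "happy"], [0.3, 0.7])
# payload["label"] == "happy"
```

Each `confidence` value doubles as the `flex-grow` weight of its bar in the rendered output, so the bars are proportional to the probabilities.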
web/gradio/vendor/cropper.css (new vendored file, 305 lines)
@@ -0,0 +1,305 @@
/*!
 * Cropper v4.0.0
 * https://github.com/fengyuanchen/cropper
 *
 * Copyright (c) 2014-2018 Chen Fengyuan
 * Released under the MIT license
 *
 * Date: 2018-04-01T06:26:32.417Z
 */

.cropper-container {
  direction: ltr;
  font-size: 0;
  line-height: 0;
  position: relative;
  -ms-touch-action: none;
  touch-action: none;
  -webkit-user-select: none;
  -moz-user-select: none;
  -ms-user-select: none;
  user-select: none;
}

.cropper-container img {
  /* Avoid margin top issue (Occur only when margin-top <= -height) */
  display: block;
  height: 100%;
  image-orientation: 0deg;
  max-height: none !important;
  max-width: none !important;
  min-height: 0 !important;
  min-width: 0 !important;
  width: 100%;
}

.cropper-wrap-box,
.cropper-canvas,
.cropper-drag-box,
.cropper-crop-box,
.cropper-modal {
  bottom: 0;
  left: 0;
  position: absolute;
  right: 0;
  top: 0;
}

.cropper-wrap-box,
.cropper-canvas {
  overflow: hidden;
}

.cropper-drag-box {
  background-color: #fff;
  opacity: 0;
}

.cropper-modal {
  background-color: #000;
  opacity: .5;
}

.cropper-view-box {
  display: block;
  height: 100%;
  outline-color: rgba(51, 153, 255, 0.75);
  outline: 1px solid #39f;
  overflow: hidden;
  width: 100%;
}

.cropper-dashed {
  border: 0 dashed #eee;
  display: block;
  opacity: .5;
  position: absolute;
}

.cropper-dashed.dashed-h {
  border-bottom-width: 1px;
  border-top-width: 1px;
  height: 33.33333%;
  left: 0;
  top: 33.33333%;
  width: 100%;
}

.cropper-dashed.dashed-v {
  border-left-width: 1px;
  border-right-width: 1px;
  height: 100%;
  left: 33.33333%;
  top: 0;
  width: 33.33333%;
}

.cropper-center {
  display: block;
  height: 0;
  left: 50%;
  opacity: .75;
  position: absolute;
  top: 50%;
  width: 0;
}

.cropper-center:before,
.cropper-center:after {
  background-color: #eee;
  content: ' ';
  display: block;
  position: absolute;
}

.cropper-center:before {
  height: 1px;
  left: -3px;
  top: 0;
  width: 7px;
}

.cropper-center:after {
  height: 7px;
  left: 0;
  top: -3px;
  width: 1px;
}

.cropper-face,
.cropper-line,
.cropper-point {
  display: block;
  height: 100%;
  opacity: .1;
  position: absolute;
  width: 100%;
}

.cropper-face {
  background-color: #fff;
  left: 0;
  top: 0;
}

.cropper-line {
  background-color: #39f;
}

.cropper-line.line-e {
  cursor: ew-resize;
  right: -3px;
  top: 0;
  width: 5px;
}

.cropper-line.line-n {
  cursor: ns-resize;
  height: 5px;
  left: 0;
  top: -3px;
}

.cropper-line.line-w {
  cursor: ew-resize;
  left: -3px;
  top: 0;
  width: 5px;
}

.cropper-line.line-s {
  bottom: -3px;
  cursor: ns-resize;
  height: 5px;
  left: 0;
}

.cropper-point {
  background-color: #39f;
  height: 5px;
  opacity: .75;
  width: 5px;
}

.cropper-point.point-e {
  cursor: ew-resize;
  margin-top: -3px;
  right: -3px;
  top: 50%;
}

.cropper-point.point-n {
  cursor: ns-resize;
  left: 50%;
  margin-left: -3px;
  top: -3px;
}

.cropper-point.point-w {
  cursor: ew-resize;
  left: -3px;
  margin-top: -3px;
  top: 50%;
}

.cropper-point.point-s {
  bottom: -3px;
  cursor: s-resize;
  left: 50%;
  margin-left: -3px;
}

.cropper-point.point-ne {
  cursor: nesw-resize;
  right: -3px;
  top: -3px;
}

.cropper-point.point-nw {
  cursor: nwse-resize;
  left: -3px;
  top: -3px;
}

.cropper-point.point-sw {
  bottom: -3px;
  cursor: nesw-resize;
  left: -3px;
}

.cropper-point.point-se {
  bottom: -3px;
  cursor: nwse-resize;
  height: 20px;
  opacity: 1;
  right: -3px;
  width: 20px;
}

@media (min-width: 768px) {
  .cropper-point.point-se {
    height: 15px;
    width: 15px;
  }
}

@media (min-width: 992px) {
  .cropper-point.point-se {
    height: 10px;
    width: 10px;
  }
}

@media (min-width: 1200px) {
  .cropper-point.point-se {
    height: 5px;
    opacity: .75;
    width: 5px;
  }
}

.cropper-point.point-se:before {
  background-color: #39f;
  bottom: -50%;
  content: ' ';
  display: block;
  height: 200%;
  opacity: 0;
  position: absolute;
  right: -50%;
  width: 200%;
}

.cropper-invisible {
  opacity: 0;
}

.cropper-bg {
  background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQAQMAAAAlPW0iAAAAA3NCSVQICAjb4U/gAAAABlBMVEXMzMz////TjRV2AAAACXBIWXMAAArrAAAK6wGCiw1aAAAAHHRFWHRTb2Z0d2FyZQBBZG9iZSBGaXJld29ya3MgQ1M26LyyjAAAABFJREFUCJlj+M/AgBVhF/0PAH6/D/HkDxOGAAAAAElFTkSuQmCC');
}

.cropper-hide {
  display: block;
  height: 0;
  position: absolute;
  width: 0;
}

.cropper-hidden {
  display: none !important;
}

.cropper-move {
  cursor: move;
}

.cropper-crop {
  cursor: crosshair;
}

.cropper-disabled .cropper-drag-box,
.cropper-disabled .cropper-face,
.cropper-disabled .cropper-line,
.cropper-disabled .cropper-point {
  cursor: not-allowed;
}
web/gradio/vendor/cropper.js (new vendored file, 3761 lines)
web/gradio/vendor/jquery.min.js (new vendored file, 4 lines)
web/home.html
@@ -1,119 +1 @@
<html>
<head>
  <title>GradIO</title>
  <link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">
  <link href="style/style.css" rel="stylesheet">
  <link href="style/home.css" rel="stylesheet">
  <link href="style/gradio.css" rel="stylesheet">
</head>
<body>
  <nav>
    <a class="selected" href="home.html">GradIO</a>
    <a href="getting_started.html">Getting Started</a>
    <a href="sharing.html">Sharing</a>
  </nav>
  <div id="hero-section"><!--
  --><div id="intro">
    <img id="logo" src="img/logo.png"/>
    <p>Gradio is a free, open-source python library that helps machine
    learning researchers <strong>interact</strong> with and <strong>share</strong> their machine
    learning models with collaborators and clients with only a few lines of extra code.</p>
    <p>With gradio, you can easily generate in-browser interfaces that
    enable you to enter various forms of input for your model and explore
    the behavior of your model immediately. Gradio also generates <strong>links</strong>
    that can be shared with collaborators and other audiences, so they can
    interact with the model without setting up any software or even having
    any background in machine learning or software at all!</p>
    <p>Visit the <a href="https://github.com/abidlabs/gradio" target="_blank">Gradio GitHub >></a></p>
    <p>Gradio was developed by researchers at Stanford University and is
    under the Apache license.</p>
  </div><!--
  --><div id="demos">
    <div id="demo-nav">
      <button demo="1" class="selected demo-link">
        <span class="demo-count">Demo 1</span><br>Handwriting Digit
      </button>
      <button demo="2" class="demo-link">
        <span class="demo-count">Demo 2</span><br>Impressionist Painter
      </button>
      <button demo="3" class="demo-link">
        <span class="demo-count">Demo 3</span><br>Expression Detection
      </button>
      <button demo="4" class="demo-link">
        <span class="demo-count">Demo 4</span><br>Text Summaries
      </button>
      <button demo="5" class="demo-link">
        <span class="demo-count">Demo 5</span><br>Hotdog or Not Hotdog
      </button>
    </div>
    <div id="demo-code">
      <div id="demo-code-label">handwriting_digit_demo.py</div>
      <div class="codeblock"><code>
        import <span class="var">gradio</span>, tensorflow as <span class="var">tf</span><br>
        <span class="var">handwriting_mdl</span> = tf.keras.models.<span class="func">Sequential()</span><br>
        <span class="comm"># ... define and train the model as you would normally</span><br>
        <span class="var">io</span> = gradio.<span class="func">Interface(</span><span class="var">input</span>="sketchpad", <span class="var">output</span>="label", <span class="var">model_type</span>="keras", <span class="var">model</span>=handwriting_mdl<span class="func">)</span><br>
        io.<span class="func">launch()</span>
      </code></div>
    </div>
    <div id="gradio">
      <div class="instructions">
        The code above produces an interface like this, in which you can draw a digit from 0 to 9 in the input box. Click
        submit to get the prediction!
      </div>
      <div class="panel">
        <div class="input sketchpad">
          <div class="role">Input</div>
        </div>
        <input class="submit" type="submit" value="Submit"/><!--
        --><input class="clear" type="reset" value="Clear">
      </div><!--
      --><div class="panel">
        <div class="output classifier">
          <div class="role">Output</div>
        </div>
      </div>
    </div>
  </div><!--
  --></div>
  <div id="summaries">
    <div id="setup" class="summary_box">
      <h2>Fast, easy setup</h2>
      <p>Using Gradio only requires adding a couple lines of code to your
      project. You can install Gradio from pip and deploy your model in
      seconds. Once launched, you can choose from a variety of interface
      types to interact with, iterate over, and improve your models.</p>
      <p>More on <a href="getting_started.html">Getting Started >></a></p>
    </div>
    <div id="present" class="summary_box">
      <h2>Present and share</h2>
      <p>Gradio presents an interface that is intuitive to engineers and
      non-engineers alike, and thus a valuable tool in sharing insights from
      your models. When Gradio launches a model, it also creates a link you
      can share with colleagues that lets them interact with the model
      on your computer remotely from their own devices.</p>
      <p>More on <a href="sharing.html">Sharing >></a></p>
    </div>
    <div id="embed" class="summary_box">
      <h2>Embed and go</h2>
      <p>Gradio can be embedded in Jupyter and Colab notebooks, in blogs and
      websites, and screenshotted for use in research papers. These features
      all help your models be more easily shared and consumed by a larger
      audience.</p>
    </div>
  </div>
  <footer>
    <img src="img/logo_inline.png" />
  </footer>
</body>
</html>
<meta http-equiv="refresh" content="0; URL='https://gradio.app/'" />
(binary image changed) Before Width: | Height: | Size: 199 KiB — After Width: | Height: | Size: 55 KiB
BIN web/img/logo.png — Before Width: | Height: | Size: 12 KiB — After Width: | Height: | Size: 16 KiB
(binary image changed) Before Width: | Height: | Size: 9.5 KiB — After Width: | Height: | Size: 7.0 KiB
web/index.html (new file, 137 lines)
@@ -0,0 +1,137 @@
<html>
<head>
  <title>Gradio</title>
  <link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">
  <link href="style/style.css" rel="stylesheet">
  <link href="style/home.css" rel="stylesheet">
  <link href="gradio/gradio.css" rel="stylesheet">
  <link href="gradio/vendor/cropper.css" rel="stylesheet">
</head>
<body>
  <nav>
    <a class="selected" href="index.html">Gradio</a>
    <a href="getting_started.html">Getting Started</a>
    <a href="sharing.html">Sharing</a>
  </nav>
  <div id="hero-section"><!--
  --><div id="intro">
    <img id="logo" src="img/logo.png"/>
    <p>Gradio is a free, open-source python library that helps machine
    learning researchers <strong>interact</strong> with and <strong>share</strong> their machine
    learning models with collaborators and clients with only a few lines of extra code.</p>
    <p>With gradio, you can easily generate in-browser interfaces that
    enable you to enter various forms of input for your model and explore
    the behavior of your model immediately. Gradio also generates <strong>links</strong>
    that can be shared with collaborators and other audiences, so they can
    interact with the model without setting up any software or even having
    any background in machine learning or software at all!</p>
    <p>Visit the <a href="https://github.com/abidlabs/gradio" target="_blank">Gradio GitHub >></a></p>
    <p>Gradio was developed by researchers at Stanford University and is
    under the Apache license.</p>
  </div><!--
  --><div id="demos">
    <!-- <div id="demo-nav">
      <button demo="1" class="selected demo-link">
        <span class="demo-count">Demo 1</span><br>Handwriting Digit
      </button>
      <button demo="2" class="demo-link">
        <span class="demo-count">Demo 2</span><br>Impressionist Painter
      </button>
      <button demo="3" class="demo-link">
        <span class="demo-count">Demo 3</span><br>Expression Detection
      </button>
      <button demo="4" class="demo-link">
        <span class="demo-count">Demo 4</span><br>Text Summaries
      </button>
      <button demo="5" class="demo-link">
        <span class="demo-count">Demo 5</span><br>Hotdog or Not Hotdog
      </button>
    </div> -->
    <div id="demo-code">
      <div id="demo-code-label">image_detector.py</div>
      <div class="codeblock"><code>
        import <span class="var">gradio</span>, tensorflow as <span class="var">tf</span><br>
        <span class="var">image_mdl</span> = tf.keras.models.<span class="func">Sequential()</span><br>
        <span class="comm"># ... define and train the model as you would normally</span><br>
        <span class="var">io</span> = gradio.<span class="func">Interface(</span><span class="var">inputs</span>="imageupload", <span class="var">outputs</span>="label", <span class="var">model_type</span>="keras", <span class="var">model</span>=image_mdl<span class="func">)</span><br>
        io.<span class="func">launch()</span>
      </code></div>
    </div>
    <div class="instructions">
      The code above produces an interface like this, in which you can upload an image and receive labelling as output. Click
      submit to get the prediction!
    </div>
    <div id="gradio">
      <div class="panel">
        <div class="gradio input image_file">
          <div class="role">Input</div>
          <div class="input_image drop_mode">
            <div class="input_caption">Drop Image Here<br>- or -<br>Click to Upload</div>
            <img />
          </div>
          <input class="hidden_upload" type="file" accept="image/x-png,image/gif,image/jpeg" />
        </div>
        <input class="submit" type="submit" value="Submit"/><!--
        --><input class="clear" type="reset" value="Clear">
      </div><!--
      --><div class="panel">
        <div class="gradio output classifier">
          <div class="panel_head">
            <div class="role">Output</div>
          </div>
          <div class="output_class"></div>
          <div class="confidence_intervals">
          </div>
        </div>
      </div>
    </div>
  </div><!--
  --></div>
  <img id="invisible_img" style="display: none" />
  <div id="summaries">
    <div id="setup" class="summary_box">
      <h2>Fast, easy setup</h2>
      <p>Using gradio only requires adding a couple lines of code to your
      project. You can install gradio from pip and deploy your model in
      seconds. Once launched, you can choose from a variety of interface
      types to interact with, iterate over, and improve your models.</p>
      <p>More on <a href="getting_started.html">Getting Started >></a></p>
    </div>
    <div id="present" class="summary_box">
      <h2>Present and share</h2>
      <p>Gradio presents an interface that is intuitive to engineers and
      non-engineers alike, and thus a valuable tool in sharing insights from
      your models. When gradio launches a model, it also creates a link you
      can share with colleagues that lets them interact with the model
      on your computer remotely from their own devices.</p>
      <p>More on <a href="sharing.html">Sharing >></a></p>
    </div>
    <div id="embed" class="summary_box">
      <h2>Embed and go</h2>
      <p>Gradio can be embedded in Jupyter and Colab notebooks, in blogs and
      websites, and screenshotted for use in research papers. These features
      all help your models be more easily shared and consumed by a larger
      audience.</p>
    </div>
  </div>
  <footer>
    <img src="img/logo_inline.png" />
  </footer>
  <script src="gradio/vendor/jquery.min.js"></script>
  <script src="gradio/image_upload.js"></script>
  <script src="gradio/label.js"></script>
  <script src="gradio/vendor/cropper.js"></script>
  <script src="https://unpkg.com/ml5@0.1.3/dist/ml5.min.js"></script>
  <script src="js/models.js"></script>
</body>
</html>
@@ -1,46 +1,26 @@
-<script src="https://unpkg.com/ml5@0.1.3/dist/ml5.min.js"></script>
-
 classifier = ml5.imageClassifier('MobileNet', function() {
   console.log('Model Loaded!');
 });

-// Takes in the ID of the image, and returns labels and confidences
-function imageupload_label(image){
-  var output;
-  classifier = ml5.imageClassifier('MobileNet', function() {
-    console.log('Model Loaded!');
-  });
-  classifier.predict(image, function(err, results) {
-    var output = {
-      'label': results[0].className,
-      'confidences': [
-        {'label': results[0].className,
-         'confidence': results[0].probability.toFixed(4)},
-        {'label': results[1].className,
-         'confidence': results[1].probability.toFixed(4)},
-        {'label': results[2].className,
-         'confidence': results[2].probability.toFixed(4)},
+function upload() {
+  classifier.predict(document.getElementById('invisible_img'), function(err, results) {
+    if (!results) {
+      return
+    }
+    console.log(results)
+    var output = {
+      'label': results[0].className,
+      'confidences': [
+        {'label': results[0].className,
+         'confidence': results[0].probability.toFixed(4)},
+        {'label': results[1].className,
+         'confidence': results[1].probability.toFixed(4)},
+        {'label': results[2].className,
+         'confidence': results[2].probability.toFixed(4)},
       ]
     }
+    loadData(output);
   });
-  return output
 }
-
-// Takes in the ID of the image, and returns labels and confidences
-function sketchpad_label(image){
-  var output;
-  classifier = ml5.imageClassifier('MobileNet', function() {
-    console.log('Model Loaded!');
-  });
-  classifier.predict(image, function(err, results) {
-    var output = {
-      'label': results[0].className,
-      'confidences': [
-        {'label': results[0].className,
-         'confidence': results[0].probability.toFixed(4)},
-        {'label': results[1].className,
-         'confidence': results[1].probability.toFixed(4)},
-        {'label': results[2].className,
-         'confidence': results[2].probability.toFixed(4)},
-      ]
-    }
-  });
-  return output
-}
@ -1,6 +1,6 @@
|
||||
<html>
|
||||
<head>
|
||||
<title>GradIO</title>
|
||||
<title>Gradio</title>
|
||||
<link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">
|
||||
<link href="style/style.css" rel="stylesheet">
|
||||
<link href="style/sharing.css" rel="stylesheet">
|
||||
@ -9,25 +9,25 @@
|
||||
<body>
|
||||
<nav>
|
||||
<img src="img/logo_inline.png" />
|
||||
<a href="home.html">GradIO</a>
|
||||
<a href="index.html">Gradio</a>
|
||||
<a href="getting_started.html">Getting Started</a>
|
||||
<a class="selected" href="sharing.html">Sharing</a>
|
||||
</nav>
|
||||
<div class="content">
|
||||
<h1>Sharing</h1>
|
||||
<p>GradIO comes with built in support for sharing models. When GradIO
|
||||
<p>Gradio comes with built in support for sharing models. When gradio
|
||||
launches an interface for a model, it also creates a shareable link that
|
||||
can be sent out by the creator to collaborators so they can access the
|
||||
interface from their browsers. The link does not require these remote
|
||||
users to have GradIO, Python, or any environment set up.</p>
|
||||
users to have gradio, Python, or any environment set up.</p>
|
||||
<p>This link, which is printed to the console, is generated by adding
|
||||
the parameter <code><span class="var">share</span>=True</code> when
|
||||
launching the interface: e.g. <code>io.<span
|
||||
class="func">launch(</span><span class="var">share</span>=True<span
|
||||
class="func">)</span></code>. The link stays active for 8 hours.
|
||||
Other users can access the GradIO interface from the link remotely, but
|
||||
Other users can access the gradio interface from the link remotely, but
|
||||
the model stays on the original host computer. When remote users submit
|
||||
inputs to the GradIO interface, the browser makes a call to the host
|
||||
inputs to the gradio interface, the browser makes a call to the host
|
||||
computer with the input provided. On the host, the model will generate
|
||||
the output based on the provided input, and return the output, which
|
||||
will be rendered in the remote browser’s output pane. </p>
|
||||
@ -40,7 +40,7 @@
|
||||
executes on the host computer.
|
||||
</div>
|
||||
<h1>Public Hosting</h1>
|
||||
<p>The option for permanent public hosting on GradIO servers is coming
|
||||
<p>The option for permanent public hosting on gradio servers is coming
|
||||
soon! You will be able share your model publicly and collect user input
|
||||
data in this upcoming feature.</p>
|
||||
</div>
|
||||
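The round trip described above — a remote browser submits input to the shared link, the model runs on the host, and the output comes back to be rendered — can be modeled with only the Python standard library. Everything below (the endpoint, the toy `predict` function, the JSON shape) is illustrative and not gradio's actual wire protocol:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy stand-in for the hosted model; the real model never leaves the host.
def predict(text):
    return {"label": text.upper()}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the remote user's input, run the model, return the output.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["input"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# The "host computer": serve predictions on an ephemeral local port.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "remote browser": POST an input to the shared link, get the output back.
url = "http://127.0.0.1:%d/" % server.server_address[1]
req = urllib.request.Request(url, data=json.dumps({"input": "cat"}).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    output = json.loads(resp.read())
server.shutdown()
```

The key property the paragraph emphasizes survives in the sketch: the client only ever exchanges inputs and outputs over HTTP, while `predict` executes exclusively on the host.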
@@ -52,6 +52,7 @@ code {
 #summaries {
   display: flex;
   justify-content: space-between;
   margin-top: 30px;
+  margin-bottom: 16px;
 }
 .summary_box {
@@ -59,7 +60,7 @@ code {
   padding: 0 16px;
   background-repeat: no-repeat;
   background-size: 200px;
-  background-position: center;
+  background-position: center;
 }
 .summary_box h2 {
   color: #ed9013;