Gradio is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it, anywhere!
Let’s write a simple Gradio app:
import gradio as gr


def snap(image):
    return image


demo = gr.Interface(
    snap,
    gr.Image(label="Input", type="numpy", source="webcam", streaming=True),
    "image",
    live=True,
)

if __name__ == "__main__":
    demo.launch()
This app takes the webcam as input and streams the live feed to the output window.
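If you run the app inside a container (as we do below), Gradio's default bind address of 127.0.0.1 won't be reachable from outside, so you may want to launch it like this instead; a minimal sketch, where the port is up to you (7860 is the default):

if __name__ == "__main__":
    # Bind to all interfaces so the server is reachable from outside the container.
    demo.launch(server_name="0.0.0.0", server_port=7860)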
Build with docker
Let’s start with writing a Dockerfile:
NOTE: requirements.txt should contain all the Python packages used by the Gradio app.
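For the demo above, a minimal requirements.txt might look like this (pin versions to whatever you actually develop against):

gradio
numpy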
FROM python:3.9

COPY requirements.txt ./
RUN pip install --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt

# Use ENV rather than RUN export so the variable persists into the running container.
ENV PYTHONUNBUFFERED=1

WORKDIR /app
COPY app /app/

ENTRYPOINT [ "python", "app.py" ]
Then, we can build the docker image with:
docker build \
  --pull \
  --network=host \
  --tag gradio-demo:latest \
  .
Alternatively, if you have an image registry like Harbor, you can build and push the image with:
docker pull example.docker.registry/gradio-demo:latest || true
docker build \
  --pull \
  --network=host \
  --tag example.docker.registry/gradio-demo:latest \
  .
docker push example.docker.registry/gradio-demo:latest
CI/CD
You can also automate the above process with CI/CD. Let’s write a .gitlab-ci.yml for users who stick with GitLab:
NOTE: You should create a robot account in the Harbor dashboard, then set the variables $HARBOR_ROBOT_TOKEN and $HARBOR_ROBOT_USER in GitLab:
- On the top bar, select Main menu > Admin.
- On the left sidebar, select Settings > CI/CD and expand the Variables section.
- Select Add variable and fill in the details.
stages:
  - build
  - push

variables:
  HARBOR_LIBRARY_NAME: example.docker.registry/library

before_script:
  - echo -n $HARBOR_ROBOT_TOKEN | docker login -u $HARBOR_ROBOT_USER --password-stdin example.docker.registry/library
  - docker info

Build:
  stage: build
  tags:
    - shell
  script:
    - echo "Start Build"
    - docker pull $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:latest || true
    - >
      docker build
      --pull
      --network=host
      --tag $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_SHA
      .
    - docker push $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_SHA

Push Latest to Harbor:
  stage: push
  tags:
    - shell
  only:
    - main
  script:
    - echo "Start Push Latest to Harbor"
    - docker pull $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_SHA
    - docker tag $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_SHA $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:latest
    - docker push $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:latest

Push Tag to Harbor:
  stage: push
  tags:
    - shell
  only:
    - tags
  script:
    - echo "Start Push Tags to Harbor"
    - docker pull $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_SHA
    - docker tag $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_SHA $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
    - docker push $HARBOR_LIBRARY_NAME/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
Test the Docker image
NOTE: Gradio runs on port 7860 by default.
IMPORTANT: If you have a proxy in your Docker settings, remember to update ~/.docker/config.json with:
{
  "proxies": {
    "default": {
      "httpProxy": "http://<your-proxy>",
      "httpsProxy": "http://<your-proxy>",
      "noProxy": "localhost"
    }
  }
}
After this, you can simply test the docker image with:
docker run -d --env NO_PROXY="localhost" --network host --name gradio-demo gradio-demo:latest
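Once the container is up, a quick sanity check (assuming the app keeps Gradio's default port 7860) looks like:

curl -I http://localhost:7860
docker logs gradio-demo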
Now, let’s deploy it to K8s
NOTE: If you don’t have a Kubernetes environment yet, you can check out my article Install MicroK8s On Ubuntu22.04.
First, let’s write a deploy config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gradio-demo
  namespace: gradio
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gradio-demo
  template:
    metadata:
      labels:
        name: gradio-demo
    spec:
      restartPolicy: Always
      hostNetwork: true
      terminationGracePeriodSeconds: 30
      containers:
        - name: gradio-demo
          image: example.docker.registry/library/gradio-demo:latest
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
          resources:
            limits:
              memory: 20000Mi
            requests:
              memory: 2000Mi
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: gradio-demo
  namespace: gradio
spec:
  type: ClusterIP
  selector:
    name: gradio-demo
  ports:
    - port: 12345
      targetPort: 12345
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/gradio-demo)$ $1/ redirect;
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: gradio-demo
  namespace: gradio
spec:
  rules:
    - http:
        paths:
          - path: /gradio-demo(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: gradio-demo
                port:
                  number: 12345
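The Deployment above pulls from a private registry through the imagePullSecrets entry named regcred, and it assumes the gradio namespace already exists. If you haven't set those up yet, something along these lines should do it (replace the registry and robot credentials with your own):

kubectl create namespace gradio
kubectl -n gradio create secret docker-registry regcred \
  --docker-server=example.docker.registry \
  --docker-username=$HARBOR_ROBOT_USER \
  --docker-password=$HARBOR_ROBOT_TOKEN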
IMPORTANT: If you need to upload file larger than 1M in your gradio app, you will need to increase the maximum upload size for the ingres controller.
Now, we can start the app with:
kubectl apply -f gradio-demo.yaml
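To check that the rollout succeeded (resource names here match the config above):

kubectl -n gradio get deploy,pods,svc,ingress
kubectl -n gradio logs deploy/gradio-demo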
Finally, we can access the Gradio app via https://example.domain.com/gradio-demo.
Add CI/CD Auto Deploy Config
First, add a GitLab CI/CD variable SSH_PRIVATE_KEY containing the content of ~/.ssh/id_rsa generated with ssh-keygen.
NOTE: You should scp the file to a folder where the user has write permission.
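If you don't have a key pair yet, one way to generate it and install the public half on the deploy host (the key path and user here are just examples) is:

ssh-keygen -t rsa -b 4096 -f ./gitlab_deploy_key -N ""
ssh-copy-id -i ./gitlab_deploy_key.pub [email protected]
# Paste the contents of ./gitlab_deploy_key into the SSH_PRIVATE_KEY variable.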
stages:
  - build
  - push
  - deploy

...

Deploy to MicroK8s:
  stage: deploy
  tags:
    - shell
  only:
    - main
  before_script:
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  script:
    - echo "Start Deploy to MicroK8s"
    - scp gradio-deploy-config.yaml [email protected]:<path-to-your-deploy-configs>
    - ssh [email protected] "microk8s kubectl apply -f <path-to-your-deploy-configs>/gradio-deploy-config.yaml"