You will need to connect your Kubernetes cluster to `kubectl`. When you run `minikube start`, it sets up the cluster and connects it to `kubectl`. When you create a Kubernetes cluster using gcloud or AWS, they each have their own way of connecting it to `kubectl`.
I have a really simple Flask app that needs to be deployed.
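The app's source isn't reproduced in full here, but judging by the output later in the post (a single route answering "Hello, World!" on port 8080), a minimal version would look something like this (function name and layout are my own):

```python
# Minimal sketch of the app being deployed. The route and the
# "Hello, World!" response are taken from the curl output later in
# the post; everything else here is an assumption.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

# Inside the container this would start the dev server on port 8080:
#   app.run(host="0.0.0.0", port=8080)
```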
The first step to getting something onto your Kubernetes cluster is to have a Dockerfile. You can see the Dockerfile that I am using in that folder.
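The Dockerfile itself isn't shown inline, but for a Flask app like this it would be roughly the following sketch (the base image and the `app.py`/`requirements.txt` file names are assumptions, not the author's actual files):

```dockerfile
# Assumed layout: app.py holds the Flask app, requirements.txt lists flask
FROM python:3.7-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```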
You build your Docker image, then set up pods using a Deployment.
First, build your Docker image by running the following command in the directory containing the Dockerfile:

```shell
docker build -t docker.io/meain/flaskapp .
```

Replace `meain` with your Docker Hub username.
This command builds your Docker image and tags it as `docker.io/meain/flaskapp`. You tag the image with the location you will later push it to.
You can start your Docker container by running `docker run --rm -p 8080:8080 docker.io/meain/flaskapp`. This starts the container and maps its internal port 8080 to port 8080 on your local machine.
```shell
$ docker run --rm -p 8080:8080 docker.io/meain/flaskapp
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
172.17.0.1 - - [26/Jun/2019 05:58:36] "GET / HTTP/1.1" 200 -
```
```shell
$ curl localhost:8080
Hello, World!
```
When deploying, you will have to push your Docker image to some remote location and give that location as the image. There are multiple services that let you host Docker images: Google has gcr.io, or you can just use hub.docker.com.
You can push your Docker image using:

```shell
$ docker push docker.io/meain/flaskapp
```
OK, now that we have tested it and are ready, let us deploy to Kubernetes.
You can set up a deployment using:
```shell
$ kubectl run flaskapp --image='docker.io/meain/flaskapp' --port 8080
```

Here `flaskapp` is the deployment name and `docker.io/meain/flaskapp` is the image location. You can choose any deployment name; it does not have to match the image name.
Now that you have created a Deployment object, you can query Kubernetes to see its status.
```shell
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
flaskapp-576b787759-4jmd2   1/1     Running   0          3m47s

$ kubectl get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
flaskapp   1/1     1            1           3m51s
```
Now we set up a Service object to expose the deployment. You can do that using:
```shell
$ kubectl expose deployment flaskapp --type=LoadBalancer --port 80 --target-port 8080
```

Here `flaskapp` is the service name.
This sets up a service called `flaskapp` with type LoadBalancer. It maps the pod's port 8080 to port 80 of the service.
Now that we have everything deployed, let us test it out. Well, you could just do a `curl`, but to which address?
If you run:

```shell
$ kubectl get service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
flaskapp     LoadBalancer   10.99.64.231   <pending>     80:31852/TCP   3m29s
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        59m
```
It will list the services. On managed offerings like EKS, you will have some value in the `EXTERNAL-IP` field, but when using minikube it will just stay `<pending>`.

So, when using EKS, you can just curl that address. With minikube, you create another pod and then use the cluster-internal IP. You can create a tiny pod using the following command. Think of it as `ssh`ing into a VM instance in the cluster.
```shell
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty
```
Once you are in, you can curl the internal IP:
```shell
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-66bdcf564-4p9lb:/ ]$ curl 10.99.64.231
Hello, World!
```
You can easily scale a deployment using `kubectl`. To scale our `flaskapp` deployment to 3 pods, we can run:
```shell
$ kubectl scale deployment flaskapp --replicas 3
deployment.extensions/flaskapp scaled
```
Now, if you check the number of pods, you can see 3 pods running.
```shell
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
flaskapp-576b787759-4jmd2   1/1     Running   0          39m
flaskapp-576b787759-8bhdc   1/1     Running   0          84s
flaskapp-576b787759-kdps9   1/1     Running   0          84s
...
```
OK, now autoscaling. It is just as easy. You can run:
```shell
$ kubectl autoscale deployment flaskapp --min=1 --max=5 --cpu-percent=50
horizontalpodautoscaler.autoscaling/flaskapp autoscaled
```
Now, with that, the deployment will scale from 1 pod up to a maximum of 5 pods based on CPU utilization. Roughly, the autoscaler computes `desired pods = ceil(current pods * current CPU percent / 50)`.
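To make the scaling rule concrete, here is a tiny sketch of the calculation the HorizontalPodAutoscaler performs (the function name and clamping arguments are mine; the real controller also applies tolerances and cooldown windows):

```python
import math

def desired_replicas(current_replicas, current_cpu_percent,
                     target_cpu_percent=50, min_replicas=1, max_replicas=5):
    """HPA rule: ceil(currentReplicas * currentMetric / targetMetric),
    clamped to the configured --min/--max bounds."""
    want = math.ceil(current_replicas * current_cpu_percent / target_cpu_percent)
    return max(min_replicas, min(max_replicas, want))

print(desired_replicas(1, 120))  # CPU at 120% of request -> 3 pods
print(desired_replicas(3, 200))  # would want 12, clamped to the max of 5
```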
Well, it is really easy to delete your deployment or service. You can run:
```shell
kubectl delete pod <pod-name>
kubectl delete deployment <deployment-name>
kubectl delete service <service-name>
```
Well, that is pretty much the basics.
Well, all this is pretty cool. But you probably don't want to retype the entire command with all the parameters every time you want to deploy something or make a change. That is exactly why you have config files.
You can have a config file for any kind of Kubernetes object. It is a YAML file that specifies the same parameters you would otherwise pass on the command line.
You can have configs for multiple Kubernetes objects in one file; you just have to separate them with `---`.
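For example, a Deployment and a Service for the same app can live in one file, separated by the YAML document marker (specs elided here for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
spec:
  # ... deployment spec ...
---
apiVersion: v1
kind: Service
metadata:
  name: flask
spec:
  # ... service spec ...
```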
I am not going into all the configuration options you can set, because it is a pretty vast area; it is better to just go through the Kubernetes documentation.
I will go through the config for a deployment and a service.
To create the same deployment as the command above, we write a config file that looks something like this:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
spec:
  selector:
    matchLabels:
      app: flask
  replicas: 1
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
        - name: flask
          image: "docker.io/meain/flaskapp"
          ports:
            - name: backend
              containerPort: 8080
```
Let me introduce you to the important pieces. Almost every Kubernetes config will contain these things:

- `apiVersion`: the version of the Kubernetes API to use
- `kind`: the kind of object that you are working with
- `metadata`: things like the name and labels of the object
- `spec`: the spec of the deployment, as in which pod, how many replicas, etc.
You use labels or names to link different objects together. The `template` section of a Deployment could actually be defined in a separate Pod object and connected via labels. If we were to separate out the two, we could have the `template` section in a different file and link this Deployment config to it by matching `selector > matchLabels` in the Deployment section to `metadata > labels` in the Pod section of that config. But in this case we just specify the Pod specification directly inside the Deployment.
In here:

- `selector`: specifies the labels used to select the pods this Deployment manages
- `replicas`: how many replicas of the pod to run
- `template`: the pod definition (it could be taken out into a separate piece if needed)
The `template` section actually contains the Pod definition. It will have its own `metadata` section, but does not need things like `apiVersion` or `kind`, as those are already known from the surrounding Deployment. The Pod spec contains:

- `containers`: the list of containers to run in the pod
- `name`: the name of the container
- `image`: the Docker image to use
- `ports`: the ports to be exposed
Well, that is pretty much it for a Deployment config. Now here is a sample Service config:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask
  labels:
    app: flask
    tier: backend
    track: stable
spec:
  selector:
    app: flask
    tier: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: backend
  type: LoadBalancer
```
Here, again, you have the same fields like `metadata`, but the `spec` section changes. In the `spec` of a Service we define things like:
- `ports`: the ports to be connected to
- `type`: the type of the service
There are a lot more things that the `spec` section of a Service can handle. You can look them up in the Kubernetes documentation, but this is the essential idea.
OK, now you have the config files. How do you use them?
You can either run:
```shell
kubectl apply -f <config-file>
```

or

```shell
kubectl create -f <config-file>
```
The difference is that `create` imperatively creates the object and will fail if it already exists. With `apply`, you can change the file later and run the same command again to apply only the changes that you have made.
With that, I'm out. You are on your own now. But I hope this helped out a bit.