This post covers the following Kubernetes objects:
1) EXAMPLE OF LABELS
2) EXAMPLE OF NODE SELECTOR
3) EXAMPLE OF REPLICATION CONTROLLER
4) EXAMPLE OF REPLICA SET
5) EXAMPLE OF DEPLOYMENT
6) EXAMPLE OF KUBERNETES CLUSTERIP SERVICE
7) EXAMPLE OF KUBERNETES NODEPORT SERVICE
8) EXAMPLE OF KUBERNETES VIRTUAL VOLUME INSIDE THE CONTAINER
9) EXAMPLE OF KUBERNETES HOST VOLUME
1) EXAMPLE OF LABELS
Labels are key/value pairs that are attached to objects, such as Pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organise and to select subsets of objects.
kind: Pod
apiVersion: v1
metadata:
  name: demopod
  labels:
    env: development
    company: hcl
    creator: techsingh
spec:
  containers:
  - name: container1
    image: ubuntu
    command: ["/bin/bash", "-c", "while true; do echo My-Demo-Container1-Running; sleep 5; done"]
  restartPolicy: Never   # Defaults to Always
$ kubectl apply -f label.yml
$ kubectl get pods
$ kubectl get pods --show-labels
- Add labels via imperative method
$ kubectl label pods demopod company=hcl
$ kubectl get pods --show-labels
- Get Pods via Label
$ kubectl get pods -l env=development
$ kubectl get pods -l env!=development
- Get Pods via Multiple Labels
$ kubectl get pods -l 'env in(development,techsingh)'
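- You can also remove a label imperatively; a trailing "-" after the key removes it. For example:
$ kubectl label pods demopod company-
$ kubectl get pods --show-labels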
- Delete Pod Via Label
$ kubectl delete pod -l env=development
2) EXAMPLE OF NODE SELECTOR
nodeSelector is a field of PodSpec. It specifies a map of key/value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key/value pairs as labels (it can have additional labels as well). The most common usage is a single key/value pair.
kind: Pod
apiVersion: v1
metadata:
  name: podfornodeselector
  labels:
    env: development
    company: hcl
    creator: techsingh
spec:
  containers:
  - name: container1
    image: ubuntu
    command: ["/bin/bash", "-c", "while true; do echo My-Demo-Container1-Running; sleep 5; done"]
  restartPolicy: Never   # Defaults to Always
  nodeSelector:
    hardware: t2-medium
$ kubectl apply -f nodeselector.yml
- The pod will show 0/1 and stay Pending, because no node has the required hardware=t2-medium label yet
$ kubectl get pods
$ kubectl describe pod podfornodeselector
- Show All Available Nodes
$ kubectl get nodes
- Apply Label to selected Node
$ kubectl label nodes ip-172-31-16-27 hardware=t2-medium
$ kubectl get nodes --show-labels |grep hardware
$ kubectl get pods
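- Once the node is labelled, the Pending pod should get scheduled onto it; for example, check which node it landed on:
$ kubectl get pods -o wide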
- Remove Label From Node
$ kubectl label node ip-172-31-16-27 hardware-
3) EXAMPLE OF REPLICATION CONTROLLER
A ReplicationController ensures that a specified number of pod replicas are running at any one time.
In other words, a ReplicationController makes sure that a pod (or a homogeneous set of pods) is always up and available.
kind: ReplicationController
apiVersion: v1
metadata:
  name: myreplica
spec:
  replicas: 2              # Desired number of pods
  selector:                # Tells the controller which pods belong to this RC
    myname: tech2towards   # Must match the pod template labels
  template:                # Template used to launch new pods
    metadata:
      name: demopod
      labels:              # Label values must match the selector values
        myname: tech2towards
    spec:
      containers:
      - name: container1
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo My-Demo-Container1-Running; sleep 5; done"]
$ kubectl apply -f replica.yml
$ kubectl get rc
$ kubectl describe rc myreplica
$ kubectl get pods
$ kubectl delete pod pod_name
$ kubectl get pods
$ kubectl describe rc myreplica
$ kubectl get pods --show-labels
- Scale pods up/down by the imperative method
$ kubectl scale --replicas=8 rc -l myname=tech2towards
$ kubectl scale --replicas=1 rc -l myname=tech2towards
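- Alternatively, scale declaratively: change the replicas value in replica.yml and re-apply it, for example:
$ kubectl apply -f replica.yml
$ kubectl get rc myreplica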
4) EXAMPLE OF REPLICA SET
ReplicaSet is the next-generation ReplicationController.
A ReplicationController supports only equality-based selectors, whereas a ReplicaSet also supports set-based selectors, e.g. filtering according to a set of values.
Note: the kind, apiVersion and selector (matchExpressions) change compared to the ReplicationController example.
kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: myreplicaset
spec:
  replicas: 2
  selector:
    matchExpressions:      # These expressions must match the pod template labels
    - {key: myname, operator: In, values: [Technology, tech]}
    - {key: env, operator: NotIn, values: [production]}
  template:
    metadata:
      name: demopod
      labels:
        myname: Technology
    spec:
      containers:
      - name: container1
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo My-Demo-Container1-Running; sleep 5; done"]
$ kubectl apply -f replicaset.yml
$ kubectl get rs
$ kubectl get pods
$ kubectl scale --replicas=1 rs/myreplicaset
$ kubectl get rs
$ kubectl get pods
$ kubectl delete pod pod-name
$ kubectl get pods
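- Since a ReplicaSet supports set-based selectors, you can query its pods the same way; for example (assuming the labels used above):
$ kubectl get pods -l 'myname in (Technology,tech)'
$ kubectl get pods -l 'env notin (production)'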
$ kubectl delete rs/myreplicaset
5) EXAMPLE OF DEPLOYMENT
A Deployment provides declarative updates for Pods and ReplicaSets: you describe a desired state, and the Deployment controller changes the actual state to match it.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 1
  selector:              # Tells the controller which pods to watch/belong to
    matchLabels:
      name: deployment
  template:
    metadata:
      name: demopod
      labels:
        name: deployment
    spec:
      containers:
      - name: container1
        image: httpd
        ports:
        - containerPort: 80
$ kubectl apply -f deployment.yml
$ kubectl get pods
$ kubectl get pods -o wide
$ curl pod-ip:80
6) EXAMPLE OF KUBERNETES CLUSTERIP SERVICE
Now make a serviceci.yml for the ClusterIP service
kind: Service        # Create a Service type object
apiVersion: v1
metadata:
  name: demoservice
spec:
  ports:
  - port: 80         # Port exposed by the Service
    targetPort: 80   # Port on the pods
  selector:
    name: deployment # Apply this Service to any pod that has this label
  type: ClusterIP    # Service type, i.e. ClusterIP or NodePort
$ kubectl apply -f serviceci.yml
$ kubectl get svc
$ curl cluster-ip:80
$ kubectl get pods
$ kubectl delete pod pod-name
$ kubectl get pods
$ curl cluster-ip:80
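- To see which pod IPs the Service currently routes to, you can also check its Endpoints object (assuming the Service name demoservice from above):
$ kubectl get endpoints demoservice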
7) EXAMPLE OF KUBERNETES NODEPORT SERVICE
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 1
  selector:              # Tells the controller which pods to watch/belong to
    matchLabels:
      name: deployment
  template:
    metadata:
      name: demopod
      labels:
        name: deployment
    spec:
      containers:
      - name: container1
        image: httpd
        ports:
        - containerPort: 80
$ kubectl apply -f deployment.yml
Now make a servicenp.yml for the NodePort service
kind: Service        # Create a Service type object
apiVersion: v1
metadata:
  name: demoservice
spec:
  ports:
  - port: 80         # Port exposed by the Service
    targetPort: 80   # Port on the pods
  selector:
    name: deployment # Apply this Service to any pod that has this label
  type: NodePort     # Service type, i.e. ClusterIP or NodePort
$ kubectl apply -f servicenp.yml
$ kubectl get svc
$ kubectl describe svc svc-name
Now copy the AWS EC2 instance IP/DNS and open it in a browser with the assigned NodePort, e.g. aws-dns-address:32565
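- If you need a fixed port rather than a randomly assigned one, you can pin it in the ports section of servicenp.yml (a sketch; the nodePort value must fall within the cluster's NodePort range, 30000-32767 by default):
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32565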
8) EXAMPLE OF KUBERNETES VIRTUAL VOLUME INSIDE THE CONTAINER
An emptyDir volume is created when the pod is assigned to a node and is shared by all containers in the pod; its contents are deleted when the pod is removed.
apiVersion: v1
kind: Pod
metadata:
  name: myvirtualvol
spec:
  containers:
  - name: container1
    image: centos
    command: ["/bin/bash", "-c", "while true; do echo My-Demo-Container1-Running; sleep 5; done"]
    volumeMounts:          # Mount definition inside the container
    - name: virtualvol
      mountPath: "/tmp/con1"
  - name: container2
    image: centos
    command: ["/bin/bash", "-c", "while true; do echo My-Demo-Container2-Running; sleep 5; done"]
    volumeMounts:
    - name: virtualvol
      mountPath: "/tmp/con2"
  volumes:
  - name: virtualvol
    emptyDir: {}
$ kubectl apply -f virtualvol.yml
$ kubectl exec -it pod-name -c container1 -- /bin/bash
$ touch /tmp/con1/best.txt
$ kubectl exec -it pod-name -c container2 -- /bin/bash
$ ls /tmp/con2/
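- The same check can also be run without opening an interactive shell, for example:
$ kubectl exec pod-name -c container2 -- ls /tmp/con2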
9) EXAMPLE OF KUBERNETES HOST VOLUME
A hostPath volume mounts a file or directory from the node's filesystem into the pod.
apiVersion: v1
kind: Pod
metadata:
  name: myhostvolume
spec:
  containers:
  - image: centos
    name: container1
    command: ["/bin/bash", "-c", "while true; do echo My-Demo-Container1-Running; sleep 5; done"]
    volumeMounts:
    - mountPath: /tmp/convol
      name: hostvolume
  volumes:
  - name: hostvolume
    hostPath:
      path: /tmp/data
$ kubectl apply -f hostvolume.yml
- On the node where the pod is scheduled, create a file in the host directory
$ touch /tmp/data/tech2towards.txt
$ kubectl exec -it pod-name -c container1 -- /bin/bash
$ ls /tmp/convol
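- If the directory might not exist on the node yet, hostPath can create it for you; a sketch of the volumes section with the optional type field:
  volumes:
  - name: hostvolume
    hostPath:
      path: /tmp/data
      type: DirectoryOrCreate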