
Understanding Knative

by 여행을 떠나자! 2021. 10. 5.

1. What is Knative?

- Knative is open-source software for deploying, running, and managing serverless cloud-native applications (serverless CNA) on Kubernetes.

- Representative serverless computing services in the public cloud include AWS Lambda, Azure Functions, and Google Cloud Functions.

 

 

2. Knative features
a. Serving
https://bcho.tistory.com/1322
   ✓ A serverless model for stateless web services
   ✓ Serving is a framework for building stateless web services: you simply deploy a web-service container, and it takes care of load balancer placement, autoscaling, scale-to-zero, and advanced rollout strategies such as rolling and canary deployments (a traffic-split sketch appears at the end of this subsection).
   ✓ Integrates with Istio, a service mesh solution
- https://knative.dev/docs/serving/

   ✓ Rapid deployment of serverless containers.
   ✓ Autoscaling including scaling pods down to zero.
   ✓ Support for multiple networking layers such as Ambassador, Contour, Kourier, Gloo, and Istio for integration into existing environments.
   ✓ Point-in-time snapshots of deployed code and configurations.
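
   Canary-style rollouts are expressed through the Service's traffic block. A minimal sketch, assuming a Service named hello with an existing revision hello-00001 (both names are illustrative):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                        # illustrative name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go   # Knative sample image
  traffic:
  - revisionName: hello-00001        # the existing revision keeps 90% of the traffic
    percent: 90
  - latestRevision: true             # the newly created revision receives 10% (canary)
    percent: 10
    tag: canary                      # also reachable directly at a tagged URL for testing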

 

b. Eventing
https://bcho.tistory.com/1323
   ✓ A serverless model for event handling (consumers)
   ✓ An asynchronous mechanism for receiving and processing messages from brokers such as Kafka or RabbitMQ, or events fired by timers such as cron.
- https://knative.dev/docs/eventing/
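
   For the timer-driven case, a minimal sketch using a PingSource that delivers a CloudEvent to a Knative Service on a cron schedule is shown below. The sink name event-display is an assumption, and the API version shown is for recent Knative releases; older releases (such as the v0.14.3 used later in this post) expose PingSource under an alpha/beta API version.

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute
  namespace: yoosung-jeon              # namespace is illustrative
spec:
  schedule: "*/1 * * * *"              # cron expression: fire every minute
  contentType: "application/json"
  data: '{"message": "hello from cron"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display              # assumed consumer (a Knative Service) that receives the events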

 

 

3. Understanding Knative Serving

a. Key components

$ k get deployments.apps -n knative-serving
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
activator          1/1     1            1           142d
autoscaler         1/1     1            1           142d
controller         1/1     1            1           142d
istio-webhook      1/1     1            1           142d
networking-istio   1/1     1            1           142d
webhook            1/1     1            1           142d
$

- Controller

   ✓ When a user applies a Knative service to the Kubernetes API, this creates the configuration and route.

       It will convert the configuration into revisions and the revisions into deployments and Knative Pod Autoscalers (KPAs)

Knative serving CRDs

 

- activator

   ✓ The Activator is a component that receives all the traffic coming to the IDLE Revisions. When the Activator receives a request, it changes the Revision state to Active, which lets the Revision Pods receive the requests.

   [Additional explanation]: https://knative.tips/autoscaling/activation/

       When a Knative Service is scaled to zero, it has no Pods running to receive traffic.

        When the first request comes to the activator, it will hold onto the request and scale the Knative Service up to > 0.

       Once a Pod becomes ready,
            it will proxy the traffic to the Pod
           it will update the Endpoints of the Kubernetes Service({rev_name}-private) so that the subsequent requests to the Revision directly resolve to the app's Pod IPs.

https://help-static-aliyun-doc.aliyuncs.com/assets/img/en-US/0032790261/p172886.png

 

- autoscaler (KPA)

   ✓ The autoscaler receives request metrics and adjusts the number of pods required to handle the load of traffic.
   ✓ Knative Serving adds the Queue-Proxy container to each pod. The Queue-Proxy container sends concurrency metrics of the application containers to KPA. After KPA receives the metrics, KPA automatically adjusts the number of pods provisioned for a Deployment based on the number of concurrent requests and related algorithms.

   [Additional explanation]: https://knative.tips/networking/life-of-a-request/

       Queue-proxy is responsible for making sure the Pod receives only the desired amount of concurrent requests, and it also reports concurrency metrics for autoscaling (a concurrency-settings sketch follows the pod listing below).

$ k describe pod autoscale-go-hm69r-deployment-7956b95556-zwf4j -n yoosung-jeon | egrep "Container ID:" -C1
  user-container:
    Container ID:   docker://a4e378428de4f264026c29faf8569a6a9858c0ae2fdfa2f2ca3b760031cefa94
    Image:          gcr.io/knative-samples/autoscale-go@sha256:e5e89c5fd57c717b49d41be89faebc526bdcda017e898ae86c2bf20f5cd339b5
--
--
  queue-proxy:
    Container ID:   docker://34dcd6a764390cc3833d15b86f763c7abc439e1dbd134b4380e70ed19fe497a4
    Image:          gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:d066ae5b642885827506610ae25728d442ce11447b82df6e9cc4c174bb97ecb3
$
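
   The concurrency that queue-proxy enforces and reports can be tuned on the Service itself. A minimal sketch (values are illustrative): containerConcurrency is a hard per-pod limit that queue-proxy enforces, while the autoscaling.knative.dev/target annotation is the soft target the KPA scales on.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: autoscale-go
  namespace: yoosung-jeon
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "10"   # soft target: KPA aims for ~10 in-flight requests per pod
    spec:
      containerConcurrency: 10                 # hard limit: queue-proxy holds back requests beyond 10 concurrent
      containers:
      - image: gcr.io/knative-samples/autoscale-go:0.1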

 

b. Request flow examples

- simplified flow

https://www.slideshare.net/JeremiasWerner/how-knative-changes-the-serverless-landscape

   b-1. Istio Ingress Gateway is configured by the Route and terminates the request

   b-2. Ingress Gateway forwards the requests to the Activator (when a ksvc is scaled to zero)

   b-3. Activator buffers requests when the revision is scaled to zero or when it is in burst mode

   b-4. Queue Proxy terminates the request in the service pod and forwards it to the user container

   b-5. Autoscaler scrapes metrics from the Activator and Queue Proxies and scales the Deployment

 

- Scaling from zero (detailed)

https://github.com/knative/serving/raw/main/docs/scaling/images/scale-from-0.png

   If a revision is scaled to zero and a request comes into the system trying to reach this revision, the system needs to scale it up. As the SKS is in Proxy mode, the request will reach the activator (1), which will count it and report its appearance to the autoscaler (2.1). The activator will then buffer the request and watch the SKS's private service for endpoints to appear (2.2).
   The autoscaler gets the metric from the activator and immediately runs an autoscaling cycle (3). That process will determine that at least one pod is desired (4) and the autoscaler will instruct the revision's deployment to scale up to N > 0 replicas (5.1). It also puts the SKS into Serve mode, causing the traffic to flow to the revision's pods directly, once they come up (5.2).
   The activator eventually sees the endpoints coming up and starts probing it. Once the probe passes successfully, the respective address will be considered healthy and used to route the request we buffered and all additional requests that arrived in the meantime (8.2).
   The revision has been successfully scaled from zero.
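
   This sequence is easy to observe: watch the revision's pods in one terminal while sending a request from another (a sketch; the label selector and namespace follow the example later in this post and may need adjusting).

# Terminal 1: watch the revision's pods appear as the deployment is scaled up from zero
$ k get pods -n yoosung-jeon -l serving.knative.dev/revision=autoscale-go-klt76 -w

# Terminal 2: a single request is buffered by the activator and triggers scale-from-zero
$ curl "http://autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr?sleep=100&prime=10000&bloat=5"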

 

   The SKS (ServerlessService) and the corresponding Kubernetes Services look like this:

$ k get sks autoscale-go-klt76 -n yoosung-jeon
NAME                 MODE    ACTIVATORS   SERVICENAME          PRIVATESERVICENAME           READY   REASON
autoscale-go-klt76   Proxy   3            autoscale-go-klt76   autoscale-go-klt76-private   True
$

   ✓ The endpoint of autoscale-go-klt76 is the activator-55f9fdc55d-k64tg Pod (the SKS is in Proxy mode)

   ✓ The endpoint of autoscale-go-klt76-private is the queue-proxy container of the autoscale-go-klt76-deployment-8688cc56cb-d8h8l Pod
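
   These endpoints can be checked by comparing the Endpoints objects of the public and private Services with the activator and application pod IPs (command sketch, output omitted; the app=activator label is an assumption about the activator deployment's labels):

$ k get endpoints autoscale-go-klt76 autoscale-go-klt76-private -n yoosung-jeon
$ k get pods -n knative-serving -l app=activator -o wide                                   # compare with the public Service's endpoints
$ k get pods -n yoosung-jeon -l serving.knative.dev/revision=autoscale-go-klt76 -o wide    # compare with the private Service's endpoints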

 

- Scaling up and down (detailed)

https://github.com/knative/serving/raw/main/docs/scaling/images/scale-up-down.png

   At steady state, the autoscaler is constantly scraping the currently active revision pods to adjust the scale of the revision constantly. As requests flow into the system, the scraped values will change and the autoscaler will instruct the revision's deployment to adhere to a given scale.

The SKS keeps track of the changes to the deployment's size through the private service. It updates the public service accordingly.
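
   Scale bounds and the window the autoscaler averages metrics over can be tuned per revision with annotations. A minimal sketch (values are illustrative):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: autoscale-go
  namespace: yoosung-jeon
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"    # never scale below 1 pod (disables scale-to-zero)
        autoscaling.knative.dev/maxScale: "10"   # upper bound the KPA will not exceed
        autoscaling.knative.dev/window: "60s"    # stable window over which request metrics are averaged
    spec:
      containers:
      - image: gcr.io/knative-samples/autoscale-go:0.1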

 

c. Knative serving diagram 

https://blog.nebrass.fr/wp-content/uploads/knative-serving-ecosystem.png

 

 

4. Knative Serving example

- Environment

   knative v0.14.3, kubernetes 1.16.15, kubeflow 1.2 (Knative is bundled with Kubeflow)

 

- Deploy the sample Knative service

   ✓ Create a new immutable revision for this version of the app.
   ✓ Perform network programming to create a route, ingress, service, and load balancer for your app.
   ✓ Automatically scale your pods up and down based on traffic, including to zero active pods.

$ git clone -b "release-0.26" https://github.com/knative/docs knative-docs
$ cd knative-docs
$ cat docs/serving/autoscaling/autoscale-go/service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: autoscale-go
  namespace: yoosung-jeon
spec:
  template:
    metadata:
      annotations:
        # Target 10 in-flight-requests per pod.
        autoscaling.knative.dev/target: "10"
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
      - image: gcr.io/knative-samples/autoscale-go:0.1
$ k apply -f docs/serving/autoscaling/autoscale-go/service.yaml
$

   ✓ The Knative custom domain was changed from 'example.com' to 'kf-serv.acp.kt.co.kr', so the URL of this Knative service is 'http://autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr'.

       See: Knative - Changing the custom domain
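
       As a minimal sketch of that change (the linked post covers the details), the custom domain is a key in the config-domain ConfigMap of the knative-serving namespace:

$ k patch configmap config-domain -n knative-serving --type merge \
    -p '{"data": {"kf-serv.acp.kt.co.kr": ""}}'      # an empty value makes this the default domain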

$ k get ksvc autoscale-go -n yoosung-jeon
NAME           URL                                                     LATESTCREATED        LATESTREADY          READY   REASON
autoscale-go   http://autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr   autoscale-go-klt76   autoscale-go-klt76   True
$

 

- Make a request to the autoscale app

   ✓ If the custom domain (e.g. *.kf-serv.acp.kt.co.kr) is not registered in DNS, the Host header must be set explicitly when calling the service.

       $ curl -H "Host: autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr" http://${IP_Address}

   ✓ If you cannot use your own DNS, a free wildcard DNS service such as sslip.io, xip.io, or nip.io can be used as an alternative.

       For example, if the external IP of istio-ingressgateway is '14.52.244.137', the custom domain can be set to '14.52.244.137.sslip.io'.

$ curl "http://autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr?sleep=100&prime=10000&bloat=5"
Allocated 5 Mb of memory.
The largest prime less than 10000 is 9973.
Slept for 100.17 milliseconds.
$

 

- The resources automatically created after deploying the Knative service (autoscale-go) are listed below. Because autoscale-go was modified after deployment, multiple revisions were created.

Knative serving CRDs

$ k get ksvc autoscale-go -n yoosung-jeon
NAME           URL                                                     LATESTCREATED        LATESTREADY          READY   REASON
autoscale-go   http://autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr   autoscale-go-klt76   autoscale-go-klt76   True
$
$ k get rt autoscale-go -n yoosung-jeon
NAME           URL                                                     READY   REASON
autoscale-go   http://autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr   True
$
$ k get cfg autoscale-go -n yoosung-jeon
NAME           LATESTCREATED        LATESTREADY          READY   REASON
autoscale-go   autoscale-go-klt76   autoscale-go-klt76   True
$
$ k get rev -n yoosung-jeon -l serving.knative.dev/configuration=autoscale-go
NAME                 CONFIG NAME    K8S SERVICE NAME     GENERATION   READY   REASON
autoscale-go-6qn5q   autoscale-go   autoscale-go-6qn5q   5            True
autoscale-go-8xf97   autoscale-go   autoscale-go-8xf97   4            True
autoscale-go-clmht   autoscale-go   autoscale-go-clmht   2            True
autoscale-go-klt76   autoscale-go   autoscale-go-klt76   7            True
autoscale-go-n9hm2   autoscale-go   autoscale-go-n9hm2   3            True
autoscale-go-ql4mq   autoscale-go   autoscale-go-ql4mq   6            True
$
$ k get deployments.apps -n yoosung-jeon -l serving.knative.dev/configuration=autoscale-go
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
autoscale-go-6qn5q-deployment   0/0     0            0           21h
autoscale-go-8xf97-deployment   0/0     0            0           22h
autoscale-go-clmht-deployment   0/0     0            0           22h
autoscale-go-klt76-deployment   1/1     1            1           17h
autoscale-go-n9hm2-deployment   0/0     0            0           22h
autoscale-go-ql4mq-deployment   0/0     0            0           20h
$
$ k get svc -n yoosung-jeon -l serving.knative.dev/configuration=autoscale-go
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                             AGE
autoscale-go-6qn5q           ClusterIP   10.111.129.245   <none>        80/TCP                              22h
autoscale-go-6qn5q-private   ClusterIP   10.109.109.236   <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP   22h
autoscale-go-8xf97           ClusterIP   10.111.46.166    <none>        80/TCP                              22h
autoscale-go-8xf97-private   ClusterIP   10.97.224.235    <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP   22h
autoscale-go-clmht           ClusterIP   10.107.112.20    <none>        80/TCP                              22h
autoscale-go-clmht-private   ClusterIP   10.97.237.246    <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP   22h
autoscale-go-klt76           ClusterIP   10.105.153.108   <none>        80/TCP                              17h
autoscale-go-klt76-private   ClusterIP   10.105.9.229     <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP   17h
autoscale-go-n9hm2           ClusterIP   10.106.233.163   <none>        80/TCP                              22h
autoscale-go-n9hm2-private   ClusterIP   10.101.14.98     <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP   22h
autoscale-go-ql4mq           ClusterIP   10.109.114.156   <none>        80/TCP                              20h
autoscale-go-ql4mq-private   ClusterIP   10.100.248.232   <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP   20h
$
$ k get kpa -n yoosung-jeon -l serving.knative.dev/configuration=autoscale-go
NAME                 DESIREDSCALE   ACTUALSCALE   READY   REASON
autoscale-go-6qn5q   0              0             False   NoTraffic
autoscale-go-8xf97   0              0             False   NoTraffic
autoscale-go-clmht   0              0             False   NoTraffic
autoscale-go-klt76   1              1             True
autoscale-go-n9hm2   0              0             False   NoTraffic
autoscale-go-ql4mq   0              0             False   NoTraffic
$
$ k get virtualservices.networking.istio.io -n yoosung-jeon | egrep 'NAME|autoscale-go'
NAME                    GATEWAYS                                                            HOSTS                                                                                                                                                                                                                                  AGE
autoscale-go-ingress    [knative-serving/cluster-local-gateway kubeflow/kubeflow-gateway]   [autoscale-go.yoosung-jeon autoscale-go.yoosung-jeon.kf-serv.acp.kt.co.kr autoscale-go.yoosung-jeon.svc autoscale-go.yoosung-jeon.svc.cluster.local]                                                                                   41d
autoscale-go-mesh       [mesh]                                                              [autoscale-go.yoosung-jeon autoscale-go.yoosung-jeon.svc autoscale-go.yoosung-jeon.svc.cluster.local]
$
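
   When the example is no longer needed, deleting the Knative Service cascades to the route, configuration, revisions, deployments, Kubernetes Services, and KPAs shown above (a sketch):

$ k delete ksvc autoscale-go -n yoosung-jeon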
