
Rook Ceph (8)

Rook Ceph - DiskPressure 2020.11.30 a. Problem: DiskPressure - Environments Kubernetes 1.16.15, Rook Ceph 1.3.8, CentOS 7.8 [iap@iap01 ~]$ k get pod -n rook-ceph -o wide | egrep -v "Run|Com" NAME READY STATUS RESTARTS AGE IP NODE ... csi-cephfsplugin-tf82b 0/3 Evicted 0 13m iap04 csi-rbdplugin-jzkxk 0/3 Evicted 0 1s iap04 [iap@iap01 ~]$ k describe pod csi-cephfsplugin-tf82b -n rook-ceph | grep Events -A10 Events: Type Re.. 2021. 9. 16.
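The excerpt shows CSI plugin pods Evicted on node iap04, which usually means kubelet evicted them under a DiskPressure node condition. A minimal sketch of the kind of checks that apply here (node name iap04 and the rook-ceph namespace are taken from the excerpt; the post's actual remediation is truncated, and the filesystem paths below are typical defaults, not confirmed by the post):

  # Confirm the node condition that caused the evictions
  kubectl describe node iap04 | grep -A 8 "Conditions:"

  # List, then clean up, the Evicted (phase=Failed) pods once space is recovered
  kubectl get pod -n rook-ceph --field-selector=status.phase=Failed
  kubectl delete pod -n rook-ceph --field-selector=status.phase=Failed

  # On the node itself, check the filesystems kubelet monitors (assumed default paths)
  df -h /var/lib/kubelet /var/lib/docker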
Rook Ceph - scrub error 2021.04.14 a. Problem: scrub error Environments: Kubernetes 1.16.15, Rook Ceph 1.3.8 Data damage occurred in specific PGs (placement groups). A Placement Group (PG) is a logical collection of objects that are replicated on OSDs to provide reliability in a storage system. [iap@iap01 ~]$ ceph-toolbox.sh [root@rook-ceph-tools-79d7c49c8d-kp6xh /]# ceph status cluster: id: 1ef6e249-005e-477e-999b-b874f9fa0854 health.. 2021. 9. 16.
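For reference, a common sequence for locating and repairing a PG that reports a scrub error, run from the rook-ceph toolbox as in the excerpt. This is a general approach, not necessarily the post's exact fix; the PG id 2.1a is a placeholder, and it is worth identifying the root cause (e.g. a failing disk) before repairing:

  # Find the PG(s) reporting inconsistencies
  ceph health detail | grep -i inconsistent

  # Inspect which object/shard is damaged (2.1a is a placeholder PG id)
  rados list-inconsistent-obj 2.1a --format=json-pretty

  # Instruct the primary OSD to repair the PG
  ceph pg repair 2.1a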
Rook Ceph - rook-ceph-osd POD is CrashLoopBackOff 2021.05.10 a. Problem: the rook-ceph-osd-19-5b8c7f4787-klrfr POD is in CrashLoopBackOff - Environments Kubernetes 1.16.15, Rook Ceph 1.3.8 [iap@iap01 ~]$ k get pod -n rook-ceph -o wide | egrep 'NAME|osd-[0-9]' NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES rook-ceph-osd-12-686858c5dd-hsxh7 1/1 Running 1 37h 10.244.10.105 iap10 rook-ceph-osd-13-584d4ff974-wdtq9 1/1 Running 1 37h .. 2021. 9. 16.
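A sketch of the usual first diagnostic steps for an OSD pod stuck in CrashLoopBackOff; the pod name is taken from the excerpt and the root cause in the post itself is truncated:

  # Crash output from the previous container run
  kubectl logs rook-ceph-osd-19-5b8c7f4787-klrfr -n rook-ceph --previous

  # Events: OOMKilled, failed device mounts, liveness probe failures, ...
  kubectl describe pod rook-ceph-osd-19-5b8c7f4787-klrfr -n rook-ceph | grep Events -A10

  # From the toolbox: how the cluster currently sees its OSDs
  ceph osd tree
  ceph status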
Rook Ceph - pgs undersized 2020.12.31 a. Problem: pgs undersized - Environments Kubernetes 1.16.15, Rook Ceph 1.3.8 [root@rook-ceph-tools-79d7c49c8d-4c4x5 /]# ceph status cluster: id: 1ef6e249-005e-477e-999b-b874f9fa0854 health: HEALTH_WARN Degraded data redundancy: 2/1036142 objects degraded (0.000%), 2 pgs degraded, 14 pgs undersized … b. Cause analysis - undersized The placement group has fewer copies than the configur.. 2021. 9. 16.
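Undersized means a PG currently has fewer copies than the pool's configured replica count. A quick way to see which PGs are affected and whether enough OSDs are up/in to satisfy replication, run from the toolbox (replicapool is a placeholder pool name, not from the post):

  # Which PGs are undersized/degraded right now
  ceph health detail
  ceph pg dump_stuck undersized

  # Compare available OSDs against the pool's replica count
  ceph osd tree
  ceph osd pool get replicapool size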
Rook Ceph - failed to get status 2020.12.01 a. Problem: “op-cluster: failed to get ceph status. failed to get status” - Environments Kubernetes 1.16.15, Rook Ceph 1.3.8 [iap@iap01 ~]$ k logs rook-ceph-operator-674d4db4cf-zpp8g -n rook-ceph | egrep " E " … 2020-11-30 07:16:22.362561 E | op-cluster: failed to create cluster in namespace "rook-ceph". failed to start the mons: failed to start mon pods: failed to check mon quorum q:.. 2021. 9. 16.
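The operator error points at the mons failing their quorum check, so the usual first look is at the mon pods and the quorum state. A minimal sketch (the operator pod name comes from the excerpt; the mon label follows standard Rook conventions, and this is not necessarily the post's resolution):

  # Are the mon pods scheduled and running?
  kubectl get pod -n rook-ceph -l app=rook-ceph-mon -o wide

  # Recent operator errors around mon startup / quorum checks (pod name from the excerpt)
  kubectl logs rook-ceph-operator-674d4db4cf-zpp8g -n rook-ceph | egrep " E " | tail -20

  # From the toolbox, if the cluster is reachable at all, inspect quorum membership
  ceph quorum_status --format json-pretty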
Rook Ceph setup 2020.05.25 1. Environment - Rook v1.3.6, ceph image version: "14.2.10-0 nautilus", cephcsi v2.1.2, Kubernetes 1.16.15, CentOS 7.8 2. Rook / Ceph ? - Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments. https://rook.io/docs/rook/v1.3/ceph-examples.html - Ceph C.. 2021. 9. 15.
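For the versions listed above, the standard install path is the example manifests shipped in the Rook v1.3 repository. A hedged sketch of the usual sequence (manifest names follow the official v1.3 examples; cluster.yaml normally needs editing for the target nodes and devices, and the post's exact steps are truncated):

  git clone --branch v1.3.6 https://github.com/rook/rook.git
  cd rook/cluster/examples/kubernetes/ceph

  # CRDs/RBAC and the operator, then the CephCluster definition
  kubectl create -f common.yaml -f operator.yaml
  kubectl create -f cluster.yaml

  # Toolbox pod for the ceph CLI, as used in the troubleshooting posts above
  kubectl create -f toolbox.yaml
  TOOLS=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
  kubectl -n rook-ceph exec -it "$TOOLS" -- ceph status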
Rook-ceph - OSD/K8s Node removal 2020.12.31 1. Removal targets Kubernetes node: iap07 OSD (Object Storage Daemon): 0, 3 [root@rook-ceph-tools-79d7c49c8d-4c4x5 /]# ceph osd status +----+-------+-------+-------+--------+---------+--------+---------+-----------+ | id | host | used | avail | wr ops | wr data | rd ops | rd data | state | +----+-------+-------+-------+--------+---------+--------+---------+-----------+ | 0 | iap09 | 3324M | 2.. 2021. 9. 15.
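A condensed sketch of the usual order of operations when retiring OSDs and a node from a Rook cluster (OSD ids 0/3 and node iap07 come from the excerpt; the post's full procedure is truncated, and data must finish rebalancing before the purge step):

  # 1. In the toolbox: take the OSDs out and wait until ceph status is healthy again
  ceph osd out 0 3
  ceph status

  # 2. Stop the corresponding Rook OSD deployments, then remove the OSDs from the cluster map
  kubectl -n rook-ceph scale deployment/rook-ceph-osd-0 deployment/rook-ceph-osd-3 --replicas=0
  ceph osd purge 0 --yes-i-really-mean-it
  ceph osd purge 3 --yes-i-really-mean-it

  # 3. Drain and remove the Kubernetes node
  kubectl drain iap07 --ignore-daemonsets --delete-local-data
  kubectl delete node iap07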
Rook ceph vs NFS 2020.11.20 1. Overview Written to decide whether to use Rook Ceph or NFS as Kubernetes storage. 2. Conclusions - The development environment (in GiGA Tech Hub) provides both Rook Ceph and NFS storage, with Rook Ceph as the default - The production environment, and development environments built separately per system, provide only NFS storage. This reflects the operational handover: with no Rook Ceph engineers and no prior adoption cases, Rook Ceph could become an issue during handover discussions. 3. Rook Ceph vs NFS comparison a. Feature / disk utilization perspective (criteria: features, disk utilization for block storage, SPOF, configuration, management/operation) Rook Ceph - Block storage, Shared Filesystem, Object.. 2021. 9. 15.
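From a workload's point of view, the choice mostly surfaces as which StorageClass a PVC requests. A hedged illustration assuming the class names rook-ceph-block (from the Rook examples) and a hypothetical nfs-client class; neither name is confirmed by the post:

  # The classes offered in a given environment show up here
  kubectl get storageclass

  # A workload picks one via spec.storageClassName
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: demo-pvc
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
    storageClassName: rook-ceph-block   # or an NFS class such as nfs-client
  EOF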