
Dashy helps to organize self-hosted services and make them accessible from a single interface.

Original homepage --> https://dashy.to/

Have Kubernetes generate a YAML file

kubectl create deployment dashy --image lissy93/dashy --dry-run=client -o yaml > dashy.yml
vi dashy.yml

 


apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: dashy
  name: dashy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: dashy
    spec:
      containers:
      - image: lissy93/dashy
        name: dashy
        resources: {}
status: {}

The timestamps are not needed and can be removed.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dashy
  name: dashy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashy
  strategy: {}
  template:
    metadata:
      labels:
        app: dashy
    spec:
      containers:
      - image: lissy93/dashy
        name: dashy
        resources: {}
status: {}

Careful: if you get error messages, the indentation may be off !!
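
To catch such indentation problems before anything reaches the cluster, the file can first be validated with a server-side dry run (an optional extra step, not required by the workflow above):

kubectl apply -f dashy.yml --dry-run=server

If the YAML is malformed, this prints the parse error without creating or changing any resources.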

Now the container can be started. -f --> start from the file

root@k3s-n1:/home/mkellerer# kubectl apply -f dashy.yml
deployment.apps/dashy created
root@k3s-n1:/home/mkellerer#

Prepare the LoadBalancer file
kubectl expose deployment dashy --port 8081 --target-port 80 --type LoadBalancer --load-balancer-ip 172.16.155.48 --dry-run=client -o yaml > lb-dashy1.yml
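
For orientation, the generated lb-dashy1.yml should look roughly like this (a sketch of the expected dry-run output; field order and defaults may differ slightly):

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: dashy
  name: dashy
spec:
  loadBalancerIP: 172.16.155.48
  ports:
  - port: 8081
    protocol: TCP
    targetPort: 80
  selector:
    app: dashy
  type: LoadBalancer
status:
  loadBalancer: {}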
Deploy the LB file
root@k3s-n1:/home/mkellerer# kubectl apply -f lb-dashy1.yml
service/dashy created
root@k3s-n1:/home/mkellerer#

What is running now?

root@k3s-n1:/home/mkellerer# kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/dashy-6bb85b47d5-l8pdf   1/1     Running   0          15m

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/dashy        LoadBalancer   10.43.187.225   172.16.155.48   8081:31456/TCP   5m
service/kubernetes   ClusterIP      10.43.0.1       <none>          443/TCP          18d

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashy   1/1     1            1           15m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/dashy-6bb85b47d5   1         1         1       15m

The Dashy application is now reachable at http://172.16.155.48:8081, but still with the default configuration. There are two ways to obtain your own configuration file:

  • Fetch it from the container:

To do this, look up the exact name of the pod --> dashy-6bb85b47d5-l8pdf and enter the container with the following command:

root@k3s-n1:/home/mkellerer# kubectl exec dashy-6bb85b47d5-l8pdf -it -- /bin/sh

Leave the container again with exit or CTRL-D.

Then display the file conf.yml --> the Dashy documentation states that this is the file's name.

Option 1: fetch it from the inside

/app/public # cat conf.yml
-------------------------------------------------------------------------------------------------------------------------------
# Page meta info, like heading, footer text and nav links
pageInfo:
  title: Dashy
  description: Welcome to your new dashboard!
  navLinks:
  - title: GitHub
    path: https://github.com/Lissy93/dashy
  - title: Documentation
    path: https://dashy.to/docs

# Optional app settings and configuration
appConfig:
  theme: colorful

# Main content - An array of sections, each containing an array of items
sections:
- name: Getting Started
  icon: fas fa-rocket
  items:
  - title: Dashy Live
    description: Development and project management links for Dashy
    icon: https://i.ibb.co/qWWpD0v/astro-dab-128.png
    url: https://live.dashy.to/
    target: newtab
  - title: GitHub
    description: Source Code, Issues and Pull Requests  
    url: https://github.com/lissy93/dashy
    icon: favicon
  - title: Docs
    description: Configuring & Usage Documentation
    provider: Dashy.to
    icon: far fa-book
    url: https://dashy.to/docs
  - title: Showcase
    description: See how others are using Dashy
    url: https://github.com/Lissy93/dashy/blob/master/docs/showcase.md
    icon: far fa-grin-hearts
  - title: Config Guide
    description: See full list of configuration options
    url: https://github.com/Lissy93/dashy/blob/master/docs/configuring.md
    icon: fas fa-wrench
  - title: Support
    description: Get help with Dashy, raise a bug, or get in contact
    url: https://github.com/Lissy93/dashy/blob/master/.github/SUPPORT.md
    icon: far fa-hands-helping
/app/public #

Option 2: fetch the config file from the outside

kubectl cp dashy-6bb85b47d5-l8pdf:/app/public/conf.yml ./dashy1-conf.yml
This only works if tar is installed in the container.
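
If tar is missing from the image, a workaround (a sketch, using the same pod name as above) is to stream the file out via kubectl exec instead:

kubectl exec dashy-6bb85b47d5-l8pdf -- cat /app/public/conf.yml > dashy1-conf.yml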

Edit the file as desired.

-------------------------------------------------------------------------------------------

# Page meta info, like heading, footer text and nav links
pageInfo:
  title: Dashy-Matthias
  description: Willkommen Zuhause
  navLinks:
  - title: GitHub
    path: https://github.com/Lissy93/dashy
  - title: Documentation
    path: https://dashy.to/docs

# Optional app settings and configuration
appConfig:
  theme: colorful

# Main content - An array of sections, each containing an array of items
sections:
- name: Getting Started
  icon: fas fa-rocket
  items:
  - title: Dashy Live
    description: Development and project management links for Dashy
    icon: https://i.ibb.co/qWWpD0v/astro-dab-128.png
    url: https://live.dashy.to/
    target: newtab
  - title: GitHub
    description: Source Code, Issues and Pull Requests  
    url: https://github.com/lissy93/dashy
    icon: favicon
  - title: Docs
    description: Configuring & Usage Documentation
    provider: Dashy.to
    icon: far fa-book
    url: https://dashy.to/docs
  - title: Showcase
    description: See how others are using Dashy
    url: https://github.com/Lissy93/dashy/blob/master/docs/showcase.md
    icon: far fa-grin-hearts
  - title: Config Guide
    description: See full list of configuration options
    url: https://github.com/Lissy93/dashy/blob/master/docs/configuring.md
    icon: fas fa-wrench
  - title: Support
    description: Get help with Dashy, raise a bug, or get in contact
    url: https://github.com/Lissy93/dashy/blob/master/.github/SUPPORT.md
    icon: far fa-hands-helping

Title and the like have been changed; now the file is written into the pod using Kubernetes means (a ConfigMap).

Create the file --> cm-dashy1.yml

root@k3s-n1:/home/mkellerer# kubectl create configmap dashy --from-file=dashy1-conf.yml --dry-run=client -o yaml > cm-dashy1.yml

root@k3s-n1:/home/mkellerer# kubectl apply -f cm-dashy1.yml
configmap/dashy created
root@k3s-n1:/home/mkellerer#
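
For reference, the generated cm-dashy1.yml should have roughly this shape (a sketch; the data block carries the complete, indented contents of dashy1-conf.yml):

apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: dashy
data:
  dashy1-conf.yml: |
    # ... full contents of dashy1-conf.yml ...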

The ConfigMap is now in place, but for the container to actually run with the custom configuration, the deployment manifest must also be extended with this configuration file:

Old file:

Edit the dashy.yml file of the running deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dashy
  name: dashy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashy
  strategy: {}
  template:
    metadata:
      labels:
        app: dashy
    spec:
      containers:
      - image: lissy93/dashy
        name: dashy
        resources: {}
status: {}

It should now look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dashy
  name: dashy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashy
  strategy: {}
  template:
    metadata:
      labels:
        app: dashy
    spec:
      containers:
      - image: lissy93/dashy:latest
        name: dashy
        resources: {}
        volumeMounts:
        - name: config
          mountPath: "/app/public"
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: dashy
          items:
          - key: "dashy1-conf.yml"
            path: "conf.yml"

root@k3s-n1:/home/mkellerer# kubectl apply -f dashy.yml
deployment.apps/dashy configured
root@k3s-n1:/home/mkellerer#

At the configMap: entry --> specify the name of the ConfigMap.
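
To check that the new pod really sees the custom configuration, the mounted file can be read straight from the running deployment (a quick sanity check, assuming kubectl >= 1.19 so that exec accepts a deployment reference):

kubectl exec deploy/dashy -- head -n 5 /app/public/conf.yml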

root@k3s-n1:/home/mkellerer# kubectl get cm
NAME               DATA   AGE
dashy              1      16m
kube-root-ca.crt   1      18d

root@k3s-n1:/home/mkellerer# kubectl describe cm dashy
Name: dashy
Namespace: default
Labels: <none>
Annotations: <none>

Data
====
dashy1-conf.yml:
----
---
# Page meta info, like heading, footer text and nav links
pageInfo:
  title: Dashy-Matthias
  description: Willkommen Zuhause!
  navLinks:
  - title: GitHub
    path: https://github.com/Lissy93/dashy
  - title: Documentation
    path: https://dashy.to/docs

# Optional app settings and configuration
appConfig:
  theme: colorful

# Main content - An array of sections, each containing an array of items
sections:
- name: Getting Started
  icon: fas fa-rocket
  items:
  - title: Dashy Live
    description: Development and project management links for Dashy
    icon: https://i.ibb.co/qWWpD0v/astro-dab-128.png
    url: https://live.dashy.to/
    target: newtab
  - title: GitHub
    description: Source Code, Issues and Pull Requests
    url: https://github.com/lissy93/dashy
    icon: favicon
  - title: Docs
    description: Configuring & Usage Documentation
    provider: Dashy.to
    icon: far fa-book
    url: https://dashy.to/docs
  - title: Showcase
    description: See how others are using Dashy
    url: https://github.com/Lissy93/dashy/blob/master/docs/showcase.md
    icon: far fa-grin-hearts
  - title: Config Guide
    description: See full list of configuration options
    url: https://github.com/Lissy93/dashy/blob/master/docs/configuring.md
    icon: fas fa-wrench
  - title: Support
    description: Get help with Dashy, raise a bug, or get in contact
    url: https://github.com/Lissy93/dashy/blob/master/.github/SUPPORT.md
    icon: far fa-hands-helping



BinaryData
====

Events: <none>

----------------------------------------------------------------

Now some proper monitoring (health probes) should be added as well !!
Built additionally into dashy.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dashy
  name: dashy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashy
  strategy: {}
  template:
    metadata:
      labels:
        app: dashy
    spec:
      containers:
      - image: lissy93/dashy:latest
        name: dashy
        resources: {}
        startupProbe:
          exec:
            command:
               - yarn
               - health-check
          failureThreshold: 3
          timeoutSeconds: 10
          periodSeconds: 90
          initialDelaySeconds: 40
        livenessProbe:
          exec:
            command:
              - yarn
              - health-check
          failureThreshold: 3
          timeoutSeconds: 10
          periodSeconds: 90
        readinessProbe:
          httpGet:
            port: 80
          failureThreshold: 3
          timeoutSeconds: 10
          periodSeconds: 90
        volumeMounts:
        - name: config
          mountPath: "/app/public"
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: dashy
          items:
          - key: "dashy1-conf.yml"
            path: "conf.yml"

root@k3s-n1:/home/mkellerer# kubectl apply -f dashy.yml
deployment.apps/dashy configured
root@k3s-n1:/home/mkellerer#
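
While the deployment rolls over to the new pod spec, the progress can be followed (optional) with:

kubectl rollout status deployment/dashy

It only reports success once the new pod has passed its probes and is Ready.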

root@k3s-n1:/home/mkellerer# kubectl describe pod dashy
Name:             dashy-78dfcdb6cf-vlftp
Namespace:        default
Priority:         0
Service Account:  default
Node:             k3s-n2/172.16.155.52
Start Time:       Wed, 20 Mar 2024 09:54:04 +0000
Labels:           app=dashy
                  pod-template-hash=78dfcdb6cf
Annotations:      <none>
Status:           Running
IP:               10.42.1.11
IPs:
  IP:  10.42.1.11
Controlled By:  ReplicaSet/dashy-78dfcdb6cf
Containers:
  dashy:
    Container ID:   containerd://09de4c329b50ffbd8a3540619b0d65e3b94d2c70c663f6e8e0c3efabc835adfc
    Image:          lissy93/dashy:latest
    Image ID:       docker.io/lissy93/dashy@sha256:4fabb423f22891a4840982b7a9310142c542f482a05ce59a95a2f6cac94d7886
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 20 Mar 2024 09:54:06 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [yarn health-check] delay=0s timeout=10s period=90s #success=1 #failure=3
    Readiness:      http-get http://:80/ delay=0s timeout=10s period=90s #success=1 #failure=3
    Startup:        exec [yarn health-check] delay=40s timeout=10s period=90s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /app/public from config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vzz8l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dashy
    Optional:  false
  kube-api-access-vzz8l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m4s  default-scheduler  Successfully assigned default/dashy-78dfcdb6cf-vlftp to k3s-n2
  Normal  Pulling    3m4s  kubelet            Pulling image "lissy93/dashy:latest"
  Normal  Pulled     3m3s  kubelet            Successfully pulled image "lissy93/dashy:latest" in 829ms (829ms including waiting)
  Normal  Created    3m3s  kubelet            Created container dashy
  Normal  Started    3m3s  kubelet            Started container dashy

The pod was then deleted from a second terminal session:
root@k3s-n1:~# kubectl delete pod dashy-78dfcdb6cf-vlftp
pod "dashy-78dfcdb6cf-vlftp" deleted
root@k3s-n1:~#

root@k3s-n1:/home/mkellerer# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
dashy-78dfcdb6cf-vlftp   1/1     Running   0          4m8s
root@k3s-n1:/home/mkellerer# kubectl get pods -l app=dashy
NAME                     READY   STATUS    RESTARTS   AGE
dashy-78dfcdb6cf-vlftp   1/1     Running   0          4m40s
root@k3s-n1:/home/mkellerer# kubectl get pods -l app=dashy -w
NAME                     READY   STATUS              RESTARTS   AGE
dashy-78dfcdb6cf-vlftp   1/1     Running             0          5m7s
dashy-78dfcdb6cf-vlftp   1/1     Terminating         0          6m43s
dashy-78dfcdb6cf-5kb5v   0/1     Pending             0          0s
dashy-78dfcdb6cf-5kb5v   0/1     Pending             0          0s
dashy-78dfcdb6cf-5kb5v   0/1     ContainerCreating   0          0s
dashy-78dfcdb6cf-vlftp   0/1     Terminating         0          6m45s
dashy-78dfcdb6cf-vlftp   0/1     Terminating         0          6m45s
dashy-78dfcdb6cf-vlftp   0/1     Terminating         0          6m46s
dashy-78dfcdb6cf-vlftp   0/1     Terminating         0          6m46s
dashy-78dfcdb6cf-5kb5v   0/1     Running             0          3s
dashy-78dfcdb6cf-5kb5v   0/1     Running             0          91s   --> delayed, because the pod only becomes ready after the first probe passes (initial delay 40s)
dashy-78dfcdb6cf-5kb5v   1/1     Running             0          91s

Only once the pod shows READY 1/1 does the load balancer forward requests to it !!
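
This can also be observed on the service side (an optional check): the pod's IP only appears in the endpoints list once the pod is Ready.

kubectl get endpoints dashy -w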

 
