Possible false positive? kubernetes:S6865

I’m using SonarCloud for a homelab that mostly uses Flux to install various Helm charts. In my CI pipeline (GitHub Actions), I generate the final Kubernetes manifests by running helm template for each chart and splitting the output into one file per kind/name using yq:

helm template "$release" "$chart" "${helm_flags[@]}" |
  yq -s '"'"$out_dir/$release/"'" + (.kind | downcase) + "/" + (.metadata.name | sub(":", "-")) + ".yaml"'

This effectively creates files like helm/<namespace>/<release>/<kind>/<name>.yaml, with one resource per file.
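For the homepage release shown further down, for example, that yields paths roughly like this (illustrative; the actual prefix depends on $out_dir):

helm/apps/homepage/serviceaccount/homepage.yaml
helm/apps/homepage/secret/homepage-sa-token.yaml
helm/apps/homepage/configmap/homepage.yaml
helm/apps/homepage/clusterrole/homepage.yaml
helm/apps/homepage/clusterrolebinding/homepage.yaml
helm/apps/homepage/service/homepage.yaml
helm/apps/homepage/deployment/homepage.yaml
helm/apps/homepage/ingress/homepage.yaml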

The problem is that my Deployments keep getting flagged by the kubernetes:S6865 rule, even though I’m pretty sure the service account is bound to RBAC.

I suspect that SonarQube expects the service account, role, and binding to all live in the same file, but I’m not sure. Can you confirm that, or suggest how to structure these generated files so that SonarQube can properly make the connection between the service account and the role binding?

To be very clear, these files are generated “at build time”, so I’m pretty flexible about where I put them, as long as SonarQube is happy. However, a directory structure like namespace/chart/kind/name.yaml makes it easy to drill down in the SonarQube UI, which is why I went with that initially.
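For example, would it want both ends of the link in a single file, something like this hypothetical sa-rbac.yaml (trimmed down from the render below)?

apiVersion: v1
kind: ServiceAccount
metadata:
  name: homepage
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: homepage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: homepage
subjects:
  - kind: ServiceAccount
    name: homepage
    namespace: apps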


EDIT: Generating one file per kind, e.g. helm/$namespace/$release/$kind.yaml, doesn’t seem to fix the issue. Neither does generating a single helm/$namespace/$release.yaml. (Or maybe I just don’t understand where the issue is?)
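For completeness, those two variants look roughly like this (a sketch, assuming mikefarah's yq v4, same as the -s split above):

# One file per kind: group all documents of the same kind together.
rendered="$(helm template "$release" "$chart" "${helm_flags[@]}")"
mkdir -p "$out_dir/$release"
for kind in $(yq -N '.kind' <<<"$rendered" | sort -u); do
  kind="$kind" yq 'select(.kind == strenv(kind))' <<<"$rendered" \
    > "$out_dir/$release/$(tr '[:upper:]' '[:lower:]' <<<"$kind").yaml"
done

# One file per release: no splitting at all.
printf '%s\n' "$rendered" > "$out_dir/$release.yaml"

Anyway, here is an example Helm render: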

---
# Source: homepage/templates/common.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: homepage
  namespace: apps
  labels:
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: homepage
    app.kubernetes.io/version: v1.2.0
    helm.sh/chart: homepage-2.1.0
secrets:
  - name: homepage-sa-token
---
# Source: homepage/templates/common.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: homepage-sa-token
  labels:
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: homepage
    app.kubernetes.io/version: v1.2.0
    helm.sh/chart: homepage-2.1.0
  annotations:
    kubernetes.io/service-account.name: homepage
---
# Source: homepage/templates/common.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: homepage
  labels:
    helm.sh/chart: homepage-2.1.0
    app.kubernetes.io/name: homepage
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/version: "v1.2.0"
    app.kubernetes.io/managed-by: Helm
data:
  bookmarks.yaml: ""
  docker.yaml: ""
  kubernetes.yaml: |
    ingress: true
    mode: cluster
  services.yaml: ""
  settings.yaml: |
    color: gray
    layout:
      Cluster Management:
        columns: 4
        icon: kubernetes.svg
        style: row
    theme: dark
    title: Dornhaus
  widgets.yaml: |
    - greeting:
        text: Dornhaus
        text_size: 4xl
    - kubernetes:
        cluster:
          cpu: true
          label: /locker/
          memory: true
          show: true
          showLabel: true
        nodes:
          cpu: true
          memory: true
          show: true
          showLabel: true
---
# Source: homepage/templates/common.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: homepage
  labels:
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: homepage
    app.kubernetes.io/version: v1.2.0
    helm.sh/chart: homepage-2.1.0
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - list
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
  - apiGroups:
      - traefik.containo.us
      - traefik.io
    resources:
      - ingressroutes
    verbs:
      - get
      - list
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - httproutes
      - gateways
    verbs:
      - get
      - list
  - apiGroups:
      - metrics.k8s.io
    resources:
      - nodes
      - pods
    verbs:
      - get
      - list
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions/status
    verbs:
      - get
---
# Source: homepage/templates/common.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: homepage
  labels:
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: homepage
    app.kubernetes.io/version: v1.2.0
    helm.sh/chart: homepage-2.1.0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: homepage
subjects:
  - kind: ServiceAccount
    name: homepage
    namespace: apps
---
# Source: homepage/templates/common.yaml
apiVersion: v1
kind: Service
metadata:
  name: homepage
  labels:
    app.kubernetes.io/service: homepage
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: homepage
    app.kubernetes.io/version: v1.2.0
    helm.sh/chart: homepage-2.1.0
  annotations:
spec:
  type: ClusterIP
  ports:
    - port: 3000
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/name: homepage
---
# Source: homepage/templates/common.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homepage
  namespace: apps
  labels:
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: homepage
    app.kubernetes.io/version: v1.2.0
    helm.sh/chart: homepage-2.1.0
  annotations:
    values-checksum: 3438c16b5d315e1b02ed0691cbc388774303e920b67a84e29930bcb123d73d17
spec:
  revisionHistoryLimit: 3
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: homepage
      app.kubernetes.io/instance: homepage
  template:
    metadata:
      annotations:
        values-checksum: 3438c16b5d315e1b02ed0691cbc388774303e920b67a84e29930bcb123d73d17
        checksum/secrets: 4141e6981f3b767e75a4e744858b9ff414dba5d0ef6afd761f7700061fb6e32e
      labels:
        app.kubernetes.io/name: homepage
        app.kubernetes.io/instance: homepage
    spec:
      serviceAccountName: homepage
      automountServiceAccountToken: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
        seccompProfile:
          type: RuntimeDefault
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      containers:
        - name: homepage
          image: ghcr.io/gethomepage/homepage:v1.4.4
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
          env:
            - name: HOMEPAGE_ALLOWED_HOSTS
              value: dorn.haus
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          volumeMounts:
            - name: homepage-base-config
              mountPath: /app/config
            - name: homepage-config
              subPath: bookmarks.yaml
              mountPath: /app/config/bookmarks.yaml
            - name: homepage-config
              subPath: docker.yaml
              mountPath: /app/config/docker.yaml
            - name: homepage-config
              subPath: kubernetes.yaml
              mountPath: /app/config/kubernetes.yaml
            - name: homepage-config
              subPath: services.yaml
              mountPath: /app/config/services.yaml
            - name: homepage-config
              subPath: settings.yaml
              mountPath: /app/config/settings.yaml
            - name: homepage-config
              subPath: widgets.yaml
              mountPath: /app/config/widgets.yaml
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 0
            periodSeconds: 10
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 0
            periodSeconds: 10
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          startupProbe:
            failureThreshold: 30
            initialDelaySeconds: 0
            periodSeconds: 5
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 200m
              ephemeral-storage: 128Mi
              memory: 256Mi
            requests:
              cpu: 200m
              ephemeral-storage: 128Mi
              memory: 256Mi
      volumes:
        - name: homepage-base-config
          emptyDir: {}
        - name: homepage-config
          configMap:
            name: homepage
---
# Source: homepage/templates/common.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: homepage
  labels:
    app.kubernetes.io/instance: homepage
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: homepage
    app.kubernetes.io/version: v1.2.0
    helm.sh/chart: homepage-2.1.0
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "dorn.haus"
      secretName: "dorn.haus-tls"
  rules:
    - host: "dorn.haus"
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: homepage
                port:
                  number: 3000

So it turns out that switching the RBAC from ClusterRole + ClusterRoleBinding to Role + RoleBinding resolves the issue, but in this case I would really rather keep the ClusterRoleBinding.
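For reference, the variant that makes the finding go away looks roughly like this (a sketch derived from the render above, with most of the rules trimmed):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: homepage
  namespace: apps
rules:
  # Trimmed. Note that a namespaced Role cannot grant access to
  # cluster-scoped resources like nodes or other namespaces, which
  # is exactly why I'd prefer to keep the ClusterRole setup.
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, list]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: homepage
  namespace: apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: homepage
subjects:
  - kind: ServiceAccount
    name: homepage
    namespace: apps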

Hey @attilaolah

Thanks for the report and for the analysis. I think we already have a ticket open for this here: SONARIAC-2066

Have a look!


Thanks Colin! That ticket also suggests that not specifying the namespace at all is a workaround for now (“or both are equally inexistent”), so I’ll try that. Ironically, I only patched the namespace in to fix a SonarQube finding in the first place.
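In other words, something like this (a sketch, with no explicit namespace on either side):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: homepage  # no namespace here...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: homepage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: homepage
subjects:
  - kind: ServiceAccount
    name: homepage  # ...and none here either, so both are "equally inexistent"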

I think we used to call that “Sonar-Driven Development”: changing your code just to satisfy SonarQube. We don’t love that.

I’ve linked this thread to the ticket, which helps tickets like that gain traction. :smiley:

