Sidecar in a Kubernetes Cluster

Jackie
Feb 27, 2020

A service mesh is a design that provides a dedicated infrastructure layer for service-to-service concerns within a cloud deployment. The sidecar is one example of such an implementation.

A sidecar, as its name suggests, is a decoupled component attached to other microservices to handle cross-cutting concerns. One example is a volume mount shared across microservices.

As mentioned in my other post, Mount S3 as Share Drive, being able to access S3 for CRUD operations through the filesystem is a great feature. A common approach is to embed the volume mount into each container or pod that needs S3 access, as sketched below.
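Concretely, embedding means each container runs the FUSE mount itself, typically at startup. A minimal sketch using s3fs-fuse, where the bucket name is illustrative and credentials are assumed to come from an IAM role:

# Mount the bucket at /s3 inside the container (bucket name is hypothetical).
s3fs my-bucket /s3 -o iam_role=auto -o allow_other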

However, there is a security concern with this.

The Linux capability needed to mount the FUSE drive is `SYS_ADMIN` at a minimum. So to run this as a single container, we need to provide:

docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse s3-sidecar bash
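The --device /dev/fuse flag is needed because FUSE requires access to the fuse device on top of the capability. Inside the container you can confirm the capability landed, assuming capsh from libcap is present in the image:

# Should list cap_sys_admin among the current capabilities.
capsh --print | grep -i sys_admin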

To run it with docker-compose:

s3-sidecar:
  restart: on-failure
  image: s3-sidecar
  init: true
  build:
    context: s3-sidecar
    target: dev
  environment:
    - DEPLOYMENT=STAGING
  privileged: true
  cap_add:
    - SYS_ADMIN # This is needed for mounting the volume
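One note on the compose file: privileged: true already grants every capability, SYS_ADMIN included, so the cap_add entry is the one to keep if you later drop privileged mode. Bringing the service up is the usual flow:

docker-compose up -d s3-sidecar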

To run it in Kubernetes:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  selector:
    matchLabels:
      app: s3-sidecar
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: s3-sidecar # must match the selector above
    spec:
      containers:
        - image: {{ .Values.aws.env }}s3-sidecar
          imagePullPolicy: IfNotPresent
          name: s3-sidecar
          volumeMounts:
            - name: s3
              mountPath: /s3
              mountPropagation: Bidirectional
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
      volumes: # hostPath volume backing the /s3 mount point
        - name: s3
          hostPath:
            path: /s3
      restartPolicy: Always
status: {}
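Since the image field is Helm-templated, the manifest gets rendered before it is applied. A minimal sketch, assuming the manifest lives in a chart directory named s3-sidecar and that <registry-prefix> stands in for your registry value:

# Render the chart locally and apply the resulting manifest.
helm template ./s3-sidecar --set aws.env=<registry-prefix>/ | kubectl apply -f -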

All of these expose the elevated privilege to the container and pod. With this access, an experienced developer could bypass the designated location (/s3 in the pod above) and write to or delete other files within the host's VFS.

Examples:

https://www.exploit-db.com/exploits/47147

https://kubernetes.io/blog/2018/04/04/fixing-subpath-volume-vulnerability/

Unless the permission required for FUSE mounting is reduced upstream, it's important to segregate the component doing this mounting.

The sidecar pattern implements this segregation. Instead of embedding the mount into each container or pod, we create a dedicated sidecar that performs the mount at a single point. We can apply different security controls to this single component to keep it from being exposed or exploited, while the containers or pods that need S3 access get read-only access to the sidecar's volume.

Here is the implementation:

The sidecar is granted the `SYS_ADMIN` capability it needs to perform the mounting.

Note: the mountPropagation should be Bidirectional, so that the mount created inside the sidecar propagates back to the host and content updates flow back into S3.

kind: Service
apiVersion: v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    app: s3-sidecar
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: s3-sidecar
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: s3-sidecar # must match the selector above
    spec:
      containers:
        - image: {{ .Values.aws.env }}s3-sidecar
          imagePullPolicy: IfNotPresent
          name: s3-sidecar
          resources: {}
          env:
            - name: DEPLOYMENT
              value: -{{ .Values.deployment }}
          volumeMounts:
            - name: s3
              mountPath: /s3
              mountPropagation: Bidirectional
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
      restartPolicy: Always
      volumes:
        - name: s3
          hostPath:
            path: /s3
status: {}
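Once the sidecar is running, the propagation can be verified end to end: a file written through the sidecar's FUSE mount should show up on the host path and, from there, in every consuming pod. A quick check, where the consumer pod name is hypothetical:

# Write through the sidecar's Bidirectional mount ...
kubectl exec deploy/s3-sidecar -- touch /s3/probe.txt
# ... then confirm it is visible from a consuming pod.
kubectl exec my-consumer-pod -- ls -l /s3/probe.txt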

For each individual container and pod that needs the access:

from kubernetes import client

vol = client.V1Volume(
    name="s3",
    host_path=client.V1HostPathVolumeSource(path="/s3"),
)
s3 = client.V1VolumeMount(
    name="s3",
    mount_path="/s3",
    mount_propagation="HostToContainer",
    read_only=True,
)
client.AppsV1Api().create_namespaced_replica_set(
    ...
    client.V1ReplicaSet(
        ...
        spec=client.V1ReplicaSetSpec(
            ...
            template=client.V1PodTemplateSpec(
                ...
                spec=client.V1PodSpec(
                    volumes=[vol],
                    containers=[
                        client.V1Container(
                            ...
                            volume_mounts=[s3],
                            image_pull_policy="IfNotPresent",
                        )
                    ],
                ),
            ),
        ),
    ),
)

Note: we limit the mountPropagation to HostToContainer, so that writes and updates landing in the mount point or its subdirectories become visible in these containers, while the containers themselves cannot propagate anything back toward S3.
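And since the consumer mounts are read_only, any write attempt from those containers should be rejected outright, which is easy to confirm (pod name again hypothetical):

# Expected to fail with "Read-only file system".
kubectl exec my-consumer-pod -- touch /s3/should-fail.txt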

This is the topology:

[Topology diagram: a single privileged sidecar performs the S3 mount; the other pods consume the shared volume read-only]
