The plumber tool allows you to read, write, relay, and tunnel data right from your CLI. While this is fine for quick one-offs, it has some drawbacks if you plan to use the relay or tunnel functionality in production or in a more serious capacity:
You will need to launch a new plumber instance for every relay and tunnel
High availability is difficult to achieve
Potential for missed data while plumber is not running (such as during a redeploy)
This might be acceptable depending on your integration needs. If plumber exists in dev to facilitate testing or to let developers tunnel data to their local workstations, it's probably fine. But if you plan to run many relays and/or many tunnels, the CLI alone is not well-suited.
To address this, we have built plumber server mode.
Plumber's server mode enables you to run any number of plumber instances as a distributed cluster. Server mode exposes a gRPC API which can be used to run and manage multiple relays and tunnels in parallel.
Plumber server can run in either of two operational modes: standalone or cluster.
Operational Modes
Standalone Mode
In standalone mode, plumber writes its config to a local JSON file (~/.batchsh/config.json by default; the location can be overridden via PLUMBER_SERVER_FOO).
Suitable for:
Local and dev environments
Low throughput (<100 events per sec)
Environments where high reliability is not required
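Because everything a standalone instance knows lives in that one JSON file, protecting it across restarts is the main operational concern. As a sketch (only the default config path comes from the docs; the backup naming and timestamp format are this example's own convention):

```shell
# Back up plumber's standalone config before a redeploy, so relay and
# tunnel definitions survive the restart.
CONFIG="$HOME/.batchsh/config.json"

# Ensure the directory exists on a fresh host.
mkdir -p "$(dirname "$CONFIG")"

if [ -f "$CONFIG" ]; then
  # Timestamped copy, e.g. config.json.20220215202147.bak
  cp "$CONFIG" "$CONFIG.$(date +%Y%m%d%H%M%S).bak"
  echo "backed up $CONFIG"
else
  echo "no standalone config found at $CONFIG"
fi
```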
Cluster Mode
In cluster mode, plumber uses a message bus (NATS) to facilitate communication between plumber instances and to store its configs. You can run any number of plumber instances; we recommend starting with three.
Suitable for:
Production environments
High throughput (1,000+ events per sec)
Environments where high reliability is required
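Cluster mode's only external dependency is a NATS server with JetStream enabled, since plumber persists its shared state on the bus. A minimal nats.conf, distilled from the full Kubernetes manifest shown later on this page (the paths and limits are illustrative):

```
# Minimal NATS config for plumber cluster mode.
# JetStream must be enabled so plumber can store its configs on the bus.
port: 4222

jetstream {
  store_dir: /data
  max_mem: 1Gi
  max_file: 5Gi
}
```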
How to Run
Standalone mode

Running plumber server without the --enable-cluster flag starts it in standalone mode. When running in Docker, you will want to mount the config file as a writable volume in the container.

Cluster mode (local)

The NATS container SHOULD have a persistent storage volume.
# Install plumber
$ brew tap batchcorp/public
...
$ brew install plumber
...
# Git clone plumber repo (to get access to its docker-compose + assets)
$ git clone git@github.com:batchcorp/plumber.git
$ cd plumber
# Launch a NATS container
$ docker-compose up -d natsjs
# Launch plumber in cluster mode
$ plumber server --enable-cluster
INFO[0000]
█▀█ █ █ █ █▀▄▀█ █▄▄ █▀▀ █▀█
█▀▀ █▄▄ █▄█ █ ▀ █ █▄█ ██▄ █▀▄
INFO[0000] starting plumber server in 'cluster' mode... pkg=plumber
INFO[0015] plumber server started pkg=plumber
Cluster mode (Docker)
# Git clone plumber repo (to get access to its docker-compose + assets)
$ git clone git@github.com:batchcorp/plumber.git
$ cd plumber
# Launch a NATS container
$ docker-compose up -d natsjs
# Launch a plumber container that points to your NATS instance
$ docker run --name plumber-server -p 9090:9090 \
--network plumber_default \
-e PLUMBER_SERVER_NATS_URL=nats://natsjs \
-e PLUMBER_SERVER_ENABLE_CLUSTER=true \
batchcorp/plumber:latest
{"level":"info","msg":"starting plumber server in 'cluster' mode...","pkg":"plumber","time":"2022-02-15T20:21:47Z"}
{"level":"info","msg":"plumber server started","pkg":"plumber","time":"2022-02-15T20:22:02Z"}
Cluster mode (Kubernetes)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-config
  namespace: default
  labels:
    app.kubernetes.io/name: nats
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    http: 8222
    server_name: $POD_NAME
    jetstream {
      max_mem: 1Gi
      store_dir: /data
      max_file: 5Gi
    }
    lame_duck_duration: 120s
---
apiVersion: v1
kind: Service
metadata:
  name: nats
  namespace: default
  labels:
    app.kubernetes.io/name: nats
spec:
  selector:
    app.kubernetes.io/name: nats
  clusterIP: None
  ports:
    - name: client
      port: 4222
    - name: cluster
      port: 6222
    - name: monitor
      port: 8222
    - name: metrics
      port: 7777
    - name: leafnodes
      port: 7422
    - name: gateways
      port: 7522
---
apiVersion: v1
kind: Service
metadata:
  name: plumber-cluster
  labels:
    app.kubernetes.io/name: plumber-cluster
spec:
  type: ClusterIP
  ports:
    - port: 9090
      targetPort: 9090
      protocol: TCP
      name: grpc-api
  selector:
    app.kubernetes.io/name: plumber-cluster
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-box
  namespace: default
  labels:
    app: nats-box
    chart: nats-0.13.0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nats-box
  template:
    metadata:
      labels:
        app: nats-box
    spec:
      volumes:
      containers:
        - name: nats-box
          image: natsio/nats-box:0.8.1
          imagePullPolicy: IfNotPresent
          resources: null
          env:
            - name: NATS_URL
              value: nats
          command:
            - "tail"
            - "-f"
            - "/dev/null"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plumber-cluster
  labels:
    app.kubernetes.io/name: plumber-cluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: plumber-cluster
  template:
    metadata:
      labels:
        app.kubernetes.io/name: plumber-cluster
    spec:
      containers:
        - name: plumber-cluster
          image: "batchcorp/plumber:v1.4.0"
          imagePullPolicy: IfNotPresent
          command: ["/plumber-linux", "server"]
          ports:
            - containerPort: 9090
          env:
            - name: PLUMBER_SERVER_CLUSTER_ID
              value: "7EB6C7FB-9053-41B4-B456-78E64CF9D393"
            - name: PLUMBER_SERVER_ENABLE_CLUSTER
              value: "true"
            - name: PLUMBER_SERVER_NATS_URL
              value: "nats://nats.default.svc.cluster.local:4222"
            - name: PLUMBER_SERVER_USE_TLS
              value: "false"
            - name: PLUMBER_SERVER_NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats
  namespace: default
  labels:
    app.kubernetes.io/name: nats
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nats
  replicas: 1
  serviceName: nats
  podManagementPolicy: Parallel
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "7777"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: nats
    spec:
      # Common volumes for the containers.
      volumes:
        - name: config-volume
          configMap:
            name: nats-config
        # Local volume shared with the reloader.
        - name: pid
          emptyDir: {}
      # Required to be able to HUP signal and apply config
      # reload to the server without restarting the pod.
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 120
      containers:
        - name: nats
          image: nats:2.7.2-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4222
              name: client
            - containerPort: 7422
              name: leafnodes
            - containerPort: 7522
              name: gateways
            - containerPort: 6222
              name: cluster
            - containerPort: 8222
              name: monitor
            - containerPort: 7777
              name: metrics
          command:
            - "nats-server"
            - "--config"
            - "/etc/nats-config/nats.conf"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: SERVER_NAME
              value: $(POD_NAME)
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CLUSTER_ADVERTISE
              value: $(POD_NAME).nats.$(POD_NAMESPACE).svc.cluster.local
          volumeMounts:
            - name: config-volume
              mountPath: /etc/nats-config
            - name: pid
              mountPath: /var/run/nats
            - name: nats-js-pvc
              mountPath: /data
          livenessProbe:
            httpGet:
              path: /
              port: 8222
            initialDelaySeconds: 10
            timeoutSeconds: 5
            periodSeconds: 60
            successThreshold: 1
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /
              port: 8222
            initialDelaySeconds: 10
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 30
          # Gracefully stop NATS Server on pod deletion or image upgrade.
          lifecycle:
            preStop:
              exec:
                # Using the alpine based NATS image, we add an extra sleep that is
                # the same amount as the terminationGracePeriodSeconds to allow
                # the NATS Server to gracefully terminate the client connections.
                command:
                  - "/bin/sh"
                  - "-c"
                  - "nats-server -sl=ldm=/var/run/nats/nats.pid"
        - name: reloader
          image: natsio/nats-server-config-reloader:0.6.2
          imagePullPolicy: IfNotPresent
          resources: null
          command:
            - "nats-server-config-reloader"
            - "-pid"
            - "/var/run/nats/nats.pid"
            - "-config"
            - "/etc/nats-config/nats.conf"
          volumeMounts:
            - name: config-volume
              mountPath: /etc/nats-config
            - name: pid
              mountPath: /var/run/nats
        - name: metrics
          image: natsio/prometheus-nats-exporter:0.9.1
          imagePullPolicy: IfNotPresent
          args:
            - -connz
            - -routez
            - -subz
            - -varz
            - -prefix=nats
            - -use_internal_server_id
            - -jsz=all
            - http://localhost:8222/
          ports:
            - containerPort: 7777
              name: metrics
  volumeClaimTemplates:
    - metadata:
        name: nats-js-pvc
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: "nats-test-request-reply"
  labels:
    app.kubernetes.io/name: nats
  annotations:
    "hook": test
spec:
  containers:
    - name: nats-box
      image: synadia/nats-box
      env:
        - name: NATS_HOST
          value: nats
      command:
        - /bin/sh
        - -ec
        - |
          nats reply -s nats://$NATS_HOST:4222 'name.>' --command "echo 1" &
        - |
          "&&"
        - |
          name=$(nats request -s nats://$NATS_HOST:4222 name.test '' 2>/dev/null)
        - |
          "&&"
        - |
          [ $name = test ]
  restartPolicy: Never
All of the following environment variables are only used if plumber is launched in cluster mode.
| Environment Variable | Description |
| --- | --- |
| PLUMBER_SERVER_ENABLE_CLUSTER | Set to "true" to enable cluster mode. If not set, plumber will run in standalone mode. |
| PLUMBER_SERVER_CLUSTER_ID | All plumber instances that are part of the same cluster must share the same ID. |
| PLUMBER_SERVER_NODE_ID | If you run plumber as a stateful set, you can assign each node its own node ID, which simplifies debugging, parsing logs, etc. If unset, it will be automatically generated at launch. |
| PLUMBER_SERVER_NATS_URL | Must be set to your NATS server's address. |
| PLUMBER_SERVER_AUTH_TOKEN | Used for authenticating with the plumber instance. Set to batchcorp by default. |
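Before launching in cluster mode, it can be worth verifying that the required variables are actually set. A minimal pre-flight sketch (the values below are illustrative; the UUID is the one used in the docs' Kubernetes manifest):

```shell
# Pre-flight check: fail loudly if any cluster-mode variable is missing.
export PLUMBER_SERVER_ENABLE_CLUSTER="true"
export PLUMBER_SERVER_CLUSTER_ID="7EB6C7FB-9053-41B4-B456-78E64CF9D393"
export PLUMBER_SERVER_NATS_URL="nats://localhost:4222"

missing=0
for v in PLUMBER_SERVER_ENABLE_CLUSTER PLUMBER_SERVER_CLUSTER_ID PLUMBER_SERVER_NATS_URL; do
  if [ -z "$(eval echo "\$$v")" ]; then
    echo "missing required variable: $v"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "cluster env looks OK"
```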
Metrics
Plumber exposes various Prometheus metrics via its internal HTTP server: