Set up a simple Wiremock service in Kubernetes

Wiremock is a great tool for many things, especially black-box testing. Here I'm going to show a small and simple way to set up a Wiremock instance in Kubernetes.

First we need a Deployment of the Wiremock application:

# Deploy wiremock to Kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiremock
  labels:
    app: wiremock
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiremock
  template:
    metadata:
      labels:
        app: wiremock
    spec:
      containers:
        - name: wiremock
          image: rodolpheche/wiremock
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: mappings
              mountPath: /home/wiremock/mappings
            - name: files
              mountPath: /home/wiremock/__files
      volumes:
        - name: mappings
          configMap:
            name: wiremock-mappings
        - name: files
          configMap:
            name: wiremock-files

Then we need a Service to expose Wiremock to the rest of the cluster:

apiVersion: v1
kind: Service
metadata:
  name: wiremock
spec:
  selector:
    app: wiremock
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Then two ConfigMaps, one for the stub mappings and one for the response body files:

apiVersion: v1
kind: ConfigMap
metadata:
  name: wiremock-mappings
data:
    mappings.json: |
        {
            "request": {
                "method": "GET",
                "url": "/api/v1/hello"
            },
            "response": {
                "status": 200,
                "bodyFileName": "hello.json",
                "headers": {
                    "Content-Type": "application/json"
                }
            }
        }
        
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: wiremock-files
data:
    hello.json: |
        {
        "Hello": "World"
        }
        

And finally, a call to the mock to verify that everything works as expected:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: check-wiremock
  labels:
    app: check-wiremock
spec:
  replicas: 1
  selector:
    matchLabels:
      app: check-wiremock
  template:
    metadata:
      labels:
        app: check-wiremock
    spec:
      containers:
        - name: check-wiremock
          image: cosmintitei/bash-curl
          imagePullPolicy: Always
          command: ["/bin/sh", "-c", "while true; do date; curl -sS wiremock:8080/api/v1/hello; sleep 10; done"]

You should now see the payload from the Wiremock application every ten seconds in the check-wiremock application's log.
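
If you want to follow along from the command line you can, for example, apply the manifests and then tail the log of the check deployment (the file name wiremock.yaml is just an example for a file holding all the manifests above):

kubectl apply -f wiremock.yaml
kubectl logs -f deployment/check-wiremock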

Tested in Minikube v1.29.0

One keystore, many applications in a Kubernetes environment

Mutual TLS (mTLS) is a good way to secure your sensitive information when it travels over the Internet. One drawback is that certificates need to be renewed every now and then, and if you have many applications using the same certificate chain, e.g. a company with many microservices that handle sensitive information, you often find yourself needing to change the key store in many applications every time the certificate reaches its expiry. One way to handle this in a Kubernetes environment is to have all the microservices use the same key store via a Secret or ConfigMap object.

Here is how to set it up:
1. Create a Secret (or ConfigMap) with the key stores you need (I’ve also added a trust store here):

kubectl create secret generic shared-trust-and-key-stores --from-file=keystore.p12 --from-file=truststore.jks
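
To double-check that both files ended up in the Secret you can describe it:

kubectl describe secret shared-trust-and-key-stores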

2. (Optional) Create a Secret to hold the pass phrases for the key stores:

kubectl create secret generic shared-trust-and-key-store-credentials --from-literal=truststore_password=secret1 --from-literal=key_password=secret2 --from-literal=keystore_password=secret3

3. For every application, set up a volume and mount the Secret (or ConfigMap) into that volume:

...
containers:
  - name: mypod
    image: myimage
    volumeMounts:
      - name: shared-keystores
        mountPath: "/etc/ssl"
volumes:
  - name: shared-keystores
    secret:
      secretName: shared-trust-and-key-stores
...

4. (Optional) Map the pass phrase Secret as environment variables in the pod:

...
containers:
  - name: mypod
    image: myimage
    env:
    - name: TRUSTSTORE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: shared-trust-and-key-store-credentials
          key: truststore_password
    - name: KEY_PASSWORD
      valueFrom:
        secretKeyRef:
          name: shared-trust-and-key-store-credentials
          key: key_password
    - name: KEYSTORE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: shared-trust-and-key-store-credentials
          key: keystore_password
...

Now, in our application, all we have to do is point it to /etc/ssl for our key stores:
* key store: /etc/ssl/keystore.p12
* trust store: /etc/ssl/truststore.jks

and if you are using the shared-trust-and-key-store-credentials Secret you will find the pass phrases in environment variables named:
* KEY_PASSWORD
* KEYSTORE_PASSWORD
* TRUSTSTORE_PASSWORD
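
As an illustration, here is a minimal sketch (assuming a plain Java application; the class name is made up for the example) of how the mounted stores and pass phrases could be picked up:

import java.io.FileInputStream;
import java.security.KeyStore;

public class SharedStores {
    public static void main(String[] args) throws Exception {
        // Key store mounted from the shared-trust-and-key-stores Secret
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("/etc/ssl/keystore.p12")) {
            keyStore.load(in, System.getenv("KEYSTORE_PASSWORD").toCharArray());
        }

        // Trust store mounted from the same Secret
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("/etc/ssl/truststore.jks")) {
            trustStore.load(in, System.getenv("TRUSTSTORE_PASSWORD").toCharArray());
        }

        System.out.println("Key entries: " + keyStore.size()
                + ", trusted certificates: " + trustStore.size());
    }
}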

When the time comes to renew the certificates, you only have to make the change in one place.
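
For example, with the set-up above a renewed key store can be rolled out by recreating the Secret from the new files (same file names as in step 1):

kubectl create secret generic shared-trust-and-key-stores --from-file=keystore.p12 --from-file=truststore.jks --dry-run=client -o yaml | kubectl apply -f -

Note that applications that only read the key store at start-up may still need to be restarted to pick up the new files.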

Tested on VMware Tanzu v1.23.8

Camel-K: Add multiple custom classes via .jar

I came across a use case where we needed a MapForce mapping in our Camel-K flow. MapForce-generated Java code consists of many classes, and it becomes overly cumbersome to add them all to the kamel run command. To solve this, we put all the MapForce-generated code into a .jar file, added it to our cluster, and referenced it in our kamel run command.

1. First we made a .jar file out of the generated Java code. For this we used the jar command, which can be found in most Java Development Kits:

jar cvf MapFormatAToFormatB.jar com/ se/

This creates a .jar file called MapFormatAToFormatB.jar with the contents of the generated code in com/ and se/. MapForce puts the general classes into the com/ package and the custom classes into your own packages, in our case se/.
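
To verify what ended up in the archive you can list its contents:

jar tf MapFormatAToFormatB.jar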

2. Create a ConfigMap to hold our .jar so that the Camel-K operator can use it:

kubectl create configmap map-formata-to-formatb --from-file=MapFormatAToFormatB.jar

3. Reference the ConfigMap in your kamel run command and add a trait to put it on the classpath:

kamel run src/main/java/myApp --resource configmap:map-formata-to-formatb -t jvm.classpath=/etc/camel/resources/map-formata-to-formatb/MapFormatAToFormatB.jar

That's it. In our code we reference these classes the same way we would have if they were part of our code base:

import com.altova.io.Input;
import com.altova.io.StringInput;
import com.altova.io.StringOutput;
import se.myorg.integration.MapFormatAToFormatB;

import org.apache.camel.builder.RouteBuilder;

public class AmbuReg extends RouteBuilder {
    @Override
    public void configure() throws Exception {
       ...
    }
}

Tested on Camel-K-Operator v1.11.1 and Kubernetes v1.24.11