Set Up Jaeger Operator with OpenSearch on Kubernetes
“Jaeger means ‘hunter’ in German”
This medium blog covers how to set up the Jaeger Operator with OpenSearch (using demo certificates). We will use the HotROD application for the demo, and then view the traces in OpenSearch Dashboards.
The entire demo will be done on minikube, so please make sure to have it installed.
https://minikube.sigs.k8s.io/docs/start/
You will also need kubectl and helm → https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
We will walk through:
- Opensearch Setup
- Opensearch Dashboards Setup
- Jaeger Setup
- HotRod Setup
This is how the setup will look:
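At a high level: the HotROD app sends traces to the Jaeger Collector, the collector writes them to OpenSearch, and both Jaeger Query and OpenSearch Dashboards read them back out.
HotROD → Jaeger Collector → OpenSearch ← Jaeger Query / OpenSearch Dashboards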
OpenSearch Setup -
OpenSearch is a free and open-source search and analytics platform. It’s like a super powerful magnifying glass for your data, allowing you to explore and analyze vast amounts of information. It’s great for things like real-time application monitoring, log analysis, and website search.
OpenSearch was created by AWS as a fork of Elasticsearch, back when Elastic changed its license in 2021. That episode left a scar on the open-source community: organizations now have to be wary of what license their open-source tools carry, and of whether those tools will keep getting community support. It's best to stick to projects backed by foundations like the Linux Foundation and the CNCF.
Luckily, both projects here are in good hands: Jaeger is a CNCF graduated project, and OpenSearch is now governed by the OpenSearch Software Foundation under the Linux Foundation.
We will add the OpenSearch repository to Helm. This repository contains the Helm charts provided by the OpenSearch project for configuring OpenSearch on Kubernetes.
After that, we pull the Helm chart named 'opensearch' from the 'opensearch' repository.
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update
helm search repo opensearch
helm pull opensearch/opensearch
Extract the tar file that is saved after you run the above commands. Now we have to change the values file. Copy the default values.yaml file to your working directory.
(It is a good idea not to edit the default values.yaml file itself; that will help us debug issues with diff later on. But if you are feeling rebellious today, go ahead :)
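Extraction is just untarring the chart archive helm saved (the exact filename includes the chart version, so the wildcard below is an assumption):
tar -xzf opensearch-*.tgz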
cp opensearch/values.yaml os.yaml
Set a strong initial password; it is mandatory from OpenSearch 2.12.0 onwards. Make these changes to the values file for the demo config.
singleNode: true
config:
  opensearch.yml: |
    plugins.security.http.enabled: false
extraEnvs:
  - name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
    value: Sl4Ij+H@\k4}0Vn # Feel free to set any other strong password
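Since the default values.yaml is untouched, you can always see exactly what you changed later on (this is the diff debugging mentioned above):
diff opensearch/values.yaml os.yaml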
Use demo certificates only for demo purposes; do not use them on production systems.
kubectl create ns observe
helm install opensearch opensearch/ -f os.yaml -n observe
This is what your output should look like:
$ kubectl get po -n observe
NAME READY STATUS RESTARTS AGE
opensearch-cluster-master-0 1/1 Running 0 6m36s
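Optionally, sanity-check the cluster itself with curl. This assumes the chart's default service name (opensearch-cluster-master); the -k flag skips certificate verification because we are on demo certificates:
kubectl port-forward svc/opensearch-cluster-master -n observe 9200:9200
# In a second terminal:
curl -ku 'admin:Sl4Ij+H@\k4}0Vn' https://localhost:9200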
OpenSearch Dashboards Setup -
Now, let's get to setting up OpenSearch Dashboards…
You should already have the opensearch repo configured with Helm, so just run:
helm pull opensearch/opensearch-dashboards # Extract the Tar File
cp opensearch-dashboards/values.yaml osd.yaml
# There is nothing to configure here, just install the helm chart
helm install dashboards opensearch-dashboards/ -f osd.yaml -n observe
We are solid if your output looks like this:
$ kubectl get po -n observe
NAME READY STATUS RESTARTS AGE
dashboards-opensearch-dashboards-89dc54fc5-66rl8 1/1 Running 0 7m41s
opensearch-cluster-master-0 1/1 Running 0 13m
$ kubectl get svc -n observe
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboards-opensearch-dashboards ClusterIP 10.107.31.112 <none> 5601/TCP 64m
opensearch-cluster-master ClusterIP 10.102.152.179 <none> 9200/TCP,9300/TCP 70m
opensearch-cluster-master-headless ClusterIP None <none> 9200/TCP,9300/TCP,9600/TCP 70m
Let's test it out. We use the kubectl port-forward command to forward localhost:30001 to the service's port 5601.
In the following command we redirect traffic to the service rather than to a pod directly, so we don't have to look up an individual pod name. (Note that port-forward still tunnels to a single pod picked from behind the service; it does not load-balance across pods.) But nothing is stopping you from port-forwarding to a pod.
kubectl port-forward svc/dashboards-opensearch-dashboards -n observe 30001:5601
# After this, visit http://127.0.0.1:30001 in your browser
# username: admin
# password: Sl4Ij+H@\k4}0Vn
Jaeger Setup -
Since version 1.31, the Jaeger Operator uses webhooks to validate Jaeger custom resources (CRs). This requires cert-manager to be installed.
Install the cert-manager manifests; save the file somewhere if you want to edit it. docs-link
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml
It will install the cert-manager manifests in the cert-manager namespace.
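Before moving on, you can confirm cert-manager is running; the standard manifest creates three deployments (cert-manager, cert-manager-cainjector, and cert-manager-webhook):
kubectl get po -n cert-manager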
Now, installing Jaeger in a different namespace requires some editing. The doc tells us how to install in the observability namespace: create the observability namespace before applying the jaeger-operator manifest file. This is because observability is the namespace hard-coded into the jaeger-operator manifest files as the installation namespace. We can simply edit these files to change that.
Start by getting the file, then find-and-replace observability with observe in the manifest using sed before applying the YAML file.
wget https://github.com/jaegertracing/jaeger-operator/releases/download/v1.57.0/jaeger-operator.yaml
sed -i 's/observability/observe/g' jaeger-operator.yaml
kubectl apply -f jaeger-operator.yaml
$ kubectl get po -n observe
NAME READY STATUS RESTARTS AGE
dashboards-opensearch-dashboards-89dc54fc5-66rl8 1/1 Running 0 10h
jaeger-operator-786c87cb64-gsc6n 2/2 Running 0 15m
opensearch-cluster-master-0 1/1 Running 0 10h
The Jaeger Operator is now set up in our cluster, but we don't have a Jaeger instance yet; the operator simply manages all our Jaeger CRs. Installing the operator also installed a Custom Resource Definition (CRD) for the Jaeger kind. We now have to create a Jaeger CR (Custom Resource).
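You can confirm the CRD is registered (jaegers.jaegertracing.io is the name the operator manifest installs):
kubectl get crd jaegers.jaegertracing.io
Save the following CR as jaeger-cluster.yaml: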
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-cluster
  namespace: observe
spec:
  strategy: production # This is the strategy for deployment
  collector:
    maxReplicas: 5 # Set a value for maxReplicas (for HPA)
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
  storage:
    type: elasticsearch
    options:
      es:
        # Service discovery URL for OpenSearch
        server-urls: https://opensearch-cluster-master.observe.svc.cluster.local:9200
        # Indices will be created like demo-jaeger-span-2024-05-26
        index-prefix: demo
        tls.enabled: true
        # We are enabling this because we are using demo config
        tls.skip-host-verify: true
        username: admin
        password: Sl4Ij+H@\k4}0Vn
kubectl apply -f jaeger-cluster.yaml
# You have cleared level 3 if you see this,
$ kubectl get po -n observe
NAME READY STATUS RESTARTS AGE
dashboards-opensearch-dashboards-89dc54fc5-66rl8 1/1 Running 0 11h
jaeger-cluster-collector-6488f9b78d-fmf4c 1/1 Running 0 84s
jaeger-cluster-query-6f46478d95-zrsls 2/2 Running 0 84s
jaeger-operator-786c87cb64-gsc6n 2/2 Running 0 38m
opensearch-cluster-master-0 1/1 Running 0 11h
For other deployment strategies, refer here.
For more configuration options, you can refer here.
# We can use the port-forward method for the Jaeger UI as well,
kubectl port-forward svc/jaeger-cluster-query -n observe 16686:16686
HotROD Setup -
HotROD (Rides on Demand) is an example app we can use for testing tracing. Save the following manifest; we will apply it right after (see below).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-hotrod
  labels:
    app: hotrod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hotrod
  template:
    metadata:
      labels:
        app: hotrod
    spec:
      containers:
        - name: hotrod
          image: jaegertracing/example-hotrod:latest
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8081
              protocol: TCP
            - containerPort: 8082
              protocol: TCP
            - containerPort: 8083
              protocol: TCP
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://jaeger-cluster-collector.observe.svc.cluster.local:4318"
          resources:
            limits:
              memory: "256Mi"
              cpu: "500m"
            requests:
              memory: "128Mi"
              cpu: "250m"
          args: ["all"]
---
apiVersion: v1
kind: Service
metadata:
  name: hotrod-service
  labels:
    app: hotrod
spec:
  ports:
    - port: 8080
      targetPort: 8080
      name: port8080
    - port: 8081
      targetPort: 8081
      name: port8081
    - port: 8082
      targetPort: 8082
      name: port8082
    - port: 8083
      targetPort: 8083
      name: port8083
  selector:
    app: hotrod
  type: ClusterIP
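Apply the manifest (hotrod.yaml here is just whatever filename you saved it under):
kubectl apply -f hotrod.yaml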
kubectl port-forward svc/hotrod-service 8080:8080
After you apply the manifest file and start the port-forward, you should be able to see the HotROD frontend on localhost:8080.
Click a few buttons on the HotROD frontend, then go to localhost:30001 and open Traces under Observability.
If you are able to see something like this, congratulations: you have now connected HotROD to the Jaeger Collector, with OpenSearch as the storage backend.
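You can also check the trace indices directly in OpenSearch (reusing the earlier port-forward to opensearch-cluster-master; the demo prefix comes from the Jaeger CR above):
curl -ku 'admin:Sl4Ij+H@\k4}0Vn' 'https://localhost:9200/_cat/indices/demo-jaeger-*?v'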
The cool part: jaeger-query also works with this configuration. Run the port-forward command I gave above, and visit localhost:16686.
Thank you for your patience, I hope I have helped you (at least a little 😅).