In a recent blog post, I introduced Sentry as a platform for capturing and analysing errors in real time. In this article, I would like to explain how Sentry can be used with applications that run as containers on OpenShift. Red Hat OpenShift provides a container-centric hybrid cloud solution, built on projects such as Docker, Kubernetes, Project Atomic and OpenShift Origin, with Red Hat Enterprise Linux as its core foundation. It provides a secure and stable platform for container-based deployments. Within the containers, different technologies are supported, e.g. Node.js, Java, WildFly etc. The demo application for this article was created with Spring Boot. Since OpenShift is ultimately based on Kubernetes, the instructions and concepts presented below can also be applied one-to-one to Kubernetes!
In Sentry, as well as in OpenShift, functional separation or grouping is achieved through projects. It is therefore recommended to create a matching project in Sentry for each project in OpenShift. The permissions for the projects can then be set accordingly in both Sentry and OpenShift.
Each project in Sentry is referenced by a so-called DSN (Data Source Name), which is unique per project. The DSN configures the connection and access data for each client; it typically takes the form of a URL containing the public key, the Sentry host and the project ID. The DSN can be found in the project settings within the web console.
In OpenShift, a ConfigMap named sentry.io is created in each project. In this ConfigMap, the DSN is configured as a property, so every application in the OpenShift project has access to it. When an application is deployed, this ConfigMap is used as the source for environment variables and is thus integrated into the respective application.
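Instead of the CLI command shown later, such a ConfigMap could also be declared as a manifest. The following is a sketch; the DSN value is the placeholder used throughout this article, not a real key:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sentry.io
data:
  # DSN of the matching Sentry project (placeholder value)
  SENTRY_DSN: http://064662d323d44ec8b642297a16a7b845:1f12c8a490fd48139f440ba7fb71627b@localhost:9000/7
  # Environment label reported to Sentry
  environment: dev
```

Applying it with `oc apply -f` is equivalent to the `oc create configmap` command used in the setup steps below.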
The demo application can be found on GitHub. The application defines Apache Camel routes and starts with Spring Boot. It can be exercised by calling a REST interface (see below). Sentry is integrated via the logging configuration:
<configuration scan="true" scanPeriod="30 seconds">
  <appender name="Sentry" class="io.sentry.logback.SentryAppender"/>
  <root level="INFO">
    <appender-ref ref="Sentry"/>
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
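The appender needs a DSN at runtime; the Sentry Java client resolves it from external configuration sources such as the SENTRY_DSN environment variable. A minimal sketch of that lookup order with plain JDK calls (class and method names here are illustrative, not part of the Sentry API):

```java
public class DsnLookup {

    // Resolve the DSN roughly the way the Sentry Java client does:
    // environment variable first, then a JVM system property as fallback.
    static String resolveDsn() {
        String dsn = System.getenv("SENTRY_DSN");
        if (dsn == null || dsn.isEmpty()) {
            dsn = System.getProperty("sentry.dsn", "");
        }
        return dsn;
    }

    public static void main(String[] args) {
        String dsn = resolveDsn();
        System.out.println(dsn.isEmpty() ? "no DSN configured" : "DSN found");
    }
}
```

Because the DSN is picked up from the environment, the logging configuration itself can stay free of credentials.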
The values from the ConfigMap sentry.io are then made available as environment variables when the application is deployed in OpenShift:
- name: SENTRY_DSN
- name: SENTRY_ENVIRONMENT
- name: SENTRY_RELEASE
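In the deployment configuration, wiring these variables to the ConfigMap could look like the following sketch (the key names match the ConfigMap created in the setup steps below; SENTRY_RELEASE would follow the same pattern with its own key):

```yaml
env:
  - name: SENTRY_DSN
    valueFrom:
      configMapKeyRef:
        name: sentry.io
        key: SENTRY_DSN
  - name: SENTRY_ENVIRONMENT
    valueFrom:
      configMapKeyRef:
        name: sentry.io
        key: environment
```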
The following explains the setup and testing of the application in OpenShift.
- Create a project with the same name in OpenShift and sentry.
- Obtain the DSN under Project Settings > Client Keys (DSN) in the sentry.io console.
- Login to OpenShift with the CLI
oc login -u developer -p developer https://OPENSHIFT_IP_ADDR:8443
- The following command grants ‘view’ access:
oc policy add-role-to-user view --serviceaccount=default
If the above permission is not granted, your pod may throw a message similar to the following:
“Forbidden!Configured service account doesn’t have access. Service account may have been revoked”
- Create a ConfigMap named sentry.io with the keys ‘SENTRY_DSN’ and ‘environment’. As the value for SENTRY_DSN, use the Client Keys (DSN) of the project in sentry.io.
oc create configmap sentry.io --from-literal=SENTRY_DSN=http://064662d323d44ec8b642297a16a7b845:1f12c8a490fd48139f440ba7fb71627b@localhost:9000/7 --from-literal=environment=dev
- Create a ConfigMap for this project using the following command:
oc create configmap springboot-camel-sentry --from-file=src/main/resources/application.properties
- Deploy the application to OpenShift
- Verify the Deployment
- Produce an error
Check Sentry for a FileNotFoundException.
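The kind of event the demo produces can be reproduced with plain Java. In the demo application the exception would travel through the Sentry logback appender as an ERROR event; in this self-contained sketch it is only caught and printed (class name and file path are illustrative, not taken from the demo repository):

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;

public class ProduceError {
    public static void main(String[] args) {
        try {
            // Attempt to open a file that does not exist.
            new FileInputStream("/does/not/exist.txt");
        } catch (FileNotFoundException e) {
            // In the demo app, logging this at ERROR level would
            // send an event to Sentry via the logback appender.
            System.out.println("caught: " + e.getClass().getSimpleName());
        }
    }
}
```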
With a few configuration steps, Sentry can be usefully integrated with OpenShift, making it easy to establish central error management for applications running in OpenShift.