In the previous article, I investigated modern PKI software alternatives. One of the options on the list was HashiCorp Vault. The natural next step is to set up a Vault PKI.
This article documents setting up an imaginary multi-tenant Vault PKI with custom PEM bundles generated with OpenSSL.
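To make the setup concrete, here is a minimal sketch of the per-tenant mounting step against Vault's HTTP API. This is my own illustration, not the article's code: the Vault address, dev-mode root token, tenant names, and bundle file names are all assumptions.

```python
# Minimal sketch: mount one PKI secrets engine per tenant and import a
# custom OpenSSL-generated PEM bundle. Address, token, tenants, and file
# names are hypothetical; a dev-mode root token is used only to illustrate.
import requests

VAULT_ADDR = "http://127.0.0.1:8200"
HEADERS = {"X-Vault-Token": "root"}  # never use a root token like this in production

for tenant in ("tenant-a", "tenant-b"):
    # Mount a dedicated PKI secrets engine for the tenant.
    requests.post(
        f"{VAULT_ADDR}/v1/sys/mounts/pki-{tenant}",
        headers=HEADERS,
        json={"type": "pki", "config": {"max_lease_ttl": "87600h"}},
    ).raise_for_status()

    # Import the tenant's CA as a PEM bundle (private key plus certificate)
    # generated beforehand with OpenSSL.
    with open(f"{tenant}-bundle.pem") as bundle:
        requests.post(
            f"{VAULT_ADDR}/v1/pki-{tenant}/config/ca",
            headers=HEADERS,
            json={"pem_bundle": bundle.read()},
        ).raise_for_status()
```

Mounting a separate engine per tenant keeps each tenant's CA, roles, and issued certificates isolated under its own path.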
Modern applications tend to get fairly complex pretty quickly. A typical stack consists of many moving parts: a cloud environment, perhaps abstracted behind Kubernetes or Mesos, a multitude of web servers and gRPC services, and monitoring systems like Grafana, Jaeger, and Prometheus, all fronted with load balancers or proxies like Traefik.
In Introduction to Keycloak Authorization Services, I described how to use the Authorization Services to find out whether a user has access to certain resources.
I did so by asking Keycloak to issue an access token using the special grant_type value urn:ietf:params:oauth:grant-type:uma-ticket, which returned the list of permissions the user has access to.
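As a refresher, the request looks roughly like the sketch below. It is illustrative rather than the article's code: the base URL, realm, client ID, and the previously obtained access token are assumptions.

```python
# Minimal sketch: exchange a regular access token for the list of
# permissions via the UMA grant. Base URL, realm, and client ID are
# hypothetical; older Keycloak versions need an /auth path prefix.
import requests

KEYCLOAK_BASE = "http://localhost:8080"
REALM = "example-realm"
CLIENT_ID = "example-client"   # a client with authorization services enabled
USER_ACCESS_TOKEN = "..."      # obtained earlier via a normal login

response = requests.post(
    f"{KEYCLOAK_BASE}/realms/{REALM}/protocol/openid-connect/token",
    headers={"Authorization": f"Bearer {USER_ACCESS_TOKEN}"},
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "audience": CLIENT_ID,
        # Return the decoded permission list instead of a signed RPT.
        "response_mode": "permissions",
    },
)
response.raise_for_status()
for permission in response.json():
    print(permission["rsname"], permission.get("scopes", []))
```

With response_mode set to permissions, Keycloak returns the evaluated permissions directly as JSON instead of encoding them inside a signed requesting party token.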
As the number of applications and websites in the organization grows, the developer will inevitably receive a request to implement Single Sign-On. Single Sign-On (SSO for short) is an authentication scheme allowing the user to log in with a single set of credentials and share the session across multiple, independent, potentially unrelated systems.
Updated on 15th of May 2021 for Keycloak 13.0.0 with Postgres 13.2.
6th of June 2021: Follow-up: Setting up Keycloak with TLS for local development.
Keycloak is an open-source identity and access management system developed as a JBoss community project under the stewardship of Red Hat.
First, create an account on the Sonatype JIRA, unless you already have one. For the new group ID, create a ticket using the form under this URI. Once the ticket is filed, wait for it to go into the Resolved state. When that happens, you can publish your project to Sonatype.
It is entirely possible that what I am going to describe here is an edge case that not many people hit with their Kafka deployments. However, in my experience, when Kafka is used to ingest large volumes of data, cold storage makes perfect sense. Considering that people ask for a cold storage feature on the Kafka mailing list every now and then, I am clearly not the only one who would find it useful.
About two weeks ago, Virdata released a set of patches for Apache Spark enabling Spark to work on Mesos with Docker bridge networking. We are using these in production for our multi-tenant Spark environment.
SPARK-11638: Spark patches

All patches for all components described below are available in the Spark JIRA.
Wow. It's difficult to believe it's been almost a week since I gave a talk about gossip protocols at the Erlang User Conference in Stockholm. It was a fantastic event: great agenda, great topics, fantastic networking. EUC is one of those events you should attend; you will not regret it.
A little moan to start with… I owned a mid-2009 MacBook Pro. I never used it for presenting stuff to others, but I actually bought a remote control for it. The computer cost me a lot of money; it was top-end stuff when I bought it.