
In Keycloak nothing made sense to me until I got myself familiar with OAuth 2.0 and OpenID Connect.

Keycloak's documentation seems vast, but isn't. There is also no way to search inside their documentation. It's a pity.

Better documentation is found in the administration web UI itself. There are "hints" and tooltips for almost every option. It really helped me a lot.

Keycloak is good software. It never failed for me. Even upgrading from 7.x.x to 16.x.x somehow just worked.

Yes, their docker image is fat, but it's also very flexible. Now that they are basing Keycloak on Quarkus instead of WildFly, the docker image should shrink in size.

    quay.io/keycloak/keycloak   18.0.0                      a6bd0f949af0   15 hours ago   562MB
    quay.io/keycloak/keycloak   18.0.0-legacy               421e95f49589   46 hours ago   753MB
ok, still big :).

Beware: they aren't using Docker Hub anymore. Newer versions are on Quay only (https://quay.io/repository/keycloak/keycloak).

I'm happy with Keycloak. Nice folks around the project, too.



We're actually working on a new version of the Administration UI at the moment (I'm one of the devs) so this is useful feedback. We're looking for folks to try it out, so take a look at https://github.com/keycloak/keycloak-admin-ui/.

You can try it out on the latest Keycloak by passing the --features=admin2 flag on startup.


I come from cybersec and I want to give you mad props for this product and project. It fills gaps in many places!


We have way too many issues with Keycloak. Sometimes I wonder why we integrated it. One of the main issues is that when you authorize via GitHub but cancel the authentication, it redirects to the Keycloak page rather than our login page. We couldn't find a solution yet.


Isn't it open source?


Will this have an impact on the auth screens and the related theming?


Theming the Administration UI in the new version is a lot harder as it relies more on JavaScript for rendering than FreeMarker templates in the old one. We're keeping the option to use the old interface around until this has been mitigated.

That said, we are now relying on PatternFly (https://www.patternfly.org), which allows quite a bit of customization through CSS variables.

As for the authorization screens, these are out of scope for the changes we're doing here. But they will probably get pulled into their own refactor at some point.


> Keycloak's documentation seems vast, but isn't. There is also no way to search inside their documentation. It's a pity.

> Better documentation is found in the administration web UI itself. There are "hints" and tooltips for almost every option. It really helped me a lot.

To echo everyone else: the Keycloak documentation does not do a good job of hand-holding you at all, and the number of possible ways you can configure and use the system and the amount of jargon and terminology used is massively overwhelming to someone trying to get started. It would be very helpful to have some "white paper"-esque summaries that walk you through some simple, typical use-cases.

I looked through the docs quickly before making this post, and as an example here's a basic task for initial setup ("hook up an IdP", basically giving Keycloak its database of users), and it's utterly incomprehensible to any human being who doesn't already know how to work the system, and essentially worthless even then. It's just... reading me the command-line options and a couple of config files? What do any of those values even mean? This is core functionality for Keycloak, and the documentation consists of "yeah, here's a command line with placeholders and a text file syntax, good luck bitches!".

https://www.keycloak.org/server/configuration-provider

Honestly I feel like you could do better simply by jumping into the UI and playing with options, it's not entirely unintuitive what's going on in the UI, but the docs are basically incomprehensible.

I actually know of several projects that have pretty much bogged down because of Keycloak configuration or role/privilege misconfiguration issues, and it's not hard to see why. It's the Turing tar-pit of IdPs: everything is possible and nothing is easy (or documented). Which is a shame, because it seems like an awesome piece of software, just inscrutable to the uninitiated.

As others are noting, I'm sure some of this is due to OAuth2 being an inscrutable piece of shit in general; same thing, it tries to do everything and is so un-opinionated that you end up with a bunch of basically incompatible implementations that are each effectively their own "standard" anyway.

(posted this on the wrong child, moving it to the parent)


> Keycloak's documentation seems vast, but isn't. There is also no way to search inside their documentation.

This is one area where incentives don't align correctly for open source projects that offer commercial support.


Disclaimer: Former Red Hatter but worked on OpenShift, not Keycloak

Working as a person providing commercial support for open source projects, I promise it doesn't actually work that way. Incentives are entirely for creating good documentation. Having crappy docs only hurts project adoption for paying and non-paying customers, increases the support burden, and wastes the time of your employees (who are the primary consumers of that documentation).

Usually documentation isn't great because writing (and maintaining!) good documentation is really hard. It's a continual effort, and it takes engineer time away from bug fixes and feature dev, two things for which there is never-ending demand.

Edit: Pro-Tip: With Red Hat projects (like Keycloak, OKD, etc) it's always worth looking at the RH product docs as well as "open source" docs. For example if you use OKD, check OpenShift docs as well as OKD docs. You do (unfortunately and I wish they'd remove this) usually have to log in to a Red Hat account but you don't have to pay. You can create a free account and use that.


I can tell you that at least as of late last year, the OpenShift install docs omitted key details for setting it up.

We were unable to do so until contacting RH and getting additional instructions - I forget all the details; part of it involved creating DNS records mentioned nowhere in the docs.


If you can remember which ones, I'd be interested. I installed OpenShift a dozen or so times using those docs and I don't remember having to add any DNS records that weren't in the docs. I also grepped my notes and don't see anything. That said I do remember the DNS records being on a page that wasn't the one I thought it should be on.

Something I do criticize them for though (this is a problem in broader tech not just Red Hat) is that they are aggressive at culling/cutting old docs. The idea is to keep the docs small and relevant, but unfortunately in my opinion they cut valuable stuff. I always screen grab/print docs at the time in case they get removed because that's been a wide problem.


How open is OpenShift? Is there a bug where this is tracked, and are people contributing, at least through issue comments? And responses from committers?


If you have a Red Hat subscription for OpenShift, then yes.

Although, unless it's high priority (like it's breaking functionality and there's no workaround) it's not usually a quick turnaround because it has to get prioritized and added to a sprint. It can take weeks or months. Although I did have a high-pri item fixed in under a day, so it does happen.


Isn't the existence of separate RH product docs (that require log in) a validation of the parent's point?


Thank you for taking the time to share your first-hand perspective!


> This is one area where incentives don't align correctly for open source projects that offer commercial support.

This is true in some[0] cases. It's also true that documentation is a key source of customer acquisition and retention.

Projects get traction by being useful out of the box[1] for some use cases, being appealing for hackers to configure and extend, and teasing features the former would pay for while the latter figure out build vs. buy.

Projects that do this well also learn shittons from real-world usage and feedback informing their roadmap and new opportunities to pursue.

[0] True when the perspective is "If we make the docs too good we're losing revenue" / "Everyone using Feature X gratis is a loss in MRR." It's an understandable view that's held widely. It's not often a significant revenue factor in my experience, and ~never when accounting for product and market insights gained by wider adoption.

[1] https://news.ycombinator.com/item?id=31259034


> Beware: they aren't using Docker Hub anymore. Newer versions are on Quay only

Oh, right, Quaycloak.


Quaaludes... what?


> Beware: they aren't using Docker Hub anymore.

Do you know why? Is it because of the docker hub pricing changes?

I found this discussion on the mailing list but didn't see a reason why: https://lists.jboss.org/pipermail/keycloak-user/2019-March/0...


This is the publicly stated reasoning:

https://lists.jboss.org/pipermail/keycloak-user/2019-March/0...

Mostly I think the answer is just that it's a Red Hat project and Red Hat wants to use their ecosystem.


Pretty sure it's because the core devs are RH employees, and RH owns Quay. Seems reasonable to keep things on your own infra.

Having said that, I know there was some falling out between RH and Docker some time ago, which was one of the reasons RH ended up creating Podman.


We're still on the older one and looking forward to the Quarkus improvement specifically for boot times. Even with an empty DB, the old one takes several minutes to load and come up. It's the long pole in our install.

Very happy with KC otherwise. We make heavy use of its nice API to create providers and clients at install time.


I'm literally about to jump from 8 to 17 this week, so that's good to hear. It seemed seamless on my local setup, and I was wondering if it was just too good to be true. It's a great piece of software.

You are correct about the documentation. I find the tragedy of open source documentation is that the people who need it most - the novices - are the ones who could write it best - if only they knew whether what they were saying was accurate. And then by the time you become an old-timer and know thy ways, you just want to wipe your hands and walk away, because you're tired... and you're still not sure if all your knowledge is accurate.

But anyway, once it's all figured out, it runs very reliably.


Thanks for the thumbs up!

>> 562MB

Curious, why is the Quay image/container so large? Is there a way to list the contents without downloading it?


The base image (registry.access.redhat.com/ubi8-minimal) is about 100 MiB.

    ID                                                                CREATED       CREATED BY                                                                                                                                                                                                                                                                                SIZE        COMMENT
    a6bd0f949af01b5680767225c3ac2b428d9b6921a6a9a420f6189f2523931c4c  18 hours ago  ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]                                                                                                                                                                                                                                                    0 B         buildkit.dockerfile.v0
    <missing>                                                         18 hours ago  EXPOSE map[8443/tcp:{}]                                                                                                                                                                                                                                                                   0 B         buildkit.dockerfile.v0
    <missing>                                                         18 hours ago  EXPOSE map[8080/tcp:{}]                                                                                                                                                                                                                                                                   0 B         buildkit.dockerfile.v0
    <missing>                                                         18 hours ago  USER 1000                                                                                                                                                                                                                                                                                 0 B         buildkit.dockerfile.v0
    <missing>                                                         18 hours ago  RUN /bin/sh -c microdnf update -y &&     microdnf install -y java-11-openjdk-headless && microdnf clean all && rm -rf /var/cache/yum/* &&     echo "keycloak:x:0:root" >> /etc/group &&     echo "keycloak:x:1000:0:keycloak user:/opt/keycloak:/sbin/nologin" >> /etc/passwd # buildkit  272 MB      buildkit.dockerfile.v0
    <missing>                                                         18 hours ago  COPY /opt/keycloak /opt/keycloak # buildkit                                                                                                                                                                                                                                               192 MB      buildkit.dockerfile.v0
    1ecf95eda522cf8db84ac321e43a353deea042480ed4e97e02c5290eb53390c3  5 days ago                                                                                                                                                                                                                                                                                              20.5 kB     
    <missing>                                                         5 days ago                                                                                                                                                                                                                                                                                              107 MB      Imported from -


For the most part I am also happy with Keycloak, but they could do a far better job documenting things, especially their language adapters. For example, the README for the `keycloak-connect` Node.js package has a link to documentation, but that documentation fails to document anything about the package.

Likewise I had better luck once I understood OpenID and then treating Keycloak as an extension of that. I even ended up writing my own code to deal with the bearer token passed to our API, because I couldn't find anything. If anyone is interested I can share it, but it isn't anything amazing.
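The basic shape of such bearer-token handling tends to look something like this (a hypothetical sketch, not the parent's actual code; helper names are mine, and a real deployment must additionally verify the token's signature against the realm's JWKS endpoint, which a JOSE library should handle):

```python
import base64
import json
import time

def extract_bearer(auth_header: str) -> str:
    """Pull the raw token out of an 'Authorization: Bearer <token>' header."""
    scheme, _, token = auth_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise ValueError("not a bearer authorization header")
    return token

def decode_claims(token: str) -> dict:
    """Decode the JWT payload WITHOUT verifying the signature.
    (Signature verification against the realm's JWKS is mandatory in
    production; this only extracts the claims for inspection.)"""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def check_claims(claims: dict, issuer: str, audience: str, now=None) -> bool:
    """Check the standard iss/aud/exp claims against expected values."""
    now = time.time() if now is None else now
    aud = claims.get("aud")
    audiences = [aud] if isinstance(aud, str) else (aud or [])
    return (claims.get("iss") == issuer
            and audience in audiences
            and claims.get("exp", 0) > now)
```

The claim checks are the easy part; the point of reaching for a library is the JWKS fetch, key rotation, and signature verification that this sketch deliberately leaves out.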

Most of my best help came from outside of the Keycloak support groups and instead reaching out to other people who use Keycloak.


> In Keycloak nothing made sense to me until I got myself familiar with OAuth 2.0 and OpenID Connect.

Hot take: OAuth2 is a really shitty protocol. It is one of those technologies that get a lot of good press because they enable you to do stuff you wouldn't be able to do in a standardized manner without resorting to abysmal alternatives (SAML in this case). And because of that it shines in comparison. But looked at from a secure-protocol-design perspective, it is riddled with accidental complexity producing unnecessary footguns.

The main culprit is the idea of transferring security-critical data over URLs. IIUC this was done to reduce state on the involved servers, but that advantage has completely vanished if you follow today's best practices of using the PKCE, state, and nonce parameters (together with the authorization code flow). And more than half of the attacks you need to prevent or mitigate with the modern extensions to the original OAuth concepts are possible because grabbing data from URLs is so easy. An attacker can trick you into using a malicious redirect URL? Lock down the possible redirects with an explicitly managed URL allow-list. URLs can be cached and later accessed by malicious parties? Don't transmit the main secret (bearer token) via URL parameters; instead transmit an authorization code which can be exchanged (exactly) once for the real bearer token. A malicious app can register your URL scheme in your smartphone OS? Add PKCE via server-side state to prove that the second request is really from the same party as the first request...
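For reference, the PKCE step mentioned above is tiny in code: per RFC 7636, the S256 code_challenge is just the base64url-encoded SHA-256 hash of a random code_verifier. A minimal sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character base64url verifier (spec range: 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The client sends the code_challenge with the authorization request and later proves possession by sending the matching code_verifier to the token endpoint; the server only has to store the challenge and recompute the hash.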

It could have been so simple (see [1] for the OAuth2 roles):

1. The client (third-party application) opens a session at the authorization server, detailing the requested rights and scopes.

2. The authorization server returns two random IDs - a public session identifier, and a secret session identifier for the client - and stores everything in the database.

3. The client directs the user (resource owner) to the authorization server, giving them the public session identifier (thus the user, and any possible attacker, only ever sees the public session identifier).

4. The authorization server uses the public session identifier to look up all the details of the session (requested rights and scopes, and who wants access) and presents them to the user (resource owner) for approval.

5. When that is given, the user is directed back to the client carrying only the public session identifier (potentially not even that is necessary, if the user can be identified via cookies), and the client fetches the bearer token from the authorization server using the secret session identifier.

That would be so much easier...
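As a toy, the flow proposed above fits in a few dozen lines (a hypothetical in-memory sketch of the commenter's scheme, not any real spec; all names are mine):

```python
import secrets

class AuthorizationServer:
    """Toy model of the two-identifier proposal: all state lives server-side,
    and the user/browser only ever sees the public session identifier."""

    def __init__(self):
        self.sessions = {}  # public_id -> session record

    def open_session(self, client_name, scopes):
        """Client opens a session; gets a public ID and a client-only secret ID."""
        public_id = secrets.token_urlsafe(16)
        secret_id = secrets.token_urlsafe(32)
        self.sessions[public_id] = {"client": client_name, "scopes": scopes,
                                    "secret_id": secret_id, "approved": False}
        return public_id, secret_id  # secret_id is never shown to the user

    def describe(self, public_id):
        """What the consent screen shows the user, looked up by public ID."""
        s = self.sessions[public_id]
        return {"client": s["client"], "scopes": s["scopes"]}

    def approve(self, public_id):
        """Resource owner grants the request."""
        self.sessions[public_id]["approved"] = True

    def fetch_token(self, public_id, secret_id):
        """Client redeems the approval; one-shot, and only with the secret ID."""
        s = self.sessions.pop(public_id)
        if s["secret_id"] != secret_id or not s["approved"]:
            raise PermissionError("not approved or wrong secret")
        return secrets.token_urlsafe(32)  # the bearer token
```

Nothing security-critical ever travels in a URL here except the meaningless public handle, which is the whole point of the proposal.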

Alas, we are stuck with OAuth2 for historic reasons.

[1] https://aaronparecki.com/oauth-2-simplified/#roles


You're right about the complexity and the steep learning curve, but there's hope that OAuth 2.1 will simplify this mess by forcing almost everyone to use a simple setup: authorization code + PKCE + DPoP. No "implicit flow" madness.

Another big problem with OAuth is the lack of quality client/server libraries. For example, in JS/Node, there's just one lone hero (https://github.com/panva) doing great work against an army of rubbish JWT/OAuth libs.


The problem with the authorization code flow is that it was not built with SPAs in mind, i.e. you always need a server-side component that obtains those tokens.

So a 100% client/FE solution based on Next.js/React/Angular/Vue etc. cannot simply be deployed to a CDN and then use Auth0/AWS Cognito/Azure AD or whatever without running and hosting your own server-side component.


There is a document on best practices for browser-based apps such as SPAs/PWAs, which includes use of the code flow.

https://datatracker.ietf.org/doc/html/draft-ietf-oauth-brows...

(disclaimer - co-author)

The catch is that since the client web origin and AS web origin are often different sites, the AS has to actually implement CORS on their token endpoint.

Some implementations unfortunately (perhaps due to a misunderstanding about what CORS is meant to accomplish) make this a per-tenant/per-installation allowlist of origins on the AS.

Auth0 and Ping Identity (my employer) document CORS settings for products. I'm not sure about AWS and you might need to add CORS via API gateway. Azure AD supports CORS for the token endpoint, but they may limit domains in some manner (such as redirect uri of registered clients).

FWIW, I created a demo ages ago (at https://github.com/pingidentity/angular-spa-sample), which by default is configured to target Google for OpenID Connect and uses localhost for local development/testing. It hasn't aged particularly well in terms of library choices, but I do keep it running.

A deployment based on older Angular is also at https://angular-appauth.herokuapp.com to try - IIRC I used a node server just to deal with wildcard path resolution of the index file, but there's otherwise no local logic.


I appreciate your work on clarifying the situation. But my statement still stands, and you seem to back it up in your draft:

> The JavaScript application is then responsible for storing the access token (and optional refresh token) as securely as possible using appropriate browser APIs. As of the date of this publication there is no browser API that allows to store tokens in a completely secure way.

So with OAuth 2.0 + PKCE and no BE component, the tokens are directly exposed to the client, just as they were with the implicit flow. Also, if I'm not mistaken, PKCE extension is optional in OAuth 2.0, and without it you cannot securely use the code flow (as you would have to expose the client secret).


Storing access tokens in Javascript and storing them in a native application have about equal protections - but by far most Javascript apps are left far more susceptible to third party code execution.

The answer is typically to make such credentials incapable of being exfiltrated by adding proof-of-possession, such as the use of MTLS or of the upcoming DPoP mechanism.

Note that preventing exfiltration doesn't prevent a third party from injecting logic to remote-drive use of those tokens and their access sans exfiltration.

While access tokens can be requested with specifically limited scopes of access, a backend server could potentially further control the level of access a front-end has. The problem is that the backend and frontend are typically defined in terms of business requirements. As such, there hasn't been a clear opportunity for standardizing such approaches.

When using a backend, my advice is to be sure you don't just have your API take a session cookie in lieu of an access token. APIs are typically not constructed with protections against XSRF and the like (or rather, an access-token header serves as an XSRF protection, while a bare session cookie will not).

> Also, if I'm not mistaken, PKCE extension is optional in OAuth 2.0

Correct - although it is strongly recommended by best current practices, including recommending deployments limit/block access to clients which do not use PKCE.

> …and without it you cannot securely use the code flow (as you would have to expose the client secret).

PKCE has nothing to do with client secrets or client authentication. It provides additional strong correlation between the initial front-end request and the subsequent code-exchange request.

It was written to support native apps, as many such apps used the system browser for the authorization step and then redirected back into a custom URL scheme. Since custom URL scheme registrations are not regulated, malicious apps could attempt to catch these redirects. PKCE provides a verification that the same client software created both the redirect to the authorization endpoint and the request to the token endpoint. Even if a malicious piece of software got the code, they wouldn't have a way to exchange it for an access token.

Some of the original OAuth security requirements for clients have been found to be poorly implemented, but PKCE provides equivalent protections against these particular issues. Unlike client-only implementation logic, PKCE support is something that an AS can audit. Hence it is likely that PKCE will be a requirement in future versions of OAuth.


I really appreciate your effort, but somehow you just produce a lot of text without arguing against my point: OAuth 2.0 in a FE SPA is just broken, or "difficult" at best.


a) The text explains how it's not broken. b) It's not difficult if you use a library that ensures this is done correctly. Otherwise, yes, secure auth is difficult, just like it's difficult anywhere else.


Even for 100% FE solutions, the current best practice from the OAuth authors [1][2] is to use authorization code + PKCE (optionally, + DPoP). The implicit flow is deprecated (since PKCE), and as of OAuth 2.1 it will be removed entirely.

[1] https://datatracker.ietf.org/doc/html/draft-ietf-oauth-secur...

[2] https://auth0.com/docs/get-started/authentication-and-author...


It depends on the provider. For Mastodon and Pleroma, there's an endpoint to generate a client ID/secret that you can call from the client. The flow is basically:

  1. Prompt for an instance name
  2. Get a client id/secret from the instance and put it in localStorage
  3. Redirect to the login page
  4. Once you get the callback, get the token using the code and the client ID/secret from localStorage
  5. You're done. No server needed.
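The steps above can be sketched as the requests the client constructs (a hedged illustration: endpoint paths follow Mastodon's documented API, but client name, scopes, and hosts are made-up placeholders; a real SPA would do this in JS with fetch, shown here in Python for clarity):

```python
from urllib.parse import urlencode

def app_registration_request(instance: str, redirect_uri: str):
    """Step 2: POST this body to the instance to obtain a client id/secret."""
    return (f"https://{instance}/api/v1/apps",
            {"client_name": "my-spa",           # placeholder app name
             "redirect_uris": redirect_uri,
             "scopes": "read write"})

def authorize_url(instance: str, client_id: str, redirect_uri: str) -> str:
    """Step 3: send the user here to log in and approve."""
    query = urlencode({"response_type": "code", "client_id": client_id,
                       "redirect_uri": redirect_uri, "scope": "read write"})
    return f"https://{instance}/oauth/authorize?{query}"

def token_request(instance: str, client_id: str, client_secret: str,
                  code: str, redirect_uri: str):
    """Step 4: exchange the callback code for an access token."""
    return (f"https://{instance}/oauth/token",
            {"grant_type": "authorization_code", "code": code,
             "client_id": client_id, "client_secret": client_secret,
             "redirect_uri": redirect_uri})
```

Since the instance hands out the client credentials on demand, everything here can run from localStorage in the browser with no backend of your own.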


But this is surely non-standard OAuth 2.0, is it?


Other than the initial request to get the client ID/secret it's standard I think


An SPA is HTML/JS served by the server. We don't need client-only solutions. We need devs to understand how HTTP and browsers work.

It means we simply keep using what actually works, i.e. a server-side component that obtains authorization, plus simple mechanisms to ensure the token stays on the server; the FE speaks to the server, which in turn speaks to the target app. Proxying is not that difficult a problem, and we don't have to run in circles inventing different flows only to cater to devs who can't learn their field.


You've misidentified the problem.

We need CDN solutions for front-ends because that's the best way to deliver great, scalable performance for complex SPAs.

We also need a purely client-side flow for mobile (native) apps.

Additionally, the authorization code flow (with PKCE) in Keycloak still supports pure client-side authorization. It's more complex than the implicit flow, but it doesn't really matter, as any library (including keycloak-js) will take care to ensure it's done correctly.


Honestly, DPoP[1] is pretty horrible. It is a partial re-implementation of TLS client authentication deep inside the TLS connection. What's wrong:

- No mandatory liveness check. That means you don't know whether the proof of possession was indeed issued just now or was pre-issued by an attacker with past access. Quoting from the spec[2]: """Malicious XSS code executed in the context of the browser-based client application is also in a position to create DPoP proofs with timestamp values in the future and exfiltrate them in conjunction with a token. These stolen artifacts can later be used together independent of the client application to access protected resources. To prevent this, servers can optionally require clients to include a server-chosen value into the proof that cannot be predicted by an attacker (nonce).""" This is a solved problem in TLS.

- The proof of possession doesn't cover much of the HTTP request, just "The HTTP method of the request to which the JWT is attached" and "The HTTP request URI [...], without query and fragment parts." It doesn't even cover the query parameters or the POST body. The given rationale: """The idea is sign just enough of the HTTP data to provide reasonable proof-of-possession with respect to the HTTP request. But that it be a minimal subset of the HTTP data so as to avoid the substantial difficulties inherent in attempting to normalize HTTP messages.""" In short: because it is so damn difficult to do on this layer. Of course, TLS covers everything in the connection.

- Validating the proofs, i.e. implementing the server side of the spec, is super complicated; see [3]. And to do it right you also need to check the uniqueness of the provided nonce (see [4]), which brings its own potential attack vectors. And to actually provide liveness checks (see above) you have to implement a whole extra machinery for server-chosen nonces (see [5]). I expect several years until implementations are sufficiently bug-free. Again, TLS has battle-tested implementations ready.

Best of all? There is already a spec for certificate-based proofs of possession using mutual TLS! See [6]. We really should invest our time in fixing our software stack to use the latter (e.g. by adding JavaScript-initiated mTLS in browsers) instead of yet another band-aid in the wrong protocol layer.

[1] https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html

[2] https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html#sect...

[3] https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html#name...

[4] https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html#name...

[5] https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html#name...

[6] https://www.rfc-editor.org/rfc/rfc8705.html


For most Keycloak users, only a very tiny subset of OIDC is being used too. Usually there is no three-way relationship between a third-party developer, an API provider, and a user anymore. You could rip scopes out of Keycloak and few users would be unable to cover their use cases. Rarely is there more than one set of scopes being used with the same client.

Keycloak also supports some very obscure specs, my favourite probably being "Client Initiated Backchannel Authentication", which can enable a push-message-to-authenticator-app type of authentication flow using a lot of polling and/or webhooks.


Can you disclose the number of users and apps you have? Are you using Keycloak, or do you pay for Red Hat Single Sign-On (for context, that's the name of the downstream product that Red Hat sells subscriptions for)?


The downside to using Red Hat Single Sign-On is that it is a vastly inferior product to using Keycloak upstream as it is so many versions behind.

This means that bug fixes and features haven't trickled down yet. Although RH SSO 7.5 jumped from Keycloak version 9.0.17 (in RH SSO 7.4) to 15.0.2 so there's some improvement there... but Keycloak just released 18.0.0...


We are using Keycloak, not SSO. There is no long-term-support Keycloak version available, so we are considering buying into Red Hat SSO.


You should really be building your own multi-stage container so you can prebake KC_FEATURES and KC_DB into the image.

https://www.keycloak.org/server/containers


We still use an old version without Quarkus :). But yes, that's the way to go.


> Newer versions are on Quay only

Thanks for mentioning this, I didn't realize Keycloak is a Red Hat product. I'll plan to move to something else. Anything Red Hat makes turns into a catastrophe.



