Series of articles: defence in depth part 3 of 7

Clients and sessions

In the first two articles, we discussed how you can design your system to build strong permission control. We looked at how to find the right balance for what information is attached to your access token, and between identity and local permissions. This article covers how a client can be configured to get your token, and how we handle sessions.

We start by dividing our clients into different types:

  • Service integration (machine to machine)

  • Web (both SPA and server-side)

  • Native (iOS, Android, Windows, Linux, macOS etc)

  • Devices (client without browser, or with poor password entry capabilities, e.g. TVs and terminals)

From the OAuth2 specifications, we currently only need three different flows to cover all types of clients.

  • Client Credentials (Service integration)

  • Code+PKCE (Web and Native)

  • Device Code (Devices)

Client Credentials represents a client where a human is not involved, e.g. another service or an automated job.

Code+PKCE is used for all clients used by a human and which have a web browser.

Device Code is used if the client does not have a browser, or has poor user input capabilities. Examples are TV, terminals (CLI) etc.

Martin Altenstedt

The fact that there are "only" three flows is a significant simplification compared to a few years ago and makes this easier to work with. I really recommend that you, as the person responsible for identity and login, read up on these three flows. A good place to start:

https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-02

Service integration

Clients where a human is not involved are an important component of more complex systems. The systems built today are increasingly composed of loosely coupled services, and communication between these services creates the need for this type of integration. Another example is automated system testing of deployed systems: the process that performs the tests in a continuous integration pipeline (CI/CD) is a client of this type.

These clients often represent a privileged user who has significant rights in the system. It is important that our permission model and scopes are good enough to restrict the rights of this type of client. In particular, system test clients are problematic, as they need access to the entire system.

Erica Edholm

I see test clients with full rights to the system, which is a problem when client secrets are poorly managed.

In addition, we lose a lot of test cases when we have a test client that is entitled to everything. It is better to have several test clients with more specialized access to the system.

Even worse is turning off all or part of the access control for systems under test! Firstly, it can creep into production, and secondly, you don't actually test the code that goes into production.

When designing the system, we need to think about testability :-)

The flow we use is called "Client Credentials", and is the simplest of our three OAuth2 flows.

The client authenticates with a "client secret" and receives an access token (e.g. a JWT) that can be used for all API calls.

The client usually authenticates itself by means of a "client secret", or with an X.509 certificate. A very important aspect of the client secret is that it is generated mechanically with very high entropy. It therefore does not have the weaknesses of a password chosen by a human being.
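As an illustration of what "mechanically generated with very high entropy" means in practice, here is a minimal sketch. The function name is ours and not tied to any IdP product; real IdPs generate the secret for you when you register the client.

```python
import secrets

def generate_client_secret(n_bytes: int = 48) -> str:
    """Return a URL-safe client secret with n_bytes (384 bits by
    default) of entropy from the operating system's CSPRNG."""
    return secrets.token_urlsafe(n_bytes)
```

Such a secret is infeasible to guess or brute-force, unlike a human-chosen password, so the remaining risk is entirely in how it is stored, distributed and rotated.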

If we choose to let the client authenticate with a certificate, we can increase security with certificate-bound tokens, which allow the client's access token to be cryptographically bound to the certificate and thus only be used by a client that has access to the certificate's private key (as opposed to an access token that is not bound to a certificate and can be used by anyone else).

Certificate-bound token: if the client uses an X.509 for authentication via mTLS and the access token format is JWT, the client certificate information can be added as a claim. The API can then use this information to ensure that the access token is sent from the same client (the client that has access to the private key). This means that an attacker cannot send a stolen access token from another client.
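A minimal sketch of the API-side check, assuming the IdP has put the certificate thumbprint in a cnf/x5t#S256 claim as defined in RFC 8705 (the function names are ours):

```python
import base64
import hashlib

def cert_thumbprint(cert_der: bytes) -> str:
    """base64url-encoded SHA-256 hash of the certificate's DER bytes,
    the value RFC 8705 places in the cnf/x5t#S256 claim."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def token_bound_to_cert(claims: dict, presented_cert_der: bytes) -> bool:
    """API-side check: accept the access token only if its cnf claim
    matches the client certificate presented on the mTLS connection."""
    expected = claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == cert_thumbprint(presented_cert_der)
```

A stolen access token then fails this check on any connection where the attacker cannot also present the certificate's private key.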

The disadvantage is that we need to manage X.509 certificates, which can be more complex than managing client secrets.

Note that we can dispense with certificate management but still bind tokens to a client by generating certificates on the client per session. We only use these certificates to bind a token to the client via mTLS. In other words, we then do not use the certificate to authenticate the client in the OAuth2 flow.

It is important to be able to rotate both certificates and client secrets and to communicate them to the client in a secure way. This is easy to say but in practice can be a major challenge for an organisation.

Martin Altenstedt

A common problem is that "client secret" and system client certificates are not securely managed. For example, they can be checked in with the client's source code, sent by email, SMS or chat. Combined with the fact that they represent privileged access to the system, this can represent a serious security weakness.

Another is that secrets are not rotated, which increases the risk that a former employee or attacker could access the systems. It is important to design your systems so that it is easy to rotate secrets on a recurring basis.

To further strengthen the protection, we can also use a reference token instead of a full JWT as access token.

A JWT contains all the information our API needs to perform an authorization check. The problem with a JWT is that it cannot be revoked, and it often contains personal data. This is particularly important to consider if the client represents a privileged user, is developed by a third party, or is public.

We should consider further strengthening the protection of service integrations using reference tokens, instead of a JWT. As always, this is a balance between security and system performance.

A reference token does not itself contain any information and offers stronger protection than a full JWT because it can be revoked immediately. The disadvantage is an additional call from the API to the IdP on each request to resolve the reference into a full JWT.
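The API-side logic can be sketched as follows; `introspect` stands in for the HTTPS call to the IdP's introspection endpoint (RFC 7662), which we stub out here rather than make a real network call:

```python
from typing import Callable, Optional

def authorize_call(reference_token: str,
                   introspect: Callable[[str], dict]) -> Optional[dict]:
    """Resolve a reference token via the IdP's introspection endpoint
    (RFC 7662). The API must authenticate itself to the IdP when it
    makes that call; a revoked token simply comes back as inactive."""
    claims = introspect(reference_token)
    if not claims.get("active"):
        return None  # revoked or expired: reject the call immediately
    return claims    # full claims, used for the authorization check
```

This is where the security/performance trade-off shows up: revocation takes effect on the very next request, at the cost of one extra round trip per call (often softened with a short-lived cache).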

Now we have established a foundation that we can use in all our flows. We can strengthen our protection by using a few different patterns:

  • mTLS (X.509) instead of "client secret"

  • Certificate-bound tokens (requires mTLS, both to the API and the IdP)

  • Reference token instead of JWT (can be used without mTLS, but requires that the API can authenticate to the IdP e.g. with a client-secret)

Tobias Ahnoff

Even if a client has access to several scopes, we should use as few scopes as possible. This is especially important when tokens are sent to third-party services that you don't control: what prevents that service from using your token against other services? The principle of least privilege also applies to tokens.

Web

Whether it is a Single Page Application (SPA), or whether it is HTML created on the server and served to the browser, the same flow applies: Code+PKCE. The rest of this section will discuss the case of an SPA, since in our experience it is by far the most common way to build web applications today.

The recommendation from the IETF is to use a Backend For Frontend (BFF) as the client to the IdP. This means that our SPA (frontend) only talks to the BFF (backend), which acts as a proxy to all APIs. Authentication between the SPA and the BFF uses a cookie; the BFF translates the cookie into an access token that it attaches to all API calls. It is the BFF that manages all tokens and is responsible for using the right token in the right context.

Integrating directly from the web client to the IdP using e.g. oidc-client-js, auth0-js or similar libraries is increasingly problematic. As browsers strengthen the protection of the individual's data, our technical ability to build a good solution using these libraries is limited.

https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-05#section-6
https://leastprivilege.com/2020/03/31/spas-are-dead/

It is the BFF that is the client against the IdP, not our SPA. The correct flow for the integration to the IdP is Code+PKCE.
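The PKCE part of the Code flow can be sketched like this (RFC 7636, S256 method). In the BFF pattern it is the BFF that generates and holds the verifier; only the challenge travels through the browser:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge
    (RFC 7636). The challenge goes into the authorization request;
    the verifier stays with the client and is sent in the token
    request, proving both requests came from the same client."""
    verifier = secrets.token_urlsafe(64)
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

This is why PKCE defeats authorization-code interception: an attacker who steals the code from the browser redirect cannot redeem it without the verifier, which never passed through the browser.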

Note that we can strengthen protection using reference tokens and certificate-bound tokens in the same way as described in the previous section on service integration. Compared to a direct integration between the SPA and the IdP, a BFF allows very strong protection, since tokens are only handled in an environment we control (the backend).

Use a framework that gives you good support to build a BFF. For safety-critical functions, we like to base our solutions on hardened components, well-established designs and reference implementations. Security is simply an area where you should avoid inventing your own solutions.

See the Sessions chapter below for more information on how we need to manage cookies and tokens to enable the user to use the application over a longer period of time.

Native

Examples of native clients are mobile apps on iOS and Android, and installed applications on Windows, macOS and Linux. Formally, we mean a public client: one that an attacker can take apart and inspect. Contrast this with a confidential client, e.g. our BFF, which an attacker does not have access to and which can therefore be entrusted with secrets. Many native clients have good options for storing a secret, but with important caveats.

Although a platform such as iOS can store a secret in a cryptographically secure way, an attacker with access to the environment while the client is running can extract secrets directly from the client's memory. Different platforms offer different levels of protection, but for this reason there may be a case for using reference tokens: unlike a full JWT, they never contain any personal data and can be revoked immediately.

The flow that the client should use is Code+PKCE. We use the system browser for authentication against the IdP. As always, we should use pre-built libraries for the integration with the IdP. All major platforms have this type of pre-built libraries. Choose the one that best supports your particular scenario.

The client uses the system browser to allow the user to authenticate to the IdP. We use a ready-made library from the platform vendor.

Tobias Ahnoff

We often encounter a contradiction between good user interaction design and secure MFA. This is especially true in mobile apps where it is tempting to let the user enter passwords directly in the app instead of opening the system's browser. Here it is important to understand the user's perspective and the possibilities that OAuth 2 and OIDC offer us. We want to strike a good balance between accessibility and confidentiality.

Devices

When we have clients that do not have a browser, or have limited support for allowing a user to enter text, we can make use of a flow called "Device Authorization Grant". Examples of clients are TV sets and IoT devices. An important type of client in many systems is a terminal (CLI) where this flow can be a very convenient way for the user to authenticate.

A user starts the login in e.g. a bash prompt. The login command returns a URL and code that the same user can use for authentication on another device.

  [anna@machine ~]$ signin
  To sign in, use a web browser to open the page https://idp/device
  and enter the code C7HL4EQK4 to authenticate

The user launches a browser, navigates to the link above and authenticates as usual. After a successful login, the user enters the code, which causes an access token to be returned to the client (Device).
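The client-side polling against the token endpoint can be sketched as follows; `poll` stands in for the HTTP call to the IdP's token endpoint (RFC 8628), stubbed out here rather than making a real request:

```python
import time
from typing import Callable, Optional

def poll_for_token(poll: Callable[[], dict],
                   interval: float, max_attempts: int) -> Optional[dict]:
    """Poll the IdP's token endpoint until the user has approved the
    device code (RFC 8628). 'authorization_pending' means keep
    waiting, 'slow_down' means back off, anything else is fatal."""
    for _ in range(max_attempts):
        response = poll()
        if "access_token" in response:
            return response          # user approved on the other device
        error = response.get("error")
        if error == "slow_down":
            interval += 5            # server asks the client to back off
        elif error != "authorization_pending":
            return None              # e.g. expired_token or access_denied
        time.sleep(interval)
    return None
```

While this loop runs, the user completes the login in a normal browser on another device, so the device itself never handles credentials.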

Summary of clients and flows

Now we have gone through how different types of clients get access tokens. We have four different client types and three basic flows with OAuth 2.

Client type           Flow
Service integration   Client Credentials
Web                   Code+PKCE
Native                Code+PKCE
Devices               Device Code

Whatever the flow, we can strengthen protection with mTLS for strong client authentication, certificate-bound tokens, and reference tokens.

Martin Altenstedt

An important architectural decision is that we use a BFF for SPAs. There are many security benefits to a BFF solution, as we keep security-critical code and tokens in the backend.

The code flow should always be secured with PKCE and for systems with higher security requirements PAR should also be used, which is recommended by e.g. FAPI.

PAR is a new specification that is expected to be supported in the fall of 2021. PAR strengthens protection by minimizing parameters passing through the browser.

https://datatracker.ietf.org/doc/html/draft-ietf-oauth-par-08

FAPI originally comes from the banking and finance sector, but with the work on FAPI 2.0 the recommendations also apply to other systems that need the same level of security as "financial-grade APIs", e.g. health care.

https://openid.net/wg/fapi/

Tobias Ahnoff

Regardless of the security level you need, FAPI is an excellent reference for how to build secure OIDC/OAuth 2 implementations, with clear requirements and recommendations for the IdP, the client and the API (resource server).

Sessions

OAuth 2 only defines the delegation of authorization to a client, i.e. how to securely give the client an access token. The basic scenario of OAuth 2 is that the user gives a third-party client access to e.g. profile data in their Google account, often even after the user has left the application.

The strength of OAuth 2 Code flows is that this is done without the user revealing their secret to the third-party client. The secret, in this case the Google password, should stay between Google and the user.

However, this is not the same as a user logging into the client and having an active session.

A session can be defined as a temporary and interactive exchange of data between a user and the system. The session is established at some point in time and terminated at a later point in time. The lifetime of a session varies depending on the type of system: a banking application typically has short sessions, while an administrative system, for example, has much longer ones. 

Another important part of a user's session is that we have a standardised way of representing the user's identity. For this we use OpenID Connect (OIDC), which standardises identity and to some extent also how sessions and Single Sign-On, as well as Single Logout, are handled.

Martin Altenstedt

Before OIDC was established, many companies, including Facebook and Apple, built their own login solutions based on OAuth 2, and these often had weaknesses. Inventing your own solutions here is simply not a good idea, even if you are a large company.

OIDC is used on top of OAuth 2, i.e. it is still the same basic Code flows for delegating authorization, and with an OIDC solution we have three different types of tokens:

  • Id tokens allow the client to validate the user's authentication and identity

  • Access tokens are used by the client to make calls to the service (often an API)

  • Refresh tokens can be used by the client to request a new access token from the IdP, without the user re-authenticating, thus managing persistent access to the user's resources

Using an SPA implemented according to the BFF pattern as an example, it looks like this:

Since HTTP is stateless and we have browsers as front-end, we create sessions with cookies. Note that all tokens are handled by BFF (backend), i.e. are not available to SPA (JavaScript in frontend).

Id tokens are "one-time" tokens for the client; they are not sent anywhere else and should be valid for the shortest time possible, in practice maybe 5 minutes. Think "authentication response", not token.

An id token contains metadata from the authentication event, e.g. which method was used, the exact time, etc. Unlike access and refresh tokens, id tokens are specified by OIDC. An id token also normally contains some information about the user, such as name, but this is not the main purpose.

Kasper Karlsson

Note that an id token should not be accepted by an API as a valid access token - not even if it is a JWT. This is something we sometimes encounter in our pen tests.

Our access token is often a JWT, or a reference token that can be translated into a JWT. It is used by our API, where it forms the basis for the authorization check of each call.

A refresh token is a long-lived token used to retrieve a new, shorter-lived access token from the IdP. It is not a JWT but a cryptographically secure random string.

Note that the refresh token flow is part of OAuth 2, i.e. it can be used with or without OIDC and sessions to provide persistent access for clients to reach your resources even if you are not active.

The refresh token flow works as follows.

Note that when a refresh token expires, the user must re-authenticate. See more at https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-02#section-6

Now we are faced with the problem of associating these tokens with a user's session. It is important to have control over sessions and a clear set of requirements from the business. How often the user needs to authenticate is an important part of the user experience.

Tobias Ahnoff

In our experience, many UX departments are focused on accessibility, and may prefer not to have a login at all. But security is also confidentiality, privacy and sometimes traceability. For banks, for example, we want to know that the user is present and actively making payments and can even tie activities to them in a legal sense. Here it is clear how important the right balance between all security aspects is for a complete set of requirements.

Using our SPA as an example, we get the following sessions to handle in a solution with OIDC. It is important to understand that the cookie to the BFF, together with the cookie to the IdP form a common session to the system for the user. This requires an interaction between these two cookies and the lifetimes of our tokens.

An OIDC solution gives us two cookies and three tokens to keep track of. The cookies are between the SPA and the BFF, and between the SPA and the IdP. The cookie between the SPA and the IdP gives us Single Sign-On and is often called the SSO cookie.

Session lifetimes are basically controlled by two parameters:

  • Inactivity, so-called "Sliding Window"

  • Maximum lifetime.

Kasper Karlsson

Maximum lifetime is easy to forget, but it matters from a security perspective. An attacker who has managed to steal a session could otherwise use it forever, even after the vulnerability that allowed the session to be stolen has been fixed.

How often a user has to authenticate depends very much on your requirements: a bank might use 15 minutes of inactivity and a 60-minute maximum, while an e-commerce site might allow many weeks of inactivity and a maximum of many months. In the example we show, we have 7 days of inactivity and a 28-day maximum.

In order for the session against the BFF to work as expected, without unexpected re-authentication, the client (BFF) needs to have access to valid access tokens for the user for the lifetime of the session.

Since access tokens are short-lived, a validity for refresh tokens is required that is synchronized with the session, in our example: 7 days sliding, with a maximum of 28 days.

Access tokens should always be short-lived, in our example you can think of 60 minutes.

At the end of the session, there should be no active tokens left on the client, so keep in mind that your last access token may still be valid after the session and refresh token maximums have passed. Minutes usually don't matter in practice, but days do not meet a set of requirements where the user expects their data to be accessible to the client only during the current session.

The goal of a high-security session is that we can trust that the authenticated user is present throughout the session, not just at login time. This also gives us support for reliable traceability.
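The two session limits can be captured in a small check, using the example's 7-day sliding window and 28-day maximum (names and values are illustrative, not from any framework):

```python
from datetime import datetime, timedelta

SLIDING = timedelta(days=7)    # inactivity window from the example
MAXIMUM = timedelta(days=28)   # absolute maximum from the example

def session_is_valid(created: datetime,
                     last_activity: datetime,
                     now: datetime) -> bool:
    """A session stays valid only while BOTH limits hold: the user
    was active within the sliding window, AND the absolute maximum
    since login has not yet passed."""
    return (now - last_activity) <= SLIDING and (now - created) <= MAXIMUM
```

The BFF would run this check on every request and update `last_activity` on success; refresh token lifetimes are then tuned to match the same two limits.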

Martin Altenstedt

A client that receives an access token will have access to my data for the lifetime of that token, regardless of whether I as a person am still there or not. It is therefore important to require re-authentication for extra sensitive operations. Think signing payments.

We want to highlight the importance of clear requirements around sessions, which is often something that is missed and perhaps left to the developer who implements. The following conversation captures many of the issues that arise when implementing SSO solutions.

PO: What happens when a BFF session ends and we make a new call?

DEV: Then we are redirected to the IdP, and if we have a valid SSO session there, we don't need to authenticate again; we come straight back and can immediately create a new BFF session. The user may only notice this as a brief flash in the browser's URL bar.

PO: But if I'm on IdP because I've been inactive for 7 days, don't I need to authenticate again?

DEV: No. If your SSO session is still valid, i.e. longer than 7 days, you don't need to authenticate again.

PO: But then our requirements for how our system should behave are not correct? Didn't my specification say to require a new login after 7 days of inactivity?

DEV: Exactly, to face that, we need an interaction between the SSO and the local session between SPA and BFF. In your case, you need an SSO session that has the same max and sliding as your BFF session.

PO: But what if I'm not the only system using IdP, and there are other systems, with different requirements for inactivity and maximum session lengths? There can be very different requirements between different systems.

DEV: Yes, that could be problematic, different IdP products look different here. Some allow us to configure SSO session per client, but other products may not. The same goes for OIDC support, where many support what is defined in the Core spec, but may not support Session.

Erica Edholm

How sessions are handled can be difficult to test and perhaps something that is often missed. As always, it is important to remember the negative test cases and that security is one of the requirements, e.g. "If I as a user am inactive for X minutes I need to re-authenticate" or "When I log out I am logged out in all applications where I have SSO". Automated testing around this is difficult, often manual testing is required which gives long lead times and is tedious to repeat :-)

It is important to clarify your requirements around sessions and SSO, additional requirements will be added when we strengthen the protection around sessions and refresh tokens in the next section, and look at logout and impersonation.

The choice of IdP product is important, so that it supports your requirements, see more at https://omegapoint.se/artikel-val-av-identity-provider-idp

Secure sessions

To strengthen protection when we use cookies, we need to think about a few things. Basically, our session cookie should only contain information that is necessary for the session.

Kasper Karlsson

We have seen vulnerable systems where sensitive session data is stored in "normal" cookies. An attacker has been able to manipulate these to gain access to other users' accounts, for example, or even switch roles to gain extended privileges in the system.

The session cookie should also be cryptographically secure: either it is encrypted by the BFF (backend), or the BFF has its own session store (Redis is common) and the cookie contains only a cryptographically secure ID. The disadvantage of a session store is that you introduce state; an advantage is a small payload on your calls (an encrypted session otherwise often needs to be split across multiple cookies).

We should also take advantage of the protections offered by browsers, such as HttpOnly and the host prefix "__Host-", so that session cookies are not accessible to JavaScript and the safe use of the Domain, Path and Secure attributes is enforced.
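Put together, a sketch of what the Set-Cookie header for the BFF session cookie could look like (the cookie name is illustrative):

```python
def session_cookie_header(session_id: str) -> str:
    """Set-Cookie value for a BFF session cookie: the __Host- prefix
    forces Secure, forbids a Domain attribute and requires Path=/;
    HttpOnly keeps the cookie away from JavaScript; SameSite acts as
    a complement to other CSRF protections."""
    return f"__Host-session={session_id}; Path=/; Secure; HttpOnly; SameSite=Lax"
```

Note that `session_id` here should be the opaque, cryptographically secure ID discussed above, never sensitive session data itself.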

Because we use cookies, we also need to deal with Cross-Site Request Forgery (CSRF), where an attacker uses your session to perform operations you are not aware of, for example by tricking you into clicking a link that you do not realise points to our system. This works because the browser automatically sends our session cookie in every request to the BFF, for both read and write operations.

Recommended protections from OWASP and others can be divided into three levels and our experience is that the following together provide strong CSRF protection:

  • Browser: SameSite cookie policy Strict or Lax

  • Application: "Double submit cookie" pattern, or equivalent depending on application and framework support

  • User: re-authentication/signing for sensitive operations (or CAPTCHA solutions)

Although SameSite is widely supported, different browsers and versions make different judgments about what is acceptable under the Strict and Lax policies, so it is difficult to know exactly how this protection will behave in practice across browsers over time. Based on our experience and OWASP's recommendations, we should therefore see SameSite as a complement to other protections, such as the double submit cookie.
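A minimal sketch of the double submit cookie pattern (names are ours; most web frameworks provide a hardened implementation of this for you):

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    """Random value the BFF sets both as a cookie (readable by the
    SPA) and expects back in a request header on state-changing
    calls."""
    return secrets.token_urlsafe(32)

def csrf_check(cookie_value: str, header_value: str) -> bool:
    """Double submit: both copies must match. A cross-site attacker
    can make the browser *send* the cookie, but cannot *read* it,
    so it cannot set the matching header."""
    return hmac.compare_digest(cookie_value, header_value)
```

The constant-time comparison is a small hardening detail; the protection itself rests on the same-origin policy keeping the cookie unreadable from other sites.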

Tobias Ahnoff

Also consider the accessibility aspect of security. One problem that several of us who have worked with OIDC in recent years have encountered is that logins fail due to changes in how browsers implement cookie policies. This can be difficult to troubleshoot and monitor, and can be very serious for your business.

Regardless of other CSRF protections, strong re-authentication is a must for highly sensitive operations where we have traceability requirements.

Martin Altenstedt

Remember that XSS in practice bypasses virtually all protections around our session, except perhaps strong re-authentication, so always prioritize protection against XSS when building web applications and build security in multiple layers, with defense in depth!

Secure Refresh tokens

Refresh tokens are very sensitive information and should therefore be kept backend, in an environment we fully control. They should therefore not be available in browsers where they can be accessed via JavaScript, e.g. in case of an XSS attack. 

Another basic protection for refresh tokens is that they should be bound to the client.

For confidential clients, i.e. a backend that we control (e.g. a BFF), this is done through client authentication each time the refresh token is used to obtain a new access token.

For systems with high security requirements, all clients should be confidential. If we have public clients, we should avoid long-lived refresh tokens for these clients. But this is a judgement that needs to be made depending on your requirements picture and risk analysis.

Tobias Ahnoff

A common problem is mobile apps, which are native clients with a need for persistent access. The risk of issuing refresh tokens to them is often deemed acceptable, assuming you do what you can to protect them through secure storage, management and rotation of tokens. Note that OIDC CIBA is becoming established and may be one way to address the problem since, like a BFF, it brings a confidential (backend) client to mobile apps as well.

To strengthen protection, it is recommended by OAuth 2.1 that refresh tokens should be one-time, i.e. rotated on each refresh token request, and that there should be "misuse detection".

However, detecting misuse is easier said than done. In practice, it is very difficult for an IdP to determine whether someone has stolen a token from a public client and is using it in a normal pattern. A confidential client with a session, on the other hand, has a better chance, as it can use its domain knowledge of what constitutes a reasonable usage pattern and do a deeper analysis.
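A sketch of one-time refresh tokens with reuse detection, where presenting an already-spent token revokes the whole token family (a strong sign of theft). This is an in-memory stand-in for state the IdP would persist; names are ours:

```python
import secrets

class RefreshTokenStore:
    """One-time refresh tokens: every refresh invalidates the
    presented token, and reuse of a spent token revokes the whole
    token family, since either the client or the thief is now
    holding a token that should never be seen again."""

    def __init__(self) -> None:
        self.active: dict[str, str] = {}   # live token -> family id
        self.spent: dict[str, str] = {}    # used token -> family id
        self.revoked_families: set[str] = set()

    def issue(self, family: str) -> str:
        token = secrets.token_urlsafe(32)  # crypto-random string, not a JWT
        self.active[token] = family
        return token

    def refresh(self, token: str):
        if token in self.spent:            # reuse detected: likely theft
            self.revoked_families.add(self.spent[token])
            return None
        family = self.active.pop(token, None)
        if family is None or family in self.revoked_families:
            return None
        self.spent[token] = family
        return self.issue(family)          # rotate: hand out a fresh token
```

Whether the legitimate client or the attacker redeems the stolen token first, the second presentation trips the detector and the whole session's tokens die.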

We would also like to reiterate that for web applications you should prioritise protection against XSS. You may have done everything right with how tokens and sessions are handled, but if you have an XSS vulnerability, an attacker can bypass virtually all of these protections.

Kasper Karlsson

In penetration testing of web-based systems, we almost always find XSS vulnerabilities. An attacker can use such vulnerabilities to steal tokens - long-lived refresh tokens, for example, can be of great value to the attacker.

We will return to XSS and protection against it in the article on Browsers at https://omegapoint.se/forsvar-pa-djupet-webblsare-del-6-av-7

Single logout

For a secure system and a good user experience, it is important that the user can end their session in a controlled way. We cannot assume that users log out and base our security on that, but it is still our responsibility as a client to clean up properly after a user who actively logs out.

An OIDC solution with SSO requires both local logout (1) and logout to the IdP (2).

If you want to have Single Logout (3), you also need to log out to the other clients that have SSO (against the same IdP). This is then done with OIDC Back-channel logout, where the IdP notifies all other clients registered for Single-logout.

Note that step 3 requires the client (BFF) to keep session state in the backend, e.g. via a Redis cache. Even without Single Logout, we need to keep session state in the backend to have full control over our sessions; whether the state dependency is justified depends on your system requirements.

Martin Altenstedt

One question we often get is how you, as an administrator, log someone else out or lock an account. When we have full control over our sessions by managing state in the backend, we can simply delete the user's session in our BFF and IdP and trigger a Single Logout.

Tobias Ahnoff

SSO is easy, Single Logout is hard. Getting a good consistent user experience often requires more than you think, e.g. there is a risk that the choice of IdP can limit you in what is possible when logging out.

If all sessions are to behave as one session, you might want to consider whether there should be just one application and one local session, to avoid the Single Logout problem altogether. "Complexity is the worst enemy of security" is worth remembering: you can't secure what you don't understand :-)

Impersonation

In many systems, we have a need to act like someone else. For example, in customer support, we may want to log in as a user and see what she sees, or we may want to perform a task for her.

For this there is a flow called "token exchange", which means that we exchange our own token for a new one, without losing track of who is acting.

In our example, "admin" has authenticated via e.g. a Code flow so that the client has a valid access token for "admin".

The client provides an interface so that the "admin" can choose who to impersonate, in this example it is the "user". The client calls the token-exchange endpoint and sends the access token for the "admin", as well as the "user" who is the target of impersonation (1).

The IdP now needs to do a permission check to determine whether "admin" is allowed to impersonate "user". The IdP then returns a new access token where sub is "user", with an act claim whose sub is "admin" (2).

{ 
  "aud":"https://api.example.com",
  "iss":"https://issuer.example.com", 
  "exp":1443904177, 
  "nbf":1443904077, 
  "sub":"user@example.com", 
  "act": { 
    "sub":"admin@example.com" 
  } 
}

Note that the API can now audit-log that "admin" is the source of the operation, even though it is performed with the same permissions as "user".

In the specification (https://datatracker.ietf.org/doc/html/rfc8693) this is referred to as delegation, as opposed to impersonation, where you cannot tell who is actually acting. That would be the case in this example if the act claim were not included.
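As a sketch, an API's audit logging could extract both parties from claims like the ones above (the function name is ours):

```python
def audit_identity(claims: dict) -> str:
    """What to write in the audit log: the effective subject, plus
    the actor from the act claim when the call is delegated
    (RFC 8693)."""
    actor = claims.get("act", {}).get("sub")
    if actor:
        return f"{claims['sub']} (acting party: {actor})"
    return claims["sub"]
```

Applied to the example token, the log line would attribute the operation to user@example.com with admin@example.com as the acting party.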

Different IdP products differ in how they support this and similar scenarios, check your product documentation to see how it works for your solution.

Summary

The first three articles give you what you need to build strong and fine-grained access control for your system.

In the first article, we reviewed how we can model identity using the concepts and tools provided by OAuth 2 and OpenID Connect.

Article 2 discussed how we should think when deciding what information should be in our access token, and how we move from tokens to rights and permissions control in our APIs.

In this article, we have gone through how to get an access token for different types of clients and how to manage our sessions, as well as SSO and Single logout.

The next article describes what you need to build a secure API with strong permission control.


Read the other parts of the article series