How to get access control right by following the right practices

Long read

How to design and build access control as you develop and deploy your software applications across your organization.

In nearly all applications, what you can see and what you can do depends entirely on who you claim to be and how that claim is verified. Without an identity, you might only be able to see the promotional material. If you are logged in as a paying user, you can interact with the application. If you have authenticated with multi-factor authentication into an account administrator role, you can control other users, but only within your account. Getting access control right is essential. Get it wrong and you might fail to charge a user for your service, or you might grant them access to something they shouldn’t see and end up as front-page news as a result.

In this article, we’ll explore some of the practices you can follow to set yourself up for success and how to manage this as you develop and deploy your applications.

Practice One: Do not roll your own identity

Always stick to standards. The existing authentication and authorization standards out there have been analyzed, reviewed, hacked and beaten on by experts from all over the world. Whatever you build yourself has not had that level of scrutiny, and is therefore a much higher risk than taking an existing solution and applying it to your use case. The most common objection to using a standard is that it covers scenarios the application doesn’t need today. Supporting these scenarios can add complexity; however, having predefined, standard solutions available allows you to react rapidly to new requirements without modifying core authorization logic.

Open standards such as OAuth 2.0 and OpenID Connect give you a fantastic basis from which to build your identity handling. They are well supported by both open source software and vendor libraries, making them easy to adopt regardless of your technology stack.

Practice Two: Architect for identity

Understanding how the different parts of your application architecture view users is crucial. As we break down our old monoliths and move to a more microservice-driven approach, we should be thinking about how the perception of user identity changes within the application. Does the data layer need to know the user who initiated the request? If so, what information is needed, at what level of detail, and what source can that service trust to provide that information reliably?

In this model, identity becomes a service in its own right. Applications can defer the verification of users to the service and downstream systems can depend on the assertions of the service to grant access to resources.

Going beyond the user identity, it is also important to understand how the services can securely identify each other. By identifying the component, we can understand the context in which a user is making a request, which provides valuable information about the suitability of that request. For example, my user account may have administrative privileges, but I shouldn’t be making requests to delete user records through the webshop. This gives a combinatorial access model in which requests are evaluated against both the user making them and the context of the application they are using.

Practice Three: Don’t test what you don’t control

By drawing a clear boundary between the identity of the user and the application, we create an excellent opportunity to build better test objects that define how the different services view the same user identity. The basic definition of a unit test is a test within the smallest possible functional boundary. Wherever a test reaches beyond that boundary, we have a dependency. We should capture and validate the expected behavior of that dependency and use it to build a mock that represents it during our tests. That validation is an integration test in its own right; if the behavior of the dependency changes, we may want to fail that test and all tests which rely on that behavior.

 

Strictly enforcing the use of identity mocks and stubs will prevent many of the woes of testing identity-driven applications. You can focus your efforts on the logic of your service rather than needing to build sessions for a user, handle MFA scenarios, or manage users with federated profiles.

 

However, during development, you’re going to reach that boundary and end up needing to reach beyond it to get a test condition to work. These situations happen, but it’s important to have a pattern that addresses them in a way that doesn’t block your other tests and doesn’t stress what is outside your boundary. A simple pattern to follow is: define, initialize, use, recycle. First, define these tests as externally dependent; your test framework of choice should have a mechanism to do this. This constrains the conditions in which the tests are fired and makes it simple to modify the external target. When initializing, you may need access to credentials or configuration; wherever possible, initialize your external resource only once. Now that you have access to your external dependency, re-use that reference between any tests which need it. Finally, when you’re done, clean up the test data and release any resources. There is nothing more frustrating than tests that fail because the user “TestAccount001” already exists on the server.

Practice Four: Define your identity as code

When we push code to the deployment pipeline, it goes through a number of steps, each of which handles tests differently.

The developer might have started out working on their isolated machine, running mostly unit tests with much of the wider environment mocked out. When a change is committed, it gets deployed to the continuous integration environment, which might use shared resources and run the full test cycle. When that passes, it’s released to QA, which has a production-like environment for more complex testing. Finally, it reaches production, where it must handle scaling and real users.

 

There are at least four environments there that need to be kept aligned. What happens when the password complexity policy is changed in one, or MFA is enabled? Does that change flow down so the development environments are set up to match the new policy? Will tests fail the first time they encounter a stricter policy, or, worse, will your users catch it in production?

 

Now that we are representing identity as a service within our infrastructure, we should be able to define that dependency and its behavior as code. By doing so we can ensure that everyone has the same view of the identity service and that any changes are applied consistently. I’m going to use Terraform as my example here; other tools are available, but Terraform sits in that vendor-neutral orchestration sweet spot. For the identity service, I’ll be using Okta to provide custom OAuth authorization servers.

 

We can define our connection to the service; this tells the script what to communicate with and which credentials to use. These values can be provided by variables, allowing you to target different environments at each stage of your pipeline without changing the script:

 

provider "okta" {

   org_name = “babbage”

   base_url = “okta.com”

   api_token = “isthisarealtoken”

}
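
As a sketch of that variable-driven approach (the variable names here are illustrative), each stage of the pipeline can supply its own values through a tfvars file or TF_VAR_ environment variables:

variable "okta_org_name" {
  description = "The Okta organization to target for this environment"
}

variable "okta_api_token" {
  description = "API token for Terraform; supply via TF_VAR_okta_api_token rather than committing it"
}

provider "okta" {
  org_name  = "${var.okta_org_name}"
  base_url  = "okta.com"
  api_token = "${var.okta_api_token}"
}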

 

We can register our applications dynamically to get back the client ID and client secret they need. This also allows us to enforce the OAuth grant types and response types we want to allow for each.

 

resource "okta_app_oauth" ”engine_client" {

   label = “Engine Client”

   type = "web”

   grant_types = [“authorization_code”]

   redirect_uris = [“${var.client_callback}”]

   response_types = ["code"]

}

 

resource "okta_app_oauth" ”engine_api" {

   label = “Engine API”

   type = ”service”

   grant_types = [“client_credentials”]

}
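
The okta_app_oauth resource exports the generated credentials as attributes (assuming a provider version that exposes client_id and client_secret), so we can surface them as outputs for the rest of the pipeline to consume:

output "engine_client_id" {
  value = "${okta_app_oauth.engine_client.client_id}"
}

output "engine_client_secret" {
  # Marked sensitive so Terraform keeps the secret out of its console output.
  value     = "${okta_app_oauth.engine_client.client_secret}"
  sensitive = true
}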


Next, define an OAuth authorization server and its custom scopes to provide an authorization endpoint for our applications.

 

resource "okta_auth_server" “analytical_engine” {

  audiences = [“babbage.local”]

  description = “General purpose computing.”

  name = “Analytical Engine API”

}

resource “okta_auth_server_scope” “tabulate” {

  description = “tabulate logarithm”

  name = “tabulate:perform”

  auth_server_id = “${okta_auth_server.analytical_engine.id}”

}
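
Our applications will need the issuer URI of this authorization server to request and validate tokens. Assuming the resource exports an issuer attribute (as recent provider versions do), we can publish it the same way:

output "auth_server_issuer" {
  value = "${okta_auth_server.analytical_engine.issuer}"
}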

 

Now that we have identity and authorization defined for the applications, we need to model our users. We can extend the user schema to hold any data we might require as custom attributes.



resource "okta_user_schema" "role_extension" {

 index = "analytical_engine_role"

 title = "Analytical Engine Role"

 type = "string"

 master = "PROFILE_MASTER"

}
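
To illustrate how a user carries that attribute, here is a hypothetical user resource; note that the exact mechanism for setting custom attributes (here a JSON-encoded custom_profile_attributes string) varies between provider versions:

resource "okta_user" "ada" {
  first_name = "Ada"
  last_name  = "Lovelace"
  login      = "ada@babbage.local"
  email      = "ada@babbage.local"

  # Assumption: this provider version accepts custom attributes as a JSON string.
  custom_profile_attributes = "{\"analytical_engine_role\":\"operator\"}"
}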

 

We can then expose our schema value in the user’s OAuth access token by including it as a claim.

 

resource "okta_auth_server_claim" "role" {

name = "User_Role"

status = "ACTIVE"

claim_type = "RESOURCE"

value_type = "EXPRESSION"

value = "user.analytical_engine_role"

auth_server_id = "${okta_auth_server.analytical_engine.id}"

}

 

If we were now to change the user model to add additional fields, or to change the identifier we return for the user’s role, we would be able to validate that change with terraform plan to ensure our dependency is correct. Once we commit the change, any environment using this definition of our identity service can pick up the configuration and know exactly what has changed.

 

By using these techniques we can ensure that all the components of our application have a secure foundation from which to build a consistent understanding of the user’s identity and how it can be validated, while at the same time ensuring that our developers are working in an environment which reflects the security stance of production without forcing them to use production-like restrictions in development.

 

Want to find out more? Check out developer.okta.com for help building better identity and access management into your applications.

Written by Andy March, Platform Specialist at Okta Inc. 

 
