Using the OAuth 2.0 device flow to authenticate users in desktop apps

Over the last few years, OpenID Connect has become one of the most common ways to authenticate users in a web application. But if you want to use it in a desktop application, it can be a little awkward…

Authorization code flow

OpenID Connect is an authentication layer built on top of OAuth 2.0, which means that you have to use one of the OAuth 2.0 authorization flows. A few years ago, there were basically two possible flows that you could use in a desktop client application to authenticate a user:

The password flow is pretty easy to use (basically, just exchange the user’s login and password for a token), but it requires that the client app is highly trusted, since it gets to manipulate the user’s credentials directly. This flow is now disallowed by OAuth 2.0 Security Best Current Practice.

The authorization code flow is a bit more complex, but has the advantage that the client application never sees the user’s password. The problem is that it requires web navigation with a redirection to the client application, which isn’t very practical in a desktop app. There are ways to achieve this, but none of them is perfect. Here are two common ones:

  • Open the authorization page in a WebView, and intercept the navigation to the redirect URI to get the authorization code. Not great, because the app could get the credentials from the WebView (at least on some platforms), and it requires that the WebView supports intercepting the navigation (probably not possible on some platforms).
  • Open the authorization page in the default web browser, and use an application protocol (e.g. myapp://auth) associated with the client application for the redirect URI. Unfortunately, a recent Chrome update made this approach impractical, because it always prompts the user to open the URL in the client application.

In addition, in order to protect against certain attack vectors, it’s recommended to use the PKCE extension when using the authorization code grant, which makes the implementation even more complex.

Finally, many identity providers require that the client authenticates with its client secret when calling the token endpoint, even though it’s not required by the spec (it’s only required for confidential clients). This is problematic, since the client app will probably be installed on many machines, and is definitely not a confidential client. The user could easily extract the client secret, which is therefore no longer secret.

An easier way: device flow

Enter device flow (or, more formally, device authorization grant). Device flow is a relatively recent addition to OAuth 2.0 (the first draft was published in 2016), and was designed for connected devices that don’t have a browser or have limited user input capabilities. How would you authenticate on such a device if you don’t have a keyboard? Well, it’s easy: do it on another device! Basically, when you need to authenticate, the device will display a URL and a code (it could also display a QR code to avoid having to copy the URL), and start polling the identity provider to ask if authentication is complete. You navigate to the URL in the browser on your phone or computer, log in when prompted to, and enter the code. When you’re done, the next time the device polls the IdP, it will receive a token: the flow is complete. The Azure AD documentation has a nice sequence diagram that helps understand the flow.

When you think about it, this approach is quite simple, and more straightforward than the more widely used redirection-based flows (authorization code and implicit flow). But what does it have to do with desktop apps, you ask? Well, just because it was designed for input-constrained devices doesn’t mean you can’t use it on a full-fledged computer. As discussed earlier, the redirection-based flows are impractical to use in non-web applications; the device flow doesn’t have this problem.

In practice, the client application can directly open the authentication page in the browser, with the code as a query parameter, so the user doesn’t need to copy them. The user just needs to sign in with the IdP, give their consent for the application, and it’s done. Of course, if the user is already signed in with the IdP and has already given their consent, the flow completes immediately.

The device flow is not very commonly used in desktop apps yet, but you can see it in action in the Azure CLI, when you do az login.

A simple implementation

OK, this post has been a little abstract so far, so let’s build something! We’re going to create a simple console app that authenticates a user using the device flow.

In this example, I use Azure AD as the identity provider, because it’s easy and doesn’t require any setup (of course, you could also do this with your IdP of choice, like Auth0, Okta, a custom IdP based on IdentityServer, etc.). Head to the Azure Portal, in the Azure Active Directory blade, App registrations tab. Create a new registration, give it any name you like, and select "Accounts in this organizational directory only (Default directory only – Single tenant)" for the Supported Account Types (it would also work in multi-tenant mode, of course, but let’s keep things simple for now). Also enter a redirect URI for a public client. It shouldn’t be necessary for the device flow, and it won’t actually be used, but for some reason, authentication will fail if it’s not defined… one of Azure AD’s quirks, I guess.

App registration

Now, go to the Authentication tab of the app, in the Advanced settings section, and set Treat application as a public client to Yes.

Public client

And that’s all for the app registration part. Just take note of these values in the app’s Overview tab:

  • Application ID (client ID in OAuth terminology)
  • Directory ID (a.k.a. tenant ID; this is your Azure AD tenant)

Now, in our program, the first step is to issue a request to the device code endpoint to start the authorization flow. The OpenID Connect discovery document on Azure AD is incomplete and doesn’t mention the device code endpoint, but it can be found in the documentation. We need to send the client ID of our application and the requested scopes. In this case, we use openid, profile and offline_access (to get a refresh token), but in a real-world scenario you’ll probably need an API scope as well.

private const string TenantId = "<your tenant id>";
private const string ClientId = "<your client id>";

private static async Task<DeviceAuthorizationResponse> StartDeviceFlowAsync(HttpClient client)
{
    string deviceEndpoint = $"https://login.microsoftonline.com/{TenantId}/oauth2/v2.0/devicecode";
    var request = new HttpRequestMessage(HttpMethod.Post, deviceEndpoint)
    {
        Content = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["client_id"] = ClientId,
            ["scope"] = "openid profile offline_access"
        })
    };
    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();
    var json = await response.Content.ReadAsStringAsync();
    return JsonSerializer.Deserialize<DeviceAuthorizationResponse>(json);
}

private class DeviceAuthorizationResponse
{
    [JsonPropertyName("device_code")]
    public string DeviceCode { get; set; }

    [JsonPropertyName("user_code")]
    public string UserCode { get; set; }

    [JsonPropertyName("verification_uri")]
    public string VerificationUri { get; set; }

    [JsonPropertyName("expires_in")]
    public int ExpiresIn { get; set; }

    [JsonPropertyName("interval")]
    public int Interval { get; set; }
}
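
For reference, the device authorization response looks something like this (illustrative values; the shape is defined by the device flow spec, and Azure AD additionally returns a message property with instructions for the user):

{
  "device_code": "GmRhmhcxhwAzkoEqiMEg...",
  "user_code": "FJC3TKPZL",
  "verification_uri": "https://microsoft.com/devicelogin",
  "expires_in": 900,
  "interval": 5
}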

Let’s call this method and open the verification_uri from the response in the browser. The user will need to enter the user_code in the authorization page.

using var client = new HttpClient();
var authorizationResponse = await StartDeviceFlowAsync(client);
Console.WriteLine("Please visit this URL: " + authorizationResponse.VerificationUri);
Console.WriteLine("And enter the following code: " + authorizationResponse.UserCode);
OpenWebPage(authorizationResponse.VerificationUri);
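
The OpenWebPage helper isn’t shown here; a minimal sketch could look like this (using System.Diagnostics; UseShellExecute lets the OS pick the default browser on Windows, on Linux you might need to shell out to xdg-open instead):

private static void OpenWebPage(string url)
{
    // Let the OS open the URL with the default browser
    var psi = new ProcessStartInfo
    {
        FileName = url,
        UseShellExecute = true
    };
    Process.Start(psi);
}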

This opens the following page:

Enter user code

Note: the specs for the device flow mention an optional verification_uri_complete property in the authorization response, which includes the user_code. Unfortunately, this is not supported by Azure AD, so the user has to enter the code manually.

Now, while the user is entering the code and logging in, we start polling the IdP to get a token. We need to specify urn:ietf:params:oauth:grant-type:device_code as the grant_type, and provide the device_code from the authorization response.

var tokenResponse = await GetTokenAsync(client, authorizationResponse);
Console.WriteLine("Access token: ");
Console.WriteLine(tokenResponse.AccessToken);
Console.WriteLine("ID token: ");
Console.WriteLine(tokenResponse.IdToken);
Console.WriteLine("refresh token: ");
Console.WriteLine(tokenResponse.IdToken);

...

private static async Task<TokenResponse> GetTokenAsync(HttpClient client, DeviceAuthorizationResponse authResponse)
{
    string tokenEndpoint = $"https://login.microsoftonline.com/{TenantId}/oauth2/v2.0/token";

    // Poll until we get a valid token response or a fatal error
    int pollingDelay = authResponse.Interval;
    while (true)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, tokenEndpoint)
        {
            Content = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "urn:ietf:params:oauth:grant-type:device_code",
                ["device_code"] = authResponse.DeviceCode,
                ["client_id"] = ClientId
            })
        };
        var response = await client.SendAsync(request);
        var json = await response.Content.ReadAsStringAsync();
        if (response.IsSuccessStatusCode)
        {
            return JsonSerializer.Deserialize<TokenResponse>(json);
        }
        else
        {
            var errorResponse = JsonSerializer.Deserialize<TokenErrorResponse>(json);
            switch(errorResponse.Error)
            {
                case "authorization_pending":
                    // Not complete yet, wait and try again later
                    break;
                case "slow_down":
                    // Not complete yet, and we should slow down the polling
                    pollingDelay += 5;                            
                    break;
                default:
                    // Some other error, nothing we can do but throw
                    throw new Exception(
                        $"Authorization failed: {errorResponse.Error} - {errorResponse.ErrorDescription}");
            }

            await Task.Delay(TimeSpan.FromSeconds(pollingDelay));
        }
    }
}

private class TokenErrorResponse
{
    [JsonPropertyName("error")]
    public string Error { get; set; }

    [JsonPropertyName("error_description")]
    public string ErrorDescription { get; set; }
}

private class TokenResponse
{
    [JsonPropertyName("access_token")]
    public string AccessToken { get; set; }

    [JsonPropertyName("id_token")]
    public string IdToken { get; set; }

    [JsonPropertyName("refresh_token")]
    public string RefreshToken { get; set; }

    [JsonPropertyName("token_type")]
    public string TokenType { get; set; }

    [JsonPropertyName("expires_in")]
    public int ExpiresIn { get; set; }

    [JsonPropertyName("scope")]
    public string Scope { get; set; }
}

When the user completes the login process in the browser, the next call to the token endpoint returns an access_token, id_token and refresh_token (if you requested the offline_access scope).

When the access token expires, you can use the refresh token to get a new one, as described in the specs.
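
Here’s a rough sketch of what that request could look like, reusing the constants and TokenResponse class from above (this is the standard refresh_token grant from the OAuth 2.0 spec, not something specific to this sample; the scopes to request again are up to you):

private static async Task<TokenResponse> RefreshTokenAsync(HttpClient client, string refreshToken)
{
    string tokenEndpoint = $"https://login.microsoftonline.com/{TenantId}/oauth2/v2.0/token";
    var request = new HttpRequestMessage(HttpMethod.Post, tokenEndpoint)
    {
        Content = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "refresh_token",
            ["refresh_token"] = refreshToken,
            ["client_id"] = ClientId,
            ["scope"] = "openid profile offline_access"
        })
    };
    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();
    var json = await response.Content.ReadAsStringAsync();
    return JsonSerializer.Deserialize<TokenResponse>(json);
}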

Conclusion

As you can see, the device flow is pretty easy to implement; it’s quite straightforward, with no redirection mechanism. Its simplicity also makes it quite secure, with a very small attack surface. In my opinion, it’s the ideal flow for desktop or console applications.

You can find the full code for this article in this repository.

Lazily resolving services to fix circular dependencies in .NET Core

The problem with circular dependencies

When building an application, good design dictates that you should avoid circular dependencies between your services. A circular dependency is when some components depend on each other, directly or indirectly, e.g. A depends on B which depends on C which depends on A:

Circular dependency

It is generally agreed that this should be avoided; I won’t go into the details of the conceptual and theoretical reasons, because there are plenty of resources about it on the web.

But circular dependencies also have a very concrete effect. If you accidentally introduce a circular dependency in a .NET Core app that uses dependency injection, you will know immediately, because the resolution of a component involved in the dependency cycle will fail. For instance, if you have these components:

  • A, which implements interface IA and depends on IB
  • B, which implements interface IB and depends on IC
  • C, which implements interface IC and depends on IA

When you try to resolve IA, the dependency injection container will try to create an instance of A; to do that, it will need to resolve IB, so it will try to create an instance of B; to do that, it will need to resolve IC, so it will try to create an instance of C; and to do that, it will need to resolve IA… which was already being resolved. Here you have it: circular dependency. The resolution of IA cannot complete; in fact, .NET Core’s built-in IoC container detects this, and throws a helpful exception:

System.InvalidOperationException: A circular dependency was detected for the service of type ‘Demo.IA’.

So, clearly, this situation should be avoided.

Workaround

However, when a real-world app reaches a certain level of complexity, it can sometimes be difficult to avoid. One day, you innocently add a dependency to a service, and things blow up in your face. So you’re faced with a choice: refactor a significant part of your app to avoid the dependency cycle, or "cheat".

While, ideally, you would opt for a refactoring, it’s not always practical. Because of deadlines, you might not have the time to refactor your code and thoroughly test it for regressions.

Fortunately, if you’re willing to incur a bit of technical debt, there’s a simple workaround that works in most cases (what I referred to as "cheating" earlier). The trick is to resolve one of the dependencies in the cycle lazily, i.e. resolve it at the last possible moment, when you actually need to use it.

One way to do that is to inject the IServiceProvider into your class, and use services.GetRequiredService<T>() when you need to use T. For instance, the C class I mentioned earlier might initially look like this:

class C : IC
{
    private readonly IA _a;

    public C(IA a)
    {
        _a = a;
    }

    public void Bar()
    {
        ...
        _a.Foo();
        ...
    }
}

To avoid the dependency cycle, you could rewrite it like this:

class C : IC
{
    private readonly IServiceProvider _services;

    public C(IServiceProvider services)
    {
        _services = services;
    }

    public void Bar()
    {
        ...
        var a = _services.GetRequiredService<IA>();
        a.Foo();
        ...
    }
}

Because it’s no longer necessary to resolve IA while C is being constructed, the cycle is broken (at least during construction), and the problem fixed.

However, I don’t really like this approach, because it smells of the Service Locator pattern, which is a known anti-pattern. I see two main issues with it:

  • It makes your class depend explicitly on the service provider. This is bad, because your class shouldn’t have to know anything about the dependency injection mechanism being used; after all, the app could be using Pure DI, i.e. not use an IoC container at all.
  • It hides the dependencies of your class. Instead of having them all clearly declared at the constructor level, you now have just an IServiceProvider which doesn’t tell you anything about the actual dependencies. You have to scan the code of the class to find them.

A cleaner workaround

The approach I actually use in this situation takes advantage of the Lazy<T> class. You will need the following extension method and class:

public static class LazyResolutionExtensions
{
    public static IServiceCollection AddLazyResolution(this IServiceCollection services)
    {
        return services.AddTransient(
            typeof(Lazy<>),
            typeof(LazilyResolved<>));
    }

    private class LazilyResolved<T> : Lazy<T>
    {
        public LazilyResolved(IServiceProvider serviceProvider)
            : base(serviceProvider.GetRequiredService<T>)
        {
        }
    }
}

Call this new method on your service collection during service registration:

services.AddLazyResolution();

This enables the resolution of a Lazy<T> which will lazily resolve a T from the service provider.

In the class that depends on IA, inject Lazy<IA> instead. When you need to use IA, just access the lazy’s value:

class C : IC
{
    private readonly Lazy<IA> _a;

    public C(Lazy<IA> a)
    {
        _a = a;
    }

    public void Bar()
    {
        ...
        _a.Value.Foo();
        ...
    }
}

Note: DO NOT access the value in the constructor, just store the Lazy itself. Accessing the value in the constructor would eagerly resolve IA, which would cause the same problem we were trying to solve.

This solution isn’t perfect, but it solves the initial problem without too much hassle, and the dependencies are still clearly declared in the constructor.

Handling query string parameters with no value in ASP.NET Core

Query strings are typically made of a sequence of key-value pairs, like ?foo=hello&bar=world…. However, if you look at RFC 3986, you can see that query strings are very loosely specified. It mentions that

query components are often used to carry identifying information in the form of "key=value" pairs

But it’s just an observation, not a rule (RFCs usually have very specific wording for rules, with words like MUST, SHOULD, etc.). So basically, a query string can be almost anything; it’s not standardized. The use of key-value pairs separated by & is just a convention, not a requirement.

And as it happens, it’s not uncommon to see URLs with query strings like this: ?foo, i.e. a key without a value. How it should be interpreted is entirely implementation-dependent, but in most cases, it probably means the same as ?foo=true: the presence of the parameter is interpreted as an implicit true value.

Unfortunately, in ASP.NET Core MVC, there’s no built-in support for this form of query string. If you have a controller action like this:

[HttpGet("search")]
public IActionResult Search(
    [FromQuery] string term,
    [FromQuery] bool ignoreCase)
{
    …
}

The default model binder expects the ignoreCase parameter to be specified with an explicit true or false value, e.g. ignoreCase=true. If you omit the value, it will be interpreted as empty, and the model binding will fail:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "traceId": "|53613c25-4767e032425dfb92.",
  "errors": {
    "ignoreCase": [
      "The value '' is invalid."
    ]
  }
}

It’s not a very big issue, but it’s annoying… So, let’s see what we can do about it!

By default, a boolean parameter is bound using SimpleTypeModelBinder, which is used for most primitive types. This model binder uses the TypeConverter of the target type to convert a string value to the target type. In this case, the converter is a BooleanConverter, which doesn’t recognize an empty value…

So we need to create our own model binder, which will interpret the presence of a key with no value as an implicit true:

class BooleanModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        var result = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (result == ValueProviderResult.None)
        {
            // Parameter is missing, interpret as false
            bindingContext.Result = ModelBindingResult.Success(false);
        }
        else
        {
            bindingContext.ModelState.SetModelValue(bindingContext.ModelName, result);
            var rawValue = result.FirstValue;
            if (string.IsNullOrEmpty(rawValue))
            {
                // Value is empty, interpret as true
                bindingContext.Result = ModelBindingResult.Success(true);
            }
            else if (bool.TryParse(rawValue, out var boolValue))
            {
                // Value is a valid boolean, use that value
                bindingContext.Result = ModelBindingResult.Success(boolValue);
            }
            else
            {
                // Value is something else, fail
                bindingContext.ModelState.TryAddModelError(
                    bindingContext.ModelName,
                    "Value must be false, true, or empty.");
            }
        }

        return Task.CompletedTask;
    }
}

In order to use this model binder, we also need a model binder provider:

class BooleanModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context.Metadata.ModelType == typeof(bool))
        {
            return new BooleanModelBinder();
        }

        return null;
    }
}

It will return our model binder if the target type is bool. Now we just need to add this provider to the list of model binder providers:

// In Startup.ConfigureServices
services.AddControllers(options =>
{
    options.ModelBinderProviders.Insert(
        0, new BooleanModelBinderProvider());
});

Note: This code is for an ASP.NET Core 3 Web API project.

  • If your project also has views or pages, replace AddControllers with AddControllersWithViews or AddRazorPages, as appropriate.
  • If you’re using ASP.NET Core 2, replace AddControllers with AddMvc.

Note that we need to insert our new model binder provider at the beginning of the list. If we add it at the end, another provider will match first, and our provider won’t even be called.

And that’s it: you should now be able to call your endpoint with a query string like ?term=foo&ignoreCase, without explicitly specifying true as the value of ignoreCase.

A possible improvement to this binder would be to also accept 0 or 1 as valid values for boolean parameters. I’ll leave that as an exercise to you!

ASP.NET Core: when environments are not enough, use sub-environments!

Out of the box, ASP.NET Core has the concept of "environments", which allows your app to use different settings based on which environment it’s running in. For instance, you can have Development/Staging/Production environments, each with its own settings file, and a common settings file shared by all environments:

  • appsettings.json: global settings
  • appsettings.Development.json: settings specific to the Development environment
  • appsettings.Staging.json: settings specific to the Staging environment
  • appsettings.Production.json: settings specific to the Production environment

With the default configuration, environment-specific settings just override global settings, so you don’t have to specify unchanged settings in every environment if they’re already specified in the global settings file.

Of course, you can have environments with any name you like; Development/Staging/Production is just a convention.

You can specify which environment to use via the ASPNETCORE_ENVIRONMENT environment variable, or via the --environment command line switch. When you work in Visual Studio, you typically do this in a launch profile in Properties/launchSettings.json.

Limitations

This feature is quite handy, but sometimes, it’s not enough. Even in a given environment, you might need different settings to test different scenarios.

As a concrete example, I develop a solution that consists (among other things) of a web API and an authentication server. The API authenticates users with JWT bearer tokens provided by the authentication server. Most of the time, when I work on the API, I don’t need to make changes to the authentication server, and I’m perfectly happy to use the one that’s deployed in the development environment in Azure. But when I do need to make changes to the authentication server, I have to modify the API settings so that it uses the local auth server instead. And I have to be careful not to commit that change, to avoid breaking the development instance in Azure. It’s a minor issue, but it’s annoying…

A possible solution would be to create a new "DevelopmentWithLocalAuth" environment, with its own settings file. But the settings would be the same as in the Development environment, with the only change being the auth server URL. I hate to have multiple copies of the same thing, because it’s a pain to keep them in sync. What I really want is a way to use the settings of the Development environment, and just override what I need, without touching the Development environment settings.

Enter "sub-environments"

It’s not an actual feature, it’s just a name I made up. But the point is that you can easily introduce another "level" of configuration settings that just override some settings of the "parent" environment.

For instance, in my scenario, I want to introduce an appsettings.Development.LocalAuth.json file that inherits the settings of the Development environment and just overrides the auth server URL:

{
    "Authentication": {
        "Authority": "https://localhost:6001"
    }
}

The way to do that is to add the new file as a configuration source when building the host in Program.cs:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, builder) =>
        {
            string subenv = context.Configuration["SubEnvironment"];
            if (!string.IsNullOrEmpty(subenv))
            {
                var env = context.HostingEnvironment;
                builder.AddJsonFile($"appsettings.{env.EnvironmentName}.{subenv}.json", optional: true, reloadOnChange: true);
            }
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

(This code is for ASP.NET Core 3.0, but the same applies if you use ASP.NET Core 2.0 with WebHostBuilder instead of HostBuilder.)

The magic happens in the call to ConfigureAppConfiguration. It adds a new JSON file whose name depends on the environment and sub-environment. Since this configuration source is added after the existing ones, it will override the settings provided by previous sources.

The name of the sub-environment is retrieved from the host configuration, which itself is based on environment variables starting with ASPNETCORE_ and command line arguments. So, to specify that you want the "LocalAuth" sub-environment, you need to set the ASPNETCORE_SUBENVIRONMENT environment variable to "LocalAuth".
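
For instance, in Visual Studio you could define a dedicated launch profile in Properties/launchSettings.json (a minimal sketch; the profile name is just illustrative):

{
  "profiles": {
    "MyApi (local auth)": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_SUBENVIRONMENT": "LocalAuth"
      }
    }
  }
}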

And that’s it! With this, you can refine existing environments for specific scenarios.

Note: Since the new configuration source is added last, it will override ALL previous configuration sources, not just the default appsettings.json files. The default host builder adds user secrets, environment variables, and command line arguments after the JSON files, so those will be overridden as well by the sub-environment settings. This is less than ideal, but probably not a major issue for most scenarios. If it’s a concern, the fix is to insert the sub-environment config source after the existing JSON sources, but before the user secrets source. It makes the code a bit more involved, but it’s doable:

        ...
        .ConfigureAppConfiguration((context, builder) =>
        {
            string subenv = context.Configuration["SubEnvironment"];
            if (!string.IsNullOrEmpty(subenv))
            {
                var env = context.HostingEnvironment;
                var newSource = new JsonConfigurationSource
                {
                    Path = $"appsettings.{env.EnvironmentName}.{subenv}.json",
                    Optional = true,
                    ReloadOnChange = true
                };
                newSource.ResolveFileProvider();

                var lastJsonConfigSource = builder.Sources
                    .OfType<JsonConfigurationSource>()
                    .LastOrDefault(s => !s.Path.Contains("secrets.json"));
                if (lastJsonConfigSource != null)
                {
                    var index = builder.Sources.IndexOf(lastJsonConfigSource);
                    builder.Sources.Insert(index + 1, newSource);
                }
                else
                {
                    builder.Sources.Insert(0, newSource);
                }
            }
        })
        ...

Easy unit testing of null argument validation (C# 8 edition)

A few years ago, I blogged about a way to automate unit testing of null argument validation. Its usage looked like this:

[Fact]
public void FullOuterJoin_Throws_If_Argument_Is_Null()
{
    var left = Enumerable.Empty<int>();
    var right = Enumerable.Empty<int>();
    TestHelper.AssertThrowsWhenArgumentNull(
        () => left.FullOuterJoin(right, x => x, y => y, (k, x, y) => 0, 0, 0, null),
        "left", "right", "leftKeySelector", "rightKeySelector", "resultSelector");
}

Basically, for each of the specified parameters, the AssertThrowsWhenArgumentNull method rewrites the lambda expression by replacing the corresponding argument with null, compiles and executes it, and checks that it throws an ArgumentNullException with the appropriate parameter name. This method has served me well for many years, as it drastically reduces the amount of code to test argument validation. However, I wasn’t completely satisfied with it, because I still had to specify the names of the non-nullable parameters explicitly…

C# 8 to the rescue

Yesterday, I was working on enabling C# 8 non-nullable reference types on an old library, and I realized that I could take advantage of the nullable metadata to automatically detect which parameters are non-nullable.

Basically, when you compile a library with nullable reference types enabled, method parameters can be annotated with a [Nullable(x)] attribute, where x is a byte value that indicates the nullability of the parameter (it’s actually slightly more complicated than that, see Jon Skeet’s article on the subject). Additionally, there can be a [NullableContext(x)] attribute on the method or type that indicates the default nullability for the method or type; if a parameter doesn’t have the [Nullable] attribute, the default nullability applies.
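
For instance, for a method declared in a nullable-enabled context, the compiler emits metadata conceptually equivalent to this (a simplified illustration; the attribute values match the Nullability enum in the code below: 0 for oblivious, 1 for not-null, 2 for nullable):

// Source:
//     public void Process(string name, string? description) { ... }
//
// Conceptual compiled metadata:
//     [NullableContext(1)] // default for this method: not-null
//     public void Process(string name, [Nullable(2)] string? description) { ... }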

Using these facts, it’s possible to update my old AssertThrowsWhenArgumentNull method to make it detect non-nullable parameters automatically. Here’s the result:

using System;
using System.Collections.ObjectModel;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;
using FluentAssertions;

static class TestHelper
{
    private const string NullableContextAttributeName = "System.Runtime.CompilerServices.NullableContextAttribute";
    private const string NullableAttributeName = "System.Runtime.CompilerServices.NullableAttribute";

    public static void AssertThrowsWhenArgumentNull(Expression<Action> expr)
    {
        var realCall = expr.Body as MethodCallExpression;
        if (realCall == null)
            throw new ArgumentException("Expression body is not a method call", nameof(expr));
        
        var method = realCall.Method;
        var nullableContextAttribute =
            method.CustomAttributes
            .FirstOrDefault(a => a.AttributeType.FullName == NullableContextAttributeName)
            ??
            method.DeclaringType.GetTypeInfo().CustomAttributes
            .FirstOrDefault(a => a.AttributeType.FullName == NullableContextAttributeName);

        if (nullableContextAttribute is null)
            throw new InvalidOperationException($"The method '{method}' is not in a nullable enable context. Can't determine non-nullable parameters.");

        var defaultNullability = (Nullability)(byte)nullableContextAttribute.ConstructorArguments[0].Value;

        var realArgs = realCall.Arguments;
        var parameters = method.GetParameters();
        var paramIndexes = parameters
            .Select((p, i) => new { p, i })
            .ToDictionary(x => x.p.Name, x => x.i);
        var paramTypes = parameters
            .ToDictionary(p => p.Name, p => p.ParameterType);

        var nonNullableRefParams = parameters
            .Where(p => !p.ParameterType.GetTypeInfo().IsValueType && GetNullability(p, defaultNullability) == Nullability.NotNull);

        foreach (var param in nonNullableRefParams)
        {
            var paramName = param.Name;
            var args = realArgs.ToArray();
            args[paramIndexes[paramName]] = Expression.Constant(null, paramTypes[paramName]);
            var call = Expression.Call(realCall.Object, method, args);
            var lambda = Expression.Lambda<Action>(call);
            var action = lambda.Compile();
            action.ShouldThrow<ArgumentNullException>($"because parameter '{paramName}' is not nullable")
                .And.ParamName.Should().Be(paramName);
        }
    }

    private enum Nullability
    {
        Oblivious = 0,
        NotNull = 1,
        Nullable = 2
    }

    private static Nullability GetNullability(ParameterInfo parameter, Nullability defaultNullability)
    {
        if (parameter.ParameterType.GetTypeInfo().IsValueType)
            return Nullability.NotNull;

        var nullableAttribute = parameter.CustomAttributes
            .FirstOrDefault(a => a.AttributeType.FullName == NullableAttributeName);

        if (nullableAttribute is null)
            return defaultNullability;

        var firstArgument = nullableAttribute.ConstructorArguments.First();
        if (firstArgument.ArgumentType == typeof(byte))
        {
            var value = (byte)firstArgument.Value;
            return (Nullability)value;
        }
        else
        {
            var values = (ReadOnlyCollection<CustomAttributeTypedArgument>)firstArgument.Value;

            // Probably shouldn't happen
            if (values.Count == 0)
                return defaultNullability;

            var value = (byte)values[0].Value;

            return (Nullability)value;
        }
    }
}

The unit test is now even simpler, since there’s no need to specify the parameters to validate:

[Fact]
public void FullOuterJoin_Throws_If_Argument_Is_Null()
{
    var left = Enumerable.Empty<int>();
    var right = Enumerable.Empty<int>();
    TestHelper.AssertThrowsWhenArgumentNull(
        () => left.FullOuterJoin(right, x => x, y => y, (k, x, y) => 0, 0, 0, null));
}

It will automatically check that each non-nullable parameter is properly validated.

Happy coding!

Using foreach with index in C#

Just a quick tip today!

for and foreach loops are among the most useful constructs in a C# developer’s toolbox. To iterate a collection, foreach is, in my opinion, more convenient than for in most cases. It works with all collection types, including those that are not indexable such as IEnumerable<T>, and doesn’t require you to access the current element by its index.

But sometimes, you do need the index of the current item; this usually leads to one of these patterns:

// foreach with a "manual" index
int index = 0;
foreach (var item in collection)
{
    DoSomething(item, index);
    index++;
}

// normal for loop
for (int index = 0; index < collection.Count; index++)
{
    var item = collection[index];
    DoSomething(item, index);
}

It’s something that has always annoyed me; couldn’t we have the benefits of both foreach and for? It turns out that there’s a simple solution, using Linq and tuples. Just write an extension method like this:

using System.Collections.Generic;
using System.Linq;
...

public static IEnumerable<(T item, int index)> WithIndex<T>(this IEnumerable<T> source)
{
    return source.Select((item, index) => (item, index));
}

And now you can do this:

foreach (var (item, index) in collection.WithIndex())
{
    DoSomething(item, index);
}

I hope you find this useful!

Handling type hierarchies in Cosmos DB (part 2)

This is the second post in a series of 2:

In the previous post, I talked about the difficulty of handling type hierarchies in Cosmos DB, showed that the problem was actually with the JSON serializer, and proposed a solution using JSON.NET’s TypeNameHandling feature. In this post, I’ll show another approach based on custom converters, and how to integrate the solution with the Cosmos DB .NET SDK.

Custom JSON converter

With JSON.NET, we can create custom converters to tell the serializer how to serialize and deserialize specific types. Let’s see how to apply this feature to our problem.

First, let’s add an abstract Type property to the base class of our object model, and implement it in the concrete classes:

public abstract class FileSystemItem
{
    [JsonProperty("id")]
    public string Id { get; set; }
    [JsonProperty("$type")]
    public abstract string Type { get; }
    public string Name { get; set; }
    public string ParentId { get; set; }
}

public class FileItem : FileSystemItem
{
    public override string Type => "fileItem";
    public long Size { get; set; }
}

public class FolderItem : FileSystemItem
{
    public override string Type => "folderItem";
    public int ChildrenCount { get; set; }
}

There’s nothing special to do for serialization, as JSON.NET will automatically serialize the Type property. However, we need a converter to handle deserialization when the target type is the abstract FileSystemItem class. Here it is:

class FileSystemItemJsonConverter : JsonConverter
{
    // This converter handles only deserialization, not serialization.
    public override bool CanRead => true;
    public override bool CanWrite => false;

    public override bool CanConvert(Type objectType)
    {
        // Only if the target type is the abstract base class
        return objectType == typeof(FileSystemItem);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        // First, just read the JSON as a JObject
        var obj = JObject.Load(reader);
        
        // Then look at the $type property:
        var typeName = obj["$type"]?.Value<string>();
        switch (typeName)
        {
            case "fileItem":
                // Deserialize as a FileItem
                return obj.ToObject<FileItem>(serializer);
            case "folderItem":
                // Deserialize as a FolderItem
                return obj.ToObject<FolderItem>(serializer);
            default:
                throw new InvalidOperationException($"Unknown type name '{typeName}'");
        }
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotSupportedException("This converter handles only deserialization, not serialization.");
    }
}

And here’s how we can now use this converter:

var settings = new JsonSerializerSettings
{
    Converters =
    {
        new FileSystemItemJsonConverter()
    }
};
string json = JsonConvert.SerializeObject(items, Formatting.Indented, settings);

...

var deserializedItems = JsonConvert.DeserializeObject<FileSystemItem[]>(json, settings);

And we get the same results as with the custom serialization binder, except that we have control over which types are serialized with a $type property.

This converter is specific to FileSystemItem, but of course, it’s possible to make a more generic one, based on reflection.
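
For instance, here’s a rough sketch of a reusable variant where the discriminator-to-type mapping is supplied explicitly (a reflection-based version could build this dictionary from custom attributes instead; none of this is from the original sample):

class TypeDiscriminatorJsonConverter<TBase> : JsonConverter
{
    private readonly IReadOnlyDictionary<string, Type> _typesByDiscriminator;

    public TypeDiscriminatorJsonConverter(IReadOnlyDictionary<string, Type> typesByDiscriminator)
    {
        _typesByDiscriminator = typesByDiscriminator;
    }

    // This converter handles only deserialization, not serialization.
    public override bool CanRead => true;
    public override bool CanWrite => false;

    public override bool CanConvert(Type objectType) => objectType == typeof(TBase);

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        var obj = JObject.Load(reader);
        var typeName = obj["$type"]?.Value<string>();
        if (typeName != null && _typesByDiscriminator.TryGetValue(typeName, out var type))
            return obj.ToObject(type, serializer);
        throw new InvalidOperationException($"Unknown type name '{typeName}'");
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        => throw new NotSupportedException("This converter handles only deserialization.");
}

You’d register it like the previous one, e.g. new TypeDiscriminatorJsonConverter<FileSystemItem>(new Dictionary<string, Type> { ["fileItem"] = typeof(FileItem), ["folderItem"] = typeof(FolderItem) }).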

Integration with the Cosmos DB SDK

OK, we now have two ways of serializing and deserializing type hierarchies in JSON. In my opinion, the one based on TypeNameHandling is either overly verbose when using TypeNameHandling.Objects, or a bit risky when using TypeNameHandling.Auto, because it’s easy to forget to specify the root type and end up with no $type property on the root object. So I’ll stick to the solution based on a converter, at least until my feature suggestion for JSON.NET is implemented.

Now, let’s see how to integrate this with the Cosmos DB .NET SDK.

If you’re still using the 2.x SDK, it’s trivial: just pass the JsonSerializerSettings with the converter to the DocumentClient constructor (but you should totally consider switching to 3.x, which is much nicer to work with in my opinion).

In the 3.x SDK, it requires a little more work. The default serializer is based on JSON.NET, so it should be easy to pass custom JsonSerializerSettings… but unfortunately, the class is not public, so we can’t instantiate it ourselves. All we can do is specify CosmosSerializationOptions that are passed to it, and those options only expose a very small subset of what is possible with JSON.NET. So the alternative is to implement our own serializer, based on JSON.NET.

To do this, we must derive from the CosmosSerializer abstract class:

public abstract class CosmosSerializer
{
    public abstract T FromStream<T>(Stream stream);
    public abstract Stream ToStream<T>(T input);
}

FromStream takes a stream and reads an object of the specified type from the stream. ToStream takes an object, writes it to a stream and returns the stream.

Aside: To be honest, I don’t think it’s a very good abstraction… Returning a Stream is weird; it would be more natural to receive a stream and write to it. The way it’s designed, you have to create a new MemoryStream for every object you serialize, and then the data will be copied from that stream to the document. That’s hardly efficient… Also, you must dispose the stream you receive in FromStream, which is unusual (you’re usually not responsible for disposing an object you didn’t create); it also means that the SDK creates a new stream for each document to read, which is, again, inefficient. Ah, well… It’s too late to fix it in v3 (it would be a breaking change), but maybe in v4?

Fortunately, we don’t have to reinvent the wheel: we can just copy the code from the default implementation, and adapt it to our needs. Here it goes:

public class NewtonsoftJsonCosmosSerializer : CosmosSerializer
{
    private static readonly Encoding DefaultEncoding = new UTF8Encoding(false, true);

    private readonly JsonSerializer _serializer;

    public NewtonsoftJsonCosmosSerializer(JsonSerializerSettings settings)
    {
        _serializer = JsonSerializer.Create(settings);
    }

    public override T FromStream<T>(Stream stream)
    {
        // If the caller asks for the raw stream, return it as-is;
        // this check must happen before the stream is consumed
        if (typeof(Stream).IsAssignableFrom(typeof(T)))
        {
            return (T)(object)stream;
        }

        // Deserialize directly from the stream; the StreamReader
        // disposes it, as the SDK contract requires
        using (var streamReader = new StreamReader(stream))
        using (var jsonTextReader = new JsonTextReader(streamReader))
        {
            return _serializer.Deserialize<T>(jsonTextReader);
        }
    }

    public override Stream ToStream<T>(T input)
    {
        var streamPayload = new MemoryStream();
        using (var streamWriter = new StreamWriter(streamPayload, encoding: DefaultEncoding, bufferSize: 1024, leaveOpen: true))
        {
            using (JsonWriter writer = new JsonTextWriter(streamWriter))
            {
                writer.Formatting = _serializer.Formatting;
                _serializer.Serialize(writer, input);
                writer.Flush();
                streamWriter.Flush();
            }
        }

        streamPayload.Position = 0;
        return streamPayload;
    }
}

We now have a serializer for which we can specify the JsonSerializerSettings. To use it, we just need to specify it when we create the CosmosClient:

var serializerSettings = new JsonSerializerSettings
{
    Converters =
    {
        new FileSystemItemJsonConverter()
    }
};
var clientOptions = new CosmosClientOptions
{
    Serializer = new NewtonsoftJsonCosmosSerializer(serializerSettings)
};
var client = new CosmosClient(connectionString, clientOptions);

And that’s it! We can now query our collection of mixed FileItems and FolderItems, and have them deserialized to the proper type:

var query = container.GetItemLinqQueryable<FileSystemItem>();
var iterator = query.ToFeedIterator();
while (iterator.HasMoreResults)
{
    var items = await iterator.ReadNextAsync();
    foreach (var item in items)
    {
        var description = item switch
        {
            FileItem file =>
                $"File {file.Name} (id {file.Id}) has a size of {file.Size} bytes",
            FolderItem folder =>
                $"Folder {folder.Name} (id {folder.Id}) has {folder.ChildrenCount} children",
            _ =>
                $"Item {item.Name} (id {item.Id}) is of type {item.GetType()}... I don't know what that is."
        };
        Console.WriteLine(description);
    }
}

There might be better solutions out there. If you’re using Entity Framework Core 3.0, which supports Cosmos DB, this scenario seems to be supported, but I was unable to make it work so far. In the meantime, this solution is working very well for me, and I hope it helps you too!

Handling type hierarchies in Cosmos DB (part 1)

This is the first post in a series of 2:

Azure Cosmos DB is Microsoft’s NoSQL cloud database. In Cosmos DB, you store JSON documents in containers. This makes it very easy to model data, because you don’t need to split complex objects into multiple tables and use joins like in relational databases. You just serialize your full C# object graph to JSON and save it to the database. The Cosmos DB .NET SDK takes care of serializing your objects, so you don’t need to do it explicitly, and it lets you query the database in a strongly typed manner using Linq:

using var client = new CosmosClient(connectionString);
var database = client.GetDatabase(databaseId);
var container = database.GetContainer("Pets");

var pet = new Pet { Id = "max-0001", Name = "Max", Species = "Dog" };
await container.CreateItemAsync(pet);

...

var dogsQuery = container.GetItemLinqQueryable<Pet>()
    .Where(p => p.Species == "Dog");

var iterator = dogsQuery.ToFeedIterator();
while (iterator.HasMoreResults)
{
    var dogs = await iterator.ReadNextAsync();
    foreach (var dog in dogs)
    {
        Console.WriteLine($"{dog.Id}\t{dog.Name}\t{dog.Species}");
    }
}

However, there’s a little wrinkle… Out of the box, the Cosmos DB .NET SDK doesn’t know how to handle type hierarchies. If you have an abstract base class with a few derived classes, and you save instances of those classes to Cosmos, the SDK won’t know how to deserialize them, and you will get an exception saying it can’t create an instance of an abstract type…

Actually the problem isn’t in the Cosmos DB SDK per se, but in JSON.NET, which is used as the default serializer by the SDK. So, before we can solve the problem for Cosmos DB, we first need to solve it for JSON.NET; we’ll see later how to integrate the solution with the Cosmos DB SDK.

A simple class hierarchy

Let’s take a concrete example: a (very simple) object model to represent a file system. We have two concrete types, FileItem and FolderItem, which both inherit from a common abstract base class, FileSystemItem. Here’s the code:

public abstract class FileSystemItem
{
    [JsonProperty("id")]
    public string Id { get; set; }
    public string Name { get; set; }
    public string ParentId { get; set; }
}

public class FileItem : FileSystemItem
{
    public long Size { get; set; }
}

public class FolderItem : FileSystemItem
{
    public int ChildrenCount { get; set; }
}

In a real-world scenario, you’d probably want more properties than that, but let’s keep things simple for the sake of this demonstration.

If you create a FileItem and a FolderItem and serialize them to JSON…

var items = new FileSystemItem[]
{
    new FolderItem
    {
        Id = "1",
        Name = "foo",
        ChildrenCount = 1
    },
    new FileItem
    {
        Id = "2",
        Name = "test.txt",
        ParentId = "1",
        Size = 42
    }
};
string json = JsonConvert.SerializeObject(items, Formatting.Indented);

…you’ll notice that the JSON doesn’t contain any information about the object’s type:

[
  {
    "ChildrenCount": 1,
    "id": "1",
    "Name": "foo",
    "ParentId": null
  },
  {
    "Size": 42,
    "id": "2",
    "Name": "test.txt",
    "ParentId": "1"
  }
]

If the type information isn’t available for deserialization, we can’t really blame JSON.NET for not being able to guess. It just needs a bit of help!

TypeNameHandling

One way to solve this is using a built-in feature of JSON.NET: TypeNameHandling. Basically, you tell JSON.NET to include the name of the type in serialized objects, like this:

var settings = new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.Objects
};
string json = JsonConvert.SerializeObject(items, Formatting.Indented, settings);

And you get JSON objects annotated with the assembly-qualified type name of the objects:

[
  {
    "$type": "CosmosTypeHierarchy.FolderItem, CosmosTypeHierarchy",
    "id": "1",
    "Name": "foo",
    "ParentId": null
  },
  {
    "$type": "CosmosTypeHierarchy.FileItem, CosmosTypeHierarchy",
    "Size": 42,
    "id": "2",
    "Name": "test.txt",
    "ParentId": "1"
  }
]

This is nice! Using the type name and assembly, JSON.NET can then deserialize these objects correctly:

var deserializedItems = JsonConvert.DeserializeObject<FileSystemItem[]>(json, settings);

There’s just one issue, though: if you include actual .NET type names in your JSON documents, what happens when you decide to rename a class, or move it to a different namespace or assembly? Well, your existing documents can no longer be deserialized… Bummer.

On the other hand, if we were able to control the type name written to the document, it would solve this problem. And guess what: we can!

Serialization binder

We just need to implement our own ISerializationBinder:

class CustomSerializationBinder : ISerializationBinder
{
    public void BindToName(Type serializedType, out string assemblyName, out string typeName)
    {
        if (serializedType == typeof(FileItem))
        {
            assemblyName = null;
            typeName = "fileItem";
        }
        else if (serializedType == typeof(FolderItem))
        {
            assemblyName = null;
            typeName = "folderItem";
        }
        else
        {
            // Mimic the default behavior
            assemblyName = serializedType.Assembly.GetName().Name;
            typeName = serializedType.FullName;
        }
    }

    public Type BindToType(string assemblyName, string typeName)
    {
        if (string.IsNullOrEmpty(assemblyName))
        {
            if (typeName == "fileItem")
                return typeof(FileItem);
            if (typeName == "folderItem")
                return typeof(FolderItem);
        }

        // Mimic the default behavior
        var assemblyQualifiedName = typeName;
        if (!string.IsNullOrEmpty(assemblyName))
            assemblyQualifiedName += ", " + assemblyName;
        return Type.GetType(assemblyQualifiedName);
    }
}

...

var settings = new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.Objects,
    SerializationBinder = new CustomSerializationBinder()
};
string json = JsonConvert.SerializeObject(items, Formatting.Indented, settings);

Which gives us the following JSON:

[
  {
    "$type": "folderItem",
    "ChildrenCount": 1,
    "id": "1",
    "Name": "foo",
    "ParentId": null
  },
  {
    "$type": "fileItem",
    "Size": 42,
    "id": "2",
    "Name": "test.txt",
    "ParentId": "1"
  }
]

This is more concise, and more flexible. Of course, now we have to keep using the same "JSON names" for these types, but it’s not as much of a problem as not being able to rename or move classes.

Overall, this is a pretty solid approach. And if you don’t want to explicitly write type/name mappings in the serialization binder, you can always use custom attributes and reflection to define the mapping without touching the binder itself.
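
As a rough sketch of that idea (the JsonTypeNameAttribute below is a made-up attribute for illustration, not part of JSON.NET):

[AttributeUsage(AttributeTargets.Class)]
class JsonTypeNameAttribute : Attribute
{
    public JsonTypeNameAttribute(string name) => Name = name;
    public string Name { get; }
}

class AttributeSerializationBinder : ISerializationBinder
{
    private readonly Dictionary<string, Type> _typesByName;
    private readonly Dictionary<Type, string> _namesByType;

    public AttributeSerializationBinder(Assembly assembly)
    {
        // Build the name/type mappings from all decorated types in the assembly
        _typesByName = assembly.GetTypes()
            .Select(t => (Type: t, Attribute: t.GetCustomAttribute<JsonTypeNameAttribute>()))
            .Where(x => x.Attribute != null)
            .ToDictionary(x => x.Attribute.Name, x => x.Type);
        _namesByType = _typesByName.ToDictionary(kvp => kvp.Value, kvp => kvp.Key);
    }

    public void BindToName(Type serializedType, out string assemblyName, out string typeName)
    {
        if (_namesByType.TryGetValue(serializedType, out var name))
        {
            assemblyName = null;
            typeName = name;
        }
        else
        {
            // Mimic the default behavior
            assemblyName = serializedType.Assembly.GetName().Name;
            typeName = serializedType.FullName;
        }
    }

    public Type BindToType(string assemblyName, string typeName)
    {
        if (string.IsNullOrEmpty(assemblyName) && _typesByName.TryGetValue(typeName, out var type))
            return type;

        // Mimic the default behavior
        var assemblyQualifiedName = typeName;
        if (!string.IsNullOrEmpty(assemblyName))
            assemblyQualifiedName += ", " + assemblyName;
        return Type.GetType(assemblyQualifiedName);
    }
}

With this, you would just decorate FileItem with [JsonTypeName("fileItem")] and FolderItem with [JsonTypeName("folderItem")], and pass new AttributeSerializationBinder(typeof(FileSystemItem).Assembly) to the serializer settings.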

What still bothers me is that with TypeNameHandling.Objects, all objects will be annotated with their type, including nested ones, even though it’s not always necessary. For instance, if you know that a particular class is sealed (or at least, doesn’t have any derived class), writing the type name is unnecessary and just adds noise. There’s another option that does almost the right thing: TypeNameHandling.Auto. It writes the type if and only if it can’t be inferred from context, i.e. if the actual type of the object is different from the statically known type. This is almost perfect, except that it doesn’t write the type for the root object, unless you specify the "known type" explicitly, which isn’t very convenient. What would be ideal would be another option to always write the type for the root object. I suggested this on GitHub; vote if you want it too!

In the meantime, there’s another way to achieve the desired result: a custom converter. But this post has been long enough already, so we’ll cover that, and the integration with Cosmos DB SDK, in the next post.

Multitenant Azure AD issuer validation in ASP.NET Core

If you use Azure AD authentication and want to allow users from any tenant to connect to your ASP.NET Core application, you need to configure the Azure AD app as multi-tenant, and use a "wildcard" tenant id such as organizations or common in the authority URL:

openIdConnectOptions.Authority = "https://login.microsoftonline.com/organizations/v2.0";

The problem when you do that is that with the default configuration, the token validation will fail because the issuer in the token won’t match the issuer specified in the OpenID metadata. This is because the issuer from the metadata includes a placeholder for the tenant id:

https://login.microsoftonline.com/{tenantid}/v2.0

But the iss claim in the token contains the URL for the actual tenant, e.g.:

https://login.microsoftonline.com/64c5f641-7e94-4d21-ae5c-9747994e4211/v2.0

A workaround that is often suggested is to disable issuer validation in the token validation parameters:

openIdConnectOptions.TokenValidationParameters.ValidateIssuer = false;

However, if you do that the issuer won’t be validated at all. Admittedly, it’s not much of a problem, since the token signature will prove the issuer identity anyway, but it still bothers me…

Fortunately, you can control how the issuer is validated, by specifying the IssuerValidator property:

openIdConnectOptions.TokenValidationParameters.IssuerValidator = ValidateIssuerWithPlaceholder;

Where ValidateIssuerWithPlaceholder is the method that validates the issuer. In that method, we need to check if the issuer from the token matches the issuer with a placeholder from the metadata. To do this, we just replace the {tenantid} placeholder with the value of the token’s tid claim (which contains the tenant id), and check that the result matches the token’s issuer:

private static string ValidateIssuerWithPlaceholder(string issuer, SecurityToken token, TokenValidationParameters parameters)
{
    // Accepts any issuer of the form "https://login.microsoftonline.com/{tenantid}/v2.0",
    // where tenantid is the tid from the token.

    if (token is JwtSecurityToken jwt)
    {
        if (jwt.Payload.TryGetValue("tid", out var value) &&
            value is string tokenTenantId)
        {
            var validIssuers = (parameters.ValidIssuers ?? Enumerable.Empty<string>())
                .Append(parameters.ValidIssuer)
                .Where(i => !string.IsNullOrEmpty(i));

            if (validIssuers.Any(i => i.Replace("{tenantid}", tokenTenantId) == issuer))
                return issuer;
        }
    }

    // Recreate the exception that is thrown by default
    // when issuer validation fails
    var validIssuer = parameters.ValidIssuer ?? "null";
    var validIssuers = parameters.ValidIssuers == null
        ? "null"
        : !parameters.ValidIssuers.Any()
            ? "empty"
            : string.Join(", ", parameters.ValidIssuers);
    string errorMessage = FormattableString.Invariant(
        $"IDX10205: Issuer validation failed. Issuer: '{issuer}'. Did not match: validationParameters.ValidIssuer: '{validIssuer}' or validationParameters.ValidIssuers: '{validIssuers}'.");

    throw new SecurityTokenInvalidIssuerException(errorMessage)
    {
        InvalidIssuer = issuer
    };
}

With this in place, you’re now able to fully validate tokens from any Azure AD tenant without skipping issuer validation.

Happy coding, and merry Christmas!

Asynchronous initialization in ASP.NET Core, revisited

Initialization in ASP.NET Core is a bit awkward. There are well defined places for registering services (the Startup.ConfigureServices method) and for building the middleware pipeline (the Startup.Configure method), but not for performing other initialization steps (e.g. pre-loading data, seeding a database, etc.).

Using a middleware: not such a good idea

Two months ago I published a blog post about asynchronous initialization of an ASP.NET Core app using a custom middleware. At the time I was rather pleased with my solution, but a comment from Frantisek made me realize it wasn’t such a good approach. Using a middleware for this has a major drawback: even though the initialization will only be performed once, the app will still incur the cost of calling an additional middleware for every single request. Obviously, we don’t want the initialization to impact performance for the whole lifetime of the app, so it shouldn’t be done in the request processing pipeline.

A better approach: the Program.Main method

There’s a piece of all ASP.NET Core apps that’s often overlooked, because it’s generated by a template and we rarely need to touch it: the Program class. It typically looks like this:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}

Basically, it builds a web host and immediately runs it. However, there’s nothing to prevent us from doing something with the host before running it. In fact, it’s a pretty good place to perform the app initialization:

    public static void Main(string[] args)
    {
        var host = CreateWebHostBuilder(args).Build();
        /* Perform initialization here */
        host.Run();
    }

As a bonus, the web host exposes a service provider (host.Services), configured with the services registered in Startup.ConfigureServices, which gives us access to everything we might need to initialize the app.

But wait, didn’t I mention asynchronous initialization in the title? Well, since C# 7.1, it’s possible to make the Main method async. To enable it, just set the LangVersion property to 7.1 or later in your project (or latest if you always want the most recent features).
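
With that in place, a hand-rolled version could look like this (a rough sketch; IMyAppInitializer is a hypothetical service registered in Startup.ConfigureServices):

public static async Task Main(string[] args)
{
    var host = CreateWebHostBuilder(args).Build();

    // Resolve the initialization service from a scope and run it
    // before the host starts serving requests
    using (var scope = host.Services.CreateScope())
    {
        var initializer = scope.ServiceProvider.GetRequiredService<IMyAppInitializer>();
        await initializer.InitializeAsync();
    }

    host.Run();
}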

Wrapping up

While we could just resolve services from the service provider and call them directly in the Main method, it wouldn’t be very clean. Instead, it would be better to have an initializer class that receives the services it needs via dependency injection. This class would be registered in Startup.ConfigureServices and called from the Main method.

After using this approach in two different projects, I put together a small library to make things easier: AspNetCore.AsyncInitialization. It can be used like this:

  1. Create a class that implements the IAsyncInitializer interface:

    public class MyAppInitializer : IAsyncInitializer
    {
        public MyAppInitializer(IFoo foo, IBar bar)
        {
            ...
        }
    
        public async Task InitializeAsync()
        {
            // Initialization code here
        }
    }
    
  2. Register the initializer in Startup.ConfigureServices, using the AddAsyncInitializer extension method:

    services.AddAsyncInitializer<MyAppInitializer>();
    

    It’s possible to register multiple initializers.

  3. Call the InitAsync extension method on the web host in the Main method:

    public static async Task Main(string[] args)
    {
        var host = CreateWebHostBuilder(args).Build();
        await host.InitAsync();
        host.Run();
    }
    

    This will run all registered initializers.

There you have it, a nice and clean way to initialize your app. Enjoy!