Scaling out ASP.NET Core SignalR using Azure Service Bus

ASP.NET Core SignalR is a super easy way to establish two-way communication between an ASP.NET Core app and its clients, using WebSockets, Server-Sent Events, or long polling, depending on the client’s capabilities. For instance, it can be used to send a notification to all connected clients. However, if you scale out your application to multiple server instances, it no longer works out of the box: only the clients connected to the instance that sent the notification will receive it. Microsoft has two documented solutions to this problem:

  • a Redis backplane
  • Azure SignalR Service

Derek Comartin did a good job explaining these solutions (Redis, Azure SignalR Service), so I won’t go into the details. Both are perfectly viable; however, they’re relatively expensive. A Redis Cache resource in Azure starts at about 14€/month for the smallest size, and Azure SignalR Service starts at about 40€/month for a single unit (I’m entirely dismissing the free plan, which is too limited to use beyond development scenarios). Sure, it’s not that expensive, but why pay more when you can pay less?

What I want to talk about in this post is a third option that will probably be cheaper in many scenarios: using Azure Service Bus to dispatch SignalR messages between server instances. In fact, this approach was supported in classic ASP.NET, but it hasn’t been ported to ASP.NET Core.

Here’s an overview of how one could manually implement the Azure Service Bus approach:

  • When an instance of the application wants to send a SignalR message to all clients, it sends it:

    • via its own SignalR hub or hub context (only clients connected to this instance will receive it)
    • and to an Azure Service Bus topic, for distribution to other instances.
    // Pseudo code...
    
    private readonly IHubContext<ChatHub, IChatClient> _hubContext;
    private readonly IServiceBusPublisher _serviceBusPublisher;
    
    public async Task SendMessageToAllAsync(string text)
    {
        // Send the message to clients connected to the current instance
        await _hubContext.Clients.All.ReceiveMessageAsync(text);
    
        // Notify other instances to send the same message
        await _serviceBusPublisher.PublishMessageAsync(new SendToAllMessage(text));
    }
    
  • Each instance of the application runs a hosted service that subscribes to the topic and processes the messages

    • When a message is received, it’s sent to the relevant clients via the hub context, unless it’s from the current instance.
    // Very simplified pseudo code...
    
    // Subscribe to the topic
    var subscriptionClient = new SubscriptionClient(connectionString, topicName, subscriptionName);
    subscriptionClient.RegisterMessageHandler(OnMessageReceived, OnError);
    
    ...
    
    private async Task OnMessageReceived(Message sbMessage, CancellationToken cancellationToken)
    {
        SendToAllMessage message = DeserializeServiceBusMessage(sbMessage);
    
        if (message.SenderInstanceId == MyInstanceId)
            return; // ignore message from self
    
        // Send the message to clients connected to the current instance
        await _hubContext.Clients.All.ReceiveMessageAsync(message.Text);
    }
    

I’m not showing the full details of how to implement this solution, because to be honest, it kind of sucks. It works, but it’s a bit ugly: the fact that it’s using a service bus to share messages with other server instances is too visible; you can’t just ignore it. Every time you send a message via SignalR, you also have to explicitly send one to the service bus. It would be better to hide that ugliness behind an abstraction, or even better, make it completely invisible…

If you have used the Redis or Azure SignalR Service approaches before, you might have noticed how simple they are to use. Basically, in your Startup.ConfigureServices method, just append AddRedis(...) or AddAzureSignalR(...) after services.AddSignalR(), and you’re done: you can use SignalR as usual, the details of how it handles scale-out are completely abstracted away. Wouldn’t it be nice to be able to do the same for Azure Service Bus? I thought so too, so I made a library that does exactly that: AspNetCore.SignalR.AzureServiceBus. To use it, reference the NuGet package, and just add this in your Startup.ConfigureServices method:

services.AddSignalR()
        .AddAzureServiceBus(options =>
        {
            options.ConnectionString = "(your service bus connection string)";
            options.TopicName = "(your topic name)";
        });

Disclaimer: The library is still in alpha status, probably not ready for production use. I’m not aware of any issue, but it hasn’t been battle tested yet. Use at your own risk, and please report any issues you find!

Google+ shutdown: fixing Google authentication in ASP.NET Core

A few months ago, Google decided to shut down Google+, due to multiple data leaks. More recently, they announced that the Google+ APIs will be shut down on March 7, 2019, which is pretty soon! In fact, calls to these APIs might start to fail as soon as January 28, which is less than 3 weeks from now. You might think that it doesn’t affect you as a developer; but if you’re using Google authentication in an ASP.NET Core app, think again! The built-in Google authentication provider (services.AddAuthentication().AddGoogle(...)) uses a Google+ API to retrieve information about the signed-in user, which will soon stop working. You can read the details in this GitHub thread. Note that it also affects classic ASP.NET MVC.

OK, now I’m listening. How do I fix it?

Fortunately, it’s not too difficult to fix. There’s already a pull request to fix it in ASP.NET Core, and hopefully an update will be released soon. In the meantime, you can either:

  • use the workaround described here, which basically specifies a different user information endpoint and adjusts the JSON mappings.
  • or use the generic OpenID Connect authentication provider instead, which I think is better than the built-in provider anyway, because you can get all the necessary information directly from the ID token, without making an extra API call.

Using OpenID Connect to authenticate with Google

So, let’s see how to change our app to use the OpenID Connect provider instead of the built-in Google provider, and configure it to get the same results as before.

First, let’s install the Microsoft.AspNetCore.Authentication.OpenIdConnect NuGet package to the project, if it’s not already there.

Then, we go to the place where we add the built-in Google provider (the call to AddGoogle, usually in the Startup class), and remove that call.

Instead, we add the OpenID Connect provider, point it to the Google OpenID Connect authority URL, and set the client id (the same that we were using for the built-in Google provider):

services
    .AddAuthentication()
    .AddOpenIdConnect(
        authenticationScheme: "Google",
        displayName: "Google",
        options =>
        {
            options.Authority = "https://accounts.google.com/";
            options.ClientId = configuration["Authentication:Google:ClientId"];
        });

We also need to adjust the callback path to be the same as before, so that the redirect URI configured for the Google app still works; and while we’re at it, let’s also configure the signout paths.

options.CallbackPath = "/signin-google";
options.SignedOutCallbackPath = "/signout-callback-google";
options.RemoteSignOutPath = "/signout-google";

The default configuration already includes the openid and profile scopes, but if we want to have access to the user’s email address as we did before, we also need to add the email scope:

options.Scope.Add("email");

And that’s it! Everything should work as it did before. Here’s a Gist that shows the code before and after the change.

Hey, where’s the client secret?

You might have noticed that we didn’t specify the client secret. Why is this?

The built-in Google provider is actually just a generic OAuth provider with Google-specific configuration. It uses the authorization code flow, which requires the client secret to exchange the authorization code for an access token, which in turn is used to call the user information endpoint.

But by default the OpenID Connect provider uses the implicit flow. There isn’t an authorization code: an id_token is provided directly to the redirect_uri, and there’s no need to call any API, so no secret is needed. If, for some reason, you don’t want to use the implicit flow, just change options.ResponseType to code (the default is id_token), and set options.ClientSecret as appropriate. You should also set options.GetClaimsFromUserInfoEndpoint to true to get the user details (name, email…), since you won’t have an id_token to get them from.
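
For instance, the code-flow configuration might look like this (a minimal sketch; the ClientSecret configuration key is an assumption, following the same pattern as the ClientId above):

options.ResponseType = "code";
options.ClientSecret = configuration["Authentication:Google:ClientSecret"];
options.GetClaimsFromUserInfoEndpoint = true;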

Multitenant Azure AD issuer validation in ASP.NET Core

If you use Azure AD authentication and want to allow users from any tenant to connect to your ASP.NET Core application, you need to configure the Azure AD app as multi-tenant, and use a "wildcard" tenant id such as organizations or common in the authority URL:

openIdConnectOptions.Authority = "https://login.microsoftonline.com/organizations/v2.0";

The problem is that with the default configuration, token validation will fail: the issuer in the token won’t match the issuer specified in the OpenID metadata, because the issuer from the metadata includes a placeholder for the tenant id:

https://login.microsoftonline.com/{tenantid}/v2.0

But the iss claim in the token contains the URL for the actual tenant, e.g.:

https://login.microsoftonline.com/64c5f641-7e94-4d21-ae5c-9747994e4211/v2.0

A workaround that is often suggested is to disable issuer validation in the token validation parameters:

openIdConnectOptions.TokenValidationParameters.ValidateIssuer = false;

However, if you do that the issuer won’t be validated at all. Admittedly, it’s not much of a problem, since the token signature will prove the issuer identity anyway, but it still bothers me…

Fortunately, you can control how the issuer is validated by specifying the IssuerValidator property:

options.TokenValidationParameters.IssuerValidator = ValidateIssuerWithPlaceholder;

Where ValidateIssuerWithPlaceholder is the method that validates the issuer. In that method, we need to check if the issuer from the token matches the issuer with a placeholder from the metadata. To do this, we just replace the {tenantid} placeholder with the value of the token’s tid claim (which contains the tenant id), and check that the result matches the token’s issuer:

// Requires: using System.Linq; using System.IdentityModel.Tokens.Jwt;
// and using Microsoft.IdentityModel.Tokens;
private static string ValidateIssuerWithPlaceholder(string issuer, SecurityToken token, TokenValidationParameters parameters)
{
    // Accepts any issuer of the form "https://login.microsoftonline.com/{tenantid}/v2.0",
    // where tenantid is the tid from the token.

    if (token is JwtSecurityToken jwt)
    {
        if (jwt.Payload.TryGetValue("tid", out var value) &&
            value is string tokenTenantId)
        {
            var validIssuers = (parameters.ValidIssuers ?? Enumerable.Empty<string>())
                .Append(parameters.ValidIssuer)
                .Where(i => !string.IsNullOrEmpty(i));

            if (validIssuers.Any(i => i.Replace("{tenantid}", tokenTenantId) == issuer))
                return issuer;
        }
    }

    // Recreate the exception that is thrown by default
    // when issuer validation fails
    var validIssuer = parameters.ValidIssuer ?? "null";
    var validIssuers = parameters.ValidIssuers == null
        ? "null"
        : !parameters.ValidIssuers.Any()
            ? "empty"
            : string.Join(", ", parameters.ValidIssuers);
    string errorMessage = FormattableString.Invariant(
        $"IDX10205: Issuer validation failed. Issuer: '{issuer}'. Did not match: validationParameters.ValidIssuer: '{validIssuer}' or validationParameters.ValidIssuers: '{validIssuers}'.");

    throw new SecurityTokenInvalidIssuerException(errorMessage)
    {
        InvalidIssuer = issuer
    };
}

With this in place, you’re now able to fully validate tokens from any Azure AD tenant without skipping issuer validation.

Happy coding, and merry Christmas!

Making a WPF app using an SDK-style project with MSBuildSdkExtras

Ever since the first stable release of the .NET Core SDK, we’ve enjoyed a better C# project format, often called "SDK-style" because you specify an SDK to use in the project file. It’s still a .csproj XML file, it’s still based on MSBuild, but it’s much more lightweight and much easier to edit by hand. Personally, I love it and use it everywhere I can.

However, out of the box, it’s only usable for some project types: ASP.NET Core apps, console applications, and simple class libraries. If you want to write a WPF Windows application, for instance, you’re stuck with the old, bloated project format. This will change with .NET Core 3.0, but it’s not there yet.

MSBuildSdkExtras

Fortunately, Oren Novotny created a pretty cool project named MSBuildSdkExtras. This is basically an extension of the .NET Core SDK that adds missing MSBuild targets and properties to enable building project types that are not supported out of the box. It presents itself as an alternative SDK, i.e. instead of specifying Sdk="Microsoft.NET.Sdk" in the root element of your project file, you write Sdk="MSBuild.Sdk.Extras/1.6.61". The SDK will be automatically resolved from NuGet (note that you need VS2017 15.6 or higher for this to work). Alternatively, you can just specify Sdk="MSBuild.Sdk.Extras", and specify the SDK version in a global.json file in the solution root folder, like this:

{
    "msbuild-sdks": {
        "MSBuild.Sdk.Extras": "1.6.61"
    }
}

This approach is useful to share the SDK version between multiple projects.

Our first SDK-style WPF project

Let’s see how to create a WPF project with the SDK project format. Follow the usual steps to create a new WPF application in Visual Studio. Once it’s done, unload the project and edit the csproj file; replace the whole content with this:

<Project Sdk="MSBuild.Sdk.Extras/1.6.61">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net471</TargetFramework>
    <ExtrasEnableWpfProjectSetup>true</ExtrasEnableWpfProjectSetup>
  </PropertyGroup>
</Project>

Reload your project, and remove the Properties/AssemblyInfo.cs file. That’s it, you can now build and run as usual, but now with a much more concise project file!

A few things to note:

  • ExtrasEnableWpfProjectSetup is an MSBuildSdkExtras property to opt in to WPF support (which isn’t enabled by default). Basically, it includes WPF file types with the appropriate build action (e.g. ApplicationDefinition for the App.xaml file, Page for other XAML files, etc.) and sets up appropriate tasks to handle XAML compilation.
  • The Properties/AssemblyInfo.cs file is redundant, because a file with the same attributes is automatically generated by the SDK. You can control how the attributes are generated by setting the properties listed on this page. If you prefer to keep your own assembly info file, you can set GenerateAssemblyInfo to false in the project file.
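
For instance, a project file keeping a manual assembly info file would look something like this (a sketch based on the csproj shown above):

<Project Sdk="MSBuild.Sdk.Extras/1.6.61">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net471</TargetFramework>
    <ExtrasEnableWpfProjectSetup>true</ExtrasEnableWpfProjectSetup>
    <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
  </PropertyGroup>
</Project>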

Limitations

While it’s very convenient to be able to use this project format for WPF apps, there are a few limitations to be aware of:

  • Even though we’re using the .NET Core SDK project format, we need WPF-specific MSBuild tasks that are not available in the .NET Core SDK. So you can’t use dotnet build to build the project, you have to use MSBuild (or Visual Studio, which uses MSBuild).
  • This project format for WPF projects isn’t fully supported in Visual Studio; it will build and run just fine, but some features won’t work correctly, e.g. Visual Studio won’t offer the appropriate item templates when you add a new item to the project.

But even with these limitations, MSBuildSdkExtras gives us a taste of what we’ll be able to do in .NET Core 3.0!

Asynchronous initialization in ASP.NET Core, revisited

Initialization in ASP.NET Core is a bit awkward. There are well defined places for registering services (the Startup.ConfigureServices method) and for building the middleware pipeline (the Startup.Configure method), but not for performing other initialization steps (e.g. pre-loading data, seeding a database, etc.).

Using a middleware: not such a good idea

Two months ago I published a blog post about asynchronous initialization of an ASP.NET Core app using a custom middleware. At the time I was rather pleased with my solution, but a comment from Frantisek made me realize it wasn’t such a good approach. Using a middleware for this has a major drawback: even though the initialization will only be performed once, the app will still incur the cost of calling an additional middleware for every single request. Obviously, we don’t want the initialization to impact performance for the whole lifetime of the app, so it shouldn’t be done in the request processing pipeline.

A better approach: the Program.Main method

There’s a piece of all ASP.NET Core apps that’s often overlooked, because it’s generated by a template and we rarely need to touch it: the Program class. It typically looks like this:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}

Basically, it builds a web host and immediately runs it. However, there’s nothing to prevent us from doing something with the host before running it. In fact, it’s a pretty good place to perform the app initialization:

    public static void Main(string[] args)
    {
        var host = CreateWebHostBuilder(args).Build();
        /* Perform initialization here */
        host.Run();
    }

As a bonus, the web host exposes a service provider (host.Services), configured with the services registered in Startup.ConfigureServices, which gives us access to everything we might need to initialize the app.

But wait, didn’t I mention asynchronous initialization in the title? Well, since C# 7.1, it’s possible to make the Main method async. To enable it, just set the LangVersion property to 7.1 or later in your project (or latest if you always want the most recent features).
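
Enabling the language version is a one-line change in the csproj:

<PropertyGroup>
  <LangVersion>latest</LangVersion>
</PropertyGroup>

Putting it together, direct initialization might look like this (a sketch; IDatabaseSeeder is a hypothetical service registered in Startup.ConfigureServices):

public static async Task Main(string[] args)
{
    var host = CreateWebHostBuilder(args).Build();

    // Create a scope so that scoped services can be resolved
    // (requires using Microsoft.Extensions.DependencyInjection)
    using (var scope = host.Services.CreateScope())
    {
        var seeder = scope.ServiceProvider.GetRequiredService<IDatabaseSeeder>(); // hypothetical
        await seeder.SeedAsync();
    }

    host.Run();
}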

Wrapping up

While we could just resolve services from the service provider and call them directly in the Main method, it wouldn’t be very clean. Instead, it would be better to have an initializer class that receives the services it needs via dependency injection. This class would be registered in Startup.ConfigureServices and called from the Main method.

After using this approach in two different projects, I put together a small library to make things easier: AspNetCore.AsyncInitialization. It can be used like this:

  1. Create a class that implements the IAsyncInitializer interface:

    public class MyAppInitializer : IAsyncInitializer
    {
        public MyAppInitializer(IFoo foo, IBar bar)
        {
            ...
        }
    
        public async Task InitializeAsync()
        {
            // Initialization code here
        }
    }
    
  2. Register the initializer in Startup.ConfigureServices, using the AddAsyncInitializer extension method:

    services.AddAsyncInitializer<MyAppInitializer>();
    

    It’s possible to register multiple initializers.

  3. Call the InitAsync extension method on the web host in the Main method:

    public static async Task Main(string[] args)
    {
        var host = CreateWebHostBuilder(args).Build();
        await host.InitAsync();
        host.Run();
    }
    

    This will run all registered initializers.

There you have it, a nice and clean way to initialize your app. Enjoy!

Handling multipart requests with JSON and file uploads in ASP.NET Core

Suppose we’re writing an API for a blog. Our "create post" endpoint should receive the title, body, tags and an image to display at the top of the post. This raises a question: how do we send the image? There are at least 3 options:

  • Embed the image bytes as base64 in the JSON payload, e.g.

    {
        "title": "My first blog post",
        "body": "This is going to be the best blog EVER!!!!",
        "tags": [ "first post", "hello" ],
        "image": "iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg=="
    }
    

    This works fine, but it’s probably not a very good idea to embed an arbitrarily long blob in JSON, because it could use a lot of memory if the image is very large.

  • Send the JSON and image as separate requests. Easy, but what if we want the image to be mandatory? There’s no guarantee that the client will send the image in a second request, so our post object will be in an invalid state.

  • Send the JSON and image as a multipart request.

The last approach seems the most appropriate; unfortunately it’s also the most difficult to support… There is no built-in support for this scenario in ASP.NET Core. There is some support for the multipart/form-data content type, though; for instance, we can bind a model to a multipart request body, like this:

public class MyRequestModel
{
    [Required]
    public string Title { get; set; }
    [Required]
    public string Body { get; set; }
    [Required]
    public IFormFile Image { get; set; }
}

public IActionResult Post([FromForm] MyRequestModel request)
{
    ...
}

But if we do this, it means that each property maps to a different part of the request; we’re completely giving up on JSON.

There’s also a MultipartReader class that we can use to manually decode the request, but it means we have to give up model binding and automatic model validation entirely.
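
For reference, manual decoding with MultipartReader looks roughly like this (a sketch, inside a controller action; uses Microsoft.AspNetCore.WebUtilities and Microsoft.Net.Http.Headers):

// Extract the boundary from the Content-Type header
var boundary = HeaderUtilities.RemoveQuotes(
    MediaTypeHeaderValue.Parse(Request.ContentType).Boundary).Value;
var reader = new MultipartReader(boundary, Request.Body);

MultipartSection section;
while ((section = await reader.ReadNextSectionAsync()) != null)
{
    var contentDisposition = section.GetContentDispositionHeader();
    // Inspect contentDisposition.Name / FileName and read section.Body manually;
    // validation and model binding are entirely up to us here.
}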

Custom model binder

Ideally, we’d like to have a request model like this:

public class CreatePostRequestModel
{
    [Required]
    public string Title { get; set; }
    [Required]
    public string Body { get; set; }
    public string[] Tags { get; set; }
    [Required]
    public IFormFile Image { get; set; }
}

Where the Title, Body and Tags properties come from a form field containing JSON and the Image property comes from the uploaded file. In other words, the request would look like this:

POST /api/blog/post HTTP/1.1
Content-Type: multipart/form-data; boundary=AaB03x
 
--AaB03x
Content-Disposition: form-data; name="json"
Content-Type: application/json
 
{
    "title": "My first blog post",
    "body": "This is going to be the best blog EVER!!!!",
    "tags": [ "first post", "hello" ]
}
--AaB03x
Content-Disposition: form-data; name="image"; filename="image.jpg"
Content-Type: image/jpeg
 
(... content of the image.jpg file ...)
--AaB03x--

Fortunately, ASP.NET Core is very flexible, and we can actually make this work, by writing a custom model binder.

Here it is:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.Mvc.ModelBinding.Binders;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using Newtonsoft.Json;

namespace TestMultipart.ModelBinding
{
    public class JsonWithFilesFormDataModelBinder : IModelBinder
    {
        private readonly IOptions<MvcJsonOptions> _jsonOptions;
        private readonly FormFileModelBinder _formFileModelBinder;

        public JsonWithFilesFormDataModelBinder(IOptions<MvcJsonOptions> jsonOptions, ILoggerFactory loggerFactory)
        {
            _jsonOptions = jsonOptions;
            _formFileModelBinder = new FormFileModelBinder(loggerFactory);
        }

        public async Task BindModelAsync(ModelBindingContext bindingContext)
        {
            if (bindingContext == null)
                throw new ArgumentNullException(nameof(bindingContext));

            // Retrieve the form part containing the JSON
            var valueResult = bindingContext.ValueProvider.GetValue(bindingContext.FieldName);
            if (valueResult == ValueProviderResult.None)
            {
                // The JSON was not found
                var message = bindingContext.ModelMetadata.ModelBindingMessageProvider.MissingBindRequiredValueAccessor(bindingContext.FieldName);
                bindingContext.ModelState.TryAddModelError(bindingContext.ModelName, message);
                return;
            }

            var rawValue = valueResult.FirstValue;

            // Deserialize the JSON
            var model = JsonConvert.DeserializeObject(rawValue, bindingContext.ModelType, _jsonOptions.Value.SerializerSettings);

            // Now, bind each of the IFormFile properties from the other form parts
            foreach (var property in bindingContext.ModelMetadata.Properties)
            {
                if (property.ModelType != typeof(IFormFile))
                    continue;

                var fieldName = property.BinderModelName ?? property.PropertyName;
                var modelName = fieldName;
                var propertyModel = property.PropertyGetter(bindingContext.Model);
                ModelBindingResult propertyResult;
                using (bindingContext.EnterNestedScope(property, fieldName, modelName, propertyModel))
                {
                    await _formFileModelBinder.BindModelAsync(bindingContext);
                    propertyResult = bindingContext.Result;
                }

                if (propertyResult.IsModelSet)
                {
                    // The IFormFile was successfully bound, assign it to the corresponding property of the model
                    property.PropertySetter(model, propertyResult.Model);
                }
                else if (property.IsBindingRequired)
                {
                    var message = property.ModelBindingMessageProvider.MissingBindRequiredValueAccessor(fieldName);
                    bindingContext.ModelState.TryAddModelError(modelName, message);
                }
            }

            // Set the successfully constructed model as the result of the model binding
            bindingContext.Result = ModelBindingResult.Success(model);
        }
    }
}

To use it, just apply this attribute to the CreatePostRequestModel class above:

[ModelBinder(typeof(JsonWithFilesFormDataModelBinder), Name = "json")]
public class CreatePostRequestModel

This tells ASP.NET Core to use our custom model binder to bind this class. The Name = "json" part tells our binder from which field of the multipart request it should read the JSON (this is the bindingContext.FieldName in the binder code).

Now we just need to pass a CreatePostRequestModel to our controller action, and we’re done:

[HttpPost]
public ActionResult<Post> CreatePost(CreatePostRequestModel post)
{
    ...
}
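
For testing, such a request can be sent with curl, putting the JSON in the json field and the file in the image field (adjust the URL and file name to your setup):

curl -X POST http://localhost:5000/api/blog/post \
     -F 'json={"title":"My first blog post","body":"This is going to be the best blog EVER!!!!","tags":["first post","hello"]};type=application/json' \
     -F 'image=@image.jpg;type=image/jpeg'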

This approach enables us to have a clean controller code and keep the benefits of model binding and validation. It messes up the Swagger/OpenAPI model though, but hey, you can’t have everything!

Asynchronous initialization in ASP.NET Core with custom middleware

Update: I no longer recommend the approach described in this post. I propose a better solution here: Asynchronous initialization in ASP.NET Core, revisited.

Sometimes you need to perform some initialization steps when your web application starts. However, putting such code in the Startup.Configure method is generally not a good idea, because:

  • There’s no current scope in the Configure method, so you can’t use services registered with "scoped" lifetime (this would throw an InvalidOperationException: Cannot resolve scoped service ‘MyApp.IMyService’ from root provider).
  • If the initialization code is asynchronous, you can’t await it, because the Configure method can’t be asynchronous. You could use .Wait() to block until it’s done, but it’s ugly.

Async initialization middleware

A simple way to do it involves writing a custom middleware that ensures initialization is complete before processing a request. This middleware starts the initialization process when the app starts, and upon receiving a request, will wait until the initialization is done before passing the request to the next middleware. A basic implementation could look like this:

public class AsyncInitializationMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger _logger;
    private Task _initializationTask;

    public AsyncInitializationMiddleware(RequestDelegate next, IApplicationLifetime lifetime, ILogger<AsyncInitializationMiddleware> logger)
    {
        _next = next;
        _logger = logger;

        // Start initialization when the app starts
        var startRegistration = default(CancellationTokenRegistration);
        startRegistration = lifetime.ApplicationStarted.Register(() =>
        {
            _initializationTask = InitializeAsync(lifetime.ApplicationStopping);
            startRegistration.Dispose();
        });
    }

    private async Task InitializeAsync(CancellationToken cancellationToken)
    {
        try
        {
            _logger.LogInformation("Initialization starting");

            // Do async initialization here
            await Task.Delay(2000);

            _logger.LogInformation("Initialization complete");
        }
        catch(Exception ex)
        {
            _logger.LogError(ex, "Initialization failed");
            throw;
        }
    }

    public async Task Invoke(HttpContext context)
    {
        // Take a copy to avoid race conditions
        var initializationTask = _initializationTask;
        if (initializationTask != null)
        {
            // Wait until initialization is complete before passing the request to next middleware
            await initializationTask;

            // Clear the task so that we don't await it again later.
            _initializationTask = null;
        }

        // Pass the request to the next middleware
        await _next(context);
    }
}

We can then add this middleware to the pipeline in the Startup.Configure method. It should be added early in the pipeline, before any other middleware that would need the initialization to be complete.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseMiddleware<AsyncInitializationMiddleware>();

    app.UseMvc();
}

Dependencies

At this point, our initialization middleware doesn’t depend on any service. If it has transient or singleton dependencies, they can just be injected into the middleware constructor as usual, and used from the InitializeAsync method.

However, if the dependencies are scoped, we’re in trouble: the middleware is instantiated directly from the root provider, not from a scope, so it can’t take scoped dependencies in its constructor.

Depending on scoped dependencies for initialization code doesn’t make a lot of sense anyway, since by definition scoped dependencies only exist in the context of a request. But if for some reason you need to do it anyway, the solution is to perform initialization in the middleware’s Invoke method, injecting the dependencies as method parameters (see the sketch after this list). This approach has at least two drawbacks:

  • Initialization won’t start until a request is received, so the first requests will have a delayed response time; this can be an issue if the initialization takes a long time.
  • You need to take special care to ensure thread safety: the initialization code must run only once, even if several requests arrive before initialization is done.
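
For illustration, that pattern might look something like this (a sketch; IMyService is a hypothetical scoped service, and the SemaphoreSlim guarantees the initialization runs only once):

private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);
private volatile bool _initialized;

// Scoped services can be injected as parameters of Invoke, which is called per request
public async Task Invoke(HttpContext context, IMyService myService)
{
    if (!_initialized)
    {
        await _semaphore.WaitAsync();
        try
        {
            if (!_initialized)
            {
                await InitializeAsync(myService);
                _initialized = true;
            }
        }
        finally
        {
            _semaphore.Release();
        }
    }

    await _next(context);
}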

Writing thread-safe code is hard and error-prone, so avoid getting in this situation if possible, e.g. by refactoring your services so that your initialization middleware doesn’t depend on any scoped service.

Hosting an ASP.NET Core 2 application on a Raspberry Pi

As you probably know, .NET Core runs on many platforms: Windows, macOS, and many UNIX/Linux variants, whether on x86/x64 architectures or on ARM. This enables a wide range of interesting scenarios… For instance, is a very small machine like a Raspberry Pi, with its low-performance ARM processor and small amount of RAM (1 GB on my RPi 2 Model B), enough to host an ASP.NET Core web app? Yes it is! At least as long as you don’t expect it to handle a very heavy load. So let’s see in practice how to deploy and expose an ASP.NET Core web app on a Raspberry Pi.

Creating the app

Let’s start from a basic ASP.NET Core 2.0 MVC app template:

dotnet new mvc

You don’t even need to open the project for now, just compile it as is and publish it for the Raspberry Pi:

dotnet publish -c Release -r linux-arm

Prerequisites

We’re going to use a Raspberry Pi running Raspbian, the official Linux distro for Raspberry Pi, which is based on Debian. To run a .NET Core 2.0 app, you’ll need version Jessie or higher (I used Raspbian Stretch Lite). Update: as Tomasz mentioned in the comments, you also need a Raspberry Pi 2 or more recent, with an ARMv7 processor; the first RPi has an ARMv6 processor and cannot run .NET Core.

Even though the app is self-contained and doesn’t require .NET Core to be installed on the RPi, you will still need a few low-level dependencies; they are listed here. You can install them using apt-get:

sudo apt-get update
sudo apt-get install curl libunwind8 gettext apt-transport-https

Deploy and run the application

Copy all files from the bin\Release\netcoreapp2.0\linux-arm\publish directory to the Raspberry Pi, and make the binary executable (replace MyWebApp with the name of your app):

chmod 755 ./MyWebApp

Run the app:

./MyWebApp

If nothing went wrong, the app should start listening on port 5000. But since it listens only on localhost, it’s only accessible from the Raspberry Pi itself…

Exposing the app on the network

There are several ways to fix that. The easiest is to set the ASPNETCORE_URLS environment variable to a value like http://*:5000/, in order to listen on all addresses. But if you intend to expose the app on the Internet, it might not be a good idea: the Kestrel server used by ASP.NET Core isn’t designed to be exposed directly to the outside world, and isn’t well protected against attacks. It is strongly recommended to put it behind a reverse proxy, such as nginx. Let’s see how to do that.
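
For reference, the environment variable approach is as simple as this (in a Bash shell on the Pi):

export ASPNETCORE_URLS="http://*:5000/"
./MyWebApp

Now, back to the reverse proxy.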

First, you need to install nginx if it’s not already there, using this command:

sudo apt-get install nginx

And start it like this:

sudo service nginx start

Now you need to configure it so that requests arriving on port 80 are passed to your app on port 5000. To do that, open the /etc/nginx/sites-available/default file in your favorite editor (I use vim because my RPi has no graphical environment). The default configuration defines only one server, listening on port 80. Under this server, look for the section starting with location /: this is the configuration for the root path on this server. Replace it with the following configuration:

location / {
        proxy_pass http://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Connection keep-alive;
}

Be careful to include the final slash in the destination URL.

This configuration is intentionally minimal; we’ll expand it a bit later.

Once you’re done editing the file, tell nginx to reload its configuration:

sudo nginx -s reload

From your PC, try to access the app on the Raspberry Pi by entering its IP address in your browser. If you did everything right, you should see the familiar home page from the ASP.NET Core app template!

Note that you’ll need to be patient: the first time the home page is loaded, its Razor view is compiled, which can take a while on the RPi’s low-end hardware. ASP.NET Core 2.0 doesn’t support precompilation of Razor views for self-contained apps; this is fixed in 2.1, which is currently in preview. So for now you have 3 options:

  • be patient and endure the delay on first page load
  • migrate to ASP.NET Core 2.1 preview, as explained here
  • make a non self-contained deployment, which requires .NET Core to be installed on the RPi

For this article, I chose the first option to keep things simple.

Proxy headers

At this point, we could just leave the app alone and call it a day. However, if your app is going to evolve into something more useful, there are a few things that aren’t going to work correctly in the current state. The problem is that the app isn’t aware that it’s behind a reverse proxy; as far as it knows, it’s only listening to requests on localhost on port 5000. Which means it cannot know:

  • the actual client IP (requests seem to come from localhost)
  • the protocol scheme used by the client
  • the actual host name specified by the client

For the app to know these things, it has to be told by the reverse proxy. Let’s change the nginx configuration so that it adds a few headers to incoming requests. These headers are not standard, but they’re widely used by proxy servers.

    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host   $http_host;
    proxy_set_header X-Forwarded-Proto  http;

X-Forwarded-For contains the client IP address, and optionally the addresses of proxies along the way. X-Forwarded-Host contains the host name initially specified by the client, and X-Forwarded-Proto contains the original protocol scheme (hard-coded to http here since HTTPS is not configured).
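
Putting it all together, the location block now looks like this:

location / {
        proxy_pass http://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Connection keep-alive;
        proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host   $http_host;
        proxy_set_header X-Forwarded-Proto  http;
}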

(Don’t forget to reload the nginx configuration)

We also need to change the ASP.NET Core app so that it takes these headers into account. This can be done easily using the ForwardedHeaders middleware; add this code at the start of the Startup.Configure method:

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.All
});

In case you’re wondering what a middleware is, this article might help!

This middleware will read the X-Forwarded-* headers from incoming requests, and use them to modify:

  • the Host and Scheme of the request
  • the Connection.RemoteIpAddress, which contains the client IP.

This way, the app will behave as if the request was received directly from the client.

Expose the app on a specific path

Our app is now accessible at the URL http://<ip-address>/, i.e. at the root of the server. But if we want to host several applications on the Raspberry Pi, it’s going to be a problem… We could put each app on a different port, but it’s not very convenient. It would be better to have each app on its own path, e.g. with URLs like http://<ip-address>/MyWebApp/.

It’s pretty easy to do with nginx. Edit the nginx configuration again, and replace location / with location /MyWebApp/ (note the final slash, it’s important). Reload the configuration, and try to access the app at its new URL… The home page loads, but the CSS and JS scripts don’t: error 404. In addition, links to other pages are now incorrect, and point to http://<ip-address>/Something instead of http://<ip-address>/MyWebApp/Something. What’s going on?

The reason is quite simple: the app isn’t aware that it’s not served from the root of the server, and generates all its links as if it were… To fix this, we can ask nginx to pass yet another header to our app:

proxy_set_header X-Forwarded-Path   /MyWebApp;

Note that this X-Forwarded-Path header is even less standard than the other ones, since I just made it up… So of course, there’s no built-in ASP.NET Core middleware that can handle it, and we’ll need to do it ourselves. Fortunately it’s pretty easy: we just need to use the path specified in the header as the path base. In the Startup.Configure method, add this after the UseForwardedHeaders statement:

// Patch path base with forwarded path
app.Use(async (context, next) =>
{
    var forwardedPath = context.Request.Headers["X-Forwarded-Path"].FirstOrDefault();
    if (!string.IsNullOrEmpty(forwardedPath))
    {
        context.Request.PathBase = forwardedPath;
    }

    await next();
});

Redeploy and restart the app, reload the nginx configuration, and try again: now it works!

Run the app as a service

If we want our app to be always running, restarting it manually every time it crashes or when the Raspberry Pi reboots isn’t going to be sustainable… What we want is to run it as a service, so that it starts when the system starts, and is automatically restarted if it stops working. To do this, we’ll take advantage of systemd, which manages services in most Linux distros, including Raspbian.

To create a systemd service, create a MyWebApp.service file in the /lib/systemd/system/ directory, with the following content:

[Unit]
Description=My ASP.NET Core Web App
After=nginx.service

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/apps/MyWebApp
ExecStart=/home/pi/apps/MyWebApp/MyWebApp
Restart=always

[Install]
WantedBy=multi-user.target

(replace the name and paths to match your app of course)

Enable the service like this:

sudo systemctl enable MyWebApp

And start it like this (new services aren’t started automatically):

sudo systemctl start MyWebApp

And that’s it, your app is now monitored by systemd, which will take care of starting or restarting it as needed.
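
To check that the service is running, or to inspect its logs, the usual systemd commands apply:

sudo systemctl status MyWebApp
sudo journalctl -u MyWebApp -f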

Conclusion

As you can see, running an ASP.NET Core 2.0 app on a Raspberry Pi is not only possible, but reasonably easy too; you just need a bit of fiddling with headers and reverse proxy settings. You won’t host the next Facebook or StackOverflow on your RPi, but it’s fine for small utility applications. Just give free rein to your imagination!

Writing a GitHub Webhook as an Azure Function

I recently experimented with Azure Functions and GitHub apps, and I wanted to share what I learned.

A bit of background

As you may already know, I’m one of the maintainers of the FakeItEasy mocking library. As is common in open-source projects, we use a workflow based on feature branches and pull requests. When a change is requested in a PR during code review, we usually make the change as a fixup commit, because it makes it easier to review, and because we like to keep a clean history. When the changes are approved, the author squashes the fixup commits before the PR is merged. Unfortunately, I’m a little absent-minded, and when I review a PR, I often forget to wait for the author to squash their commits before I merge… This causes the fixup commits to appear in the main dev branch, which is ugly.

Which leads me to the point of this post: I wanted to make a bot that could prevent a PR from being merged if it had commits that needed to be squashed (i.e. commits whose messages start with fixup! or squash!). And while I was at it, I thought I might as well make it usable by everyone, so I made it a GitHub app: DontMergeMeYet.

GitHub apps

Now, you might be wondering, what on Earth is a GitHub app? It’s simply a third-party application that is granted access to a GitHub repository using its own identity; what it can do with the repo depends on which permissions were granted. A GitHub app can also receive webhook notifications when events occur in the repo (e.g. a comment is posted, a pull request is opened, etc.).

A GitHub app could, for instance, react when a pull request is opened or updated, examine the PR details, and add a commit status to indicate whether the PR is ready to merge or not (this WIP app does this, but doesn’t take fixup commits into account).

As you can see, it’s a pretty good fit for what I’m trying to do!

In order to create a GitHub app, you need to go to the GitHub apps page, and click New GitHub app. You then fill in at least the name, homepage, and webhook URL, give the app the necessary permissions, and subscribe to the webhook events you need. In my case, I only needed read-only access to pull requests, read-write access to commit statuses, and to receive pull request events.

At this point, we don’t yet have a URL for the webhook, so enter any valid URL; we’ll change it later, once we’ve actually implemented the app.

Azure Functions

I hadn’t paid much attention to Azure Functions before, because I didn’t really see the point. So I started to implement my webhook as a full-blown ASP.NET Core app, but then I realized several things:

  • My app only had a single HTTP endpoint
  • It was fully stateless and didn’t need a database
  • If I wanted the webhook to always respond quickly, the Azure App Service had to be "always on"; that option isn’t available in free plans, and I didn’t want to pay a fortune for a better service plan.

I looked around and realized that Azure Functions had a "consumption plan", with a generous amount (1 million per month) of free requests before I had to pay anything, and functions using this plan are "always on". Since I had a single endpoint and no persistent state, an Azure Function seemed to be the best fit for my requirements.

Interestingly, Azure Functions can be triggered, among other things, by GitHub webhooks. This is very convenient as it takes care of validating the payload signature.

So, Azure Functions turn out to be a perfect match for implementing my webhook. Let’s look at how to create one.

Creating an Azure Function triggered by a GitHub webhook

It’s possible to write Azure functions in JavaScript, C# (csx) or F# directly in the portal, but I wanted the comfort of the IDE, so I used Visual Studio. To write an Azure Function in VS, follow the instructions on this page. When you create the project, a dialog appears to let you choose some options:

New function dialog

  • version of the Azure Functions runtime: v1 targets the full .NET Framework, v2 targets .NET Core. I picked v1, because I had trouble with the dependencies in .NET Core.
  • Trigger: GitHub webhooks don’t appear here, so just pick "HTTP Trigger", we’ll make the necessary changes in the code.
  • Storage account: pick the storage emulator; when you publish the function, a real Azure storage account will be set instead
  • Access rights: it doesn’t matter what you pick, we’ll override it in the code.

The project template creates a class named Function1 with a Run method that looks like this:

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
    {
        ...
    }
}

Rename the class to something that makes more sense, e.g. GitHubWebHook, and don’t forget to change the name in the FunctionName attribute as well.

Now we need to tell the Azure Functions runtime that this function is triggered by a GitHub webhook. To do this, change the method signature to look like this:

    [FunctionName("GitHubWebHook")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger("POST", WebHookType = "github")] HttpRequestMessage req,
        TraceWriter log)

GitHub webhooks always use the HTTP POST method; the WebHookType property is set to "github" to indicate that it’s a GitHub webhook.

Note that it doesn’t really matter how we respond to the webhook request; GitHub doesn’t do anything with the response. I chose to return a 204 (No content) response, but you can return a 200 or anything else, it doesn’t matter.
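
In the function body, that just means returning the response we want (HttpStatusCode requires using System.Net):

return req.CreateResponse(HttpStatusCode.NoContent);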

Publishing the Azure Function

To publish your function, just right click on the Function App project, and click Publish. This will show a wizard that will let you create a new Function App resource on your Azure subscription, or select an existing one. Not much to explain here, it’s pretty straightforward; just follow the wizard!

When the function is published, you need to tell GitHub how to invoke it. Open the Azure portal in your browser, navigate to your new Function App, and select the GitHubWebHook function. This will show the content of the (generated) function.json file. Above the code view, you will see two links: Get function URL, and Get GitHub secret:

Azure Function URL and secret

You need to copy the URL to the Webhook URL field in the GitHub app settings, and copy the secret to the Webhook secret field. This secret is used to calculate a signature for webhook payloads, so that the Azure Function can ensure the payloads really come from GitHub. As I mentioned earlier, this verification is done automatically when you use a GitHub HTTP trigger.

And that’s it, your webhook is online! Now you can go install the GitHub app into one of your repositories, and your webhook will start receiving events for this repo.

Points of interest

I won’t describe the whole implementation of my webhook in this post, because it would be too long and most of it isn’t that interesting, but I will just highlight a few points of interest. You can find the complete code on GitHub.

Parsing the payload

Rather than reinventing the wheel, we can leverage the Octokit .NET library. Octokit is a library made by GitHub to consume the GitHub REST API. It contains classes representing the entities used in the API, including webhook payloads, so we can just deserialize the request content as a PullRequestEventPayload. However, if we just try to do this with JSON.NET, this isn’t going to work: Octokit doesn’t use JSON.NET, so the classes aren’t decorated with JSON.NET attributes to map the C# property names to the JSON property names. Instead, we need to use the JSON serializer that is included in Octokit, called SimpleJsonSerializer:

private static async Task<PullRequestEventPayload> DeserializePayloadAsync(HttpContent content)
{
    string json = await content.ReadAsStringAsync();
    var serializer = new SimpleJsonSerializer();
    return serializer.Deserialize<PullRequestEventPayload>(json);
}

There’s also another issue: the PullRequestEventPayload from Octokit is missing the Installation property, which we’re going to need later to authenticate with the GitHub API. An easy workaround is to make a new class that inherits PullRequestEventPayload and add the new property:

public class PullRequestPayload : PullRequestEventPayload
{
    public Installation Installation { get; set; }
}

public class Installation
{
    public int Id { get; set; }
}

And we’ll just use PullRequestPayload instead of PullRequestEventPayload.

Authenticating with the GitHub API

We’re going to need to call the GitHub REST API for two things:

  • to get the list of commits in the pull request
  • to update the commit status

In order to access the API, we’re going to need credentials… but which credentials? We could just generate a personal access token and use that, but then we would access the API as a "real" GitHub user, and we would only be able to access our own repositories (for writing, at least).

As I mentioned earlier, GitHub apps have their own identity. What I didn’t say is that when authenticated as themselves, there isn’t much they’re allowed to do: they can only get management information about themselves, and get a token to authenticate as an installation. An installation is, roughly, an instance of the application that is installed on one or more repos. When someone installs your app on their repo, it creates an installation. Once you get a token for an installation, you can access all the APIs allowed by the app’s permissions on the repos it’s installed on.

However, there are a few hoops to jump through to get this token… This page describes the process in detail.

The first step is to generate a JSON Web Token (JWT) for the app. This token has to contain the following claims:

  • iat: the timestamp at which the token was issued
  • exp: the timestamp at which the token expires
  • iss: the issuer, which is actually the app ID (found in the GitHub app settings page)

This JWT needs to be signed with the RS256 algorithm (RSA signature with SHA256); in order to sign it, you need a private key, which must be generated from the GitHub app settings page. You can download the private key in PEM format, and store it somewhere your app can access it. Unfortunately, the .NET APIs to generate and sign a JWT don’t handle the PEM format; they need an RSAParameters object… But Stack Overflow is our friend, and this answer contains the code we need to convert a PEM private key to an RSAParameters object. I just kept the part I needed, and manually reformatted the PEM private key to remove the header, footer, and newlines, so that it could easily be stored in the settings as a single line of text.

Once you have the private key as an RSAParameters object, you can generate a JWT like this:

public string GetTokenForApplication()
{
    var key = new RsaSecurityKey(_settings.RsaParameters);
    var creds = new SigningCredentials(key, SecurityAlgorithms.RsaSha256);
    var now = DateTime.UtcNow;
    var token = new JwtSecurityToken(claims: new[]
        {
            new Claim("iat", now.ToUnixTimeStamp().ToString(), ClaimValueTypes.Integer),
            new Claim("exp", now.AddMinutes(10).ToUnixTimeStamp().ToString(), ClaimValueTypes.Integer),
            new Claim("iss", _settings.AppId)
        },
        signingCredentials: creds);

    var jwt = new JwtSecurityTokenHandler().WriteToken(token);
    return jwt;
}

A few notes about this code:

  • It requires the following NuGet packages:
    • Microsoft.IdentityModel.Tokens 5.2.1
    • System.IdentityModel.Tokens.Jwt 5.2.1
  • ToUnixTimeStamp is an extension method that converts a DateTime to a UNIX timestamp; you can find it here
  • As per the GitHub documentation, the token lifetime cannot exceed 10 minutes

Once you have the JWT, you can get an installation access token by calling the "new installation token" API endpoint, authenticating with the generated JWT as a Bearer token:

public async Task<string> GetTokenForInstallationAsync(int installationId)
{
    var appToken = GetTokenForApplication();
    using (var client = new HttpClient())
    {
        string url = $"https://api.github.com/installations/{installationId}/access_tokens";
        var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            Headers =
            {
                Authorization = new AuthenticationHeaderValue("Bearer", appToken),
                UserAgent =
                {
                    ProductInfoHeaderValue.Parse("DontMergeMeYet"),
                },
                Accept =
                {
                    MediaTypeWithQualityHeaderValue.Parse("application/vnd.github.machine-man-preview+json")
                }
            }
        };
        using (var response = await client.SendAsync(request))
        {
            response.EnsureSuccessStatusCode();
            var json = await response.Content.ReadAsStringAsync();
            var obj = JObject.Parse(json);
            return obj["token"]?.Value<string>();
        }
    }
}

OK, almost there. Now we just need to use the installation token to call the GitHub API. This can be done easily with Octokit:

private IGitHubClient CreateGitHubClient(string installationToken)
{
    var userAgent = new ProductHeaderValue("DontMergeMeYet");
    return new GitHubClient(userAgent)
    {
        Credentials = new Credentials(installationToken)
    };
}

And that’s it, you can now call the GitHub API as an installation of your app.
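
For instance, checking a PR for fixup commits could look roughly like this (a simplified sketch, not the exact code from the repo):

// payload is the deserialized PullRequestPayload from earlier
var installationToken = await GetTokenForInstallationAsync(payload.Installation.Id);
var client = CreateGitHubClient(installationToken);

// Get the PR's commits and look for fixup!/squash! messages
var commits = await client.PullRequest.Commits(payload.Repository.Id, payload.PullRequest.Number);
bool needsSquashing = commits.Any(c =>
    c.Commit.Message.StartsWith("fixup!") ||
    c.Commit.Message.StartsWith("squash!"));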

Note: the code above isn’t exactly what you’ll find in the repo; I simplified it a little for the sake of clarity.

Testing locally using ngrok

When creating your Azure Function, it’s useful to be able to debug on your local machine. However, how will GitHub be able to call your function if it doesn’t have a publicly accessible URL? The answer is a tool called ngrok. Ngrok can create a temporary host name that forwards all traffic to a port on your local machine. To use it, create an account (it’s free) and download the command line tool. Once logged in to the ngrok website, a page will give you the command to save an authentication token on your machine. Just execute this command:

ngrok authtoken 1beErG2VTJJ0azL3r2SBn_2iz8johqNv612vaXa3Rkm

Start your Azure Function in debug from Visual Studio; the console will show you the local URL of the function, something like http://localhost:7071/api/GitHubWebHook. Note the port, and in a new console, start ngrok like this:

ngrok http 7071 --host-header rewrite

This will create a new hostname and start forwarding traffic to the 7071 port on your machine. The --host-header rewrite argument causes ngrok to change the Host HTTP header to localhost, rather than the temporary hostname; Azure Functions don’t work correctly without this.

You can see the temporary hostname in the command output:

ngrok by @inconshreveable                                                                                                                                                                                         (Ctrl+C to quit)

Session Status                online
Account                       Thomas Levesque (Plan: Free)
Version                       2.2.8
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://89e14c16.ngrok.io -> localhost:7071
Forwarding                    https://89e14c16.ngrok.io -> localhost:7071

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00

Finally, go to the GitHub app settings, and change the webhook URL to https://89e14c16.ngrok.io/api/GitHubWebHook (i.e. the temporary domain with the same path as the local URL).

Now you’re all set. GitHub will send the webhook payloads to ngrok, which will forward them to your app running locally.

Note that unless you have a paid ngrok plan, the temporary subdomain changes every time you start the tool, which is annoying. It’s better to keep ngrok running for the whole development session; otherwise you’ll need to update the GitHub app settings again.

Conclusion

Hopefully you learned a few things from this article. With Azure Functions, it’s almost trivial to implement a GitHub webhook (the only tricky part is the authentication needed to call the GitHub API, and not all webhooks need that). It’s much lighter than a full-blown web app, and much simpler to write: you don’t have to care about MVC, routing, services, etc. And as if that wasn’t enough, the pricing model for Azure Functions makes it a very cheap option for hosting a webhook!

Understanding the ASP.NET Core middleware pipeline

Middlewhat?

The ASP.NET Core architecture features a system of middleware: pieces of code that handle requests and responses. Middleware are chained to each other to form a pipeline. Incoming requests are passed through the pipeline, where each middleware has a chance to do something with them before passing them to the next middleware. Outgoing responses are also passed through the pipeline, in reverse order. If this sounds very abstract, the following diagram from the official ASP.NET Core documentation should help you understand:

Middleware pipeline

Middleware can do all sorts of things, such as handling authentication, errors, static files, etc. MVC in ASP.NET Core is also implemented as a middleware.

Configuring the pipeline

You typically configure the ASP.NET pipeline in the Configure method of your Startup class, by calling Use* methods on the IApplicationBuilder. Here’s an example straight from the docs:

public void Configure(IApplicationBuilder app)
{
    app.UseExceptionHandler("/Home/Error");
    app.UseStaticFiles();
    app.UseAuthentication();
    app.UseMvcWithDefaultRoute();
}

Each Use* method adds a middleware to the pipeline. The order in which they’re added determines the order in which requests will traverse them. So an incoming request will first traverse the exception handler middleware, then the static files middleware, then the authentication middleware, and will eventually be handled by the MVC middleware.

The Use* methods in this example are actually just "shortcuts" to make it easier to build the pipeline. Behind the scenes, they all end up using (directly or indirectly) these low-level primitives: Use and Run. Both add a middleware to the pipeline, the difference is that Run adds a terminal middleware, i.e. a middleware that is the last in the pipeline.

A basic pipeline with no branches

Let’s look at a simple example, using only the Use and Run primitives:

public void Configure(IApplicationBuilder app)
{
    // Middleware A
    app.Use(async (context, next) =>
    {
        Console.WriteLine("A (before)");
        await next();
        Console.WriteLine("A (after)");
    });

    // Middleware B
    app.Use(async (context, next) =>
    {
        Console.WriteLine("B (before)");
        await next();
        Console.WriteLine("B (after)");
    });

    // Middleware C (terminal)
    app.Run(async context =>
    {
        Console.WriteLine("C");
        await context.Response.WriteAsync("Hello world");
    });
}

Here, each middleware is defined inline as an anonymous method; they could also be defined as full-blown classes, but for this example I picked the more concise option (a class-based equivalent is sketched after the diagram below). Non-terminal middleware take two arguments: the HttpContext and a delegate to call the next middleware. Terminal middleware only take the HttpContext. Here we have two middleware A and B that just log to the console, and a terminal middleware C that writes the response. Here’s the console output when we send a request to our app:

A (before)
B (before)
C
B (after)
A (after)

We can see that each middleware was traversed in the order in which it was added, then traversed again in reverse order. The pipeline can be represented like this:

Basic pipeline
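
By the way, here’s what middleware A could look like as a full-blown class instead of an inline delegate (a minimal sketch; the class name is mine):

public class MiddlewareA
{
    private readonly RequestDelegate _next;

    public MiddlewareA(RequestDelegate next)
    {
        _next = next;
    }

    // Invoked once per request; _next invokes the rest of the pipeline
    public async Task Invoke(HttpContext context)
    {
        Console.WriteLine("A (before)");
        await _next(context);
        Console.WriteLine("A (after)");
    }
}

It would then be added to the pipeline with app.UseMiddleware<MiddlewareA>().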

Short-circuiting middleware

A middleware doesn’t necessarily have to call the next middleware. For instance, if the static files middleware can handle a request, it doesn’t need to pass it down to the rest of the pipeline, it can respond immediately. This behavior is called short-circuiting the pipeline.

In the previous example, let’s see what happens if middleware B short-circuits the pipeline by not calling next().
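
Concretely, middleware B becomes this (the same code as before, with the call to next() commented out):

// Middleware B, now short-circuiting the pipeline
app.Use(async (context, next) =>
{
    Console.WriteLine("B (before)");
    // await next();    // the rest of the pipeline is no longer invoked
    Console.WriteLine("B (after)");
});

We now get the following output: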

A (before)
B (before)
B (after)
A (after)

As you can see, middleware C is never invoked. The pipeline now looks like this:

Short-circuited pipeline

Branching the pipeline

In the previous examples, there was only one "branch" in the pipeline: the middleware coming after A was always B, and the middleware coming after B was always C. But it doesn’t have to be that way. You might want a given request to be processed by a completely different pipeline, based on the path or anything else.

There are two types of branches: branches that rejoin the main pipeline, and branches that don’t.

Making a non-rejoining branch

This can be done using the Map or MapWhen method. Map lets you specify a branch based on the request path. MapWhen gives you more control: you can specify a predicate on the HttpContext to decide whether to branch or not (I’ll show an example of MapWhen a bit further down). Let’s look at a simple example using Map:

public void Configure(IApplicationBuilder app)
{
    app.Use(async (context, next) =>
    {
        Console.WriteLine("A (before)");
        await next();
        Console.WriteLine("A (after)");
    });

    app.Map(
        new PathString("/foo"),
        a => a.Use(async (context, next) =>
        {
            Console.WriteLine("B (before)");
            await next();
            Console.WriteLine("B (after)");
        }));

    app.Run(async context =>
    {
        Console.WriteLine("C");
        await context.Response.WriteAsync("Hello world");
    });
}

The first argument for Map is a PathString representing the path prefix of the request. The second argument is a delegate that configures the branch’s pipeline (the a parameter represents the IApplicationBuilder for the branch). The branch defined by the delegate will process the request if its path starts with the specified path prefix.

For a request that doesn’t start with /foo, this code produces the following output:

A (before)
C
A (after)

Middleware B is not invoked, since it’s in the branch and the request doesn’t match the prefix for the branch. But for a request that does start with /foo, we get the following output:

A (before)
B (before)
B (after)
A (after)

Note that this request returns a 404 (Not Found) response: middleware B calls next(), but there’s no next middleware in the branch, so the pipeline falls back to returning a 404. To solve this, we could use Run instead of Use in the branch, or just not call next().
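
For instance, here’s what the branch could look like with a terminal middleware (a minimal sketch based on the example above):

// The branch now ends with Run, so it always produces a response
app.Map(
    new PathString("/foo"),
    a => a.Run(async context =>
    {
        Console.WriteLine("B");
        await context.Response.WriteAsync("Handled by branch B");
    }));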

Going back to the original code, the pipeline it defines can be represented as follows:

Non-rejoining branch

(I omitted the response arrows for clarity)

As you can see, the branch with middleware B doesn’t rejoin the main pipeline, so middleware C isn’t called.
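
As for MapWhen, here’s what a similar non-rejoining branch could look like with a predicate instead of a path prefix (a sketch; the query string parameter is made up):

// Branch when the request has a "foo" query string parameter
app.MapWhen(
    context => context.Request.Query.ContainsKey("foo"),
    a => a.Run(async context =>
    {
        Console.WriteLine("B");
        await context.Response.WriteAsync("Handled by branch B");
    }));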

Making a rejoining branch

You can make a branch that rejoins the main pipeline by using the UseWhen method. This method accepts a predicate on the HttpContext to decide whether to branch or not. The branch will rejoin the main pipeline where it left it. Here’s an example similar to the previous one, but with a rejoining branch:

public void Configure(IApplicationBuilder app)
{
    app.Use(async (context, next) =>
    {
        Console.WriteLine("A (before)");
        await next();
        Console.WriteLine("A (after)");
    });

    app.UseWhen(
        context => context.Request.Path.StartsWithSegments(new PathString("/foo")),
        a => a.Use(async (context, next) =>
        {
            Console.WriteLine("B (before)");
            await next();
            Console.WriteLine("B (after)");
        }));

    app.Run(async context =>
    {
        Console.WriteLine("C");
        await context.Response.WriteAsync("Hello world");
    });
}

For a request that doesn’t start with /foo, this code produces the same output as the previous example:

A (before)
C
A (after)

Again, middleware B is not invoked, since it’s in the branch and the request doesn’t match the predicate for the branch. But for a request that does start with /foo, we get the following output:

A (before)
B (before)
C
B (after)
A (after)

We can see that the request passes through the branch (middleware B), then goes back to the main pipeline, ending with middleware C. This pipeline can be represented like this:

Rejoining branch

Note that there is no Use method that accepts a PathString to specify the path prefix. I’m not sure why it’s not included, but it would be easy to write, using UseWhen:

public static IApplicationBuilder Use(this IApplicationBuilder builder, PathString pathMatch, Action<IApplicationBuilder> configuration)
{
    return builder.UseWhen(
        context => context.Request.Path.StartsWithSegments(pathMatch),
        configuration);
}
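
With this extension in place, the rejoining branch from the previous example could be written like this:

app.Use(new PathString("/foo"), a => a.Use(async (context, next) =>
{
    Console.WriteLine("B (before)");
    await next();
    Console.WriteLine("B (after)");
}));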

Conclusion

As you can see, the idea behind the middleware pipeline is quite simple, but it’s very powerful. Most of the features baked into ASP.NET Core (authentication, static files, caching, MVC, etc.) are implemented as middleware. And of course, it’s easy to write your own!
