The Bug That Made It to Production (And Shouldn’t Have)

It was a Tuesday afternoon, and I was mid-bite into a sandwich when my phone rang. The client was calm, which somehow made it worse. “The hero section on the homepage is gone. It’s just… blank. The rest of the page looks fine, but there’s a big white gap at the top.”

I pulled up the site. She was right. The hero block was missing. Not broken — missing. The page loaded fine, no console errors, no 500 status codes, no crash. The hero section simply wasn’t rendering. Umbraco’s Content Delivery API was returning JSON, but the heroHeading property that our Next.js HeroBlock component expected had been renamed to heroTitle by a colleague who was cleaning up the content model the day before.

The rename made sense. heroTitle is clearer than heroHeading. My colleague updated the Umbraco document type, republished the content, and saw the correct data in the backoffice. Everything looked right on the CMS side. But on the Next.js side, the HeroBlock component was still looking for heroHeading, finding undefined, and silently rendering nothing. No error boundary tripped because there was no error — just a missing optional property.

This is the insidious failure mode of headless CMS architectures. The API is the contract between two systems maintained by different people (or the same person wearing different hats). When one side of the contract changes without the other side knowing, things don’t crash — they just quietly break. And you find out when a client calls you on a Tuesday afternoon.

That bug cost us four hours of investigation, an embarrassing client call, and a very unpleasant retrospective. It also became the catalyst for everything in this post. In Part 5, we added AI content generation with Gemini. Now we need to make sure everything — the backend logic, the frontend rendering, and especially the API contract between them — is tested well enough that a Tuesday afternoon call never happens again.

The Testing Strategy

Before diving into code, here’s how I think about testing a headless CMS stack. It’s not the traditional testing pyramid — it’s more like a testing diamond, because the API contract layer in the middle carries the most risk.

Backend (Umbraco / .NET):

  • Unit tests for domain logic (validation rules, AI prompt builders, tenant config)
  • Integration tests for the Content Delivery API (does the real endpoint return what we expect?)
  • Architecture tests to keep Clean Architecture boundaries honest

Contract (the API between systems):

  • Consumer-driven contract tests with Pact (this is the layer that would have caught the Tuesday bug)

Frontend (Next.js):

  • Component tests for individual blocks with mock data
  • E2E tests with Playwright for full page rendering
  • Visual regression tests to catch CSS and layout regressions across themes

Performance:

  • Lighthouse CI for automated audits
  • k6 for API load testing

Let’s build each layer.

Backend Testing: Umbraco on .NET 10

MarketingOS uses Clean Architecture with four projects: MarketingOS.Domain, MarketingOS.Application, MarketingOS.Infrastructure, and MarketingOS.Web. The test projects mirror this structure.

tests/
├── MarketingOS.Domain.Tests/
├── MarketingOS.Application.Tests/
├── MarketingOS.Infrastructure.Tests/
├── MarketingOS.Web.Tests/          # Integration tests
└── MarketingOS.Architecture.Tests/ # NetArchTest rules

Unit Tests with xUnit

The domain and application layers contain the business logic that’s most important to test — and most straightforward. No HTTP context, no database, no Umbraco services. Just logic.

Here’s a real example: testing the GenerateBlogDraftHandler, which is a MediatR command handler that takes a topic, calls the AI content service, validates the result, and returns a draft blog post.

First, the command and handler:

// MarketingOS.Application/Features/Content/Commands/GenerateBlogDraft.cs
public record GenerateBlogDraftCommand(
    string Topic,
    string TargetAudience,
    string Tone,
    string TenantId
) : IRequest<Result<BlogDraftResponse>>;

public sealed class GenerateBlogDraftHandler
    : IRequestHandler<GenerateBlogDraftCommand, Result<BlogDraftResponse>>
{
    private readonly IAiContentService _aiContentService;
    private readonly ITenantConfigProvider _tenantConfig;
    private readonly ILogger<GenerateBlogDraftHandler> _logger;

    public GenerateBlogDraftHandler(
        IAiContentService aiContentService,
        ITenantConfigProvider tenantConfig,
        ILogger<GenerateBlogDraftHandler> logger)
    {
        _aiContentService = aiContentService;
        _tenantConfig = tenantConfig;
        _logger = logger;
    }

    public async Task<Result<BlogDraftResponse>> Handle(
        GenerateBlogDraftCommand request,
        CancellationToken cancellationToken)
    {
        var config = await _tenantConfig.GetConfigAsync(request.TenantId);
        if (config is null)
            return Result<BlogDraftResponse>.Failure("Tenant not found");

        var prompt = new BlogPromptBuilder()
            .WithTopic(request.Topic)
            .WithAudience(request.TargetAudience)
            .WithTone(request.Tone)
            .WithBrandVoice(config.BrandVoice)
            .Build();

        var aiResponse = await _aiContentService
            .GenerateContentAsync(prompt, cancellationToken);

        if (!aiResponse.IsSuccess)
            return Result<BlogDraftResponse>.Failure(
                $"AI generation failed: {aiResponse.Error}");

        var draft = new BlogDraftResponse(
            Title: aiResponse.Value.Title,
            Content: aiResponse.Value.Body,
            MetaDescription: aiResponse.Value.MetaDescription,
            SuggestedTags: aiResponse.Value.Tags
        );

        return Result<BlogDraftResponse>.Success(draft);
    }
}

Now the test class. I use NSubstitute for mocking — it’s less verbose than Moq and reads more naturally:

// tests/MarketingOS.Application.Tests/Features/Content/GenerateBlogDraftHandlerTests.cs
using FluentAssertions;
using Microsoft.Extensions.Logging;
using NSubstitute;
using Xunit;

namespace MarketingOS.Application.Tests.Features.Content;

public class GenerateBlogDraftHandlerTests
{
    private readonly IAiContentService _aiContentService;
    private readonly ITenantConfigProvider _tenantConfig;
    private readonly ILogger<GenerateBlogDraftHandler> _logger;
    private readonly GenerateBlogDraftHandler _sut;

    public GenerateBlogDraftHandlerTests()
    {
        _aiContentService = Substitute.For<IAiContentService>();
        _tenantConfig = Substitute.For<ITenantConfigProvider>();
        _logger = Substitute.For<ILogger<GenerateBlogDraftHandler>>();
        _sut = new GenerateBlogDraftHandler(
            _aiContentService, _tenantConfig, _logger);
    }

    [Fact]
    public async Task Handle_ValidRequest_ReturnsBlogDraft()
    {
        // Arrange
        var command = new GenerateBlogDraftCommand(
            Topic: "Headless CMS Benefits",
            TargetAudience: "Marketing managers",
            Tone: "Professional",
            TenantId: "tenant-abc"
        );

        var tenantConfig = new TenantConfig
        {
            TenantId = "tenant-abc",
            BrandVoice = "Authoritative yet approachable"
        };

        var aiResponse = Result<AiContentResponse>.Success(
            new AiContentResponse(
                Title: "5 Reasons Marketing Teams Love Headless CMS",
                Body: "# Introduction\n\nHeadless CMS is changing...",
                MetaDescription: "Discover why headless CMS platforms...",
                Tags: new[] { "cms", "marketing", "headless" }
            ));

        _tenantConfig.GetConfigAsync("tenant-abc")
            .Returns(tenantConfig);
        _aiContentService
            .GenerateContentAsync(Arg.Any<string>(), Arg.Any<CancellationToken>())
            .Returns(aiResponse);

        // Act
        var result = await _sut.Handle(command, CancellationToken.None);

        // Assert
        result.IsSuccess.Should().BeTrue();
        result.Value.Title.Should().Be("5 Reasons Marketing Teams Love Headless CMS");
        result.Value.SuggestedTags.Should().Contain("headless");
    }

    [Fact]
    public async Task Handle_UnknownTenant_ReturnsFailure()
    {
        var command = new GenerateBlogDraftCommand(
            "Topic", "Audience", "Tone", "nonexistent-tenant");

        _tenantConfig.GetConfigAsync("nonexistent-tenant")
            .Returns((TenantConfig?)null);

        var result = await _sut.Handle(command, CancellationToken.None);

        result.IsSuccess.Should().BeFalse();
        result.Error.Should().Contain("Tenant not found");
    }

    [Fact]
    public async Task Handle_AiServiceFails_ReturnsFailure()
    {
        var command = new GenerateBlogDraftCommand(
            "Topic", "Audience", "Tone", "tenant-abc");

        _tenantConfig.GetConfigAsync("tenant-abc")
            .Returns(new TenantConfig
            {
                TenantId = "tenant-abc",
                BrandVoice = "Friendly"
            });

        _aiContentService
            .GenerateContentAsync(Arg.Any<string>(), Arg.Any<CancellationToken>())
            .Returns(Result<AiContentResponse>.Failure("Rate limit exceeded"));

        var result = await _sut.Handle(command, CancellationToken.None);

        result.IsSuccess.Should().BeFalse();
        result.Error.Should().Contain("AI generation failed");
    }

    [Fact]
    public async Task Handle_PassesBrandVoiceToPrompt()
    {
        var command = new GenerateBlogDraftCommand(
            "Topic", "Audience", "Casual", "tenant-abc");

        _tenantConfig.GetConfigAsync("tenant-abc")
            .Returns(new TenantConfig
            {
                TenantId = "tenant-abc",
                BrandVoice = "Bold and irreverent"
            });

        _aiContentService
            .GenerateContentAsync(Arg.Any<string>(), Arg.Any<CancellationToken>())
            .Returns(Result<AiContentResponse>.Success(
                new AiContentResponse("T", "B", "M", Array.Empty<string>())));

        await _sut.Handle(command, CancellationToken.None);

        await _aiContentService.Received(1)
            .GenerateContentAsync(
                Arg.Is<string>(p => p.Contains("Bold and irreverent")),
                Arg.Any<CancellationToken>());
    }
}

A few things to notice about this test structure:

The system under test (_sut) is created in the constructor, not in each test. Every test gets a fresh instance because xUnit creates a new class instance per test method.

FluentAssertions makes the assertions readable. result.IsSuccess.Should().BeTrue() reads like English. When a test fails, FluentAssertions gives you a message like “Expected result.IsSuccess to be true, but found false” instead of the generic “Assert.True failed.”

The last test verifies behavior, not state. It checks that the brand voice from the tenant config actually makes it into the prompt sent to the AI service. This catches a subtle bug where someone refactors the prompt builder and accidentally drops the brand voice parameter.

I also test the BlogPromptBuilder in isolation, the FluentValidation validators for each command, and the domain value objects. These tests are fast — the full domain and application test suite runs in under 2 seconds.

Integration Tests with WebApplicationFactory and Testcontainers

Unit tests are great for logic, but they don’t tell you whether the Content Delivery API actually returns the right JSON structure. For that, we need integration tests that hit the real Umbraco API.

The challenge: Umbraco needs a database. A real SQL Server instance, with the Umbraco schema installed and test content seeded. Running that in CI without flaky infrastructure is the hard part.

Enter Testcontainers. It spins up a SQL Server Docker container per test run, seeds it with test data, and tears it down when tests finish. No shared state, no “works on my machine” debugging.

// tests/MarketingOS.Web.Tests/Infrastructure/MarketingOsWebFactory.cs
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.Configuration;
using Testcontainers.MsSql;
using Xunit;

namespace MarketingOS.Web.Tests.Infrastructure;

public class MarketingOsWebFactory
    : WebApplicationFactory<Program>, IAsyncLifetime
{
    private readonly MsSqlContainer _sqlContainer = new MsSqlBuilder()
        .WithImage("mcr.microsoft.com/mssql/server:2022-latest")
        .WithPassword("Test@Password123!")
        .Build();

    public async Task InitializeAsync()
    {
        await _sqlContainer.StartAsync();
    }

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((context, config) =>
        {
            config.AddInMemoryCollection(new Dictionary<string, string?>
            {
                ["ConnectionStrings:umbracoDbDSN"] =
                    _sqlContainer.GetConnectionString(),
                ["ConnectionStrings:umbracoDbDSN_ProviderName"] =
                    "Microsoft.Data.SqlClient",
                ["Umbraco:CMS:DeliveryApi:Enabled"] = "true",
                ["Umbraco:CMS:DeliveryApi:PublicAccess"] = "true",
                ["Umbraco:CMS:Unattended:InstallUnattended"] = "true",
                ["Umbraco:CMS:Unattended:UnattendedUserName"] = "test@test.com",
                ["Umbraco:CMS:Unattended:UnattendedUserPassword"] = "Test1234!",
                ["Umbraco:CMS:Unattended:UnattendedUserEmail"] = "test@test.com",
            });
        });
    }

    public new async Task DisposeAsync()
    {
        // Stop the web host first, then tear down the database it depends on
        await base.DisposeAsync();
        await _sqlContainer.DisposeAsync();
    }
}

And the integration test itself:

// tests/MarketingOS.Web.Tests/DeliveryApi/ContentDeliveryApiTests.cs
using System.Net;
using System.Net.Http.Json;
using FluentAssertions;
using MarketingOS.Web.Tests.Infrastructure;
using Xunit;

namespace MarketingOS.Web.Tests.DeliveryApi;

[Collection("WebFactory")]
public class ContentDeliveryApiTests : IClassFixture<MarketingOsWebFactory>
{
    private readonly HttpClient _client;

    public ContentDeliveryApiTests(MarketingOsWebFactory factory)
    {
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task GetContent_HomePage_ReturnsExpectedStructure()
    {
        // Arrange — content is seeded during Umbraco unattended install
        // plus our custom IComposer that seeds test content

        // Act
        var response = await _client.GetAsync(
            "/umbraco/delivery/api/v2/content?filter=contentType:homePage");

        // Assert
        response.StatusCode.Should().Be(HttpStatusCode.OK);

        var content = await response.Content
            .ReadFromJsonAsync<DeliveryApiResponse>();

        content.Should().NotBeNull();
        content!.Items.Should().NotBeEmpty();

        var homePage = content.Items.First();
        homePage.ContentType.Should().Be("homePage");
        homePage.Properties.Should().ContainKey("heroHeading");
        homePage.Properties.Should().ContainKey("heroSubheading");
        homePage.Properties.Should().ContainKey("heroCtaText");
        homePage.Properties.Should().ContainKey("featuredBlocks");
    }

    [Fact]
    public async Task GetContent_ByRoute_ReturnsCorrectPage()
    {
        var response = await _client.GetAsync(
            "/umbraco/delivery/api/v2/content/item/services");

        response.StatusCode.Should().Be(HttpStatusCode.OK);

        var content = await response.Content
            .ReadFromJsonAsync<DeliveryApiContentItem>();

        content.Should().NotBeNull();
        content!.Route.Path.Should().Be("/services/");
    }

    [Fact]
    public async Task GetContent_NonExistentRoute_Returns404()
    {
        var response = await _client.GetAsync(
            "/umbraco/delivery/api/v2/content/item/this-page-does-not-exist");

        response.StatusCode.Should().Be(HttpStatusCode.NotFound);
    }

    [Fact]
    public async Task GetContent_BlockList_ContainsBlockStructure()
    {
        var response = await _client.GetAsync(
            "/umbraco/delivery/api/v2/content?filter=contentType:landingPage");

        var content = await response.Content
            .ReadFromJsonAsync<DeliveryApiResponse>();

        var landingPage = content!.Items.First();

        // Verify the block structure the frontend depends on is present
        landingPage.Properties.Should().ContainKey("blocks");
        landingPage.Properties["blocks"].Should().NotBeNull();
}

// Minimal DTOs for deserialization
public record DeliveryApiResponse(DeliveryApiContentItem[] Items, int Total);
public record DeliveryApiContentItem(
    string ContentType,
    DeliveryApiRoute Route,
    Dictionary<string, object> Properties
);
public record DeliveryApiRoute(string Path);

These integration tests take 30-45 seconds because Testcontainers needs to pull and start the SQL Server container (cached after the first run). But they’re worth every second. They test the real Umbraco pipeline — routing, content resolution, property conversion, JSON serialization. If the Content Delivery API changes its response format in an Umbraco upgrade, these tests catch it.

Testing the Gemini Service Without Real API Calls

The GeminiContentService calls the Google AI API. We absolutely do not want real API calls in CI — they’re slow, flaky, rate-limited, and cost money. Instead, we record HTTP responses and replay them using a custom DelegatingHandler.

// tests/MarketingOS.Infrastructure.Tests/Services/GeminiContentServiceTests.cs
using System.Net;
using System.Text;
using System.Text.Json;
using FluentAssertions;
using Microsoft.Extensions.Options;
using NSubstitute;
using Xunit;

namespace MarketingOS.Infrastructure.Tests.Services;

public class GeminiContentServiceTests
{
    [Fact]
    public async Task GenerateContentAsync_ValidPrompt_ReturnsContent()
    {
        // Arrange — a recorded Gemini API response
        var geminiResponse = new
        {
            candidates = new[]
            {
                new
                {
                    content = new
                    {
                        parts = new[]
                        {
                            new { text = JsonSerializer.Serialize(new
                            {
                                title = "Test Title",
                                body = "Test body content",
                                metaDescription = "Test meta",
                                tags = new[] { "test" }
                            })}
                        }
                    }
                }
            }
        };

        var handler = new FakeHttpMessageHandler(
            new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(
                    JsonSerializer.Serialize(geminiResponse),
                    Encoding.UTF8,
                    "application/json")
            });

        var httpClient = new HttpClient(handler)
        {
            BaseAddress = new Uri("https://generativelanguage.googleapis.com/")
        };

        var options = Options.Create(new GeminiOptions
        {
            ApiKey = "fake-key-for-testing",
            Model = "gemini-2.0-flash"
        });

        var sut = new GeminiContentService(httpClient, options);

        // Act
        var result = await sut.GenerateContentAsync(
            "Write about testing", CancellationToken.None);

        // Assert
        result.IsSuccess.Should().BeTrue();
        result.Value.Title.Should().Be("Test Title");
    }
}

public class FakeHttpMessageHandler : DelegatingHandler
{
    private readonly HttpResponseMessage _response;

    public FakeHttpMessageHandler(HttpResponseMessage response)
    {
        _response = response;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Task.FromResult(_response);
    }
}

This pattern — fake HTTP handler with recorded responses — is one I use constantly. It’s simpler than WireMock for single-endpoint services and doesn’t require any external process.

Architecture Tests with NetArchTest

Clean Architecture only works if you enforce the dependency rules. Without enforcement, someone (probably future me at 11pm on a Friday) will add a reference from the Domain project to Infrastructure because “it’s just one class” and the architecture slowly erodes.

NetArchTest lets you write xUnit tests that verify assembly dependency rules on every test run. If someone violates the architecture, the test fails in CI.

// tests/MarketingOS.Architecture.Tests/CleanArchitectureTests.cs
using FluentAssertions;
using NetArchTest.Rules;
using Xunit;

namespace MarketingOS.Architecture.Tests;

public class CleanArchitectureTests
{
    private const string DomainNamespace = "MarketingOS.Domain";
    private const string ApplicationNamespace = "MarketingOS.Application";
    private const string InfrastructureNamespace = "MarketingOS.Infrastructure";
    private const string WebNamespace = "MarketingOS.Web";

    [Fact]
    public void Domain_ShouldNotReference_Application()
    {
        var result = Types.InAssembly(typeof(Domain.AssemblyReference).Assembly)
            .ShouldNot()
            .HaveDependencyOn(ApplicationNamespace)
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Domain must not depend on Application layer");
    }

    [Fact]
    public void Domain_ShouldNotReference_Infrastructure()
    {
        var result = Types.InAssembly(typeof(Domain.AssemblyReference).Assembly)
            .ShouldNot()
            .HaveDependencyOn(InfrastructureNamespace)
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Domain must not depend on Infrastructure layer");
    }

    [Fact]
    public void Domain_ShouldNotReference_Web()
    {
        var result = Types.InAssembly(typeof(Domain.AssemblyReference).Assembly)
            .ShouldNot()
            .HaveDependencyOn(WebNamespace)
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Domain must not depend on Web layer");
    }

    [Fact]
    public void Application_ShouldNotReference_Infrastructure()
    {
        var result = Types
            .InAssembly(typeof(Application.AssemblyReference).Assembly)
            .ShouldNot()
            .HaveDependencyOn(InfrastructureNamespace)
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Application must not depend on Infrastructure layer");
    }

    [Fact]
    public void Application_ShouldNotReference_Web()
    {
        var result = Types
            .InAssembly(typeof(Application.AssemblyReference).Assembly)
            .ShouldNot()
            .HaveDependencyOn(WebNamespace)
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Application must not depend on Web layer");
    }

    [Fact]
    public void Infrastructure_ShouldNotReference_Web()
    {
        var result = Types
            .InAssembly(typeof(Infrastructure.AssemblyReference).Assembly)
            .ShouldNot()
            .HaveDependencyOn(WebNamespace)
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Infrastructure must not depend on Web layer");
    }

    [Fact]
    public void Handlers_ShouldBeSealed()
    {
        var result = Types
            .InAssembly(typeof(Application.AssemblyReference).Assembly)
            .That()
            .HaveNameEndingWith("Handler")
            .Should()
            .BeSealed()
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Handlers should be sealed for performance");
    }

    [Fact]
    public void DomainEntities_ShouldNotHavePublicSetters()
    {
        var result = Types
            .InAssembly(typeof(Domain.AssemblyReference).Assembly)
            .That()
            .ResideInNamespace("MarketingOS.Domain.Entities")
            .Should()
            .BeImmutable()
            .GetResult();

        result.IsSuccessful.Should().BeTrue(
            because: "Domain entities should be immutable");
    }
}

These tests have saved me at least three times. The most memorable was when a junior developer added using MarketingOS.Infrastructure.Data; inside an Application layer service because they needed the DbContext to run a complex query. The architecture test caught it in the PR build. We refactored it into a proper repository interface in Application with an implementation in Infrastructure. The whole conversation was “hey, the architecture test failed — here’s the pattern to follow” rather than a code review argument about principles.

Frontend Testing: Next.js

The frontend testing stack is Jest with React Testing Library for component tests, and Playwright for E2E and visual regression. Let me walk through each layer.
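Wiring Jest into a Next.js app is mostly boilerplate. Here’s a minimal configuration sketch using the standard next/jest helper — the `@/` alias mapping to `src/` is an assumption based on the import paths used in the fixtures below:

```typescript
// frontend/jest.config.ts — minimal sketch; alias path assumed
import type { Config } from 'jest';
import nextJest from 'next/jest.js';

// next/jest loads next.config and .env files so transforms match the app
const createJestConfig = nextJest({ dir: './' });

const config: Config = {
  testEnvironment: 'jsdom',
  // Map the "@/" alias to src/ so fixture imports resolve
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1',
  },
};

export default createJestConfig(config);
```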

Component Tests with Jest + React Testing Library

Component tests verify that individual block components render correctly given Umbraco content data. They’re fast (no browser, no API) and catch the majority of rendering bugs.

First, I set up shared test fixtures — mock Umbraco data that mirrors the real Content Delivery API structure:

// frontend/src/test/fixtures/umbraco-content.ts
import type { UmbracoBlock, UmbracoPage } from '@/lib/umbraco/types';

export const mockHeroBlock: UmbracoBlock = {
  contentType: 'heroBlock',
  content: {
    heroHeading: 'Build Better Websites',
    heroSubheading: 'A modern approach to marketing websites',
    heroCtaText: 'Get Started',
    heroCtaUrl: [{ url: '/contact', name: 'Contact Us' }],
    heroBackground: {
      url: '/images/hero-bg.jpg',
      width: 1920,
      height: 1080,
      altText: 'Abstract background',
    },
  },
};

export const mockFeatureGridBlock: UmbracoBlock = {
  contentType: 'featureGridBlock',
  content: {
    heading: 'Why Choose Us',
    features: [
      {
        icon: 'rocket',
        title: 'Fast Performance',
        description: 'Sub-second page loads with static generation.',
      },
      {
        icon: 'shield',
        title: 'Enterprise Security',
        description: 'SOC 2 compliant infrastructure.',
      },
      {
        icon: 'globe',
        title: 'Global CDN',
        description: 'Content served from 200+ edge locations.',
      },
    ],
  },
};

export const mockLandingPage: UmbracoPage = {
  contentType: 'landingPage',
  name: 'Services',
  route: { path: '/services/' },
  properties: {
    seoTitle: 'Our Services | MarketingOS',
    seoDescription: 'Explore our range of marketing services.',
    ogImage: { url: '/images/services-og.jpg' },
    blocks: {
      items: [mockHeroBlock, mockFeatureGridBlock],
    },
  },
};

Now the component test for HeroBlock:

// frontend/src/components/blocks/__tests__/HeroBlock.test.tsx
import { render, screen } from '@testing-library/react';
import { HeroBlock } from '../HeroBlock';
import { mockHeroBlock } from '@/test/fixtures/umbraco-content';
import type { UmbracoBlock } from '@/lib/umbraco/types';

describe('HeroBlock', () => {
  it('renders heading and subheading', () => {
    render(<HeroBlock block={mockHeroBlock} />);

    expect(screen.getByRole('heading', { level: 1 }))
      .toHaveTextContent('Build Better Websites');
    expect(screen.getByText('A modern approach to marketing websites'))
      .toBeInTheDocument();
  });

  it('renders CTA link with correct href', () => {
    render(<HeroBlock block={mockHeroBlock} />);

    const ctaLink = screen.getByRole('link', { name: /get started/i });
    expect(ctaLink).toHaveAttribute('href', '/contact');
  });

  it('renders background image with alt text', () => {
    render(<HeroBlock block={mockHeroBlock} />);

    const image = screen.getByAltText('Abstract background');
    expect(image).toBeInTheDocument();
  });

  it('handles missing optional fields gracefully', () => {
    const minimalBlock: UmbracoBlock = {
      contentType: 'heroBlock',
      content: {
        heroHeading: 'Just a Heading',
        // No subheading, no CTA, no background
      },
    };

    render(<HeroBlock block={minimalBlock} />);

    expect(screen.getByRole('heading', { level: 1 }))
      .toHaveTextContent('Just a Heading');
    expect(screen.queryByRole('link')).not.toBeInTheDocument();
  });

  it('does not render when heading is missing', () => {
    const emptyBlock: UmbracoBlock = {
      contentType: 'heroBlock',
      content: {},
    };

    const { container } = render(<HeroBlock block={emptyBlock} />);
    expect(container.firstChild).toBeNull();
  });
});

That last test — “does not render when heading is missing” — is exactly the test that would have caught the Tuesday bug if we’d had it. The HeroBlock component should render nothing (not a broken empty section) when the heading property is missing. And there should be a contract test (coming up) that verifies the heading property is always present.
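A cheap complement to that test is making the silent failure loud at runtime. This is a sketch with assumed names — `HeroContent` and `isRenderableHero` are hypothetical, not part of the MarketingOS codebase; the component would call the guard and return null, while the warning leaves a trace in logs:

```typescript
// Hypothetical guard — shape and names assumed, not the actual MarketingOS code
interface HeroContent {
  heroHeading?: string;
  heroSubheading?: string;
}

// Returns true only when the block has the minimum content needed to render.
// Logging the rejection means silently-missing content at least shows up in logs.
export function isRenderableHero(
  content: HeroContent,
  warn: (msg: string) => void = console.warn,
): boolean {
  if (!content.heroHeading) {
    warn('HeroBlock: skipping render, heroHeading is missing');
    return false;
  }
  return true;
}
```

Paired with a log-based alert, a missing required property becomes a signal instead of a blank section — though contract tests remain the real fix.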

Testing Metadata Generation

The generatePageMetadata function is pure logic — takes a page object and site settings, returns a Next.js Metadata object. Perfect for unit testing:

// frontend/src/lib/seo/__tests__/metadata.test.ts
import { generatePageMetadata } from '../metadata';
import { mockLandingPage } from '@/test/fixtures/umbraco-content';

describe('generatePageMetadata', () => {
  const siteSettings = {
    siteName: 'MarketingOS Demo',
    siteUrl: 'https://demo.marketingos.com',
    defaultOgImage: { url: '/images/default-og.jpg' },
  };

  it('returns SEO title from page properties', () => {
    const metadata = generatePageMetadata(mockLandingPage, siteSettings);

    expect(metadata.title).toBe('Our Services | MarketingOS');
  });

  it('falls back to page name if seoTitle is missing', () => {
    const page = {
      ...mockLandingPage,
      properties: { ...mockLandingPage.properties, seoTitle: undefined },
    };

    const metadata = generatePageMetadata(page, siteSettings);

    expect(metadata.title).toBe('Services | MarketingOS Demo');
  });

  it('generates OpenGraph metadata', () => {
    const metadata = generatePageMetadata(mockLandingPage, siteSettings);

    expect(metadata.openGraph).toEqual(
      expect.objectContaining({
        title: 'Our Services | MarketingOS',
        description: 'Explore our range of marketing services.',
        url: 'https://demo.marketingos.com/services/',
        images: [{ url: '/images/services-og.jpg' }],
      })
    );
  });

  it('uses default OG image when page has none', () => {
    const page = {
      ...mockLandingPage,
      properties: { ...mockLandingPage.properties, ogImage: undefined },
    };

    const metadata = generatePageMetadata(page, siteSettings);

    expect(metadata.openGraph?.images).toEqual([
      { url: '/images/default-og.jpg' },
    ]);
  });

  it('generates canonical URL from route', () => {
    const metadata = generatePageMetadata(mockLandingPage, siteSettings);

    expect(metadata.alternates?.canonical).toBe(
      'https://demo.marketingos.com/services/'
    );
  });
});

Testing JSON-LD Schema Builders

JSON-LD structured data is critical for SEO. These tests ensure the schema is valid:

// frontend/src/lib/seo/__tests__/json-ld.test.ts
import { buildOrganizationSchema, buildBlogPostSchema }
  from '../json-ld';

describe('buildOrganizationSchema', () => {
  it('generates valid Organization schema', () => {
    const schema = buildOrganizationSchema({
      name: 'Acme Corp',
      url: 'https://acme.com',
      logo: 'https://acme.com/logo.png',
    });

    expect(schema['@context']).toBe('https://schema.org');
    expect(schema['@type']).toBe('Organization');
    expect(schema.name).toBe('Acme Corp');
    expect(schema.logo).toBe('https://acme.com/logo.png');
  });
});

describe('buildBlogPostSchema', () => {
  it('generates valid BlogPosting schema with all fields', () => {
    const schema = buildBlogPostSchema({
      title: 'Testing Headless CMS',
      description: 'A guide to testing...',
      publishedDate: '2026-03-01',
      modifiedDate: '2026-03-02',
      author: 'Thuan Luong',
      image: 'https://example.com/hero.jpg',
      url: 'https://example.com/blog/testing',
    });

    expect(schema['@type']).toBe('BlogPosting');
    expect(schema.headline).toBe('Testing Headless CMS');
    expect(schema.datePublished).toBe('2026-03-01');
    expect(schema.dateModified).toBe('2026-03-02');
    expect(schema.author['@type']).toBe('Person');
    expect(schema.author.name).toBe('Thuan Luong');
  });

  it('uses publishedDate as modifiedDate when not provided', () => {
    const schema = buildBlogPostSchema({
      title: 'Test',
      description: 'Test',
      publishedDate: '2026-03-01',
      author: 'Author',
      url: 'https://example.com/blog/test',
    });

    expect(schema.dateModified).toBe('2026-03-01');
  });
});

E2E Tests with Playwright

Component tests verify individual pieces. E2E tests verify the whole thing works together — the Next.js app fetching real content from Umbraco, rendering blocks, handling navigation, and submitting forms. Playwright runs against the full Docker Compose stack.

// frontend/e2e/landing-page.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Landing Page', () => {
  test('renders hero block with content from Umbraco', async ({ page }) => {
    await page.goto('/services');

    // Hero section should be visible
    const hero = page.locator('[data-block="heroBlock"]');
    await expect(hero).toBeVisible();

    // Should have a heading
    const heading = hero.locator('h1');
    await expect(heading).not.toBeEmpty();

    // CTA should be a link
    const cta = hero.locator('a.hero-cta');
    await expect(cta).toBeVisible();
    await expect(cta).toHaveAttribute('href', /^\//);
  });

  test('renders all blocks in correct order', async ({ page }) => {
    await page.goto('/services');

    const blocks = page.locator('[data-block]');
    const count = await blocks.count();

    // The services page should have at least 3 blocks
    expect(count).toBeGreaterThanOrEqual(3);

    // First block should be hero
    const firstBlock = blocks.nth(0);
    await expect(firstBlock).toHaveAttribute('data-block', 'heroBlock');
  });

  test('navigation links work between pages', async ({ page }) => {
    await page.goto('/');

    // Click on the Services link in navigation
    await page.click('nav a[href="/services"]');
    await page.waitForURL('/services');

    expect(page.url()).toContain('/services');

    // The page should have content (not a 404)
    const main = page.locator('main');
    await expect(main).not.toBeEmpty();
  });

  test('page has correct meta title', async ({ page }) => {
    await page.goto('/services');

    const title = await page.title();
    expect(title).toContain('Services');
    expect(title).not.toBe(''); // Should never be empty
  });
});

test.describe('Blog Listing', () => {
  test('displays blog posts with titles and dates', async ({ page }) => {
    await page.goto('/blog');

    const articles = page.locator('article.blog-card');
    const count = await articles.count();
    expect(count).toBeGreaterThan(0);

    // Each card should have a title and date
    const firstCard = articles.nth(0);
    await expect(firstCard.locator('h2, h3')).not.toBeEmpty();
    await expect(firstCard.locator('time')).toBeVisible();
  });
});

Contact Form E2E Test

Forms are where E2E tests really shine — they test the full flow from user interaction to server response:

// frontend/e2e/contact-form.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Contact Form', () => {
  test('submits form successfully with valid data', async ({ page }) => {
    await page.goto('/contact');

    // Fill in the form
    await page.fill('input[name="name"]', 'Test User');
    await page.fill('input[name="email"]', 'test@example.com');
    await page.fill('input[name="company"]', 'Test Corp');
    await page.fill('textarea[name="message"]', 'This is an automated test message.');

    // Submit
    await page.click('button[type="submit"]');

    // Wait for success message
    const success = page.locator('[data-testid="form-success"]');
    await expect(success).toBeVisible({ timeout: 10_000 });
    await expect(success).toContainText('Thank you');
  });

  test('shows validation errors for empty required fields', async ({ page }) => {
    await page.goto('/contact');

    // Submit without filling anything
    await page.click('button[type="submit"]');

    // Should show validation errors
    const errors = page.locator('[data-testid="field-error"]');
    const count = await errors.count();
    expect(count).toBeGreaterThan(0);
  });

  test('validates email format', async ({ page }) => {
    await page.goto('/contact');

    await page.fill('input[name="name"]', 'Test User');
    await page.fill('input[name="email"]', 'not-an-email');
    await page.fill('textarea[name="message"]', 'Test message');

    await page.click('button[type="submit"]');

    const emailError = page.locator('input[name="email"] ~ [data-testid="field-error"]');
    await expect(emailError).toBeVisible();
  });
});

Visual Regression Testing

Here’s where testing gets interesting for a multi-tenant marketing template. Client A’s site has a blue theme. Client B’s site has a green theme. Both use the same components. A CSS change that looks fine on Client A’s theme might break the layout on Client B’s theme. Visual regression testing catches this by comparing screenshots against known-good baselines.

Playwright’s built-in toHaveScreenshot() is surprisingly good for this. It does pixel-by-pixel comparison with configurable thresholds.

// frontend/e2e/visual-regression.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Visual Regression - Client A Theme', () => {
  test.use({
    baseURL: 'http://localhost:3000', // Client A's instance
  });

  test('homepage hero', async ({ page }) => {
    await page.goto('/');
    await page.waitForLoadState('networkidle');

    // Wait for images to load
    await page.waitForTimeout(1000);

    const hero = page.locator('[data-block="heroBlock"]');
    await expect(hero).toHaveScreenshot('client-a-hero.png', {
      maxDiffPixelRatio: 0.01, // Allow 1% pixel difference
    });
  });

  test('feature grid section', async ({ page }) => {
    await page.goto('/services');
    await page.waitForLoadState('networkidle');

    const features = page.locator('[data-block="featureGridBlock"]');
    await expect(features).toHaveScreenshot('client-a-features.png', {
      maxDiffPixelRatio: 0.01,
    });
  });

  test('full homepage', async ({ page }) => {
    await page.goto('/');
    await page.waitForLoadState('networkidle');
    await page.waitForTimeout(1500);

    await expect(page).toHaveScreenshot('client-a-homepage-full.png', {
      fullPage: true,
      maxDiffPixelRatio: 0.02, // Slightly higher tolerance for full page
    });
  });
});

test.describe('Visual Regression - Dark Mode', () => {
  test('homepage in dark mode', async ({ page }) => {
    await page.goto('/');
    await page.waitForLoadState('networkidle');

    // Toggle dark mode
    await page.evaluate(() => {
      document.documentElement.setAttribute('data-theme', 'dark');
    });

    await page.waitForTimeout(500); // Wait for theme transition

    await expect(page).toHaveScreenshot('homepage-dark-mode.png', {
      fullPage: true,
      maxDiffPixelRatio: 0.02,
    });
  });
});

The Playwright config ties this all together with multiple projects for different viewports:

// frontend/playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,

  reporter: process.env.CI
    ? [
        ['html', { outputFolder: 'playwright-report' }],
        ['json', { outputFile: 'test-results.json' }],
      ]
    : 'html',

  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
  },

  projects: [
    // Desktop browsers
    {
      name: 'chromium-desktop',
      use: {
        ...devices['Desktop Chrome'],
        viewport: { width: 1440, height: 900 },
      },
    },
    {
      name: 'firefox-desktop',
      use: {
        ...devices['Desktop Firefox'],
        viewport: { width: 1440, height: 900 },
      },
    },
    {
      name: 'webkit-desktop',
      use: {
        ...devices['Desktop Safari'],
        viewport: { width: 1440, height: 900 },
      },
    },

    // Tablet
    {
      name: 'tablet',
      use: {
        ...devices['iPad (gen 7)'],
      },
    },

    // Mobile
    {
      name: 'mobile-chrome',
      use: {
        ...devices['Pixel 7'],
      },
    },
    {
      name: 'mobile-safari',
      use: {
        ...devices['iPhone 14'],
      },
    },
  ],

  // Start the dev server and Umbraco stack before tests
  webServer: [
    {
      command: 'docker compose up -d umbraco sql-server',
      url: 'http://localhost:5000/umbraco/delivery/api/v2/content',
      reuseExistingServer: !process.env.CI,
      timeout: 120_000,
    },
    {
      command: 'npm run dev',
      url: 'http://localhost:3000',
      reuseExistingServer: !process.env.CI,
      timeout: 30_000,
    },
  ],
});

Visual regression baselines are committed to the repo. When a legitimate design change is made, you update the baselines with npx playwright test --update-snapshots. In CI, failed visual tests generate a diff image showing exactly which pixels changed — red overlay on a side-by-side comparison. This has caught several subtle bugs: a margin that collapsed at a specific viewport width, a font that didn’t load in WebKit, a border-radius that rendered differently in Firefox.

API Contract Testing with Pact

Now for the layer that would have prevented the Tuesday bug. Contract testing verifies that the API between Umbraco (the provider) and Next.js (the consumer) stays consistent, even though they’re developed and deployed independently.

Why Contract Tests Matter for Headless CMS

In a headless architecture, the Content Delivery API is a shared boundary. The Umbraco team (or the backend hat you’re wearing) can change content models, rename properties, modify API versions, or change serialization formats. The Next.js team (or your frontend hat) expects specific property names, types, and structures.

Integration tests help, but they test one side at a time. Contract tests test the agreement between both sides. Consumer-Driven Contracts mean the frontend defines what it expects, and the backend proves it can deliver. If either side drifts, the Pact Broker flags the incompatibility before deployment.

Consumer Tests (Next.js / TypeScript)

The consumer (Next.js) writes Pact tests that define the expected API responses. These run without a real Umbraco instance — they mock the API and generate a Pact contract file.

// frontend/pact/umbraco-content-api.consumer.spec.ts
import { PactV4, MatchersV3 } from '@pact-foundation/pact';
import path from 'path';
import { getPageByRoute } from '@/lib/umbraco/queries';

const { like, eachLike, string, regex } = MatchersV3;

const provider = new PactV4({
  consumer: 'MarketingOS-Frontend',
  provider: 'MarketingOS-Umbraco',
  dir: path.resolve(process.cwd(), 'pact/pacts'),
});

describe('Umbraco Content Delivery API - Consumer', () => {
  describe('GET /umbraco/delivery/api/v2/content/item/:route', () => {
    it('returns a landing page with hero block', async () => {
      await provider
        .addInteraction()
        .given('a landing page exists at /services')
        .uponReceiving('a request for the services landing page')
        .withRequest('GET', '/umbraco/delivery/api/v2/content/item/services', (builder) => {
          builder.headers({
            Accept: 'application/json',
          });
        })
        .willRespondWith(200, (builder) => {
          builder
            .headers({ 'Content-Type': 'application/json' })
            .jsonBody({
              name: like('Services'),
              contentType: 'landingPage',
              route: {
                path: like('/services/'),
              },
              properties: {
                seoTitle: like('Our Services'),
                seoDescription: like('Explore our services'),
                blocks: {
                  items: eachLike({
                    content: {
                      contentType: string('heroBlock'),
                      // These are the properties the frontend DEPENDS on
                      heroHeading: like('Page Heading'),
                      heroSubheading: like('Page subheading text'),
                      heroCtaText: like('Learn More'),
                      heroCtaUrl: eachLike({
                        url: like('/contact'),
                        name: like('Contact'),
                      }),
                    },
                  }),
                },
              },
            });
        })
        .executeTest(async (mockServer) => {
          // Point the Umbraco client at the mock server
          process.env.UMBRACO_API_URL = mockServer.url;

          const page = await getPageByRoute('/services');

          expect(page).toBeDefined();
          expect(page!.contentType).toBe('landingPage');
          expect(page!.properties.blocks.items.length)
            .toBeGreaterThan(0);

          // Verify the frontend can access the properties it needs
          const firstBlock = page!.properties.blocks.items[0];
          expect(firstBlock.content.heroHeading).toBeDefined();
          expect(firstBlock.content.heroCtaText).toBeDefined();
        });
    });

    it('returns a blog post with required fields', async () => {
      await provider
        .addInteraction()
        .given('a blog post exists at /blog/test-post')
        .uponReceiving('a request for a blog post')
        .withRequest('GET', '/umbraco/delivery/api/v2/content/item/blog/test-post')
        .willRespondWith(200, (builder) => {
          builder
            .headers({ 'Content-Type': 'application/json' })
            .jsonBody({
              name: like('Test Post'),
              contentType: 'blogPost',
              route: {
                path: like('/blog/test-post/'),
              },
              properties: {
                seoTitle: like('Test Post Title'),
                seoDescription: like('A test blog post'),
                articleBody: like('# Hello World\n\nThis is content.'),
                publishDate: regex(
                  '\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}Z',
                  '2026-03-01T00:00:00Z'
                ),
                author: like('Thuan Luong'),
                heroImage: {
                  url: like('/media/blog/hero.jpg'),
                  width: like(1200),
                  height: like(630),
                  altText: like('Blog hero image'),
                },
                tags: eachLike('testing'),
              },
            });
        })
        .executeTest(async (mockServer) => {
          process.env.UMBRACO_API_URL = mockServer.url;

          const page = await getPageByRoute('/blog/test-post');

          expect(page).toBeDefined();
          expect(page!.properties.articleBody).toBeDefined();
          expect(page!.properties.publishDate).toBeDefined();
          expect(page!.properties.heroImage).toBeDefined();
        });
    });

    it('returns 404 for non-existent routes', async () => {
      await provider
        .addInteraction()
        .given('no page exists at /nonexistent')
        .uponReceiving('a request for a non-existent page')
        .withRequest('GET', '/umbraco/delivery/api/v2/content/item/nonexistent')
        .willRespondWith(404)
        .executeTest(async (mockServer) => {
          process.env.UMBRACO_API_URL = mockServer.url;

          const page = await getPageByRoute('/nonexistent');

          expect(page).toBeNull();
        });
    });
  });
});

When these tests run, Pact generates a JSON contract file at pact/pacts/MarketingOS-Frontend-MarketingOS-Umbraco.json. This file is the source of truth for what the frontend expects. It gets published to a Pact Broker (we self-host one with Docker, though Pactflow is the hosted option).
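A self-hosted broker is a small Docker setup. Here is a minimal Compose sketch, assuming the official pactfoundation/pact-broker image backed by a throwaway Postgres; ports and credentials are placeholders, not the project's actual config:

```yaml
# docker-compose.pact.yml (illustrative; use real credentials in practice)
services:
  pact-postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: pact
      POSTGRES_PASSWORD: pact
      POSTGRES_DB: pact

  pact-broker:
    image: pactfoundation/pact-broker:latest
    ports:
      - "9292:9292"
    depends_on:
      - pact-postgres
    environment:
      PACT_BROKER_DATABASE_URL: postgres://pact:pact@pact-postgres/pact
```

With the broker listening on localhost:9292, the PACT_BROKER_URL used by both CI pipelines simply points at it.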

Provider Verification (Umbraco / C#)

On the Umbraco side, we verify that the real API fulfills the consumer’s contract. PactNet reads the contract file from the Pact Broker and replays each interaction against the running Umbraco instance.

// tests/MarketingOS.Web.Tests/Pact/UmbracoProviderTests.cs
using MarketingOS.Web.Tests.Infrastructure;
using Microsoft.AspNetCore.Mvc;
using PactNet;
using PactNet.Infrastructure.Outputters;
using Xunit;
using Xunit.Abstractions;

namespace MarketingOS.Web.Tests.Pact;

public class UmbracoProviderTests : IClassFixture<MarketingOsWebFactory>
{
    private readonly MarketingOsWebFactory _factory;
    private readonly ITestOutputHelper _output;

    public UmbracoProviderTests(
        MarketingOsWebFactory factory,
        ITestOutputHelper output)
    {
        _factory = factory;
        _output = output;
    }

    [Fact]
    public void Verify_UmbracoContentApi_MeetsConsumerExpectations()
    {
        // The factory gives us a running Umbraco instance
        // with Testcontainers SQL Server and seeded content
        var client = _factory.CreateClient();
        var baseUri = client.BaseAddress!;

        var config = new PactVerifierConfig
        {
            Outputters = new List<IOutput>
            {
                new XUnitOutput(_output)
            }
        };

        var verifier = new PactVerifier("MarketingOS-Umbraco", config);

        verifier
            .WithHttpEndpoint(baseUri)
            .WithPactBrokerSource(new Uri(
                Environment.GetEnvironmentVariable("PACT_BROKER_URL")
                    ?? "http://localhost:9292"))
            .WithProviderStateUrl(new Uri(baseUri, "/pact-states"))
            .Verify();
    }
}

// Provider state handler — sets up data for each interaction.
// This controller is mapped only in the test web factory.
public class ProviderStatesController : ControllerBase
{
    private readonly IContentSeedService _seeder;

    public ProviderStatesController(IContentSeedService seeder)
    {
        _seeder = seeder;
    }

    [HttpPost("/pact-states")]
    public async Task<IActionResult> SetState(
        [FromBody] ProviderState state)
    {
        switch (state.State)
        {
            case "a landing page exists at /services":
                await _seeder.SeedLandingPage("/services");
                break;

            case "a blog post exists at /blog/test-post":
                await _seeder.SeedBlogPost("/blog/test-post");
                break;

            case "no page exists at /nonexistent":
                // Nothing to seed — page should not exist
                break;

            default:
                return BadRequest($"Unknown state: {state.State}");
        }

        return Ok();
    }
}

public record ProviderState(string State);

Here’s the payoff: if someone renames heroHeading to heroTitle in the Umbraco content model, the provider verification fails with a clear message: “Expected property ‘heroHeading’ to exist, but it was not found.” The CI pipeline blocks the merge. No Tuesday afternoon call.

The Pact Workflow

The full contract testing workflow looks like this:

  1. Frontend developer writes a Pact consumer test defining expected API shape
  2. Consumer test generates a contract file (JSON)
  3. Contract is published to the Pact Broker (npx pact-broker publish)
  4. Backend CI pulls the latest contract and runs provider verification
  5. Pact Broker shows a compatibility matrix — which consumer versions work with which provider versions
  6. Both sides can deploy independently when the Pact Broker confirms compatibility (can-i-deploy check)

In CI, steps 3 and 6 become two commands:

# In frontend CI — publish the consumer contract
npx pact-broker publish ./pact/pacts \
  --consumer-app-version=$(git rev-parse --short HEAD) \
  --broker-base-url=$PACT_BROKER_URL \
  --tag=$(git branch --show-current)

# In frontend CI — check if we can deploy
npx pact-broker can-i-deploy \
  --pacticipant=MarketingOS-Frontend \
  --version=$(git rev-parse --short HEAD) \
  --to-environment=production

The can-i-deploy check is the gate. It queries the Pact Broker: “Is this frontend version compatible with whatever backend version is currently in production?” If the contracts haven’t been verified against the current production backend, the deployment is blocked. No guessing, no “I think it should be fine.”

Performance Testing

Testing isn’t just about correctness — it’s about performance budgets. A marketing site that scores 85 on Lighthouse is a marketing site that loses rankings.

Lighthouse CI

Lighthouse CI runs automated audits against every PR. We set performance budgets as CI gates — if a change drops the performance score below the threshold, the build fails.

// frontend/lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: [
        'http://localhost:3000/',
        'http://localhost:3000/services',
        'http://localhost:3000/blog',
        'http://localhost:3000/contact',
      ],
      numberOfRuns: 3, // Run each URL 3 times, take median
      startServerCommand: 'npm run start',
      startServerReadyPattern: 'ready on',
    },
    assert: {
      assertions: {
        // Performance budgets — these are hard gates
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.95 }],
        'categories:best-practices': ['error', { minScore: 0.9 }],
        'categories:seo': ['error', { minScore: 0.95 }],

        // Specific metric budgets
        'first-contentful-paint': ['warn', { maxNumericValue: 1500 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['warn', { maxNumericValue: 200 }],

        // Resource budgets
        'resource-summary:script:size': [
          'error',
          { maxNumericValue: 150000 }, // 150KB max JS
        ],
        'resource-summary:total:size': [
          'warn',
          { maxNumericValue: 500000 }, // 500KB total
        ],
      },
    },
    upload: {
      target: 'temporary-public-storage', // or your own LHCI server
    },
  },
};

The key budgets: performance score above 90, LCP under 2.5 seconds, CLS under 0.1, and total JS under 150KB. These are deliberately strict. Marketing sites should be fast — there’s no complex client-side app to justify large bundles.

k6 Load Testing

Lighthouse tests a single user experience. k6 tests what happens when 100 users hit the Content Delivery API simultaneously. This is important because Umbraco’s Delivery API has output caching, and we need to verify the cache is working under load.

// k6/content-delivery-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Trend } from 'k6/metrics';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';

const errorRate = new Rate('errors');
const contentDeliveryDuration = new Trend('content_delivery_duration');

export const options = {
  stages: [
    { duration: '30s', target: 20 },  // Ramp up to 20 users
    { duration: '1m', target: 50 },   // Ramp up to 50 users
    { duration: '2m', target: 100 },  // Hold at 100 users
    { duration: '30s', target: 0 },   // Ramp down
  ],
  thresholds: {
    // Multiple expressions for one metric go in a single array;
    // duplicate object keys would silently override each other in JS
    http_req_duration: ['p(95)<500', 'p(99)<1000'], // 95th under 500ms, 99th under 1s
    errors: ['rate<0.01'],                          // Error rate under 1%
    content_delivery_duration: ['avg<200'],         // Average API response under 200ms
  },
};

const BASE_URL = __ENV.UMBRACO_URL || 'http://localhost:5000';

const ROUTES = [
  '/umbraco/delivery/api/v2/content/item/',
  '/umbraco/delivery/api/v2/content/item/services',
  '/umbraco/delivery/api/v2/content/item/about',
  '/umbraco/delivery/api/v2/content/item/blog',
  '/umbraco/delivery/api/v2/content?filter=contentType:blogPost&sort=publishDate:desc&take=10',
];

export default function () {
  const route = ROUTES[Math.floor(Math.random() * ROUTES.length)];
  const url = `${BASE_URL}${route}`;

  const response = http.get(url, {
    headers: {
      Accept: 'application/json',
      'Api-Key': __ENV.UMBRACO_API_KEY || '',
    },
    tags: { name: route },
  });

  const isSuccess = check(response, {
    'status is 200': (r) => r.status === 200,
    'response has content': (r) => r.body && r.body.length > 0,
    'response is JSON': (r) =>
      r.headers['Content-Type']?.includes('application/json'),
    'response time < 500ms': (r) => r.timings.duration < 500,
  });

  errorRate.add(!isSuccess);
  contentDeliveryDuration.add(response.timings.duration);

  // Simulate real user behavior — not a continuous hammering
  sleep(Math.random() * 2 + 0.5); // 0.5-2.5 seconds between requests
}

export function handleSummary(data) {
  return {
    'k6-results.json': JSON.stringify(data, null, 2),
    stdout: textSummary(data, { indent: '  ', enableColors: true }),
  };
}

The thresholds are my baseline expectations: 95th percentile under 500ms, average under 200ms, error rate under 1%. When the output cache is working correctly, the average response time is around 15-30ms after the cache warms up. If the average creeps above 200ms, something’s wrong — maybe the cache was disabled, maybe a content change invalidated everything, maybe the SQL Server container is under-resourced.

I run k6 tests before every production deployment and after any Umbraco upgrade. The numbers go into a spreadsheet (yes, a spreadsheet — sometimes the simplest tool is the best) so I can track performance trends across releases.
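The spreadsheet step does not have to be manual. As a sketch, a small script can lift the relevant numbers out of k6-results.json; the k6SummaryToCsvRow helper below is hypothetical, and the field paths follow k6's handleSummary() data shape (data.metrics.<name>.values.<stat>):

```typescript
// Hypothetical helper (not part of the project): turn the k6 summary JSON
// produced by handleSummary() into one CSV row for the release spreadsheet.

interface K6Summary {
  metrics: Record<string, { values: Record<string, number> }>;
}

function k6SummaryToCsvRow(release: string, data: K6Summary): string {
  const req = data.metrics['http_req_duration']?.values ?? {};
  const errs = data.metrics['errors']?.values ?? {};
  return [
    release,
    (req['avg'] ?? NaN).toFixed(1),               // average, ms
    (req['p(95)'] ?? NaN).toFixed(1),             // 95th percentile, ms
    (req['p(99)'] ?? NaN).toFixed(1),             // 99th percentile, ms
    ((errs['rate'] ?? 0) * 100).toFixed(2) + '%', // error rate
  ].join(',');
}

// Example with a trimmed-down summary object:
const row = k6SummaryToCsvRow('v1.4.0', {
  metrics: {
    http_req_duration: { values: { avg: 24.3, 'p(95)': 180.5, 'p(99)': 420.1 } },
    errors: { values: { rate: 0.002 } },
  },
});
console.log(row); // v1.4.0,24.3,180.5,420.1,0.20%
```

Piping each release's row into the tracking sheet keeps the trend data without any copy-paste.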

Putting It All Together: The CI Test Pipeline

Here’s how all these test layers fit into a single CI pipeline. We’ll cover the full CI/CD setup in Part 7, but here’s the testing portion:

# .github/workflows/test.yml (excerpt)
jobs:
  backend-tests:
    runs-on: ubuntu-latest
    services:
      mssql:
        image: mcr.microsoft.com/mssql/server:2022-latest
        env:
          ACCEPT_EULA: Y
          SA_PASSWORD: Test@Password123!
        ports:
          - 1433:1433
    steps:
      - uses: actions/checkout@v4

      - name: Setup .NET 10
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '10.0.x'

      - name: Run unit tests
        run: |
          dotnet test tests/MarketingOS.Domain.Tests --logger "trx"
          dotnet test tests/MarketingOS.Application.Tests --logger "trx"

      - name: Run architecture tests
        run: dotnet test tests/MarketingOS.Architecture.Tests --logger "trx"

      - name: Run integration tests
        run: dotnet test tests/MarketingOS.Web.Tests --logger "trx"
        env:
          ConnectionStrings__umbracoDbDSN: "Server=localhost;Database=UmbracoTest;User Id=sa;Password=Test@Password123!;TrustServerCertificate=true"

  frontend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'npm'
          cache-dependency-path: frontend/package-lock.json

      - name: Install dependencies
        run: npm ci
        working-directory: frontend

      - name: Run Jest tests
        run: npm test -- --coverage --ci
        working-directory: frontend

      - name: Run Pact consumer tests
        run: npm run test:pact
        working-directory: frontend

      - name: Publish Pact contracts
        run: npx pact-broker publish ./pact/pacts --consumer-app-version=${{ github.sha }} --broker-base-url=${{ secrets.PACT_BROKER_URL }}
        working-directory: frontend

  e2e-tests:
    runs-on: ubuntu-latest
    needs: [backend-tests, frontend-tests]
    steps:
      - uses: actions/checkout@v4

      - name: Start Docker Compose stack
        run: docker compose -f docker-compose.test.yml up -d --wait

      - name: Install Playwright browsers
        run: npx playwright install --with-deps
        working-directory: frontend

      - name: Run Playwright E2E tests
        run: npx playwright test
        working-directory: frontend

      - name: Run visual regression tests
        run: npx playwright test --project=chromium-desktop e2e/visual-regression.spec.ts
        working-directory: frontend

      - name: Upload Playwright report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report
          path: frontend/playwright-report/

  lighthouse:
    runs-on: ubuntu-latest
    needs: [e2e-tests]
    steps:
      - uses: actions/checkout@v4

      - name: Start production build
        run: docker compose -f docker-compose.test.yml up -d --wait

      - name: Run Lighthouse CI
        run: npx @lhci/cli autorun
        working-directory: frontend
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}

The pipeline runs in this order:

  1. Backend unit + architecture tests and frontend Jest + Pact tests run in parallel (they’re independent)
  2. E2E tests run after both pass (they need confidence that individual layers work)
  3. Lighthouse runs last (performance testing on a stack we know is functionally correct)

Total CI time: about 8 minutes. The Testcontainers SQL Server startup is the bottleneck. I’ve considered switching to SQLite for test runs, but the behavior differences between SQL Server and SQLite have bitten me before (collation, datetime precision, transaction isolation). The 30-second container startup is worth the confidence.

Lessons Learned

After six months of maintaining this test suite across multiple client sites, here’s what I’d tell my past self:

Contract tests are the highest-ROI tests in a headless architecture. They catch the bugs that nothing else catches — the silent API shape changes that break the frontend without errors. If I could only have one type of test, I’d pick contract tests.

Visual regression tests need curation. The first time I ran visual regression on the full site across all viewports, I had 47 baseline images. Every time a font loaded a millisecond later, tests failed. I’ve since narrowed it to key sections (hero, feature grid, footer) at three breakpoints. Quality over quantity.

Test data seeding is its own project. Half the effort in integration testing is building reliable test data seeders. Invest time in an IContentSeedService that can create consistent content programmatically. Future you will be grateful.

Don’t test Umbraco’s internals. I wasted two weeks writing tests that verified Umbraco’s Block List editor serialized correctly. That’s Umbraco’s job. Test your logic — the business rules, the content transformations, the rendering decisions. Let the framework test itself.

Run the full suite locally before pushing. CI is the safety net, not the primary feedback loop. I have an npm run test:all script that runs Jest, Pact consumer tests, and a quick Playwright smoke test. It takes 90 seconds. Running it before every push saves 8-minute CI feedback cycles.
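The wiring behind that is plain npm script composition. A sketch of the relevant package.json entries; the script names and the choice of smoke spec are illustrative, not the project's actual config:

```json
{
  "scripts": {
    "test": "jest",
    "test:pact": "jest --config jest.pact.config.js",
    "test:smoke": "playwright test e2e/landing-page.spec.ts --project=chromium-desktop",
    "test:all": "npm test && npm run test:pact && npm run test:smoke"
  }
}
```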

What’s Next

We’ve covered testing from domain logic to visual regression, with contract tests as the glue between Umbraco and Next.js. The test suite gives us confidence to move fast.

In Part 7, we’ll take this tested codebase and package it for deployment: multi-stage Docker builds for both Umbraco and Next.js, GitHub Actions CI/CD with environment promotion (staging to production), and the container orchestration that runs the full stack. All those tests we wrote? They become the gates that control whether code moves from one environment to the next.


This is Part 6 of a 9-part series on building a reusable marketing website template with Umbraco 17 and Next.js.

Series outline:

  1. Architecture & Setup — Why this stack, ADRs, solution structure, Docker Compose
  2. Content Modeling — Document types, compositions, Block List page builder, Content Delivery API
  3. Next.js Rendering — Server Components, ISR, block renderer, component library, multi-tenant
  4. SEO & Performance — Metadata, JSON-LD, sitemaps, Core Web Vitals optimization
  5. AI Content with Gemini — Content generation, translation, SEO optimization, review workflow
  6. Testing — xUnit, Jest, Playwright, Pact contract tests, visual regression (this post)
  7. Docker & CI/CD — Multi-stage builds, GitHub Actions, environment promotion
  8. Infrastructure — Self-hosted Ubuntu, AWS, Azure, Terraform, monitoring
  9. Template & Retrospective — Onboarding automation, cost analysis, lessons learned