Serilog: Grafana + Loki for the Win

    Created: 2026-01-29

    This post is a continuation of the Serilog series (previous part: Serilog: Structured Logging Explained).

    In this part, we will:

    • Introduce Grafana Loki sink for Serilog
    • Set up Grafana to visualize and query structured logs
    • Benchmark querying structured vs. unstructured logs in a real-world-like scenario

    Let's get started! 🚀

    Prerequisites

    This episode starts from branch 004-grafana-loki. You will find it here.

    Beware - this part is more advanced and assumes familiarity with Docker, Docker Compose, and basic concepts of log management systems like Grafana and Loki. I hope even without prior experience, you will be able to run the scenario and click your way through all steps.

    Ensure you have Docker installed. Docker Compose is utilized to spin up some containers and simulate a real-world environment.

    Brief overview of the scenario architecture

    [Diagram: k6 → NGINX (load balancer) → API Instance 1..N (backed by PostgreSQL), with logs shipped to Loki and visualized in Grafana]

    As you can see in the diagram above, our scenario is quite complex. It all starts with user traffic (generated by k6 – a load-testing tool). This traffic is handled by an NGINX server (load balancer) that forwards requests to one of our .NET APIs (each API mimics demo e-commerce functionality). Each API instance has 3 Serilog sinks configured:

    • File sink with plain text formatting (unstructured logs)
    • File sink with JSON formatting (structured logs)
    • Grafana Loki sink (structured logs)

    To collect and visualize logs, we use Grafana connected to Loki (log aggregation system optimized for Grafana).
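As a rough illustration, the three sinks above could be wired up in appsettings.json along these lines. This is a minimal sketch only – the paths, rolling interval, and formatter are assumptions; check the repository's actual appsettings.json for the real configuration:

```json
{
  "Serilog": {
    "WriteTo": [
      { "Name": "File", "Args": { "path": "logs/plain-.log", "rollingInterval": "Day" } },
      {
        "Name": "File",
        "Args": {
          "path": "logs/json-.log",
          "rollingInterval": "Day",
          "formatter": "Serilog.Formatting.Compact.CompactJsonFormatter, Serilog.Formatting.Compact"
        }
      },
      { "Name": "GrafanaLoki", "Args": { "uri": "http://loki:3100" } }
    ]
  }
}
```

The first sink writes plain text, the second writes compact JSON (structured), and the third pushes structured events to Loki over HTTP.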

    In the SerilogDemo README.md you will find useful docs and instructions on how to run the entire scenario using Docker Compose.

    The most interesting command you can find there launches the entire scenario with 5 instances of our .NET API. Each instance generates logs, and k6 simulates user traffic (check the load-test.js file for the details). The whole scenario runs for about 50 minutes, generating a significant amount of logs for analysis.
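For this kind of Docker Compose setup, the start command typically looks something like the following. This is a hypothetical sketch – the service name api and the flags are assumptions, so use the exact command from the SerilogDemo README:

```shell
# Hypothetical – check the SerilogDemo README for the real command.
# Starts the whole stack in the background and scales the API service to 5 instances.
docker compose up -d --scale api=5
```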

    The stream of logs generated by the APIs can be queried and visualized using Grafana dashboards (more on that in a moment).

    To stop the scenario, run:
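The stop command is also in the README; with Docker Compose it is typically something like this (hedged – the repo's actual command may use additional flags):

```shell
# Stops and removes the scenario's containers and the default network.
docker compose down
```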

    During the scenario run, I encourage you to look into the SerilogDemo.http file. If you have the REST Client extension installed in VS Code, you can use this file to send requests to the APIs manually. At the end of that file, you will find requests that will allow you to go through the whole e-commerce flow (add items to cart and place an order).

    Note: The whole scenario was prepared with AI assistance, so there is a slight chance that some issues may occur (though I've tested it thoroughly). If you find any, please report them in SerilogDemo Issues or SerilogDemo Discussions.

    ⚔️ File logs vs Loki logs

    In this section, I planned to show how structured logs are superior to unstructured logs when it comes to querying performance. However, I found quite the opposite - querying unstructured logs from files was faster than querying structured logs from Loki.

    Nevertheless, I will show you how I performed the benchmark, so you can try it out yourself and draw your own conclusions.

    Log querying performance benchmark

    Rules:

    • 5 iterations per log source
    • First, log files (unstructured logs) are queried for the specific string demo-user-123. The script has the full machine's resources available and processes files in parallel using multiple threads.
    • Then, Loki logs (structured logs) are queried for the same string via the Loki API. Here the script has limited machine resources available (Docker container limits).

    [Chart: Performance comparison – file logs vs Loki logs (5 iterations each)]
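To get a feel for the file side of the benchmark, here is a minimal, self-contained sketch of the "parallel search over log files" idea using standard Unix tools. The real benchmark is the LogSearchTools script in the repo; the file names and contents below are made up for illustration:

```shell
# Create a few fake log files to search through.
mkdir -p /tmp/loki-bench
for i in 1 2 3; do
  printf 'INFO Order placed by demo-user-123\nINFO Health check\n' \
    > "/tmp/loki-bench/api-$i.log"
done

# Search all files in parallel: -P 4 runs up to four grep processes at once,
# -n 1 hands each grep a single file, -H prefixes each count with the file name.
find /tmp/loki-bench -name '*.log' -print0 \
  | xargs -0 -n 1 -P 4 grep -Hc 'demo-user-123'
```

Each output line reports `<file>:<match count>`. Because the files are scanned independently, the work parallelizes across CPU cores, which is a big part of why the file-based search in the benchmark was fast.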

    Why Loki Performed Slowly

    So why did Loki perform worse in this benchmark? Let's look at a quote from the Loki documentation:

    Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs’ labels [...]. Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.

    NOTE

    Additionally, the performance is influenced by other factors. For example, Loki was running in a Docker container with limited resources, while file logs were queried directly from the host machine's filesystem. To be more precise, the logs were copied from the Docker volume to the host machine, and an optimized script was run against those files (each file processed in parallel using multiple threads). Investigate the LogSearchTools and Tools folders to get the full picture.

    So basically, Loki does not index all log properties, only labels (check out appsettings.json and the Loki sink docs for the full picture). This leads to poor querying performance when searching for properties that are not indexed (like UserId in our case).
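For illustration, this is roughly what such a "non-indexed" search looks like against Loki's HTTP API. The app label value is an assumption (it depends on the labels configured in the sink); the point is that only the label matcher in braces uses the index, while the |= line filter forces Loki to decompress and scan the chunk contents:

```shell
# Hypothetical query – requires the scenario's Loki to be running on localhost:3100.
curl -G -s 'http://localhost:3100/loki/api/v1/query_range' \
  --data-urlencode 'query={app="serilog-demo"} |= "demo-user-123"' \
  --data-urlencode 'limit=100'
```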

    Cardinality and Loki Indexing

    To understand this topic deeply, you must be familiar with the cardinality concept. In short, cardinality is the number of unique values a particular label can have. For example, a label like LogLevel has low cardinality because it can only take a few values (e.g., Info, Warning, Error). On the other hand, a label like UserId has high cardinality because it can have millions of unique values (one per user).

    Loki "likes" low-cardinality labels because they allow for efficient indexing and querying. High-cardinality labels can lead to performance issues and increased resource consumption because the index size grows significantly with each unique value.

    Is Loki for me?

    The answer depends on your querying patterns. Use the interactive guide below to find out:

    [Interactive guide: 🤔 Is Loki right for you? – it starts by asking how often you need to query logs]

    Beyond cost-effectiveness, what else does Loki bring to the table? Let's see.

    Loki synergy with Grafana

    There is a good reason why our scenario uses both Grafana and Loki. They work really well together!

    Loki stores the logs, while Grafana provides powerful visualization and querying capabilities on top of them. This synergy allows you to create rich dashboards, set up alerts, and perform complex queries with ease (I will cover some of these capabilities in future posts).

    Here's a screenshot from our scenario load test run:

    Grafana dashboard displaying logs from multiple .NET API instances during load testing

    Another one with UserId filter applied:

    Filtering logs by specific UserId to trace user activity

    And one more showing log details for each .NET API instance (InstanceId label is container ID):

    Viewing logs grouped by API instance (container ID)

    Run the scenario yourself

    To run the entire scenario yourself, follow these steps:

    1. Go to the branch 004-grafana-loki of the SerilogDemo repository
    2. Open a terminal and navigate to the repository folder
    3. Run the start command from the README to launch the scenario with 5 API instances and load testing
    4. Wait a few minutes to give the Docker containers time to start and generate some logs
    5. [OPTIONAL] Open the Docker Desktop application and look at the containers producing logs (you can check their logs in real time). k6 is the most interesting one – it generates load against the APIs.
    6. Open your browser and navigate to http://localhost:3000 to access the Grafana dashboard. From the side menu, go to Drilldown > Logs or Explore to query logs (I recommend the former – it has a pre-configured dashboard for our scenario).
    7. Play around with the Grafana dashboards, apply filters, and explore the logs! I encourage you to try out different queries and see how Grafana + Loki work together.
    8. Watching the Docker containers in real time is fun :)
    9. After you finish exploring, stop the scenario by running the stop command from the README
    10. Enjoy! 🎉

    NOTE

    If you want to send requests to the APIs manually, use the SerilogDemo.http file with the REST Client extension in VS Code. At the end of that file, you will find requests that will allow you to go through the whole e-commerce flow for a specific UserId.

    WARNING

    After stopping the scenario, the containers will be removed, but the log files will remain on your machine (you can find them in the Volumes folder inside the Docker data directory).

    Screenshots for reference:

    Docker Compose starting 5 API instances with load testing profile
    All containers running: APIs, NGINX, Loki, Grafana, and k6 load tester
    Docker volumes containing persisted log files from the scenario

    Run the benchmark yourself

    To run the querying performance benchmark yourself, follow these steps:

    1. Ensure the scenario is running and a sufficient amount of logs has been generated (follow the steps from the previous section).
       1.1. If you get the error error CS1061: 'HttpContent' does not contain a definition for 'ReadFromJsonAsync' and no accessible extension method 'ReadFromJsonAsync' accepting a first argument of type 'HttpContent' could be found (are you missing a using directive or an assembly reference?), you probably forgot to run the scenario and the Loki API is not available. Please run the scenario first.
    2. Download the logs from the Docker volumes to your host machine and move them into the Logs folder located in the root of the project.
    3. Navigate to the LogSearchTools folder
    4. Open a terminal in that folder
    5. Run the benchmark script:
    6. Wait for the benchmark to complete and view the results in the terminal output.
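The benchmark script's exact name and arguments are defined in the repo; a hypothetical invocation could look like the following (the flag names below are assumptions, not the script's real interface):

```shell
# Hypothetical – see the LogSearchTools folder for the actual script and arguments.
dotnet run -- --search-term demo-user-123 --iterations 5
```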

    TIP

    You can modify the search term and number of iterations by changing the command arguments.

    Benchmark tool comparing file-based and Loki API query performance

    Summary

    In this post, I aimed to showcase a subset of Grafana + Loki capabilities with the Serilog sink in a complex scenario. Additionally, I ran a benchmark comparing querying performance (interesting, in my opinion, but not comprehensive enough to paint the whole picture).

    Key takeaways

    • Grafana Loki sink for Serilog allows sending structured logs to Loki for efficient storage and querying.
    • Grafana provides powerful visualization and querying capabilities on top of Loki logs.
    • Querying performance may vary based on log source and querying patterns; in this benchmark, file logs outperformed Loki logs for a specific search term.
    • Loki is optimized for low-cardinality labels; high-cardinality labels may lead to performance issues.
    • Always consider your specific use case and querying patterns when choosing a log management solution.

    What's next

    In the next part of the Serilog series, we will explore another Serilog sink. Can you guess what it will be?

    Stay tuned! 🔥
