A Practical Stack for Semantic Analytics: Azure, C#, Redis, and PostgreSQL



Happy New Year 🎉

Welcome to 2026, and welcome to the first post of the new year.


Part 3 Available Here

Up to now, I’ve deliberately avoided talking about technology.

Not because it doesn’t matter, but because architecture should follow meaning, not the other way around.

That said, once you commit to a semantic-first approach, certain technology choices start to make more sense than others. This post is about the stack I’ve been gravitating toward and why.


Why “Boring” Technology Is Often the Right Choice

Semantic analytics systems are already complex conceptually.

Adding novelty at the infrastructure layer tends to compound that complexity: more unfamiliar failure modes, more operational surprises, and less attention left for the semantics themselves.

For this kind of system, I’ve been prioritizing predictability, operational maturity, and debuggability over novelty.

That naturally pushed me toward a stack built around Azure, C#, Redis, and PostgreSQL.


C# as a Semantic Language (Not Just an API Language)

C# is often seen as a “backend enterprise” choice, but it has a few properties that matter a lot here:

  • Strong typing for semantic contracts

  • Excellent support for immutable models

  • First-class async and concurrency

  • Mature ecosystem for data access and testing

In a semantic system, you’re constantly translating between user intent, semantic definitions, and executable queries.

C# excels at making those translations explicit.

A semantic query object, for example, can be:

  • Validated at compile time

  • Enforced with invariants

  • Logged and reasoned about during execution

That matters when the system is making decisions on behalf of users.
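As an illustration of those three points, here is a minimal sketch (type and property names are hypothetical, not from any specific library) of a semantic query modelled as an immutable record whose constructor enforces its invariants:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: an immutable semantic query whose invariants are
// enforced at construction time, so an invalid query can never exist.
public sealed record SemanticQuery
{
    public string Metric { get; }
    public string Grain { get; }
    public IReadOnlyList<string> Dimensions { get; }

    public SemanticQuery(string metric, string grain, IReadOnlyList<string>? dimensions = null)
    {
        if (string.IsNullOrWhiteSpace(metric))
            throw new ArgumentException("A semantic query must name a metric.", nameof(metric));
        if (string.IsNullOrWhiteSpace(grain))
            throw new ArgumentException("A semantic query must declare its grain.", nameof(grain));

        Metric = metric;
        Grain = grain;
        Dimensions = dimensions ?? Array.Empty<string>();
    }

    // Loggable by design: the query can describe itself during execution.
    public override string ToString() =>
        $"{Metric} at {Grain} by [{string.Join(", ", Dimensions)}]";
}
```

Because the record is immutable, any `SemanticQuery` the execution layer receives is one that already passed its own checks.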


Azure as an Integration Surface, Not a Magic Box

I’ve found it useful to think of Azure less as “the cloud” and more as:

A collection of well-defined integration points.

Key services that fit naturally into this architecture:

Azure AI Search

  • Indexing semantic models

  • Handling synonyms and aliases

  • Retrieving relevant context for AI-assisted reasoning

Importantly, this keeps semantic lookup separate from transactional storage.
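One way to keep that separation explicit (a hypothetical sketch, not an Azure SDK sample) is to put semantic lookup behind a small interface, so the rest of the system never talks to the index directly. An Azure AI Search implementation slots in behind it; an in-memory one is enough for tests:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: semantic lookup as its own seam, separate from
// transactional storage. Synonyms and aliases resolve to canonical names here.
public interface ISemanticLookup
{
    // Returns the canonical metric name for a user-facing term, or null if unknown.
    string? ResolveMetric(string term);
}

public sealed class InMemorySemanticLookup : ISemanticLookup
{
    private readonly Dictionary<string, string> _aliases;

    public InMemorySemanticLookup(Dictionary<string, string> aliases) =>
        _aliases = new Dictionary<string, string>(aliases, StringComparer.OrdinalIgnoreCase);

    public string? ResolveMetric(string term) =>
        _aliases.TryGetValue(term.Trim(), out var canonical) ? canonical : null;
}
```

The PostgreSQL layer never sees alias resolution, and the search layer never touches facts.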

Azure OpenAI (Optional but Powerful)

Used carefully, not as an oracle:

  • The model doesn’t “know” the data

  • It reasons over retrieved semantic context

  • It helps interpret intent, not invent meaning

This distinction feels critical.
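A minimal sketch of that discipline (no SDK calls; the prompt shape is my assumption, not an Azure OpenAI sample): the model is only ever handed definitions the system retrieved, never raw data, so it can interpret intent but cannot invent meaning:

```csharp
using System.Collections.Generic;
using System.Text;

// Hypothetical sketch: the model reasons over retrieved semantic context.
// It receives definitions, not data, and is told to refuse when the
// definitions don't cover the question.
public static class IntentPromptBuilder
{
    public static string Build(string userQuestion, IReadOnlyList<string> retrievedDefinitions)
    {
        var sb = new StringBuilder();
        sb.AppendLine("You map questions onto the semantic definitions below.");
        sb.AppendLine("If a question cannot be answered from these definitions, say so.");
        sb.AppendLine();
        sb.AppendLine("Definitions:");
        foreach (var def in retrievedDefinitions)
            sb.AppendLine($"- {def}");
        sb.AppendLine();
        sb.AppendLine($"Question: {userQuestion}");
        return sb.ToString();
    }
}
```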


PostgreSQL as the Source of Truth

Despite all the excitement around real-time analytics engines, PostgreSQL keeps showing up as the most reasonable core datastore.

Why?

  • Excellent support for relational integrity

  • Strong indexing options

  • JSON support where flexibility is needed

  • Predictable behaviour under load

In this architecture:

  • PostgreSQL stores facts

  • It stores semantic model versions

  • It stores execution results when persistence is required

It’s not trying to be everything, and that’s a strength.
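To make “stores semantic model versions” concrete, here is a sketch of what such a table might look like (table and column names are illustrative), embedded as DDL the application could apply at startup:

```csharp
// Illustrative only: a versioned semantic-model table, so every execution
// can record exactly which definition of a metric it used.
public static class Schema
{
    public const string SemanticModelVersions = @"
        CREATE TABLE IF NOT EXISTS semantic_model_versions (
            model_name   text        NOT NULL,
            version      integer     NOT NULL,
            definition   jsonb       NOT NULL,  -- JSON where flexibility is needed
            created_at   timestamptz NOT NULL DEFAULT now(),
            PRIMARY KEY (model_name, version)   -- relational integrity
        );";
}
```

Facts and execution results would live in ordinary relational tables alongside this; only the model definition itself uses `jsonb`.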


Redis as the Semantic Accelerator

Redis becomes interesting once you accept that:

  • Not all queries are equal

  • Not all aggregations should be recomputed

Rather than caching raw queries, I’ve been exploring:

  • Caching semantic tuples

  • Pre-aggregating at known levels

  • Reusing intermediate results across questions

This shifts Redis from:

“A fast key-value cache”

to:

“A semantic performance layer”

The cache isn’t opaque; it understands grain.
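One way grain-awareness shows up in practice (a hypothetical sketch; the key layout is my assumption) is in the cache key itself, which encodes metric, grain, and filters so that two different questions resolving to the same semantic tuple reuse one result:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: a cache key that understands grain. Two questions
// that resolve to the same metric, grain, and filters collide on purpose,
// so the intermediate aggregation is computed once and reused.
public static class SemanticCacheKey
{
    public static string For(string metric, string grain, IReadOnlyDictionary<string, string> filters)
    {
        // Sort filters so the key is independent of insertion order.
        var parts = filters
            .OrderBy(kv => kv.Key, StringComparer.Ordinal)
            .Select(kv => $"{kv.Key}={kv.Value}");
        return $"sem:{metric}:{grain}:{string.Join(",", parts)}";
    }
}
```

A Redis `GET` on such a key is then a semantic question (“monthly revenue for EMEA”), not an opaque query-text hash.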


A Typical Query Flow (Conceptual)

At a high level, a query might flow like this:

  1. Natural language or structured request arrives

  2. Semantic intent is resolved (C# domain logic)

  3. Relevant semantic definitions are retrieved (Search)

  4. Validity and aggregation paths are checked

  5. Execution strategy is chosen:

    • Redis

    • PostgreSQL

    • Or a hybrid

  6. Results are returned with context

No single component is doing too much.

Each layer has a clear responsibility.
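The strategy choice in step 5 can be sketched as one small, explicit decision (names are hypothetical, and real heuristics would be richer, but the point is that the choice is visible and testable):

```csharp
public enum ExecutionStrategy { Redis, PostgreSql, Hybrid }

// Hypothetical sketch: choose where a query runs based on whether the
// requested grain is already pre-aggregated and whether fresh data is required.
public static class StrategyPlanner
{
    public static ExecutionStrategy Choose(bool grainIsPreAggregated, bool requiresFreshData)
    {
        if (grainIsPreAggregated && !requiresFreshData)
            return ExecutionStrategy.Redis;      // serve entirely from the semantic cache
        if (grainIsPreAggregated)
            return ExecutionStrategy.Hybrid;     // cached base plus a fresh delta
        return ExecutionStrategy.PostgreSql;     // compute from the source of truth
    }
}
```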


Why This Stack Feels “Right” for This Problem

This combination:

  • Encourages explicit modelling

  • Avoids hidden behaviour

  • Scales incrementally

  • Can be debugged by humans

And perhaps most importantly:

It lets you reason about correctness before performance.

Which is exactly what semantic systems need.


A Final Thought

I don’t think this stack is “the best”.

I think it’s coherent.

It aligns with a philosophy where:

  • Meaning comes first

  • Technology supports reasoning

  • Systems explain themselves

If you’re working on analytics, planning, or AI-assisted decision systems, I’d be curious to hear what stacks you’ve found that support thinking, not just execution.

If this article sparked ideas, raised questions, or you’d like to discuss how these concepts apply to your own projects, feel free to reach out via dotnetconsult.tech. I’m always happy to chat about real-world .NET architecture, performance, and design decisions.

