Posts

Showing posts from 2025

Stop Wrapping EF Core in Repositories: Use Specifications + Clean Architecture

The GitHub project that accompanies this article is available here.

Wrapping Entity Framework Core in repositories has become a default in many .NET codebases. But defaults deserve to be challenged. This post shows:

- why repository-over-EF breaks down
- how Clean Architecture + the Specification Pattern fixes it
- how EF Core InMemory tests prove the approach works

The problem with “Repository Pattern over EF Core”

EF Core already gives you:

- DbSet&lt;T&gt; → repository behavior
- DbContext → unit of work

Yet many projects add another repository layer anyway. It usually starts simple:

- Add(order)
- GetById(id)
- Update(order)

Then order-processing requirements arrive:

- “Get open orders for customer”
- “Get orders awaiting payment older than 7 days”
- “Get paged orders sorted by date with items”

Soon you’re staring at this:

OrderRepository
├── GetOpenOrdersForCustomer(...)
├── GetOrdersAwaitingPayment(...)
├── GetOrdersWithItemsAndPayments(...)
├── ...
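A minimal sketch of the specification idea the post argues for: each query concept becomes a small, composable class instead of another repository method. The names (`Specification<T>`, `OpenOrdersForCustomer`, the `Order` shape) are illustrative, not taken from the accompanying project.

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

// Illustrative base type: a specification wraps a reusable query predicate.
public abstract class Specification<T>
{
    public abstract Expression<Func<T, bool>> ToExpression();

    public bool IsSatisfiedBy(T entity) => ToExpression().Compile()(entity);
}

public enum OrderStatus { Open, AwaitingPayment, Closed }

public sealed class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public OrderStatus Status { get; set; }
}

// One small class per query concept, instead of one repository method per query.
public sealed class OpenOrdersForCustomer : Specification<Order>
{
    private readonly int _customerId;
    public OpenOrdersForCustomer(int customerId) => _customerId = customerId;

    public override Expression<Func<Order, bool>> ToExpression() =>
        order => order.CustomerId == _customerId && order.Status == OrderStatus.Open;
}
```

At the call site, `dbContext.Orders.Where(spec.ToExpression())` replaces `OrderRepository.GetOpenOrdersForCustomer(...)`, so EF Core's own `DbSet<T>`/`DbContext` stay as the repository and unit of work.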

From Question to Result: A Mental Model for Semantic Analytics

Part 1: A Thought Experiment: What If Analytics Models Were Semantic, Not Structural?
Part 2: Using a Semantic Model as a Reasoning Layer (Not Just Metadata)

Let’s take a concrete example: “Compare forecast vs actual sales by month for bike categories this year.” Most systems jump straight to execution. I think that’s backwards.

Step 1: Understand the Question

Before touching data, the system identifies:

- Measures: Forecast, Actual Sales
- Time range: This year
- Time level: Month
- Product level: Category
- Filter: Bike-related categories

This is interpretation, not computation.

Step 2: Validate Meaning

Next, the system checks:

- Are forecast and actual comparable?
- Is category a valid rollup for both?
- Is monthly aggregation defined?
- Are defaults available where ambiguity exists?

If something is unclear, the system can explain why.

Step 3: Decide How to Answer

Only now does execution matter:

- Cached aggregates
- Precomputed tuples
- On-th...
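The interpretation step above can be sketched as data: the question becomes an explicit structure before any query runs, and validation reports reasons rather than failing silently. All type and member names here are hypothetical illustrations, not from the article's system.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of Step 1: the parsed question as an explicit value,
// e.g. Measures = ["Forecast", "ActualSales"], TimeLevel = "Month".
public sealed record QuestionInterpretation(
    IReadOnlyList<string> Measures,
    string TimeRange,      // e.g. "ThisYear"
    string TimeLevel,      // e.g. "Month"
    string ProductLevel,   // e.g. "Category"
    string? Filter);       // e.g. "Bike-related categories"

// Hypothetical sketch of Step 2: validation returns the reasons a question
// cannot be answered (empty means valid), so the system can "explain why".
public interface ISemanticValidator
{
    IReadOnlyList<string> Validate(QuestionInterpretation question);
}
```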

Using a Semantic Model as a Reasoning Layer (Not Just Metadata)

Part 1: Using a Semantic Model as a Reasoning Layer (Not Just Metadata)

Most systems treat semantic models as documentation. But what if they were active participants in query execution? This question came up while thinking about natural language analytics.

Natural Language Is Ambiguous by Default

If someone asks: “Show me sales by product this year”, there are immediate ambiguities:

- Which sales measure?
- At what product level?
- Calendar or fiscal year?
- Gross or net?

Most tools resolve this by picking defaults, or asking the user to clarify. But what if the system could reason about the question?

The Semantic Model as Context

Imagine a model that explicitly defines:

- Valid measures
- Valid rollups
- Default aggregation logic
- Synonyms and aliases

Before executing anything, the system can ask:

- Is this aggregation valid?
- Are these measures comparable?
- Does this level change meaning?

This shifts failure from: “The numbers look...
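One way to picture such a model: a small, queryable structure that the system consults before executing anything, resolving terms like “sales” via synonyms. This is a hypothetical sketch; every name is illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: the semantic model as explicit, queryable context
// rather than documentation.
public sealed record Measure(string Name, string[] Synonyms, string DefaultAggregation);

public sealed record SemanticModel(
    IReadOnlyList<Measure> Measures,
    IReadOnlyDictionary<string, string[]> ValidRollups) // level -> allowed rollup levels
{
    // Resolve a user's term ("sales") to a concrete measure via name or synonym,
    // or null so the caller can ask for clarification instead of guessing.
    public Measure? Resolve(string term) =>
        Measures.FirstOrDefault(m =>
            m.Name.Equals(term, StringComparison.OrdinalIgnoreCase) ||
            m.Synonyms.Contains(term, StringComparer.OrdinalIgnoreCase));

    public bool IsValidRollup(string level, string target) =>
        ValidRollups.TryGetValue(level, out var targets) && targets.Contains(target);
}
```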

A Thought Experiment: What If Analytics Models Were Semantic, Not Structural?

I’ve been thinking a lot about why analytics and forecasting platforms feel harder to use than they should. Not harder to build, harder to think with.

Most modern data stacks are incredibly capable:

- Fast databases
- Columnar storage
- In-memory caches
- ML forecasting libraries

Yet the questions users struggle to answer haven’t changed much:

- “Compared to what?”
- “At what level?”
- “Is this rolled up correctly?”
- “Why does this number look wrong?”

This feels less like a compute problem and more like a meaning problem.

Where Meaning Lives Today

In most systems I’ve worked on, meaning is scattered across:

- Database schemas
- ETL logic
- Cube definitions
- BI metadata
- Tribal knowledge

None of this is explicit. If someone asks: “Can we compare forecast vs actual by category this year?” the system doesn’t reason about that question. It executes SQL and hopes the result makes sense.

A Different Framing

What if we treated analytics as a sem...

Dev Tunnels with Visual Studio 2022 and Visual Studio 2026

Securely exposing your local development environment to the world, explained.

Introduction

Developers building web APIs and web applications often face a familiar challenge: how to expose a localhost app to external services, collaborators, mobile devices, or testing environments without deploying it to production. Enter Dev Tunnels, a powerful feature in Visual Studio that allows you to securely share your local development server over the internet with a public or authenticated URL. It dramatically simplifies workflows involving webhooks, remote debugging, and cross-device testing. In this article, we’ll explore what Dev Tunnels are, how they work in Visual Studio 2022, what’s new in the context of Visual Studio 2026, and practical scenarios for using them.

What Are Dev Tunnels?

Dev tunnels let you expose a local web app running on localhost to the internet using a secure tunnel endpoint. Once created:

- You get a remote HTTPS URL that maps to your loc...
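Visual Studio manages tunnels from the IDE, but Microsoft also ships a standalone `devtunnel` CLI for the same feature. As a rough sketch of the flow (verify flags against `devtunnel --help`, as options may differ by version):

```shell
# Sign in; tunnels are tied to a Microsoft or GitHub account.
devtunnel user login

# Host a tunnel forwarding to a local app on port 8080.
# --allow-anonymous makes the URL public; omit it to require authentication.
devtunnel host -p 8080 --allow-anonymous
```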

Server-Sent Events in .NET 10: Do You Really Need SignalR?

Real-time updates have become a default expectation in modern web applications. Dashboards refresh automatically, notifications arrive instantly, and long-running operations stream progress back to the UI. In the .NET ecosystem, SignalR has traditionally been the go-to solution for these scenarios. However, with .NET 10, Server-Sent Events (SSE) have become a first-class, production-ready option that deserves serious consideration. In many cases, SignalR may actually be overkill.

This post explores:

- What Server-Sent Events are
- How SSE works in .NET 10
- The benefits of SSE over SignalR
- When SignalR is the right choice
- How to decide which approach fits your application

What Are Server-Sent Events (SSE)?

Server-Sent Events provide a simple, standards-based way for a server to push data to a browser or client over HTTP.

Key characteristics:

- One-way communication (server → client)
- Built on plain HTTP
- Uses the text/event-stream content type
- Automatically reconnects if the connection dr...
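To make the wire format concrete, here is a minimal ASP.NET Core endpoint that streams SSE by hand. The `/stock-ticker` route and one-second cadence are illustrative; .NET 10 also adds dedicated SSE result helpers, but this plain-HTTP version shows what `text/event-stream` actually looks like.

```csharp
// Minimal API sketch: each SSE event is a "data: <payload>" line
// followed by a blank line, on a long-lived HTTP response.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/stock-ticker", async (HttpResponse response, CancellationToken ct) =>
{
    response.Headers.ContentType = "text/event-stream";
    response.Headers.CacheControl = "no-cache";

    while (!ct.IsCancellationRequested)
    {
        await response.WriteAsync($"data: {DateTime.UtcNow:O}\n\n", ct);
        await response.Body.FlushAsync(ct); // push the event immediately
        await Task.Delay(TimeSpan.FromSeconds(1), ct);
    }
});

app.Run();
```

On the browser side, `new EventSource("/stock-ticker")` consumes this stream and reconnects automatically if the connection drops.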

Moq vs NSubstitute: Choosing the Right Mocking Framework for Modern .NET Projects

Introduction

Mocking frameworks are one of those tools that quietly shape the quality of your test suite. You don’t think about them much, until your tests become brittle, unreadable, or painful to refactor. For many years, Moq has been the default choice in the .NET ecosystem. It’s powerful, expressive, and deeply integrated with expression trees. However, over the last decade, NSubstitute has steadily gained popularity as teams look for simpler, more readable, and more refactor-friendly tests.

If you’re starting a new .NET project today, or reconsidering your current testing approach, this question comes up often: Should I still use Moq, or is NSubstitute the better choice now? This article explores that question in depth, with real-world code examples, trade-offs, and guidance based on modern development practices.
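The stylistic difference in a nutshell: Moq wraps the fake in a `Mock<T>` and configures it with lambdas, while NSubstitute hands you the interface directly. The `IPriceService` interface below is a made-up example for comparison.

```csharp
using Moq;
using NSubstitute;

public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class MockingStyles
{
    public void MoqStyle()
    {
        // Moq: expression-based setup on a Mock<T> wrapper; note .Object.
        var mock = new Mock<IPriceService>();
        mock.Setup(s => s.GetPrice("ABC")).Returns(9.99m);

        decimal price = mock.Object.GetPrice("ABC");

        mock.Verify(s => s.GetPrice("ABC"), Times.Once);
    }

    public void NSubstituteStyle()
    {
        // NSubstitute: the substitute *is* the interface; no wrapper object.
        var service = Substitute.For<IPriceService>();
        service.GetPrice("ABC").Returns(9.99m);

        decimal price = service.GetPrice("ABC");

        service.Received(1).GetPrice("ABC");
    }
}
```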

FluentAssertions vs Shouldly: Assertion Libraries for Modern .NET

Introduction

Assertions are the voice of your tests. They’re the part everyone reads during:

- code reviews
- CI failures
- late-night debugging sessions

Two libraries dominate the modern .NET space:

- FluentAssertions
- Shouldly

Both dramatically improve upon classic Assert.Equal(...). But they differ in philosophy, syntax, and licensing, and that last point matters more than many teams expect.

What Assertion Libraries Are Really For

Assertions should:

- Clearly express intent
- Produce readable failure messages
- Stay out of the way

If your assertions are noisy, your tests become harder to understand than the code they test.

FluentAssertions: Overview

FluentAssertions uses a chainable, fluent syntax:

result.Should().Be(42);

It’s expressive, powerful, and very popular.

Shouldly: Overview

Shouldly uses a BDD-style “should” syntax:

result.ShouldBe(42);

It focuses on:

- minimal ceremony
- extremely readable failure messages

Simple...
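A slightly fuller side-by-side of the two styles from the excerpt, using a throwaway local value; the chained FluentAssertions form and the Shouldly range check show where the syntaxes diverge beyond simple equality.

```csharp
using FluentAssertions;
using Shouldly;

public class AssertionStyles
{
    public void Compare()
    {
        var result = 42;

        // FluentAssertions: chainable, reads left to right.
        result.Should().Be(42);
        result.Should().BeGreaterThan(40).And.BeLessThan(50);

        // Shouldly: minimal ceremony, same intent.
        result.ShouldBe(42);
        result.ShouldBeInRange(40, 50);
    }
}
```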