Using a Semantic Model as a Reasoning Layer (Not Just Metadata)
Most systems treat semantic models as documentation.
But what if they were active participants in query execution?
This question came up while thinking about natural language analytics.
Natural Language Is Ambiguous by Default
If someone asks:
“Show me sales by product this year”
There are immediate ambiguities:
- Which sales measure?
- At what product level?
- Calendar or fiscal year?
- Gross or net?
Most tools resolve this by:
- Picking defaults
- Asking the user to clarify
But what if the system could reason about the question?
The Semantic Model as Context
Imagine a model that explicitly defines:
- Valid measures
- Valid rollups
- Default aggregation logic
- Synonyms and aliases
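To make that concrete, here is a minimal sketch of what such a model might contain, written as a Python dict mirroring the JSON shape. Every name in it (net_sales, product_category, and so on) is invented for illustration, not taken from a real model:

```python
# A hypothetical semantic model: measures, dimensions, defaults, and
# synonyms declared as plain data rather than code.
SEMANTIC_MODEL = {
    "measures": {
        "net_sales": {"aggregation": "sum", "synonyms": ["sales", "revenue"]},
        "gross_sales": {"aggregation": "sum", "synonyms": []},
    },
    "dimensions": {
        "product": {"levels": ["product_category", "product_name"]},
        "date": {"levels": ["calendar_year", "fiscal_year", "month"]},
    },
    "defaults": {
        # What an ambiguous question resolves to when the user doesn't say.
        "sales": "net_sales",
        "year": "calendar_year",
    },
}
```

Because this is just data, resolving an ambiguous term like “sales” becomes a lookup in `defaults` rather than a guess buried in query-generation code.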
Before executing anything, the system can ask:
- Is this aggregation valid?
- Are these measures comparable?
- Does this level change meaning?
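The first of those checks can be sketched in a few lines. This is a hypothetical shape, not a real schema: the point is that the system returns a reason instead of silently producing a number.

```python
# A minimal pre-execution validity check against a declarative model.
# Measure names and rollup levels are illustrative assumptions.
MODEL = {
    "measures": {"net_sales": {"agg": "sum"}, "gross_sales": {"agg": "sum"}},
    "rollups": {
        "net_sales": ["product_category", "year"],
        "gross_sales": ["product_category"],
    },
}

def validate(measure: str, level: str) -> tuple[bool, str]:
    """Return (ok, reason) so a bad question fails with an explanation."""
    if measure not in MODEL["measures"]:
        return False, f"Unknown measure: {measure}"
    if level not in MODEL["rollups"][measure]:
        return False, f"{measure} cannot be rolled up to {level}"
    return True, "ok"

print(validate("net_sales", "year"))    # (True, 'ok')
print(validate("gross_sales", "year"))  # rejected, with the reason attached
```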
This shifts the failure mode from “The numbers look wrong” to “This question doesn’t make sense, and here’s why.”
Why JSON (or Any Declarative Format) Matters
Using a declarative model (JSON, YAML, etc.) has a side effect:
The model becomes retrievable.
This means:
- It can be indexed
- It can be reasoned over
- It can be fed into AI systems as context
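“Retrievable” here is not a metaphor. Because the model is plain data, you can invert it into an index the same way a retriever indexes documents. A small sketch, with invented names:

```python
# Build a lookup from every synonym to its canonical measure,
# straight from a declarative (JSON-shaped) model.
MODEL = {
    "measures": {
        "net_sales": {"synonyms": ["sales", "revenue"]},
        "units_sold": {"synonyms": ["units", "quantity"]},
    }
}

index: dict[str, str] = {}
for name, spec in MODEL["measures"].items():
    index[name] = name  # canonical name maps to itself
    for syn in spec["synonyms"]:
        index[syn] = name

print(index["revenue"])  # -> net_sales
```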
This is where things get interesting.
RAG Without the Hype
Instead of asking an LLM to “understand your data”, you give it:
- The question
- The relevant semantic definitions
- The allowed aggregation paths
The model doesn’t invent meaning.
It operates inside the constraints you’ve defined.
This feels much safer and much more useful.
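Assembling that constrained context is mostly string-building. A sketch of the idea, with the question and definitions invented for illustration; the key property is that the LLM only ever sees vetted definitions, not raw data:

```python
# Assemble an LLM prompt from retrieved semantic definitions,
# instructing the model to stay inside them or refuse.
question = "Show me sales by product this year"

retrieved_definitions = [
    "measure net_sales: sum of order amounts after discounts (synonyms: sales)",
    "dimension product: levels product_category > product_name",
    "default year: calendar year unless 'fiscal' is specified",
]

prompt = (
    "Answer using ONLY the definitions below. If the question cannot be\n"
    "expressed with them, explain why instead of guessing.\n\n"
    "Definitions:\n- " + "\n- ".join(retrieved_definitions) + "\n\n"
    f"Question: {question}\n"
)
print(prompt)
```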
The Emerging Pattern
I’m starting to think of the semantic model as:
- A contract between humans and data
- A reasoning layer
- A governance mechanism
- A performance hint system
In the next post, I’ll try to walk through how a simple question could flow through such a system end to end.
