Power BI Dynamic Inventory Dashboard
At that particular date, at that particular plant, what is the quantity of that particular material code?
Introduction
Context
I had a fulfilling time working as a Demand Planner at a company specializing in Power Technology and Services. Per the standard process, data was recorded and maintained in a hybrid setup of Excel workbooks (hosted on SharePoint) and SAP S/4HANA.
The problem was the lack of centralized information. Checking and tracking performance required accessing multiple different data points, and it provided zero help with transparency between the Supply Chain team and other stakeholders.
In the traditional way, I used a lot of INDEX MATCH in Excel, trying to gather all the relevant information across multiple data points. It did not go well. At one point, my workbook took 15 minutes just to open.
In short, there was a lot of room for improvement regarding my data management.
Scope
Basically, I need to see, at that particular date, at that particular plant, what the quantity of that particular material code is.
The goal is to centralize the information on POs, SOs, availability, and scheduled shipments from SAP HANA plus multiple Excel sheets into a single dashboard for tracking stock levels over a pre-defined timeline.
The pipeline was based on the corporation's facilities and resources at the time, which, to my knowledge, were not too unusual.
Deliverables
The Dynamic Inventory Level Dashboard.
The motivations behind the pipeline design.
Disclaimer:
- The datasets were censored and edited.
- The dashboard in this blog is a prototype.
Link to the resources of this project
Pipeline
Motivation Behind The Pipeline
From Sources to Data Warehouse (Task 1 → Task 3)
Direct access or an API to SAP HANA was restricted for users like me.
The solution was collaboration: on-demand normalized datasets were processed by a dedicated BI team and mailed via Outlook on an agreed schedule.
Together with the Excel workbooks, which I did have direct access to, these datasets need to be stored together somewhere after transformation.
The most common solution is an SQL Data Warehouse.
For this prototype run, I use SQL Server as my local data storage solution; this RDBMS is robust enough to act as a data warehouse. And ... it's free.
Normalization (mostly handling date formats) and the bulk upload into SQL Server were handled by a Python script.
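A minimal sketch of what that script does, assuming a pandas + SQLAlchemy setup; the file names, table names, and connection string below are placeholders, not the originals:

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection to the local SQL Server acting as the warehouse
engine = create_engine(
    "mssql+pyodbc://@localhost/InventoryDW"
    "?driver=ODBC+Driver+17+for+SQL+Server&trusted_connection=yes"
)

def load_fact(path: str, table: str, date_cols: list[str]) -> None:
    df = pd.read_excel(path)
    # Normalization: coerce mixed date formats into a single date type
    for col in date_cols:
        df[col] = pd.to_datetime(df[col], dayfirst=True, errors="coerce").dt.date
    # Bulk upload, replacing the previous snapshot of the table
    df.to_sql(table, engine, if_exists="replace", index=False, chunksize=1000)

# Hypothetical extracts mailed by the BI team or pulled from the SharePoint workbooks
load_fact("po_extract.xlsx", "Fact_PO", ["delivery_date"])
load_fact("so_extract.xlsx", "Fact_SO", ["requested_date"])
load_fact("current_stock.xlsx", "Fact_CurrentStock", ["snapshot_date"])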
From Data Warehouse to Power BI (Task 3 → Task 5)
Again, I need to see, at that particular date, at that particular plant, what the quantity of that particular material code is.
The basic quantity math is: Current Stock + PO quantities - SO quantities. This means the key is a composite of 3 columns: date, plant, and material code.
I was deciding between creating a composite key column for all the fact tables (date & plant & material_code in Power Query) or merging the quantity info of the fact tables (Current Stock, SO, PO) into one big table.
Power BI loves the star schema; everything is built around it. Merging into one big fact table was my choice for this prototype run. Concerns about scalability and performance were raised, but considering that the composite key columns would require row-level calculation, I do not think that approach would have provided any significant advantage had it been implemented instead.
So far so good: it is working within Power BI Service resources, at around 5 million rows for a 2-year span (730 days) with around 1,500 active material codes and 2 plants.
What does it really mean?
“Data should be transformed as far upstream as possible and as far downstream as necessary.” Shout out to Matthew Roche.
All possible transformations happen in the data warehouse instead of Power Query: I cross-joined the date_key, materials, and plants as the "skeleton" for the dataset, resulting in a new table, Dim_Cartesian, in SQL Server. The intention was to use this skeleton and left-join it with the transactional data from multiple fact tables.
Eventually, this creates one big central fact table.
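To make the "skeleton" idea concrete, here is roughly what that cross-join produces, sketched in Python/pandas purely for illustration (the actual table was built with a CROSS JOIN in SQL Server, and the dimension and column names here are assumptions):

import pandas as pd

dim_date = pd.DataFrame({"date_key": pd.date_range("2024-01-01", periods=730).strftime("%Y-%m-%d")})
dim_material = pd.DataFrame({"material_code": ["MAT-001", "MAT-002"]})  # ~1,500 codes in reality
dim_plant = pd.DataFrame({"plant": ["P100", "P200"]})

# Cartesian product: one row per date x material x plant, even when nothing moves that day
dim_cartesian = (
    dim_date
    .merge(dim_material, how="cross")
    .merge(dim_plant, how="cross")
)
print(len(dim_cartesian))  # 730 x 2 x 2 = 2,920 skeleton rows in this toy example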
But I wouldn't join them in SQL Server, at least not in this prototype pipeline, because that would require orchestration tools to ensure every future change in the small fact tables is propagated into the big fact table. I used Power Query further downstream instead.

All relevant tables are loaded into Power BI.
Merge Queries uses the Dim_Cartesian columns as the composite key to join the quantities of Current Stock, POs (incoming), and SOs (outgoing). This big fact table is named Closing_Inventory.
Power Query helps propagate every change in Current Stock, POs, or SOs into the Closing_Inventory table, as sketched below.
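The merge logic, again in a pandas sketch purely for illustration (the real step is Power Query's Merge Queries; the fact table and column names are assumptions):

import pandas as pd

keys = ["date_key", "plant", "material_code"]

# A tiny skeleton standing in for Dim_Cartesian, plus toy fact rows keyed on the same columns
dim_cartesian = (
    pd.DataFrame({"date_key": pd.date_range("2024-01-01", periods=5).strftime("%Y-%m-%d")})
    .merge(pd.DataFrame({"plant": ["P100"]}), how="cross")
    .merge(pd.DataFrame({"material_code": ["MAT-001"]}), how="cross")
)
current_stock = pd.DataFrame([{"date_key": "2024-01-01", "plant": "P100", "material_code": "MAT-001", "Qty": 120}])
purchase_orders = pd.DataFrame([{"date_key": "2024-01-03", "plant": "P100", "material_code": "MAT-001", "PO_Qty": 40}])
sales_orders = pd.DataFrame([{"date_key": "2024-01-05", "plant": "P100", "material_code": "MAT-001", "SO_Qty": 30}])

# Left-join each fact's quantity onto the skeleton, so every date/plant/material keeps a row
closing_inventory = (
    dim_cartesian
    .merge(current_stock, on=keys, how="left")
    .merge(purchase_orders, on=keys, how="left")
    .merge(sales_orders, on=keys, how="left")
    .fillna({"Qty": 0, "PO_Qty": 0, "SO_Qty": 0})
)
print(closing_inventory)

Refreshing the source tables and re-running the merge is what keeps Closing_Inventory in sync, which is what the Power Query refresh does here.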
The data model at this point looks like a star schema, in my opinion. Not orthodox modeling, but a form that can be utilized by Power BI: explicitly one central big fact table and the surrounding dimension tables.
- Dimension tables support filtering and grouping.
- The fact table supports summarization of the numbers.
In this case, filtering applies to 3 main dimensions: date, plant, and material code. As for the summarization of numbers, the central fact table, Closing_Inventory, keeps the DAX straightforward.
Measures in DAX:
// Stock Level is a running total: initial stock plus incoming (PO) minus outgoing (SO) quantities up to the selected date.
// Each component uses the standard cumulative-total pattern over the date dimension.
Stock Level =
VAR Stock = [Initial Stock] + [In] - [Out]
RETURN
    Stock
Initial Stock = CALCULATE(SUM(Closing_Inventory[Qty]), FILTER(ALL(dim_date), dim_date[date_key] <= MAX(dim_date[date_key])))
In = CALCULATE(SUM(Closing_Inventory[PO_Qty]), FILTER(ALL(dim_date), dim_date[date_key] <= MAX(dim_date[date_key])))
Out = CALCULATE(SUM(Closing_Inventory[SO_Qty]), FILTER(ALL(dim_date), dim_date[date_key] <= MAX(dim_date[date_key])))
Data Table Relationships
Dynamic Inventory Level Dashboard (Interactive)
Dashboard
Main Benefits
- User-friendly centralized display of data from multiple sources.
- Transparency cuts down cross-checking and query time between stakeholders.
- Enhanced situational awareness for the planner and relevant stakeholders.
Breakdown
Improvements and Expansion
There's a lot of room for improvement in automating the pipeline and enhancing the situational-awareness capability of this solution.
The aim is to save time by cutting down manual tasks, and to highlight backorder risk within a time window that still allows the Planner to expedite with the supplier or the Sales team to re-negotiate the timeline with customers.
The Pipeline
- For Task 2 and Task 4, incorporate orchestration tools to reduce manual tasks and handle data updates more sustainably.
- Move the data combination further upstream to enhance scalability and the flexibility to set up new tracking measures.
The Dashboard
- A backorder date highlight/warning mechanic.
- Implement calculation of the re-order point for each material at each plant (a rough sketch of the idea follows below).
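As a rough illustration of that last item (not part of the original pipeline), a re-order point could be derived from the same outgoing-quantity history using the textbook formula: average daily demand times lead time, plus safety stock. The column names, lead time, and service factor below are assumptions:

import pandas as pd

def reorder_point(daily_demand: pd.Series, lead_time_days: float, service_factor: float = 1.65) -> float:
    # Textbook re-order point: expected demand during lead time plus safety stock.
    # service_factor is the z-score for the target service level (1.65 ~ 95%), a tunable assumption.
    avg_demand = daily_demand.mean()
    demand_std = daily_demand.std()
    safety_stock = service_factor * demand_std * (lead_time_days ** 0.5)
    return avg_demand * lead_time_days + safety_stock

# Hypothetical usage: 30 days of outgoing (SO_Qty) history for one material at one plant
history = pd.Series([12, 9, 15, 10, 11, 14, 8, 13, 10, 12] * 3)
print(round(reorder_point(history, lead_time_days=7), 1))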
Conclusion

For it to work in a business setting, many aspects must be brought to the table.
In this blog, the main focus is feasibility (the technology): the motivation was to use my knowledge of the company's facilities, and my technical skills, to tackle the challenges I encountered as a Demand Planner.
Although the solution was never fully implemented before I left, I know it could have been very beneficial for my work back then.
With that being said, there's certainly a lot of room for improvement, especially in scalability and performance optimization.