Your piece nails it. The data wins, not the model. But here’s what I’m running into in my world. I’m trying to build outcome intelligence in LNG and refining.
The execution data you’re talking about, the data that becomes the moat, doesn’t exist yet. It’s sitting in people’s heads and incident reports, if it exists at all. So my first problem isn’t building a better model. It’s designing a system that captures execution data at the workflow surface, data that nobody else can get their hands on.
What if you have to deliberately design the data capture into the operation itself? Does your three-layer cake still hold when you’re building the bottom layer from scratch?
I suspect that much of this data lives in SAP and the specialized systems connected to it. The operators certainly know the production per hour on every platform, the man-hours per platform, and the indirect costs allocated to the operation.
Trust me, it does not. The data we are capturing is what the Process Safety world calls Leading Indicators.
I’ve been using an analogy recently and found it to be effective when speaking to potential customers.
I see a process facility (an LNG plant, refinery, etc.) like a human body. On the left hand, we have DCS/SCADA, where the nerve endings of each finger are represented by analyzers, sensors, transmitters, and so on.
Those sensors send data through the nervous system to the brain.
On the right hand, we have all the manual workflows, but we have no nerve endings. So the brain has no idea what’s happening on the right hand.
We need to connect the dots between the “fingers” so the hand can do coordinated work.
This is where I’m focused: the right hand. It’s where people make catastrophic mistakes.
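To make the "right hand" idea concrete: instrumenting a manual workflow could mean logging each executed step as a structured event, so leading indicators (e.g. a deviation rate) fall out of the data instead of living in people's heads. A minimal sketch, with entirely hypothetical field names and identifiers:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class WorkflowEvent:
    """One manually executed step, captured as structured data at the workflow surface."""
    workflow: str            # e.g. "permit-to-work" (illustrative)
    step: str                # the specific action performed
    operator: str            # who performed it
    asset: str               # equipment or unit the step touched
    deviation: bool = False  # did execution deviate from the procedure?
    notes: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def deviation_rate(events):
    """Fraction of captured steps with a deviation: one simple leading indicator."""
    if not events:
        return 0.0
    return sum(e.deviation for e in events) / len(events)

events = [
    WorkflowEvent("permit-to-work", "verify isolation", "op-117", "C-201"),
    WorkflowEvent("permit-to-work", "gas test", "op-117", "C-201",
                  deviation=True, notes="primary tester out of calibration, used backup"),
]
print(deviation_rate(events))  # 0.5
```

The point is the capture schema, not the metric: once every manual step produces a record like this, the "nerve endings" exist and any number of leading indicators can be computed over them.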
The intelligence ontology layer is interesting. How do you see this working?
Maybe a 4th layer: an ontology/intelligence layer above the workflow layer?
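One way such an ontology layer could work, sticking with the body analogy: typed entities and relations that link manual-workflow records (right hand) to instrumented assets (left hand), so a query can traverse both. A toy sketch with made-up identifiers, not a real schema:

```python
# Entities from both "hands" of the facility, plus relations between them.
ontology = {
    "entities": {
        "C-201":    {"type": "Compressor",          "unit": "LNG-Train-1"},
        "PT-2011":  {"type": "PressureTransmitter", "unit": "LNG-Train-1"},
        "PTW-0042": {"type": "PermitToWork",        "unit": "LNG-Train-1"},
    },
    "relations": [
        ("PT-2011",  "measures",           "C-201"),  # sensor side (DCS/SCADA)
        ("PTW-0042", "authorizes_work_on", "C-201"),  # manual workflow side
    ],
}

def related(ont, entity):
    """Everything directly linked to an entity, regardless of which 'hand' it came from."""
    return sorted(
        {s if o == entity else o
         for s, _, o in ont["relations"]
         if entity in (s, o)}
    )

print(related(ontology, "C-201"))  # ['PT-2011', 'PTW-0042']
```

The value shows up in the traversal: asking about compressor C-201 surfaces both its pressure transmitter and the open permit, which is exactly the cross-hand view the intelligence layer would reason over.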