I think David summarized it pretty well. Numbers don't tell you everything. Xyla charts are good for scientific comparisons between cameras, but they don't tell me how a sensor "sees". For testing cameras I always use models and color/grey charts, underexposing and overexposing. Then we put those images through specific pipelines to determine which are the "better-for-us" ways of dealing with those images in post. How you treat them in post can alter quite vastly what, and how, a sensor sees.
After that I have a clear idea of the sensor's dynamic range for my purposes, taking into account the workflow and the final look needed. To me you can't separate that from the equation: it's not only the sensor, it's the whole process that matters.
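For what it's worth, the "usable stops" side of that kind of chart test can be sketched roughly like this. This is just my own illustration, not anyone's actual pipeline: the patch numbers, the SNR threshold, and the `usable_stops` helper are all made up, and a common engineering proxy is to count the exposure offsets where a grey patch's signal-to-noise ratio stays above some cutoff you've chosen for your workflow.

```python
# Hypothetical sketch: estimating usable dynamic range from grey-chart
# patches shot at different exposure offsets (an under/over bracket).
# All numbers are illustrative, not measurements from any real camera.

# (offset_in_stops, mean_signal, noise_stddev) for one grey patch.
patches = [
    (-6, 2.0, 1.6),
    (-5, 4.1, 1.9),
    (-4, 8.3, 2.4),
    (-3, 16.5, 3.1),
    (-2, 33.0, 4.2),
    (-1, 66.1, 6.0),
    (0, 132.0, 8.8),
    (1, 263.0, 13.0),
    (2, 520.0, 19.5),
]

SNR_THRESHOLD = 2.0  # assumed "usable" cutoff; pick one that suits your look


def usable_stops(patches, threshold=SNR_THRESHOLD):
    """Count the span of stops whose patch SNR clears the threshold."""
    ok = [offset for offset, mean, noise in patches if mean / noise >= threshold]
    return (max(ok) - min(ok) + 1) if ok else 0


print(usable_stops(patches))  # -> 8 with these made-up numbers
```

The point of doing it per-pipeline is that the same raw patches can clear the threshold in one grade and fail it in another, which is exactly why the chart number alone doesn't settle anything.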