GPU Debugger and Profiler reporting different Render Command execution times - how to find the gap?

Hello, I am using ICBs (indirect command buffers) to render a scene, and at some points I am getting frame drops. When those drops occur, the data provided by the GPU Debugger seems off compared to what the Profiler shows.

When the render is running smoothly, both tools give a timing that is “close enough” for the Render Command where I call executeCommandsInBuffer: 7.85 ms (GPU Debugger) and 9.90 ms (Profiler).

I say “close enough” because the scene is dynamically generated, but I tried to measure at points that looked very similar.

But when I am looking at the part of the scene that causes the drops, the reports are 9.38 ms (GPU Debugger) and 18.52 ms (Profiler), for the Render Command alone.
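
For reference, the “Render Command” I am timing is just the indirect command buffer execution on the render encoder, roughly like this (a simplified sketch, not my exact code; all parameter names are placeholders):

```swift
import Metal
import QuartzCore

// Simplified sketch of the pass being measured; all parameter names are placeholders.
func encodeScene(commandQueue: MTLCommandQueue,
                 renderPassDescriptor: MTLRenderPassDescriptor,
                 indirectCommandBuffer: MTLIndirectCommandBuffer,
                 maxCommandCount: Int,
                 drawable: CAMetalDrawable) {
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else { return }

    // The ICB was populated earlier; its draws are what actually run here.
    encoder.useResource(indirectCommandBuffer, usage: .read)

    // This call is the "Render Command" whose time differs between the two tools.
    encoder.executeCommandsInBuffer(indirectCommandBuffer, range: 0..<maxCommandCount)

    encoder.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```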

So far I have not been able to figure out why they report different timings; what the Profiler reports matches the frame drops.

Is there any other tool or window I can look at to figure out what is going on? The Profiler shows the ~18 ms, but that is all.
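
One cross-check I am considering is reading the GPU timestamps from the command buffer itself, just to see which tool the raw numbers agree with (a minimal sketch; note it measures the whole command buffer, not just the ICB render command):

```swift
import Metal

// Minimal sketch: log the GPU time of a command buffer when it completes.
// This covers the entire command buffer, not only the executeCommandsInBuffer call.
func logGPUTime(of commandBuffer: MTLCommandBuffer) {
    commandBuffer.addCompletedHandler { cb in
        let gpuTimeMs = (cb.gpuEndTime - cb.gpuStartTime) * 1000.0
        print("GPU time: \(gpuTimeMs) ms")
    }
}
```

I would call this right before commit(). If per-encoder resolution turns out to be needed, counter sample buffers (MTLCounterSampleBuffer) look like the next step, but I have not tried them yet.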

Here are screenshots of both tools when it is running at 60 fps with timings that are close enough:
[Screenshot: GPU Debugger timing] [Screenshot: Profiler timing]

And for the part of the view with drops:
[Screenshot: GPU Debugger timing] [Screenshot: Profiler timing]

I can’t suggest anything, I’m afraid. Instruments (the Profiler) would be the most accurate.
Maybe there’s something in the WWDC videos: