Dithering is the key. Banding is not caused by cube resolution (usually 17^3 in HW). Banding happens when you translate a 16-bit correction to an 8-bit or even 10-bit channel => truncation. This would happen even if you had a 256^3 or 1024^3 cube LUT. Dithering eliminates that issue. Dither can be done in software (like MadVR does) or in hardware, like AMD's dither for the 1D LUT in GPU calibration, or inside the monitor. I don't know if Resolve uses software dither on its LUT3D output, ask them.
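A rough sketch of the idea (illustrative only, not Resolve's or any vendor's actual pipeline): quantize a smooth high-precision ramp to 8 bits by plain truncation vs. with ~1 LSB of uniform random noise added before the floor. Truncation leaves a staircase (bands); dither turns the error into noise whose local average tracks the true signal, which the eye integrates into a smooth gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 255.0, 4096)              # ideal signal on an 8-bit scale

truncated = np.floor(x)                        # no dither: staircase / bands
dithered = np.clip(np.floor(x + rng.random(x.size)), 0, 255)  # +1 LSB dither

# Compare block averages (roughly what the eye perceives) against the truth:
# truncation is biased (~0.5 LSB per band), dithered noise averages out.
blocks = 64
true_avg = x.reshape(blocks, -1).mean(axis=1)
err_trunc = np.abs(truncated.reshape(blocks, -1).mean(axis=1) - true_avg).mean()
err_dith = np.abs(dithered.reshape(blocks, -1).mean(axis=1) - true_avg).mean()

print(err_trunc, err_dith)    # dithered local average is far closer to truth
```

Real pipelines typically use ordered/spatial or TPDF noise rather than plain uniform random, but the mechanism is the same.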
Cube resolution (the number of nodes) limits which errors can or cannot be corrected, because an error can only be corrected if it is measured, and only the nodes get measured. A 17^3 cube is about 5000 patches to read. If there is an issue between one node's RGB value and its neighbour's, but not at the nodes themselves, it may go unnoticed. But that is not related to banding at all.
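A hypothetical 1-D analogue of that node-sampling limit: profiling only probes the display at the grid inputs, so a response defect confined strictly between two neighbouring nodes is never measured and the profiling pass reports zero error there. (17^3 = 4913 patches, the "about 5000" figure above.)

```python
nodes = 17
patches = nodes ** 3                             # 4913 measurements

grid = [i / (nodes - 1) for i in range(nodes)]   # probed inputs, 0.0 .. 1.0

def display(x):
    # ideal response plus a narrow defect centred between nodes 8 and 9
    defect = 0.05 if 8 / 16 < x < 9 / 16 else 0.0
    return x + defect

measured = [display(g) - g for g in grid]        # defect falls between nodes
print(patches, max(abs(e) for e in measured))    # 4913 0.0 — defect unseen
```

The defect is real (e.g. `display(0.53)` is off by 0.05) but invisible to the 17-node characterization, so the interpolated LUT passes straight through it.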
Now I understand. Thanks for the great explanation. Out of curiosity, I will try asking Blackmagic whether Resolve uses some kind of dithering (especially when a DeckLink card is used). BTW, what type of dithering would you use for a GPU 1D LUT in general? Or is this something that needs to be worked out for every specific combination of display and GPU?