I have taken a series of 3D confocal images in which one of the channels (green) shows almost-uniform coverage in certain z layers, usually near the very top or bottom of the stack.
This could be due to bad focus, natural separation by density, or something else.
This is causing issues with surface creation, because these layers are turned into surfaces as well. Thresholding by volume doesn't always work, since the algorithm sometimes detects holes within these layers that split them into several smaller surfaces.
Is there any feature to pre-process these images so that, when an almost-uniform brightness is detected in a specific layer, a mean can be calculated and subtracted from that layer? This should effectively filter out these artefacts. A rough sketch of the kind of correction I have in mind is below.
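To make the idea concrete, here is a minimal sketch (not an existing feature of any particular software, and the threshold value is an arbitrary assumption) that flags near-uniform z layers by their coefficient of variation and subtracts the slice mean from just those layers, assuming the stack is a NumPy array ordered (z, y, x):

```python
import numpy as np

def suppress_uniform_slices(stack, cv_threshold=0.2):
    """Subtract the slice mean from z layers whose intensity is almost uniform.

    A slice is flagged when its coefficient of variation (std / mean)
    falls below `cv_threshold`, i.e. its brightness is nearly constant.
    """
    corrected = stack.astype(np.float32).copy()
    for z in range(corrected.shape[0]):
        layer = corrected[z]
        mean = layer.mean()
        if mean > 0 and layer.std() / mean < cv_threshold:
            # Nearly uniform layer: remove its mean so it no longer
            # contributes to surface creation, clipping at zero.
            corrected[z] = np.clip(layer - mean, 0, None)
    return corrected
```

Something along these lines, applied to the green channel before surface creation, is what I am hoping already exists as a built-in pre-processing step.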