I calculated the LOD using this formula: 3 × SD of the blank / slope. The lowest concentration of my calibration curve is 0.1, but the theoretical LOD comes out at 0.4, which is higher than 0.1. What does this mean? Is the result reliable or acceptable?
There are several ways to determine the LOD. Some method validation guidances describe the LOD as you mentioned, but this way of calculating it depends on the slope of the calibration equation, which changes with the concentration range, so it may fail to give a good estimate of the LOD if you are working at high concentrations. Other method validation guidances define the LOD as the concentration at which the signal-to-noise ratio is 3. Considering this and what you have described, the most suitable way to determine the LOD of your method is to find the concentration at which the signal-to-noise ratio for each analyte is 3.
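To make the two approaches concrete, here is a minimal sketch comparing the blank-SD estimate with a signal-to-noise check. All numbers (blank responses, slope, standard signals) are hypothetical placeholders, not data from the question, and noise is simplified to the SD of the blank responses:

```python
import statistics

# Hypothetical replicate blank responses (signal units) and calibration slope
# (signal per concentration unit); replace with your own measurements.
blank_responses = [0.012, 0.015, 0.011, 0.014, 0.013]
slope = 0.10  # from the calibration curve y = slope * C + intercept

# Blank-based estimate: LOD = 3 * SD(blank) / slope
sd_blank = statistics.stdev(blank_responses)
lod_blank = 3 * sd_blank / slope
print(f"LOD from blank SD: {lod_blank:.3f} concentration units")

# Signal-to-noise estimate: lowest standard whose S/N >= 3,
# taking the blank SD as the noise estimate (a simplification).
standards = {0.1: 0.028, 0.2: 0.041, 0.4: 0.072}  # conc -> mean signal
for conc, signal in sorted(standards.items()):
    s_to_n = signal / sd_blank
    print(f"C = {conc}: S/N = {s_to_n:.1f}")
    if s_to_n >= 3:
        print(f"Approximate LOD by the S/N >= 3 criterion: {conc}")
        break
```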
Most likely you 'screwed up' the calculation. The statistical (linear) calculation of the LOD (and LOQ) is based on the standard deviation (SD), which means the experiment should be repeated carefully at least 3 times and the average used.
As the results stand, with an LOD higher than the lowest calibration point, any sample result reported in that region is meaningless!
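As a small sketch of the point about repetition, the per-run standard deviations can be averaged before applying the formula; the run data and slope below are hypothetical:

```python
import statistics

# Hypothetical blank responses from three independent runs; replace with your data.
runs = [
    [0.012, 0.015, 0.011, 0.014],
    [0.013, 0.016, 0.012, 0.015],
    [0.011, 0.014, 0.013, 0.012],
]
slope = 0.10  # assumed calibration slope, as in the sketch above

# Average the per-run standard deviations, then apply LOD = 3 * SD / slope.
mean_sd = statistics.mean(statistics.stdev(run) for run in runs)
print(f"Averaged SD: {mean_sd:.4f}")
print(f"LOD: {3 * mean_sd / slope:.3f} concentration units")
```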
The LOD, as the limit of detection, corresponds to 3.3 times the standard deviation divided by the slope at that point of the curve (LOD = 3.3σ/slope), and that is the theoretical background of the formula. Knowing that, we can likewise define the quantitation limit as LOQ = 10σ/slope. As for what the standard deviation is in this context: in an analyzer, for instance, it corresponds to the standard deviation of the measured response.
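For a quick check of how the two formulas relate, here is a minimal sketch; the σ and slope values are hypothetical placeholders, not values from the question:

```python
sigma = 0.012   # hypothetical standard deviation of the response
slope = 0.10    # hypothetical calibration slope

lod = 3.3 * sigma / slope   # limit of detection
loq = 10 * sigma / slope    # limit of quantitation

print(f"LOD = {lod:.3f}, LOQ = {loq:.3f}")
# Under these definitions the LOQ is always about 3x the LOD (10 / 3.3 ≈ 3).
```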