I am using an L8 (2-level, 4-factor) DOE. I have completed the experiments and am now trying to sift through the data and start the analysis. I ran these experiments in a strictly production-style environment, so noise factors could not be controlled (they were treated as random). There are 5 output variables: 2 come directly from the process data and the other 3 are measurements. One of the measurement output variables has about 200 to 350 data points per run, and the process data variables contain about 50-60 data points per run.

My question is: how do you handle all this data so it can go into the Taguchi array? Should the data points be compressed and summarized into a mean value per run, should a random selection of 10-20 data points be used per run, or should just 2 points (min and max) be used per run? I have hit a roadblock because I am not sure what to do next. Minitab also has a limit on the number of data points that can be analyzed for a Taguchi DOE. Any insight or help would be greatly appreciated. Thanks!
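To illustrate the first option I mean, here is a rough sketch of compressing each run's raw readings into one summary row (mean, standard deviation, and a nominal-is-best S/N ratio). This is just an assumed Python/pandas example with made-up column names and fake data, not my actual dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical raw data: one row per individual reading,
# with a "run" label (1-8) and the measured "response".
rng = np.random.default_rng(1)
raw_df = pd.DataFrame({
    "run": np.repeat(np.arange(1, 9), 50),        # 8 runs x 50 readings each
    "response": rng.normal(10.0, 1.0, 8 * 50),    # fake measurements
})

def sn_nominal_best(x):
    """Taguchi nominal-is-best S/N ratio: 10*log10(mean^2 / variance)."""
    return 10 * np.log10(x.mean() ** 2 / x.var(ddof=1))

# Collapse each run to a single summary row that can sit next to the L8 array.
summary = raw_df.groupby("run")["response"].agg(
    mean="mean",
    std="std",
    sn_ratio=sn_nominal_best,
)
print(summary)
```

So the choice seems to be between feeding Minitab one summarized value (or mean plus S/N) per run like this, versus some subset of the raw points.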
