My co-author and I made the first cut at the MIT Sports Statistics & Analytics Conference, and we were looking at MLB pitcher injuries and Pitch f/x data. Needless to say, the data maintenance on the pitch-by-pitch data set (nearly 4 million observations) was intensive, and took up much of the last year. We got the analysis done right before the deadline and were able to submit our paper.
Was it everything that we had wanted to do? Of course not. We didn't get to the graphical analysis or the predictive modeling that we had initially planned. But we were able to answer the primary research questions we had proposed and demonstrate proof-of-concept for using Pitch f/x data to monitor injury-propensity indicators. Of course we had to say, "more refinement and research is needed in this area." That just gives us something to do for JSM 2014.
Now to my question, which concerns the anomalies we found in the data.
About 10% of the data (360K observations) came from Seattle and had only pitch placement data. Pitch types, speeds, and all the other "good stuff" were missing. We were able to use those observations for analyses involving pitch counts, but not for anything more involved than that.
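To make concrete how we handled the split, here's a rough pandas sketch. The column names (`pitch_type`, `start_speed`, `px`) are my own placeholders, not necessarily the actual field names in the Pitch f/x feed, and the toy data is purely illustrative:

```python
import pandas as pd

# Toy stand-in for the pitch-by-pitch data set. Placement (px) is
# recorded everywhere; pitch type and speed are missing for the
# Seattle-style observations.
pitches = pd.DataFrame({
    "stadium":     ["Seattle", "Boston", "Seattle", "Chicago"],
    "pitch_type":  [None, "FF", None, "SL"],
    "start_speed": [None, 94.1, None, 86.3],
    "px":          [0.2, -0.5, 1.1, 0.0],
})

# Placement-only rows still support pitch-count analyses;
# the fully populated rows support everything else.
placement_only = pitches[pitches["pitch_type"].isna()]
full_detail    = pitches[pitches["pitch_type"].notna()]
```

Keeping the placement-only rows in a separate frame, rather than dropping them, is what let us still use them for the pitch-count work.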
The really anomalous observations involved "low volume" pitch types. Apparently the neural net algorithms used since 2007-08 to classify pitch types have changed and adapted over time. Some pitch-type labels appear only a handful of times (e.g., 26) and are never seen again, while others are legitimate but rare pitch types (e.g., the Eephus, a very low-speed pitch, 326 times). These observations have a disproportionate influence on the outcomes, much like the influence an outlier has on a regression equation.
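The way I've been surfacing these for review is a simple frequency count. A rough sketch, where the label codes and the cutoff of 100 are arbitrary illustrations rather than recommendations:

```python
import pandas as pd

# Toy label stream: two common pitch types plus two rare labels.
labels = pd.Series(["FF"] * 500 + ["SL"] * 300 + ["EP"] * 3 + ["XX"] * 1)
counts = labels.value_counts()

# Flag anything under an (arbitrary) cutoff for manual review.
cutoff = 100
rare_labels = counts[counts < cutoff].index.tolist()
# rare_labels holds the candidates: some may be transient artifacts of
# the classifier, others (like the Eephus) legitimate but rare pitches,
# so the list is a review queue, not an automatic drop list.
```

The key point for me is that frequency alone can't distinguish a classifier artifact from a real rare pitch, which is exactly why I'm asking how others would clean this.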
Here's my problem. If you were going to "clean" the data, what would you do? How would you go about it? I know it's common practice in a lot of published studies (Baseball Prospectus et al.) to limit subjects to players with a threshold number of innings. But if you're going to look at injuries, I would think you would only apply that threshold to your control group.
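To spell out what I mean by applying the threshold only to the control group, here's a sketch. Column names and the cutoff are assumptions for illustration (and I'm using pitch counts rather than innings just to keep it simple):

```python
import pandas as pd

# Toy pitcher-level summary: two injured pitchers, two controls.
pitchers = pd.DataFrame({
    "pitcher_id":  [1, 2, 3, 4],
    "injured":     [True, False, True, False],
    "pitch_count": [40, 2500, 3000, 35],
})

# Keep every injured pitcher regardless of workload, but require
# controls to clear a minimum pitch count.
min_pitches = 100
keep = pitchers["injured"] | (pitchers["pitch_count"] >= min_pitches)
analysis_set = pitchers[keep]
# Injured pitchers 1 and 3 survive even with low workloads;
# control pitcher 4 is dropped for falling below the threshold.
```

Whether that asymmetric rule introduces its own bias is part of what I'm hoping to hear opinions on.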
Since this was our first pass at the data, I was disinclined to do anything but use the whole population, be conservative in my findings, and explain the peculiarities of the dataset in the discussion. For further work, though, I know I'm going to have to do things like stratify outcomes by injury type, control for pitcher/batter handedness, etc.
Any suggestions you've got, based on working with messy, real-life data, would be appreciated.