I have over 5 years of experience developing predictive models, having worked as a researcher, statistical analyst, and data scientist. One thing I have observed across the big data and predictive modeling landscape is that much of the emphasis is placed on data quality and quantity, the experience and expertise of the modeler, or the kind of system used to build, validate, test, and continuously monitor a model's quality and performance over time. With that said, I would like to hear what others here on ResearchGate think: what are some of the most challenging tasks in building statistical or predictive models, and what strategies did you employ to address those challenges? What tradeoffs did you have to make, and how would you approach a similar situation in the future?

Responses to this inquiry will be used for personal and professional growth.
