I run experiments with optimization models and heuristics across thousands of scenarios, often producing many .txt or .csv outputs. I am looking for concrete workflows and tools to keep this manageable and reproducible.

What do you use to

  • organize runs and configurations
  • track metrics, random seeds, and code versions
  • parse and store large result sets at scale (for example SQLite or Parquet)
  • parallelize on a laptop or on a cluster
  • generate final tables and plots for papers and teaching
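To make the first two points concrete, here is the minimal pattern I currently use: each run gets its own directory containing the config, the seed, and the git commit. The helper name, directory layout, and `meta.json` schema are my own conventions, not a standard.

```python
import hashlib
import json
import subprocess
import time
from pathlib import Path

def start_run(config: dict, base_dir: str = "runs") -> Path:
    """Create a self-describing run directory holding config, seed, and code version."""
    # Deterministic id derived from the config, so identical configs are easy to spot.
    cfg_json = json.dumps(config, sort_keys=True)
    run_id = hashlib.sha1(cfg_json.encode()).hexdigest()[:10]
    run_dir = Path(base_dir) / f"{time.strftime('%Y%m%d-%H%M%S')}_{run_id}"
    run_dir.mkdir(parents=True, exist_ok=True)

    # Record the exact code version if we are inside a git checkout.
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True, stderr=subprocess.DEVNULL
        ).strip()
    except Exception:
        commit = "unknown"

    meta = {"config": config, "seed": config.get("seed"), "git_commit": commit}
    (run_dir / "meta.json").write_text(json.dumps(meta, indent=2))
    return run_dir
```

All solver outputs then go into that directory, so a results file can always be traced back to its config, seed, and commit.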

Stack I am considering: GAMSPy for modeling inside Python, and pandas or polars for data handling. A few months ago I tried Dr. Tim Varelmann’s GAMSPy course and found the single-environment approach promising for both research and teaching.
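For the "store large result sets" point, this is the stdlib-only baseline I use today to fold many per-run CSV files into one queryable SQLite database. It assumes every run directory holds a `results.csv` with the columns `scenario,objective,runtime_s`; the function name and schema are mine, not from any library.

```python
import csv
import sqlite3
from pathlib import Path

def load_results(runs_dir: str, db_path: str = "results.db") -> int:
    """Load every */results.csv under runs_dir into one SQLite table.

    Returns the number of rows loaded. The run directory name is kept
    as a provenance column so each row can be traced to its run.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS results "
        "(run TEXT, scenario TEXT, objective REAL, runtime_s REAL)"
    )
    n = 0
    for path in sorted(Path(runs_dir).glob("*/results.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                con.execute(
                    "INSERT INTO results VALUES (?, ?, ?, ?)",
                    (path.parent.name, row["scenario"],
                     float(row["objective"]), float(row["runtime_s"])),
                )
                n += 1
    con.commit()
    con.close()
    return n
```

From there, pandas or polars can read the table directly, and swapping SQLite for Parquet is a small change if columnar storage scales better for you.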

Course page: whop.com/bluebird-optimization

I would appreciate patterns that worked for you and common pitfalls to avoid. Example repositories are very welcome.
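As a baseline for the parallelization point, this is the plain `concurrent.futures` pattern I use on a laptop; `solve_scenario` is a hypothetical stand-in for a real model solve. I am curious whether people move beyond this for cluster-scale work.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_scenario(scenario_id: int) -> dict:
    """Stand-in for a real solve; replace the body with your GAMSPy call."""
    return {"scenario": scenario_id, "objective": float(scenario_id) ** 2}

def run_all(scenario_ids, max_workers: int = 4) -> list:
    """Fan scenarios out across workers; results come back in input order."""
    # ThreadPoolExecutor suits runs that mostly wait on an external solver
    # process; for CPU-bound Python work, ProcessPoolExecutor is the
    # drop-in replacement with the same map interface.
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(solve_scenario, scenario_ids))
```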
