Can you clarify your question a bit more? If you have a combinatorial optimization problem, you must already have your instances defined: by definition, a problem is composed of problem instances (in the algorithmic sense).
I am assuming you mean that the data to form an instance isn't available, and you want to see how to construct it? Worst case, you can simply make your own data that follows from the problem definition. Usually the best thing you can do to further understand the problem (if you have an algorithm in place that you are testing) is to find a worst-case instance, one that makes the algorithm's behaviour as "bad" as possible (taking the longest) relative to all your other instances and the input size.
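If no analysis is available, one cheap way to hunt for such an instance is a random search over generated inputs, keeping whichever one your implementation handles slowest. A minimal sketch in Python (`solver` and `make_instance` are hypothetical placeholders for your own code):

```python
import random
import time

def worst_of(solver, make_instance, trials=200, seed=0):
    """Randomly search for the instance on which `solver` is empirically slowest."""
    rng = random.Random(seed)
    worst_instance, worst_time = None, 0.0
    for _ in range(trials):
        inst = make_instance(rng)           # your own instance generator
        start = time.perf_counter()
        solver(inst)                        # your own (already verified) algorithm
        elapsed = time.perf_counter() - start
        if elapsed > worst_time:
            worst_instance, worst_time = inst, elapsed
    return worst_instance, worst_time
```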
Or is this more a case where you lack the data to formulate LPs or MIPs for your problem (to employ something like Karmarkar's Projection Algorithm or Simplex) in testing?
If benchmarks don't exist for it yet, I strongly recommend looking at the properties of the problem. Can it be reduced to other problems? Also, try to find "worst-case" candidate instances that show how badly experiments can run. If you are studying a problem, you should (I know this is a little subjective) have a thorough understanding of what it is and what has already been done on it. I recommend taking a couple of steps back and doing a bit of digging in the literature: sometimes researchers already know of a case that throws a nasty wrench into a study but don't communicate it much, precisely because it has already been pointed out. Even without this, you can simply take pen and paper, construct some cases, study the problem's properties, and see if anything sticks out that is worth crunching calculations on. For example, many optimization problems with graph-theoretic structure hit massive bottlenecks in tests when cycles are involved (e.g., TSP).
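To make the cycle point concrete, here is a minimal brute-force sketch (TSP standing in purely as an illustration): enumerating the tours of the cycle is exactly what blows up, at (n-1)! candidates.

```python
import itertools
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting `points` in `order`."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(points):
    """Exact TSP by enumerating all (n-1)! tours; the cycle structure is the bottleneck."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),   # fix city 0 to break rotational symmetry
               key=lambda rest: tour_length(points, (0, *rest)))
    return (0, *best)

points = [(random.random(), random.random()) for _ in range(10)]
print(brute_force_tsp(points))   # already painful past ~12 cities
```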
So I understand that I can find a well-known problem that is a reduction of the problem I want to study, then use the benchmarks of the reduced problem as benchmarks for my problem? Am I right?
Can I simply generate my own instances or benchmarks (randomly, for example)?
Yes, but what exactly are you testing? An algorithm you have proven to solve your optimization problem? Using a standard technique?
Of course you can do that. All a benchmark really is, is a large set of instances that the algorithm should be able to solve; established benchmarks often trip algorithms up precisely because the data was carefully picked to ensure most cases are covered, especially the kinds I was mentioning.
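As a concrete illustration (a minimal Python sketch, using 0/1 knapsack purely as a stand-in for whatever your problem is), a seeded random generator is all a home-made, reproducible benchmark needs:

```python
import random

def random_knapsack_instance(n_items, max_weight=100, max_value=100, seed=None):
    """One random 0/1 knapsack instance: (weights, values, capacity)."""
    rng = random.Random(seed)
    weights = [rng.randint(1, max_weight) for _ in range(n_items)]
    values = [rng.randint(1, max_value) for _ in range(n_items)]
    capacity = sum(weights) // 2   # half the total weight keeps the instance non-trivial
    return weights, values, capacity

# A reproducible home-made benchmark is just a fixed list of seeded instances.
benchmark = [random_knapsack_instance(50, seed=s) for s in range(30)]
```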
Some do that. Two things are usually worth testing empirically: the "average" case and the "worst" case (if you are doing testing rather than straight analysis). Try some "random" data, and it is also essential to try the weird cases that make the algorithm perform badly (on your machine). I'd only do this once you (or somebody else) have actually shown that the algorithm is mathematically correct.
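A hedged sketch of measuring both at once (assuming the `benchmark` list from the sketch above; `my_knapsack_solver` is a hypothetical placeholder for your own, already-verified code):

```python
import statistics
import time

def profile(solver, instances):
    """Time `solver` on each instance; report the empirical mean and worst runtime."""
    runtimes = []
    for weights, values, capacity in instances:
        start = time.perf_counter()
        solver(weights, values, capacity)   # your own, already-verified algorithm
        runtimes.append(time.perf_counter() - start)
    return statistics.mean(runtimes), max(runtimes)

# avg_t, worst_t = profile(my_knapsack_solver, benchmark)
```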
Please have a look at the discussion "How to evaluate the quality of metaheuristic algorithms without any benchmarks?" at https://www.researchgate.net/post/How_to_evaluate_the_quality_of_metaheuristic_algorithms_without_any_benchmarks
and in particular at http://www.fedulov.ge as an application.