@ Michael Patriksson Thank you. But what should I do if we cannot impose this constraint? In fact, alpha, beta and gamma can be anything; we cannot restrict them here.
Then you are doomed to reaching only a stationary point - which may be a local maximum, a local minimum, or neither - or you need to start learning about global optimization. And by that I mean reading the books of Hoang Tuy and his colleagues - NOT using the inferior metaheuristics that are flooding this site.
Or - and this is an easy thing to do - you could start your method from MANY different points in the feasible space and let it run. If you do this 1000 times or so, you might be lucky and the best point among those 1000 is pretty good! :-)
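A rough sketch of this multistart idea in Python - the quartic objective, the coefficient values, and the search box are my own placeholders, not the poster's actual function:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder nonconvex objective standing in for the poster's function;
# alpha, beta, gamma are free coefficients, as in the question.
alpha, beta, gamma = 1.0, -2.0, 0.5

def objective(x):
    return alpha * x[0]**4 + beta * x[0]**2 + gamma * x[0]

rng = np.random.default_rng(0)
best = None
for _ in range(1000):
    x0 = rng.uniform(-10.0, 10.0, size=1)     # random start in the feasible box
    res = minimize(objective, x0, method="BFGS")  # local descent from x0
    if best is None or res.fun < best.fun:
        best = res                             # keep the best local solution

print(best.x, best.fun)
```

Each run only finds some stationary point, but the best of the 1000 local solutions is often a good candidate for the global minimum.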
You could in fact also use a method devised by a former supervisor of mine - the partial linearization method of Migdalas. If your objective has a convex term and a non-convex term, you linearize [1st-order Taylor expansion at the current iterate] only the non-convex term, solve the resulting subproblem - which will always be convex! - and perform a line search along the direction from the current iterate to the subproblem solution. By starting the search from several different points, perhaps you are lucky.
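A minimal sketch of that scheme, under my own toy split of the objective (the box, the convex term x^2, and the nonconvex term sin(3x) are assumptions for illustration, not Migdalas's actual test problems):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy split on the box [-2, 2]:  F(x) = convex(x) + nonconvex(x)
lo, hi = -2.0, 2.0

def convex(x):         return x**2              # convex term, kept exact
def nonconvex(x):      return np.sin(3.0 * x)   # nonconvex term, linearized
def grad_nonconvex(x): return 3.0 * np.cos(3.0 * x)
def F(x):              return convex(x) + nonconvex(x)

def partial_linearization(x, iters=50):
    for _ in range(iters):
        g = grad_nonconvex(x)
        # Convex subproblem: min_y convex(y) + g * y over the box.
        # For convex(y) = y^2 the minimizer is y = -g/2, clipped to the box.
        y = float(np.clip(-g / 2.0, lo, hi))
        # Line search along the direction from x to the subproblem solution.
        t = minimize_scalar(lambda t: F(x + t * (y - x)),
                            bounds=(0.0, 1.0), method="bounded").x
        x_new = x + t * (y - x)
        if abs(x_new - x) < 1e-10:   # fixed point = stationary point of F
            break
        x = x_new
    return x

# Several starting points, as suggested above; keep the best result.
best = min((partial_linearization(x0) for x0 in (-1.5, -0.5, 0.5, 1.5)), key=F)
print(best, F(best))
```

Each run converges to a stationary point of F; restarting from several points and keeping the best is what gives you a chance at the global minimum.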
Your minimum will exist if your combined function y is not single-valued in x for some value of y. Whether this occurs depends on your functions f and g and on your coefficients alpha, beta, and gamma.
As a general approach, I suggest you graph the function and check whether it is convex for any y that maps to more than a single value in x.
You can also look for the minimum by differentiating, setting the derivative to zero, and keeping only the real solutions.
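For instance, if the combined objective happens to be a polynomial (a made-up quartic here, not the poster's function), the "differentiate and keep the real roots" recipe can be carried out directly:

```python
import numpy as np

# Made-up quartic objective (not the poster's function):
#   F(x) = x^4 - 3x^2 + x   =>   F'(x) = 4x^3 - 6x + 1
F = np.polynomial.Polynomial([0.0, 1.0, -3.0, 0.0, 1.0])
dF = F.deriv()

roots = dF.roots()                              # may contain complex roots
real = roots[np.isclose(roots.imag, 0.0)].real  # keep real stationary points
best = min(real, key=F)                         # compare objective values
print(best, F(best))
```

Note that the stationary points include maxima and saddle points, so you still have to compare objective values (done here with `min(..., key=F)`) or check the second derivative.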