Do we necessarily need constraints? Or can we just look for a solution that minimizes f_0 over all possible x?
Confused about the equality constraints. Should it be g(x) <= c and -g(x) <= -c?
As noted above, I think it should be -g(x) <= -c, so that g(x) >= c and by antisymmetry g(x) = c.
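Spelling out the argument in the comment above (same g, x, c as in the note):

```latex
% Equality as a pair of standard-form inequalities:
\[
  g(x) \le c \quad\text{and}\quad -g(x) \le -c
  \;\Longleftrightarrow\; c \le g(x) \le c
  \;\Longleftrightarrow\; g(x) = c.
\]
```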
Also, you said most continuous optimization problems can be formulated in this form. What are some examples that cannot be? One I can think of is optimization over a domain made up of two disjoint sets, but is there a more nontrivial answer?
Does this problem have a dual, as in the case of linear programming?
Agreed that it should be -g(x) <= -c; I came here to ask the same thing.
I originally left a comment here about how standard form doesn't seem capable of handling "or" constraints, like "minimize f(x) subject to g(x) <= a OR h(x) <= b". But after thinking about it, I realized that you might be able to do this with some clever functions; for example, let g'(x) = 1 when g(x) <= a and 0 otherwise, and similarly for h'(x), and then you could have a constraint that -g'(x) - h'(x) <= -1. This feels kind of hacky, but I think it checks out. Are weird tricks like this common for transforming problems into standard form?
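A quick sanity check of the indicator trick described in the comment above (the functions g, h and the bounds a, b here are made-up examples, not from the note):

```python
# "g(x) <= a OR h(x) <= b" is encoded as the single standard-form
# constraint -g'(x) - h'(x) <= -1, where g'(x) = 1 if g(x) <= a else 0,
# and h' is defined the same way from h and b.

def g(x):
    return x ** 2        # example constraint function (assumption)

def h(x):
    return x + 3         # example constraint function (assumption)

a, b = 4.0, 0.0          # example bounds (assumption)

def g_prime(x):
    return 1 if g(x) <= a else 0

def h_prime(x):
    return 1 if h(x) <= b else 0

def or_constraint_holds(x):
    # Standard-form encoding: -g'(x) - h'(x) <= -1,
    # i.e., at least one of the two indicators is 1.
    return -g_prime(x) - h_prime(x) <= -1

print(or_constraint_holds(1))    # True: g(1) = 1 <= 4
print(or_constraint_holds(-5))   # True: g fails (25 > 4) but h(-5) = -2 <= 0
print(or_constraint_holds(10))   # False: neither inequality holds
```

Note the catch: g' and h' are discontinuous step functions, so even if g and h are smooth, the encoded problem is no longer a nice continuous optimization problem for most solvers.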
What are some choices for formulating discrete optimization problems?
What does "optimization" really mean? Minimizing the cost of certain functions? What benefits does cost minimization bring us? Faster computation? Nicer graphics quality?
This looks like linear programming, just with the functions not having to be linear. What are the tradeoffs of this? I assume it's slower, but can more problems be represented? More accurate solutions?
Why would we want equality as opposed to just optimizing it?
Why do we use two inequalities to represent equality conditions? Is it because most solvers only accept constraints expressed as inequalities? (If so, that seems somewhat inefficient to me...)
Is it supposed to say "g(x) >= c" instead of what it currently says?
Is there any benefit to considering equality constraints in their own right? For example, maybe using two inequality constraints could be fragile with floating point math.
In practice, do we need to add a small epsilon for the equality constraints if we want to solve an optimization problem on a computer?
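On the floating-point concerns in the last two comments: a tolerance is indeed the usual fix, since exact equality of floats rarely holds. A minimal illustration (the function g and the value c are made-up for the example):

```python
# Why equality constraints are checked with a tolerance in floating point.

c = 0.3

def g(x):
    return 0.1 + x           # example constraint function (assumption)

x = 0.2
print(g(x) == c)             # False: 0.1 + 0.2 != 0.3 in binary floating point
eps = 1e-9
print(abs(g(x) - c) <= eps)  # True: feasible within tolerance eps
```

In practice, solvers bake such a feasibility tolerance into their stopping criteria, so you usually set an option (or accept a default) rather than hand-edit the constraints.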