In this talk, we report a number of curious cases in the numerical solution of specific geometric variational problems [1, 2] in which (i) the solution surface of the continuous problem is invariant under a symmetry group G, (ii) the finite-dimensional space of discrete solutions accommodates surfaces invariant under a finite subgroup H of G, (iii) yet the optimal discrete solution does not possess the H-symmetry one would expect. At the same time, many numerical optimization algorithms (gradient descent, accelerated gradient descent, BFGS, etc. for unconstrained problems, and their variants for constrained problems) are invariant under orthogonal changes of coordinates. Since the symmetry acts orthogonally, this invariance implies that an iterate which starts in the symmetric subspace stays there. In the situations above, or when we solve a geometric optimization problem from an initial guess with the wrong symmetry, this innocent-looking invariance therefore means that the numerical method can get stuck at a suboptimal saddle point. In the latter case, the saddle point is likely far from optimal or desirable. But in the former case, by an approximation result to be presented, the symmetric yet suboptimal saddle point can actually approximate the continuous solution better than the discrete minimizer.
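The mechanism can be illustrated on a hypothetical two-dimensional toy problem (not taken from the talk, purely for intuition): an objective invariant under the swap (x, y) → (y, x), whose minimizers are asymmetric while the symmetric axis x = y contains only a saddle. Plain gradient descent, being invariant under this orthogonal symmetry, never leaves the symmetric axis when started on it.

```python
import numpy as np

# Toy objective invariant under the swap symmetry (x, y) -> (y, x).
# In rotated coordinates u = (x+y)/sqrt(2), v = (x-y)/sqrt(2) it reads
# f = u^2 + (v^2 - 1)^2: the minimizers sit at v = +/-1 (asymmetric, x != y),
# while the symmetric axis v = 0 contains only a saddle at the origin (f = 1).
def f(p):
    x, y = p
    u, v = (x + y) / np.sqrt(2), (x - y) / np.sqrt(2)
    return u**2 + (v**2 - 1) ** 2

def grad_f(p):
    x, y = p
    u, v = (x + y) / np.sqrt(2), (x - y) / np.sqrt(2)
    du, dv = 2 * u, 4 * v * (v**2 - 1)
    # Rotate the gradient back to (x, y) coordinates.
    return np.array([du + dv, du - dv]) / np.sqrt(2)

def gradient_descent(p0, step=0.05, iters=500):
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        p = p - step * grad_f(p)
    return p

# A symmetric initial guess (x = y) stays symmetric forever and converges
# to the saddle at the origin, where f = 1.
p_sym = gradient_descent([0.7, 0.7])
# A slightly asymmetric initial guess escapes to a true minimizer, f = 0.
p_asym = gradient_descent([0.7, 0.6])
print(p_sym, f(p_sym))    # near [0, 0], objective value near 1
print(p_asym, f(p_asym))  # near a minimizer with x - y = sqrt(2), value near 0
```

The same qualitative behavior occurs for BFGS and other orthogonally invariant methods: the iteration cannot break a symmetry that both the objective and the initial guess share.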