I began writing MathGeoLib some time around the summer of 2011. The library has come a long way since, and it now contains a range of common geometry types and operations on them. One recurring issue has been computing minimal enclosing shapes for 3D meshes for culling and collision detection. Code for generating minimum enclosing axis-aligned bounding boxes (AABB) was included in MathGeoLib right from the start, and finding minimum enclosing spheres was added almost immediately, in November 2011. Convex hull computation was added in 2012. These covered three very common geometric collision shapes, but for oriented bounding boxes (OBB), another very common collision primitive, MathGeoLib had nothing to offer for a long time. This was because there did not really exist any good algorithms for finding minimum enclosing OBBs; brute force and principal component analysis (PCA) were the most common approaches. That meant choosing between getting a practically optimal result extremely slowly, or a possibly arbitrarily bad result very quickly. A number of intelligent numerical optimization algorithms have been devised to address this, but since they cannot give any guarantees either, and I could not find any reproducible implementations, they did not feel right.

For a long time, the lack of good minimum OBB computation code in MathGeoLib kept bugging me in the back of my mind. About half a year ago, during the Christmas holidays, I decided to start working on a method of my own: a streamlined SIMD approach that would sample lots of orientations very quickly, using the commonly used brute force rotating calipers scheme as a basis. I had a seed of an idea that would recursively subdivide R^3, with an intelligent stopping criterion so that, with good probability, it would not waste time searching orientations that could no longer yield an improvement. The goal was to see how far this could be taken in two weeks, and whether it would fix the slowness of the crude brute force method while still keeping the near-optimal results.

That method did not work out anywhere near well enough to my liking, and somehow the pursuit led me to look at the convex hull of the input, since it can be used as a preprocessing step to trim away redundant interior vertices and reduce the data size. The biggest frustration was that my fancy method still brute forced through SO(3) / R^3, which felt too wasteful, since it could not react at all to the complexity of the actual shape of the input model. At some point I started thinking about what the dumbest brute force method over the convex hull would look like: one that did not iterate over the possible orientation space at all, but instead iterated directly over the vertex-edge-face configuration space of the input. Such a method would have the benefit of capturing the actual complexity of the model, and a model with only a few vertices/edges/faces (recall that by Euler's formula, these counts are linearly proportional to one another) would be very fast to process. Initially it was not at all clear how to do this, or even whether it was possible; whether such a scheme would have any relation to a tight fit, or whether the result could be unboundedly bad (as discrete PCA can be).

Another week or so later, while coding on a train on my way to Tampere, I had a first working version of the new brute force method running. It was probably an O(n^4) solution, but something immediately struck me as odd: whatever input model I fed it, out came the optimal minimum volume OBB. I don't think I have ever held so much disbelief towards my own program code as I did back then. Excited by the discovery, but at the same time skeptical and doubtful, I kept running more and more tests, thinking that every subsequent test run could be the one to say "disproved by counterexample". To date, I have not found one.

Then add a month and a half of research, implementation, and optimization to understand (and break!) the behavior of the new algorithm, and what started as a brute force search over a convex hull has become a somewhat more sophisticated graph search over the edges of the convex hull. The running time of the algorithm on suitably uniformly random input data has improved to an O(n * sqrt(n) * log^2(n)) time complexity. Finally, MathGeoLib has the algorithm it was always intended to have for computing minimum volume enclosing OBBs! The source code is available in the MathGeoLib Github repository. To document the algorithm in detail, I have written a research paper that presents a thorough formal analysis of the new method. I did submit the paper to a peer-reviewed journal for publication. The reviews were very positive, but unfortunately the reviewers' recommendation was that the paper was a bit too theoretical for the journal I chose, and that a more mathematical journal would probably have been a better fit. Fortunately, I do not have a career in academia, so instead of waiting another 4-5 months for a second review cycle, I can just opt to put the paper live here. Additionally, I've made a live web page that implements (well, actually cross-compiles, since I am using Emscripten) the algorithm in JavaScript, so that you can test it in your browser right now. I really hope you find it useful!

As with almost any research, this too seems to have raised more questions than answers. Before you go, I would like to highlight the following points that are still open questions, in case you just skimmed the paper and did not have time to read it fully:

  • The O(n * sqrt(n) * log^2(n)) running time holds only in the theoretical case where the input model is distributed uniformly at random. In the worst (adversarial) case, the presented algorithm runs in O(n^3 log n) time. Can the algorithm be improved to retain the fast time complexity for all input models?
  • On all of the ~2000 test models used in the paper, and several more used with MathGeoLib, the algorithm has been observed to find an OBB as good as or better than the one found by an exhaustive brute force search. However, the paper does not contain a formal proof of the optimality of the algorithm; optimality is only conjectured at this point. It is possible that already tomorrow someone shows a test case that disproves the conjecture. Or, hopefully, perhaps someone proves the conjecture correct and thereby completes the optimality proof of the algorithm. Does there exist a counterexample input for which the new algorithm fails to find the optimal OBB? Or can Conjecture 7.1 presented in the paper be proven to hold?

Even with these theoretical questions still open, the new method has already made a great addition to MathGeoLib, and finally the set of optimal enclosing shape computation code in MGL is adequately complete!

Update 2015/06/05: Several people have been asking for a pdf version, you can find it here.
