Computer vision is growing quickly, with many new killer apps such as augmented reality, intelligent transportation, and 3-D scanning, and no doubt many more advanced interactive systems to come. So does it make more sense to define a new programming language for CV, as Halide (http://halide-lang.org/) does; to use APIs, as has been done for some time with OpenCV; or to define hardware co-processors? Or perhaps some mix of the three, as has evolved for graphics (GL, OpenGL, GPUs, RenderMan, etc.)? I'm curious which approach researchers favor, and what they see as the advantages and disadvantages of each.
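To make the contrast concrete, here is a minimal, purely illustrative Python sketch (not real OpenCV or Halide code; all names are made up) of the first two options: a library/API style, where each call runs eagerly on concrete data, versus a Halide-style DSL, where the algorithm is described declaratively and the execution strategy (the "schedule") is a separate concern.

```python
# --- API style: each call runs eagerly, with a fixed loop order ---
def blur_api(img):
    # 3-tap box blur over a 1-D signal, borders clamped
    w = len(img)
    return [(img[max(i - 1, 0)] + img[i] + img[min(i + 1, w - 1)]) / 3
            for i in range(w)]

# --- DSL style: describe *what* to compute; *how* is decided at realize time ---
class Func:
    def __init__(self, expr):
        self.expr = expr  # the algorithm: a pure function of the pixel index

    def realize(self, width):
        # a trivial "schedule": evaluate every point in order; a real DSL
        # could tile, vectorize, or parallelize here without touching expr
        return [self.expr(i) for i in range(width)]

def blur_dsl(img):
    w = len(img)
    clamped = lambda i: img[min(max(i, 0), w - 1)]
    blur = Func(lambda i: (clamped(i - 1) + clamped(i) + clamped(i + 1)) / 3)
    return blur.realize(w)

signal = [0, 0, 9, 0, 0]
print(blur_api(signal) == blur_dsl(signal))  # same algorithm, two framings
```

The trade-off the question raises shows up even here: the API version is simple to call but bakes in one execution order, while the DSL version adds indirection in exchange for keeping optimization decisions separate from the algorithm.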