As we know, FPGAs are well suited for parallel and pipelined processing. In this regard, can we use FPGAs to accelerate the solution of big data problems from a computer vision perspective?
Big data problems require a great deal of computational power, which can be achieved with clusters of either CPUs or CPUs+GPUs; these are more powerful and also more efficient for solving large problems.
An FPGA may be well suited for real-time implementation of an already developed computer vision algorithm. This is especially important because an FPGA can simulate the final hardware, which can then be used to estimate other factors for mass production.
As mentioned in previous comments, big data is an entirely different problem and hence requires a much more powerful setup. Solving a big data problem is largely an exercise in prototyping and iteration, which by its nature is not well suited to FPGAs.
Comparing a Virtex-7 XC7V2000T FPGA with a 6-core Intel Xeon X5690 / Intel 980X (same chip), which I did a couple of years ago, simply in terms of op/s the FPGA is 795 times faster than the CPU. I am not considering cache, memory interface, configurability... just raw computing power. The FPGA also runs at half the clock speed, which means roughly half the power.

The Intel has 6 cores at 4 ops/cycle and 3.2 GHz, which yields 76.8 billion 64-bit ops/s. The Xilinx has 305,400 slices at 4 LUTs/slice, i.e. 1,221,600 LUTs, at a 1.6 GHz clock; allowing roughly 32 LUTs per 64-bit operation (about 38,175 parallel 64-bit operators), that yields 61,080 billion 64-bit ops/s. Since one can create a dedicated algorithmic processor, one should in principle be able to run the algorithm at up to 795 times the speed of the CPU. I achieved a couple of orders of magnitude doing an unscientific comparison with a simple simulation.

There are other factors, as mentioned above: algorithmic suitability, space left for cache, etc. Programming can be a pain in the... but once created, loading the FPGA code takes about a millisecond, and for each chip you can keep a dozen designs in your application and switch the FPGA to another algorithm almost instantaneously. Concurrency/parallelism is algorithm dependent, of course. I have not worked with vision and have no expertise in this area. As for big data, i.e. analytics (I assume, since big data is, well, data), FPGAs are well suited to many analytic operations. This is in a lab, also; for production, economics is another consideration.
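For concreteness, here is a minimal back-of-the-envelope sketch of the arithmetic above. The 32-LUTs-per-64-bit-op figure is an assumption inferred to make the numbers agree, not a datasheet value:

```python
# Back-of-the-envelope op/s comparison: Virtex-7 XC7V2000T vs. 6-core Xeon X5690.
# ASSUMPTION: ~32 LUTs per 64-bit operation, inferred to match the 795x figure.

# CPU: 6 cores * 4 ops/cycle * 3.2 GHz
cpu_ops_per_s = 6 * 4 * 3.2e9               # 76.8e9 = 76.8 billion 64-bit ops/s

# FPGA: 305,400 slices * 4 LUTs/slice = 1,221,600 LUTs at 1.6 GHz
luts = 305_400 * 4                          # 1,221,600 LUTs
LUTS_PER_64BIT_OP = 32                      # hypothetical resource cost per operator
fpga_operators = luts // LUTS_PER_64BIT_OP  # ~38,175 parallel 64-bit operators
fpga_ops_per_s = fpga_operators * 1.6e9     # ~61.08e12 = 61,080 billion ops/s

print(f"CPU : {cpu_ops_per_s / 1e9:.1f} billion 64-bit ops/s")
print(f"FPGA: {fpga_ops_per_s / 1e9:.0f} billion 64-bit ops/s")
print(f"Ratio: {fpga_ops_per_s / cpu_ops_per_s:.0f}x")  # ~795x
```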
Prof. Richard Rankin, you have provided excellent information. Not all computer vision algorithms are suited to FPGAs, but I believe that methods which can be parallelized or pipelined are easily implementable on FPGAs. I have myself implemented a couple of image enhancement algorithms on FPGAs, achieving about 120 fps for video processing algorithms at a resolution of 1900 x 1200. However, my algorithm works only for medium-fog videos. With this perspective, I thought to use FPGAs to solve problems related to big data. I am confident that I can achieve it. Thank you all.
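As a quick sanity check on those figures, here is a sketch of the pixel-rate arithmetic, assuming a fully pipelined datapath that emits one pixel per clock cycle (the one-pixel-per-cycle figure is my assumption, not stated in the post above):

```python
# Required pixel clock for 1900 x 1200 video at 120 fps,
# ASSUMING a fully pipelined design processing one pixel per clock cycle.
width, height, fps = 1900, 1200, 120

pixels_per_frame = width * height    # 2,280,000 pixels
pixel_rate = pixels_per_frame * fps  # ~273.6 Mpixels/s

print(f"Required pixel clock: {pixel_rate / 1e6:.1f} MHz")
# ~273.6 MHz is attainable on a Virtex-7 class part; processing
# 2 pixels per cycle would halve the required clock.
```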
I think FPGAs and GPUs can achieve similar performance in many machine vision and image processing tasks. It depends on which one you are more comfortable implementing the algorithm on. I have seen some examples of using FPGAs for big data, and I think they are capable of it.
The real differences between the two are not in performance but in price, power consumption, suitability for embedded systems, and development time.
Agreed. FPGA power consumption, and ipso facto heat, is lower. Design time is higher for FPGAs, but the tools are improving. Another issue is the requirement for specialized hardware: your FPGA-utilizing code will not run on an "off-the-shelf" machine. However, the FPGA could be delivered to market on a standard bus board, e.g. for PCI, as a software/hardware package.

Oracle and others are doing well with proprietary systems. Very large corporations, e.g. Google, Amazon, etc., can afford to design and build their own complete systems. Still other, not-so-large firms such as hedge funds and brokerages keep their proprietary analytics under lock and key, as this is their "bread and butter". Not too long ago, a major brokerage had an employee steal code, which resulted in prison time for the culprit. Not only is hardware harder to steal, but algorithms patented as circuits are much easier to defend in court, with a hundred years of case law behind circuit design.

As with massively parallel subsystems, you are developing new algorithms (not simply code) for critical sections, which may be 10% of your overall system but 90% of your current processing time, so you are working with a relatively small section of your overall code. But it is a complex overall decision, not to be taken lightly.
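The 10%-of-code / 90%-of-time observation is essentially Amdahl's law. A minimal sketch of what offloading only that critical section to an FPGA buys you overall (the section speedup factors below are illustrative assumptions):

```python
# Amdahl's law: overall speedup when only the critical section is accelerated.
def amdahl_speedup(time_fraction: float, section_speedup: float) -> float:
    """Overall speedup when `time_fraction` of runtime is sped up by `section_speedup`."""
    return 1.0 / ((1.0 - time_fraction) + time_fraction / section_speedup)

# Critical section = 90% of runtime; FPGA speedup factors are assumptions.
for s in (10, 100, 795):
    print(f"Section {s:>3}x faster -> overall {amdahl_speedup(0.9, s):.1f}x")
# Even an infinite section speedup caps the overall gain at 1/(1 - 0.9) = 10x,
# which is why the remaining 10% of runtime still matters.
```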