I currently have OpenCL code that uses double-precision floating point in massively parallel filtering functions (i.e. a lot of multiply-accumulate instructions). Since it does not meet a time deadline, I was wondering how much time I could save by using 64-bit fixed-point integers rather than 64-bit floating point. The accuracy would only be marginally affected; the main problem I have is execution time. Before making the transition, I would like to hear from some experts.
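For concreteness, here is a minimal sketch of what I have in mind for the fixed-point version of one such multiply-accumulate kernel. It assumes a Q32.32 representation and a simple FIR-style loop over a padded input; the kernel name, signature, and fractional split are illustrative, not taken from my actual code:

```c
#define FRAC_BITS 32  /* assumed Q32.32 split: 32 integer bits, 32 fractional bits */

/* Q32.32 multiply: keep the full 128-bit intermediate by combining mul_hi
 * (upper 64 bits of the product) with the wrap-around lower 64 bits,
 * then shift back by the number of fractional bits. */
long fx_mul(long a, long b)
{
    long  hi = mul_hi(a, b);           /* upper 64 bits of the 128-bit product */
    ulong lo = (ulong)a * (ulong)b;    /* lower 64 bits of the product          */
    return (long)(((ulong)hi << (64 - FRAC_BITS)) | (lo >> FRAC_BITS));
}

/* One output sample per work-item: y[n] = sum_k x[n + k] * h[k].
 * Assumes x is padded so that n + taps - 1 stays in bounds. */
__kernel void fir_fx(__global const long *x,   /* Q32.32 input samples  */
                     __global const long *h,   /* Q32.32 coefficients   */
                     __global long *y,         /* Q32.32 output samples */
                     const int taps)
{
    int n = get_global_id(0);
    long acc = 0;
    for (int k = 0; k < taps; ++k)
        acc += fx_mul(x[n + k], h[k]);         /* fixed-point multiply-accumulate */
    y[n] = acc;
}
```

The mul_hi step is what worries me performance-wise: each fixed-point multiply becomes two integer multiplies plus shifts, so whether this actually beats the double-precision version presumably depends on the device's ratio of 64-bit integer to double throughput.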
