For floating-point programs, the data cache miss rate as well as the TLB miss rate is higher compared to integer programs. Can somebody explain this in a lucid way?
It depends on the nature of the program, its data access pattern, and so on. But in general, if you take a typical "floating point program" to mean one based on 64-bit floats, and a typical "integer program" to mean one based on 32-bit integers, then the data footprint of the float-based program is twice as large. A sequential sweep therefore touches twice as many cache lines and twice as many pages, so the cache and TLB miss rates are higher.
Beyond that, more information about the architecture, processor, language, and compiler is needed.
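As a minimal sketch of that footprint argument (assuming 64-byte cache lines and 4 KiB pages, which are typical but not universal, and a hypothetical array size `N` chosen to be larger than the caches): the two sweeps below read the same number of elements, but the `double` array occupies twice the bytes of the `int32_t` array.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)  /* 16M elements -- hypothetical size, large enough to spill out of cache */

int main(void) {
    /* Same element count, but twice the bytes for the doubles: 128 MiB vs 64 MiB.
     * A sequential sweep over the doubles touches twice as many 64-byte cache
     * lines and twice as many 4 KiB pages, so with everything else equal its
     * cache and TLB miss counts are roughly double. */
    int32_t *ints = malloc(N * sizeof *ints);
    double  *dbls = malloc(N * sizeof *dbls);
    if (!ints || !dbls) return 1;

    for (size_t i = 0; i < N; i++) { ints[i] = (int32_t)i; dbls[i] = (double)i; }

    int64_t isum = 0;
    double  dsum = 0.0;
    for (size_t i = 0; i < N; i++) isum += ints[i];  /* 16 elements per 64-byte line */
    for (size_t i = 0; i < N; i++) dsum += dbls[i];  /* only 8 elements per line */

    printf("int sum = %lld, double sum = %f\n", (long long)isum, dsum);
    free(ints);
    free(dbls);
    return 0;
}
```

On Linux you could check this with a profiler, e.g. `perf stat -e cache-misses,dTLB-load-misses ./a.out`, comparing runs where one of the two loops is commented out.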
How are floating point calculations implemented on your system? In hardware, or in library routines inserted by the compiler? If the latter, accesses to the code and data of the floating-point routines may cause additional cache misses.
For a deeper analysis of cache behaviour, I recommend Chapter 2 of "Computer Architecture: A Quantitative Approach" by John Hennessy and David Patterson, 5th edition.