As with so many things, whether this is good or bad depends upon what you want to do.
Are you planning to run Gromacs as a major component of your research? Will you routinely be doing these calculations? If so, it might be worth spending the money on a better computer (and possibly a UPS backup, by the sounds of your power situation). Or are you just running the occasional calculation? In that case, 1 ns/hr is quite respectable.
For reference, consider the benchmark chart at: http://www.gromacs.org/GPU_acceleration
They report about 40-50 ns/day on a 6-core i7 CPU (about twice what you're getting). But they also ran a 24k-atom, cubic box system - a bit under half the size of yours. So, normalized for system size, you're basically matching the slowest setup they put on their chart - which is still quite fast compared to the hardware plenty of people use for these simulations.
Also note that they use a GeForce GTX 680 in many of their benchmarks - a comparatively cheap card just now, as it's a few generations old, and you can find it for around $100 from some vendors. If you feel like monkeying around with GPU acceleration (which is sometimes harder to get working than it seems it should be), the charts suggest a well-over-2-fold improvement in calculation speed for some systems.
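To put rough numbers on that comparison, here is a minimal back-of-envelope sketch in Python. The linear cost-per-atom scaling, the 45 ns/day midpoint, the ~60K atom count, and the 2x GPU factor are all assumptions read off the discussion above, not measurements:

```python
# Back-of-envelope throughput comparison against the benchmark chart.
# Assumes MD cost scales roughly linearly with atom count, which is
# only a first approximation (PME settings, box shape, and cutoffs
# all matter in practice).

bench_ns_per_day = 45.0   # midpoint of the reported 40-50 ns/day (assumed)
bench_atoms = 24_000      # their 24k-atom cubic box
your_atoms = 60_000       # your system, ~60K atoms (assumed from the thread)
your_ns_per_day = 24.0    # 1 ns/hr

# What the benchmark machine might manage on a system of your size:
scaled = bench_ns_per_day * bench_atoms / your_atoms
print(f"size-normalized benchmark: ~{scaled:.0f} ns/day")  # ~18 ns/day

# So per atom you are already in the same ballpark. Applying the
# well-over-2-fold GPU speedup the chart suggests (assumed factor):
gpu_factor = 2.0
print(f"projected with a GPU: ~{your_ns_per_day * gpu_factor:.0f} ns/day")
```

In other words, once you normalize for system size, your CPU-only numbers are roughly in line with their chart, and a GPU is the most plausible route to a meaningful speedup.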
@Stephane Abel, I have modded a laptop with an eGPU (GTX 1050 Ti), which drops the PCIe link from x16 to x4 - so that conversion is a bottleneck. The CPU is an i7-4702MQ with 4C/8T... Later this month I expect to quadruple this speed, as I'm building a single-node server myself for under $2000... roughly 10 TFLOPS, and drawing under 1 kW. I'll give you the global IP and a guest login so you can run LINPACK and the simulation yourself. ;)
I find that post-MD analysis is the more time-consuming part; it pushes you to learn more and more about your system in order to come up with something meaningful.
It can certainly be better. At UNC-CH, the new Linux cluster with GPU compute nodes can yield up to 600 ns of Gromacs all-atom MD per day (for a similar ~60K-atom system).