You need to supply more information about the platforms you're working on, but essentially there are two approaches to this:
1. Analyze the complexity of the algorithm using big-O notation (or similar).
2. Run the algorithm and measure the time taken.
The first approach is largely theoretical, but if carried out correctly it will give you some idea of how your algorithm will scale up. It can also be used to compare its performance against other algorithms.
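As a rough illustration of what that kind of analysis tells you (the duplicate-check task and function names here are just made up for the example), the same problem can often be solved with very different complexities:

    def has_duplicates_quadratic(items):
        # Compare every pair of elements: roughly n*(n-1)/2 comparisons, i.e. O(n^2).
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # Single pass with a set: each lookup/insert is O(1) on average, so O(n) overall.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False

Big-O analysis tells you that the second version will pull ahead as the input grows, regardless of which machine you run it on.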
When using the second approach, be very rigorous. First, run the algorithm on a machine with as little other software running as possible. Use the system clock to record start and end times. Given that modern operating systems run so many concurrent processes, you are unlikely to get the same result twice (effects such as processor caching can also cause discrepancies). Make many runs and average the results.
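A minimal sketch of that kind of measurement in Python (the helper name time_algorithm and the choice of sorted() as the algorithm under test are just placeholders for whatever you are measuring):

    import time
    import statistics

    def time_algorithm(func, *args, runs=20):
        # Time each run with a high-resolution monotonic clock.
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            func(*args)
            timings.append(time.perf_counter() - start)
        # Average many runs (and look at the spread) to smooth out noise
        # from other processes, caching effects, etc.
        return statistics.mean(timings), statistics.stdev(timings)

    if __name__ == "__main__":
        data = list(range(100_000, 0, -1))
        mean, stdev = time_algorithm(sorted, data)
        print(f"sorted: mean {mean:.6f}s, stdev {stdev:.6f}s")

Reporting the spread alongside the mean also gives you a sense of how noisy the measurements are.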