Seems like you need to perform a Big-O analysis (http://en.wikipedia.org/wiki/Big_O_notation). This can be done by hand by analyzing all the loops in your code.
This will give you the theoretical efficiency of your algorithm, but not the practical one. Depending on the data and the algorithm, even an algorithm with a huge O() bound can be quick when run on a small amount of data.
Bottom line is -- if you have another algorithm to try, and you don't want to do a Big-O analysis for both -- run each on small sets of data and compare. If your data is HUGE -- then Big-O analysis is the best way to go.
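To make the loop analysis concrete, here is a minimal MATLAB sketch (countPairs and its input are invented for this example): one loop over n elements is O(n); nesting a second loop over the same data makes the body run about n^2/2 times, so the function is O(n^2).

    function c = countPairs(x)
    % Count duplicate pairs with two nested loops over the same n elements.
    % The inner body executes n*(n-1)/2 times, so this is O(n^2).
    n = numel(x);
    c = 0;
    for i = 1:n-1
        for j = i+1:n
            if x(i) == x(j)
                c = c + 1;
            end
        end
    end
    end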
As Mikhail said, use Big-O to find the complexity of your algorithm. In addition, study the different aspects of your algorithm (e.g. if it is a graph-based algorithm, some problems are difficult when there is a cycle and easy when an acyclic representation exists).
Study the assumptions, the knowledge representation and the inference method -- all of these affect the complexity of your algorithm.
If you want to measure the running time of your MATLAB implementation instead, you can use the built-in profiler (type "doc profile" for details). Just switch it on ("profile on") before you start your algorithm and type "profile off; profile viewer" at the end. You will get a fine-grained analysis of the functions you used, as long as they are written in MATLAB and not compiled away in a MEX file.
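A minimal usage sketch of the profiler (myAlgorithm stands in for whatever function you want to analyze):

    profile on                  % start collecting timing data
    myAlgorithm(inputData);     % run the code under test
    profile off                 % stop collecting
    profile viewer              % open the per-function report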
I was searching for more and I found some additional concepts in [1] that define performance in terms of speedup and efficiency. These terms were defined as:
"Speedup: is the ratio of serial execution time of the fastest serial algorithm to the parallel execution time"
" Efficiency: usually expressed as the ratio of speedup to the number of processors"
To measure the speed of some code I also saw that FLOPS (floating-point operations per second) is used, and in MATLAB the execution time is measured with the "tic"/"toc" commands.
Now, how can I calculate the FLOPS in a piece of code?
[1] Gupta, A., S. Koric, and T. George. "Sparse matrix factorization on massively parallel computers." In Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, 1, 2009. http://dl.acm.org/citation.cfm?id=1654061.
You can count FLOPs by analyzing the code and its loops. The number of floating-point operations carried out, divided by the total run time, gives you the effective FLOPS.
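As a rough MATLAB sketch of this calculation, assuming a dense n-by-n matrix multiply, which performs about 2*n^3 floating-point operations:

    n = 2000;
    A = rand(n); B = rand(n);
    tic;
    C = A * B;                       % dense matrix multiply
    t = toc;
    flopCount = 2 * n^3;             % approximate flop count for the multiply
    fprintf('%.2f GFLOPS\n', flopCount / t / 1e9);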
I would just reiterate that Big-O, and speedup/efficiency relative to a theoretical serial algorithm, really have no bearing on actual measured times on modern hardware with multi-level caches, etc. I have seen algorithms with worse theoretical run times outperform theoretically better algorithms because they do extra work to optimize cache locality. If your aim is theoretical analysis, divorced from specific hardware and languages, then Big-O is what you want. If actual times on particular systems are important, then implementing and measuring is the way to go. In that realm, speedup is measured as the factor of improvement over another implementation/algorithm in a similar environment.
I assume you are using a finite element method because you don't have an analytical solution. Even so, I prefer to use an analytical approach up to the level at which no more formulas can be applied, i.e. to break the thermal problem down into stages for which analytical formulas exist. Experimental data would be the most acceptable way to measure the performance of an algorithm. You may also study the work/papers of others who have dealt with the same problem/case and compare your result with theirs.
I am amazed that in no case did anyone think that getting an accurate answer was important. Changing the run time, or the number of CPU cycles, is easy: decrease the number of nodes and the cycles are reduced as N^3 (with a solid model). Numerical accuracy, stability and resolution are important and should be included in the trade-off. First be sure that the result matches physical measurement.
As always the rule is the same: make it right, then make it fast.
In computer science, we use efficiency to describe properties of an algorithm relating to how much of various types of resources it consumes. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process, where the goal is to reduce resource consumption, including time to completion, to some acceptable, optimal level.
And I would add a few points for measuring efficiency:
1.1 What Are We Testing?
What we're trying to test:
• if statements
• while loops
• do loops
• for loops
• array accesses (reading and writing)
• mathematical statements (integer and floating point)
• logical operations (ANDing, ORing)
1.2 What We're Testing (see the timing sketch after this list):
• Bubble Sort
• Insertion Sort
• Fletcher 32 bit CRC Checksum
• Run Length Encoding
• Prime Number Generation (Floating Point and Integer Operations)
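As a sketch of how one of these could be timed (bubble sort, using the tic/toc commands mentioned above; timeBubbleSort is a helper written just for this example):

    function t = timeBubbleSort(n)
    % Bubble-sort n random values and return the elapsed time.
    x = rand(1, n);
    tic;
    for i = 1:n-1
        for j = 1:n-i
            if x(j) > x(j+1)
                tmp = x(j); x(j) = x(j+1); x(j+1) = tmp;   % swap neighbours
            end
        end
    end
    t = toc;
    end

Calling it for a few sizes, e.g. timeBubbleSort(1000) versus timeBubbleSort(2000), shows the roughly 4x growth you expect from an O(n^2) sort.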
It seems that you know big-O and the other asymptotic notations, but are having difficulty with the actual measurement. If so, follow these instructions:
1. First take inputs of some size n (e.g. in the case of sorting, start with 1 to 20 elements to be sorted).
2. Try to find the number of comparisons your algorithm makes for each n (initially this will take some time, so be patient).
3. Try to fit an equation f(n), where f(n) is the measured cost (comparisons or time taken) and n is the size of the input.
4. Then try to find the asymptotic function (read up on asymptotic notation) that is similar to the fitted function.
Then you will be able to understand the time complexity of your algorithm much better; a short sketch of steps 1-3 follows.
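Here is a minimal MATLAB sketch of steps 1-3, assuming the algorithm being measured is insertion sort (countComparisons is a helper invented for this example); polyfit is then used to fit a quadratic to the measured comparison counts:

    ns = 1:20;
    counts = zeros(size(ns));
    for k = 1:numel(ns)
        counts(k) = countComparisons(rand(1, ns(k)));   % steps 1 and 2
    end
    p = polyfit(ns, counts, 2);                         % step 3: fit f(n)
    fprintf('fitted f(n) = %.2f n^2 + %.2f n + %.2f\n', p(1), p(2), p(3));

    function c = countComparisons(x)
    % Insertion sort that counts element comparisons instead of timing.
    c = 0;
    for i = 2:numel(x)
        key = x(i);
        j = i - 1;
        while j >= 1
            c = c + 1;                 % one comparison of x(j) with key
            if x(j) > key
                x(j+1) = x(j);
                j = j - 1;
            else
                break;
            end
        end
        x(j+1) = key;
    end
    end

Since the fitted leading coefficient dominates as n grows, matching it against n, n log n or n^2 gives you the asymptotic behaviour of step 4.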