Computational costs matter in numerical methods, and they are traditionally estimated by counting the number of floating-point operations (flops). The question of how to count this number in MATLAB is one I recently encountered.
If you have a mathematical expression written as MATLAB code, one way to estimate its cost is to count by hand the multiplications needed to evaluate that expression.
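For example, here is a minimal sketch of such a manual count for a matrix-vector product (the size n and the variable names are purely illustrative):

    % Counting multiplications by hand for y = A*x with an n-by-n matrix A:
    % each of the n entries of y needs n multiplications, so n^2 in total
    % (plus n*(n-1) additions, which a full flop count would also include).
    n = 1000;
    A = rand(n);        % n-by-n matrix
    x = rand(n, 1);     % n-by-1 vector
    y = A * x;          % executes n^2 = 1e6 multiplications
    mulCount = n^2;
    fprintf('Matrix-vector product: %d multiplications\n', mulCount);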
A more practical way to measure the complexity is via CPU time: the longer the CPU time, the higher the cost. MATLAB can report the time the computer spent running the code.
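As a rough sketch (the matrix size and variable names are mine, but tic, toc, and cputime are standard MATLAB functions):

    A = rand(2000);
    b = rand(2000, 1);

    tic;                        % start a wall-clock timer
    x = A \ b;                  % solve the linear system
    elapsed = toc;              % elapsed wall-clock time in seconds

    t0 = cputime;               % CPU time used by MATLAB so far
    x = A \ b;
    cpuSeconds = cputime - t0;  % CPU time consumed by the solve

    fprintf('wall-clock: %.3f s, cpu: %.3f s\n', elapsed, cpuSeconds);

Note that tic/toc measures wall-clock time while cputime measures processor time; for short computations the built-in timeit function, which averages several runs, usually gives more reliable numbers.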
Usually, approximate estimates of the flop count for a given algorithm are carried out 'on paper' and are meant mostly to provide a general scaling law of the computational cost of the problem as a function of the number n of unknowns/degrees of freedom/variables involved. It should be noted, however, that (1) these flop counts are just rough estimates, and (2) the relationship between the effective computational cost and the flop count has become weaker and weaker over the years, since modern processors need much less time to carry out arithmetical operations than to retrieve data from memory. As a result, the effective computational cost depends more on the details of how the algorithm has been coded and on how efficiently the data required by the algorithm are retrieved from the different memory layers. For the small- and medium-sized problems that MATLAB handles easily, there is in my experience little or no clear relationship between the flop count and the actual computational cost.
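As an illustrative sketch of this point (the sizes are chosen arbitrarily; timeit is a standard MATLAB function), you can time a dense solve for several values of n and compare the measured growth against the classical O(n^3) flop count for Gaussian elimination:

    % Measure how the cost of solving A*x = b scales with n. The flop
    % count predicts O(n^3), but as noted above, measured times only
    % loosely follow that law on modern hardware.
    sizes = [500 1000 2000 4000];
    times = zeros(size(sizes));
    for k = 1:numel(sizes)
        n = sizes(k);
        A = rand(n);
        b = rand(n, 1);
        f = @() A \ b;           % function handle for timeit
        times(k) = timeit(f);    % timeit averages several runs
    end
    % If the O(n^3) law held exactly, the third column would be constant:
    disp([sizes(:), times(:), times(:) ./ (sizes(:).^3)]);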