If you could pick one algorithm which you like the most, which would it be (not including your own, if you have one)?
Funny question. There is probably no good answer unless you specify what kind of problem you need to solve. For example, there are numerous algorithms for the discrete eigenproblem (in matrix form). I like the Jacobi method, probably the oldest one, for several reasons. But when I have to optimize something, some form of Genetic Algorithm is my favorite choice. Just because I am a lazy programmer.
My favourite algorithm is "Hello World"; I am able to code it in a lot of programming languages!
The Fast Fourier Transform algorithm, because it is able to solve the same problem in a wide variety of applications...
Sieve of Eratosthenes. It was the first algorithm I ever applied.
Dear Daniel Page,
I would like to know whether your question is stated in order to help you write some research paper. If so, then it is worth dedicating some time to writing a well-documented answer. But if the question is only stated to satisfy your curiosity, I keep my previous answer: the best algorithm is "Hello World".
Best regards.
Kitchen algorithms. I'm definitely getting some useful (and tasty) output with those.
The algorithm to generate IFS fractals using the chaos game. The reason why: quite a simple algorithm with relatively few parameters can generate very interesting & beautiful images. It is something I have been playing with as a hobby.
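For anyone who wants to try it, here is a minimal Python sketch of the chaos game for the Sierpinski triangle (the vertex coordinates and point count are arbitrary choices of mine):

import random

# Chaos game for the Sierpinski triangle: three maps, each moving the
# current point halfway to one vertex, with the vertex chosen at random.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n_points=10000):
    x, y = 0.5, 0.5          # arbitrary starting point
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2   # move halfway to the chosen vertex
        points.append((x, y))
    return points

pts = chaos_game()
# Plot pts with any plotting library and the Sierpinski triangle emerges.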
@Juan-Esteban Palomar Tarancon - This is curiosity. Why would I pool favourites from other people to feed my own interests if I already have a very well grounded research field I work in? I have my own specializations in Theory of Computation and Algorithms, and I just like seeing which algorithms are people's favourites. It's no different than me asking about somebody's favourite song or somebody's favourite theorem. Some people may really like graph algorithms, and some may really enjoy fractals. There's no need to be concerned if somebody is asking about YOUR personal favourite.
The reason I phrased it with the parenthesis in this manner is to rule out hypothetical 'pissing' contests about people's own algorithms. For example, if I have the best algorithm in a certain domain of generation algorithms, it doesn't count, nor is it my favourite (personally). Hope this clears this up for you.
Since I didn't answer the question: my favourite algorithm is probably the Euclidean Division Algorithm.
Dear Daniel,
Now I agree with you. My favorite is the procedure to cook a good Valencian paella.
In the common sense: "understanding, learning, applying", and from my personal orientation, the "potential function method".
As a social procedure, the best is a good mountain walk with friends and a good beer with them in the evening. I suggest the last procedure to everybody.
The algorithm to calculate algebraic expressions. This algorithm is very important to define functions.
Algebraic expressions should live in a mathematical space, and we must know their mathematical properties, such as the convergence speed of algorithms and whether or not they constitute a Banach space. That is, perhaps we should know their analytical properties, too...
Frame's algorithm for finding the determinant of an n×n matrix using only trace and matrix multiplication is pretty elegant (though not particularly efficient). However, my favourite has got to be the following, whose name I do not know.
To find integers x and y so that ax+by=g, where g is GCD(a,b):
Form the 2×3 array
1 0 a
0 1 b
And perform only the following restricted row operations:
- add only integer multiples of one row to the other
- multiply rows only by 1 or -1
- swap the two rows
to bring the matrix into the form
x y g
u v 0
where g > 0. It is easy enough to see that this can always be done (always perform a row operation that reduces the size of the larger element of the third column, until one of them is 0). (It's easy enough to give an exact procedure based on the Euclidean algorithm, but that is not even the most efficient method.)
Then
1. g=GCD(a,b) -- that is, the procedure finds the GCD, and this is necessarily the nonzero element in the third column upon completion
2. ax+by=g -- that is, a solution to the above problem is found
and
3. X = x+su, Y = y+sv is the GENERAL solution to the above problem, as s runs over all integers.
The proofs of the above facts take only a few lines and are a nice undergraduate exercise.
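For concreteness, here is a short Python sketch of the procedure as I carry it out (the function name is mine, and I round to the nearest integer quotient so the "remainders" may be negative):

def extended_gcd_matrix(a, b):
    # Find (x, y, g) with a*x + b*y = g = gcd(a, b) by restricted
    # row operations on the array [[1, 0, a], [0, 1, b]].
    r1, r2 = [1, 0, a], [0, 1, b]
    while r1[2] != 0 and r2[2] != 0:
        # Subtract a multiple of the row with the smaller third entry
        # from the other, choosing the nearest-integer quotient so the
        # "remainder" may be negative (this is what speeds things up).
        if abs(r1[2]) >= abs(r2[2]):
            q = round(r1[2] / r2[2])
            r1 = [r1[i] - q * r2[i] for i in range(3)]
        else:
            q = round(r2[2] / r1[2])
            r2 = [r2[i] - q * r1[i] for i in range(3)]
    top = r1 if r1[2] != 0 else r2
    if top[2] < 0:
        top = [-t for t in top]   # multiply the row by -1 so g > 0
    x, y, g = top
    return x, y, g

print(extended_gcd_matrix(125, 8))   # (-3, 47, 1): 125*(-3) + 8*47 = 1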
My favorite algorithm is the algorithm for generating ascending compositions. We recently published the algorithm in "Binary Diagrams for Storing Ascending Compositions", The Computer Journal, (2012) DOI: 10.1093/comjnl/bxs111.
Anti-oscillation device for the iteration x -> f(x):
y* = f(x)
y** = f(y*)
x -> (y* + y**)/2 instead of x -> f(x)
I used this scheme first for solving Kepler's equation for strongly elliptic orbits and found it unbelievably stable and efficient. Who knows of the method and can give me references? I can't believe that I should be the first inventor.
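A quick Python sketch of the scheme applied to Kepler's equation M = E - e*sin(E) (my own transcription; the tolerance and example values are arbitrary):

import math

def kepler_averaged(M, e, tol=1e-12, max_iter=200):
    # Solve M = E - e*sin(E) for E with the averaged iteration
    # x -> (f(x) + f(f(x)))/2, where f(E) = M + e*sin(E).
    f = lambda E: M + e * math.sin(E)
    E = M                      # standard starting guess
    for _ in range(max_iter):
        y1 = f(E)
        y2 = f(y1)
        E_new = (y1 + y2) / 2  # the average damps the oscillation of plain iteration
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

# Strongly elliptic orbit: eccentricity close to 1
E = kepler_averaged(M=0.1, e=0.97)
print(E, E - 0.97 * math.sin(E))   # second value should be close to M = 0.1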
The simple Logistic Map because it's a great pedagogical tool to explain chaos & random number generators.
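A tiny sketch (mine, with arbitrary parameter choices) showing the sensitive dependence on initial conditions at r = 4:

# Logistic map x -> r*x*(1-x): two starting points differing by 1e-10
# diverge to an order-1 gap within a few dozen steps at r = 4, which is
# what makes it such a nice classroom demonstration of chaos.
r = 4.0
x, x_perturbed = 0.2, 0.2 + 1e-10

for step in range(60):
    x = r * x * (1 - x)
    x_perturbed = r * x_perturbed * (1 - x_perturbed)
    if step % 10 == 9:
        print(step + 1, abs(x - x_perturbed))  # the gap grows toward order 1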
One of my favorite algorithms involves generating Pythagorean n-tuples. In '74 I saw the following. Take any odd number, square it, and then split it: 3^2 = 9 = 4+5. Then we have a triple: 3^2+4^2=5^2.
Then take 5^2 = 25 = 12+13.
Then we have 5^2+12^2=13^2.
Adding, we get 3^2+4^2+12^2=13^2.
This process can continue indefinitely. Also, we can start with any odd natural number.
The reason I like this so much is that it was the answer to a grade 10 student's question: "Can the Pythagorean relationship work for more than 3 numbers?"
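A little Python sketch of the process (my own formalization of the splitting step):

def pythagorean_chain(start_odd, length):
    # Starting from an odd number m, split m^2 = a + (a+1) to get the
    # triple (m, a, a+1); then repeat with the new odd hypotenuse a+1.
    # Returns [n1, n2, ..., nk, h] with n1^2 + ... + nk^2 = h^2.
    m = start_odd
    legs = [m]
    for _ in range(length):
        a = (m * m - 1) // 2      # m^2 = a + (a + 1)
        legs.append(a)
        m = a + 1                  # new odd hypotenuse
    return legs + [m]

chain = pythagorean_chain(3, 3)    # [3, 4, 12, 84, 85]
print(chain, sum(n * n for n in chain[:-1]) == chain[-1] ** 2)  # True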
1) try everything
2) replicate what works
3) repeat
I guess this is GA or evolutionary, but I use it in a more selfish, directed sense.
1. An algorithm that attempts to solve a problem and whose reason for failure leads to theoretical results.
2. An algorithm that uses in its code the complexity (the number of steps) that it needs to solve the problem.
The A* algorithm...
The best algorithm for finding the shortest path at minimum cost.
Actually, the question is incomplete, as already pointed out by Marek. Optimality of an algorithm can only be determined relative to some specific problem.
Five minutes ago, I was looking at the sky. I could see the following message written in a cloud:
Warning: Error 404. The universe will be restarted in 30 seconds. All its contents will be destroyed.
Sorry for the trouble. After installing the new version, this bug will be corrected.
This is my favorite algorithm.
When you need guaranteed results, for example when human safety is at stake, turn to interval methods. They are applicable to a wide variety of problems: optimization, finding roots of equation systems, solving ODEs, and many others. An excellent entry point on the web is:
http://www.cs.utep.edu/interval-comp/
just in case you haven't heard of them before. However, nothing is perfect: their worst-case complexity is exponential. Nevertheless, problems with over 70 unknowns have been solved using those methods, including the proof of Kepler's conjecture.
Try them! No more getting trapped in local minima during optimization, even when the objective isn't smooth (differentiable).
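To give a flavour of the idea, a toy Python sketch (real interval libraries also control outward rounding of the bounds, which I omit here):

# A minimal interval type: every operation returns bounds guaranteed
# to contain the true result, so uncertainty propagates rigorously.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
y = Interval(-1.0, 3.0)
print(x + y, x * y)   # [0.0, 5.0] [-2.0, 6.0] -- enclosures, not point values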
Hi Yves. That's not an algorithm, it's a problem. Presumably you have some algorithmic solution to that problem. Could you be more specific?
In fact, there is more than one problem there. If it's an infinite point set there is one sort of approach; if it is a finite point set there is another. Also, what constitutes a "solution"? A drawn figure? A description of line segments forming the boundary? I would suspect that the desired "solution" is a constant-time decision criterion to determine whether a given point is in or out of the convex hull. I can imagine some nice procedures that would yield such a criterion, but this is outside my field, so I don't know what the standard approaches are.
Well, that's certainly closer; it's a list of algorithms by name. Are we to guess which one is your favourite, or is there a hidden clue there?
It is not an exam. I have no expertise in this particular field and did not find your answer "Convex Hull of a 2D Point Set" informative. I asked for clarification out of curiosity regarding which algorithm you had in mind.
You speak of "algorithms" like a computer scientist. I understand the term as a mathematician. Euclid dealt exclusively with geometry, generally with figures consisting of infinitely many points -- and many of his proofs consisted of describing an algorithm. One of the most famous algorithms of number theory comes from his geometry -- the Euclidean Algorithm. Even so, finding the convex hull of a planar set consisting of a circle, a segment of a parabola and 6 discrete points is an interesting computational problem that belongs to a class which is surely machine-solvable via well-known algorithms.
Hi Yves. I am a pure mathematician working in the field of combinatorial matrix theory. Most people don't know what that means; in a nutshell we ask, "Suppose we restrict the elements of a matrix to (some) set S. Can it have property X? For what orders is this possible? What is the internal structure? What general methods can be used to obtain them? And so on"
Classical questions:
For which orders do there exist square matrices with elements in S={1,-1} whose rows are orthogonal? (this is the Hadamard Matrix Problem)
For any n that is not the power of a prime, does there exist an n×n matrix P such that PP^T = nI+J, where J is the matrix of all 1's? (This is the finite projective planes problem; more generally, for which orders n?)
MOLS problem: In general, how many latin squares of order n may one have which are mutually orthogonal -- that is, with the property that the superposition of any two squares yields all n^2 pairs of symbols?
The MUB problem is more recent. It sounds non-combinatorial and matrices are not mentioned, but actually it can be formulated as a question in combinatorial matrix theory: For which dimensions d do there exist d+1 unitary bases of complex d-dimensional space, such that there is a constant k (which turns out to necessarily be 1/\sqrt{d}) so that for any vectors u, v in two different bases in the collection, |u . v| = k? If d+1 bases do not exist in some order d, what is the largest possible number?
If you know something about what is known about the MOLS problem and the MUB problem your first instinct is to think that they are versions of the same problem, but we're quite sure they're not, although there is some connection. However, the finite projective plane problem and the MOLS problem are, indeed, two different formulations of the same problem, and a solution to one will solve the other.
Hi Yves. Unfortunately, this site doesn't seem to give an effective way to link to a particular comment (I'd be interested if anyone knows how). You can find my answer earlier in this discussion. If you list comments chronologically you'll find mine on Oct 29, 2012.
A brief summary: It is an algorithm whose name I do not know, that solves the problem: given integers a, b find integers x, y such that ax+by=g (= GCD(a,b)). The method is a snap to carry out by hand, even for large numbers -- an easier cousin of Gaussian Elimination. For simply finding GCD it runs significantly faster than the Euclidean Algorithm in the average case, and at least as fast in every case. But for finding x, y it runs far faster than the usual method of Euclidean Algorithm plus back-substitution. Further, it finds not one solution, but ALL pairs (x,y). Further, the method generalizes to any number of integers (in place of just two: a, b) and to similar problems over more "interesting" domains. A truly marvellous, elementary and versatile tool that deserves greater exposure.
Hi Yves. First, as I state, the steps of reduction are not (always) the steps of Euclid's algorithm because it allows negative remainders. And that is why it runs considerably faster in the average case.
For example, consider Euclid's algorithm for the numbers 125, 8, arranged in "pair shifts" -- one element shifted by an appropriate multiple of the other:
(125, 8) -> (5, 8) -> (5, 3) -> (2, 3) -> (2, 1) -> (0,1)
5 divisions and subtractions. The algorithm I describe:
(125,8) -> (-3,8) -> (-3,-1) -> (0,-1)
3 divisions and subtractions -- the gcd is the absolute value of the nonzero number, or we can finish with a single negation.
For large numbers there can be far more savings. Of course I'm asking to carry along the matrix, so there is extra load when this is done: two multiplications and subtractions in each step; but fewer divisions and fewer iterations of the loop.
Second, in some sense it is "extended Euclidean", but if I call it that, who will know -- there are a million "extensions" to Euclid's algorithm. What I mean by "I do not know its name" is that I do not know any name that is used by convention. I am not asking permission to make up my own name. It is an already-well-known procedure. If that is truly its name, I'm happy to use it.
Third, while "Bézout-Euclid algorithm" is not generally used to my knowledge, it is an apt description of the standard procedure of computing the Bézout coefficients with Euclid's algorithm and the awkward (and time-consuming) back-substitution. This is explicitly NOT the algorithm I describe. The whole point of the algorithm is to AVOID the back-substitution step, thereby saving considerable time and energy.
In this sense it is more like Gauss-Jordan than Gaussian Elimination. But it is NOT Gauss-Jordan. Observe that the matrix STARTS in RREF and finishes quite differently.
So, fourth, it is not "funny" that I use the matrix formulation. I do so because it provides a more efficient procedure in real terms.
For comparison, if you carry out the Euclidean algorithm in the example above -- where are your Bézout coefficients? You still have several steps of messy algebra to find them. I'll explicitly use my procedure to show how arranging the information in matrix forms saves us work:
1 0 125
0 1 8
=>
1 -16 -3
0 1 8
=>
1 -16 -3
3 -47 -1
=>
-8 125 0
3 -47 -1
=>
-8 125 0
-3 47 1
The GCD is 1, and the Bézout coefficients are found on that line. In particular,
125x(-3)+8x47 = 1
Further, the other line of the matrix provides a vector by which all possible Bézout pairs can be computed, in parametric form, by shifting the found pair. So the GENERAL solution to
125x+8y = 1
is
(x,y) = (-3,47) + t (-8, 125) = (-3-8t, 47+125t)
That is
x = -3-8t
y = 47+125t
where t is any integer.
You might say "that last step was easy; (-8,125) is just the original pair with one negated." That is true in this case and will happen whenever the numbers are relatively prime, but in general it does not happen. True, the standard approach gives the solution in two simple steps, but the reason the solution is complete is not obvious, whereas there are no additional steps in the matrix approach and the reason one has the general solution is completely transparent.
Out of curiosity I searched "extended Euclidean algorithm" and see that someone at Wolfram is using that name for the usual (awkward) method of keeping track of the Bézout calculations in a forward direction along with the Euclidean algorithm.
While that is close in spirit to the algorithm I describe, it is neither the same algorithm (since it still adheres to positive remainders) nor does it take advantage of the simplified bookkeeping that matrices allow. Look at their example, and all the auxiliary calculations it entails.
Further, the example they use (GCD of 987 and 610) requires 14 iterations of the loop in the Euclidean algorithm, whereas the procedure I describe uses only 8 iterations.
Interestingly (I did not know) Bezout's formulation was for polynomials, not integers -- which is a context I often use this procedure for. There is less difference in efficiency in this case, though the matrix procedure still provides superior bookkeeping in any case.
Depends on what you mean by changing the asymptotic complexity. Both are logarithmic in the larger of the two given numbers, and in the average case the constant for the algorithm I describe is about half that of the Euclidean Algorithm -- a quarter if we are calculating the Bézout coefficients. Much better in the best case (constant time), and in the worst case the two algorithms are essentially identical (unless you're also calculating Bézout coefficients, in which case the use of matrices in the recording procedure cuts the work almost in half).
But frankly I don't care much about complexity, even asymptotic complexity. It is the elegance that appeals to me with this algorithm. That, and ease of use. Take two arbitrary 5-digit numbers and find the Bézout coefficients by HAND. If you use the Euclidean algorithm, I guarantee you'll have several false starts before it works. But with a bit of care and attention you can do it the first time with this algorithm, and without a lot of sweat. It is just a well-designed procedure that does much of the thinking and bookkeeping for you.
It's also why I like Gauss-Jordan elimination. It is not (quite) as efficient as Gaussian elimination with back-substitution, but it is much more elegant and far less error-prone as a by-hand calculation. When you work almost entirely by hand (as I generally do), efficiency is not all it's cracked up to be when it falls behind in elegance, simplicity and avoidance of errors.
From a teaching perspective, I find the Sieve of Eratosthenes (http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) to be my favorite. As most students (from junior high school through college) understand the concept of prime numbers, showing them how quickly this algorithm can determine the primes is fascinating.
Programming this algorithm on different CPUs, different OSes, and using different languages shows how computers and languages have evolved. It is a small enough algorithm that it can be implemented in any language from assembly to BASIC to shell script.
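A minimal sketch of the sieve in Python (just one of the many possible implementations):

def sieve(n):
    # Return all primes <= n by the Sieve of Eratosthenes.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out multiples of p, starting at p*p (smaller
            # multiples were already crossed out by smaller primes).
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]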
Which is the best algorithm?
This question raises two new questions: does each problem have one algorithm? And if there is more than one algorithm for a problem, how do we determine their efficiency and effectiveness?
The best algorithm is the one which does its job efficiently and effectively, producing the required result.
The importance lies in the algorithms which guide us to understanding and solving problems. The use of algorithms often occurs in mathematics and computer science (for compressing data and images, sending electronic certificates, and other uses).
The aim and essential quality of an algorithm is to be correct, i.e. to produce the required result in a finite time. Also, algorithms should be well structured, clear, legible and easy to use. The most significant criterion for evaluating an algorithm is its efficiency. So we need algorithms that run faster and use less memory.
The complexity of an algorithm is the cost, measured in running time, or storage, or whatever units are relevant, of using the algorithm to solve the problem at hand.
We live in a world designed for - and increasingly controlled by - simple or complex algorithms. Algorithms are used to program people and/or computers. Algorithms determine espionage tactics, encryption keys, computer networks, robot movements, stock prices, movie scripts, architecture and engineering, changing a car wheel, brushing our teeth, making coffee or frying an egg, writing books, teaching at school, and predicting the movements of financial markets; some people even created an algorithm to predict the winners of the Senate races and the presidency of the U.S. All of these activities or problems can be described in terms of algorithms.
Algorithms are pieces of human knowledge, like books, highways and computer programs: things that are "personal". Not being able to see or touch algorithms does not make them less "real" in our universe. As Carl Sagan said, we are part of the universe and everything in it. So, when we create something like an algorithm that controls a computer, we are creating something that is real.
This depends on the purpose for which you pick the algorithm. As researchers, people may like the algorithms they developed, and often compare their algorithms with the algorithms in the literature in every possible way. Theoretically, a good algorithm is judged by its performance in solving a problem. This concerns the complexity of time and space, and the implementation. Industry may be more concerned with the latter, while academia may be more concerned with the former.
Dining Philosophers Problem
In computer science, the dining philosophers problem is an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them.
It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, presented in terms of computers competing for access to tape drive peripherals. Soon after, Tony Hoare gave the problem its present formulation.
http://en.wikipedia.org/wiki/Dining_philosophers_problem
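For illustration, here is a toy Python sketch of one classic remedy, the resource-ordering solution (philosopher count and meal count are arbitrary choices of mine):

import threading

# One classic fix (often attributed to Dijkstra): impose a global order
# on the forks and make every philosopher pick up the lower-numbered
# fork first, which breaks the circular wait that causes deadlock.
N = 5
forks = [threading.Lock() for _ in range(N)]

def philosopher(i, meals=3):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # ordered acquisition
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                print(f"philosopher {i} eats")

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()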
Looks like some of us confuse the problems to be solved with algorithms, i.e. the ways to solve them, efficiently or not - but that's another story.
The recursive version of merge sort, invented by John von Neumann in 1945. The simple idea of divide and conquer, applied recursively, has been the introduction for generations of computer scientists to the idea of parallel computer processing. I don't think I have to explain the importance of distributed and parallel processing in today's world and how ubiquitous it will be in the future.
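A short Python sketch of the recursive version (my own transcription of the classic scheme):

def merge_sort(a):
    # von Neumann's divide-and-conquer sort: split, sort halves, merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]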
Thanks for your comment Marek. I noticed the same thing. An "algorithm" is not the same thing as a "problem".
I've already answered the question, but I have an observation about naive implementation of recursion that Ryan's comment brings to mind. (I agree that von Neumann's idea is elegant. That was his gift; this is not about that algorithm...)
In simple code you can write a recursive algorithm for spitting out the nth Fibonacci number (for whole number inputs) like this (pseudo-Maple):
f := function(n) if n < 2 then return n else return f(n-1) + f(n-2) end if
Elegant as it looks, this naive version recomputes the same values over and over: the call tree has exponentially many nodes, so even modest n takes forever.
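Here is the same contrast in Python, for anyone who wants to try it:

from functools import lru_cache

def fib_naive(n):
    # Direct transcription of the recursive definition: elegant but
    # exponential, since fib_naive(n-2) is recomputed inside fib_naive(n-1).
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion plus a cache: each value is computed once, linear time.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(100))   # instant; fib_naive(100) would effectively never finish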
@Rob: Most adherents of pure mathematics know the saying "iteration is human, recursion is divine" and believe it. This teaches what lack of experience can do even to intelligent people.
A tricky one, I guess. I'd be willing to say quicksort, if only for the brilliant, yet simple, proposition its correctness is based upon:
If in an array A there is an index i such that A[j] <= A[i] for all j < i and A[j] >= A[i] for all j > i, and the same is true for A[1..i-1] and A[i+1..|A|] recursively, then the array is sorted.
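A Python sketch of how the partitioning realizes that invariant (a simple out-of-place version, my own illustration):

def quicksort(a):
    # Pick a pivot, partition so that everything left of it is <= pivot
    # and everything right is >= pivot, then recurse on both sides --
    # exactly the proposition stated above.
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    left = [x for x in a if x < pivot]
    middle = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # [1, 2, 3, 4, 6, 8, 9]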
For pure elegance, imo it's hard to do better than Euclid's gcd algorithm.
Hi Chris Merrill. I agree that the Euclidean algorithm is highly elegant. But it is not the fastest algorithm for finding the GCD, and there is a very simple algorithm that improves on it: in brief, do the same thing but permit both positive and negative "remainders". If you are working with large numbers, this algorithm is significantly faster in the average case. It is never slower than Euclid's method, because it always follows that method when it is optimal; and in the worst-case scenario, both algorithms are equally efficient.
Here is a comparison I gave in a previous comment:
Euclid's algorithm for the numbers 125, 8, arranged in "pair shifts" -- one element shifted by an appropriate multiple of the other:
(125, 8) -> (5, 8) -> (5, 3) -> (2, 3) -> (2, 1) -> (0,1)
5 divisions and subtractions. The algorithm I describe:
(125,8) -> (-3,8) -> (-3,-1) -> (0,-1)
3 divisions and subtractions -- the gcd is the absolute value of the nonzero number. -- or we can finish with a single negation.
Incidentally, Daniel Page, who posed the question, also says the Euclidean Algorithm is his choice.
ResearchGate doesn't give us the capability to link to specific comments but if you view this entire discussion you'll find my contribution near the top.
In it I discuss the related question of finding integers x, y such that ax+by=g, where g is the GCD of a and b. It is well known how to "back-substitute" after the Euclidean Algorithm to solve this problem, and this is commonly taught. But in this case it is quite inelegant, and by far not as efficient as the matrix-based method I describe there. I do not, however, know of a standard name for the procedure. I teach it whenever I teach GCDs to my students, as it is simply superior and the students find it very easy to apply.
The most useful and surprising single algorithmic operation that I have encountered is Moore-Penrose inversion of matrices (the pseudoinverse). For my purposes I implemented it in C++, taking Singular Value Decomposition (from Press et al., Numerical Recipes in C) as a black box. So the main algorithm in it is SVD, which is too complex to see its elegance, if it is there. But the effect is great: no LU decompositions any more, no troubles with ill-conditioned matrices any more, ...
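In Python the same effect can be sketched with NumPy's SVD (the function name and tolerance are my own choices):

import numpy as np

def pinv_via_svd(A, tol=1e-12):
    # Moore-Penrose pseudoinverse from the SVD: A = U S V^T gives
    # A+ = V S+ U^T, where S+ inverts only the nonzero singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1 / x if x > tol else 0.0 for x in s])
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])   # non-square example
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))  # True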
Alexander Ermolitski IIT BSUIR
I like my own algorithms considered in the article "The generalized Poincare conjecture and some algorithms", Intellectual Archive ID 976 (file attached).
Alexander Ermolitski IIT BSUIR
Everybody can find my algorithms in the attachment.
My favorite algorithm is one that I don't understand, has been programmed by somebody else, and works perfectly...
Exhaustive search is the "best" algorithm ever and thus my favourite :)
The algorithms teacher is my favorite "algorithm", because I fell in love with the instructor who teaches algorithms.
♥JUST KIDDING!!! (HA HA)
My favorite algorithm is the one that I don't understand.
That said, of all algorithms that I have mastery over, A* is probably my favorite to implement, mainly due to the context in which it's usually implemented, but also because it's really quite elegant, as far as algorithms go. Studying it improved my coding style quite a bit, as it helped me figure out how to make any recursive function iterative.
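For anyone curious, here's a sketch of the iterative, priority-queue formulation (the grid, costs and Manhattan heuristic are just example assumptions of mine):

import heapq

def a_star(start, goal, neighbors, heuristic):
    # Iterative A*: expand the node with the smallest g + h using a
    # priority queue, no recursion needed. neighbors(n) yields
    # (neighbor, edge_cost); heuristic(n) must not overestimate.
    open_heap = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + heuristic(nxt), g2, nxt, [*path, nxt]))
    return None, float("inf")

# Toy grid: move in 4 directions on a 5x5 grid, Manhattan-distance heuristic.
def nbrs(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 5 and 0 <= y + dy < 5:
            yield (x + dx, y + dy), 1

path, cost = a_star((0, 0), (4, 4), nbrs, lambda p: abs(4 - p[0]) + abs(4 - p[1]))
print(cost, path)  # 8, one of the shortest grid paths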
But what is the task that algorithm A* solves? Any pointers, maybe?
My favorite is Dijkstra's algorithm for solving the shortest path problem. It is the basis for car navigation, transport systems, etc.
Funny question! Nobody can answer unless you specify what kind of problem you need to solve. If you ask me, then I will say the algorithms developed by me are the best.
The Sieve of Eratosthenes, which I devised by myself when I was solving Project Euler problems. Then I figured out that my algorithm had a name, and was defined thousands of years before )))
Hans, you could have pointed also to C. F. Gauss and his Easter algorithm. He had good reasons for his interest in the topic. Today, with computers everywhere, the dynamics of the Sun-Earth-Moon system (the astronomical basis of all calendar science) is still exciting, but the numerical accidents on which formulas like yours are built are much less so, apart from, perhaps, a certain sportive element.
I should say it is the "Ant Colony Optimization" algorithm. It is very vivid and derived from observations of ordinary life. Also, it is a probabilistic algorithm, which can solve problems like finding paths through graphs. As we know, graphs are common entities in daily life.
My favorite? Hmm, the mother.
Of all algorithms, of course :-)
(GCD).
I vote for Ant Colony Optimization metaheuristic. It was defined by Dorigo, Di Caro and Gambardella.
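To give a feel for the metaheuristic, a toy Ant System sketch for a small TSP (all parameter values -- evaporation rate, alpha, beta, ant count -- are arbitrary illustrations of mine, not the tuned settings of Dorigo et al.):

import random

# Ants build tours guided by pheromone (learned) and 1/distance (greedy);
# shorter tours deposit more pheromone and so get reinforced.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
n = len(cities)
dist = [[((cities[i][0]-cities[j][0])**2 + (cities[i][1]-cities[j][1])**2) ** 0.5
         or 1e-9 for j in range(n)] for i in range(n)]
tau = [[1.0] * n for _ in range(n)]            # pheromone levels

def build_tour(alpha=1.0, beta=2.0):
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = [tau[i][j] ** alpha * (1 / dist[i][j]) ** beta for j in choices]
        tour.append(random.choices(choices, weights)[0])
    return tour

def tour_length(t):
    return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

best, best_len = None, float("inf")
for _ in range(100):                            # iterations
    tours = [build_tour() for _ in range(10)]   # 10 ants per iteration
    for i in range(n):                          # evaporation
        for j in range(n):
            tau[i][j] *= 0.9
    for t in tours:                             # deposit: shorter tour, more pheromone
        L = tour_length(t)
        if L < best_len:
            best, best_len = t, L
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            tau[a][b] += 1.0 / L
            tau[b][a] += 1.0 / L

print(best_len, best)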
As I mostly deal with image classification and clustering problems, the algorithms that serve me best are SVM and random forest, depending on the application. For binary classification and less training data, SVM is better; but for multi-class problems and large datasets, random forest is better, because it is robust to noise, does not overfit, and deals well with missing data. For clustering I use k-means and Gaussian mixture models most frequently.
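For illustration, a small scikit-learn sketch (the synthetic data and all parameters are my own stand-ins for a real image-feature pipeline):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for an image-feature matrix, to show the workflow.
X, y = make_classification(n_samples=500, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("SVM:", svm.score(X_te, y_te), "RF:", rf.score(X_te, y_te))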
Monte Carlo methods: easy, elegant but a bit messy, useful in many fields (integration, optimization, linear equations, numerical solution of differential equations).
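The classic first example, estimating pi by integrating over the unit square (a sketch; the sample count is arbitrary):

import random

# Estimate pi as 4 times the fraction of random points in the unit
# square that land inside the quarter circle x^2 + y^2 <= 1.
def estimate_pi(n=1_000_000):
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1)
    return 4 * inside / n

print(estimate_pi())   # ~3.14; the error shrinks like 1/sqrt(n)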
In the OR field, and for dynamic, real-time reactive systems, I like heuristics based on Monte Carlo processes. I use them for solving some vehicle scheduling problems (DARP, PDPTW, etc.).
I wonder whether non-deterministic algorithms indeed deserve their name. Their stopping rules are usually not unique, and different users can easily obtain different results even for the same data. At the same time, it is beyond any doubt that they are of great practical importance: relatively small effort on the human side, and your income may be tripled, if you happen to be in industry.
The (discrete) Fast Fourier Transform, because of: 1) its huge impact in applications and computing (including the fast multiplication of integers!), 2) its intrinsic elegance and relation to number theory. The algorithm and its mathematical foundations put in evidence the point that there is no difference between so-called "pure mathematics" and "applied mathematics".
Of course my opinion on the Fast Fourier Transform is not only my personal opinion; it is shared by many other mathematicians with various nuances, including Rajendra Bhatia and Gilbert Strang. In particular I invite you to read the last pages of the book on Fourier series by Rajendra Bhatia.
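For the curious, the core of the radix-2 Cooley-Tukey scheme fits in a few lines of Python (a sketch for power-of-two lengths only):

import cmath

def fft(x):
    # Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    # Split into even/odd indices, recurse, combine with twiddle factors.
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddle[k] for k in range(n // 2)] +
            [even[k] - twiddle[k] for k in range(n // 2)])

print([round(abs(v), 3) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])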
I have read about many as part of my courses, but the one I liked the most is "Ant Colony Optimization". Something that is so common in nature can be applied to solve practical problems in networking and robot path planning.
Octav Olteanu, University Politehnica of Bucharest
The Gram-Schmidt procedure in separable L_2 spaces is useful in solving inverse problems, starting from the moments of a function. Thus one obtains "geometric", simple solutions for problems that are hard to solve by other methods.
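A finite-dimensional sketch of the procedure in Python with NumPy (the tolerance for dropping dependent vectors is my own choice):

import numpy as np

def gram_schmidt(vectors):
    # Classical Gram-Schmidt: subtract from each vector its projections
    # onto the previously built orthonormal basis, then normalize.
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        if np.linalg.norm(w) > 1e-12:       # skip (numerically) dependent vectors
            basis.append(w / np.linalg.norm(w))
    return basis

B = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0]),
                  np.array([0.0, 1.0, 1.0])])
print(np.round([np.dot(a, b) for a in B for b in B], 6))  # identity pattern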
A. A. Ermolitski, IIT BSUIR. See my algorithms in:
1) New approach to the generalized Poincare conjecture, Applied Mathematics, V. 4, Number 9, September 2013 (online).
2) Crystal spheres and a geometric concept of the world, IntellectualArchive, physics (online), 2013.
According to my research, the most important algorithms from an information science point of view are: genetic algorithms; the algorithmic complexity measure proposed by Kolmogorov; the algorithmic probability developed by Solomonoff; and learning algorithms.