So let's review.
Lisp is especially good for parsers.
Prolog is especially good for logic-based expert systems.
Java is cheap, has a wonderful library (including a rule-base engine), and will run on nearly everything.
Basic can be used in a pinch.
C/C++ run faster.
Matlab makes wonderful sims/prototypes, including NNs.
But Erlang and Scala are in the wings.
Check the libraries out before you BUY a compiler; if it doesn't have the A.I. libraries, you will spend a lot of time reinventing the wheel. In the case of Java, check out the Java Community, because the libraries that come with it are only part of the picture.
Lisp is the best for general AI (not just the equations/ML stuff). I'd suggest Clojure, since it has many good libraries for hooking into Hadoop if you're doing large-scale, distributed AI computations. And it's the modern Lisp.
OK, I'll ask. What are you doing in Matlab that is AI? I've done symbolic processing (MAPLE), but what are you guys playing with? I don't know if it is still around, but there was a thing called OPS5. I can remember using it, but I don't remember many of the details. It gave you a process for processing rules. It may have been replaced by something. HJ
C++, Borland C++, Delphi, or Java for the language, and PIC, Arduino, and ARM ICs plus some logic gates to apply the mechanical and thinking processes.
MATLAB or something similar (very high level with lots of toolboxes/libraries) for prototyping and research.
Java or C# for performance AFTER creating a MATLAB prototype.
I'd certainly recommend Lisp (bye, John McC, BTW) and also Prolog, but truly it's very much "horses for courses". Every language cited here and above has its strengths and weaknesses according to what you're trying to represent. Maybe you could say a little more about what you're trying to achieve?
Matlab + Java. As the others said, Matlab has a vast amount of tools for prototyping, and Java fits in for performance.
It depends on what you do. Hardcore AI typically uses Lisp; applied AI (machine learning) is usually implemented in Matlab these days, although R is gaining ground for statistics-based machine learning. If you need speed for heavy machine learning, then you'll have to resort to C/C++ (maybe Java if it's not memory-intensive).
To me, you should use a language that you're familiar with. Not dismissing the question, but are you considering the use of multi-agents? If so, that could be part of the answer to your question, or a filter on the existing languages.
Hi Haseeb. To my knowledge, much AI software for modeling distributed artificial intelligence problems is written in C, C++, C# or Java. On the other hand, there are several "ready-2-use" software packages that you can use for modeling purposes.
In any case, it depends on your computational skills. In this paper (http://jasss.soc.surrey.ac.uk/12/4/4.html), for instance, you will find an introductory comparison of multi-agent-based software (NetLogo, Repast and SeSAm).
I think there is no "silver bullet" in programming. The language depends on the problem. For example, Lisp is the language that has had the greatest impact on artificial intelligence, not only for its symbolic-manipulation capabilities, but because it was in fact the first, created by McCarthy. It has been the predominant AI language, first of all in the United States. However, remember that artificial intelligence is essentially symbolic data handled with heuristic methods. In this sense, descriptive languages are the more suitable ones. Some have argued that the most descriptive of the prescriptive languages is Lisp, but the king of the descriptive languages is Prolog.
Prolog has the advantage over Lisp that its search strategy is already implemented. It is a language closer to natural language (and therefore more descriptive). And it has its problems: first, if the problem requires a search strategy different from the one implemented in Prolog; also, many versions have had efficiency problems, due to the intractability of depth-first search in many real cases. Prolog is a language that has prevailed mainly in Europe.
Typical prescriptive languages such as C++, Delphi, etc., can be used, but they are very uncomfortable, partly because you must reimplement resources that Lisp and Prolog already provide. On the other hand, there is the problem of reusability. In fact, this has been the biggest problem knowledge-based systems have had.
In that sense, Lisp and Prolog were created to facilitate the use of certain forms of knowledge representation (lists, trees, graphs, part of the predicate calculus, frames, semantic networks). However, these forms of representation do not guarantee the possibility of standardizing knowledge bases. Today, this problem is being handled through the representation of ontologies, and some versions of these languages are capable of representing knowledge using OWL, the most popular ontology language these days.
The conclusion is that, depending on the problem, we choose the most appropriate language. And these languages must be able to handle multiple representation formalisms.
This question is very important to me. I have searched for a good and comprehensive answer to it for the last 3 months,
but I can't find a good one. I usually use C# for my research because I'm familiar with it, but I am always in doubt about using C#. Is there any negative point in my research because of using C# and the .NET Framework?
Sorry... I forgot to say that my research topic is evolutionary computation and combinatorial optimization...
There is classical A.I. and Soft Computing based A.I. LISP is great at creating rulebases, while Neural Network models often help with soft computing.
It depends therefore on what kind of A.I. you want to pursue, and even then, it might make sense to use a selection of languages, for the different parts of your A.I.
Matlab isn't really a language so much as a modelling system for determining what the effects of an algorithm should be. As such, it has lots of different modeling tools meant to model many different languages and approaches.
It's expensive, but can be used like LabVIEW to even program an FPGA directly.
Java is free, and there is a free IDE called Eclipse that is included with distributions of Linux and can be downloaded for Win or Mac computers. It is also meant to be a write-once, run-anywhere type of language, which means you don't necessarily have to have a separate virtual machine running to use it (it brings its own).
Java is a C-like language based loosely on the C++ variant, but with a massive published library that does part of just about everything. Since Java is an interpreted language, anything it can do, C++ or C# can do faster, if not better; but C# has the .NET library, which is much less obvious about how it operates.
Clojure, Erlang, Scala, etc., are relatively new additions to the programming stable and should be evaluated. For instance, the Akka system installs a microkernel-based message-passing subthread that might be the basis for a light agent.
Lisp is not agent-oriented; Scala is. But this does not mean that Lisp is obsolete, merely that it has a different philosophy about how to build an A.I.
For me, Python is the most useful language for AI, because it's a dynamic yet simple language. It also has a lot of modules, such as NLTK (which deals with natural language processing), numpy, scipy... (a tiny example follows the links below)
http://www.python.org/
http://www.nltk.org/
http://numpy.scipy.org/
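To make the Python suggestion concrete, here is a minimal sketch of the kind of thing NLTK and numpy give you out of the box. It assumes nltk and numpy are installed and that the tokenizer and tagger data have been downloaded; the sentence is just an example.

    import nltk
    import numpy as np

    # One-time downloads of the tokenizer and tagger models:
    # nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

    text = "Which one is the most useful language for artificial intelligence?"
    tokens = nltk.word_tokenize(text)   # split the sentence into word tokens
    tags = nltk.pos_tag(tokens)         # part-of-speech tag each token
    lengths = np.array([len(t) for t in tokens])

    print(tags)
    print("mean token length:", lengths.mean())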
LISP is really the champion, since it is the "list" language by far. In AI we do list processing more than anything else.
I believe you can start doing it in Pascal or Basic; that will make you understand how the logic works internally (this is for learning). But then I would use the following:
Prolog
Lisp
Jython + Weka
Python + libraries like NNs
R + RWeka + machine learning modules (tons)
It is true that if all you are doing in AI is classical AI, then LISP it is. If I want to do NNs, I use Matlab. If I want to do statistical classification, I will still use Matlab. If I want to program an application to be deployed by a company, I will use an IDE like NetBeans or Eclipse.
What problem will be solved by the AI? That will determine the capability the AI needs. There is conventional AI and there is soft computing; which one do you choose?
If it's only simulation, then Matlab and LabVIEW are very appropriate. But for experimental analysis, ANSI C, C, or assembly would be better, because they use few resources and process fast.
I think I have to agree with Mr. Tiago Pereira: work with the tools you are familiar with, unless you have plenty of time for learning. Personally, I have seen conference papers submitted by mainland Chinese researchers using the BASIC language for AI problems. What is most important is that the AI program behaves as intended, and that we try to work with the tools and resources we have.
AI generally uses machine learning algorithms, and these algorithms require a lot of computation. So if you are not deploying on a large scale, Matlab (licensed) or Octave (open source) are the best languages available, as they have some of the best built-in algorithms to make computation fast. At deployment time, you can convert them to languages like Python (most popular), C++, etc.
The question really is "what is Artificial Intelligence?" This link might help you answer it: http://www-formal.stanford.edu/jmc/whatisai/node1.html I propose that the term "Artificial Intelligence" should be changed to "Implied Artificial Intelligence", because it is the genius of the programmer, not the machine itself, that produces the impression of intelligence.
You can, effectively, write AI in any language, because the program is based on the genius of the programmer; if you had the time and money, you could write it in JavaScript.
Pick any language you are most comfortable with: C++, C#, VB, Java, LISP, Ladder (used for programming the PLCs that control machines; see the Siemens S7-200). It really doesn't matter; it all depends on your genius to create the AI impression.
Summarizing all the comments:
The best is the language which:
1) you already know;
2) already has good AI libraries in the specific topic of your interest;
3) can efficiently deal with the amount of data you are going to face.
My vote goes to Java, as I use Weka, RapidMiner and Shogun.
IT DEPENDS WHAT KIND OF PROBLEM IT IS... If the problem is mostly about parsing and such, you can use LISP.
If it's about logic, like a route-finding problem, I'd recommend Java or PROLOG.
We wanted to use LISP in 1985 attempting to build about 200 Expert Systems but our customers wanted PROLOG. Much easier to program, change and maintain. I purchased two Explorer machines ($150k each) thinking they would be best for LISP work but then only used them as demo machines. Not good judgement at the time. We used Demo 1 and later Demo 2 to design a working image of each ES and that worked very well. Prospects could take that and use same to "sell" internally. From '85 to '89 we built more than 300 ESs. Fun time. We had BIG hopes for AI but were too early and on the bleeding edge. We did learn how to provide what the prospect wanted vs. what my 21 engineers thought they should have. If you have lived that you understand what I'm saying.
It all depends on the specific problem, but I think everything is possible in C. Another good option is Java or Prolog. And for things like ANNs, fuzzy logic or genetic algorithms, Matlab could be good for fast prototyping.
Although Lisp was created by John McCarthy (who recently died) to solve artificial intelligence problems, there were other languages created for this purpose, like Prolog. Currently C, C++, Java and other languages are also used, depending on how the problem is modeled. Cheers
Artificial Intelligence is a broad field. It depends on what you do. C++ is best for computational intelligence. Matlab is good as well.
I vote for Python due to its increased community support and easy conversion to languages such as C++ or Matlab. Also, if AI is going to progress away from simple logic problem-solving into adaptive computation, a more easily programmable language like Python or Scala is helpful.
Is it possible to envision a translator to connect languages for Artificial Intelligence as we do for human tongues?
Short answer is yes, it's called a cross compiler. Most computer languages are synthetic languages and do not have anywhere near the ambiguities of natural language, so cross compiling used to be a very important part of porting between computers.
Porting was virtually eliminated by compiling to an intermediate language that is portable across systems, such as .NET or the Java VM (JVM). Java, for instance, implements an open-source virtual machine that can be easily ported to new systems.
It is interesting to note that MS Visual Studio supports dozens of languages that all compile to a mid-level code that runs on the .NET runtime.
The question starting this thread or topic is: "Which one is the most useful language for artificial intelligence?" Nevertheless, it seems that everybody has understood "programming" language, in spite of there being no occurrence of the word "programming" in the initial question. The suitability of a language is a topic investigated in http://www.ijopcm.org/Vol/11/IJOPCM(vol.4.2.2.J.11).pdf
OK, what it sounds like YOU are saying is that the modelling language is the most important. That is what all the guys who were spouting Mathematica were talking about; Mathematica and LabVIEW are two interfaces for modelling.
But the discussion so far has not been all in vain, because, according to my understanding, a proper modelling language can be taken as pseudo-code for practically any coding (programming) interface.
I have been working on the idea of a macro-language that self-models and uses a GA to self-optimize. Unlike Forth (the quintessential macro-language), of which it could be said that if you learned one Forth you learned one Forth, not all the others, and where there was always more algorithmic diffusion over time, we need a language that converges to common algorithms, if only because optimization favors those that stay robust as they get recompiled in different models over those that get more flaky the more they get recompiled.
What I have been thinking might work is a sort of meta-heuristic search of the algorithmic population to find the best-fit population of solutions (according to context), and then a Model-Test, Model-Test cycle where the first test is of the population, and the second test is of the most likely suspect algorithm. That this modeling step needs to be done before committing to a move makes for better unsupervised learning.
We then need feedback about the operation of the sub-processes, so that we can assure reliability by rewinding those that fail, with the ability to go back to the population and re-evaluate it to find an alternate (hopefully more reliable) version to process. (That this latter interface is tantamount to a statistical optimization process relies also on the need to evaluate the relative speed of each process, and to feed that back into the code selection.)
Believe it or not, what this is actually leading to is a skill memory not much different from that created by the cerebellum. The interesting aspect of this is that the actual macro language is arbitrary (as is the modeling language), as long as the GA population is expressive enough and exercised out of band with the action mechanisms, and the optimization mechanism can assure convergence to near-optimal macros.
Somehow I don't think that is what you were discussing.
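For what it's worth, here is a minimal Python sketch of the Model-Test loop described above, under heavy simplifying assumptions of my own: the "macro" is just a bit string, and the model step is a toy fitness function standing in for a real simulation.

    import random

    def model_fitness(macro):
        # Toy stand-in for the "Model" step: score a candidate macro.
        return sum(macro)  # pretend more 1-bits means a better fit to context

    def evolve(pop_size=20, length=16, generations=50):
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            # Model step: evaluate and rank the whole population.
            ranked = sorted(pop, key=model_fitness, reverse=True)
            parents = ranked[:pop_size // 2]       # keep the best half
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]          # one-point crossover
                i = random.randrange(length)
                child[i] ^= 1                      # point mutation
                children.append(child)
            pop = parents + children
        # Test step: return the most likely suspect macro.
        return max(pop, key=model_fitness)

    best = evolve()
    print(best, model_fitness(best))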
John McCarthy, who coined the term AI in 1956, defined it as "the science and engineering of making intelligent machines".
If by intelligent machines you understand computers or machines that need to be programmed, then you are talking about programming languages. By contrast, if you understand machines which can think in a human-like way, then it is worth pointing out that a human can learn and handle any language without being programmed. Thus, if implicitly you understand those machines which need the help of a programmer, we are not being very ambitious.
With respect to programming languages, the best one depends on the aim you have in mind. Nevertheless, Lisp is a programming language with no syntax, and this is the property preferred by AI researchers.
OK, so, I have just described a new language; let's test LISP against it...
1. Does Lisp have a meta-heuristic library that will let us do a context search against populations of GA-generated macros?
2. Does Lisp have a GA capable of generating macros via some arbitrary machine language?
3. Does Lisp have a rewind mechanism that can be used for both optimization and stepwise refinement of macros?
4. Is LISP a modelling language in which, just by changing one compiler parameter, we can decode the "Model" of the macro to find its limitations?
5. Can the population of GA-generated macros be tested for "Comfort Limits" to assure that it will not damage the robot?
6. Does LISP have an evaluation package that can "Test" the outputs of the model's selected population, first to see if it meets the requirements, then to pick an optimal member of the population?
Individual programmers may have achieved these steps separately, but I have yet to hear of a LISP version that contains them all, as part of the compiler.
Perhaps it is not enough to have a program that is data.
1) From an algebraic viewpoint, a language is a free monoid, and the free-monoid structure arises from the concept of a monad.
2) Every monad gives rise to a Kleisli category. State-dependent maps are morphisms in a Kleisli category, and inclusion maps in categories with fuzzy subsets are morphisms of a Kleisli category.
These facts lead us to the conclusion that at the core of computer science and AI lies the concept of the monad. The only programming language involving monads is Haskell; in Haskell, monads are first-class citizens. It does not matter which language you prefer: in order to be a good programmer you must learn Haskell. Perhaps Haskell does not improve your algorithms, but it will improve your programming skills.
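Since monads keep coming up, here is a minimal sketch of the idea in Python rather than Haskell (a toy Maybe monad; the class and names are mine, not from any library):

    class Maybe:
        def __init__(self, value, ok=True):
            self.value, self.ok = value, ok

        @staticmethod
        def unit(value):       # Haskell's 'return': wrap a plain value
            return Maybe(value)

        def bind(self, f):     # Haskell's '>>=': chain a state-dependent map
            return f(self.value) if self.ok else self

    def safe_div(x, y):
        # A state-dependent map: it may fail, and failure is part of the result.
        return Maybe(None, ok=False) if y == 0 else Maybe.unit(x / y)

    # Failure short-circuits the whole chain instead of raising an error.
    result = Maybe.unit(10).bind(lambda x: safe_div(x, 2)).bind(lambda x: safe_div(x, 0))
    print(result.ok)   # False: the division by zero was absorbed by the monad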
Of course it is only C/C++, because of its speed and compatibility with many platforms.
In general, LISP, Prolog and other AI languages are specific languages based on AI logics. But if you want to step back and see the whole AI world, it is C++.
This topic is devoted to artificial intelligence. Speed is related to performance. If the measure of intelligence were its speed, then the most intelligent entities would be photons and neutrinos.
While the above is undoubtedly true, speed is related to performance: if it takes 6 months to run a program, what is the likelihood that the cluster it is running on will not at least partially fail during that time?
A.I. problems include some of the most intractable problems in computing today, problems which usually trade speed for massive parallelism and that are just barely computable with large clusters of computers. When we ask a program to do such a task, should we not at least optimize the code to keep the costs down? In the existing language framework, this means that using Mathematica or LabVIEW to simulate the program logic is a good thing; but simulations are not the real thing, so then we implement them in a bridge language like Java, and finally translate them into a C++ implementation in the hope of increasing the relative speed. We can then use script languages like Python and Ruby to stitch together smaller applications into larger integrated packages, but we should not stop there; we should go on to replace the Python and Ruby code later with C++ as well.
Even Haskell is not, I think, the whole answer for artificial consciousness, although I respect the programmers who share their time between Haskell and Scala.
Am I going to have to go head to head with that crowd just because someone is dogmatic about monads? What do I know from monads? I am talking about the structure of even advanced compilers being less effective than similar structures in the brain, simply because they don't integrate the same functions. If I ever manage to implement a simulation of what I think is going on in the brain, it won't look like Haskell, I am pretty sure.
I must say that I am not a professional programmer, because my work scope is math. However, I have programmed as a hobby in Assembler, Basic, C, C++, Fortran 77, Fortran 98, Objective C, Mathematica, Octave, Clean, Scala, Java, C#, Perl, Python, OCaml, F# and Haskell.
In Haskell I have coded large algorithms with poor performance, but having no bugs. Since the performance is poor, I usually work with C code; but for enjoying the art of programming, the language Haskell is poetry. Algebra is poetry, and monads are abstractions underlying all language structures.
Since understanding monads, functors, natural transformations, Kleisli categories and algebraic structures is a task for intelligent beings, they are a real proof of intelligence. By contrast, I think that ignoring them is not a matter of pride.
Finally, the best programming language is a question of personal preference. Every language is good for a purpose. What cannot be good is ignorance. Learn several languages, try them, and enjoy performing an intelligent task.
Well, the problem is that algebra doesn't help define function; it merely simulates it once it has been defined. You may claim that monads underlie all computer languages, but are you sure that is true for natural languages, especially the degenerate languages needed to deal with uncertainty? Since I don't understand monads, I can't comment; but unless you can figure out how to build a monad on top of my satisficing gate, I am also too busy to worry about whether you think I am ignorant or not.
I agree that programmers should learn many languages. I personally have worked with Basic, Basica, Sharp 4-bit Basic (TRS-80 Pocket), QuickBasic, DOS, Windows CMD, UNIX, LINUX, at least 5 or 6 versions of ASM (one for each processor I have worked with), Algol, Pascal, Fortran, Fortran 77, Cobol, Forth, C, C/C++, MOO, JAVA, SCALA, HTML, PHP, SQL, wikis, blogs, Arduino, G-code, etc. And I don't even claim to be a programmer ;) In fact, one of the rites of passage for a programmer seems to be to build their own language as an interface to their code.
I used to wonder why anyone wouldn't program in ASM, until I learned how efficient C, and C++ by extension, were.
Your Mileage may vary...
Of course, language wars are religious ones.
Speed is in assembler, and smartness in functional and OOP languages. The virtue is in a balanced position between speed and smartness, between simplicity and performance.
This is why I think that the question of the best programming language is similar to the following one.
-- What's the best colored car for driving fast?
-- Red, obviously......
P.S.
Please don't take the last words into account if you don't have a good sense of humor.
Actually, you might be wrong there; it seems that red cars get more tickets. So driving a red car, while it attracts the right kind of attention (girls, etc.), also attracts the wrong type of attention (tickets, etc.), except at night, when red is almost as good as black. Of course, YMMV on humor... ;)
To redirect the topic toward the initial question, it is worth mentioning why Lisp is the preferred language of AI researchers.
The main aim of AI is the creation of computer applications, that is, of their code. Usually, the output of computer applications consists of data. However, in Lisp both code and data are the same thing: both have the same syntax and the same structure. Thus, in the most natural way, the output of a Lisp program can be the code of another one.
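Lisp makes this natural because programs are lists. As a rough analogue only, here is a Python sketch of a program whose output is the code of another program (the names and the generated function are made up for illustration):

    # A program that emits, then runs, another program's code.
    param = 3
    generated = f"def add{param}(x):\n    return x + {param}\n"
    print(generated)             # the output *is* code...

    namespace = {}
    exec(generated, namespace)   # ...which we can load like data
    print(namespace[f"add{param}"](10))   # -> 13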
I think the Sanskrit language is the most useful language for artificial intelligence, whether computer or machine. The foremost reason is that Sanskrit is a purified language based on a system; Sanskrit grammar is most scientific. However, one needs to become more flexible with it, as it is not as easy to grasp as other languages. In a sentence, the position of each word is not fixed; rather, you can place it anywhere, as per the rules of case and declension.
AI programming languages generally involve the manipulation of symbols more than numbers. A symbolic language which is able to represent the objects surrounding us, our immediate environment, and the relationships between those objects is likely to fit as a programming language for AI. Artificial intelligence, unlike natural intelligence, deals with the simulation of a real system's behavior, such as learning new behavior, reasoning, and understanding symbolic data and information. Reasoning involves logic, so Prolog is a good option for logic-based expert systems (I borrow from Graeme Smith).
Java is platform-independent and has huge built-in libraries, including rule-based engines. Initially, computer languages were designed for simple calculations manipulating numbers, and hence, in those good old days, computer language development was primarily based on such numeric computation. Computers are now used for virtually every conceivable system simulation, whether flight simulation, digital imaging, satellite-based warfare, medical devices, control systems, etc. But the concept was adopted from those strings of bits which were found useful for manipulating and representing symbols, characterizing rules for creating and relating symbols.
AI programming languages can be described as adaptive languages that rely on the cognitive aspects of human behavior (thinking, reasoning, natural language processing and vision) based on the theory of symbolic manipulation and representation. In conventional computing languages like C, C++ and others, low-level tasks like creating new data types and allocating memory (e.g., realloc in C) are required prior to or during program design. AI generally frees itself from these tasks, as it instead employs a declarative programming style (see the article by Gunter Neumann) using built-in data structures like trees and pattern matching.
The human brain has a unique capacity to understand and detect patterns, which animals lack to a great extent (except chimps; they too can learn to some extent when taught properly). It's all because of the basal ganglia and cortical functions. AI programming languages, simulating this particular behavior, aim at pattern-matching excellence based on generalization from abstract information, since symbolic computation is much better supported at the abstract level than in standard languages. In this regard, LISP has a long history, with a fully distinct and parenthesized syntax. Lisp is good for creating tree data structures and gives programmers new ground for creating new syntax; you may call it a procedural language or a meta-language, that sort of thing.
The real challenge perhaps lies not in identifying functions but in implementing their machine codes. But before that, designing the code that would generate behavior (as classes, sub-classes, methods and attributes) for these implementations is the real challenge. Human behavioral and animal motor-coordination functions are now being processed: each of the sub-functions is identified as well as classified, and then designed as programs simulating those functions as machine code, e.g., robotic simulations applying AI programming languages. That means a system has to learn in order to mimic such functions, which is based on reasoning, and hence logic. This will become more interesting as we understand more about the origin of the motor and sensory pattern coordination of human behavior, since human behavior is the integration of some reasoning, which must have some logic (am I joking?).
A good programming language supports modularity, and Lisp supports the integration of functional and object-oriented programming. So I think Lisp is good for AI.
For more, the reader may refer to a beautiful paper by Gunter Neumann titled "Programming Languages in Artificial Intelligence".
To write fast code, assembler or C is the best option. By contrast, to write code fast, Lisp, Haskell, Java and C# work fine. To improve your personal skills, learn abstract algebra, music, every programming language and several natural ones.
Nevertheless, if in the computer-programmer pair the intelligent part is the programmer, then the best option consists of trying to imitate the programmer's behavior. To this end, both Lisp and Haskell are very useful, because in the functional paradigm an application is a function, and the output of an application can also be a function, therefore an application too.
I am going to object to the assumption of "logic" in the brain on general principles, and therefore to object to the assumption of "reason". One of the problems I see with A.I. is that it bought into the Laplacian error, that everything can be processed using axiomatic systems. Gödel's work and Heisenberg's work beg to differ. Further, many soft computing paradigms are required to understand how the brain works, and some of these, while they may be supersets of logic, are not at all the same type of thing.
To add to my objections, I object to the assumption that what the brain is doing is pattern matching. I personally believe that the brain is less than interested in patterns, having a less rigid step that does not imply discrete memory locations but is sensitive to similarities, and that lends itself easily to polynomial time.
Sir, you are saying "sensitive to similarities"; does that not relate to patterns? After all, what is a pattern? Finding objects of similarity amid diverseness. Does it not seem so?
The notion that the brain uses logic on general principles is a highly debatable one, and one cannot deny the assumption of reasoning outright. The question that then comes to mind is: does reasoning apply logic? Or, the other way around, does logic have reasons? The whole notion of perceptual understanding and conceptual argumentation is based on the theory of logic and reasoning. One may easily ask what differentiates us from animals. Why can a bird build her nest but not secure it from strong winds? The human mind involves special abstract feelings like empathy, sympathy and conation; here, perhaps, I can agree that these aspects may not relate to logic and reasoning. But cognition has its basic assumptions grounded in reasoning.
Our left-brain skills are associated with mathematics, logic and language. The only other thing that differentiates humans from animals is language. The left brain uses logical, detail-oriented information which is rule-based. Even the theory of language is rule- and routine-based: one cannot just put up some words and create meaning out of them. There are some "similarities", or rather relations among words, that define their abstractness. And what are these but patterns? So is it not rational to say that language is a pattern that we speak and make sense of, senses which can be interpreted and understood by someone who must learn the programming behind the origin of such a language?
Much debate is required before denying the theory of logic and reasoning outright.
A paper gives the basis of cognitive coding; you may wish to read it:
Grossberg, Stephen. "How does a brain build a cognitive code?" Psychological Review, Vol. 87(1), Jan 1980, 1-51. doi:10.1037/0033-295X.87.1.1
The brain applies logic the way a man speaks in prose even when he does not know what prose is.
Logic is an abstraction of the underlying machinery of several mind processes.
Even a dog uses those properties that remain unaltered under some group of transforms, without knowing either algebra or group theory. For instance, you can observe that a human who owns a dog can get fatter, can shave his beard, can change his preferred perfume, and under all these transforms his dog can still identify him. This fact is only possible if the concept the dog has of his owner is an abstraction constructed by disregarding those properties that change under these transforms.
The brain associates, and what is logic but a specific set of associations that follow a specific set of rules? But association is not limited to logic, nor to its rules, so what then is the basis for the assumption that logic rules the brain?
By the same token, when we say reason, we often mean rational thought; but what is rational thought but what the Greeks called "Right Thinking", and an assumption of the superiority of logical rules over other forms? Be very careful saying that the brain uses reason.
You say that similarity is a pattern, but what pattern is it if multiple memory locations each respond to a different similarity, and the end result is a cloud of responses, none of which exactly equals a complete pattern, but many of which combined are equivalent to a pattern? Should we ignore the first-order response, the cloud, just because we are more familiar with discrete storage, where we can directly test for a pattern?
There are, of course, abstractions and transforms by which objects are recognized, but does it really require logic to achieve that? All that is needed is the ability to associate and to abstract; and while I have said that logic is a way of associating, abstraction from the general to the specific is extremely difficult for logic.
Further, although fuzzy logic has the ability to deal with some ambiguity, uncertainty is a critical failure mode for logic. Yet the human brain can and does navigate under uncertainty fairly well. How do you account for this, when you ascribe the properties of logic to what is obviously a superior system as far as robustness is concerned? No, do not assume logic is at all a part of the brain; instead assume that the brain has a superior system of which logic is a pale reflection, and reason merely a bagatelle.
An example can illustrate the way in which the brain uses logic rules.
There is a fish inside a little box. My cat wants to steal the fish. To this end he takes the box and runs away.
Without having read any logic book, my cat knows that taking the box implies taking the fish. What he is thinking is: A contains B implies that if I take A, then I take B too.
The pattern is "if ... then".
Of course, my claim is not inspired by any goddess. On the contrary, my teacher is my cat, which is not able to lie and is very intelligent.
Thanks, Smith, Palomar and Jackson.
A good discussion points out diverse arguments and counterarguments that enrich a didactic discussion. Things are often better known, and conceptions get clarified and understood, through deductive discussions; I have learned a few things and cleared up some misconceptions from copybook reading.
The point, as Smith has mentioned, is the ability to associate between objects of similarity, which indeed do follow specific rules and routines (say, patterns). Whether the brain follows logical rules, or logic rules the brain, may not be that important; but what counts is whether we can disregard the assumption of reasoning in the human brain!
The point is that we construct logic, or we can say that human beings have the inherent ability to construct rule sets based on logic, which animals cannot (or can they?). I had never thought of that dog example of Palomar's. Yes, it is true that a dog does not know algebra or group theory, yet the dog may still be able to identify his (or her) master by means of abstraction or association, which is indeed interesting from the ethological point of view. So what are those properties that change under transforms, and how do they change? That means that although the logic remains the same, the specific sets of associations change, and so the dimensions of abstraction change; do they, if I am correct?
I also don't get the meaning of this: "multiple memory locations all respond each to a different similarity". How does it stand? The word "each" delineates the phrase; rather, can it be that multiple memory locations all respond to a different similarity, or, say, that "each" of the multiple memory locations responds to a different similarity...? I am a bit confused.
And Jackson, you say that the process of abstraction of necessity retains associations. That is correct, but it is also correct that abstraction is not always a necessity; not everyone learns in this world (that's not my saying)! Those associations are retained as memory, and hence there must be some role for logic. During the logical processes of induction and deduction, assertions are broken down into pieces, and therein lies the importance of reasoning: how to correctly re-associate and re-frame, through deductive reasoning, the process of abstraction, or rather abstraction as generalization from bounded variables (ref. Herbert Simon, 1978) through deductive reasoning (coming back to Smith's point, which questions the assumption of whether logic rules the brain). You may think that Palomar's dog perhaps remembers him from a bounded set of variables which the dog associated in its memory through perceptual learning. Do you call that abstraction?
The discussion may gain an important perspective from a different point of view: that of human ability. If human ability were fixed at birth and only effort mattered, then this would open a wonderful chapter on altering human ability, as when you say "this guy has the ability, he can do the job", or "hey, you, put in some extra effort, you have the ability to do the job", and similar things in everyday life.
Lots of people work hard and yet often do not succeed. Why? Maybe it is the ability to associate, or the lack of it; or maybe the ability to re-associate and re-frame, through deductive reasoning, the process of abstraction.
The question above: "Can it be that multiple memory locations all respond to a different similarity, or, say, that 'each' of the multiple memory locations responds to a different similarity...? I am a bit confused."
The word "different" implies that there is some selectivity that sorts the multiple locations so that they each have a separate "pattern". I would question whether this mapping function is necessary, or desirable. The mapping, if it exists at all, can only be traced by connectomics, which is uncertain at the cellular level. The fact is that some of these similarities are close enough together that they can vote on the interpretation, by promoting one or two neurons of their association to the output role for the whole neural group. Information is lost, but because the information is redundant and, by its sheer quantity, ambiguous, the voting merely acts to damp out the individual differences in detection between similar elements.
My theory is that instead of patterns, these neurons are detecting symmetries, and it is only once the symmetries are compared against previous symmetries that there is a hope of detecting a "pattern".
I would like to introduce another viewpoint, based upon languages.
In general, the words of a natural language denote equivalence classes. For instance, the word "book" denotes a lot of different objects of different sizes, different weights, different contents, etc.; that is to say, the same word denotes different objects which are pairwise equivalent.
This is a very important piece of machinery for minimizing memory, because if different books were termed by different words, the vocabulary would become an endless object. Perhaps thinking about this topic, the great mathematician H. Poincaré said: algebra is the art of denoting different objects by the same name.
Once we accept that conceptual thought works with equivalence classes, we can see that equivalence classes can be deduced by observing word occurrences in an evolutive model.
Consider a sentence w1 w2 ... v1 ... wn, where w1, ..., wn and v1 are the involved words. In general, substituting the occurrence of a word, say v1, by another one, v2, yields again a sentence, that is, something with a meaning, provided that v2 belongs to some equivalence class containing v1. Thus, if a human observes several sentences w1 w2 ... v1 ... wn, w1 w2 ... v2 ... wn, ..., w1 w2 ... vm ... wn, he can consider that all the words v1, v2, ..., vm belong to some equivalence class V. In fact, a human considers two objects v1 and v2 equivalent when, under some circumstances, any occurrence of v1 can be substituted by an occurrence of v2 and vice versa. This is also true with respect to words and sentences. In addition, the greater the occurrence frequency of admissible substitutions, the stronger the equivalence. These ideas lead to the conclusion that, if the probability of remembering an association increases with frequency and vanishes as time runs, then time works as a predator performing a natural selection that organizes both memory and concepts, hence equivalence classes.
Indeed, the association we are talking about is the pair: (object, equivalence class).
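As a toy illustration of this substitutability test, here is a short Python sketch (the corpus and all names are mine, invented for the example): words that can replace each other in the same context are grouped as candidates for one equivalence class.

    from collections import defaultdict

    sentences = [
        "i read a book today",
        "i read a paper today",
        "i read a novel today",
        "i ate an apple today",
    ]

    # Map each context (a sentence with one word blanked out) to the words seen in it.
    contexts = defaultdict(set)
    for s in sentences:
        words = s.split()
        for i, w in enumerate(words):
            ctx = " ".join(words[:i] + ["_"] + words[i+1:])
            contexts[ctx].add(w)

    # Words sharing a context are pairwise substitutable: one equivalence class.
    for ctx, ws in contexts.items():
        if len(ws) > 1:
            print(ctx, "->", sorted(ws))
    # prints: i read a _ today -> ['book', 'novel', 'paper']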
Again, I have to say that object equivalence is a matching algorithm. While to some extent this covers the role of digital content-addressable memory, which can be used to implement a type of implicit memory system, without degrading the matches we get caught up in pattern matching again. This is why I believe that a satisficing gate is critical to digital implicit memory. Similarity is a looser definition, a sort of good-enough match instead of a rigid one-to-one match.
If we think about it from the nature of neurons, we see that all we need for a neuron to fire is enough synapses with high enough weights detecting signals, and we will get an action potential high enough to trigger the axon. We can reach this level of input without an actual match.
In a digital implementation of a satisficing CAM element, each digit matches, and then the output is fed into the satisficing gate to degrade the signal, so that the output from the gate is NOT a direct match. The idea is to soften the effect so that matching does not happen at the first layer, and in doing so to force the processing of the second layer to include more information. Think of it as lazy matching. The redundancy is critical to being able to incorporate the larger picture into the memory.
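For concreteness, here is a minimal Python sketch of a good-enough match of this kind. "Satisficing gate" is the poster's term; the threshold, the bit-vector representation and the function name are my own guesses at one possible reading.

    def satisficing_gate(stored, probe, threshold=0.75):
        """Fire if enough digits agree, rather than requiring an exact match."""
        agree = sum(s == p for s, p in zip(stored, probe))
        similarity = agree / len(stored)
        return similarity >= threshold, similarity

    stored = [1, 0, 1, 1, 0, 1, 0, 1]
    probe  = [1, 0, 1, 0, 0, 1, 0, 1]   # one digit differs from the stored entry
    fired, sim = satisficing_gate(stored, probe)
    print(fired, sim)   # True 0.875 -- close enough, so the unit fires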