Because the present forms of data representation---including vector space, logical, graph-based, etc.---have not been uniformly useful, does it follow that, in general, there will not be a single universally preferable form of data representation, one that would capture a previously inaccessible and more complete view of data objects?
CAUTION: This is a serious scientific question with radical applied implications.
Since appropriate data visualization depends on the context in which the data will be used, perhaps the structure of a dataset might follow some formatting and collection protocols, and visualization modes might be selected by the user, as in the display of weather data (.tmy3, .wea, etc.) in software like climate tools (or others).
Data representation issues arise in relation to human-machine systems, especially human-machine systems that include computing and/or control systems. Humans use the innate connection between an internal, sensory perception (subject) of any "thing" (object) and its symbolic, linguistic representation apparently seamlessly. Moreover, it seems this connection also speaks to the connection between (and the unified, integrated whole formed by) the conscious and the sub-conscious. Let me make bold to conjecture that all our conscious thinking is symbolic, and mostly governed by our innate absorption of Chomsky's notional Universal Grammar (which we mostly acquire during our mother-tongue acquisition years). With effort, we also learn foreign languages and other grammatically governed endeavours like music, dance, etc. But the content expressed in this thinking does not all originate in the conscious; its bulk comes from the sub-conscious. Access to this content is content-addressable, associative, and distributed over the body. (I believe that even neuroscientists, like the celebrated V. S. Ramachandran, have proposed that our intelligence and memory are not all located in centres like the brain and its parts. To support my argument with common sense, I will point to the need for practice in learning tasks like using a mouse, a touchpad, etc. in relation to human-computer systems.)

With computing machines, an analogous situation rarely existed before, until the advent of massive data sets stored permanently in forms amenable to, and with tools offering, efficient and content-addressable search (the www and the search engines). Now that we are face-to-face with a replica of our distributed intelligence (without an integrated purpose and consciousness) in the form of this web, we had better search again within ourselves for solutions to the massive problems of big data. Let us treat the www (and its successors) as the sub-conscious, the search engines as the "brain" (which -- erroneously -- appears to hold all the intelligence), and the various forms of data representation as the organs of expression.
May I be so impolite as to suggest a hint? ;--)
Take a look, for example, at a string, e.g. BCCABAC. Can you recover its formative history: the sequence of operations involved in its formation? Obviously, no. (By the way, for grammar learning, this would have made all the difference in the world.)
HINT: We need the representational formalism that provides new formal means/tools for talking about formative object history in a *direct* manner. So, all tricks like inserting brackets etc. will not do, since they are quite artificial means for addressing the issue. Besides, we want the representation more general than a string.
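To make the hint concrete, here is a minimal illustrative sketch (in Python; the function names are mine, not part of any formalism) showing that two different formative histories collapse into the same string BCCABAC, so the finished string cannot tell us which sequence of operations produced it:

    # Illustrative only: two different formative histories yield the same string,
    # so the finished string carries no record of how it was formed.

    def build_by_appending():
        s = ""
        for ch in "BCCABAC":         # append one symbol at a time, left to right
            s = s + ch
        return s

    def build_from_chunks():
        parts = ["BCC", "ABA", "C"]  # assemble the same string from larger pieces
        return parts[0] + parts[1] + parts[2]

    # Both histories collapse to the identical object; the history is irrecoverable.
    assert build_by_appending() == build_from_chunks() == "BCCABAC"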
Prof Goldfarb, we know you are hinting towards the ETS (Evolving Transformation System) representation. I had read some early papers of your group proposing and developing ETS. We would like to know more about the development after the early period, up to now.
Thanks, Ramprasad! I will get to this later.
But, for now, let's not spoil for other people the fun of discovering the inadequacies of conventional forms of data representation.
@Johann I agree. Yet, can we try to see the "duality" between the conscious and sub-conscious as being more general, and try to discover the same in artefacts (datasets and search engines) generated by human-computer interaction over a long period? Only via a conjecture and then practical experimentation, empirically, not via a deducible theory seeking to claim soundness and completeness? (At the same time I am somewhat worried that this has become a fashion in new computing paradigms: conjecture from a generalisation of perceived biological models and seek empirical justification using easily generated but specific evidence, ultimately ending up making unjustified generalisations... I do not want to be lured by the sirens, but I don't know if I am already hooked. We can perhaps remain sane if we claim neither soundness nor completeness. Universality is not what Lev sought in the question he initiated.) But your point about the objective reality being just there regardless of what (or whether) we observe, after burying Schroedinger's Cat, is well taken. Let us not debate our knowledge acquisition and interpretation; instead let us just observe that data representation is about acquiring, storing, retrieving, and presenting results of the interaction of sensors (human/bio/artificial) with the external world, the last part (presenting) sought to be accessible to humans. Perhaps the interpretation of the presented "data" can be left to attending humans.
@Johann, of course, I address universality. I believe that humans (and all biological species) are endowed with the same form of representation, which must be ubiquitous throughout the Universe.
@Joachim, tuples are a set-theoretic concept, which I believe is not that useful in the natural sciences. Why?
The main reason is that each entity in the tuple has no structure, and the tuple itself offers only a very primitive kind of structure (linear order). If you look at any natural object, e.g. a tree or a face, you can see immediately why a tuple is a very blunt instrument for dealing with reality. Partly for the same reason, vectors (which are numeric tuples) are also a very crude form of representation.
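As a purely illustrative aside (a toy Python sketch, not from the discussion above), flattening even a tiny tree into a tuple of numbers shows how the attachment structure, which is what makes it a tree, is simply lost:

    # Illustrative only: a tiny tree and a crude tuple "summary" of it.
    # The tuple records counts but not which leaf is attached to which branch.

    tree_a = {"trunk": {"branch1": ["leaf", "leaf"], "branch2": ["leaf"]}}
    tree_b = {"trunk": {"branch1": ["leaf"], "branch2": ["leaf", "leaf"]}}

    def crude_tuple(tree):
        branches = len(tree["trunk"])                          # number of branches
        leaves = sum(len(v) for v in tree["trunk"].values())   # total leaves
        return (branches, leaves)

    # Two differently shaped trees collapse to the same tuple: structure is lost.
    assert crude_tuple(tree_a) == crude_tuple(tree_b) == (2, 3)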
@ Johann: The line between the two "parts" or "types" of our consciousness is thin when we want to be able to define it universally for all humans across time, space (geography), culture, development stages of individuals, etc. But, observe that what is a deliberate (conscious) act requiring effort for a novice (say, driving) is an effortless, largely unconscious act for the one who is teaching the novice the same activity. Similar direction of development and temporal change can be observed in the more recent user interfaces of operating systems and frontend applications such as web browsers: the more they remember of user habits and preferences, and the more the humans get to know the behaviour of the machines, the faster and better they both respond, with better throughput and representation for the whole human-machine complex. And in both (humans and machines), the history stored tends to become an entangled web of associative memories, rather than flat tables of tuples related only by enumerations.
@ Lev: Set theory has been in use in natural sciences informally for many centuries and formally for at least one, and seeing that these centuries have been particularly fruitful for natural sciences, the statement that suggests that set theoretic concepts are not "that" useful therein seems unpalatable. Again appealing to the notion (or hypothesis) of Universal Grammar, or just to the empirical observation that over millennia humans have turned instinctual but prolonged activities into grammatically governed cultural, coordinated ones (cooking and brewing and eating and drinking, music and dance, all communication, teamwork, laws, ad infinitum...) it may seem plausible that whatever knowledge we need to share, transmit, codify, teach, store-to-retrieve-with-high-fidelity, etc., must be in a grammatically structured form, the most general/powerful being recursive (or perhaps recursively enumerable) sets. Of course, we do conceptualise uncountable sets such as real numbers and the continuum, but what we transmit to students studying real numbers is a representation grammatically structured and finite. As Manin has observed (A Course in Mathematical Logic, Translated by Koblitz, Springer, 1977, p. 18) "The main reason for [the tendency of human languages to transmit information in a sequence of distinguishable elementary signs] is probably the much greater (theoretically unlimited) uniqueness and reproducibility of information than is possible with other methods of conveyance." Why can't we continue to use both: perception of wholes in nonlinear forms, and formal representation in sequential, linear, discrete forms? And of course we do this only for machines and human-machine systems, not for van Gogh paintings...
Also see Stephen Black, "The Nature of Living Things", William Heinemann Medical Books Ltd, UK 1972, p. 93: "All living things are largely water and it has been proposed that even this water is to some extent ordered according to molecular shape. Thus the water within the living cell is in fact more viscous than the water without -- a property which results in different behaviour in a confined space such as the capillary tube. Water of this kind is described as 'polywater'." Shape, sequencing, and their connections with life, with chemical reactions within living organisms and with their behaviour, are the subject matter of this splendid book.
Set theoretical modeling is inadequate, no doubt. Even logicians will never deny this. But then so is any other representation in use now. The main problem that any representation schema needs to address is: how to glue together various forms of representation and modeling so that humans can work in more and more efficient, reproducible, well co-ordinated ways, while staying creative, within structured as well as unstructured collectives of humans and machines.
In my opinion a new form of data representation which might fundamentally transform AI and ML does not need to be a completely new concept. We have a whole bunch of probabilistic approaches already, it should be only a matter of time until some useful knowledge representation scheme comes up out of this. Don't forget it is still a comparatively young research field.
@Ramprasad, set theory is a very basic language for defining math. structures but it *does not address the structures themselves*. For the representational formalism we definitely need some non-trivial formal structure, data structure, which would clarify the form in which we view and collect the data. For the last two centuries, the basic form of data representation in science and applications has been the vector space. The reason why the latter turned out to be not an adequate representational formalism I briefly addressed in my last post at https://www.researchgate.net/post/What_is_the_concept_of_class_in_your_favorite_learning_model_eg_in_SVM_ANN_etc#share .
@Joachim, I'm afraid you are not talking about an acceptable (formal) structure of the data representation space. You see, what modern mathematics should teach us is that we have to take the concept of a structure defined by means of some operations very seriously. We need an adequate understanding of what the universal data model is.
In AI and computer science in general, this wisdom has been neglected for a number of reasons, which resulted, for example, in the lack of adequate learning models and the historical neglect (in CS) of various *natural* settings for search problems. By the way, the success of Google is explained not by any breakthrough in the formulation and solution of such search problems but by simply admitting search as fundamental to our daily lives. However, due to the total neglect of the most natural settings for search problems by CS, Google has had no formal developments to rely on.
As you can see there is much confusion regarding the concept of representational formalism. This is mainly due to the fact that so far no science---including mathematics and computer science---has addressed this concept. But our future depends on it, since the representational formalism is supposed to clarify how we *view* and *collect* the data. I'm suggesting that such data habits as we developed so far are not productive, and we are faced with the historically unprecedented change in this respect.
OK, let's take, for example, a tree. How should we approach the 'right' form of representation for it?
(I'm considering a tree, but *exactly the same* considerations apply to a webpage, image, etc.)
@Theo: "We have a whole bunch of probabilistic approaches already, it should be only a matter of time until some useful knowledge representation scheme comes up out of this."
Obviously, probabilistic approaches do not address the issue of representation *at all*.
Are you implying that we don't need to be concerned about the question of structural data representation?
For this discussion to remain a discussion and not a puzzle-solving game, we need a shared definition (with a shared interpretation of it!) of what is "the" question of structural data representation. That may necessitate a similarly shared definition of what can claim to be a data representation, and what data representations can claim to be structural ones. Lev might as well initiate the process by giving his preferred definitions and interpretations.
Of course, Ramprasad. Let me follow, first, section 1.1 of the above book ROMAS http://www.cs.unb.ca/~goldfarb/BOOK.pdf .
So, in dealing with the concept of representational formalism, we should try to generalize the only basic and millennia tested example we have---the numeric representation. See Fig. 1.1 there.
Now, as you can imagine, moving on to the concept of structural representation has been anything but a routine affair. Here we are faced with several issues related to the generalization of the numeric representation. Should we rely on some kind of interconnected 'units'? Which kind of units should one consider? Should each unit be structured? Etc. Also, what is the 'physical' meaning of such informational units?
Now getting back to the "Hint" in my first post, it has gradually become clear to me that things like strings, graphs, etc. have not been successful as candidates for structural representation. Why? Partly I explained that in the Hint.
So let's discuss a bit that hint, which apparently was ignored. ;--)
Johann, why make life so complicated? ;--))
The above Hint was drawing your attention to the fact that *all* objects/processes have formative histories---either completely encoded, as far as Nature is concerned, or 'simulated', as far as an agent interacting with a particular object is concerned. And so it is critical for the structural representation to have the capability to deal with this, universal, feature of reality. In particular, above all, structural representation must probably be a temporal generalization of natural numbers as they are defined (temporally) in the Peano axioms
http://en.wikipedia.org/wiki/Natural_number#Peano_axioms
(see also Figs. 1.4 and 1.5 in ROMAS.)
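A toy illustration of this temporal reading (my own Python sketch, not the ETS definitions in ROMAS): a natural number, Peano-style, is the record of repeatedly applying one featureless successor operation, whereas a structural generalization would let the steps of the history be distinguishable, structured events:

    # Illustrative only: Peano-style, the "history" of a natural number is the
    # repeated application of one featureless event (the successor); a structural
    # generalization would draw each step from a set of distinguishable event types.

    def peano_history(n):
        return ["zero"] + ["successor"] * n   # every step is the same event

    # A hypothetical structured history: the same temporal idea, but the steps
    # are named, distinguishable events rather than one repeated operation.
    structured_history = ["seed", "attach_branch", "attach_leaf", "attach_branch"]

    print(peano_history(3))      # ['zero', 'successor', 'successor', 'successor']
    print(structured_history)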
Now, the only missing ingredient is the structure of units out of which the proposed structural representation is supposed to be built. Please see section 1.4 in ROMAS, where each unit is the informational blueprint for a concrete *event*, out of which our reality is composed.
To get an intuitive feel for the representation, please see a very simple example in section 2.9 in ROMAS.
No definitions and interpretations yet. Anyway, reading some parts therein (ROMAS), and reading Lev's responses here, it is becoming less and less clear. My question is now this: let us assume that all the language, numbers, logic, set theory, strings---all, all, all that has been there in science so far---is inadequate, hopelessly inadequate, etc. Or worse, that it is self-contradictory, etc. How does that all mean that we are faced with an imminent (benevolence or danger -- take your pick -- of) a fundamentally new (and universal too!) representation formalism? Just by recognizing what biology tells us -- that physics is very primitive -- how can we conclude that a great advance in a short time is imminent?
@Ramprasad:
1. "No definitions and interpretations yet. Anyway, reading some parts threin (ROMAS), and reading Lev's responses here, it is becoming less and less clear."
Do you want formal definitions? Please see the very first paper in my profile in ResearchGate. (I thought you would have done it by now).
But somehow I feel that the problem is not here. Obviously, just "recognizing what biology tells us -- that physics is very primitive" is not going to help one if one cannot understand the introduced concept of struct as a form of representation.
It appears to me that subconsciously you are expecting to see something expressed via the basic formal concepts familiar to you and *this is not going to happen*. The proposed concept of struct is a new formal concept not reducible to the conventional ones. But it is not a difficult concept at all: even undergraduate students can grasp it very quickly. Also, have you read through the illustrative example I mentioned in my last post?
2. "How does that all mean that we are faced with an imminent (benevolence or danger -- take your pick -- of) a fundamentally new (and universal too!) representation formalism?"
If you, by yourself, have not already come to the conclusion about the enormous scientific stagnation of the whole field of AI (and ML / PR in particular)---despite zillions of 'useful' programs having nothing to do with the true AI agenda---there is no easy way to the realization of this crisis. (See my review of Nilsson's book "The Quest for Artificial Intelligence"
http://www.amazon.com/Quest-Artificial-Intelligence-Nils-Nilsson/product-reviews/0521122937/ref=cm_cr_dp_qt_hist_three?ie=UTF8&filterBy=addThreeStar&showViewpoints=0 )
I should add that already the fathers of the Scientific Revolution, mainly of the 17th and 18th centuries, realized the enormous challenge of the theory of Mind---as a *non-spatial* entity in contrast to the spatially based science (including the formal concept of space) they were developing---when they removed the mind from their (and hence our) scientific agenda.
So to address your question why "we are faced with an imminence of . . . a fundamentally new (and universal too!) representation formalism?", I would in turn ask you:
++++++++++++++++++++++++++++++++++++++++++++++++++++++
What else---besides the fundamentally new and universal representational formalism---can take us out of the spatially based mathematics and science and put us on the new rails leading to the development of AI? In other words: What else can take us out of the realm of conventional (spatially based) scientific language?
++++++++++++++++++++++++++++++++++++++++++++++++++++++
However, one thing is clear to me: the incremental tinkering with the conventional formal concepts will continue to lead us nowhere. ;--)
@ Lev: "However, one thing is clear to me: the incremental tinkering with the conventional formal concepts will continue to lead us nowhere. ;--)"
- I cannot agree more.
My main question is: even if there is stagnation everywhere, in every sphere of science and technology and of any intellectual endeavour, how can we be certain that there is an imminent possibility of the proposal and realization of a fundamentally new, universally applicable way out, from amongst us? Isn't it possible that we all, who claim to be doing any intellectual activity, are not up to it? And then some recluse like Grisha Perelman, or some community working in their natural setting away from the Internet-connected "small world", will come up with the solution that is badly needed, say 30 years hence, a solution that we do not even come to imagine until we find them and learn of their solution?
So let us see what is being proposed and let us ponder over it. I am sure we can listen with all seriousness, without first giving our vote that there is a fundamentally new universally acceptable way out already. (Personally I do hope that ETS and its further advance has what is needed in it. My request is to have it discussed without puzzles.)
@Ramprasad: "Personally I do hope that ETS and its further advance has what is needed in it. My request is to have it discussed without puzzles."
I can assure you that the last thing I want is to put puzzles in your way: I dislike puzzles and avoid relying on them at any cost. ;--)
@Johann: "Why are you clutching that tight on the induction? The data representation you are proposing depends in my opinion not on the absolute prevalence of the induction."
I don't "clutch" to induction at all. I'm simply giving the intellectual thanks where they are due: induction has been the key to the development of ETS formalism since the late 1970's.
(By the way, that is why PR / ML are the key areas for the development of AI. Also, the problem of induction might be considered one of, if not the central problem in epistemology and perhaps in the entire philosophy.)
I explained in ROMAS why I emphasize its role in the development of the ETS representation. Basically, without linking the concepts of class and class representation, and without the need to have much more information embedded in the representation---in order to be able to construct, on the basis of a small sample (during learning), a more reliable class representation---the ETS project would not have started in the first place.
Of course, now, one can 'forget' about this original motivation and proceed to collect *any* data in this form. But because this view of data is so radical and we have had no experience with it at all, one still needs some motivation for 'suffering' through this event-based view of data.
Finally, I would like to emphasize that, if I'm still sane, ;--) we do get a view of data objects that we have *never* experienced before. But as always, since you can't get something for nothing, we have to spend some effort developing intuition and tools (first of all, an arsenal of various structured events) in order to reap the benefits.
As to the deduction vs induction, as the father of the formalized deduction, Aristotle, suggested, deduction is 'trivial' but induction is very, very, very, . . . deep. ;--))
I hope you understand that induction = PR / ML
Johann, here is a simple hypothetical experiment to see the role of deduction vs induction in the functioning of our mind. Suppose that you could use some non-invasive equipment to attach to your brain which would every millisecond flash on some screen which areas of the brain you are using, deductive or inductive, as you engage in your daily business. What do you think the ratio of the two would be? I bet probably much less than 1 to 1000.
@ Lev: "because this view of data is so radical and we have had no experience with it at all, one still needs some motivation for 'suffering' through this event-based view of data." I suppose the strongest motivation can come from a simple, layperson-accessible demonstration of practical utility for vexatious practical problems. Such a demonstration will not even require the audience to have (or shed) philosophical commitments. I recall one of the early ETS group papers demonstrating applications in PR and chemistry. However, I am waiting for something like a working model I can use or point to (just like a locomotive or a steamship was a clear demonstration of the "validity" of the science of thermodynamics, or the power of GNU/Linux is a clear demonstration of the power of software freedom).
Moreover, in relying too much on philosophical and/or cognitive science arguments, you invite the danger of contradicting yourself. Even if experimentally you "prove" that the ratio of the extent of the use of "inductive" to "deductive" "parts-of-brain" is 1 bn : 1, that does not "prove" anything about ETS or any other candidate data rep formalism unless you commit to deduction too much here: "'deducing' theorems about data representation based on some empirical data of electrical signals etc. in a brain". At the same time, any adverse empirical data in cognition and neuroscience can easily discredit your data rep formalism unduly. Attaching the brain's preferences to data rep preferences so tightly will be the culprit, not any objective reality.
All this apart from the fact that neuroscientists, based on strong and long-drawn evidence, are challenging the notion of fixed spatiality of anything (even skills such as viewing and use of hand etc.) in the brain.
Therefore, let us avoid stereotypes and just say that induction and deduction are two necessary and complementary paradigms of thinking and argumentation. Furthermore, we can arguably say that in data representation and in computer science in general, induction as a tool is not given its due. It is used only for proofs in theoretical computer science, if at all, and not for synthesis of models. It should be used more for the latter, because it will help us build more versatile, more explanatory, more lasting models. And of course we must demonstrate this last statement by practical examples of solutions to vexatious practical problems simplified due to this new approach.
@ Johann : "I observe more inductive approach in science last century..." : to me it seems deduction was sought to be made foundational by the science community only in the last century, a la Hilbert and Russel et al., and the success of mathematical physics after Einstein 1905 gave a boost to a newfound belief in the Platonical view of the world. Otherwise, science was always inductive; that's why alchemists, for all their shortcomings ultimately were the forefathers of chemistry. The attempts of Platonists did and could not succeed,as CERN and LHC shows: experimental validation is necessary, and no amount of experiments is sufficient. Computer science has been walking on a tightrope holding a very long horizontal pole in each hand : the theory is just a branch of mathlogic, the practice does not care what the theory is. [ Exaggeration, exexaggeration, forgive me... ]
@ Johann: "But knowledge aggregation is not only about try and error, feel and smell and wonder, but also about strong and systematic hypothetical thinking, imagination and synergy... " This balance is what is missing in arguments made by Lev above, while the ETS formalism apparently does not lose that balance. Lev says that "subconsciously you are expecting to see something expressed via the basic formal concepts familiar to you and *this is not going to happen*." But, the ETS papers precisely seek to do the same: expressing a new concept via older basic concepts shared by the audience earlier. Now whether vocal computer scientists, with whom Lev interacts, swear by those older concepts as being "formal" or not, is not the important question here. Nor is the question whether ETS is fundamentally different (and to what extent) and universally acceptable. That can be decided after a couple of more decades.
Therefore, again my request: give us a good tutorial on ETS here. With at least one hands-on application.
@Ramprasad:
1. "Lev says that "subconsciously you are expecting to see something expressed via the basic formal concepts familiar to you and *this is not going to happen*."
Yes, this is not going to happen. ;--))
2. "Therefore, again my request: give us a good tutorial on ETS here. With at least one hands-on application."
This is also not going to happen: the ETS is not yet in the *usual* digestible state. ;--))
Besides, why would I attempt "a good tutorial on ETS here"?
==================================================
Ramprasad, somehow you are still missing the very basics: there is no 'royal' road to ETS, it is a completely different formal language. Besides, you have not asked a single *concrete* technical question about ETS representation.
@Johann, the short answer is: "Petri nets and cellular automata" have nothing to do with the forms of *data* representation. (See Fig. 1.1 in ROMAS.)
@Johann: "I'm trying to comprehend the ETS using in my opinion similar concepts as a baseline I'm a little bit familiar with..."
This is a bad idea. I thought I suggested several times not to do that. ;--)
You cannot reduce ETS representation to anything, although this is what everyone got used to doing anyway. ;--)
Do you think I was not sufficiently serious when I claimed that it is not reducible to anything known? ;--)
@Johann:
1. What do you want to do with fractal structures?
2. What do you mean by " interference processes in complex systems"?
3. The first chapter you are referring to is an introductory chapter for a 2-volume book (and it is already quite long), but the concept of emergence will be explained in the second volume via the emergence of various ETS representational stages (see the first paper under my ResearchGate profile ).
If you want to take my advice, don't waste your time checking ETS against the various artificial paradigms proposed so far, but try to understand the new idea of formative object structure. You will not find its analogues, so you will have to spend some time getting this very basic and conceptually relatively simple idea, at least with the help of the illustration in section 2.9. (Just walk into this new, ETS, 'room' without any prejudices, and then you will see that the time wasn't wasted at all.)
@Johann, It is not clear to me what "complex behavior interference between different cognitive agents" is about. Which kinds of agents are we talking about?
@Johann: As you know, the idea of event is not new at all. What is new is the specific idea of events as structured junctions in the flow of processes, and the idea of looking at objects, processes, agents via their formative histories represented by interacting structs (which are temporal streams of interconnected events).
So, the main idea---using the relevant developmental biology analogy---is that a struct is the formative history of the corresponding object / process understood very broadly (since an agent interacting with an object does not know the object's complete formative history).
In your case, as the agent evolves, it participates in new events, and its formative history is continuously updated.
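A purely illustrative toy reading of this (a Python sketch of my own, not the formal definitions in WISR): an agent's formative history as a growing stream of events, each connected to the earlier events it attaches to:

    # Illustrative toy only: an "event" records its kind and which earlier events
    # it attaches to; the agent's formative history grows as it participates.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Event:
        kind: str                  # the type of event
        connects_to: List[int]     # indices of earlier events this one attaches to

    @dataclass
    class Agent:
        history: List[Event] = field(default_factory=list)

        def participate(self, kind, connects_to=()):
            # the agent's representation is updated by appending the new event
            self.history.append(Event(kind, list(connects_to)))

    agent = Agent()
    agent.participate("comes_into_being")
    agent.participate("encounters_object", connects_to=[0])
    agent.participate("modifies_object", connects_to=[1])
    print([(e.kind, e.connects_to) for e in agent.history])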
Unfortunately, the answer to most of your questions is "NO". With our tiny resources this has not been possible yet, especially since I focused on the conceptual integrity of the formalism. However, you may want to check also with one of my former PhD students, prof. Dmitry Korkin here at ResearchGate: https://www.researchgate.net/profile/Dmitry_Korkin/
Presently any *serious* ETS application will require some conceptual adaptation and is at the level of a good Masters or PhD thesis.
@Johann:
1. "In RAMAS p. 7 you write "So the true scientific revolution—associated with the transition from the quantitative (numeric) to the qualitative (structural) description of Nature—is still ahead." Do you see the object structure equals to the object quality? So for example, such semantics like "this object is useful for something" you would assume encoded in the origin class of the object?"
You see, the basic idea is that the ETS representation is supposed to be the only one we may need to talk about anything, including "quality" which is a very fuzzy concept.
But as far as any semantics is concerned, I do expect it to be clarified by the ETS class representation. In fact, I'm trying to get some investors to help me build a fundamentally new kind of search engine, dealing with the 'semantic' side of search.
2. "What about "very long" object histories? How one is supposed to determine the search? Is it process then not the same like indexing and running through all the event steps by natural numbers like shown in the Figure 2.2? Or is the branching then very crucial for understanding/identification of the class representation?"
The length of a struct is handled via the representational stages, which collapse long structs into next-level structs (see part IV of the main paper in my profile, "What is a structural representation?", WISR).
However, the concept of class representation is not simple (see part III of WISR). But this concept is very powerful: for the first time, when a class generating system (CGS1) is constructing a member of the class, some of its constraints allow some events from another CGS (CGS2) to intervene in this object construction process. This is a very important feature.
Johann:
1. "As argument there is often the one: "Look at quantum physics - the world is discrete." But there is also an argument from the science branches using holistic points of view, that there is no discrete states at all or, at least, there are continuous processes that are inseparable in pieces and that processes in the nature are "continuous flow", steady functions etc., for example, the perspective on human perception in gestalt psychology."
ETS clearly suggests that there is no contradiction at all between the holistic, or gestalt, view and the discrete representation: the gestalt view is the result of the dominance of the overall object structure. (By the way, this view is much closer to the original view of gestalt psychologists.)
2. "What about a continuum, spectrum? How are such entities addressed in ETS?"
At this time I can only see the dominance of discrete / structural. It appears to me that the role of continuous in science can be explained as based solely on the dominant role of continuous formal models.
Note that if it turns out that the millennia-old *spatial* considerations (including the introduction of the square root of 2) are not central to information processing, as I have been suggesting in ROMAS, then the above dominant role of continuity will be no more.
@Johann:
1. "ETS seems to provide a very strong affinity to causality. Is there no coincidence at all from the ETS perspective?"
I'm not quite sure exactly what you mean, but it appears to me it is too early to address these issues from ETS perspective. ETS suggests that there might be much less randomness than was conventionally supposed.
2. "Can you give anyway a hint toward [Emergence + ETS] as well?"
The "emergence" might be associated with the emergence of new representational stages (see part IV of WISR), when the next level structural constraints come into play.
@Johann:
1."The necessary dimensions to cover would be in my opinion: discrete - continuous, coincidence - causality, deterministic - endless, precise - fuzzy, inductive - deductive etc. There might be also other important dimensions that MUST be addressed either by clear preference to the "one pole" of the dimension or a proof that there is no dichotomy at all within the concept itself. "
I believe you are approaching the question of representational formalism in a way biased by the many issues that have been brought up previously, but without an eye on the idea of structural representation. On the other hand, the related issues *should* be resolved within a representational formalism which claims to be universal. But the overriding issue initially is to understand whether the logic and the structure of the formalism are "universal". One should not put the cart before the horse.
2. "Another important thing is the applicability (mapping) of the concept with a lot of examples regarding everyday's objects/processes..."
This, one should be able to see from the structure of the formalism. As far as ETS is concerned, events are the only ubiquitous reality. ;--)
Above all, one must understand why representational considerations should become central in AI, including ML / PR, if we are to see a substantial progress in the field.
I hope the semantic representation of data will be "that" thing that changes the conventional models of data storage. We all understand everything by its definition or meaning: an action, an object, a relationship... The physical attributes of something are really important, but we always evaluate what they mean for us.
@Yuniel, You are quite right: I expect the proposed structural representation (the struct) to clarify the nature of 'semantics'. This representation---based on the formative object structure---suggests that an object's 'semantics' is directly related to the formative object history *as perceived by the agent*. In particular, an agent does not have identical processes for representing an object made by the agent (e.g. a box) vs an object which was not made by the agent (e.g. a tree, or a star). However, the form of representation is the same, i.e. the ETS struct.
@Lev:
I'm actually working on simulating crowd behavior using agents, and one important piece of my research is to incorporate semantics and semantic reasoning.
@Yuniel, here you are: you can try ETS. But since it is not a standard tool, you should be prepared to invest some time (not that much) in trying to understand it. ;--)
@Lars: ETS has nothing to do with Lojban project. It is not a language proposal but a representational formalism (i.e. something like a new math. language for science).
When Goethe was in Italy he began formulating ideas that would lead to the development of a new science of his devising, morphology. Initially, he postulated an Ur-Pflanze, which would be the archetype of all plants. He believed that with this ideal structure in mind, he would be able to determine the basic form of both those plants that actually existed and those that possibly could exist. Goethe’s ideas came to public fruition in the publication of his Metamorphose der Pflanzen in 1790. In that small treatise, he argued that the various parts of the plant—the stem, leaves, petals, sexual organs, and seeds—could be understood as transformations of one elemental structure, which he symbolically represented as a leaf—or, as he expressed it, “the leaf in its transcendental aspect.” He applied this fundamental conception also to animals. He argued that animal form had two features: an inner kernel and an extrinsic deformation of that kernel. The inner kernel consisted of a topological arrangement of parts, and the deformation resulted from an external accommodation to the surrounding environment. Thus the skeleton of the seal, for example, exhibited a topological pattern of bones shared with land animals, but also displayed particular deformations extrinsically adapting the animal to its aquatic environment. According to Goethe, who was much influenced by Spinoza’s conception of adequate ideas, the archetypal pattern of the vertebrate was an idea actually resident in nature. Moreover, the archetype of the vertebrate, in this scheme, was a force productive of the organism.
source:
http://books.google.ca/books?id=UuXLx5l5fUIC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
Louis, thank you for this contribution.
Of course, Goethe is always Goethe, even in his scientific attempts. However, we must realize that, at that time, it was humanly impossible to achieve what he actually set out to do, especially given his---at that time unavoidable---bias ( "Goethe placed great stock in the significance of visual images for both the advancement of science and the conduct of life" p. xxvii of your source).
It appears that the unprecedented nature of the scientific transformation we are facing now---in order to integrate mind into the scientific picture---is related to the need to put our science on new, *non-spatial* (hence non-visual), rails, while it has been moving along the spatial rails ever since its emergence.
Lev,
Any expression of the world through any language is necessarily spatial. The world is evolving, it is changing. It can be expressed through form transformations (morphology). Any concept of time is inferred from change in forms. I do not believe in the spatial concept of time. All that exists is change of form, and this concept of form transformation cannot be formalized through a single parameter t; it can be formalized as form transformation. All the particles of the standard model are generated through a sequence of symmetry-breaking events. Evolutions in general are not taking place in time. Time is not a stage where things happen. Things happen, which allows us to define time. Change defines time.
Since language is fixed, it can only describe the result of change, the transformation of forms; so language has a tendency to make us see the world as platonic. Language describes what has happened; it cannot describe what is happening. Bergson was of the opinion that the rational/intellectual/language side of our Mind is by necessity spatial and that the creative/inductive/living/duration side of our Mind cannot intrinsically be reduced to a spatial metaphor. That is why he struggled so hard to express this in writing in all his books. Bergson was following the spiritualist philosophical tradition that began with Maine de Biran and Ravaisson. Ravaisson was influenced also by Schelling, who was influenced by Goethe.
Louis, I'm not sure what you are suggesting. Please take a look at http://www.cs.unb.ca/~goldfarb/BOOK.pdf
for the context of my original question.
Lev,
"In light of the evolving nature of the Universe, in the proposed formalism, we postulated that the
‘true’ structure of an actual object oj (in Fig. 1.1) is to be understood as the object’s ‘formative
structure’, where the latter is directly related to the object’s ‘formative history’ and captures its
formation process. "p 6.
The concept of formative history was central to morphology as conceived by Goethe and later developed in mathematics. My comments about time is to emphasize that formative history does not need a prior concept of time.
For a clearer expression of my thoughts in the specific context of visual perception:
http://www.collectionscanada.gc.ca/obj/s4/f2/dsk1/tape8/PQDD_0023/NQ51844.pd
- Louis
Louis,
1."The concept of formative history was central to morphology as conceived by Goethe and later developed in mathematics."
It has never been "developed in mathematics".
2. "My comments about time is to emphasize that formative history does not need a prior concept of time."
I agree, if by "time" you mean a spatial dimension associated with it. But still we have to understand what temporality in Nature means. After all, formative object structure is the 'expression' of temporality.
Louis: "The concept of formative history was central to morphology as conceived by Goethe and later developed in mathematics."
Lev: It has never been "developed in mathematics".
-- This is hard to believe. Perhaps not as developed or followed as the spatial concept of time, but, yet, thinking in non-spatial terms is as fundamental (and natural) as the spatial one. Einstein would not have conceived of his 4-D model of space-time and relativity had there not been some seedlings of the non-spatial concept of time available from earlier times. What do we make of the tensors? Perhaps, the overemphasis on formal logic that began towards the end of the 19th century and the unprecedented rise of the centrality of institutionalized science associated with the same period makes us believe that thinkers always thought the way institutional researchers express and publish today. Even these researchers do not think the way they express and publish, do they?
Lev,
I think that we agree on time.
What do you think of the mathematical approach of René Thom's book, Structural Stability and Morphogenesis?
Group theory allows one to describe structural concepts. Quantum theory is based on that and on the concept of symmetry breaking.
The Foundation of Ernst Haeckel’s Evolutionary Project in Morphology, Aesthetics, and Tragedy
http://home.uchicago.edu/~rjr6/articles/Netherlands.doc
"Einstein would not have conceived of his 4-D model of space-time and relativity had there not been some seedlings of the non-spatial concept of time available from earlier times."
Ramprasad, this is incorrect. Despite some non-spatial *intuitive* elements of such understanding, the formal machinery does not give much in this respect. "Tensors" have nothing to do with the non-spatial understanding. You should take much more seriously the central role of geometric (and hence spatial) considerations in mathematics.
By the way, I don't know how you can talk seriously about non-spatial consideration when discussing a 4-D *space*. ;--)
Louis, I have Thom's book. But despite some interesting observations, Thom has been following the main line of modern math.
The role of group theory in physics and the concept of symmetry are based on the 'geometric', hence spatial, considerations.
As to your .doc file, one should understand that all this talk has been just talk, it's not science. To set science on new rails one needs to offer a new formal non-spatial (informational) language that can clearly suggest which side of 'reality' we have been missing so far.
Lev,
1. Any form of language is necessary spatial in nature. You invented a language to describe process in nature, but your language is spatial like any language.
2. Basil Hiley is a physicist who works towards a process theory of physics. He is using a Clifford algebra in order to describe the fundamental process from which space-time emerges. Do you think that his formalism is appropriate?
3. "The role of group theory in physics and the concept of symmetry are based on the the 'geometric'". I think that it is the other way around. A geometry is defined as a group of transformations.
4. Any language is an implicit theory of the world. Is there a similarity between your process language and Whitehead's process philosophy?
Louis,
Obviously you have your views, and for some reason you haven't read *carefully* my popular introduction to our formalism ( http://www.cs.unb.ca/~goldfarb/BOOK.pdf ). I know that because you have not brought up the main issue of structural object representation.
I can't do better than I have already done in that introduction (including the example in section 2.9). The only thing I should mention is that the source of your and others' difficulties has to do with the fact that the issue of a fundamentally new form of representation has never been brought up in science, including math.
As to Whitehead's process philosophy, I believe that ETS is the representational formalism closest to it in spirit.
By the way, the best way to understand the central role of spatial considerations in math. is to trace the history of math. beginning with the ancient *measurement* practices (which I will do in my book).
Lev,
I have briefly looked over your work. Then I stepped back, and the above questions of a general nature came to me, and I decided to probe your mind. I want to be proven wrong, but my feeling is that it is possible to express transformations, sequences of changes that occurred in the past, but not the process going on making these changes. Thank you for your answers.
Louis,
I didn't understand this:
"I want to be proven wrong but my feeling is that it is possible to express transformations, sequence of changes that occured in the past but not the process going on doing these changes."