I do not know if this is one aspect of an answer to your question, but there is a meta-pattern to the semiotics of conceptual value-fields. See my article
Article A post-structuralist revised Weil–Lévi-Strauss transformatio...
My understanding is that "Natural Language" - the spoken word - is best supported by Natural Language itself. There is no meta-language, as there is for programming languages. And I have created software https://github.com/martinwheatman/enguage which demonstrates this principle. This approach may not be as crazy as it seems: the philosopher Iris Murdoch (amongst others) talks about the inner voice - that we construct our thoughts, our world, in language. Biologists have no difficulty with self-generating, information-bearing systems (e.g. cells). But it is hugely disruptive to the computing industry (just what entrepreneurs say they’re looking for!)
The Church–Turing thesis, which has been super-successful for over 80 years, says that all programming languages are equivalent, and the idea is that you get a translation to a language which is supported mechanically, e.g. on a Turing Machine, or on silicon. So people looking to support NL, such as Cucumber and Alexa, implement “meaning” in snippets of programming language, e.g. Python or JavaScript. This is fine until you want to create language, to adjust your world view, through speech: then you have to have a program which writes code from speech. I’m not saying that writing can’t be used to represent speech (and I do have about 30 written ‘repertoires’), and hence many suggest that I’ve simply got a new representation of programming languages (not a semi-colon in sight!), but when the ‘program’ is only ever spoken, my work shows that it is language which is the Universal Machine.
If you're looking for a 'mathematical' underpinning for this work, it is described in the late 19th century by Charles Sanders Peirce in his Semiotics, and is used in Ogden and Richards's "The Meaning of Meaning", published in 1923. Peirce’s Sign can be found in the approach of Pragmatists such as Charles Morris, J. L. Austin, John Searle, and Paul Grice.
Dear James B. Harrod, I read your article, and it seems promising for a meta-pattern to the semiotics of conceptual value-fields, but do you think, based on your knowledge, that "meta-pattern" leads to "language-pattern"?
Dear Martin Wheatman, I agree that "Natural Language" is best supported by Natural Language itself. In other words, the laws (yet to be found) are supposed to be in the structure of the spoken language itself. However, the fact that the promising laws have still not been found does not mean they do not exist.
In your software creation, I assume you have built it based on rules; the software therefore reflects the rules you put in, and if you change the rules, the software changes as well.
For Charles Sanders Peirce in his Semiotics (and others alike), those researchers require changing the language structure, which will therefore change the spoken language; the obligation of teaching the new resulting language publicly therefore becomes a bigger problem than the current NLP framework itself.
I guess it depends on how "language-pattern" is defined and at what level of communication. It is at least one dynamic structure that in-forms selection of topical elements (ideas, values, Greek topoi) at the narrative, and perhaps the discourse, level of spoken and written narratives and discourses. This was admirably demonstrated by the anthropologist of myth, ritual and art, Claude Lévi-Strauss. I suggest that for Derrida the meta-pattern is at least one dynamic structure that informs what he describes as the "graphematic" structure of writing (absence, reiterability and différance), which structure Derrida asserts overdetermines speech and, in general, all kinds of signification.
With respect to the 20 or so neural network loci for language (spoken and written), I suggest that neuroimaging studies might help answer your question and facilitate differentiating components of "language-pattern".
I used to differentiate the language modules according to the attached table, but as the highlights on it indicate, it needs extensive revision. In that outdated table I allocated the L anterior temporal (BA 38) as a hub for narrativity (event sequencing, stories, etc.); R BA 38 for discoursivity (intertextuality (Kristeva, Derrida), speech acts (Austin, Searle), meaning-effects (Gadamer, Iser)); and the graphematic structure to R BA 44/46. I have more recently summarized neuroimaging studies that might be especially relevant for identifying the neural nodes in a network associated with applying the revised Weil-Levi-Strauss formula (rCF), which is a supplementary file to my published paper (attached).
You can find “the law” in https://en.wikipedia.org/wiki/Triangle_of_reference#/media/File:Ogden_semiotic_triangle.png This shows that the link between a SYMBOL and a REFERENT, a.k.a. reference object (or, as computer scientists put it, a “name” and a “value”), is always through a THOUGHT or REFERENCE. Semioticians talk about a ‘functioning sign’; this diagram shows (“the law”) that ‘everything is interpreted’. I hesitate to say that everything is a function because this is a loaded phrase. In Computer Science, the link between a “name” and a “value” (say, a memory address and the value it contains, or a function, its parameters, and their return value) is atomic. This can be seen in Saussure’s notion of a sign as a direct value https://en.wikipedia.org/wiki/Course_in_General_Linguistics#/media/File:Tree.gif
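The contrast between the triangle's mediated link and computing's atomic link can be sketched in a few lines. This is purely my own illustration (the names, contexts and values here are invented for the example), not anyone's actual implementation:

```python
# Direct, Saussure-style binding: a name maps atomically to a value.
direct = {"tree": "the-oak-outside"}

# Ogden-Richards-style binding: the SYMBOL reaches its REFERENT only
# through a mediating THOUGHT, modelled here as the interpreter's context.
def interpret(symbol: str, context: str) -> str:
    """Hypothetical 'functioning sign': the same symbol resolves to
    different referents depending on the interpretation applied."""
    thoughts = {
        ("tree", "garden"): "the-oak-outside",
        ("tree", "computing"): "a-branching-data-structure",
    }
    return thoughts.get((symbol, context), "no interpretation")

print(direct["tree"])                  # atomic: always the same value
print(interpret("tree", "computing"))  # mediated: depends on the 'thought'
```

The point of 'everything is interpreted' is that the second lookup always passes through the mediating table, where the first never does.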
[I assume you have built it based on rules]
No, not rules as such, which might suggest there is some axiomatic representation. It simply arbitrarily maps an utterance or phrase onto a reply, based on further phrases or utterances and their replies. If there is a mapping, it returns the reply; if not, it is ‘infelicitous’, as the Speech Act community would say, and it replies that it doesn’t understand. So if I say “hello” it may reply “hello to you too”. This can be seen in https://youtu.be/sw8T-mg86M0 If this is a ‘rule’ then it is one based on an English-speaking society, rather than some axiomatic representation. If I say “bonjour” and there is no mapping for this phrase (it is inconsequential that it is French!), then it will say “sorry, i don’t understand”. The felicity is often encoded in the first word: “ok, …”, “Yes, …”, “sorry, …” etc.
The problem with technology like Alexa, Watson or Cucumber is that they associate an utterance with some program, which is essentially a direct mapping. If you think I have created a purely functional language where the only data type is an utterance or phrase, then I’m happy with that. This does suggest it is programmed, which you might think involves some written artefact, such as a program; but this can also be represented by a phrase or utterance https://youtu.be/zqp-Z_OtCp0 . In my paper, https://www.researchgate.net/publication/326140012_Unifying_Speech_and_Computation I show how this can achieve mathematical functions, and an example is freely available in the source code.
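To make 'the only data type is an utterance' concrete, here is a toy sketch, entirely my own invention (not Enguage's syntax or the paper's example), in which Peano-style addition is achieved purely by rewriting one phrase into another until no rewrite applies:

```python
import re

# Hypothetical phrase-rewriting rules: computation expressed only as
# utterance-to-utterance mappings, with no numeric data type at all.
RULES = [
    (r"^add zero to (.+)$", r"\1"),
    (r"^add the successor of (.+?) to (.+)$",
     r"add \1 to the successor of \2"),
]

def interpret(utterance: str) -> str:
    """Repeatedly rewrite an utterance until no rule applies."""
    changed = True
    while changed:
        changed = False
        for pattern, template in RULES:
            rewritten = re.sub(pattern, template, utterance)
            if rewritten != utterance:
                utterance, changed = rewritten, True
                break
    return utterance

# "2 + 0" spoken as phrases rewrites, step by step, to "2":
print(interpret("add the successor of the successor of zero to zero"))
# -> the successor of the successor of zero
```

Nothing here is a number; the 'value' of the computation is itself just another phrase, which is one way of reading the claim that speech can be the Universal Machine.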
[those researchers require changing the language structure]
Not at all, because the mappings are arbitrary - anything can be said. Speech is the Universal Machine!
Dear Martin Wheatman, your reply seems to be more practical; however, my question is somewhat hypothetical and theoretical.
For [those researchers require changing the language structure], for example, think of "Lojban":
https://simple.wikipedia.org/wiki/Lojban
Researchers who developed it did great (a rule-based language with a very powerful logic, and therefore a mathematics); however, introducing a new language (with the obligation for it to be taught publicly, in order to be used) comes with a bigger task than the current NLP task itself.
I enjoyed your paper, Hans Goetzsche, on the rejection of the term meta- when talking about something, although meta-language does come in useful when talking about the definition of a language by syntax (e.g. Backus–Naur Form). But I would argue that this is essentially a Homuncular Argument (reductio ad absurdum?): representing something in a simpler form in the hope it will be more understandable.
(Incidentally, the notion of 'mafia' can also be used in English in an ironic manner when talking about a 'clique' - a [self-selecting?] group of [essentially good] people, e.g. the Downing Street Mafia)
Dear Martin, I'm happy that you enjoyed my paper. I do not want to abandon the 'meta' expression; only point to less serious kinds of use of the word. Tarski's technical term 'meta-language' is, of course, to be taken seriously. #Hans
Is there a pattern in natural semantic meta-language? In response to some phrases among the stimulating replies:
* “We construct thoughts, and our world in language.” But we also act and react in all media much of which is hard-wired. The world, thoughts, perception, matter-energy interchange, and meaning itself, give effect to archetype.
* “There are self-generating information systems in cells.” But in bio-chemistry, the medium is the entire message (as Marshall McLuhan found to some extent in cultural media). Even RNA variants are more than mere messengers. Bio-chemistry speaks in natural numbers and with natural motivations, the closest dynamic structure to archetype of which we know. Grammar in coding DNA is the physical, mechanical logic, and grammar in non-coding DNA is the canvas or substrate or context or language, including Zipf frequencies among optional codons (Stanley, Tjian, et al, Boston).
* “If you want to create language…” But we could only re-format a code in various media, using their optionality layers, such as allocation of limited sound patterns, to limited meaning patterns. There is only a vague tendency to allocate certain sound combinations to certain meanings (Blasi et al).
* “To adjust your world view.” But that is impossible. Never did, never happened. Inter-translatability of languages, and diplomacy, is part of the evidence of hard wiring of media and messages.
* “Spoken language is the Universal Machine.” But speech relies more heavily on environment, situations, assumptions, prejudice, irony, threats, posturing, etc. Best look at all media in concert, particularly behaviour, for underlying code.
* “Language pattern is at least one dynamic structure that informs selection of topical elements, ideas, values…” But diction and grammar are relatively neutral compared to other media, and serve other media. Best look for meaning in behaviour.
* “Derrida’s meta-pattern… over-determines speech, and all kinds of signification.” Many considerations over-determine speech.
* “The 20 or so neural network loci for language… L anterior temporal BA 38 for narrative; R BA 38 for discourse, inter-textuality…; R BA 44/46 for speech acts, meaning-effects, graphematics structure… neuro-imaging studies may identify neural nodes in a network.” But brains perceive, process, store and retrieve systematically. The number 'about 20' piques my interest. I expect some brain regions to reflect the archetypal structure that I have demonstrated in subconscious behaviour, consisting of five layers; an axial spatial grid (to which I allocate the temporal hippocampi lobes, see Furter 2018; Stoneprint Journal 4, Stoneprint tour of London. Taxi drivers exercise the knowledge, p5. Lulu.com); bearing 16 types of about nine optional features, each expressed at fixed average frequencies worldwide, and 4 transitional sectors, and two semi-polar markers, and two ‘galactic’ crossings, totalling a typology of 27, of which three are probably not near the surface, thus 24 in periphery; and the axial centre, and two near-central polar markers, which are the only features that may mutate in relative position.
* “The revised Weil-Levi-Strauss formula (rCF), supplement to my paper.” I cite Harrod’s equations of stock characteristics and relationships, and offer its larger archetypal context, see Furter; Blueprint, here on Researchgate, and in a slight update in Furter; Blueprint on www.edmondfurter.wordpress.com The numbers and rules you seek may lie in this revival of structuralist anthropology.
* “Semiotic triangle... everything is interpreted.” But the sensory filter is already developed in Plato’s cave analogy.
* “Saussure’s ‘sign’ as direct value by rules?... Or arbitrarily maps an utterance or phrase onto a reply… based on further utterances and replies.” But context of inherent content, experience, environment, expectations, etc, is not mainly cumulative, not mainly learned, and thus not evolutionary.
* “Mappings are arbitrary, anything can be said.” But limited by context, situation, convention, intelligence, ultimately archetype.
* “A rule-based language… but adopting is a bigger task than current NLP.” Yes, language enables socio-economic appropriation and exploitation. Like law, it is a donkey carrying compromises of optional necessities.
* “Speech is the Universal Machine.” But archetype enables several media. Speech is the best to abstract or project or translate meaning into conscious ego logic, requiring the provisions and definitions that Harrod lists above, as warnings. Formulae of conscious meaning would probably not use archetypal elements such as number, direction, pairing, mutation; or functions such as equal, complementary, etc. Artificial intelligence, AI, or rather artificial behaviour, could approach human behaviour by finding the factors and parameters of projection from archetypal logic, to subconscious logic, to conscious logic.
* “Mappings are arbitrary, anything can be said.” But limited by context, situation, convention, intelligence, ultimately archetype.
Yes, but there is no direct (algorithmic) relationship between environment and what can be said - that lens (your 'limitation') is provided by your interpretation - the 'world' you construct.
Daniel Chandler talks about whether language surrounds ideas (a cloak) or whether language supports ideas (a mould). My software works on the principle that words are ideas, depending on that lens: the 'lens' is constructed with words (noting that, as Ogden and Richards point out, words don't 'mean' anything in themselves!).
There is one caveat: we extend belief to the speaker until they say something negative - something we don't understand, or something that is contrary to our beliefs - at which point we need to express something ourselves: "now hold on", or "sorry, can you repeat that". This is known as 'felicity' by Austin, or 'adequacy' in the Ogden and Richards Triangle, and it directs the flow of conversation (which is how I model conversation in Enguage). I'm thinking of encoding it, "yes, …", "ok, …", "sorry, …" etc., but it becomes stilted.
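The idea that felicity is carried by the first word of a reply could be sketched as follows; the marker words here are my own guesses at one possible encoding, not Enguage's actual list:

```python
# Hypothetical felicity markers: "ok, ..." / "yes, ..." signal a
# felicitous exchange; anything else ("sorry, ...") signals a breakdown
# that redirects the flow of conversation.
FELICITOUS = ("ok", "yes")

def is_felicitous(reply: str) -> bool:
    """Judge felicity from the reply's first word alone."""
    words = reply.lower().replace(",", " ").split()
    return bool(words) and words[0] in FELICITOUS

print(is_felicitous("ok, I've added milk to your list"))  # True
print(is_felicitous("sorry, can you repeat that"))        # False
```

As noted above, making every reply start with such a marker quickly becomes stilted, which is presumably why natural conversation leaves most felicity implicit.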
But I think this simple 'truth' is what a Human Life Programming Language should really be looking for as a mathematical pattern.
The conscious logic assumption that language 'constructs' meaning, or could do so, is over-rated and egotistic. Nature and culture re-express archetypal structure, from the periodic table to species to ecosystems, all setting bounds to what we are likely to be, do, and say. Insurers and biologists and statisticians know that the average is rarely shaken.
To semiotics, I propose that the core content of meaning is revealed by individual and collective recurrent behaviour. I have drafted a table of the known recurrent and thus archetypal features in art and built sites (supported by myth, iconography), in my paper Blueprint on www.edmondfurter.wordpress.com where I welcome your comments.
Edmond Furter, I haven't had long to study your method, but your analysis of 'archetypal structure' through radial graphs is interesting; however, the graphs are very visual, and analyse 2-dimensional artefacts: paintings, maps and tables.
For me, this raises two questions: firstly, does this make such analysis inaccessible to people with limited or no sight; and also, how does the perspective of the viewer affect the analysis? Further, the edges between objects seem to be attached at the eyes. Is this significant - ocular-centric?
Martin, archetypal structure enables all media in nature and culture, including 2D media such as art and built sites, and 3D media such as myth (characters interacting in episodes, with landscape playing a part, see my magazine edition on the Hercules myth cycle in the Peloponnese and in later Greece, on www.stoneprintjournal.wordpress.com). The structure is revealed by finding the axial grid between the focal points of the characters, always the eyes (ocular grid), except for two characters, where the focal point is the chest (heart), and midriff (womb). Between particular axes lie three polar axles, forming two mirrored triangles, their corners on junctures of limb-joints, not eyes. I found this structure, I did not invent it. The evidence is overwhelming that this structure enables meaning itself.
In one paper I used a radial graph to demonstrate that each of the twelve to sixteen types express certain features, and that these features each have their own average frequencies. The radial graph is a features frequencies graph. I will not use it again, since it is easily confused with the spatial pattern of the axial grid.
Perspective of the viewer does not affect the spatial structure, since it is inherent in the spacing of the focal points. Any oblique view would equally distort the plane and the spatial structure. There are five layers in the structure, mutually confirming one another; certain optional typological features of fixed average frequencies of expression; the peripheral sequence of the characters expressing some of these features; the axial grid between their focal points; three central axles between certain junctures; the implied time-frame or Age of one of the three central axles (usually in Age prior to the work). The entire 'grammar' of culture is as subconscious and innate as the Periodic Table of elements and their potential isotopes and chemical properties are. Thus my proposal that my evidence reveals the structure of meaning at the human ecological scale. We operate by certain laws that we did not invent or develop. I list the known features of archetypal structure in my paper Blueprint. I expect that AI technologists could programme these features and layers into a semiotic model. The key, and perhaps the answer to your question, is to distinguish between the core structure, and works such as a myth or ritual or calendar as an expressed variant. The minimum of meaning is about 60% of currently known features (finding more features of lower frequency would probably maintain this average).
Some of our allotted media allow various optional features, for example the sound-allocation layer in language, enabling apparent difference and tribal bonding to delimit co-operation v competition. I am working on a paper to expand on the potential of using discovery of the core content of culture to alleviate perceived culture shock, which is due to globalisation born of over-population, already partly dramatised and internalised in our myths of wars, Towers of Babel, and histories of empires v patriots. There is only one culture, but we have vested interests in sustaining the fiction of cultural differences. Anthropology had inherited the early linguistic fictions that are difficult to overcome, despite Chomsky and Jung. Levi-Strauss had the right approach, but did not find the formula.