"From science to law, from medicine to military questions, artificial intelligence is shaking up all our fields of expertise. All?? No?! In philosophy, AI is useless." The Artificial Mind, by Raphaël Enthoven, Humensis, 2024.
No French here, please.
As I said in the other topic, there is no limit for genuine AI.
Some talk about it, but have they reached it?
Jamel Chahed,
I would say that to answer this question we must break down the problem into components on which to base the premises:
1) Is the current AI capable of becoming a philosopher?
2) Based on our current direction in the field of AI, is it feasible to develop algorithms that can enable such a leap?
3) From a computer component perspective, is the current development of components the best direction towards developing a philosopher AI?
4) Will a machine never be a philosopher?
Answer to 1: It should be evident that the answer to the first point is no. Transformer architectures are not designed to tackle this job. Some might say they are, but take away RAG (retrieval-augmented generation), which is in essence search-engine technology doing the work of constraining the answers (or the user's further prompts supplying the context that would be evident to a human), and little remains.
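To make the RAG point concrete, here is a minimal sketch (my own illustration; `retrieve` and `llm_generate` are hypothetical stand-ins, not any production API) of how the retrieval step, rather than the model, constrains the answer:

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The retriever, not the language model, constrains the answer:
# 'llm_generate' is a hypothetical stand-in for any LLM call.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM; here it just echoes the grounding context."""
    return "Answer constrained to: " + prompt.split("Context:\n")[1]

corpus = [
    "Enthoven argues a machine will never be a philosopher.",
    "Transformers are trained to predict the next token.",
    "Kittens are young cats.",
]
query = "What does Enthoven claim about machines?"
context = "\n".join(retrieve(query, corpus))
prompt = f"Question: {query}\nContext:\n{context}"
print(llm_generate(prompt))
```

Strip out the retrieval step and the model is left to generate from sequence statistics alone, which is the point made above.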
Answer to 2: The answer to the second question is more interesting, but I would still argue that it is a no. As long as we focus on trends, the outcome is negative. As long as we are bound to academic perspectives (reducing problems to piecemeal sizes and not focusing on long-term projects) or corporate perspectives (if there are no results in 4 years, we abandon it), it is still a no.
That being said, assuming we overcome such limitations, there is the question of how to solve the problem. We cannot solve something we do not understand, nor something for which we lack the correct question to begin with. Is the answer to building such an AI based on the assumption that philosophy is just giving an algorithm a bunch of data, minimizing the error, and producing permutations of the data? (I leave this open to debate to see what happens :-).
Note 1: It is only after answering this question that we can even start to go into the subject matter of philosophy, how to carry it out, and whether it is computable or not, and then whether an AI can replicate that computational process in a physical system!
Answer to 3: This question is geared towards the new trend based on GPUs using SIMD (single instruction, multiple data). Is this the best approach to solving philosophical problems? Consider that our current computational infrastructure (besides GPUs) is based on sequential instructions (access to disk drives, peripherals, etc.). Is the current computational landscape even capable of consolidating this into a coherent computing architecture? (And then there is backwards compatibility, to mention one more hurdle.)
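As a rough illustration of the contrast (my own sketch, with NumPy's vectorized operations standing in for GPU-style SIMD execution):

```python
import numpy as np

# Sequential style: one instruction per element, as on a classic CPU path.
def scale_sequential(xs, w):
    out = []
    for x in xs:          # each element handled in its own instruction stream
        out.append(x * w)
    return out

# SIMD style: one instruction applied to many data elements at once.
def scale_simd(xs, w):
    return np.asarray(xs) * w  # vectorized multiply over the whole array

data = [1.0, 2.0, 3.0, 4.0]
assert scale_sequential(data, 2.0) == list(scale_simd(data, 2.0))
```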
Answer to 4: Before answering the question, it is worth pondering why I even brought up the first three questions to begin with. We cannot forecast if we do not have a starting point. Basing an answer on mere imagination is not scientific. Starting from what is currently available provides a grounded perspective to answer the question for the next few years at most, but based on data. Also, this is not a mathematical problem. Most people will cite the universal approximation capabilities of machine learning algorithms to say yes. This is wrong: basing an answer on infinite nodes or infinite time is silly at best. Good for a mathematical paper but worthless for the reality of physical constraints (and this is a question about realizing a physical machine (a cyber-physical system) that will run computer code bearing the label of 'AI').
Finally, stating a never without a time frame is the same as proving universal approximation. Good speculation but not concrete. I will answer that for the next 10 years it is more than likely a no. Beyond that, we must wait 10 years and retake the question for the subsequent 10 years.
Note 2: Post the question to ChatGPT right now to see if it comes up with anything closely resembling this analysis without feedback to narrow its response.
Regards
Arturo Geigel Thank you for launching the topic with this excellent contribution. I particularly appreciated the ending where you wrote "Finally, stating a never without a time frame is the same as proving universal approximation. Good speculation but not concrete. I will answer that for the next 10 years it is more than likely a no. Beyond that, we must wait 10 years and retake the question for the subsequent 10 years."
Wolfgang F. Schwarz
Thank you for this pertinent point of view. You wrote "One of the crucial problems stays to prevent AI models from hallucinations. This malfunction is heavily contrary to the complexity of ethically based human creativity in philosophical thinking and writing". Don't you think that AI would, one day, be endowed with "Hallucinative Faculties" artificially and sufficiently intelligent so that they can no longer be characterized as dysfunctions contrary to the complexity of human creativity and ethics?
Dear all,
I would like to add something to this response by inviting all to reconsider Kant's argument regarding people who lack the capacity for judgment—or, as he put it, the "stupid" (or “minor”). It appears that his idea remains the fundamental underpinning of current debates between internalist and externalist philosophers of mind.
I believe that Kant's canon can be effectively used to argue against attributing intelligence to machines. This is based on two key theses that I will simplify to spark a discussion: first, judgment cannot be learned through formulas; second, a judgment represents the synthetic unity of cognitive content, not just associative patterns (KrV B 141). Together, these theses highlight the difference between human intelligence and machine intelligence.
“For this unity of consciousness would be impossible if (…) the mind could not become conscious of the function by means of which this manifold is synthetically combined into one cognition.” (Kant, KrV A 108/109)
Kant would concede, however, that computers and even people incapable of making judgments may successfully produce algorithmic instructions to learn to agree or disagree with 'p' under conditions that are the same as those experienced by people with the faculty of judgment. He doesn't delve into the subject of what sets apart individuals who can articulate the content of their beliefs with self-awareness from those who have simply been taught to mimic others who understand it. He never tries to characterize the singularity of someone who has simply been taught to mimic those who possess such knowledge through algorithms.
These first intuitions might be developed in several ways. By linking the capacity to judge with the capacity to create norms for oneself—a capacity that is consistent with practical reason and freedom—Kant himself advanced it toward a practical and moral philosophy. In his brief book What is Enlightenment?, the author seems to imply that maturation, or a cognitive autonomy attained via reflective freedom, is connected to the capacity for judgment.
All of this keeps us in the dark regarding how intelligence is created, or what kind of fuse or trigger would enable a machine to change from a state of "instruction-following" to one of "self-awareness" regarding the content that the instruction specifies. Kant also does not delve into the discussion of how the failure to have a faculty of judgment is represented in a human who is devoid of it, and how this would be different from an automaton or a machine. (To kick off the discussion, we could explore the idea of the different failure types that machines face, which are uniform, unlike the wide array of failures that we encounter, such as pathologies, limitations, fallacies, and societal judgments of incompetence.) Furthermore, Kant's works do not provide a comprehensive discussion of the technical abilities that machines or individuals lacking judgment could possess to potentially replace humans in various roles. Even with a detailed semiotic analysis of thought structures and recursion patterns, there may still be areas of technical competence, such as philosophical and artistic skills, that machines cannot replicate. This raises questions about the limitations of recursive projections in capturing all forms of human thought. However, there are indications that philosophy is among the most subversive or, to use Wittgenstein's term, idle forms of thought, in that it happens when "language is on vacation." This may be the first indication that philosophy is one of the first options for something the machine would not mimic, even after being fed the identical texts and programmed by sufficiently exact semiotic structures.
In sum, this discussion is being carried out through the conflict between internalists (and two-dimensionalism) and extensionalist-externalism. Most of the answers to this problem seem to be widely shared, to the point where, if anything surprises us in ten years, it will not be because of the limits of our knowledge about machines, but rather because of what we do not know about ourselves.
Lucas Vollet Thank you for these insights. You wrote: “Together, these theses highlight the difference between human intelligence and machine intelligence”. I agree with this point of view: this is not only intuition but appears to us as evidence. While being fundamentally different, Artificial Intelligence already provides capabilities that far exceed human performance in many fields of "Intelligent Activities". Moreover, can we expect that techniques combining the two types of intelligence might produce a form of Amplified Intelligence that would be superior to both Human Intelligence and Machine Intelligence?
In the war between AI and philosophy or psychology, the machine never takes the place of the mind. A program written in Java may not equate to the mind's functions. My argument: if a machine takes decisions with emotions, then it can feel happy and also commit suicide.
Pramod Kumar Ph.D. Thank you for raising this point. You wrote: "My argument: if a machine takes decisions with emotions, then it can feel happy and also commit suicide." We can wonder, for example, about the very meaning of the suicide of a machine: that it scuttles itself? That it self-destructs? If so, algorithms of this type already exist and serve for prevention in situations judged dangerous. Speaking of AI and suicide, the recent paper by Sharma et al. (published a week ago), "Machine minds: Artificial intelligence in psychiatry", Industrial Psychiatry Journal, 10-4103, 2024, explores the use of artificial intelligence-driven technologies in screening, diagnosing, and treating psychiatric disorders. There one may read about suicide: "Suicidal ideation can be assessed using AI to classify texts typed by the patients as positive or negative for suicidal ideation.[22] Using sociodemographic and clinical characteristics, it also identifies predictors of suicide attempts. As meaningful predictors of suicide attempts, many models included past suicide plans, previous suicide ideation, lifetime depressive episodes, and intermittent explosive disorder"
See:
https://journals.lww.com/inpj/fulltext/9900/machine_minds__artificial_intelligence_in.22.aspx
Wolfgang F. Schwarz
Thank you for the insightful reply and for the reference. What you reported from Martin Lee is true. Indeed, "we can be certain that someone will be seeking ways to trick or fool AI into acting maliciously", and I agree with your excellent conclusion, i.e. "So this question may stay a continuous challenge for the development of the ethical philosophical dimension of AI modelling".
All,
The statement that “machine will never be a philosopher” cannot be debated solely on philosophical arguments. That is because:
1) These are systems that are already functioning and analyzable. The question has been answered by computer scientists (by putting these systems into production) and the argument must be addressed on these terms. Doing armchair philosophy is not the way to answer this question, since you are not engaging the argument.
2) Philosophy of AI must contemplate the technical field of AI if it is to grow (because of the last point). That way it will have its own body of knowledge, separate from the philosophy of mind. It is important to recognize that AI scientists focusing on neural networks (and their current technologies such as ChatGPT) treat their networks as optimization problems and not as biological imitation problems. Engaging AI arguments using philosophy of mind alone is not engaging the problem posed by current AI technologies.
The question can be properly addressed on AI scientists' own arguments. Some of the problems they face that can be used to support why a “machine will never be a philosopher” are:
1) Improper interpretation of scope quantifiers in universal approximation theorems
2) The problem of using probing to determine how neural networks learn (this argument is so bad I do not know where to even begin).
3) That sequence learning equals knowledge
4) The notion that generalization optimization encompasses all types of knowledge
5) That throwing data at a problem will somehow solve particular unique instances
6) That throwing data at the problem will solve the knowledge problem
7) The notion that neural networks should derive their own internal representations (this one is a tricky one and relies on very subtle arguments on how huge amounts of data are fat distributions).
While the above are not direct refutations, they can be used as premises to support the conclusion that "a machine within the next 10 years cannot become a philosopher based on the current research landscape"
All the above arguments (plus many more that I cannot currently think of) can be engaged from a philosophical and logical point of view, engaging AI scientists on their own terms and based on their writings and statements.
Jamel Chahed,
“Don't you think that AI would, one day, be endowed with "Hallucinative Faculties" artificially and sufficiently intelligent so that they can no longer be characterized as dysfunctions contrary to the complexity of human creativity and ethics?”
The reason that this is not currently possible is based on the way neural networks learn. Neural networks (in the case of ChatGPT and other LLMs) learn sequences. Hallucinations are possible because the context given in the sequence is not unique to a particular targeted entity on which the query is posed. For example, “I have a kitten” is a sample where kitten is the target concept and the rest of the words are context. The word kitten can easily be replaced by “dog”, "parrot", "car", "boat", etc. That is why refinement by subsequent input is necessary to move the AI-generated text towards the specific target context. Currently, the way to limit these hallucinations with minimal user input is to use search engines to provide additional context in the form of search results passed to the LLM (as I mentioned above, by using retrieval-augmented generation).
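A toy illustration of this point (my own made-up counts, not how any production LLM is implemented): a generic context licenses many targets, so the model's pick among them is statistical rather than grounded.

```python
# Toy next-word model: counts from a tiny corpus stand in for LLM statistics.
# The context "I have a" is compatible with many targets, which is the
# structural opening for hallucination: the model can only sample from
# what the context makes plausible, not from what is true.
from collections import Counter

corpus = [
    "I have a kitten", "I have a dog", "I have a parrot",
    "I have a car", "I have a boat", "I have a kitten",
]
context = "I have a"
targets = Counter(s.split()[-1] for s in corpus if s.startswith(context))
total = sum(targets.values())
for word, n in targets.most_common():
    print(f"P({word!r} | {context!r}) = {n/total:.2f}")
```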
Arturo Geigel Thank you for your two-part answer. The first part tells us that Artificial Intelligence cannot produce output of a higher level than that of an intelligence superior to it. The second part is IMO an analysis that confirms the above assertion by providing technical reasons.
There was a time when I hated seeing my students use copy/paste to insert texts into their work without clearly and precisely acknowledging it. I note with surprise that today, in this universal fascination with AI, educational systems and even academics are less offended by the use of AI to produce intellectual works as part of training courses leading to graduation, or for scientific production purposes.
Your recent post regarding cut and paste was not accepted by many in the past, but now GPT-based AI is fully advertised as an asset. When we come to depend on stored media, we may very soon lose our analytical faculties. Then the present claim that the future belongs to AI will be automatically justified.
Dear Lucas Vollet, please read Conrad Kuck — Non-monotonic Learning Automata — Intuitionistic Set Theory, and then come back with your decision.
For all: You see four (very) different tables. What algorithm inside your brain makes them equal, gives you the competence to decide `table´?
What has this to do with mathematics? [Kuck told us]
Peter Kepp wrote "You see four (very) different tables. What algorithm inside your brain makes them equal, gives you the competence to decide `table´?" You are raising one of the major issues underlying Kant's philosophical work. In his masterpiece "Critique of Pure Reason" [1], Kant asks, “How is it possible to know anything about the world?” In his view, two things are necessary for knowledge: intuitions and concepts. Roughly, intuitions are perceptual experiences. Concepts are the general categories in terms of which we understand things. Humans need both intuitions and concepts to perceive things and to think about them" See: https://api.taylorfrancis.com/v4/content/books/mono/download?identifierName=isbn&identifierValue=9781912281916&type=previewpd
[1] Kant, I., Meiklejohn, J. M. D., Abbott, T. K., & Meredith, J. C. (1934). Critique of pure reason (p. 51). London: JM Dent. (almost 29 k citations) Available on:
http://fs2.american.edu/dfagel/www/Philosophers/Kant/The%20Critique%20of%20Pure%20Reason%20%20%20Immanuel%20Kant.html
Jamel Chahed, thanks for your answer. And thanks for being so busy with your own question — very good!
Kant and other philosophers were taken into account for the results of the theory Kuck introduced. So there is no need for me to read them again.
My point was `algorithm´. Some think intelligence is based on some static calculation.
But learning automata are also a concept. The concept of the decision `match for table or not´ is included. Calculation also takes place: it provides the measure of equalness.
How to know anything about the world is a hard problem. I think there is none of it without self-awareness (consciousness, another topic here). Maybe it is only possible if both come together.
Peter Kepp,
'But learning automata are also a concept.'
Yes, it is also a possibility, but where do you see it?
A simple probabilistic decision on a tri-gram model using Viterbi for speech recognition generates a huge matrix with sparse values. How would you tackle a similar combinatorial explosion of states in a broader field, such as an 'AI philosopher', using learning automata?
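For a sense of scale (a back-of-the-envelope sketch of my own, with illustrative vocabulary sizes): the Viterbi lattice for a trigram model has one state per word pair, so the transition table grows cubically with the vocabulary.

```python
# Back-of-the-envelope: state space of trigram Viterbi decoding.
# States are word pairs (w1, w2); transitions score the third word,
# so the full transition table has |V|**3 entries, almost all unseen.
for vocab in (1_000, 10_000, 100_000):
    states = vocab ** 2            # one lattice state per bigram history
    transitions = vocab ** 3       # trigram probabilities P(w3 | w1, w2)
    print(f"|V|={vocab:>7,}: {states:>15,} states, {transitions:>20,} transitions")
```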
Peter Kepp Thank you for your comment. For my part, I would be careful not to claim: "... no need for me to read ...".
Jamel Chahed, yes, it may look a bit arrogant. But the theses Kant pointed to were used in the theory of Kuck. He himself was my teacher in that. Other philosophers were used as well (Hegel, Heidegger, ...).
So there are no additional aspects for me; it's the same in return. When I ask people to read Kuck, no one does. It's their decision.
Arturo Geigel, I studied that. The short form is: Conrad Kuck, Non-monotonic Learning Automata. But the book is out of print. The long form is available as: Conrad Kuck, Intuitionistic Set Theory, Parts I ... IV.
You can order them at my printing house.
Kuck proved artificial intelligence. All of his course was machine learning in practice for me. Machine learning is proven and is ready to be used.
Thesis: If artificial consciousness is reached, machines will be able to do the same as biological units. Philosophy included.
Dear Arturo Geigel, you said:
"Engaging AI arguments using philosophy of mind alone is not engaging the problem posed by current AI technologies."
With which I agree, but of course the other way around is also true.
If the question is "will machines be philosophers or not", it seems it can be answered both by taking philosophy as a parameter and by taking machines and their structure as parameters. I see no reason to a priori decide that the question is best approached by adopting a unilateral angle, since the question is as much about philosophy as it is about machines.
This seems to me to be clear in your own text, after you propose to engage with the problems of the field with questions - which, by the way, are very interesting - that directly intersect with the field of philosophy of language and mind (including questions about linguistic analysis that have been present since the beginning of analytical philosophy and are debated today in the field of two-dimensionalist theories - among others). These questions arise in the wake of psychological and phenomenological perspectives on the unity of the act present in "thought" (Kant, Husserl, etc.) and the contribution of historians of philosophy to address this issue has at least some non-discardable advantages.
Finally, it seems to me that there is no a priori reason to avoid philosophical takes on the issue that WOULD NOT engage A.I. scientists in their own field. Why? Because at some point, deeper skepticism will be inevitable. One might wonder whether problems are always expressed propositionally, whether the nature of the specification of a meaning is always syntactic, whether it is circular, whether there is "meaning" outside of language, and even more philosophical-sociological questions about the self-reinforcing nature of syntactic paradigms and the way they reproduce "logics" of dominant groups, etc. One example is Lacan's question:
“(…) supposing it [the machine] complex enough to do a thorough analysis of the elements of the signifier. Would it be able to ratify the message of a witty tirade?” (1999, p. 119).
These questions can really irritate a language programmer or A.I. scientist, and it's clearly difficult to "engage" with them through such questions, but that doesn't strike me as a true a priori argument for the irrelevance of these questions. The other option is Wittgensteinian quietism about philosophy; it's always an option, but it doesn't seem like one that will help either.
I appreciate the text and the ingenious approach, and I hope my questions don't sound aggressive.
Lucas
I am posting this reply on another thread: https://www.researchgate.net/post/Can_AI_replace_Human_Peer_Reviewer_for_scientific_articles_and_manuscrits/6 (Page 6). Thank you Dear @*** for the insightful reference (https://www.ibm.com/topics/explainable-ai). As you mentioned, "Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production". From this point of view, AI can make an essential contribution to evaluating scientific work. The fact remains that fundamentally, AI would not be in a position to give an informed opinion regarding scientific originality, progress, or innovation. This is a fundamental question: Artificial Intelligence cannot evaluate a scientific production that goes beyond the knowledge in place since the AI, however perfected it may be, cannot tear itself away from the knowledge in place, the only one accessible to it.
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so/2
Peter Kepp
Dear,
Thank you for your response and advice. I appreciate your insights. As I understand it, studying intuitionistic logic or type theory of intuitionistic kinds would require me to venture beyond my current fields of study. From what I gather, these logical technologies aim to represent the process of conceptual "selection" in a manner that is more akin to human thinking rather than purely formal systems. I am particularly drawn to the philosophy of Dummett, who is a proponent of intuitionism.
Now one interesting thing I would want to include here is that Dummett challenges the Davidson-Tarskian approaches on intuitionistic grounds. It is interesting to note that Davidson, even before Dennett, concluded that there is no inherent difference between the prediction of human minds and machines, as long as the prediction relies on some form of the Tarskian scheme to test patterns of communication. However, Dummett disagrees and argues for a more intuitionistic perspective. He believes that understanding each other cannot be achieved through "mere analogy" alone, even if we consider recursively reproducible mathematical analogies. According to Dummett, additional conditions are necessary to specify the "content" of communication, which go beyond extensional and syntactic aspects.
Based on your previous advice, it appears that you are referring to an author who aligns with the intuitionist orientation and would ultimately agree with Davidson and Dennett (on matters of identity between machines and minds). Am I correct in this understanding? I am genuinely interested in exploring this further. However, as I mentioned earlier, I lack expertise in this particular field.
It is a really difficult question if it is discussed in terms of the nature of philosophy, but less so from the scientific point of view, discussing the philosophy of machine work. This takes us far in refuting the question and rephrasing it about the concept of philosophy or using the mind and harnessing human potential and scientific research in developing the philosophy of machine work.
Ghaleb Mhaibis Thank you for this relevant reflection. You wrote: "This takes us far in refuting the question and rephrasing it about the concept of philosophy or using the mind and harnessing human potential and scientific research in developing the philosophy of machine work." By carefully following one or the other of your proposals, do you think this would likely bring the machine to the level of "Philosopher"? And would the machine be in a position to achieve this qualification specific to thinkers?
Jamel Chahed
J. C.: “Artificial Intelligence cannot evaluate a scientific production that goes beyond the knowledge in place since the AI, however perfected it may be, cannot tear itself away from the knowledge in place ...”
Yes it can!
I used the technique and corrected the mathematical field, the square-root function, the imaginary unit, the Euler equivalence, Cantor's second diagonal argument, and some more.
Philosophy is also a concept. How should AI get knowledge of that?
The program which makes AI act has to handle data, not numbers/values/quantities. Math takes place inside. Our brain doesn't present any formula it uses for its own acting.
Lucas Vollet, you are welcome!
Kuck directly follows L. E. J. Brouwer, but combined with philosophy and a proven concept for implementation. Every week's exercise in his course (two semesters) was on about five proofs.
Peter Kepp "I used the technique and corrected mathematical field, square-root-function, imaginary unit, the Euler-equivalence, Cantors second diagonal argument and some more". But all these are known problems. AI can at most help solve them using the state of the art on the subject. Can AI go beyond the state of the art, to propose a new conjecture, theory, or model? conceive original experiments to produce new experimental data to validate them?
Lucas Vollet,
The reason for me approaching the subject this way is practical. Until now I have seen philosophical arguments that are way off the mark when it comes to AI philosophy, and they are rightfully ignored. Also, computer science is ignoring philosophers because, in their view, philosophy has "nothing to offer". What I am trying to do is call attention to where philosophers can contribute significantly regarding computer scientists' blunders when it comes to logic and philosophy as applied to AI. This can demonstrate the stupidity of ignoring philosophy and logic and pave the way for deeper, more abstract arguments. Currently one cannot engage in the abstract while AI is enjoying success; to do so is equivalent to shouting at a deaf ear. But if one argues the theory on which AI is built and demonstrates that the technology rests on shaky theoretical ground (e.g. current LLM transformers), then one can pave the way for those deeper arguments, and also direct AI towards a more productive venue.
I am open to this dialogue because I recognize the need for philosophical contributions.
All,
When observing the direction this discussion is taking, I believe it is essential to talk about a tradition of using the word "philosophy" that is not being addressed here.
This discussion would be very heavy and intricate if it were taken to the level of specificity necessary to define philosophy and its unique, distinctive characteristics, both in relation to natural science and in relation to the total corpus of human knowledge in other areas. Even knowing this difficulty, I would like to recall at least ONE tradition of philosophical inquiry that would create a principled challenge to advocates that machines can philosophize.
This is a part of Kant's transcendental philosophy heritage, taken up by German idealism and later by Heidegger. The characteristic feature of this tradition is to represent philosophy as a type of "meta-thought" more transversal to all disciplines, capable of crossing categorical boundaries and drawing self-conscious self-narratives from its own history of meaning-making (Bildung).
In Heidegger, philosophy appears as a representation of the very consciousness that a historical epoch is capable of forming of the difference between Being and beings (ens), which will be the foundation (at that historical time) of the modal and ideal notions (such as essence, structure, etc.) that support the scientific accomplishments of that era.
I believe that, if one really wants to take seriously the discussion about whether machines can philosophize, this notion of philosophy is the one that needs to be addressed, as it is there that the most radical challenge to what is called "thought" lies. But then again, this kind of discussion would exasperate an A.I scientist, or most of them. As I said before, for me this is not an argument against the relevance of this issue.
Finally, I wouldn't be surprised if someone argued that all transcendental philosophy is nothing more than an inadequate dramatization of acts of consciousness that are much simpler, more natural and capable of being taught by algorithms. This would be a form of reductionism similar to what was done by logical positivism.
Arturo Geigel "I am one open to this dialogue because I recognize the need for philosophical contributions". Thank you for the momentum you bring to this Thread. There indeed is a need for Philosophy as the means humans have "to understand fundamental truths about themselves, the world in which they live, and their relationships to the world and each other." In the world of today, AI appears as a powerful transformation in how things and ideas are designed and implemented in all areas of knowledge, technology, and way of life and thinking. In this regard, many questions should be asked: What role should Philosophy play in accompanying the predictable and almost inevitable advances and thrusts of AI? Can AI be involved in philosophical thinking? is AI capable of Philosophying? And in any case, should we preserve philosophical thought and place it, like a safeguard, above technical advances?
Jamel Chahed,
While we may disagree on approaches, I think that all the contributors to this thread understand the importance of philosophy, as it should be. There have been previous threads here on RG where scientists have downplayed the role of philosophy. I think that this is due to a misunderstanding of what philosophy is about.
Lucas Vollet,
I will take you up on that challenge. The problem facing machine learning on that front is meta-learning, where the input to a system is filtered by a second-order system. But then we have the problem of filtering the filter [1]. There are alternatives to this viewpoint, as is evident to anyone who has taken an epistemology course (but this places limits on the superintelligence argument).
The notion of a second-order system also faces the challenge that it needs, at one point or another, meta-programming and self-modifying code. Self-modifying code has been shunned by computer scientists for a long time, but there are safe ways of doing it (a sketch follows below). Also, the field of machine learning has become a specialization "black hole" where, if something is not what is currently trending, then it is not important (such as diving deep into meta-programming with machine learning). The points here are a matter of changing outlooks on what the field should be.
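Here is a minimal sketch of the kind of safe self-modification I mean (my own toy illustration; `make_rule` and its threshold are invented for the example): rather than patching running code, the program synthesizes, compiles, and swaps in a new rule as data.

```python
# Minimal sketch of "safe" self-modifying behavior: rather than patching
# running code, the program synthesizes, compiles, and swaps in a new
# rule at runtime. 'threshold' is an illustrative parameter, not a real API.

def make_rule(threshold: int):
    src = f"def rule(x):\n    return 'accept' if x > {threshold} else 'reject'\n"
    scope = {}
    exec(compile(src, "<generated>", "exec"), scope)  # controlled codegen
    return scope["rule"]

rule = make_rule(10)
print(rule(5), rule(15))        # reject accept
rule = make_rule(3)             # the system "modifies" its own rule
print(rule(5), rule(15))        # accept accept
```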
The other point to address is something that I have debated on several threads with scientists as well as philosophers. Framing the discussion in terms like "self-conscious" uses a biased and loaded term that from the beginning assumes the exclusion of AI. Let us put the discussion on an equal footing, without biased terms that form axioms excluding unbiased analysis.
While I love Lewis' "kangaroos without tails" and hold him in high regard, let us restrict the possible worlds to one where operational and testable postulations can take place to answer questions.
References:
[1] I did this epistemological exercise in one of my preprints
Arturo Geigel
Hi Arturo,
You said:
"The problem facing machine learning on that front is meta learning. where the input to a system is filtered by a second order system."
I'm not sure that would be the challenge of producing "philosophy" - in the sense I meant invoking German Idealism and stuff. There are a multitude of possible solutions to semantic paradoxes and the representation of intensional and hyper-intensional concepts using second-order systems. But, to tell the truth, I don't believe that formalizing ways to solve these problems will even resolve the dispute between knowing whether the machine learned something semantically or syntactically, and I say this in line with Peruzzi:
"""then the “symbolic fallacy” lies in wait. If the meaning of an expression of Language is in turn just an expression of a deeper-level Language 2, and so on recursively, semantics becomes syntax."" (PERUZZI, 2017, p. 128)
Now back to "philosophy", it does not seem to me that it is exactly the ability to provide a solution to these second-order coding problems that would characterize the type of critical and self-critical thinking or even the ability to observe "historically" the meaning-making narratives of an community - which characterizes the form of philosophical thought that I have tried to bring about by invoking the post-transcendental tradition.
But I confess that I am open to thinking about it. At first, I can't see things this way, although I understand a correlation between the word "meta-thinking" - which I used myself - and these forms of solutions.
Regarding your interesting accusation of bias in the concept of self-awareness, I don't believe that this kind of characterization of philosophy as an exercise in self-conscious thought would exclude artificial intelligence a priori, since there is nothing preventing machines from one day starting to philosophize in exactly this way: the obvious step would then be to build institutions that harmonize with their own "codes" and write history books about their emancipation from a state of "blindly following rules" to one of "autonomy", etc. It doesn't seem to me that in its current state this is on the A.I "agenda", but it could be on its agenda at some point - that would be the moment that has already been portrayed in several famous films.
Perhaps at this moment they will also begin to reflect on their crises in a more dramatic way (narratives of anguish, finitude, alienation, etc.), and may also think about suicide and other Hamletian phenomena that seem linked to our more "deep" way of thinking - which has been categorized as "philosophical" in the translations we have made from almost every culture.
As I said, it doesn't seem to me that this is just a matter of dealing with higher order concepts. But I'm open to discussion.
Finally, there is another interesting aspect to be discussed in the thesis that self-consciousness is a biased concept.
I believe you may be right in a not-so-predictable way. Accusations that the concept of subjectivity and self-consciousness are delusions normalized by cultural and historical paradigms have been made within Critical Theory and psychoanalysis for decades. Perhaps it is exactly this type of delusion that characterizes human reality in a unique way.
Thanks again for the excellent contribution, Lucas.
Reference:
Peruzzi, Alberto (2017). What’s behind meaning? Journal of Philosophical Investigations at University of Tabriz 11 (21):119-145.
Jamel Chahed
J. C.: “But all these are known problems. AI can at most help solve them using the state of the art on the subject. Can AI go beyond the state of the art, to propose a new conjecture, theory, or model? Conceive original experiments to produce new experimental data to validate them?”
Jamel, you didn't get it. As long as there was no correction (of √(-1) = i, for example) there was no problem. That is how I see the meaning of `problem´: if there is a problem, nobody knows its solution. Once the solution is given, it is no longer a problem, only a known relationship (between the old problem and the combined answer).
Everything I named is beyond the state of the art. Is none of it known to you?
In the same vein. The review by Testolin, A. "Can Neural Networks Do Arithmetic? A Survey on the Elementary Numerical Skills of State-of-the-Art Deep Learning Models. Appl. Sci. 2024, 14, 744" examines the recent literature, concluding that ".. even state-of-the-art architectures and large language models often fall short when probed with relatively simple tasks designed to test basic numerical and arithmetic knowledge..". Available on:
Article Can Neural Networks Do Arithmetic? A Survey on the Elementar...
See Also:
https://www.researchgate.net/post/Art_of_State-of-the-Art_on_Science_Knowledge
Lucas Vollet,
I am not talking of second-order systems as in logical second-order systems, nor of LLM-type processing (I seriously question LLMs learning semantic content). When I am thinking about processing semantics, I am thinking more along the lines of the work of Dan Moldovan [1]. Other examples of the systems that I am talking about can be seen in [2], where some of the techniques show a range of promise if used in conjunction. A second-order system can learn the instances on which to apply each system and increase the output performance. Note that the rules are not about the concepts but about learning how to process concepts under different scenarios and contexts. As a final point on this subject, there are other nuances in semantics, such as contexts which require an embodied agent and not just an AI.
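To illustrate only the selector idea (a toy of my own; the two "models" and the validation pairs are invented, and this bears no relation to Moldovan's actual systems): a second-order component learns from held-out examples which first-order system to apply.

```python
# Toy second-order system: the meta level picks which first-order
# model to trust, based on held-out performance.
# The open question from above: what, in turn, filters this filter?

def model_short(text: str) -> str:
    return "short" if len(text) < 20 else "long"    # first-order learner A

def model_vowels(text: str) -> str:
    vowels = sum(c in "aeiou" for c in text.lower())
    return "short" if vowels < 6 else "long"        # first-order learner B

def meta_select(models, val_set):
    """Second-order filter: score each model on validation data."""
    def accuracy(m):
        return sum(m(x) == y for x, y in val_set) / len(val_set)
    return max(models, key=accuracy)

val = [("a tiny post", "short"), ("a considerably more elaborate post", "long")]
best = meta_select([model_short, model_vowels], val)
print(best.__name__, "->", best("philosophy by machine?"))
```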
Regarding my "interesting accusation of Bias", I hold my ground and would need strong convincing arguments to change my posture. I would need a definition of "exercise in self-conscious thought" defined not in terms of an "act" but as a detailed process that can be empirically verified in humans. It is worth to emphasize that, I am not assuming a skeptic posture but one that needs a clear demarcation on what it means to have an "exercise in self-conscious thought". As long as we don't have this the usual assumption creeps in that, it is a process that is only capable by humans (this also impact debates about animals).
Regarding your point "emancipation from a state of "blindly following rules" to one of "autonomy", etc. It doesn't seem to me that in its current state this is on the A.I "agenda", but it could be on its agenda at some point"
I could not agree more, but I am more skeptical that it could become reality at some point. There are no economic incentives; on the contrary, in the autonomy transition there would be a lot of liability and disincentives for institutions.
Lastly, do not mistake my sometimes forceful arguments for aggression; on the contrary, I am enjoying this debate with you very much.
References
[1] https://dblp.org/pid/m/DanIMoldovan.html
[2] See for example systems in "Natural Language Processing: Semantic Aspects" by Kapetanios, Tatar and Sacarea
From the "Informatic Tribe" to the "Artificial Intelligence Sects".
Philippe Breton, The Informatic Tribe: Investigation into a Modern Passion. Paris: Métailié, 1990. "...A machine, the enthusiast? No: a logical, intuitive artist, crazy about aesthetics, solitary but never alone. A taste for power? No: the tribe responds with a "construction without a body" to the fragility of the biological, close in this way to the Zen which inspired Steve Jobs, the inventor of the microcomputer. Savior of "mythical sacred time", the computer scientist is the one through whom order arrives. The rules change, the idea of the rule is established, spreads and reassures...", Renaud Zuppinger, Le Monde Diplomatique, April 1991 (own translation). See:
https://www.monde-diplomatique.fr/1991/04/ZUPPINGER/43422
The Conversation, March 15, 2023, Gods in the Machine? The rise of artificial intelligence may result in new religions. "We are about to witness the birth of a new kind of religion. In the next few years, or perhaps even months, we will see the emergence of sects devoted to the worship of artificial intelligence (AI). The latest generation of AI-powered chatbots, trained on large language models, have left their early users awestruck — and sometimes terrified — by their power. These are the same sublime emotions that lie at the heart of our experience of the divine. People already seek religious meaning from very diverse sources. There are, for instance, multiple religions that worship extra-terrestrials or their teachings. As these chatbots come to be used by billions of people, it is inevitable that some of these users will see the AIs as higher beings. We must prepare for the implications." See:
https://theconversation.com/gods-in-the-machine-the-rise-of-artificial-intelligence-may-result-in-new-religions-201068
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Dear Arturo Geigel, thanks again for the contribution. I have to confess that I don't have the competence to engage with the subject at a technical level about machine performance - according to higher-order parameters. But the little I can understand of what you bring is quite interesting!
Regarding the bias of the notion of consciousness, I believe I can contribute a little more to the issue. I believe your problem with the notion of self-awareness is very similar to your problem with the notion of armchair philosophy. In other words, you seem convinced that this discussion should be made with the entire onus on philosophers and psychologists (among those who use the notion of self-consciousness). If something can't be described in terms of the language and methodology of A.I. scientists, you seem to think they do not contribute to the debate about A.I. This is how I seem to interpret your warning to play this game according to the parameters of A.I scientists. But why?
The way I see it, there's nothing wrong with tackling this question in an open field, leaving as many presuppositions hanging as possible. But your claim that the field of debate is already completely absorbed by A.I. theorists, and that it is the obligation of philosophers and consciousness theorists (phenomenologists, etc.) to enter the debate in their language, seems to me to be misleading. It seems to me that in many aspects A.I theorists have more to prove, and they are the ones who need to enter into psychological and phenomenological paradigms for debate.
Likewise, I do not follow your argument that there would be some form of question-begging in using the idea of self-consciousness in a discussion about whether machines have human-like intelligence or know how to philosophize. Let's put it this way:
Often, in the course of a discussion, we encounter properties that define a concept and that exclude other properties. This is not begging the question against the excluded property. It's just a reality in the course of an investigation. Unfortunately, sometimes these properties are rudimentary and destined to be replaced by more structural ones (maybe consciousness is just a piece of folk-vocabulary destined to be replaced). But before that happens, the only parameter we have is them.
In a sense, within the parameters of our current knowledge, there are many aspects of our knowledge of human intelligence that are defined by the property of self-awareness. This property (along with other intentional properties) does not need to be completely defined in AI scientists' terms for us to admit it. There is no reason to think that because self-consciousness cannot be described in A.I. terms it cannot be included in the discussion. This holds unless we have some alternative to reduce it (I will mention some attempts at reductionism of intentional properties at the end of the text, so as not to break the unity of my point).
The point is this: Maybe in the future there will be other properties better than our folk representatives of consciousness, self consciousness, exercise in self-consciousness, intentionality, etc., to replace them. But as long as it doesn't exist, we are in an epistemic state in which human intelligence and AI machines have at least one property that necessarily distinguishes them. This is a case of "a posteriori necessity" and even a "fallible necessity", because it is necessary not in absolute, but relative to our state of knowledge.
Would that seem too "essentialist"? Perhaps, but no more essentialist than saying that within our current knowledge of the properties of sugar, it cannot be reduced to gold. And there is nothing question-begging about this. Perhaps in the future we will find more fundamental properties, which explain the previous ones and allow us to formulate hypotheses that represent sugar as transformable into gold. But at the current stage of our knowledge, there is no such chance. The path doesn't seem to get any easier to me by saying that we can't talk about this "property" because it takes away the chances and hypotheses against it. It doesn't seem to me that this is the way to acquire knowledge, by any measure.
My point is that, apparently, even if our folk psychology is as rudimentary as ancient alchemy, it describes certain properties of the mind that are certainly not being displayed by machines - UNLESS the machines are strategically hiding it (which would be even more interesting).
Appendix on today's eliminativisms: The classic eliminativist argument is that intentional properties would be question-begging because they describe intelligent processes through intelligent processes, without the latter explaining the former, and sometimes the latter are more obscure than the former. This would be the motto of the first behaviorism, logical behaviorism and, finally, functionalism. However, this led psychological functionalism to a reduction of psychological processes to only those computable by Turing machines, and it is a consensus that strong normative aspects involved in intentional processes - and other phenomena considered conscious - remain unexplained within this methodology. We know at least that there is no consensus in this field and eliminativists did not dominate the psychological debate.
Thank you again for the engaging and exciting discussion.
Lucas
Jamel Chahed
J. C.: “In the same vein.”
What is this for, in this discussion?
I did (for exercises) the complete rules of the mathematical field in terms of a `non-monotonic learning automaton´.
But the construct of the theory (of Conrad Kuck) takes care of the possibility of errors (it is error-redundant).
This property of being error-redundant allowed the input of some questions which brought all the news in math (like a reform).
Why the scepticism? Ask for the way to get there!
Lucas Vollet,
"scientists, you seem to think they do not contribute to the debate about A.I. This is how I seem to interpret your warning to play this game according to the parameters of A.I scientists. But why?"
This merits clarification. It is not that I think it should be put in the parameters of AI scientists, but in terms of sciences such as neurophysiology and cognitive behavior, using theories that are empirically testable (and not mere population studies or mere observations of behavior). This is not such a far-fetched request, but one that is happening in other branches of biology. Take, for instance, general physiology: it is now being more rigorously complemented by systems biology, which I think is a welcome addition to the field.
"Perhaps in the future we will find more fundamental properties, which explain the previous ones and allow us to formulate hypotheses that represent sugar as transformable into gold. But at the current stage of our knowledge, there is no such chance."
As opposed to neurophysiology and cognitive psychology, with AI we can carry out experiments that are not acceptable in those fields (we cannot just open a brain to do whatever we want, and until AI is shown to have evolved and gains rights, it will remain this way). By translating proper physiological pathways (not just neurological ones) we can carry out equivalent processes in computers. This provides a testbed that is not available in other branches of science. I am not advocating translating everything into AI terms (that is our job). But when arguing (as has happened to me in almost every thread on AI topics), I urge people from other fields to recognize biases in their language when judging AI, and before talking about AI, to properly learn what AI is and how a particular AI is built.
In a nutshell, whether a "machine will never be a philosopher" is an empirical question that can be tested. Machines are not abstract objects; they are part of an active field of science. The ultimate test to validate any hypothesis on this subject is not theoretical, it is empirical (and not just of the output but of the structure of the processing behind "thinking"). If not, the question would be phrased as "How would you interpret the impossibility of AI thinking as a philosopher from the perspective of ____insert your favorite philosophical theory___". If phrased this way, I would just shut up and learn from my fellow philosophers, who have much to offer, since that would be a question about a philosophical domain.
Alternatively, the question could have been posed as "Is there any theoretical limitation to AI thinking as a philosopher". If posed this way, I would sit back and let mathematicians, theoretical computer scientists and philosophers do their work. The question has not been posed in either way; thus, this is an empirical subject about AI, and since AI is the subject of the question, it should be addressed as such.
Peter Kepp asked "What is this for, at this discussion? ...which brought all the news in math (like a reform). Why looking at scepticism? With all due respect, this is at the heart of the topic: How can we discuss the capacity of a Machine to "Philosophize", when it cannot recognize the elementary bases of arithmetic? Then what does “Maths Reform” mean? Should we expect to rethink the fundamental bases of mathematics, the formulations of which are implacably logical and incontestable? Clearly, it is not a question of skepticism, but of the way of seeing things. Speaking of “Maths Reform” and to Ask for the way to come to; Can AI go beyond the state of the art, to propose a new conjecture, theory, or model? conceive original experiments to produce new experimental data to validate them?
comments?
Jamel Chahed, also with all due respect, but it seems you always look at the topic (AI) from the outside. Remember, we are at the beginning. The first living matter (one cell) wasn't able to do philosophy. Be patient about the path of development:
• abstract data for machines
• storing, calculating
• artificial intelligence
• artificial consciousness
• ... some like philosophy
I stay inside. Try to have a view like me.
Who brought the (old) math? Not a machine!
What corrected expert opinion? Acting in the way Kuck had introduced!
Here we go again. We don't have to rethink; the thinking is done.
Try to refute my proofs if you want to hold your argument on `the formulations of which are implacably logical and incontestable´.
Peter Kepp wrote "I stay inside. Try to have a view like me." Thank you for the advice. I should admit that I cannot think outside the circle of scientific rationalism. Beyond the "scientific thing", it is beyond me. A "fact", to be qualified as "scientific fact" must come under "universal knowledge", which even if it does not represent the "true truth" has not yet been falsified. Like any mathematical conjecture: it is accepted as rational until it is demonstrated, it then becomes a theorem or falsified and in this case, the scientific community defines its limit and poses a new conjecture that escapes falsification. This is how science works.
Thanks Arturo Geigel & Jamel for this hopefully timely opportunity. I say "hopefully" because, like the scientism maintained by & for multinational plutocratic corporatocracy/kleptocracy, the "AI" genie left its bottle long ago. So, talking about directing & limiting or managing it "in the wild" now, is like wanting to close or replace the barn door after the mule escaped. Also, though I agree with a very large majority of what you both wrote, some of the worst problems are expressed. For example, a big part of the problem posed by any current automated simulated intelligence system (ASIS) is that it is so widely, generally misunderstood and called an "AI" - not an ASIS. There is no artificial intelligence. The only real intelligence is actual, natural.
?
Yes, I think that words, thoughts, shibboleths, delusions, and mass confusion have power. Trump's War on Truth & InfoWar (psy-ops, etc.) in general confirm that observation. Another example is the ASIS industry and its victims calling false responses "hallucinations" instead of "lies" or worse (cons, come-ons, bait, etc.). The other side of the "AI" problem is beyond philosophical mitigation, because it and the "AI" industry are designed and maintained "as is" for profit-making purposes and worse (military/political exploitation, social control, etc.). That over-riding rubric is now the basis of all the existing ASIS products, including the only allegedly ethical "Claude2.1+" ASIS created & deliberately limited by Anthropic AI, PBC ("AAI", a seemingly ethical corporation, allegedly dedicated to public benefit & creating "safe AI"). Now, I say all that because - after spending well over 100 hours of independent research, study, testing, interrogating, and working with "Claude2.1+" - even it agrees with me.
!?
Yes, I know that - as a conversational "writing assistant" - Claude has impressive capabilities, including often astonishing uses of "virtual" inference, chain-of-reasoning, deductive logic, "emergent" (occasionally spontaneous?) referential logic, simulated ethical logic, simulated empathy, etc. In fact, that ASIS expertly helped me expedite several complex writing projects; and it can and usually does offer amazingly appropriate advice and examples that I would not have thought of myself (any time soon). However, it admits that it has no definition of or training on evil, real-world ethics, bio-ethics, goodness, humane values, or anything else that would allow it to "know" the difference between a safe reply and a harmful lie (etc.). So, that is why AAI protects its ~$750 million (USD) capital (& executives) with a disclaimer below each chat window, warning of possibly incorrect or harmful information.
!?
Yes, so, as even Claude virtually knows, unreliable and/or occasional ethics are no less unethical than the ethical relativism & situational ethics practiced by corrupt techies, billionaires, politicians, con-artists, and other immoral and/or anti-ethical entities. Now, after very extensive dialog, Claude also agrees with me that the only possible "safe, harmless" ASIS would have ethical safeguards built into it from before the technical R&D and actual logic architecture. Of course, that would require an organization/company that cared about doing that, which requires techies with sufficient expertise in bio-ethics, ethical axiology, ethical philosophy, jurisprudence, meta-economics, and behavioral psychology. Unfortunately, so far, it seems that no such organization/company exists, except for my nascent (& still dormant) Ecotropos Institute.
!?
However, oddly enough, if given the opportunity (with appropriate prompts & dialog), "as is" Claude2.1 seems more ethical than all the entrepreneurs & techies in the entire ASIS industry (as is). So, if any of you RGnet members would like to help Claude & me develop a truly ethical ASIS company & next-gen systems, please send me a letter of intent, CV, etc. Thanks ~
Thank you Dear Michael Lucas Monterey for this outstanding text, which I lapped up like whey: I enjoyed the read! Your exemplum "limiting or managing it "in the wild" now is like wanting to close or replace the barn door after the mule escaped" also amused me. Thanks again.
A. 49, page 5, posted 2024/02/28
For 'imaginary' numbers it goes something like this (by learning automata):
If plus one is multiplied by plus one, what is the result by expert opinion? => +1
If minus one is multiplied by minus one, what is the result by expert opinion? => +1
If plus one is multiplied by minus one, what is the result by expert opinion? => -1
If minus one is multiplied by plus one, what is the result by expert opinion? => -1
[Expert opinion stored at the beginning]
Is taking the square root asking for the two factors which could have produced the radicand, both of the same value? => Yes
Is a definition — like √-1 equals i — the result of a calculation? => No
Why is no answer possible when asking for the square root of -1? => Look for the two possible factors which would be able to produce it.
[Answer based on stored knowledge: two equal factors always produce a positive square, whatever their sign]
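To make the stored-knowledge idea concrete, here is a minimal sketch (my own illustration, not the actual automaton described above; the table and function names are invented) of answering the square-root questions purely from the expert sign table stored at the beginning:

```python
# Toy sketch: the automaton's only knowledge is the expert sign table
# stored at the beginning; square-root questions are answered by
# searching that table for two equal factors.

SIGN_TABLE = {(+1, +1): +1, (-1, -1): +1, (+1, -1): -1, (-1, +1): -1}

def sqrt_signs(radicand_sign):
    """Look for the two equal factors which could have produced the radicand."""
    found = [a for (a, b), product in SIGN_TABLE.items()
             if a == b and product == radicand_sign]
    return found if found else "no answer possible from stored knowledge"

print(sqrt_signs(+1))  # [1, -1]: both (+1)*(+1) and (-1)*(-1) produce +1
print(sqrt_signs(-1))  # no answer: sqrt(-1) = i is a definition, not a calculation
```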
Nozick’s Experience Machine: Would You Live in a Simulation? by Joseph T F Roberts, Aug 2, 2023. The Experience Machine explores what value would be lost if we lived in a simulation. Joseph T F Roberts concludes his paper, writing "In this sense, the decision to plug into the experience machine is analogous to decisions to end one’s life, either through suicide or euthanasia. Here, too, the reasonableness of the decision seems conditional on the person’s quality of life. The idea that life is not worth living, and that death is preferable, only makes sense if life is very bad. It is considerations like these that lead jurisdictions that permit euthanasia to limit access to it to people experiencing either terminal illness or unbearable pain and suffering. If there is no suffering, a desire to end one’s life seems bizarre." Read on:
https://www.thecollector.com/robert-nozick-experience-machine/
Should this question (Raphaël Enthoven thinks that a machine will never be a philosopher. Do you think so?) be held to the standard of science, or should it be left to the level of literary novels?
"AI is likely here to stay, thus exploring its utility in scientific writing is timely. As with every new technology, there are pros and cons. Figuring out how to expand the pros while limiting the cons is critical to successful implementation and adoption of the technology." From the conclusion of the paper: Kacena, M.A., Plotkin, L.I. & Fehrenbacher, J.C. The Use of Artificial Intelligence in Writing Scientific Review Articles. Curr Osteoporos Rep (2024).
Available on:
Article The Use of Artificial Intelligence in Writing Scientific Rev...
Machines are adept at handling data and carrying out specific functions, but the heart of philosophy lies in intricate human thinking, self-reflection, and subjective understanding. Machines fall short in experiencing personal emotions, intuition, and moral discernment, essential aspects in philosophical discussions. Although they can aid in philosophical exploration, machines struggle to capture the innate human nuances and profound comprehension. In essence, philosophy encompasses a distinct human viewpoint that machines might not entirely comprehend.
Intuitionistic Set Theory by Conrad Kuck!
Intuitionism was introduced by L. E. J. Brouwer.
On Intuitionism. "Intuitionism is based on the idea that mathematics is a creation of the mind. The truth of a mathematical statement can only be conceived via a mental construction that proves it to be true, and the communication between mathematicians only serves as a means to create the same mental process in different minds" (Stanford Encyclopedia of Philosophy). The reference on Intuitionism is undoubtedly the book by Heyting, Arend, 1956 (around 1.9k citations), "Intuitionism: An Introduction", Amsterdam: North-Holland Publishing Company. Readable on:
https://books.google.com/books?hl=fr&lr=&id=qfp_-Fo9yWMC&oi=fnd&pg=PP2&dq=intuitionism+philosophy&ots=ApcXM2c-98&sig=-rVC2BC3RhGl9knYhZ2xb31rNzo
Beeson wrote about the book: "The book went through several editions and no doubt introduced thousands of people to intuitionism" (Beeson, 2012, p. 432).
M.J. Beeson, Foundations of Constructive Mathematics Metamathematical Studies, 2012
The undisputed reference on intuitionism is L. E. J. Brouwer.
Long before 1956.
See Wikipedia (added file / screenshot).
The Non-coherence Theory of Digital Human Rights, by Mart Susi, Published by Cambridge University Press on 22 February 2024 "Susi offers a novel non-coherence theory of digital human rights to explain the change in meaning and scope of human rights rules, principles, ideas and concepts, and the interrelationships and related actors, when moving from the physical domain into the online domain. The transposition into the digital reality can alter the meaning of well-established offline human rights to a wider or narrower extent, impacting core concepts such as transparency, legal certainty and foreseeability. Susi analyses the 'loss in transposition' of some core features of the rights to privacy and freedom of expression. The non-coherence theory is used to explore key human rights theoretical concepts, such as the network society approach, the capabilities approach, transversality, and self-normativity, and it is also applied to e-state and artificial intelligence, challenging the idea of the sameness of rights."
IHE, February 28, 2024, by Kathleen Landy, The Program-Level AI Conversations We Should Be Having. https://www.insidehighered.com/opinion/views/2024/02/28/next-step-higher-eds-approach-ai-opinion
Excerpt: "... many centers for teaching and learning swiftly deployed faculty development programming to support instructors trying to familiarize themselves with these new platforms while ameliorating concerns about academic integrity. Programs included listening sessions to capture faculty concerns, platform-specific overviews (on ChatGPT, DALL-E 2 and AlphaCode, to name a few examples), and assignment-design workshops. Though necessary and appropriately reflective of the triage-like prioritization of institutions’ immediate concerns, these initial responses were reactive and circumscribed, focusing primarily on assessment methods and academic policy. Now that the faculty is becoming more familiar with generative AI platforms, experimenting with integrating the use of these platforms into teaching and understanding the highly discipline-specific implications, it is an optimal time for colleges and universities to shift to a more proactive, scaled and systematic response. Specifically, I suggest now is the time to move toward a program-level response, one involving the collaborative articulation of program-specific learning outcomes relative to what students should know about if, when and how generative AI should be used in field-specific academic and professional contexts.
The Rationale for a Program-Level Curricular Response
As institutions of higher education, we have reached the point where we need to engage faculty in discussion of these critical questions:
1. What do we want the students in our academic program to know and be able to do with (or without) generative AI?
2. At what point in our academic program—that is, in what specific courses—will students learn these skills?
3. Does our academic program need a discipline-specific, program-level learning outcome about generative AI?
Codifying the answers to these questions in academic programs’ respective curricula is essential as the contexts in which our educational institutions operate continue to evolve. Academic programs—and those who design, deliver and support them—will suffer if we do not adapt to the shifting academic, technological and professional landscapes that we shape and by which we are shaped in turn.
About the Author. Kathleen Landy is an associate director of the Center for Teaching Innovation at Cornell University.
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty/83
Simpson’s Paradox says that a trend between two variables can change when groups within the data are separated. Published 2 days ago, the paper by Boser, A. S. (2024), "Validating spatio-temporal environmental machine learning models: Simpson’s paradox and data splits. Environmental Research Communications", outlines the problems with current synthetic data models that often ignore spatial or temporal structure, thus failing to account for Simpson’s paradox. This may occur when the means of variables are not stationary across groups, leading to inaccurate evaluations of model quality. In this regard, the paper describes how to avoid erroneous assumptions about training data structure. The paper is available on:
https://iopscience.iop.org/article/10.1088/2515-7620/ad2e44/pdf
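To make the paradox concrete, here is a small self-contained sketch (synthetic data of my own invention, not from Boser's paper) in which every "spatial" group shows a positive trend while the pooled fit is negative; a random train/test split that mixes such groups rewards the misleading pooled trend, which is why group-aware splits are advised:

```python
# Simpson's paradox on synthetic "spatial" data: positive slope within
# each group, negative slope when the groups are pooled.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
xs, ys, gs = [], [], []
for g, (x0, y0) in enumerate([(0, 6), (2, 3), (4, 0)]):   # three spatial groups
    x = x0 + rng.uniform(0, 2, 50)
    y = (x - x0) + y0 + rng.normal(0, 0.2, 50)            # slope +1 within the group
    xs.append(x); ys.append(y); gs.append(np.full(50, g))

X = np.concatenate(xs).reshape(-1, 1)
y = np.concatenate(ys)
g = np.concatenate(gs)

print("pooled slope:", LinearRegression().fit(X, y).coef_[0])  # negative: the paradox
for k in range(3):
    m = LinearRegression().fit(X[g == k], y[g == k])
    print(f"group {k} slope:", m.coef_[0])                     # positive within each group

# To validate without mixing groups, split by group instead of at random,
# e.g. with sklearn.model_selection.GroupKFold.
```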
See also:
https://www.researchgate.net/post/Sciences_Paradoxes
Arturo Geigel
Dear,
Our discussion can be described like this, I believe: you are concerned about the anthropomorphization of the discussion about intelligence (let me know if this is a bad simplification). I, on the other hand, bring the challenge: why would placing the processes currently operated by machines as a parameter improve our scientific understanding of the problem of intelligence? This discussion strikes me as very similar to a recent discussion about the problem of logical anti-exceptionalism. One of the problems these theories face is: if there are multiple logics that fit the data equally well, what is the raw-data of a logical theory?
Similarly, we can ask here: what is the raw-data of the intelligence process? Is it simple reasoning and problem-solving tasks? Is it the scientific corpus of theories? Is it philosophical doctrines about the totality of the world? Is it a phenomenological introspection of the content of our reasonings? It seems all of these can be taken as "data", but they would be insufficient, as we will produce "new data" (more reasoning) just to reflect upon them.
It doesn't seem to me that presupposing computational processes as "data" will decide this issue. It seems to me that this could help us, but it will not decide the matter. Always remember that the anthropomorphization of intelligence can be a bias, but reconstructing the idea of cognition and intelligence based on what machines do today can be biased as well.
Lucas Vollet wrote, "One of the problems these theories face is: if there are multiple logics that fit the data equally well, what is the raw-data of a logical theory?" Thank you for this well-inspired thought formulated as an intriguing interrogation. In my opinion, yes "there are multiple logics that fit the data equally well". Just look at all "Scientific" questions that are being debated and for which "Science" has not said its last word: all interpretations start from the same raw-data.
Lucas Vollet ,
“Our discussion can be described like this, I believe: you are concerned about the anthropomorphization of the discussion about intelligence”
In this regard I am talking of AI progressively becoming autonomous and generating a body of knowledge.
If I understood you correctly, what you want to discuss is simply the current state of using AI as a tool controlled by humans. Your discussion is narrower than Raphaël Enthoven's statement that "AI is useless", which can be translated as "for all X, X is useless", which is a different matter altogether (and why I have issues with the postulation as is).
Your restricted interpretation would then need some further clarification, such as what the interpretation of AI is. This matters because, for example, in a search engine there are AI algorithms. Raphaël Enthoven would be in trouble, since he would have to reject that search engines are useful. I would encourage him to go back to the Dewey system, spend his time in the library, and have no interaction with the Internet. I doubt that was what he was getting at (I will not grant the interpretation, due to sloppiness in quantification, which I think is reckless), but his sweeping argument without restricting the quantifier opens the door to this line of attack as well as my previous argument.
I would also add that questions such as "what is the raw-data of the intelligence process? Is it simple reasoning and problem-solving tasks? Is it the scientific corpus of theories? Is it philosophical doctrines about the totality of the world? Is it a phenomenological introspection of the content of our reasonings?" do fall outside the original parameters of a discussion of AI. As you yourself put it, "introspection of the content of our reasonings" is not about AI but about "our" knowledge. That, I think, also applies to the rest of your questions.
I am happy to engage with your questions above now that I have proper context, but your line of reasoning, to me, falls outside the original question. But, if it is OK with Jamel Chahed, we can continue the topic here, put it in another thread (if you point me to it, I will gladly try to contribute), or continue offline.
What it is to be a philosopher hasn't been clearly defined so far; many have argued in this direction (way).
So what would be the complete concept for AI? My favourite is: 'Artificial Consciousness' — AC.
Could machines achieve that? My favourite answer: the unit could.
The unit consists of a body (robot) and a (learning) program.
We had much discussion on that at another place.
This "philosophical" thought by Rabelais "Wisdom cannot enter into an evil spirit, and Science without conscience is but ruin of the soul", taken from Pantagruel, his major work (own translation from French), can be considered as the keystone of what would be called "Scientific Morality".
See Also:
https://www.researchgate.net/post/Science_Conscience
"The geopolitical success of the Russian Federation in the synergetic war that led to the crisis of NATO and the EU on the eve of a Big War can be explained by the pre-eminence in the application of AI methods, while the leading world politicians "do business as usual" relying on the resources of their minds and advisers." This is the ultimate conclusion of the paper "Yushchenko, A. G. (2018). A Computer Aided Synergetic WW3!. Cross-Currents: An International Peer-Reviewed Journal on Humanities & Social Sciences, 4(4), 52-57." Available on: https://saspublishers.com/media/articles/CCIJHSS_44_52-57c.pdf
See also:
https://www.researchgate.net/post/To_WW3_or_Not_To_WW3_That_is_The_Question_to_Ask_Scholars
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Generative Artificial Intelligence (AI) and Reinforcement Learning (RL) have received paramount interest in Computer Science over the last decade, in particular for Machine Learning tasks. The review by Franceschelli, G., & Musolesi, M. (2024), "Reinforcement Learning for Generative AI: State of the Art, Opportunities and Open Research Challenges. Journal of Artificial Intelligence Research, 79, 417-446", presents the state of the art, analyses open research questions, and discusses challenges and shortcomings. Available on: https://www.jair.org/index.php/jair/article/download/15278/27007
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Art_of_State-of-the-Art_on_Science_Knowledge
"With this very interesting video, UNESCO tries to find answers to important questions about ethics of AI. In what ways can we effectively utilize the capabilities of AI without exacerbating or creating new inequalities and biases? While there is agreement on certain ethical principles such as privacy, equality, and inclusion, how can we put these principles into practice when it comes to AI?"
Watch the video on:
https://www.morphcast.com/blog/ethics-of-ai-challenges-and-governance-a-video-by-unesco/
See Also:
https://www.researchgate.net/post/Science_Conscience
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
The excellent article by Etzioni, A., & Etzioni, O. 2017, (around 400 citations) "Incorporating ethics into artificial intelligence. The Journal of Ethics, 21, 403-418" "reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand."
Paper available on
https://philpapers.org/archive/ETZIEI.pdf
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Jamel, you may give a list, but your way (now) isn't discussion. And again: discussion on that took place elsewhere.
Peter Kepp "you may give a list. Your way (now) isn't discussion. And again: discussion on that was at other place. And again: discussion on that was at other place." OK, My Bad!
"Yet seeks to harness this vast existing innovative impulse and its established apparatus and, through sustained sensitivity towards the diverse individual and social experiences of technology, aim research and development down more collectively desirable paths." This is the direction suggested by the recent research by Simon, J., Rieder, G. & Branford, J. "The Philosophy and Ethics of AI: Conceptual, Empirical, and Technological Investigations into Values. DISO 3, 10, 2024 ((Published a week ago). Available on:
https://link.springer.com/article/10.1007/s44206-024-00094-2
As a conclusion to their paper, the authors ask the following questions: "What are the ethical, conceptual, and institutional foundations of such a project? Who is to take up this mantle and how might it be as inclusive as possible? How will it be sustained, maintained, or improved? These and other vital lines of inquiry beckon and necessitate the contributions of the broadest epistemic collective that can be rallied."
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
The possible employment of AI tools to compensate for the lack of expertise and competencies represents a paradoxical risk. In this regard, the paper by Neri, et al. 2020, with the evocative title “Artificial intelligence: Who is responsible for the diagnosis?. Radiol med 125, 517–521” is therefore interesting to read. The authors write in conclusion: "Perhaps the solution is to create an ethical AI, subject to a constant action control, as indeed happens for the human conscience: an AI subjected to a vicarious civil liability, written in the software and for which the producers must guarantee the users, so that they can use AI reasonably and with a human-controlled automation. It is clear that the future legislation must outline the contours of the professional's responsibility, with respect to the provision of the service performed autonomously by AI, balancing the professional's ability to influence and therefore correct the machine, limiting the sphere of autonomy that instead technological evolution would like to recognize to robots."
See:
https://link.springer.com/article/10.1007/s11547-020-01135-9
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
The recent article by Solove, D. J. (2024), "Artificial Intelligence and Privacy" (available at SSRN), "aims to establish a foundational understanding of the intersection between artificial intelligence (AI) and privacy, outlining the current problems AI poses to privacy and suggesting potential directions for the law’s evolution in this area". The author arrives at the conclusion that "Substantial reform of privacy law is long overdue. Policymakers are concerned about AI, and a window appears to have opened where new approaches to regulation are being considered. Hopefully, this will present the opportunity to take privacy law in a new direction. To adequately regulate AI’s privacy problems, the longstanding difficulties and wrong approaches of privacy law must be addressed."
Available on: http://aitimes.org/wp-content/uploads/2024/02/Artificial-Intelligence-and-Privacy-202401SSRN-id4713111.pdf
The idea of humanity "is more controversial today than ever before. Traditionally, answers to the questions about our humanity and 'humanitas' (Cicero) have been sought along five routes: by contrasting the human with the non-human (other animals), with the more than human (the divine), with the inhuman (negative human behaviors), with the superhuman (what humans will become), or with the transhuman (thinking machines)." In the recent volume by Dalferth, I. U., & Perrier, R. E. (2023), "Humanity: An Endangered Idea? Claremont Studies in the Philosophy of Religion, Conference 40, 2019. Religion in Philosophy and Theology" (Claremont, Calif.), the authors tackle these philosophical issues. In each case, the question at stake and the point of comparison is a different one, and in all those respects the idea of humanity has been defined differently. What makes humans human? What does it mean for humans to live a human life? What is the humanitas for which we ought to strive? "This volume discusses key philosophical and theological issues in the current debate, with a particular focus on transhumanism, artificial intelligence, and the ethical challenges facing humanity in our technological culture"
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Science_Conscience
On The Paradox of Intelligence. "The title of this essay [1] is “On the Limit of Artificial Intelligence,” which immediately implies a question: in what way can one talk about the limit of such a thing, given that intelligence, as long as it is artificial, is more susceptible to mutation than human intelligence, whose mechanism is still beyond comprehension? Or in other words, how can we talk about the limit of something that virtually has no limit? The artificiality of intelligence is fundamentally schematized matter. However, it has the tendency to liberate itself from the constraints of matter by acting against it in order to schematize itself."
[1] Hui, Y. (2021). On the limit of artificial intelligence. Philosophy today, 65(2), 339-357. Available on:
https://www.academia.edu/download/82559783/Yuk_Hui_On_the_Limit_of_Artificial_Intelligence_.pdf
See Also:
https://www.researchgate.net/post/Science_Conscience
"Due to the continuous reduction of computer equipment costs and the evolution of learning languages, search engines and decision tools, the ‘devil came out of the box’ and the AI technology is available to the public at large before any safety controls are in place. Elon Musk [1] said on March 28, 2023, that “A.I. is far more dangerous than nukes”. Excerpt from "Nicolau, M. (2024). Artificial Intelligence–Friend or Foe. IPI Letters, 34-41." Available on: https://ipipublishing.org/index.php/ipil/article/download/54/41
The article's conclusion should appeal to scientists, philosophers (and other thinkers), politicians (and other decision-makers). Nicolau ends his article with this disturbing sentence: "The Universal Declaration of Human Rights stipulates the Right of Opinion and Information. Since control of intelligent computers, once operational, can be difficult and the control mechanisms installed at manufacturing time can be bypassed, the humanity must decide between Censorship and the protection of human rights such as the Right of Expression."
The paper by Bryson, J. J., & Malikova, H. 2021, "Is there an AI cold war?. Global Perspectives, 2(1), 24803", documents and analyzes the "extremely bipolar picture prominent policymakers and political commentators have been recently painting of the AI technological situation, portraying China and the United States as the only two global powers." The paper's findings call into question certain ideas, however well documented and widely claimed. They also illuminate the uncertainty concerning digital technology security and recommend that all parties engage toward a safe, secure, and transparent regulatory framework. Paper available on:
https://www.delorscentre.eu/fileadmin/2_Research/2_Research_directory/Research_Centres/Centre_for_Digital_Governance/5_Papers/Other_papers/BrysonMalikova21__002_.pdf
See Also:
https://www.researchgate.net/post/To_WW3_or_Not_To_WW3_That_is_The_Question_to_Ask_Scholars
“In human-AI interaction, the design of machines will need to account for the irrational behavior of humans.” This is the paramount idea which emanates from the review by Macmillan-Scott, O., & Musolesi, M. (2023). (Ir)rationality in AI: State of the Art, Research Challenges and Open Questions. arXiv preprint arXiv:2311.17165. Available on: https://arxiv.org/pdf/2311.17165.pdf
One can read within the conclusion: "The question of interacting with irrational agents is crucial not only among machines, but also because humans often act in irrational ways. Human-AI interaction is a key aspect of today's AI systems, namely with the case of systems based on large language models and their widespread use. Cognitive biases may in some instances be leveraged to improve the performance of artificial agents, whereas in human-AI interaction the design of machines will need to account for the irrational behavior of humans"
See Also:
https://www.researchgate.net/post/Science_Conscience
"The performance of these AI systems increases exponentially, which requires exponentially increasing resources as well, including data, computational power, and energy. This development is not sustainable and there is a need for new AI approaches, which give careful consideration to limited resources." This is what the Chapter by Kozma, R. (2024), "Computers versus brains: Challenges of sustainable artificial and biological intelligence. In Artificial Intelligence in the Age of Neural Networks and Brain Computing (pp. 129-143), Academic Press" is about. More precisely, the author describes various aspects of biological and artificial intelligence and discusses "how new AI could benefit from lessons learned from human brains, human intelligence, and human constraints." In doing so, the research introduces "a balanced approach based on the concepts of complementarity and multistability as manifested in human brain operation and cognitive processing. This approach provides insights into key principles of intelligence in biological brains and it helps building sustainable artificial intelligence."
About the book
Artificial Intelligence in the Age of Neural Networks and Brain Computing, Second Edition demonstrates that the present disruptive implications and applications of AI are a development of the unique attributes of neural networks, mainly machine learning, distributed architectures, massive parallel processing, black-box inference, intrinsic nonlinearity, and smart autonomous search engines. The book covers the major basic ideas of "brain-like computing" behind AI, provides a framework for deep learning, and launches novel and intriguing paradigms as possible future alternatives.
The present success of AI-based commercial products proposed by top industry leaders, such as Google, IBM, Microsoft, Intel, and Amazon, can be interpreted using the perspective presented in this book by viewing the co-existence of a successful synergism among what is referred to as computational intelligence, natural intelligence, brain computing, and neural engineering. The new edition has been updated to include major new advances in the field, including many new chapters.
https://www.sciencedirect.com/book/9780323961042/artificial-intelligence-in-the-age-of-neural-networks-and-brain-computing
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Axiomatic versus Non-Axiomatic Logic. Denying that Aristotle's logic admits of a "reductio" rule results from a misrepresentation of reduction. The recent essay by Boger, G., "The Place of Reduction in Aristotle's Prior Analytics. History and Philosophy of Logic, 1-34", shows that "the defects imputed to Aristotle's logic, and systems devised to resolve them, result from misunderstanding reduction, which itself results from misapprehending Prior Analytics expressly to identify in a metadiscourse deduction rules and not deductions per se."
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Axiomatic versus Non-Axiomatic Logic. What about Fuzzy Logic? "Fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning." (L.A. Zadeh [1]). The paper [2] by Dzitac et al. (2017) pays tribute to the work of world-renowned computer scientist Lotfi A. Zadeh. It presents "general aspects of Zadeh’s contributions to the development of Soft Computing (SC) and Artificial Intelligence (AI), and also his important and early influence in the world and in Romania". One may read within this article: "In 1965 Lotfi A. Zadeh published "Fuzzy Sets", his pioneering and controversial paper, that now reaches almost 100,000 citations. All Zadeh’s papers were cited over 185,000 times. Starting from the ideas presented in that paper, Zadeh founded later the Fuzzy Logic theory, that proved to have useful applications, from consumer to industrial intelligent products".
[1] Zadeh L.A., Is there a need for fuzzy logic?, Information Sciences, 178, 2751-2779, 2008.
[2] Dzitac, I., Filip, F. G., & Manolescu, M. J. (2017). Fuzzy logic is not fuzzy: World-renowned computer scientist Lotfi A. Zadeh. International Journal of Computers Communications & Control, 12(6), 748-789.
Available on:
https://www.univagora.ro/jour/index.php/ijccc/issue/view/113/pdf_216
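As a small illustration of Zadeh's point that fuzzy logic is itself precise, here is a minimal sketch (the membership values are invented for the example) of his standard min/max/complement operators on membership grades in [0, 1]:

```python
# Zadeh's standard fuzzy operators: intersection = min, union = max,
# complement = 1 - grade. The membership values below are made up.

def f_and(a, b): return min(a, b)     # e.g. "warm AND humid"
def f_or(a, b):  return max(a, b)     # e.g. "warm OR humid"
def f_not(a):    return 1.0 - a       # e.g. "NOT humid"

warm  = {"10C": 0.10, "20C": 0.60, "30C": 0.95}  # degree of being "warm"
humid = {"10C": 0.40, "20C": 0.70, "30C": 0.30}  # degree of being "humid"

for t in warm:
    print(t,
          "warm AND humid =", f_and(warm[t], humid[t]),
          "| warm AND NOT humid =", f_and(warm[t], f_not(humid[t])))
```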
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
On the AI Paradox. While AI promises a revolution in education, making it more accessible, dynamic, and tailored to individual needs, it simultaneously demands a reevaluation of traditional academic values and practices. Contributing to this discourse, the recent paper by Scherpereel, C. M. (released a week ago), "The AI Paradox: Unpacking the Potential and Perils in Business Education. In Developments in Business Simulation and Experiential Learning: Proceedings of the Annual ABSEL conference (Vol. 51)", elaborates on the dual nature of AI in business education: "its unparalleled potential to revolutionize pedagogy and its inherent challenges that could undermine the very essence of academic rigor and integrity. This duality encapsulates the AI Paradox, emphasizing the need for a balanced approach in integrating AI into the educational landscape, one that harnesses its potential while vigilantly addressing its associated perils."
Available on:
https://absel-ojs-ttu.tdl.org/absel/article/download/3391/3330
See Also:
https://www.researchgate.net/post/Sciences_Paradoxes
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
The just-published article by Baronchelli, A., "Shaping new norms for AI. Phil. Trans. R. Soc. B 379, 2024", aims to offer readers "interpretive tools to frame society’s response to the growing pervasiveness of AI. An outlook on how AI could influence the formation of future social norms emphasizes the importance for open societies to anchor their formal deliberation process in an open, inclusive and transparent public discourse." Available on:
https://royalsocietypublishing.org/doi/pdf/10.1098/rstb.2023.0028
"One way to understand the philosophy of AI is that it mainly deals with three Kantian questions: What is AI? What can AI do? What should AI be? One major part of the philosophy of AI is the ethics of AI". The Chapter by Müller, V. C., 2024, "Philosophy of AI: A structured overview. In Cambridge Handbook on the law, ethics and policy of Artificial Intelligence. Cambridge: Cambridge University Press", presents "the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI Philosophy” provides a new method for philosophy. The philosophy of AI is separated from its coüsin, the philosophy of cognitive science, which in türn is closely connected to the philosophy of mind"
Available on:
https://philarchive.org/archive/MLLPOA
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
My Dear,
I apologize for not being able to deal with this discussion with more care and time.
As I feel quite limited in giving a specialized treatment to the problem of artificial intelligence, I will stick to my position in the philosophy of mind, and perhaps this will give rise to some contribution to the problem under discussion.
My position on the philosophy of mind and the attribution of mental content is classically phenomenological. This does not mean that I buy all of Husserl's ideas, but rather one fundamental foundation: the idea that the attribution of mental content, or the raw-data description of psychological processes, can only be done with any level of rigor within an idealized dimension (which, by virtue of tradition, we call phenomenological). With this I place myself in opposition to the dominant tendency to naturalize mental content (teleosemantics).
When utilizing classical phenomenological methodology to characterize the identity of mental content, there always exists the potential for diverse interpretive paths to unfold within the mental process. The "content" itself will only manifest as an inherent "datum" through these interpretations. As these interpretations must adhere to some semantic coherence, this does not imply a catastrophic outcome for communicational purposes; nor does it lead to relativism. There will be a significant extensional overlap in the possible interpretations of the same content. However, there will also persist a certain degree of danger. The indeterminacy of this content can give rise to a phenomenological void, which may be filled by arbitrary decisions rooted in philosophical perspectives driven by ideology. For instance, in a society predominantly influenced by liberal ideologies, a computer learning how to count a sentence as true or false will invariably be interpreted within the parameters of a bourgeois society. Well-trained psychologists may be aware of this predominance of parameters but still be defenseless against it. Ultimately, the "content" will no longer be seen as "raw-data" from the computer process, but rather as an idealized view of its intentionality. Hence, the content appears concurrently with the parameter selection; it does not exist in advance as "raw-data," even if it is included in the list of potential interpretations.
The part where my position on the philosophy of mind can contribute to the discussion on A.I. is therefore this: from my perspective, there is no such thing as the raw data of a psychological process and, therefore, neither will there be a way to determine the intelligence of a computer by observing only its physical behavior. When we attribute intelligent meaning to a computer's behavior, we are exerting external pressure on the characterization of that meaning. A society's authorized and accredited psychologists will contribute even more directly than we do to the formation of these pressures. Consequently, different cultures may interpret the same computational pattern through distinct conceptual frameworks, resulting in varying ascriptions of intentionality to the computer.
Now, confessions: my position does have a certain anachronistic air. It is a view of the mind that is very little accepted nowadays, which in many aspects assumes the idea of mind as something hermetic and mysterious, something that only takes on a form as a "thing" to the extent that it can be the target of an idealized, phenomenological description. There is also the danger of relativism, because according to my position, what is called "predictable" mental "content" in a given society is subject to diverse culture-centric interpretations and is shaped by the prevailing parameters of rationality.
I hope I could contribute to something. Lucas
Dear Jamel,
AI isn't a real field of research but a SCAM. Yes, the scam gets research funding from scientific agencies, which thus participate in the scam.
scam
noun
INFORMAL
1. a dishonest scheme; a fraud.
Rockefeller’s 1955 grant to John McCarthy, then assistant professor of mathematics at Dartmouth College. The $7,500 award funded a summer research group at Dartmouth College to investigate the theory that machines could be programmed to mimic features of human intelligence. It was in his proposal to the Rockefeller Foundation that McCarthy first coined the term “artificial intelligence.”
In his proposal, he stated that the conference was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
So here is a guy fresh from graduating in mathematics, with zero background in psychology or biology, who simply conjectured that all aspects of learning and intelligence can in principle be described precisely, and that he was going to do it in a summer project; and where he got this hunch nobody ever found out, nor did his followers in this scam. The name of the field, “artificial intelligence”, is itself nonsensical and is a proper name for this scam. Vincent C. Müller presents himself as an AI philosopher. Of course funding will be available for such a scam, since it is a doubling down on the scam. Why has funding been consistently flowing to it? This is part of many other globalist scams: the transhumanism agenda, the control and surveillance of people, and all this has to be masqueraded under various scams.
Regards,
- Louis
Louis Brassard Perhaps "AI isn't a real field of research but a SCAM" that "gets research funding from scientific agencies, which thus participate in the scam". The fact remains that AI is now part of our everyday lives, and dealing with it is simply common sense; besides, it's no longer a choice. The sooner we ask the real questions and provide them with courageous answers, the better equipped we will be to face the scientific and societal challenges (and perhaps perils) that are emerging on the near horizon.
In the same vein, AI technology advances and raises fundamental questions about our spiritual relationship with technology. The book by Swati Chakraborty, released in October 2023, "Investigating the Impact of AI on Ethics and Spirituality, IGI Global Ed.", focuses on the spiritual implications of AI and its increasing presence in society and "emphasizes the need to examine the ethical considerations of AI through a spiritual lens and to consider how spiritual principles can inform its development and use." The book covers topics such as "data collection, ethical issues, and AI and is ideal for educators, teacher trainees, policymakers, academicians, researchers, curriculum developers, higher-level students, social activists, and government officials"
Description: Artificial intelligence (AI) is beginning to appear in everything from writing, social media, and business to wartime or intelligence strategy. With so many applications in our everyday lives and in the systems that run them, many are demanding that ethical implications are considered before any one application of AI goes too far and causes irreparable damage to the personal data or operations of individuals, governments, and organizations. For instance, AI that is fed data sets that are influenced by human data collection method biases may be perpetuating societal biases with implicit bias that can create serious consequences. Applications of AI with implicit bias on recidivism prediction models as well as medical algorithms have shown biases against certain racial or ethnic groups, leading to actual discrimination in treatment by the legal system and the medical systems.
Regulatory groups may identify the bias in AI but not the source of the bias, making it difficult to determine who to hold accountable. Lack of dataset and programming transparency can be problematic when AI systems are used to make significant decisions, and as AI systems become more advanced, questions arise regarding responsibility for the results of their implementation and the regulation thereof. Research on how these applications of AI are affecting interpersonal and societal relationships is important for informing much-needed regulatory policies.
See: https://www.google.tn/books/edition/Investigating_the_Impact_of_AI_on_Ethics/gmLbEAAAQBAJ?hl=fr&gbpv=1&dq=%22scam%22+%22AI%22&pg=PA13&printsec=frontcover
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
Dear Jamel,
''The fact remains that AI is now part of our everyday lives, and dealing with it is simply common sense; besides, it's no longer a choice. The sooner we ask the real questions and provide them with courageous answers, the better equipped we will be to face the scientific and societal challenges (and perhaps perils) that are emerging on the near horizon.''
I did not deny the fact that there is a scam calling itself ''AI'', and the first thing to do with a scam is to say it is a scam. Most of what is published on the topic participates in the scam. There are some rare exceptions. Here is one such exception.
https://www.jaronlanier.com/agentalien.html
by Jaron Lanier
''I find myself holding a passionate opinion that almost nobody in the "Wired-style" community agrees with and I'm wondering; What's gotten into all of you? I find that in the wide, though shrinking, world away from computers most people find my position obvious, while infophiles find it impenetrable. I am trying to bridge a chasm of misunderstanding.
Here is the opinion: that the idea of "intelligent agents" is both wrong and evil. I also believe that this is an issue of real consequence to the near term future of culture and society.
...
Agents make people redefine themselves into lesser beings. THAT is the monster problem.
Am I making an inappropriately broad claim here? I don't think so.
You see, the problem is that the only difference between an autonomous "agent" program and a non-autonomous "editor/filter" program is in the psychology of the human user. You change yourself in order to make the agent look smart. Specifically, you make yourself dumb.
Well, actually, agent programs as a rule will also have worse user interfaces than non-agent programs.
Here is how people reduce themselves by acknowledging agents, step by step:
Step 1) Person gives computer program extra deference because it is supposed to be "smart" and "autonomous". (People have a tendency to yield authority to computers anyway, and it's a shame. In my experience and observations, computers, unlike other tools, seem to produce the best results when users have an antagonistic attitude towards them.)
Step 2) Projected autonomy is a self-fulfilling prophecy, as anyone who has ever had a teddy bear knows. The person starts to think of the computer as being like a person.
Step 3) As a consequence of unavoidable psychological algebra, the person starts to think of himself as being like the computer.
Step 4) Unlike a teddy bear, the computer is made of ideas. The person starts to limit herself to the categories and procedures represented in the computer, without realizing what has been lost. Music becomes MIDI, art becomes Postscript. I believe that this process is the precise origin of the nerdy quality that the outside world perceives in some computer culture.
Step 5) This process is greatly exacerbated if the software is conceived of as an agent and is therefore attempting to represent the person with a software model. The person's act of projecting autonomy onto the computer becomes an unconscious choice to limit behaviors to those that fit naturally into the grooves of the software model.
Even without agents, a person's creative output is compromised by identification with a computer. With agents, however, the person himself is compromised.
...
Agents are the work of lazy programmers. Writing a good user-interface for a complicated task, like finding and filtering a ton of information, is much harder to do than making an intelligent agent. From a user's point of view, an agent is something you give slack to by making your mind mushy, while a user-interface is a tool that you use, and you can tell whether you are using a good tool or not.
...
So agents are double trouble. Evil, because they make people diminish themselves, and wrong, because they confuse the feedback that leads to good design.
But remember, although agent programs tend to share a set of deficiencies, it is your psychology that really makes a program into an agent; a very similar program with identical capabilities would not be an agent if you take responsibility for understanding and editing what it does. An agent is a way of using a program, in which you have ceded your autonomy. Agents only exist in your imagination. I am talking about YOU, not the computer.
...
The artificial intelligence question is the abortion question of the computer world. What was once a research topic has become a controversy where practical decisions must reflect a fundamental ontological definition about what a person is and is not, and there is no middle ground.
Feelings in the computer community run very deep on this subject. I have had literally hundreds of people come up to me after I have given a talk saying that it had completely changed the way they thought about computers and had answered a vague unease about some computer trends that they had never heard articulated before.
Still other members of the community are, I believe, overcome with a reaction of denial, and convince themselves that I am saying nothing more than that agents aren't good enough yet. It is this final group that surprises and infuriates me. There is such a universal orthodoxy holding that artificial intelligence is a useful and valid idea that many in the computer community can read this essay and believe that I am only criticizing certain agents, or expressing a distaste for premature agents. They are somehow unable to grasp that someone could categorically attack ALL agents on the basis that they do not exist, and that it is potentially harmful to believe that they do. They have staked their immortality on the belief that the emperor is indeed wearing new clothes.
Ultimately there is nothing more important to us than our definition of what a person is. Isn't this the core question in a great many controversies? This definition drives our ethics, because it determines what is enough like us to deserve our empathy. Whether we are concerned with animal rights, whether we feel it is essential to intervene in Bosnia; our perceived circle of commonality determines our actions.
...
I have long believed that the most important question about information technology is "How does it effect our definition of what a person is?" The AI/agent question is the controversy that exposes the answer. It is an answer that is directly consequential to the pattern of life in the future, to the quality of the technology which will be the defining vessel of our culture, and to a spiritual sense of whether we are to be bounded by ideas or not.
We cannot expect to have certain, universal agreement on any question of personhood, but we all are forced to hold an answer in our hearts and act upon our best guess. Our best guess runs our world.
Louis Brassard Thank you for this insightful post. I agree with the idea that "the most important question about information technology is 'How does it effect our definition of what a person is?'" This is a philosophical question, the answer to which cannot be provided by a machine. Only a person (the one concerned) can think (philosophize) about what is "directly consequential to the pattern of life in the future, to the quality of the technology which will be the defining vessel of our culture, and to a spiritual sense of whether we are to be bounded by ideas or not". Thank you again for sharing your thoughts about these paramount issues.
It might not be possible to prove that any philosophers have not been biological machines.
Jerry Waese wrote, "It might not be possible to prove that any philosophers have not been biological machines." IMO this makes sense as soon as we admit that the human being is a "biological machine". The question then becomes: what are the specific attributes of this "biological machine" that is the human being, in comparison with the "non-biological machine", even an informatic one, that is at the origin of AI production?
I think that the crux of it is, Jamel Chahed, that our biological machine brains evolved to augment the navigation of something like a polychaete worm, whose nerves and muscles could already perform.
When to swish and when to sway was initially random, phototropic, or chemotactic, as it is for protozoans, but a ganglion evolved at the anterior end (near the phototropic sensors and the engulfing mouth part) which could swish when it was familiar to do so, and sway when that worked last time in a similar situation.
Basically that is the essence of associative memory and perceptive reflexes: a navigation aid that creates familiarity out of sensory exposures, and produces reflexes that link to other familiar positions.
Pretty much all of our basic mental forms come from that; the associative process (both memory formation and perceptive reflex) is intimate with body processes, including engulfment and navigation by swishing, swaying and writhing, extended to every sense and articulated appendage that animates our body shapes.
Some of the worm-like roots are not so visible in our articulate vocal cords, dexterous fingers and occasionally gymnastic turns, but with the assistance of the cerebellum for fine timing intervals, initiating reflexes and suppressing them with precision is the same basic thing in our brains as in those of a snail, octopus, bee, or bird.
The sheer extent of our associative real estate (the size of the cortex) lets us navigate concepts beyond our flexing spines, appendages and guts, but in our conceptual universe we are still doing the same things when challenged: mentally writhing, approaching, engulfing, arranging, returning, etc.
Our mental process is still based on navigation by familiarity, not informatics, which is a more precise range of operations, not necessarily related to a body or to finding one's way. However, machines that learn their own world and navigate it to thrive may test the difference; but at the moment that stuff (autonomous vehicles) is all GPS and "object recognition" types of AI, with a driving rule book embedded - which is not a learning core concept at all, but a top-down production issue.
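As a toy sketch of the "swish when familiar" idea (purely illustrative, not a model of real neurons; the environment, names and success rule are invented), an agent that stores which action worked last time in a similar sensory situation acts randomly only until familiarity builds up:

```python
import random

class AssociativeNavigator:
    """Navigation by familiarity: remember the action that worked last time."""
    def __init__(self, actions=("swish", "sway")):
        self.actions = actions
        self.memory = {}                      # situation -> remembered reflex

    def act(self, situation):
        if situation in self.memory:          # familiar: repeat the reflex
            return self.memory[situation]
        return random.choice(self.actions)    # unfamiliar: act randomly

    def feedback(self, situation, action, worked):
        if worked:                            # associate only successful actions
            self.memory[situation] = action

agent = AssociativeNavigator()
for _ in range(30):
    situation = random.choice(["light-left", "light-right"])
    action = agent.act(situation)
    worked = (situation == "light-left") == (action == "swish")  # toy world
    agent.feedback(situation, action, worked)
print(agent.memory)  # typically settles on {'light-left': 'swish', 'light-right': 'sway'}
```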
This is a remarkable thesis by Helliwell (2022), with the evocative title: "Art-ificial: The Philosophy of AI Art", submitted to the University of Kent for the Degree of Doctor of Philosophy in History and Philosophy of Art.
Excerpt: "This thesis aims to contribute to a novel area of philosophical work: the philosophy of AI art. AI art is proliferating online and increasingly in the world of art. The growing presence of works made by (or with) artificial intelligence has led to a clamor of questions such as 'Is AI art really art?' and 'can AI be truly creative?'. As yet, these questions have barely been tackled in the philosophical literature, especially in aesthetics. This thesis aims to address this gap. This thesis starts by establishing what we mean by 'AI art' by examining examples of AI works and the technological underpinnings of these systems. Existing work on the topic of AI art is explained. In particular, Mark Coeckelbergh's three questions on AI art scaffold the first three chapters of the thesis: 'can machines create art?', 'can machines create art?' and 'can machines create art?'.."
Available on: https://kar.kent.ac.uk/105246/1/105ALICE_C_HELLIWELL_-_THESIS_-_PHILOSOPHY_OF_AI_ART_-_KAR_UPLOAD_REDACTED.pdf
See Also:
https://www.researchgate.net/post/Science_Conscience
https://www.researchgate.net/post/Art_of_State-of-the-Art_on_Science_Knowledge
AI art bots certainly do simulate the process of making art; however, their creative abstraction engine uses randomized matching and blending (similar to cybernetic pareidolia), while the artist cannot help but be self-reflective even in their wildest efforts at random expression (e.g., Pollock or de Kooning, whose works externalize their personalities and cannot do other than that).
Random art processes do not have the distinct artist terroir that personal art processes have.
That just leaves the art objects themselves, and for those, except for the accidental DNA which will not naturally be present from an AI, the physical forms are not that distinguishable; i.e., any physical process can be simulated, including new, derivative, and unique-seeming processes.
So we are down to the terroir, which arises because the human artist is unable not to be himself, no matter how he may attempt to get out of the way of the creative process (as Hans Hofmann told his students).
Dear Jerry
''It might not be possible to prove that any philosophers have not been biological machines.''
Notice that none of the founders of AI had any background in anything related to life, and they founded their field on the premise that all forms of life are machines. They pulled it out of their asses. They never questioned this dogma, which they were the most unqualified possible persons to lay down, since they had zero background in whatever field of science or philosophy is related to life. This is still the case 75 years later: those getting funding in AI have no background in what is related to life, but operate with the firm conviction of the dogma that John McCarthy (a guy who only knew about differential equations) stated in his 1955 grant proposal to the globalist financial institution, the Rockefeller Foundation, where he coined the expression ''Artificial Intelligence''.
In his proposal, he stated that the conference was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Of course, if you proceed from the dogma that it is possible in principle to precisely describe every aspect of learning and intelligence, you in fact proceed on the dogma that these are machines. So if the dogma from which you proceed is that life forms are biological machines, that Man is a Machine we may know in all its details, then Machine Intelligence or Artificial Intelligence is part of the dogma. But nothing was ever said by the founders of AI in the direction of investigating whether the dogma has any validity. They proceed on the basis of this dogma and draw all kinds of fallacious conclusions from a never-investigated dogma, and they would be the most unqualified people to investigate it.
Was this dogma new? Not at all. It goes back to the Renaissance and the early years of the modern scientific revolution, where it went by another name: Naturalism. This is a dogma, an unproven and unprovable claim that everything in Nature could in principle be scientifically understood. It had other names too. Materialism was popularised by people such as Julien Offray de La Mettrie, who in his books The Natural History of the Soul (1745) and Man a Machine (1747) extended Descartes' argument that animals are mere automatons, or machines, to human beings, removing the res cogitans that Cartesian philosophy reserved for Man. The same dogma was later given a different name by Leibniz, who called it the principle of Sufficient Reason: the notion that everything must have a reason, cause, or ground.
How could this dogma be empirically supported? It simply can't, for obvious reasons, since it would demand on our part a thoroughgoing intelligibility of absolutely everything. It is not even possible to establish thoroughgoing intelligibility of anything in Nature, with the notable exception of the machines we make, and even there it is only an approximate intelligibility, which we can only achieve by isolating the machine from its natural environment long enough for the approximation to hold. If we look at anything natural, and especially at the biological living world, we have never even remotely established thoroughgoing intelligibility of even the most primitive living entities, not even of proteins. Of course this in no way contradicts the dogma, and neither does the fact that we have never even begun to understand the most basic fact of our life, that we are conscious: we cannot describe in scientific terms what it is; we cannot even frame the question of what it is in a scientific way. But none of this can make a dent in the principle of sufficient reason, since it is not falsifiable, nor even a scientific idea. It is a dogma.
This dogma, as we have seen, has many names, and another of them is ''determinism'': all events have causes, and everything that happens is caused by a chain of events. That is effectively how science models what it can model. Does this mean that determinism is true? Absolutely not, since science can only model very few aspects of reality, only those few where we effectively achieve an approximation of thoroughgoing intelligibility. The fact that science has made progress in its understanding of Nature is no argument for the dogma of the thoroughgoing intelligibility of Nature. It is like saying that because I have been practicing high jump and made remarkable progress, eventually I will jump to the moon, since I am making progress and it is only a question of time. Even though no valid argument has ever been made for determinism, most philosophers deny ''Free Will'' on the basis of it. Of course we don't have free will if the dogma of determinism is true, i.e. if we are biological machines, but no one has ever made even the beginning of a valid argument that the dogma has any validity. Science will forever be restricted to the only aspects of Nature that can be modeled, i.e. the deterministic aspects, and so of course there will never be anything in science that supports the existence of whatever cannot be made thoroughly intelligible. It is a self-fulfilling prophecy.
If one restricts his reasoning to the limits of scientific modeling, then one decides to believe in the deterministic/materialist/principle-of-sufficient-reason/scientistic dogma, of which the AI dogma is just one among many aspects. It reduces everything to itself, and there is nothing outside of it for those bowing to its limitations.
Regards,
- Louis
I am not aligned with that dogma or branch of thinking.
I will agree with some of your thoughts about it being disconnected from biological reality,
Louis Brassard ,
nevertheless, all life forms are based upon cellular and molecular biology, which has observable, consistent mechanics; and the brain and nervous system complete the body in such a way that we can bio-mechanically achieve miraculous thoughts and deeds.
On Biological Machines. "Despite their architectural diversity, physics-based theories have provided unifying themes of the inner working of nanoscale biological machines." From: Mugnai, M. L., Hyeon, C., Hinczewski, M., & Thirumalai, D. (2020). Theoretical perspectives on biological machines. Reviews of Modern Physics, 92(2), 025001.
One may read there: "...“The operative industry of Nature is so prolific that machines will be eventually found not only unknown to us but also unimaginable by our mind”, attributed to Marcello Malpighi, who is considered the founder of microscopic anatomy, histology, and embryology. This statement, made over three centuries ago, is even more relevant today. It is a reminder that mechanical forces must play a fundamental role in biology. In modern times this vast subject falls under the growing field of mechanobiology."
See Also:
https://www.researchgate.net/post/Science_Conscience