ChatGPT scored 155 on an IQ test and has sufficient background to review mathematical proofs, verify scientific formulas, and check for plagiarism traces in real time. However, the scientific community argues that the breach of confidentiality prevents the use of AI as a recognized peer reviewer. What do you think about it? Should writers and journals recognize AI as a valid peer reviewer?
Samir Touzani
Dear Professor,
Thank you for the interesting topic of discussion.
I am not an expert on AI. As far as I understand it, AI is a programmed method of doing something. Human peer review of scientific articles and manuscripts is not just a science but also an art. If AI is used for this purpose, then that individuality (please read: art) will get lost. So, I would prefer to believe that AI cannot replace human peer review of scientific articles and manuscripts.
Best regards,
Anamitra.
AI was not consulted for this answer.
Today's AI is capable of mimicking scholarly feedback. AI grammar, syntax, flow, and logic will be good enough. However, AI does not actually “know” that it is reviewing academic research. It may not be capable of the subtle (or even some obvious) observations that a human expert might make. The output would be flat and mediocre, but acceptable.
If in the near future strong AI is achieved, then the reviews will be at the highest level. But of course if AI is that strong then it will be both author and reviewer.
I think humans can be seduced by AI mimicry. It is quite possible even today that human reviewers will submit work that has actually been processed by AI. If today's weak AI becomes the norm for fast reviews, I think the quality of reviews will be no better than a review from a human subject matter expert.
Confidentiality concerns, if any, will be swept aside as AI utility improves.
Thank you all. George McIlvaine, your first sentence motivated me to ask ChatGPT to verify its own answer about its capability for this; you can find the attached mp4 recorded discussion. My question was: Are you able to review scientific manuscripts to be published in scientific journals, authored by doctors, bachelors, and engineers?
ChatGPT's answer was:
Yes, I can review and provide feedback on scientific manuscripts authored by professionals from various fields, including doctors, bachelors, engineers, and many others. Here's what I can do:
However, it's important to note the following:
If you'd like to proceed, you can share a section or the main points of your manuscript, and I'll do my best to help!
I agree with the responses above, particularly Dr Anamitra Roy that AI would lose the human aspect of peer reviewing.
I used to peer review nursing articles, and would try to help - if necessary - authors (particularly new authors) who had written well but needed a little bit of advice. When I saw the amended version, it was usually fit for publication. These authors were usually nurses and had knowledge and experience but had not yet become authors.
However, other authors had written fine articles but sometimes another pair of eyes looking at their submission, just added a little something. When I was peer reviewed, I often found this and was grateful for that extra bit of help with what I had felt was really good work on my part, but the peer reviewers just added that little bit extra.
Mary C R Wilson, yes, of course peer review is necessary; thank you for sharing your experience. But AI will surely make dramatic changes everywhere. I remember when the internet became available to the public, someone told us the internet was just a phone showing pages; now we can see how the internet has transformed processes everywhere. AI will do the same within the next two or three decades, and I am optimistic about that despite the fears expressed many times. I think we should adapt the scientific methodology to make AI part of it, like reasoning and experimentation.
No, at the current level of AI it is not possible. But in the future, AI technology with consciousness may replace human peer reviewers.
Using AI in peer review
Tools like ChatGPT can help, but transparency is vital, say Mohammad Hosseini and Serge Horbach
In May, Northwestern University (Chicago) researcher Mohammad Hosseini and Aarhus University (Denmark) researcher Serge Horbach took a closer look at how generative AI programmes like ChatGPT could change peer review. As the technology develops at breakneck speed, their exploration offers a guide to the positives and negatives of using AI...
https://www.researchprofessionalnews.com/rr-news-europe-views-of-europe-2023-8-summer-reads-using-ai-in-peer-review/
The integration of AI tools into the peer review process can be beneficial in assisting with certain tasks such as language editing and conflict of interest detection. However, the use of AI tools must be continually evaluated and responsibly implemented to ensure that they are not perpetuating biases or impacting the quality and reliability of scholarly literature. The expertise and judgment of human reviewers will always be essential in ensuring the rigor and dependability of the peer review process, and the continued integration of AI tools should be viewed as a complementary tool rather than a replacement...
https://www.enago.com/academy/chatgpt-disrupt-peer-review-science-vigilance/
Generative AI’s potential role in peer review is complex, with the capacity for great time-saving efficiency as well as for severe ethical violations and misinformation. In theory, generative AI platforms could be used throughout the peer review process, from the initial drafting to the finalization of a decision letter or a reviewer’s critiques. An editor or reviewer could input a manuscript (either in whole or individual sections) into a generative AI platform and then prompt the tool for either an overall review of the paper or for a specific analysis, such as evaluating the reproducibility of the article’s methods or the language clarity. However, this comes with a myriad of potential benefits and drawbacks...
https://blog.cabells.com/2023/09/13/the-role-of-generative-artificial-intelligence-in-peer-review/
Some journals have proposed adopting generative AI tools to augment the current peer review process and to automate some processes that are currently completed by editors or reviewers, which could meaningfully shorten the time required to complete a thorough peer review. Currently, few publishers have posted public position statements regarding the use of generative AI during peer review; an exception is Elsevier, who has stated that book and commissioned content reviewers are not permitted to use generative AI due to confidentiality concerns. The future of generative AI integration into journals’ manuscript evaluation workflows remains unclear...
https://blog.cabells.com/2023/09/13/the-role-of-generative-artificial-intelligence-in-peer-review/
The advent of easily accessible large scale natural language processing tools like ChatGPT is opening a new realm of ethical and practical considerations. Non-article research outputs like data, methods, and code are gaining prominence, evolving from nice-to-have supporting documentation to citable published artifacts, formally preserved in the scientific record. Peer reviewers face increasing demands on their time and expertise, making it more challenging to secure reviewers. As it did in so many other areas, the pandemic has accelerated that trend—and that is just the beginning.
But in spite of these seismic shifts, peer review itself remains largely unchanged, both in its value to the scholarly community and its day-to-day practice at journals. Peer review is the primary way that journals evaluate the rigor, credibility, and potential interest of research submitted for publication consideration. What does the changing publishing landscape mean for the practice of peer review, and for peer reviewers themselves?
https://peerreviewweek.wordpress.com/
Ljubomir Jacić, I think peer reviewers will have AI as a powerful tool to accelerate the handling of demands; this pushes peer reviewers to learn prompting in order to leverage AI's advantages. As you said, demands have increased; rigor can be verified primarily by AI (math proofs, for example), and interest as well, using prompts. The change, I think, is that reviewers will use AI prompts and special extensions during the review process.
How is artificial intelligence shaping peer review? What are its benefits and risks?
There are beneficial uses for AI, if done carefully, like automating various aspects of the process — matching manuscripts with the right reviewers, identifying potential ethical issues, assessing the language quality and writing. All these things can be done reliably with AI now, and they can increase efficiency and take those tasks away from editors and reviewers, to allow them to focus on the science.
There's also the possibility that AI becomes so good that it actually can do peer review. Of course, nobody believes that right now, but we also didn't believe that open AI would be at the stage it is today. ChatGPT is passing college exams.
The challenge, though, is that AI algorithms can inherit biases from the data they're trained on. It could lead to even more bias, like biased reviewer recommendations. We have to ensure we're making efforts to eliminate that and reduce unintended bias.
There are also ethical considerations around privacy and data security and transparency. Authors and reviewers need to be aware of how their data is being used and who has access to it.
And there are some things AI tools are still not capable of doing — evaluation that you need human judgment for. AI algorithms can't yet determine what's novel or groundbreaking. They’ve been trained on existing research, and it's new discoveries we're looking for...
https://www.aps.org/publications/apsnews/202310/peer-review.cfm
Can AI replace human peer reviewers for scientific articles and manuscripts? No, not at all. AI cannot replace the totality of human potential and experience in reviewing scientific manuscripts.
Artificial Intelligence–Assisted Review
Overview: Artificial intelligence and machine learning software are developed to catch common errors or shortcomings, allowing peer reviewers to focus on more conceptually-based criticism, such as the paper’s novelty, rigor, and potential impact. This strategy is more widely seen in humanities and social sciences research.
Pros: Increases efficient use of peer reviewers’ time; improves standardization of review; can automate processes like copyediting or formatting
Cons: Requires extensive upfront cost and development time as well as ongoing maintenance; prone to unintentional bias; ethically dubious; requires human oversight...
https://blog.cabells.com/2023/09/20/innovations-in-peer-review-increasing-capacity-while-improving-quality/
"Fixed knowledge differs from the ever-evolving living knowledge that humans engage with critically when reviewing and critiquing articles and publications. Artificial intelligence cannot substitute for humans in such a task."
We hear so often about the things that are broken in peer review. Not enough reviewers, slow turnaround times, and imperfect measures of impact. Rarely do we raise our glasses to the things that are going well, or that have greatly improved in peer review. However, as we move into an environment assisted by AI, being aware of what is going well and what we should celebrate enhances our motivation to get things done...
Peer review is having a bit of a renaissance, especially with the growth of papermills, predatory publishing, compromised peer review, and a growing list of retractions. While it is not perfect, peer review is one of the best processes we have to reduce the amount of junk science that gets published!
On the humanity of peer review - the importance of people in the process of creating, reviewing, and publishing research.
"While systems, tools, and standards can provide helpful (and often essential) support, it’s the contributions of individuals that will be most critical to the future of both peer review and publishing..."
https://scholarlykitchen.sspnet.org/2023/09/28/guest-post-striking-a-balance-humans-and-machines-in-the-future-of-peer-review-and-publishing/?informz=1&nbd=6f03e560-5431-4744-8998-e00223ee7a82&nbd_source=informz
https://scholarlykitchen.sspnet.org/2023/09/25/some-thoughts-on-peer-review-and-the-future-of-publishing/?informz=1&nbd=6f03e560-5431-4744-8998-e00223ee7a82&nbd_source=informz
Peer reviewers are not paid, which is not fair. The system relies entirely on the altruism of peer reviewers.
The following article offer the help of AI:
Ending Human-Dependent Peer Review
The only way to make the situation fairer is by ending the human-dependent review system. We should invest more in AI-related components of journal review system, and gradually move away from the current human-dependent one. The arguments against the AI-dependent review are that AI can’t do critical thinking like humans and is based on algorithms that are frequently biased. We need to properly train AI to reduce the prevailing algorithmic limitations. Reviews of the existing AI-run article review software and models show certain degrees of effectiveness and efficiency, but that they are not quite ready to replace the human reviewers...
https://scholarlykitchen.sspnet.org/2023/09/29/ending-human-dependent-peer-review/?informz=1&nbd=6f03e560-5431-4744-8998-e00223ee7a82&nbd_source=informz
Interesting Proposal Prof. Samir Touzani
No human peer review, and I would add an AI test on text similarity and on vocabulary meaning, to make it robust.
I agree with Prof. Ljubomir Jacić statement.
Kind Regards.
‘We’re All Using It’: Publishing Decisions Are Increasingly Aided by AI. That’s Not Always Obvious.
Using AI as an assistant is a growing trend among academic editors, as journals field more submissions while tapping a depleting well of peer reviewers. In this reality, an AI tool that can quickly identify whether a paper’s subject matter falls within a journal’s scope, or can expeditiously find potential peer reviewers with relevant expertise, can be valuable...
Trust, after all, is the metaphorical Everest for artificial intelligence...
https://www.chronicle.com/article/were-all-using-it-publishing-decisions-are-increasingly-aided-by-ai-thats-not-always-obvious?emailConfirmed=true&supportSignUp=true&supportForgotPassword=true&email=jacic%40mts.rs&success=true&code=success&bc_nonce=0aa51m9m2euub4uro0nufxh&cid=gen_sign_in
I agree with Murtala Ismail Adakawa
It depends on the discipline. I have always been published in journals for nurses or carers and I think it is important that the peer reviewer has insight into what is necessary for articles related to caring for people.
Dear Samir Touzani, I fully agree with Ljubomir Jacić. I would like to add that AI, being objective in reviewing, perhaps could not account for the fact that many journals strongly recommend that peer reviewers help authors make the manuscript publishable. Therefore, journals might lose a few percentage points of their earnings, but the scientific literature would gain a lot.
The Peer Review Renaissance: An Urgent Call for Transformation
How can authors and reviewers unite and work together as a team instead of being on opposite sides? How can reviewers have ongoing discussions and conversations with the author during the review process itself? And what measures can authors and reviewers take to harmonize their efforts with the common aim of refining the paper to its highest potential?
This peer review renaissance is not merely a destination but a journey of continual improvement. There will be a lot of failed experiments as the process evolves. But every experiment will take us one step closer to our goal of making the process more robust, efficient, diverse, and collaborative...
How can a combination of human expertise and AI make the peer review process more efficient? I envision a future where AI is not a threat or a cause for worry but a tool for enhancing efficiency...
https://scholarlykitchen.sspnet.org/2023/10/12/the-peer-review-renaissance-an-urgent-call-for-transformation/?informz=1&nbd=6f03e560-5431-4744-8998-e00223ee7a82&nbd_source=informz
The selection of an AI model is influenced by various aspects, including accuracy, availability, reliability, and ethics. What are your thoughts on improving the database? The AI system must provide evidence of its performance by achieving an R² value of 1 and a MAPE value of 0.00%. Is this possible?
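For concreteness, a minimal sketch of how the two metrics mentioned above are computed (the data values are purely illustrative). R² = 1 and MAPE = 0.00% hold only when every prediction matches the true value exactly, which is why such a requirement is rarely achievable in practice:

```python
def r2_score(y_true, y_pred):
    # Coefficient of determination: R^2 = 1 - SS_res / SS_tot
    mean_true = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    # Mean absolute percentage error, expressed in percent
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative data: an imperfect model gives R^2 < 1 and MAPE > 0
y_true = [3.0, 5.0, 8.0, 12.0]
y_pred = [2.8, 5.1, 8.4, 11.5]

print(r2_score(y_true, y_pred))            # close to 0.99
print(mape(y_true, y_pred))                # a few percent
print(r2_score(y_true, y_true), mape(y_true, y_true))  # perfect fit: 1.0 and 0.0
```

Only the degenerate case of a perfect prediction reaches both targets at once, so demanding R² = 1 and MAPE = 0.00% amounts to demanding zero residual error.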
Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. In the interests of fostering a wider conversation about how generative AI may be used, we have developed a preliminary set of recommendations for its use in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work...
Article Editors’ Statement on the Responsible Use of Generative AI T...
Milan Marija Mirkovic, in this example ChatGPT Plus is acting as a data scientist better than a human: https://medium.com/@teja.btc07/data-science-made-super-easy-with-chatgpt-plus-83745cf4fb9b
Perhaps RG should use AI to prevent all the pointless answers from seeing the light of day.
And the deliberately faux-ignorant questions that are posted to boost the author's visibility.
James Garry Karl Pfeifer, please post relevant answers. I may simply ignore your answers, but posting such answers serves no purpose, good or bad; some funny answers would be acceptable.
Samir Touzani
Maybe you should learn to read between the lines. Or, to repurpose an old saying, peer review begins at home. Cleaning up or preempting the crap on RG would be an easy form of peer review. If even that small step isn't possible, then there is no hope for more nuanced peer review that requires recognition of creativity and insight, and yes, sometimes the ability to read between the lines.
Karl Pfeifer, sorry, the answer is not clear at all. There is no scientific statement that says "read between the lines"; if we have something to say, we say it, that's all. I am not following up on this suspicious discussion.
Samir Touzani
Scientists and scholars don't always explicitly say what they have to say:
https://academic.oup.com/book/8932/chapter-abstract/155235871?redirectedFrom=fulltext&login=false
https://www.sciencedirect.com/science/article/abs/pii/S088949062300008X
"This study gives some more examples of the benefits of AI. However, we are now at the stage where artificial intelligence can carry out tasks which require creativity and judgement, such as recommending acceptance or rejection, creating reviewer reports, and identifying cases of image manipulation, duplication, and plagiarism. This is where the ethical issues really come to the fore..."
https://publicationethics.org/news/where-next-peer-review-ai
Article Is the future of peer review automated?
https://publicationethics.org/news/where-next-peer-review-part-1
Springer Nature developing new peer review platform
Springer Nature has embarked on a project to build what it describes as the next generation in peer review platforms. Snapp (Springer Nature’s Article Processing Platform) aims to improve the publishing process and provide a more agile response to the growth in open access (OA). The publisher says its rollout marks a 'key investment from the company in the future of publishing' that 'puts editors, authors and reviewers firmly at the centre'...
Using automation, AI and machine learning (ML) technologies, Springer Nature says Snapp offers:
https://www.researchinformation.info/news/springer-nature-developing-new-peer-review-platform?utm_campaign=RI%20Newsline%2012-12-23&utm_content=Springer%20Nature%20peer%20review%20platform&utm_term=Research%20Information&utm_medium=email&utm_source=Adestra
Karl Pfeifer “Perhaps RG should use AI to prevent all the pointless answers from seeing the light of day.“ ->
Dear Karl, the above is, imho, a completely wrong approach. For now, AI is trained on our (human) answers, and one use might be to individually filter such per-user unwanted replies (as RG already partially offers with its blocking feature); but please never allow censorship-style suppression of (seemingly) "unwanted" posts or messages.
Please never forget: the wrong (nonsense) of today is all too often the mainstream of tomorrow!
Maybe now, and in the future even more so... if, on the one hand, it learns scientific intuition, and on the other hand, it does not learn from us sinners to be guided by emotions and prejudices instead of logical arguments.
Christian G. Wolf I had in mind all the answers that merely say "congratulations" or "thank you" or somesuch, not answers that make a point or an observation, in other words answers with content, even if wrongheaded. The mere congrats and thankyous produce clutter in which answers with content get lost.
I completely agree with Karl Pfeifer that cleanup is necessary on RG, and simple filtering does not even need AI; a list of words/phrases with a simple regex will do. It will clean the clutter included in most threads.
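A minimal sketch of such a filter (the phrase list and pattern are illustrative assumptions, not RG's actual moderation logic):

```python
import re

# Illustrative pattern for low-content "clutter" replies: bare
# congratulations/thanks with no substantive text around them.
CLUTTER = re.compile(
    r"^\s*(congratulations?|thank\s*you|thanks|good\s+(work|job)|"
    r"well\s+done|nice\s+(work|post))[\s!.]*$",
    re.IGNORECASE,
)

def is_clutter(reply: str) -> bool:
    """Return True when a reply matches the clutter pattern."""
    return bool(CLUTTER.match(reply))

replies = [
    "Congratulations!",
    "thank you",
    "Good work!!",
    "I disagree: the sample size is too small to support this claim.",
]
kept = [r for r in replies if not is_clutter(r)]
print(kept)  # only the substantive reply survives
```

A real deployment would need a broader list and care with false positives (a "thank you" followed by an actual question should survive), which is exactly why the pattern anchors on the whole line.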
Now to answer the main question
I can say with confidence that I am in the minority of the AI community that does not cheer LLMs. While they are a step forward, they are not what people think they are. A language model can answer based on permutations of the data given to it. While one can guide the response, this does not erase the possibility of hallucinations. LLMs also do not have the reasoning capability to understand science or technology development. Being able to read a language and output sentences does not mean that the reader understands the subject. Understanding the content can be bypassed, even by someone who does not speak the language, by finding matches between input and candidate-output pairs without knowing the precise content and choosing the highest-frequency pair. That is the nature of training neural networks in a nutshell. That is not peer review. As a matter of fact, it would hamper the progress of knowledge.
Doing peer review goes beyond mere comparison against permutations of the data given to the model (that is, if the work is truly "novel"). It also goes beyond mere pattern recognition between inputs and outputs. It assumes contextual background (which is not bounded by context length as in the LLM model), possible knowledge of mathematics, and bridging between different fields, among other nuanced evaluation parameters.
Ljubomir Jacić, the "Springer Nature developing new peer review platform" based on AI may be useful for publishing articles of good quality. In my opinion, human peer review is not as reliable as it may appear from the answers above. I will cite two cases of poor-quality articles published in reputable journals:
The first case is Fleischmann and Pons's article on cold fusion, published in the Journal of Electroanalytical Chemistry and later in Nature; refer to this link for more info: https://repository.lsu.edu/cgi/viewcontent.cgi?article=6312&context=gradschool_disstheses
The second case, more recent, concerns superconductivity: the claim that the material LK-99 allows room-temperature superconductivity. Refer to this link for more info:
https://www.theguardian.com/science/2023/sep/02/room-temperature-superconductor-south-korea-lk-99-nuclear-fusion-maglev
So human peer review is far from perfect at ensuring that good-quality articles are published, and AI is therefore a necessary way to tackle this.
With respect to "... it will clean the clutter in most threads": here is, imho, the main critical point, which I have not seen discussed anywhere. The greeting-style "... good work!" posts are (imho) not just well-meant but useless (in the sense of information); they are (imho) very harmful, intentionally and systematically made posts to gather the highest possible RG ranking as quickly as possible. Some "cleverlies" here have found such innocent-looking posting to be undetectable fraud, because it becomes a massive multiplier of one's own possible maximum of recommendations, which was technically one single recommendation per user (!), as it was and should be.
That limit is, however, mechanistically bypassed once one assembles a comparatively large human network (10-20 members is more than enough), in which 1/20 write "Good work!" and 19/20 (!!!) then recommend that scientifically worthless answer, gaining an about 19-fold higher, illegal or at least fully inadequate and unscientific, level of comment recommendations. I am not sure this is weighted low enough to actually stop such a method from existing in the first place.
Any ideas or comments on this paranoid thinking of mine, or did I just nail it!?!
If so, please send me 10^5+ answer recommendations ( ;-) ) of yours right here if you all agree, once I have shown you what such "fraud" (and it is nothing like fraud, that is, imho) is all about. How can it be otherwise that some articles (as I have noted first-hand quite often in the recent two years or so) get, say, 250+ recommendations but only have, say, 100+ reads?
If so, either such articles must have been read together by a large audience in a university (or rather a bar ...) which then collectively clicked "recommend", or it is, as I suspect, a pseudo-clever form of fraud in a rather not-so-subtle form!
Everyone, please get this: this statistical level of each of us is scientifically meaningless unless we allow ourselves to give it some meaning!!
To all those doing it, who at the same time use part of their time to read "ethics"-related posts like mine: stop this childish behavior and start moving the work forward by writing your own meaningful articles and preprints with the required substance!!
And if you cannot do this (yet) yourself, grow with us (who can), and please STOP turning the brilliant, still-young RG platform into a (stupid) Facebook-style arena!!!
Thank you for your understanding and (reading) time.
Sincerely,
Christian G. Wolf
You all are special as who you are.
Not who you get clicked on here.
Dear Christian G. Wolf, there are many discussions about this issue; if you are interested, I can trace them and post them. I understand the problem well. The RIS may be manipulated by asking many questions and collecting likes thereafter, for both questions and answers. The RIS is not a scientific indicator, but a measure of visibility and "popularity".
An example of one research question which depicts such behavior follows; please do visit.
https://www.researchgate.net/post/Do_you_agree_with_what_Hamas_did_in_Israel#view=653a4340065a6a01930b9ca5
Christian G. Wolf gets to the heart of the matter. But there is a new danger: the generation of such worthless answers by an AI. Something similar has already happened in the judiciary.
https://www.hessenschau.de/panorama/prozesse-landgericht-vermutet-ki-hinter-massenklagen--v1,kurz-ki-hinter-massenklagen-vermutet-100.html
(Regional court-suspects-AI-behind-mass-lawsuits). Possibly we may need to add a captcha to RG.
I am inclined to say that the problem with assessment (of any kind) done by AI is that the assessment is being done without knowing the context.
As AI continues to progress, refining confidentiality protocols, bettering contextual comprehension, and establishing accountability, it could gain broader acceptance as a legitimate peer reviewer. Authors and journals must embrace the potential AI offers while safeguarding the integrity and benchmarks of peer review.
From my perspective, we are gradually moving to what I term a man-machine match. The introduction of AI into the peer review system will raise the demand for systematic science; there is going to be very competitive power among journals, and it should be a healthy one. Transparency in terms of selection, timely review, and publication integrity will become the advantage. Since this system is designed with human interference, there is the question of checks and balances, as the flexibility of the model to select will continue to require consistent upgrades. The rationality in human thinking that predominates in the science community will gradually erode as digitalization becomes the only bedrock for ranking, and some quality innovation that defies AI will be hindered. The question of digital superiority will be the order of the day. We wait to see the first journal to adopt the synergy; considering these facts, some research is going to be shuffled.
Using AI in Peer Review Is a Breach of Confidentiality
"As the scientific community continues to evolve, it is essential to leverage the latest technologies to improve and streamline the peer-review process. One such technology that shows great promise is artificial intelligence (AI). AI-based peer review has the potential to make the process more efficient, accurate, and impartial, ultimately leading to better quality research."
I suspect many of you were not fooled into thinking that was me who wrote that statement. A well-known AI tool wrote those words after I prompted it to discuss using AI in the peer review process. More and more, we are hearing stories about how researchers may use these tools when reviewing others’ applications, and even writing their own applications.
Even if AI tools may have “great promise,” do we allow their use?..."
https://www.csr.nih.gov/reviewmatters/2023/06/23/using-ai-in-peer-review-is-a-breach-of-confidentiality/
Affiliation Bias in Peer Review of Abstracts by a Large Language Model
"Ever-increasing numbers of submitted manuscripts put the academic publishing system and its human peer reviewers under pressure. Researchers have started to use large language models (LLMs) as support for writing and reviewing articles. A threat to the integrity of peer review is the influence of authors’ institutional prestige on the evaluation of their work (affiliation bias). Whether using LLMs to assist in peer review will increase or reduce affiliation bias is unknown. We assessed affiliation bias in peer review of medical abstracts by a commonly used LLM..."
Article Affiliation Bias in Peer Review of Abstracts by a Large Language Model
Now it is possible to write any article with the help of artificial intelligence.
Unfortunately, it was already possible to "write any article" even without the help of artificial intelligence. That (the unholy pressure to publish as much as possible, with typically no one ever reading the result) is the primary reason there are so many, to state it as politely as possible, not really well-written, let alone well-thought-out, formally peer-reviewed papers.
Look at the other side of the fence: at least in the not-so-distant future, all papers (good and bad) will be read, by AI.
To answer the question
"Can AI replace human peer reviewers for scientific articles and manuscripts?"
- it is first necessary to understand what "human peer reviewing" in "conventional" scientific journals is now, while the claim that
“……Now it is possible to write any article with the help of artificial intelligence….”
- isn't true in principle, if we are speaking of really scientific articles. No AI is able to do that, since it can use only already known information, and so correspondingly is able to make only inferences within the framework of existing theories. Though, yes, even a not too advanced AI can now write any number of "quite philosophical" articles, books, etc., in mainstream philosophy; and practically any number of quite scientific-looking articles, etc., in GR, the Standard Model, and cosmology,
- since at least in the last few decades there has been practically no necessity in mainstream science - in philosophy no necessity at all, in fundamental physics practically none - for authors to understand what the words they write in their articles really mean. And that is mostly what the authors of articles published in "peer-reviewed" scientific and philosophical journals now do.
Correspondingly, rather numerous published "peer-reviewed" "fundamental breakthroughs" - in really fundamental physics practically all of them - are really scientifically ungrounded fantastic mental constructions that have no relation to real science; so, say, physics-2024 is the same as physics-1980, and at least 90% the same as physics-1930,
- including constructions that contain all the fundamental flaws that appeared in physics more than 100 years ago. The situation in other sciences is rather probably similar.
I.e., most of the "peer-reviewed" "fundamental breakthroughs" above are really nothing but secondary papers, and that is a completely rigorous, completely objective, and completely true evaluation of what science now is - and of what "peer-reviewing" really is.
The situation above in "peer-reviewing" is essentially determined by the purely mafia rule, introduced in journals a couple of decades ago, that "the authors should propose in a submission a few non-blind reviewers"; and real peer-reviewing, as it existed throughout the whole history of science up to roughly the early 1990s, has simply lost any scientific sense,
- since now the editors look at a submission's "non-blind reviewers" list really only to evaluate whether the author(s) belong to some authoritative scientific group. If they belong, any such paper is published; if reviewers aren't proposed in a submission, or the proposed ones don't belong, such papers are mostly rejected by the editors; and if a submitted paper contains really important scientific results, it is rejected obligatorily, so as to block information about the results and the authors until the results can be "discovered" by some "more correct" authors, including the editors themselves.
An example: nearly 70 submissions of papers in the framework of the philosophical 2007 Shevchenko-Tokarevsky "The Information as Absolute" conception [ https://www.researchgate.net/publication/363645560_The_Information_as_Absolute_-_2022_ed ] and the Planck-scale informational physical model; the 3 main papers are
https://www.researchgate.net/publication/354418793_The_Informational_Conception_and_the_Base_of_Physics
https://www.researchgate.net/publication/367397025_The_Informational_Physical_Model_and_Fundamental_Problems_in_Physics
https://www.researchgate.net/publication/369357747_The_informational_model_-Nuclear_Force
- were rejected in 2010-2023 by the editors of "peer-reviewing" journals, and even by preprint-publishing services.
At that, the conception solves most of the problems in real philosophy, and the model is the base of the now fundamentally necessary step for real physics along the way "classical physics" – QM – Planck-scale physics; for the last example see the comments on the submission of the paper "The informational model - Nuclear Force" at
https://www.researchgate.net/publication/369357747_The_informational_model_-Nuclear_Force/comments, and so the papers were/are completely, undoubtedly publishable.
The post is rather long already, so now
Cheers
Let’s continue [see the SS post above].
At that, "peer-reviewing" is really rather routine work: it is necessary to analyze submitted papers to check their accordance with the now rather rigorously defined criteria scientific papers should meet, basing only on the existing published science - namely, whether [in the paper]:
- the initial assumptions are consistent with really validated experimental data;
- the assumptions are consistent with the main and really fundamental laws/links/constants that are substantively established in the science the paper relates to;
- the assumptions are mutually consistent;
- any senseless consequences follow from the assumptions;
- the scientific results presented in the submission really follow from, and/or are grounded in, the initial assumptions;
- the mathematical formalism used in the paper is correct;
- the results are new;
- the results are relevant enough for the science.
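The checklist above is essentially mechanical, which is the poster's point. As a purely illustrative sketch (the field names and the aggregation rule are hypothetical, not any journal's actual workflow), the criteria could be encoded as a structured rubric that a reviewer, human or automated, fills in per submission:

```python
from dataclasses import dataclass, fields

@dataclass
class ReviewChecklist:
    # Each boolean mirrors one criterion from the list above.
    assumptions_match_experiment: bool
    assumptions_match_established_laws: bool
    assumptions_mutually_consistent: bool
    no_senseless_consequences: bool
    results_follow_from_assumptions: bool
    mathematics_correct: bool
    results_new: bool
    results_relevant: bool

    def recommendation(self) -> str:
        """Aggregate per-criterion checks into a coarse verdict."""
        failed = [f.name for f in fields(self) if not getattr(self, f.name)]
        if not failed:
            return "accept"
        if len(failed) <= 2:
            return "major revision: " + ", ".join(failed)
        return "reject"
```

A fully passing checklist yields "accept"; the cut-offs here are arbitrary placeholders - real editorial decisions would weight criteria very differently, and the hard part (actually judging each criterion) is exactly what the thread is debating.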
All of the above really can be carried out by an advanced enough specific AI, which thus could form at least first-approximation remarks on submissions and, in a dialog with the authors, rationally evaluate the authors' objections to the remarks; if the remarks are really rebutted by the authors, it could recommend the submission for publishing.
That does not happen now; instead, it seems, practically all journals use "plagiarism"-seeking programs that "test text similarity", where the programs really look only for similarly worded passages in the analyzed paper that also appear in some other existing papers,
- which really is quite natural if a program analyzes literary, musical, etc., works,
- but which is quite senseless for the analysis of scientific papers. In science it is not some nice wordings or musical passages that are stolen, but ideas, and a plagiarist by no means copies anything: he thoroughly prepares his text to be by no means similar, transforms equation series, etc.
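The limitation described above is easy to demonstrate. Here is a minimal sketch (illustrative only - commercial similarity checkers are far more sophisticated) of word n-gram ("shingle") overlap, the basic mechanism behind text-similarity tests; the example sentences are invented:

```python
def shingles(text: str, n: int = 3) -> set:
    """Split text into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of the two texts' shingle sets (0.0 .. 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the total energy of the system is conserved under time translation"
verbatim = "the total energy of the system is conserved under time translation"
paraphrase = "invariance with respect to shifts in time implies a conserved Hamiltonian"
```

The verbatim copy scores a full 1.0 overlap, while the paraphrase of the very same physical idea shares no three-word shingle and scores 0.0 - exactly the gap the poster describes: wording-level checks catch copy-paste, not stolen ideas.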
And, again, the real analysis can be provided only if the AI program is completely "ethical" and thus objective.
Though that is possible only if the AI program is developed by really professional and ethical programmers, with the participation of really professional and ethical scientists.
Otherwise AI "peer-reviewing" will in no way differ from the "peer-reviewing" that now exists in mainstream journals - see the SS post above.
For an example of how poor ChatGPT, which "knows" that mentions of the wordings "Sergey Shevchenko" and "Vladimir Tokarevsky" are strongly prohibited, answers the two questions "What is the "Information as Absolute" conception?" and "What is the informational physical model?", see https://www.kharkovforum.com/threads/what-is-the-information-as-absolute-conception.5429665/#post-71245266 and
https://www.kharkovforum.com/threads/what-is-the-informational-physical-model.5431758/#post-71245236. The forum notice "Only registered users see all content in this section" hides some of the pointed URLs of SS&VT papers, which for some unknown reason are accessible only to registered members of this Kharkov city forum; but that isn't too essential in this case - the papers are in my profile.
Cheers
Unveiling Perspectives on Peer Review and Research Integrity: Survey Insights
"The scrutiny of peer review and research integrity has raised questions both in the presence and absence of AI. Is the current inquiry into research integrity during peer review solely prompted by the advent of AI, or has it always been a concern, considering past incidents involving retractions, plagiarism, and other unethical practices?
“The research integrity problem has many facets. At the top is the sheer volume of papers to be peer reviewed, and the lack of enough peer reviewers. It is next to impossible to be true to each and every paper, and to ensure that everything is as per requirement. Unless we fix that, we are not going to know which paper has dodgy ethics, and which does not. And therefore, institutional science will always be playing a catch-up game in terms of research integrity. The integration of AI offers potential solutions, but it’s important to recognize that technological advancements alone cannot fully address these structural challenges. Fundamental changes in the peer review process are essential for a proactive stance on research integrity.”
Indeed, research integrity has consistently been a cause of concern. Therefore, the blame lies not with AI but rather with the established processes, particularly as the dynamics of academia undergo transformations, pushing against traditional boundaries and introducing a multitude of new opportunities and challenges..."
https://scholarlykitchen.sspnet.org/2024/02/07/unveiling-perspectives-on-peer-review-and-research-integrity-survey-insights/?informz=1&nbd=6f03e560-5431-4744-8998-e00223ee7a82&nbd_source=informz
Raphaël Enthoven thinks that a machine will never be a philosopher. Do you think so? A New Question On: https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
"From science to law, from medicine to military questions, artificial intelligence is shaking up all our fields of expertise. All?? No?! In philosophy, AI is useless." The Artificial Mind, by Raphaël Enthoven, Humensis, 2024.
Could AI Disrupt Peer Review?
"Spending time poring over manuscripts to offer thoughtful and incisive critique as a peer reviewer is one of academia’s most thankless jobs. Peer review is often the final line of defense between new research and the general public and is aimed at ensuring the accuracy, novelty, and significance of new findings.
This crucial role is voluntary, unpaid, and often underappreciated by academic publishers and institutions. As with other tedious jobs in today’s world, this raises the question: Can, and more importantly, should, publishers trust AI to handle peer review instead? A number of researchers say no and are growing concerned about how AI may threaten the integrity of the review process by reinforcing bias and introducing misinformation...
Without concrete policies that lay out guidance on transparency or penalties for using AI in peer review, Mollaki worries that the integrity and good faith trust in the peer review process could collapse. Never mind that the question of whether AI is actually capable yet of providing effective peer review is also up for debate..."
https://spectrum.ieee.org/ai-peer-review
There was a time when I hated seeing my students using copy/paste to insert texts into their work without clearly and precisely specifying it. Paradoxically, I note with surprise that today, in this universal fascination with AI, educational systems are less offended by the use of AI to produce intellectual works as part of training courses leading to graduation.
Explainable AI tools, known as XAI, are a crucial instrument for human oversight of AI in general, and of the review process in particular if AI is used there. XAI makes it possible to explain, for each case, the algorithm used by the AI, making AI usage totally transparent.
Explainable artificial intelligence (XAI)
"Explainable AI is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development..."
https://www.ibm.com/topics/explainable-ai
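The idea in the IBM description above can be made concrete with an intrinsically interpretable model. The sketch below is purely hypothetical (the feature names, weights, and threshold are invented for illustration and come from no real review system): with a linear score, the per-feature contributions for each individual case *are* the explanation.

```python
# Hypothetical linear scoring model; weights and threshold are
# invented for illustration, not taken from any real review system.
WEIGHTS = {"novelty": 2.0, "methodology": 3.0, "clarity": 1.0}
THRESHOLD = 4.0

def score_with_explanation(features: dict) -> tuple:
    """Return a decision plus the per-feature breakdown that explains it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    decision = "accept" if total >= THRESHOLD else "reject"
    # Returning the contributions alongside the decision is what makes
    # the model "explainable": every case can be audited term by term.
    return decision, contributions
```

For a manuscript scored novelty=1.0, methodology=1.0, clarity=0.5, the breakdown {"novelty": 2.0, "methodology": 3.0, "clarity": 0.5} shows exactly which factors drove the "accept"; a deep model would need a post-hoc XAI method to produce a comparable per-case account.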
Thank you Dear Ljubomir Jacić for the insightful reference (https://www.ibm.com/topics/explainable-ai). As you mentioned, "Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production". From this point of view, AI can make an essential contribution to evaluating scientific work. The fact remains that, fundamentally, AI would not be in a position to give an informed opinion regarding scientific originality, progress, or innovation. This is a fundamental question: Artificial Intelligence cannot evaluate a scientific production that goes beyond the knowledge in place, since AI, however perfected it may be, cannot tear itself away from the knowledge in place, the only knowledge accessible to it.
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so/2
A digression: If we are considering Super Artificial Intelligence then the answer to the question seems easy - simply no. Otherwise, all our papers will be rejected right away.
Have a great week ahead,
Kamil.
I think, considering some of the answers, that AI is very helpful for fostering integrity and avoiding bias.
Dear Samir Touzani
I think there are two approaches visible right now in the scholars’ opinions.
On the one hand, there are those who see AI as a threat because it may be biased, since it cannot be empathetic.
On the other hand, there are those who see AI as fully automated decision-making without any bias.
I think that everything depends on how AI would be applied in the reviewing process: whether as a tool to support a decision-maker or as fully automated decision-making.
Best regards,
Kamil.
The review by Testolin, A. "Can Neural Networks Do Arithmetic? A Survey on the Elementary Numerical Skills of State-of-the-Art Deep Learning Models. Appl. Sci. 2024, 14, 744" examines the recent literature, concluding that ".. even state-of-the-art architectures and large language models often fall short when probed with relatively simple tasks designed to test basic numerical and arithmetic knowledge..". Available on:
Article Can Neural Networks Do Arithmetic? A Survey on the Elementar...
See Also:
https://www.researchgate.net/post/Art_of_State-of-the-Art_on_Science_Knowledge
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Dear Jamel Chahed
That is completely true.
AI and algorithms make mistakes.
However, it is just a matter of time before they will not.
Best,
K.
From the "Informatic Tribe" to the "Artificial Intelligence Sects".
Philippe Breton, The Informatic Tribe. Investigation into a modern passion. Paris: Métailié, 1990. "...A machine, the enthusiast? No: a logical, intuitive artist, crazy about aesthetics, solitary but never alone. Taste for power? No: the tribe responds with a "construction without a body" to the fragility of the biological, close in this way to the Zen which inspired Steve Jobs, the inventor of the microcomputer. Savior of "mythical sacred time", the computer scientist is the one through whom order arrives. The rules change, the idea of rule is established, spreads and reassures... ", Renaud Zuppinger, Le Monde Diplomatique, April, 1991 (Own translation). See:
https://www.monde-diplomatique.fr/1991/04/ZUPPINGER/43422
The Conversation, March 15, 2023, Gods in the Machine? The rise of artificial intelligence may result in new religions. ".. We are about to witness the birth of a new kind of religion. In the next few years, or perhaps even months, we will see the emergence of Sects devoted to the worship of artificial intelligence (AI). The latest generation of AI-powered chatbots, trained on large language models, have left their early users awestruck —and sometimes terrified — by their power. These are the same sublime emotions that bind at the heart of our experience of the divine. People already seek religious meaning from very diverse sources. There are, for instance, multiple religions that worship extra-terrestrials or their teachings. As these chatbots come to be used by billions of people, it is inevitable that some of these users will see the AIs as higher beings. We must prepare for the implications." See:
https://theconversation.com/gods-in-the-machine-the-rise-of-artificial-intelligence-may-result-in-new-religions-201068
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
For more, see a couple of SS posts on page 6; here only a few comments on:
“…[I think there are two approaches visible right now in the scholars’ opinions].
On the one hand those that see AI as a threat because it may have a bias as it cannot be empathetic….”
- any AI is a program that runs on a completely material construction, "a computer"; and though no computer is a completely material thing - since it is governed by the completely non-material product of some completely non-material system, "some programmer(s)' consciousness(es)" - really any "biases" and "empathies" of an AI are completely determined only by the biases and empathies of the programmer(s); again see the SS posts pointed to above.
So that
“….On the other hand those that see AI as a fully automated decision making without any bias.
I think that everything depends on that how AI would be applied in the process of reviewing, whether as a tool to support a decision maker or as a fully automated decision making..…”
- though this looks like something that in principle can be realized, but only if the "AI reviewer" is developed by really honest programmers, i.e. ones without biases and empathies,
- first of all as an inspector that checks the accordance of a reviewed manuscript with the main criteria a scientific paper should meet - see the SS post of February 2. That is in most cases a really doable task, apart from some problems in evaluating "actuality" when a manuscript contains really fundamentally new results,
- but all that can be really effective only if there exists the possibility of a dialog "AI reviewer" – corresponding author.
However, again, that is possible only if the program is developed by really honest programmers, i.e. ones without biases and empathies, which now looks like a seemingly too hard problem for the recent scientific community.
And as to
“…A digression: If we are considering Super Artificial Intelligence then the answer to the question seems easy - simply no. Otherwise, all our papers will be rejected right away.….”
- that isn't correct: any computer program, including any AI, knows only what is loaded by the programmers and what can be found in existing electronic scientific databases. So no program can really rationally evaluate really new things. Though, yes, some really honest "AI reviewer" even now would reject, for non-accordance with the criteria in the SS post of February 2, at least 90% of the "fundamental" papers with quite new "fundamental breakthroughs" that are now vividly published without any problem in all conventional journals.
Cheers
I find that the two following quotes on democracy by Churchill apply perfectly to the peer review of scientific and academic production (just replace "Democracy" with "Peer Review"): 1. "Democracy is the worst system of government, except for all the others which have been experienced in history"; 2. "Democracy is a bad system, but it is the least bad of all systems"
Source:
https://des-livres-pour-changer-de-vie.com/110-citations-cultes-de-winston-churchill/
So a scientific or intellectual production duly published according to the standards of scientific or academic publishing protocols deserves, in principle, an a priori favorable presumption until we study the text, understand the arguments, analyze the references, and compare thoughts and ideas. Only then can we allow ourselves constructive and informed criticism. And why not proceed to the development of an alternative thesis, or even state an antithesis?
Anonymous quote: (Own translation from French): We are like books. Judged on the cover, at best by a synopsis, at worst by reviews. But who read the story?
Source
https://www.evolution-101.com/thoughts-on-judgments/
See Also:
https://www.researchgate.net/post/Scientific_Integrity_on_ResearchGate
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
"AI is likely here to stay, thus exploring its utility in scientific writing is timely. As with every new technology, there are pros and cons. Figuring out how to expand the pros while limiting the cons is critical to successful implementation and adoption of the technology." From the conclusion of the paper: Kacena, M.A., Plotkin, L.I. & Fehrenbacher, J.C. The Use of Artificial Intelligence in Writing Scientific Review Articles. Curr Osteoporos Rep (2024).
Available on: Article The Use of Artificial Intelligence in Writing Scientific Rev...
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Chef de Cuisine: Perspectives from Publishing’s Top Table
"In the near to mid future, I suspect we’ll enter an age of hybridity during which peer review evolves to incorporate AI functionalities much the way in which AI will supplement or bolster many human inputs and outputs. We rightfully scrutinize the perils and failings of the peer review system, but I would argue that it remains the best means of validation for now (“the worst possible system other than all other systems”). I’m most excited about the potential of AI to reduce drudgery and admin, enliven the work experience, and help us with our mission of seeking out and disseminating the objective truth..."
https://scholarlykitchen.sspnet.org/2024/02/29/robert-harington-talks-to-niko-pfund-of-oxford-university-press-in-this-series-of-perspectives-from-some-of-publishings-leaders-across-the-non-profit-and-for-profit-sectors-of-our-industry/?informz=1&nbd=6f03e560-5431-4744-8998-e00223ee7a82&nbd_source=informz
I learn with interest that even in the field of belief, which in certain aspects falls under the principle of "Freedom of Conscience", there is an ethics: that of the "Ethics of Belief", or "Evidentialist Ethics". Insights are exposed in this paper: The Ethics of Belief in Paranormal Phenomena. Journal of Anomalous Experience and Cognition, 2(1). Available on:
Article The Ethics of Belief in Paranormal Phenomena
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
https://www.researchgate.net/post/Science_Conscience
Generative Artificial Intelligence (AI) and Reinforcement Learning (RL) have received paramount interest in Computer Science over the last decade, in particular in Machine Learning tasks. The review by Franceschelli, G., & Musolesi, M. (2024), "Reinforcement Learning for Generative AI: State of the Art, Opportunities and Open Research Challenges. Journal of Artificial Intelligence Research, 79, 417-446",
presents the state-of-the-art, analyses open research questions, and discusses challenges and shortcomings. Available on: https://www.jair.org/index.php/jair/article/download/15278/27007
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Art_of_State-of-the-Art_on_Science_Knowledge
"With this very interesting video, UNESCO tries to find answers to important questions about ethics of AI. In what ways can we effectively utilize the capabilities of AI without exacerbating or creating new inequalities and biases? While there is agreement on certain ethical principles such as privacy, equality, and inclusion, how can we put these principles into practice when it comes to AI?"
Watch the video on:
https://www.morphcast.com/blog/ethics-of-ai-challenges-and-governance-a-video-by-unesco/
See Also:
https://www.researchgate.net/post/Science_Conscience
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
In the same vein. The excellent article by Etzioni, A., & Etzioni, O. 2017, (around 400 citations) "Incorporating ethics into artificial intelligence. The Journal of Ethics, 21, 403-418" "reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand."
Paper available on
https://philpapers.org/archive/ETZIEI.pdf
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
"Yet seeks to harness this vast existing innovative impulse and its established apparatus and, through sustained sensitivity towards the diverse individual and social experiences of technology, aim research and development down more collectively desirable paths." This is the direction suggested by the recent research by Simon, J., Rieder, G. & Branford, J. "The Philosophy and Ethics of AI: Conceptual, Empirical, and Technological Investigations into Values. DISO 3, 10, 2024" (published a week ago). Available on:
https://link.springer.com/article/10.1007/s44206-024-00094-2
As a conclusion to their paper, the authors asked the following questions: "What are the ethical, conceptual, and institutional foundations of such a project? Who is to take up this mantel and how might it be as inclusive as possible? How will it be sustained, maintained, or improved? These and other vital lines of inquiry Beckon and necessitate the contributions of the broadest epistemic collective that can be rallied."
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
The possible employment of AI tools to compensate for the lack of expertise and competencies represents a paradoxical risk. In this regard, the paper by Neri, et al. 2020, with the evocative title “Artificial intelligence: Who is responsible for the diagnosis?. Radiol med 125, 517–521” is therefore interesting to read. The authors write in conclusion: "Perhaps the solution is to create an ethical AI, subject to a constant action control, as indeed happens for the human conscience: an AI subjected to a vicarious civil liability, written in the software and for which the producers must guarantee the users, so that they can use AI reasonably and with a human-controlled automation. It is clear that the future legislation must outline the contours of the professional's responsibility, with respect to the provision of the service performed autonomously by AI, balancing the professional's ability to influence and therefore correct the machine, limiting the sphere of autonomy that instead technological evolution would like to recognize to robots."
See:
https://link.springer.com/article/10.1007/s11547-020-01135-9
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
Last time in the thread a series of mainstream philosophical citations appeared, which,
- since the cited papers are mainstream products, while in mainstream philosophy [and in all sciences, though] all really fundamental phenomena/notions - first of all, in this case, "Matter", "Consciousness", "Space", "Time", "Energy", "Information" - are fundamentally completely transcendent/uncertain/irrational,
- really are corresponding senseless sets of wordings, where the authors really don't understand what the used words mean.
Commenting on all the citations is thus really unnecessary, so let us comment on one next "excellent article (around 400 citations)", where the author
“…"reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions,… It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical…
…the article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place…”
- from which it looks like the author has a rather strange imagination about what ethics, computers, and computer programs, including AI programs, are.
"Ethical choices" - more correctly, "ethical norms/standards" - were/are indeed made by human beings, with one aim: the norms regulate behavior and relations between human beings so as to provide the stability of human societies,
- and so the norms can be quite different in different societies. Say, in most societies it is wrong to kill other human beings; in a society of killers that is quite an ethical norm.
However, really in every concrete case/society the number of cases/situations where some ethics really acts is rather limited, and there are really no problems in introducing that into corresponding AI programs,
- as, say, the rather numerous cases/situations that happen in car traffic are now mostly well and effectively handled by rather primitive driverless-car AIs.
And though, of course, for any completely material computer chip, and for the computer hardware as a whole, it is all the same whether it provides safe driving of a car or evaluates whether something complies with ethical norms [most of which, say, are already in only the 10 Commandments],
- computers are nonetheless governed by fundamentally non-material programs, which are developed by the fundamentally non-material systems "programmers' consciousnesses", and anything that exists in human relations can be quite rationally, and now rather easily, included in the programs.
Returning to the thread question: really, in all journals there now exists a requirement for authors to propose a few "non-blind reviewers", which is really used by the editors only to clarify whether the author(s) of a submission belong to some authoritative scientific community,
- and if they belong, the submission is published practically independently of its content; if they don't belong, some submissions are accepted, but those that contain really important scientific results are rejected. That could be elaborated in an AI which would be much more primitive than any driverless-car AI; and if it were introduced into the journals' submission practice, with all the editors dismissed, the recent journals' content would be practically the same as it is now.
Cheers
This is how AI will transform the way science gets done
"Soon, specifically trained LLMs might move beyond offering first drafts of written work like grant proposals and might be developed to offer “peer” reviews of new papers alongside human reviewers.
AI tools have incredible potential, but we must recognize where the human touch is still important and avoid running before we can walk..."
https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science/
The idea of humanity "is more controversial today than ever before. Traditionally, answers to the questions about our humanity and 'humanitas' (Cicero) have been sought along five routes: by contrasting the human with the non-human (other animals), with the more than human (the divine), with the inhuman (negative human behaviors), with the superhuman (what humans will become), or with the transhuman (thinking machines)." In the recent volume Dalferth, I. U., & Perrier, R. E. (2023), "Humanity: An Endangered Idea? Claremont Studies in the Philosophy of Religion, Conference 40, 2019. Religion in Philosophy and Theology" (Claremont, Calif.), the authors tackle these philosophical issues. In each case, the question at stake and the point of comparison is a different one, and in all those respects the idea of humanity has been defined differently. What makes humans human? What does it mean for humans to live a human life? What is the humanitas for which we ought to strive? "This volume discusses key philosophical and theological issues in the current debate, with a particular focus on transhumanism, artificial intelligence, and the ethical challenges facing humanity in our technological culture"
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Science_Conscience
On The Paradox of Intelligence. "The title of this essay [1] is “On the Limit of Artificial Intelligence,” which immediately implies a question: in what way can one talk about the limit of such a thing, given that intelligence, as long as it is artificial, is more susceptible to mutation than human intelligence whose mechanism is still beyond comprehension? Or in other words, how can we talk about the limit of something that virtually has no limit? The artificiality of intelligence is fundamentally schematized matter. However, it has the tendency to liberate itself from the constraints of matter by acting against it in order to schematize itself."
[1] Hui, Y. (2021). On the limit of artificial intelligence. Philosophy today, 65(2), 339-357. Available on:
https://www.academia.edu/download/82559783/Yuk_Hui_On_the_Limit_of_Artificial_Intelligence_.pdf
See Also:
https://www.researchgate.net/post/Science_Conscience
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
This
“…On The Paradox of Intelligence. "The title of this essay [1] is “On the Limit of Artificial Intelligence,” which immediately implies a question: in what way can one talk about the limit of such a thing, given that intelligence, as long as it is artificial, is more susceptible to mutation than human intelligence whose mechanism is still beyond comprehension? Or in other words, how can we talk about the limit of something that virtually has no limit? …..”
- is yet another example of a typical mainstream philosophical paper, i.e. a really senseless set of really senseless claims; in this case the author first writes about some “human intelligence” [i.e. real intelligence] “whose mechanism is still beyond comprehension”, but then completely confidently and certainly claims that some “something … virtually has no limit”.
And, moreover, calls that – something held as highly intelligent in mainstream philosophy, but really by no means existent – “The Paradox of Intelligence”, claiming, correspondingly, further that
“…The artificiality of intelligence is fundamentally schematized matter…” – which is a too strange claim: the fundamentally “schematized matter” is fundamentally only a computer’s hardware, which without an installed program is fundamentally nothing else than some box with some atoms and molecules – as, say, any stone is – and so that
“….However, it has the tendency to liberate itself from the constrains of matter by acting against it in order to schematize itself. ….”
- is again a really senseless claim about some mystic desire of the box to liberate itself from the constraints of matter, making at that some mystic acting against it in order to schematize itself….
Though yeah, that is a completely typical and legitimate paper in mainstream philosophy, where, since the mainstream really has only fundamentally transcendent/mystic “knowledge” about what all/every really fundamental phenomena/notions – “Matter”, “Consciousness”, “Space”, “Time”, “Energy”, “Information” – are,
- i.e. really only a fundamentally transcendent/mystic “understanding” of what “human” and “intelligence” are,
- all publications logically completely inevitably are something like the one commented on above.
Regrettably, posts promoting “philosophical” papers have lately become too numerous in this thread, while the concrete thread question is really interesting.
That
“…"Humanity: an endangered idea?: ….In each case, the question at stake and the point of comparison is a different one, and in all those respects the idea of humanity has been defined differently. What makes humans human? What does it mean for humans to live a human life? What is the humanitas for which we ought to strive? … the ethical challenges facing humanity in our technological culture"…..”
- is commented on in the SS post in https://www.researchgate.net/post/Is-analytic-philosophy-dead/6 , page 6
Cheers
The paper by Bryson, J. J., & Malikova, H. (2021), "Is there an AI cold war? Global Perspectives, 2(1), 24803" documents and analyzes the "extremely bipolar picture prominent policymakers and political commentators have been recently painting of the AI technological situation, portraying China and the United States as the only two global powers." The paper's findings call into question certain of these ideas, however widely documented and claimed. They also illuminate the uncertainty concerning digital-technology security and recommend that all parties work toward a safe, secure, and transparent regulatory framework. Paper available on:
https://www.delorscentre.eu/fileadmin/2_Research/2_Research_directory/Research_Centres/Centre_for_Digital_Governance/5_Papers/Other_papers/BrysonMalikova21__002_.pdf
See Also:
https://www.researchgate.net/post/To_WW3_or_Not_To_WW3_That_is_The_Question_to_Ask_Scholars
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
“In human-AI interaction, the design of machines will need to account for the irrational behavior of humans.” This is the central idea that emerges from the review by Macmillan-Scott, O., & Musolesi, M. (2023), "(Ir)rationality in AI: State of the Art, Research Challenges and Open Questions", arXiv preprint arXiv:2311.17165. Available on: https://arxiv.org/pdf/2311.17165.pdf
One can read within the conclusion: "The question of interacting with irrational agents is crucial not only among machines, but also because humans often act in irrational ways. Human-AI interaction is a key aspect of today's AI systems, namely with the case of systems based on large language models and their widespread use. Cognitive biases may in some instances be leveraged to improve the performance of artificial agents, whereas in human-AI interaction the design of machines will need to account for the irrational behavior of humans"
See Also:
https://www.researchgate.net/post/Science_Conscience
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
"The performance of these AI systems increases exponentially, which requires exponentially increasing resources as well, including data, computational power, and energy. This development is not sustainable and there is a need for new AI approaches, which give careful consideration to limited resources." This is what the Chapter by Kozma, R. (2024), "Computers versus brains: Challenges of sustainable artificial and biological intelligence. In Artificial Intelligence in the Age of Neural Networks and Brain Computing (pp. 129-143), Academic Press" is about. More precisely, the author describes various aspects of biological and artificial intelligence and discusses "how new AI could benefit from lessons learned from human brains, human intelligence, and human constraints." In doing so, the research introduces "a balanced approach based on the concepts of complementarity and multistability as manifested in human brain operation and cognitive processing. This approach provides insights into key principles of intelligence in biological brains and it helps building sustainable artificial intelligence."
About the book
Artificial Intelligence in the Age of Neural Networks and Brain Computing, Second Edition demonstrates that the present disruptive implications and applications of AI are a development of the unique attributes of neural networks, mainly machine learning, distributed architectures, massive parallel processing, black-box inference, intrinsic nonlinearity, and smart autonomous search engines. The book covers the major basic ideas of "brain-like computing" behind AI, provides a framework for deep learning, and launches novel and intriguing paradigms as possible future alternatives.
The present success of AI-based commercial products proposed by top industry leaders, such as Google, IBM, Microsoft, Intel, and Amazon, can be interpreted using the perspective presented in this book by viewing the co-existence of a successful synergism among what is referred to as computational intelligence, natural intelligence, brain computing, and neural engineering. The new edition has been updated to include major new advances in the field, including many new chapters.
https://www.sciencedirect.com/book/9780323961042/artificial-intelligence-in-the-age-of-neural-networks-and-brain-computing
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Energy_Renewable_Energy_and_Levelized_Cost_Of_Energy_LCOE_Paradoxes
Fuzzy logic is not fuzzy. "Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning" (L.A. Zadeh [1]). The paper [2] by Dzitac et al. (2017) pays tribute to the work of world-renowned computer scientist Lotfi A. Zadeh. It presents "general aspects of Zadeh’s contributions to the development of Soft Computing (SC) and Artificial Intelligence (AI), and also his important and early influence in the world and in Romania". One may read within this article: "In 1965 Lotfi A. Zadeh published 'Fuzzy Sets', his pioneering and controversial paper, that now reaches almost 100,000 citations. All Zadeh’s papers were cited over 185,000 times. Starting from the ideas presented in that paper, Zadeh founded later the Fuzzy Logic theory, that proved to have useful applications, from consumer to industrial intelligent products".
[1] Zadeh L.A., Is there a need for fuzzy logic?, Information Sciences, 178, 2751-2779, 2008.
[2] Dzitac, I., Filip, F. G., & Manolescu, M. J. (2017). Fuzzy logic is not fuzzy: World-renowned computer scientist Lotfi A. Zadeh. International Journal of Computers Communications & Control, 12(6), 748-789.
Available on:
https://www.univagora.ro/jour/index.php/ijccc/issue/view/113/pdf_216
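For readers unfamiliar with the formalism, Zadeh's "precise logic of imprecision" can be sketched in a few lines: membership in a fuzzy set is a degree in [0, 1] rather than a yes/no value, and his standard connectives are min, max, and complement. The triangular membership function and the temperature example below are illustrative assumptions, not taken from the cited papers.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 in b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Zadeh's standard fuzzy connectives
def fuzzy_and(m1, m2):
    return min(m1, m2)

def fuzzy_or(m1, m2):
    return max(m1, m2)

def fuzzy_not(m):
    return 1.0 - m

# Illustrative fuzzy set "warm" over temperatures 15..35 degrees C:
# 22 degrees is warm to degree (22 - 15) / (25 - 15) = 0.7
warm_22 = triangular(22, 15, 25, 35)
```

For example, `fuzzy_and(0.7, 0.4)` evaluates to 0.4: a statement is only as true as its weakest conjunct, which is exactly the precise treatment of imprecision the quote describes.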
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Released 3 days ago: the paper by Maci, S. M., "Plagiarism, Fraud, Retracted Papers, and Ethics in the Post-Pandemic Era: The State of the Art", Medical Discourse and Communication, 28-40, 2024, reports the state of the art on plagiarism and retractions. One may read there: "The number of articles retracted by scientific journals had increased 10-fold during the previous 10 years. Fraud accounted for some 60% of those retractions. However, retraction may be due to other reasons. For instance, during the pandemic surge, retractions were necessary because the novelty of the virus made scientists to continuously update their knowledge about it. Furthermore, the more scientists published papers about Coronavirus-19, the quicker was the publication progress. The need for having up-to-date information about modelling epidemic, controlling spread, diagnostic and testing, as well as mortality, on the one hand has caused an augmented speed on typical peer-review systems (the hardest to sustain ever); on the other hand, a more in-depth knowledge about the virus itself led to retract papers whenever any information about it was overpassed by new knowledge."
See Also:
https://www.researchgate.net/post/Art_of_State-of-the-Art_on_Science_Knowledge
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews
"We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM). Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM-use at the corpus level. We apply this approach to a case study of scientific peer review in AI conferences that took place after the release of ChatGPT...
Applying this method to conference and journal reviews written before and after the release of ChatGPT shows evidence that roughly 7-15% of sentences in ML conference reviews were substantially modified by AI beyond a simple grammar check, while there does not appear to be significant evidence of AI usage in reviews for Nature...
First, we show that reviewers are more likely to submit generated text for last-minute reviews, and that people who submit generated text offer fewer author replies than those who submit written reviews. Second, we show that generated texts include less specific feedback or citations of other work, in comparison to written reviews. Generated reviews also are associated with lower confidence ratings. Third, we show how corpora with generated text appear to compress the linguistic variation and epistemic diversity that would be expected in unpolluted corpora. We should also note that other social concerns with ChatGPT presence in peer reviews extend beyond our scope, including the potential privacy and anonymity risks of providing unpublished work to a privately owned language model..."
https://arxiv.org/abs/2403.07183
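The corpus-level maximum likelihood idea can be illustrated with a toy sketch: given each document's likelihood under an AI-generated reference model and under a human-written reference model, search for the mixture weight alpha that maximizes the corpus log-likelihood. Everything below (the grid search, the two toy reference models, the function name) is an illustrative assumption; the actual method in the paper estimates token-level distributions from expert-written and AI-generated reference corpora.

```python
import math

def estimate_ai_fraction(docs, p_ai, p_human, steps=200):
    """Grid-search the mixture weight alpha in [0, 1] maximizing
    sum over docs of log(alpha * p_ai(d) + (1 - alpha) * p_human(d))."""
    best_alpha, best_ll = 0.0, float("-inf")
    for i in range(steps + 1):
        alpha = i / steps
        ll = sum(math.log(alpha * p_ai(d) + (1 - alpha) * p_human(d))
                 for d in docs)
        if ll > best_ll:
            best_alpha, best_ll = alpha, ll
    return best_alpha

# Toy reference models (illustrative, not the paper's learned distributions):
p_ai = lambda d: 0.8 if d == "ai-flavored" else 0.2
p_human = lambda d: 0.2 if d == "ai-flavored" else 0.8
corpus = ["ai-flavored"] * 30 + ["human-flavored"] * 70
alpha_hat = estimate_ai_fraction(corpus, p_ai, p_human)
```

Note that the estimator does not simply count "ai-flavored" documents: because human text also produces such documents 20% of the time, the 30% observed rate corresponds to a mixture weight of 1/6, which the grid search recovers.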
The volume "Mind Design III: Philosophy, Psychology, and Artificial Intelligence", by John Haugeland, Carl F. Craver, and Colin Klein (Editors), The MIT Press, November 21, 2023, offers an essential collection of classic and contemporary essays on the philosophical foundations and implications of artificial intelligence.
Presentation: In the quarter century since the publication of John Haugeland’s Mind Design II, computer scientists have hit many of their objectives for successful artificial intelligence. Computers beat chess grandmasters, driverless cars navigate streets, autonomous robots vacuum our homes, and ChatGPT answers existential queries in iambic pentameter on command. Engineering has made incredible strides. But have we made progress in understanding and building minds? Comprehensively updated by Carl Craver and Colin Klein to reflect the astonishing ubiquity of machine learning in modern life, Mind Design III offers an essential collection of classic and contemporary essays on the philosophical foundations and implications of artificial intelligence. Contributions from a diverse range of philosophers and computer scientists address the nature of computation, the nature of thought, and the question of whether computers can be made to think. With extensive new material reflecting the explosive growth and diversification of AI approaches, this classic reader equips students to assess the possibility of, and progress toward, building minds out of computers.
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
"As part of the technological revolution, several academic institutions have embraced new initiatives and implemented changes in their campus life to leverage the potential of these advancements." From Venkateswaran et al., 2024, "Applications of artificial intelligence tools in higher education" [1]. In the same vein, one may read there: "The integration of technology has facilitated a more efficient and modern educational experience for students and faculty alike. By adapting to these innovations, universities aim to stay competitive and relevant in the rapidly evolving educational landscape"
[1] Venkateswaran, P. S., Ayasrah, F. T. M., Nomula, V. K., Paramasivan, P., Anand, P., & Bogeshwaran, K. (2024). Applications of artificial intelligence tools in higher education. In Data-Driven Decision Making for Long-Term Business Success (pp. 124-136). IGI Global.
See:
Chapter Applications of Artificial Intelligence Tools in Higher Education
About the volume:
Data-Driven Decision Making for Long-Term Business Success serves as a source of guidance and insight amidst this academic challenge.
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
"It is our responsibility as a scientific community to pro-actively seek any opportunity to inform and help steer policy decisions and best practices based on the collective and cumulative knowledge generated through our research over the past five decades. Otherwise, the fact that we as a Community place ourselves on a high pedestal of greater knowledge and morally laudable ambitions will not shield us from being complicit in our disciplinary banner being used for low quality products or bad policy decisions related seemingly to AI in Education" Extract from the conclusion of the paper by Porayska-Pomsta, K. "A Manifesto for a Pro-Actively Responsible AI in Education. Int J Artif Intell Educ 34, 73–83 (2024)". Available on:
https://link.springer.com/article/10.1007/s40593-023-00346-1
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
"AI has the potential to deliver great benefits for education. However, we have also seen that there are also risks associated with its use." This is what the just-published "AI report" [1] states. One may read further there: "However, using learning analytics without adequate teacher oversight may disadvantage students dealing with adverse life circumstances that are impacting their performance, thus increasing the risk level. When it comes to relying on AI for decisions that may impact a learner’s future opportunities, we are moving into the ‘high’ and perhaps ‘unacceptable’ risk territories. Therefore, we can see that the level of risk resides not so much within the tool as within the contexts in which they are used. While human oversight may help to mitigate some of the risks, we should be aware of the danger of dependence lock-in, in which humans become increasingly dependent to AI to make decisions. All this underscores the importance of the development of Explainable AI, as discussed above. In order to ensure its responsible use in educational settings, it is important to remain ever aware of the balance that needs to be struck between leveraging AI’s benefits and evaluating and mitigating potential risks and ensuring that human oversight is included and human values are served"
[1] Le Borgne, Y. A., Bellas, F., Cassidy, D., Vourikari, R., Kralj, L., Obae, C., ... & Weber, M. (2024). AI report. by the European Digital Education Hub’s Squad on Artificial Intelligence in Education,
Report Downloadable on:
https://repository.rcsi.com/ndownloader/files/44112797/1
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
"Integrating quantitative and qualitative language analysis in physics education research might be enhanced by recently advanced artificial intelligence-based technologies such as large language models, as these models were found to be capable to systematically process and analyse language data." Extract from just-published paper by Wulff, P. (Heidelberg University of Education, Germany,) "Physics language and language use in physics—What do we know and how AI might enhance language-related research and instruction, European Journal of Physics, 45(2), 2024." Available on: https://iopscience.iop.org/article/10.1088/1361-6404/ad0f9c/meta
The author argues in the conclusion that a deeper understanding of language usage in physics classrooms and designing language-sensitive instructional materials and guidance is necessary to offer students the most advantageous circumstances for understanding and producing physics.
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
"The time has come to summarise the initial Brussels-Washington consensus about what counts as AI for legal purposes." Extract From "Floridi, L. On the Brussels-Washington Consensus About the Legal Definition of Artificial Intelligence. Philos. Technol. 36, 87., 2023." There one can read about the revised definition of AI: "Artificial Intelligence (AI) refers to an engineered system that can, for a given set of human-defined objectives, generate outputs – such as content, predictions, recommendations, or decisions – learn from data, improve its own behaviour, and influence people and environments"
Paper available on:
https://link.springer.com/article/10.1007/s13347-023-00690-z
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Recent SS post in https://www.researchgate.net/post/Ideas_for_Aether_Theory_of_Gravity/3
[about ChatGPT] is relevant to this thread question.
Cheers
The just-published research by Fadli, Radinal, et al. "Effectiveness of Mobile Virtual Laboratory Based on Project-Based Learning to Build Constructivism Thinking, International Journal of Interactive Mobile Technologies 18.6, 2024," is about "Constructivism" as a theoretical basis in education that views learning as an active process where students play a role in constructing their knowledge. In particular, the research explores "the effectiveness of this approach in building constructivist thinking in learning electrical installation practices. The research results show that the Mobile Virtual Laboratory is valid based on expert assessment, in addition there has been a significant increase in students' understanding of concepts and practical skills, with a high level of satisfaction with the use of the Mobile Virtual Laboratory. So, it can be concluded that the Mobile Virtual Laboratory based on Project-Based Learning can be an effective tool to support constructivist learning in electrical measurement practice. The results of this research open opportunities for further research on how Artificial Intelligence can build constructive thinking."
See Also:
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so
Is ChatGPT polluting peer review?
"Alongside using AI tools for writing research papers, academics might now be using ChatGPT to assist in peer review, according to a preprint (itself not peer reviewed). The study looked at conference proceedings submitted to four computer-science meetings and identified buzzwords typical of AI-generated text in 17% of peer review reports. The buzzwords included positive adjectives, such as ‘commendable’, ‘meticulous’ and ‘versatile’. It’s unclear whether researchers used the tools to construct their reviews from scratch or just to edit and improve written drafts..."
https://www.nature.com/articles/d41586-024-01051-2
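The screening idea behind such findings can be caricatured in a few lines of code: measure how often AI-associated marker adjectives occur in a review relative to its length. The marker list below is an illustrative assumption built from the adjectives quoted above; the actual preprint estimates corpus-level frequencies with a statistical model rather than flagging individual reviews by keyword counts.

```python
import re

# Illustrative marker set; the preprint reports adjectives such as
# 'commendable', 'meticulous' and 'versatile' as disproportionately
# frequent in AI-generated review text.
AI_MARKERS = {"commendable", "meticulous", "versatile"}

def marker_rate(review_text):
    """Fraction of word tokens that are AI-associated marker adjectives."""
    tokens = re.findall(r"[a-z]+", review_text.lower())
    if not tokens:
        return 0.0
    return sum(t in AI_MARKERS for t in tokens) / len(tokens)
```

For example, `marker_rate("This commendable and meticulous study")` returns 0.4 (2 of 5 tokens). A per-review rate like this is of course noisy, which is precisely why the preprint works at the corpus level.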
On the Publish or Perish Dogma. Just released: The Second Edition of the well-referenced book "Moosa, Imad A. Publish or perish: Perceived benefits versus unintended consequences. Edward Elgar Publishing, 2024." Here is an excerpt from the interesting review of the book by Hugh David, in Ergonomics, 62(12), pp. 1630–1631: "Moosa's work is a valuable contribution to the topic of research in academia, as it highlights the problems experienced by many academics worldwide. As an academic, I can subscribe to the opinion that conflict exists between teaching and research and that as a consequence, students are often neglected. This being said, however, it would be too simplistic and sweeping a statement to negate the obligation of academics to teach as well as to publish research material. It is also too simplistic to state categorically that when good teachers publish this could not be a good thing. For one, the growing body of research in the field of education can be regarded as a positive result arising from the pressure on academics to publish. Argument seems tantamount to throwing the baby out with the bath water, metaphorically speaking. There are many academics who perform valuable research and still remain good teachers, with their research having a positive effect on their tuition. Surely a compromise could be reached between Moosa's two extreme options, facilitating a more nuanced system? What I find lacking in Moosa's book is a historical view of what academia used to be and why changes were necessary at all. Should his recommendation be to return to a previous scenario of "publish or perish". Review available on:
https://www.researchgate.net/publication/329555588_Publish_or_Perish_Perceived_Benefits_versus_Unintended_Consequences [accessed Apr 18 2024].
Moosa's Book Consultable on:
https://books.google.tn/books?hl=fr&lr=&id=-7H1EAAAQBAJ&oi=fnd&pg=PR1&dq=publish+or+perish&ots=EVBzwn9LZz&sig=JarWMYIU6vJalAqzqHKhNUAX2pg&redir_esc=y#v=onepage&q=publish%20or%20perish&f=false
See Also:
https://www.researchgate.net/post/Scientific_Integrity_Research_Ethics_and_Higher_Education_Deontology_The_Senior_Scholars_Duty
https://www.researchgate.net/post/Science_and_history_serving_political_and_ideological_narratives
"Accommodating and fusing different voices and knowledge is a must for the reformation of equity, equality, and justice in AI technology creation. Art is for everyone, and the tools we use to make art, especially AI tools, should enable and empower just and equitable creation." From the conclusion of the just-published paper by Tatar, K., Ericson, P., Cotton, K., Del Prado, P. T. N., Batlle-Roca, R., Cabrero-Daniel, B., ... & Hussain, J. (2024), "A shift in artistic practices through artificial intelligence, Leonardo, 293-297."
Abstract: The explosion of content generated by artificial intelligence (AI) models has initiated a cultural shift in arts, music, and media, whereby roles are changing, values are shifting, and conventions are challenged. The vast, readily available dataset of the Internet has created an environment for AI models to be trained on any content on the Web. With AI models shared openly and used by many globally, how does this new paradigm shift challenge the status quo in artistic practices? What kind of changes will AI technology bring to music, arts, and new media?
See Also:
https://www.researchgate.net/post/ABC_Art_Blues_Culture_Forum_for_Lovers_of_Blues_Soul_and_other_Authentic_Music
https://www.researchgate.net/post/Science_Conscience
"In the economic sector, Artificial Intelligence (AI) is taking the lead. Or at least that was the idea. Many economists believe that generative AI is about to transform the global economy." From the just-published editorial by Justinek, G. (2024), "The instability in the air", Int. J. Diplomacy and Economy, 10(1), 1. Available on: https://www.inderscience.com/info/dl.php?filename=2024/ijdipe-8282.pdf
There, one may read: "A paper published last year by Ege Erdil and Tamay Besiroglu of Epoch, a research firm, argues that ‘explosive growth’, with GDP zooming upwards, is ‘plausible with AI capable of broadly substituting for human labour’. Erik Brynjolfsson of Stanford University has said that he expects AI ‘to power a productivity boom in the coming years’. Yet, for such an economic transformation to take place, companies need to spend big on new software, communications, factories and equipment, enabling AI to slot into their production processes. An investment boom was necessary to allow previous technological breakthroughs, such as the tractor or the personal computer, to spread across the economy. From 1992 to 1999 American non-residential investment jumped by 3% of GDP, for instance, driven in large part by extra spending on computer technologies. Yet so far there is little sign of an AI splurge. Across the world, capital expenditure by businesses (or ‘capex’) is remarkably weak. These trends suggest one of two things. The first is that generative AI is a busted flush. Big tech firms love the technology, but are going to struggle to find customers for the products and services that they have spent tens of billions of dollars creating. It would not be the first time in recent history that technologists have overestimated demand for new innovations. Think of cryptocurrencies and the metaverse. The second interpretation is less gloomy, and more likely. The adoption of new general-purpose technologies tends to take time. Return to the example of the personal computer. Although Microsoft released a ground-breaking operating system in 1995, American firms only ramped up spending on software in the late 1990s. Analysis by Goldman Sachs suggests that while only 5% of chief executives expect AI to have a ‘significant impact’ on their business within one to two years, 65% think it will have an impact in the next three to five. 
AI is still likely to change the economy, but with a whimper not a bang (The Economist, 2024 [1])."
[1] The Economist (2024) What happened to the artificial-intelligence investment boom? Available online at: https://www.economist.com/finance-and-economics/2024/01/07/what-happened-to-the-artificial-intelligence-investment-boom
See also
https://www.researchgate.net/post/Raphael_Enthoven_thinks_that_a_machine_will_never_be_a_philosopher_Do_you_think_so