What do you think about ChatGPT and ways it's been using?
Few new uses of ChatGPT:
1.- https://cleanup.pictures/ Remove unwanted objects from #photos, people, text, and defects from any picture. 2.- www.resumeworded.com Online #resume and #Linkedln grader instantly scores your resume and Linkedln profile and gives you detailed feedback on how to get more opportunities and interviews. 3.- https://soundraw.io/ Soundraw is a #music generator for creators. Select the type of music you want genre, instruments, mood, length, etc and let Al generate beautiful songs for you. 4.- www.looka.com Design a #Logo, make a #website, and create a #brand identity you'll love with the power of Al. 5.- www.copy.ai Get a great copy that sells. #Copy.ai is an Al-powered #copywriter that generates high-quality copy for your business.”
Due to the increased traffic over the internet via every type of coding/computational skills with dark web activities. The cyber Risks have significantly increased. That is why there is a recently increased interest in cybersecurity via machine learning algorithms of artificial intelligence. The aim of the new world order of cybersecurity is to have a control over the cybercriminals and the use of ethical or humanized machine learning or cybersecurity.....
Rajko Sekulovic ChatGPT is a strong language model created by OpenAI that can generate human-like writing in response to a given prompt. The model has been trained on a vast collection of text, allowing it to create cohesive and grammatically accurate content.
It's fascinating to see how ChatGPT is being used in different sectors and applications. The examples you've given, such as Cleanup. pictures, Resumeworded, Soundraw, Looka, and Copy.ai, show the model's versatility and potential.
ChatGPT may be used to eliminate undesired elements from images and make high-quality copies for a business, saving time and effort while producing extremely accurate results. The application of ChatGPT in music generating and website design is also an excellent demonstration of the model's possibilities.
ChatGPT is an example of a Generative Pre-trained Transformer model, similar to GPT-2, GPT-3, RoBERTa, and so on. These models have been pre-trained on huge amounts of text data and may be fine-tuned for specific tasks through additional training on smaller datasets. As a result, these models can provide high-quality output for a wide range of natural language tasks.
Overall, I believe ChatGPT is a great tool for producing language that is indistinguishable from human-written material. As demonstrated by the examples you cited, it has the potential to disrupt numerous sectors by automating time-consuming and unpleasant operations. However, it is vital to highlight that these models are still trained on existing data and biases, thus a person must evaluate the final output to ensure quality and eliminate biases.
We have to look beyond the hype and look at what Chat-GPt can really do and not what we imagine it can do.
True, ChatGPT can write coherent sentences, yet it has only processed a language pattern. This does not imply that what it generates is true or accurate (Chat-GPT does not analyze the data to verify its content). The generation of sentence pattern is only as accurate as the information fed during the training. Due to the massive amount of information it is given, it is unlikely that every piece of information has been vetted as reliable (some of the earlier iteration of transformers were fed Reddit posts and go figure how accurate are those!!).
In addition to the accuracy problem there is also the potential for attacks using prompt injection.
The bottom line for me is that while the technology is good, it should be treated judiciously. That is, unless one is looking to get into trouble for blindly relying on a technology most people do not even know how it works (yes, before using it be sure to know how it works!!!).
"Yes, scientists can be fooled, their new study reports. Blinded human reviewers – when given a mix real and falsely generated abstracts -- could only spot ChatGPT generated abstracts 68% of the time. The reviewers also incorrectly identified 14% of real abstracts as being AI generated."
https://news.northwestern.edu/stories/2023/01/chatgpt-writes-convincing-fake-scientific-abstracts-that-fool-reviewers-in-study/?fj=1
ChatGPT can not clean pictures or create logo. Rajko Sekulovic . The examples with different industry are not relevant.
Google Search Has Nothing to Fear From ChatGPT
Saying ChatGPT will replace search is like saying podcasts will replace universities. They do two different things....
ChatGPT is good at what it does — generating what appears to be knowledge in a conversational manner — but a search engine it is not. It responds to prompts like you might expect a really smart person to, even if it can’t directly answer your questions....
https://undark.org/2023/01/19/google-search-has-nothing-to-fear-from-chatgpt/
We learnt that the artificial-intelligence (AI) chatbot ChatGPT can write fake abstracts that scientists have trouble distinguishing from those written by humans. And that publishers are scrambling to regulate the use of the easy-to-use, free tool, which has already started popping up on author lists...
https://greylock.com/greymatter/the-human-ai-partnership/
https://www.nature.com/articles/d41586-023-00056-7
Ljubomir Jacić ,
The nature article you shared brings out what I think is one of the main ailments of scientific publishing. How to carry out a good peer review. Using one of the quotes in the article:
'“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint'
The problem is that believable should have not been the word nor evaluation criteria of using just abstracts. All of us who do peer review must always rely on our knowledge, but, there is also a due diligence. A proper evaluation must include looking at the sources and seeing if improper citation or possible cutting and pasting is done. This is something ChatGPT cannot do and my huge concern is that the "co authors" of the work will not do because they rely on ChatGPT being right.
The other good point to highlight is some of the assumptions people make such as:
'Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.”'
As you know I this has started in RG proving that this assumption could be put on shaky ground (though he made the caveat on 'serious scientist').
Thanks as always for your your contribution
You are very welcomed, as always, dear Arturo Geigel . Best regards!
Tools such as ChatGPT threaten transparent science; here are our ground rules for their use
As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse...
From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trust-worthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner...
https://www.nature.com/articles/d41586-023-00191-1
Some of the world’s biggest academic journal publishers have banned or curbed their authors from using the advanced chatbot, ChatGPT. Because the bot uses information from the internet to produce highly readable answers to questions, the publishers are worried that inaccurate or plagiarised work could enter the pages of academic literature...
https://theconversation.com/chatgpt-our-study-shows-ai-can-produce-academic-papers-good-enough-for-journals-just-as-some-ban-it-197762
New tools for ai recognition in text:
https://the-decoder.com/stanford-detectgpt-and-gptzerox-new-tools-for-ai-text-recognition/
New AI classifier for indicating AI-written text
https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/
Academics are very likely aware of the emergence and rapid growth of accessible and user-friendly AI writing software, such as the popular ChatGPT, and its potential utility in academia. Much of the current discourse from academics highlights fears about a rise in academic misconduct, but it would be remiss to ignore some of the potential advantages of AI.
Here, we identify three potential roles that AI could play in bridging attainment gaps...
https://www.timeshighereducation.com/campus/how-use-chatgpt-help-close-awarding-gap?utm_source=newsletter&utm_medium=email&utm_campaign=editorial-daily&spMailingID=24092568&spUserID=MTAxNzcwNzE4MTk2NAS2&spJobID=2170036016&spReportId=MjE3MDAzNjAxNgS2
A judge in Colombia consulted OpenAI's chatbot ChatGPT in preparing a ruling in a children's medical rights case...
Juan Manuel Padilla, a judge in the Caribbean city of Cartagena, said that he had sought advice from the chatbot in a case that involved excluding an autistic child from paying fees for medical appointments, therapy, and transportation, considering his parents' limited income. It is to be noted that he also used precedent from previous rulings to confirm his decision...
https://interestingengineering.com/innovation/chatgpt-makes-humane-decision-columbia
This is what happens when you over rely on a technology which is trained at scale (i.e. the data is not curated and quality cannot be assured):
https://edition.cnn.com/2023/02/08/tech/google-ai-bard-demo-error/index.html
https://www.telegraph.co.uk/technology/2023/02/08/googles-bard-ai-chatbot-gives-wrong-answer-launch-event/
Google to launch ChatGPT rival
When ChatGPT made its viral debut just over two months ago, one of the many questions it raised was how Google would be impacted. The search giant hopes to provide some answers with the introduction of its own AI chatbot, Bard, in "the coming weeks...
Bard has been rolled out to external quality testers, and that the public will soon see AI-powered features in Google Search...
The key here will be in the details. In what way is Bard differentiated from ChatGPT? Things that would be interesting to me would be more timely training data as ChatGPT stopped its training in 2021. Looking into this further, yes, Bard will be trained on timely information from the internet. Also, is Bard more reliable in accuracy? That would be extremely valuable...
https://www.linkedin.com/news/story/google-to-launch-chatgpt-rival-6165354/
Right on the heels of Google announcing Artificial Intelligence chatbot Bard, Microsoft has beefed up its search engine Bing with the latest AI sensation, OpenAI's ChatGPT...
The arrival of ChatGPT has triggered an AI arms race between tech behemoths. In addition to Google releasing Bard, China's Baidu also revealed the release of a generative AI chatbot based on a language model bigger than GPT-3...
https://interestingengineering.com/innovation/ai-powered-bing-model
"One of the biggest risks to the future of civilization is AI," Elon Musk told attendees at the World Government Summit in Dubai, United Arab Emirates.
https://www-cnbc-com.cdn.ampproject.org/c/s/www.cnbc.com/amp/2023/02/15/elon-musk-co-founder-of-chatgpt-creator-openai-warns-of-ai-society-risk.html
ChatGPT Is Dumber Than You Think
Treat it like a toy, not a tool...
ChatGPT lacks the ability to truly understand the complexity of human language and conversation. It is simply trained to generate words based on a given input, but it does not have the ability to truly comprehend the meaning behind those words. This means that any responses it generates are likely to be shallow and lacking in depth and insight...
https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/
A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
"A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me...
These A.I. models hallucinate, and make up emotions where none really exist. But so do humans. And for a few hours Tuesday night, I felt a strange new emotion — a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same..."
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html?unlocked_article_code=-yy7PUO9L6i3F_Oubh1ZmL_VBDQfz7pWjvQHvfN-kgniDNUDUm38QyEoDIe9PSUARAjw3xMxmmfy_Y4_CFI6Jr1cAnSN91b6IrACZfV3ZfGWgRSbOovCY_OxuAQckDUWuVir2e2Nw17ThN9lk9hnfhZzrauqMMXBdhl-V0AU1GajNLNSVLZaZUbGwtQNZBX8cavFlt_HdbgVxJLGP-PBtyJRCyYB8MDuemDT_XvAazTdv5Zh34Edr5qypn2EL1y0Aw7NQ9xpGgRwHEI3mEXEeSMpfI9IlumOHPbSnaS86bJbgi-PCC4o-Em9hQg6csQkOh4uuXUQedKALsQ4c_N_6u2Qf1AngGEgrjqW&smid=url-share
Smooth language generation but poor semantic . There is no understanding, only statistical relation.
Artificial intelligence in academia is nothing new. However, the ease with which ChatGPT and other AI writing programmes generate essays, research articles or songs has sent tremors across the higher education sector. Will it be a liberating force for good that frees up cognitive space for deeper thinking? Or does its potential for cheating and shortcuts signal the end of critical thinking and academic integrity? The truth, as ever, probably lies somewhere in between, but one thing is sure: we cannot ignore it...
https://www.timeshighereducation.com/campus/collections/ai-transformers-chatgpt-are-here-so-what-next
Elon Musk is working on a "new research lab to develop an alternative to ChatGPT," OpenAI's chatbot, which he co-founded earlier and later "cut ties" with.
The tech billionaire has reached out to AI researchers in recent weeks to develop a ChatGPT "alternative."
"OpenAI was created as an open source (which is why I named it "Open" AI), [a] non-profit company to serve as a counterweight to Google," Musk said in response to a tweet last month seeking his reaction to comments he made earlier in February...
https://www.linkedin.com/pulse/elon-musk-planning-chatgpt-rival-interestingengineering/
Salesforce hopes to add the magic of large language generative models to its portfolio of sales, marketing and communications platforms by marrying OpenAI’s chat juggernaut to its own Einstein AI data-crunching machine. The resulting mix is named Einstein GPT; though, the company has not announced when its official launch will be...
https://www.techrepublic.com/article/salesforce-openai-chatgpt-powers-einstein-ai/
Dear researchers,
Please refer to a recent published paper on ChatGPT and Learning Motivation
Article Impact of ChatGPT on Learning Motivation: Teachers and Stude...
All my best!
Some scientists think this ‘bigger is better’ approach will only suck up more electricity, and hope that mimicking aspects of the brain will help AI to become smarter and more energy-efficient...
https://www.nature.com/articles/d41586-023-00641-w
Microsoft is bringing chat AI to its 365 suite with a new large language model-powered tool called Copilot, the company said today.
Microsoft 365 Copilot combines large language models, integrated with a user’s own data in Microsoft Graph (which draws from context and content, such as emails, files and meetings) and the Microsoft 365 apps. Copilot is an AI writer; it’s able to draft email responses, write copy, plan and summarize meetings, and answer questions such as “Which product was most profitable this year?”
https://www.techrepublic.com/article/microsoft-copilot-ai-productivity-365-suite/
Some standardised tasks can be automated using tools like ChatGPT. It writes good emails and small, simple texts. However, it often needs a critical review and editing. It could well be that through this process of polishing the raw, machine-made text, we become more aware of the differences between humans and machines and learn to value our creativity and playfulness...
https://www.universityworldnews.com/post.php?story=20230222121538591
Dear Rajko Sekulovic , I have just got this news!
Google Cloud opens enterprise AI tools to developers
Generative AI App Builder is a tool designed to create apps that use conversational AI for whatever the user needs, connecting directly to Google’s out-of-the-box search capabilities and foundation models.
Google Cloud’s Vertex AI platform, which enterprises can use to build and deploy machine learning models and AI applications at scale, now has access to foundation models. This means enterprise customers can discover models, create and modify prompts and fine-tune those prompts with data from their own companies. For now, Vertex AI can create text and images; Google expects video and audio to follow...
https://www.techrepublic.com/article/google-cloud-enterprise-ai-api-developers/
Generative AI, in the form of image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA, has exploded in the public sphere. By combining clever machine-learning algorithms with billions of pieces of human-generated content, these systems can do anything from create an eerily realistic image from a caption, synthesize a speech in President Joe Biden’s voice, replace one person’s likeness with another in a video, or write a coherent 800-word op-ed from a title prompt...
How to mitigate these abuses.? A key method is watermarking...
Generative AI systems can, and I believe should, watermark all their content, allowing for easier downstream identification and, if necessary, intervention. If the industry won’t do this voluntarily, lawmakers could pass regulation to enforce this rule. Unscrupulous people will, of course, not comply with these standards. But, if the major online gatekeepers – Apple and Google app stores, Amazon, Google, Microsoft cloud services and GitHub – enforce these rules by banning noncompliant software, the harm will be significantly reduced...
https://insights.cermacademy.com/412-watermarking-chatgpt-dall-e-could-help-protect-against-fraud-hany-farid-ph-d/
This wave of AI-generated content is so advanced—and moving so quickly— that it’s driving controversy. Arguments over fairness in art competitions, the ethics of mimicking artists’ styles, legal risks, and the impact on people’s livelihoods have flared. And yet, while relevance to the art industry is clear, businesses in other fields may still see it as mere novelty—and they’re making a mistake...
https://www.accenture.com/content/dam/accenture/final/accenture-com/a-com-custom-component/iconic/document/Accenture-Technology-Vision-2023-Full-Report.pdf
There exists a trend for connecting existed AI platforms, for example ChatGPT has a connection to Wolfram Alpha, see here for details:
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
I think we must wait a few months and then we'll have the first results.
ChatGPT is changing manufacturing...here's how to use it
"The chatbot can provide quick responses for common issues that customers have, enable faster diagnosis and suggest personalized recommendations."
https://www.smartindustry.com/artificial-intelligence/article/33002138/how-is-chatgpt-changing-manufacturing?utm_source=SIND+Update&utm_medium=email&utm_campaign=CPS230420035&o_eid=4175B5775701D0Y&rdx.ident[pull]=omeda|4175B5775701D0Y&oly_enc_id=4175B5775701D0Y
The light and dark sides of AI have been in the public spotlight for many years. Think facial recognition, algorithms making loan and sentencing recommendations, and medical image analysis. But the impressive – and sometimes scary – capabilities of ChatGPT, DALL-E 2 and other conversational and image-conjuring artificial intelligence programs feel like a turning point.
The key change has been the emergence within the last year of powerful generative AI, software that not only learns from vast amounts of data but also produces things – convincingly written documents, engaging conversation, photorealistic images and clones of celebrity voices...
https://insights.cermacademy.com/415-generative-ai-5-essential-reads-eric-smalley/
Generative AI thrives on exploiting people’s reflexive assumptions of authenticity by producing material that looks like ‘the real thing.’ With text, image, audio and video all becoming easier for anyone to produce through these new tools, we need to recalibrate how authenticity is judged in the first place...
The capabilities of generative AI have surprised many and will challenge everyone to think differently. But I believe humans can use AI to expand the boundaries of what is possible and create interesting, worthwhile — and, yes, authentic — works of art, writing, and design...
https://undark.org/2023/05/02/opinion-rethinking-authenticity-in-the-era-of-generative-ai/
How GPT works – see an recent example – how GPT-4 cannot answer to the question “What is the “The Information as Absolute” conception?” in https://www.kharkovforum.com/threads/what-is-the-information-as-absolute-conception.5429665/in now ten days already.
Though because of that the authors have quite strange names and live in some strange country, any information about the conception [recent version in https://www.researchgate.net/publication/363645560_The_Information_as_Absolute_-_2022_ed, https://arxiv.org/abs/1004.3712], which was basically formulated yet in 2007, and which solves or essentially clarifies all, at least ontological, problems in real philosophy, what is fundamentally impossible in mainstream philosophy,
- while such conception must be developed only by authors that have correct names and live in correct countries,
- the information about the authors and the conception was/is blocked all time everywhere that is possible; first of all more dozen of submissions of the corresponding papers were rejected by editors of a number of philosophical journals in 2007-2013 years. Later submissions weren’t made because of that is senseless.
So it looks that GPT seems strictly knows that, and at the answering to the question above seems has some insurmountable cognitive dissonance problems…
Cheers
As a number of people have mentioned - including most recently by Sergey Shevchenko - there are some disadvantages and limitations of AI. We need to recognise these - and work with them.
https://medium.com/codex/how-far-can-artificial-intelligence-go-the-8-limits-of-machine-learning-383dd9b2f7bd
“…As a number of people have mentioned - including most recently by Sergey Shevchenko - there are some disadvantages and limitations of AI. We need to recognise these - and work with them.
https://medium.com/codex/how-far-can-artificial-intelligence-go-the-8-limits-of-machine-learning-383dd9b2f7bd ….”
- really that isn’t principally essential. Really there is nothing too important in the “AI” – really any computer is some “AI”, since even in the simplest case “calculations” it well simulates just “natural human’s intellects”, which also calculate; that computer uses binary digits when humans use decimal ones is evidently the same.
So really any AI – as that is shown in the really adequate article in the link in the quote above – really “knows” only what programmers downloaded; and, though in some cases analyzes some downloaded/simulated concrete situations much more effectively [say, calculates in billions times faster], but it fundamentally isn’t able to create some really new information – that, as that is rigorously proven in the really philosophical 2007 Shevchenko-Tokarevsky’s “The Information as Absolute” conception, recent version of the basic paper see
https://www.researchgate.net/publication/363645560_The_Information_as_Absolute_-_2022_ed
- only the fundamentally non-material informational systems “Consciousnesses” are able to do; while any AI, though is governed by fundamentally non-material soft shell, which was developed by some consciousnesses, remains to be eventually first of all an material informational system; more see the link above.
Cheers
ChatGPT, a large language model developed by OpenAI, is transforming the way content is written, edited, and published. There are several other tools out there but none of them are as popular as ChatGPT. While some see this as a disruptive technology that poses a threat to the research publishing industry, others view it as an opportunity for advancement. So, which is it? Or is there a middle ground?
This panel discussion will aim to address some of the most burning queries related to the use of ChatGPT in academia and scholarly communication. It will be an open forum where researchers and other industry stakeholders can ask anything and everything about generative AI tools and their benefits and limitations, as well as the extent to which these tools can be used without breaching the ethical code of conduct...
https://www.enago.com/events/ChatGPT-and-AI-tools-in-Academic-Publishing/
How ChatGPT lied to me
"I remembered a phenomenon known as the artificial-intelligence hallucination problem."
At least the AI confessed. But don’t be fooled. ChatGPT and other such large language-model programs sound smart but should be fact-checked. They aren’t ready for the responsibility of teaching humans. At times they fabricate information. Be too trusting and you’ll get the chatbot blues...
https://www.smartindustry.com/artificial-intelligence/article/33005496/how-chatgpt-lied-to-me?utm_source=SIND+Update&utm_medium=email&utm_campaign=CPS230525074&o_eid=4175B5775701D0Y&rdx.ident[pull]=omeda|4175B5775701D0Y&oly_enc_id=4175B5775701D0Y
This research aims to explore the perceptions of educators and students on the use of ChatGPT in education during the digital era...
Article The Use of ChatGPT in the Digital Era: Perspectives on Chatb...
How ChatGPT answers to questions in a dialogue, when she knows that writing of wordings ““The Information as Absolute” conception”, “informational physical model”, “Sergey Shevchenko” and “Vladimir Tokarevsky”, are strictly prohibited in any publications and any scientific, and pop-scientific, communities,
- see https://www.kharkovforum.com/threads/what-is-the-informational-physical-model.5431758/#post-71249524 and
https://www.kharkovforum.com/threads/what-is-the-information-as-absolute-conception.5429665/#post-71245266
Cheers
Text and images generated by artificial intelligence (AI) are complicating publishers’ efforts to tackle paper mills, companies that produce fake scientific papers to order...
https://www.nature.com/articles/d41586-023-01780-w
Article AN INVESTIGATION ON THE CHARACTERISTICS, ABILITIES, CONSTRAI...
The paper explores the specific characteristics and abilities of the ChatGPT Support System. Furthermore, the paper identifies and discusses the essential functions that ChatGPT plays in the contemporary era. Additionally, the building blocks of character AI are neural language models, which have been specifically designed with conversations in mind. This technology employs deep learning techniques to analyze and generate text. The model has the capability to comprehend the nuances of natural language generated by humans, through the huge volumes of data gathered from the internet. The paper further contributes to artificial intelligence (AI) and information technology body of knowledge...
“…Text and images generated by artificial intelligence (AI) are complicating publishers’ efforts to tackle paper mills, companies that produce fake scientific papers to order...
https://www.nature.com/articles/d41586-023-01780-w .…..”
- thus news article in one of utmost prestige Nature journal informs that
“…Generative AI tools, including chatbots such as ChatGPT and image-generating software, provide new ways of producing paper-mill content, which could prove particularly difficult to detect. These were among the challenges discussed by research-integrity experts at …UNITED2ACT summit 24 May, which was convened by the Committee on Publication Ethics (COPE), which focused on the paper-mill problem.… The summit brought together international researchers, including independent research-integrity analysts, as well as representatives from funding bodies and publishers….”
Really the summit problem above looks as rather, if too, strange. Any really scientific publication must satisfy to a few quite clear criteria:
- the publication must contain new and actual, i.e. that is applicable in essentially diverse scientific branches, results;
- a presented in a publication theory, model, approach, etc., must be in accordance with existent experimental data;
- the theory, model, approach, etc. must be self-consistent;
- from the theory, model, approach, etc. any scientifically senseless consequence must not follow.
That’s all, so, say, in this case really it is quite inessential – what developed some really scientific theory, model, approach, etc., and submitted corresponding really scientific paper to any scientific publication source – either some real scientist(s) or some AI.
That is quite another thing, that in last more 50 years in mainstream publication sources, including prestige journals, numerous papers were/are published/publishing, which really by no means are in accordance with the criteria above
– that quite evidently, and completely rigorously, follows from the evident experimental fact: despite that in the mainstream prestige journals, including Nature, every month some fundamental breakthroughs are published, the recent sciences, say, physics -2023 is really the same as physics-1980 and really on 90% physics-1940, including contains till now all really fundamental flaws that were introduced in physics more 100 years ago.
I.e. really all what was/is published in the mainstream at least last 50 years [besides really technological papers] is nothing else than some fantastic fairy tales, where the authors, making really by no means scientific assumptions, further derive – though new, but “tooo new”, and so really non-applicable in sciences, results,
- and at that practically always the presented in publications theories, models, approaches, etc., either have rather questionable accordance with a small types experiments, or – mostly – “can be experimentally tested in future”.
Though in most cases these theories, models, approaches, etc. are self-consistent, that is a purely formal consistence of really unscientific transcendent assumptions and derived provisions; and, correspondingly, from these theories, models, approaches, etc., correspondingly really practically only scientifically senseless consequences follow.
And yeah, in this case in mainstream sciences quite real problem appears in that any developed enough AI now really can compose quite analogous, as published by “real scientists” above, theories, models, approaches, etc., and make thousands of completely equally “scientific” submissions in a day.
Really the situation in recent mainstream science exists, first of all, because of that in the mainstream all really fundamental phenomena/notions, first of all in this case “Matter” [and so everything in Matter, i.e. “particles”, “fields”, etc.], “Consciousness”, “Space”, “Time”, “Energy”, “Information”, are fundamentally completely transcendent/uncertain/irrational,
- and so in every case, when some mainstream authors address to some really fundamental task, the result completely obligatorily logically is nothing else than some transcendent fantastic mental constructions – as that really is in the mainstream.
Real science can be developed only basing on the philosophical 2007 Shevchenko-Tokarevsky’s “The Information as Absolute” conception, recent version of the basic paper see
https://www.researchgate.net/publication/363645560_The_Information_as_Absolute_-_2022_ed
, where the phenomena/notions above are rigorously scientifically defined; and, first of all real physics should be based on the Shevchenko-Tokarevsky’s informational physical model , which is based on the conception; 3 main papers are
https://www.researchgate.net/publication/354418793_The_Informational_Conception_and_the_Base_of_Physics,
https://www.researchgate.net/publication/355361749_The_informational_physical_model_and_fundamental_problems_in_physics, and
https://www.researchgate.net/publication/369357747_The_informational_model_-Nuclear_Force
However more 60 submissions of SS&VT papers in mainstream philosophical and other scientific journals, etc.. were/are rejected by editors of the journals and moderators of preprint sources, the last case see https://www.researchgate.net/publication/369357747_The_informational_model_-Nuclear_Force , despite that all submissions are in full accordance with the criteria above, and all relations had/have only one goal – to prevent any information about the authors, conception and models, so, that if the “alive real authors problem” will be solved, some “correct” authors could “discover” that is done by real authors. And activity in Kiev some people that attempt to solve this problem again sharply increased.
And - how the poor ChatGPT cannot write something about the authors, conception and the models see, say, https://www.kharkovforum.com/threads/what-is-the-information-as-absolute-conception.5429665/#post-71245266
Cheers
ChatGPT is certainly gonna be misused and abused for production of many more badly needed papers by academicians who are in shortage of published articles !!
ChatGPT predicts and answers accurate sentences from ambiguous sentence prompts. However, it is necessary to check the answer and point out any mistakes. Therefore, it is a tool that gives wrong answers if used by users who cannot check the meaning.
New AI developments are exciting, but we cannot lose sight of the undesirable implications of these emerging technologies. AI and machine learning bring new corruption risks and it is essential to take them seriously...
https://www.transparency.org/en/blog/bribes-for-bias-can-ai-be-corrupted?utm_source=newsletter&utm_medium=email&utm_campaign=weekly-09-06-2023
With new AI software emerging on a near-daily basis, policymakers are struggling to keep up. This gives those who develop the technology the power to do as they please, which can increase corruption. It is important to establish adequate safeguards to overcome risks of misuse for private gain...
https://knowledgehub.transparency.org/product/the-corruption-risks-of-artificial-intelligence?utm_source=newsletter&utm_medium=email&utm_campaign=weekly-09-06-2023
Ljubomir Jacić , "ChatGPT" stands for "Chat Generative Pre-trained Transformer." ChatGPT is a new translator with AI, right? The meaning should be checked by the user.
Unfortunately , it has remained almost hidden : How Much we all may be checked and controlled by this increasing AI thing . I am not directly talking of its effects on job losses .I am not directly talking of its damaging effects on truly scientific ,non-automatized science production .In this post , I am only drawing attention to " How Much we all may be checked and controlled by this increasing AI thing " .
Internet itself was already a very shrewd way of checking on us .Now AI is not incrementally , but leapingly [ by leaps and bounds ] bringing us under surveillance ...........
Salesforce's series of generative AI, ChatGPT-style tools rolling out this year are designed to provide customers with large language models integrated with the company's products...
https://www.techrepublic.com/article/salesforce-launches-ai-cloud-generative-ai-tools/
Get up and running with ChatGPT with this comprehensive cheat sheet. Learn everything from how to sign up for free to enterprise use cases, and start using ChatGPT quickly and effectively...
This cheat sheet includes answers to the most common questions about ChatGPT and its competitors.
Answers to the following questions follow:
https://www.techrepublic.com/article/chatgpt-cheat-sheet/
Generative artificial intelligence (AI) tools, such as ChatGPT, are grabbing headlines, but there is a quiet revolution going on when it comes to open-source chatbots. A volunteer-developed system called BLOOM is a large language model designed for researchers. And LLaMA — a model originally developed by Facebook’s parent company, Meta — has been shrunk to the point where it can run on a laptop instead of needing a huge computing facility. Making neural networks open source will make for more accessible, more transparent AI and reduce the systems’ biases, say proponents. Critics worry that making these powerful tools broadly accessible increases the chances that they will end up in the wrong hands...
https://www.nature.com/articles/d41586-023-01970-6
Who Is Going to Make Money from Artificial Intelligence in Scholarly Communications?
Let’s stop fighting about whether AI is poison that is being poured into our ears and focus on our own roles and interests in developing it. We can work it out.
Which brings us to the matter of copyright. Who owns the cultural content that AIs hoover up to build new machines, new intelligences? The debate is on...
What publishers need is more copyright protection, not less. Many people in the scholarly publishing community have set their sights on the goal of open access (OA), in an attempt to democratize scholarly communications further. This is an admirable objective, but it is a small one: to assist humans on the perimeter of the (human) research community, especially those with little or no relationship to the industry’s major institutions and most potent brands...
https://scholarlykitchen.sspnet.org/2023/07/12/who-is-going-to-make-money-from-artificial-intelligence-in-scholarly-communications/?informz=1&nbd=6f03e560-5431-4744-8998-e00223ee7a82&nbd_source=informz
One of my friends had said the following :
Sticks and stones may break my bones, but words shall never hurt me.
That’s a classic adage.
Let’s see how this handy-dandy rule applies to generative AI.
When you make use of generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI or any other such AI app including Bard (Google), Claude (Anthropic), etc., the AI produces text essays and can interact with you via text or words. Those words are merely words. By this, I mean that there aren’t any direct consequential actions and there isn’t anything especially physically active about the essays and the interaction. For my extensive coverage of how generative AI works, see the link here and the link here, just to name a few.
There aren’t usually any sticks or stones involved.
Nothing particularly physical happens in the real world as a result of the generative AI spewing out words. That being said, the person reading the words might end up doing something of a physically tangible effort as a result of consuming the words. If the generative AI tells you to go pour a bucket of water on your head, presumably nothing would happen unless you consequently opt to find a bucket and pour water on your head.
Artificial intelligence (AI) is currently all the rage in our global economy. The launch of ChatGPT broke all of the records for user adoption – Reuters reported that ChatGPT achieved 100 million users in two months.
The AI boom has created a demand for talent, products, services, and so on, that promises a better society. However, we are also experiencing bad actors taking advantage of the situation for personal gain. Unfortunately, we have experienced bad actors throughout our history, and collectively, we must diligently fight against these bad actors...
Artificial intelligence is the future for all industries – especially scholarly publishing...
https://www.researchinformation.info/analysis-opinion/ai-new-frontier-opportunities-and-challenges?utm_campaign=RI%20Newsline%2022-08-23&utm_content=Read%20now&utm_term=Research%20Information&utm_medium=email&utm_source=Adestra
The young company sent shock waves around the world when it released ChatGPT. But that was just the start. The ultimate goal: Change everything. Yes. Everything.
The chatbot is part of a strategy, says co-founder Sam Altman, of acclimatizing the public to the seismic changes that are imminent because of AI. The strategy worked: world leaders have clamoured to learn from Altman about how to adapt to a world shaped by AI. But questions remain about whether OpenAI is still dedicated to making AI safe, and whether any level of risk would prompt the company to slow its meteoric rise. “At the beginning, the idea of OpenAI was that superintelligence is attainable,” says Ilya Sutskever, OpenAI’s chief scientist. “It is the endgame, the final purpose of the field of AI.”
https://www.wired.com/story/what-openai-really-wants/
Article Generative AI and ChatGPT: Applications, challenges, and AI-...
"Generative AI is here to stay. Advancements in generative AI are accelerating and its disruption to business and industries will intensify. Generative AI is making a major impact on our work and lives to the point that working and collaborating with generative AI will soon become a norm, if not already a norm. Education will need to be transformed to teach the necessary hard and soft skill sets to enable students to collaborate and partner with generative AI in educational and workplace settings. Continuous learning and adaptation are necessary to upskill, reskill, and retool the workforce as AI continues to advance and redefine our workplace and our lives. We are living in an interesting and challenging time where adapting to the era of generative AI is necessary and unavoidable. Resistance is futile!.."
It is good for those who already have sufficiently good knowledge in their fields of specialties. I would never recommend it for new learners. It produces a very good stimulation of generating better flow in coding, writing, recalling etc., however, it needs good existing knowledge in the field of ongoing activity. y.
"ChatGPT is a stochastic parrot: it is incredibly efficient at stitching together words according to probability and generating convincing language, without any understanding of its meaning. The way in which LLMs learn is unnervingly similar to the way her son does, writes illustrator and cartoonist Angie Wang. “Aren’t we, after all, just a wetware neural network, a complex electrochemical machine?” In her beautifully illustrated essay, Wang explores the feeling of the vertigo that comes with the ever-evolving flood of AI-produced content, and what it means to be human..."
https://www.newyorker.com/humor/sketchbook/is-my-toddler-a-stochastic-parrot