Machines and robots help a great deal in medicine, but when they help in the automotive industry they put thousands of people out of work. I think that what is now called AI affects the scientific community in a positive sense: we have to stay vigilant in research about its good or bad uses. Remember that intelligence comes from the human body, from our brain, not from machines, and AI is not artificial; it is built by people, the scientists (see the neuroscientist's lecture on YouTube: "Miguel Nicolelis explica por que a IA nem é inteligência nem é artificial").
Why do "Organic Foods" require to meet strict scientific standard certifications in order to be marketed legally as "Organic"?
Why are business models that incorporate the risk mitigation of "Certified Genuine Organic" necessary for conducting legal business? The worldwide list of good answers is very long and has a detailed historical timeline: the public health consequences that historically resulted from NOT having objective quality tests, legal monitoring (with damages and punitive damages), and accountability for violations.
Please consider applying a similar standard to scientific research that has been adulterated by substituting "Artificial" (non-organic) components while labeling it as "Organic Intelligence", deliberately concealing that the research is being generated by "Artificial Intelligence".
In my opinion, such an approach to this very important issue would allow both categories of science to co-exist in a competitive market. However, it would require some operational definitions that are currently lacking (as has historically been the case in Organic vs Non-Organic product labeling).
Unrestrained harmful and deceptive labeling, and the socioeconomic marketing strategies and logistics that support it, have always been shown eventually to compromise general public safety. Is it logical and reasonable to expect any other eventual outcome from the deliberately deceptive and undisclosed use of "Artificial Intelligence" in the process of conducting bona fide scientific research? Transparency and full disclosure of the stepwise methods involved in documenting experiments (in such a manner that experimental findings can be reproduced and validated by third-party umpires) have always been part of real science.
Utilization of Artificial (machine) Intelligence versus Organic (human) Intelligence requires "accountability of the user" for the resulting consequences (good or bad).
Consider a framework of real actionable accountability with civil and criminal damages and punitive and personal damages for consequences. That is a Fair and Equitable starting place for any real discussion of Artificial vs Organic "intelligence".
I support working in conjunction with AI, but we must guard against becoming overdependent on it when it comes to research. We must not rely only on AI to do the thinking for us; otherwise we may slowly lose the ability to conduct meaningful research based on our own judgement and skills. Hence, AI can assist us in making research more efficient, but we must stay in control of it. Furthermore, we must ensure that the use of AI doesn't compromise research ethics.
The answers above are very helpful. Richard explores an important point to which I wish to add: defining our terms and parameters is important. Toward this end, when deciding how we wish to safeguard against the negative effects of utilizing AI in research, we need to ask two fundamental questions.

1) How do we define our "research"? In other words, what are our planned methodologies, the philosophies behind them, and how do they fit within our hypothesis?

2) How do we plan to incorporate AI into this process? After detailing the specific goals we intend AI to accomplish, we then need to analyze whether its current iteration can satisfactorily complete them.

I would argue that when the goal is relieving computational burdens, you are probably in a relatively "safer" place than if you are looking for AI to assist with semantic, philosophical, or other more subjective decisions. Regardless of the application, it is necessary to know what weaknesses your planned use of AI may have in relation to what you intend to use it for. From that, you need a strategy for accommodating those weaknesses, or at least for targeting their related outcomes for heightened validation. On the latter point, the analysis boils down to this: are the safeguards needed worth the time and effort of incorporating AI in the first place?