Should Research Papers Mention Potential Negative Effects of New AI Technologies?

Yes, limitations and restrictions should be discussed along with applications. Every technique has limitations, but because of the pressure to get papers published, most authors highlight only the attractive results. Until we also discuss the negative aspects of an applied AI technique, it is not a complete research result.
It depends on the focus of the research paper. In considering whether to include this topic, you should:
Check the subject area of the journal or conference where you want to submit your paper to see whether the topic is within scope. If it is not, do not include it; instead, use it for another publication if it is worth pursuing as a topic in its own right.
Even if it is within scope, consider whether it would distract from the main focus of your paper. A research paper must maintain its focus if you expect reviewers to accept it.
Yes, it should, depending on the context. For example, if a paper uses AI as an enabler, the limitations of the AI and recommendations for future research should be included in the paper.
Moreover, dedicated papers can be written and published about the negative effects of AI, e.g. how hackers are using AI as part of their arsenal in attacks on cybersecurity.
Interesting discussion. I think negative results should be reported more often, because not all research can be neat and predictable. One can end up with a totally wrong picture of the field, and sometimes you are puzzled by what your data are telling you. That is partly a consequence of negative results having been hidden or self-censored for so long, and reporting them is the only way the scientific endeavour can progress in a meaningful direction.
The same goes for the limitations and shortcomings of a study: how on Earth would your readers otherwise know that you, too, face problems in your research? We keep asking questions precisely because we do not know. I would like it much better if we stopped pretending that we are ideal and much smarter than we actually are.
While potential negative effects are very much worth noting, it is not the role of each individual AI algorithm paper to discuss the full range of potential economic, governmental, social, and technological effects. That would exceed the space available and distract from the paper. What could be noted, however, is how the advance described in the paper relates to such effects. For example, if your new AI classifier is well suited to a certain class of social analytics (say, classifying applicants for a social service), it may need to be held to high standards of explainability before it can be successfully deployed. This high standard may be required to support evaluation of the algorithm by the deploying social service organizations, to enable scrutiny for compliance with applicable laws (e.g. on racial or gender bias), and to support a potential legal challenge against the results. It would be very much worth noting this long-term requirement, suggesting metrics and evaluation methods, and noting limitations in the current form of the algorithm that could be addressed in the future to improve against these metrics.
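To make "suggesting metrics and evaluation methods" concrete, here is a minimal sketch, in plain Python with invented toy data, of how a paper might report a simple group-fairness metric (the demographic parity difference) alongside its accuracy figures. The function name and the data are hypothetical illustrations, not taken from any particular library or from the discussion above:

```python
# Hypothetical illustration: reporting a simple group-fairness metric
# (demographic parity difference) so that "evaluation for bias
# compliance" becomes concrete. Names and data are invented.

def demographic_parity_difference(y_pred, groups):
    """Gap in positive-prediction rates between the best- and
    worst-treated demographic groups (0.0 means parity)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy binary predictions for applicants from two groups "A" and "B".
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(y_pred, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Reporting such a number next to accuracy gives deploying organizations and legal reviewers something auditable, which is exactly the kind of long-term requirement the answer above argues is worth flagging.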
"Negative" in this context seems to presuppose "Positive" overall inclination of the research. For me if such binary categories are applied to a scientific research, there are probably some problems in it's base formulation or results interpretation.
Furthermore, since we are talking about "potentially negative", that adds a whole new level of abstraction, which, for me, cannot be solved with simple (or even complex) Yes or No answer, no matter the arguments.
I agree with Dr Araujo completely. Sometimes when reading someone else's paper, I have thought to myself how incredibly good the results are. And later, when I actually had access to the data, I could not obtain even a remotely similar result (this happened three times, when I asked to reuse data from which colleagues had already published papers). Then you realize that those publications were, of course, wearing makeup, like a photoshopped photograph. Sometimes I wonder how many published papers look great because of the art of crafting an appealing text from something that at least slightly bends the truth. It is indeed eroding science, but publishing became monetized some time ago, and I do not know how this can be prevented.
In my opinion, each scientific field has multiple strengths, so its techniques and methods can be applied in the way that obtains the best results. For example, if some techniques perform better under certain methods, they perform worse under others. So each author can identify the pros and cons of his or her research choices.
Yes! In my candid opinion, it is absolutely necessary to indicate any negative impact in an AI research paper. Such negative effects may prompt the research community to develop novel research capable of addressing them. This is usually documented under "Future Directions".
If a paper's authors have ideas and recommendations concerning the effects and applications of their technology, why not? We do the same for the positive effects of the technologies we produce. I would expect to see such remarks in the Discussion or Conclusion section. This is sometimes done in papers on cryptography, for instance.
However, I think that attributing effects to an AI technology rather than to other factors is a hard and open problem. Such an analysis should be conducted very carefully.
Science can never be value-neutral. There are always positive and negative aspects, and these must be discussed insofar as they are known or anticipated. So the answer to your question above is YES.
All researchers should follow the ethical guidelines that govern the conduct of research. Consequently, mentioning the potential negative effects of a new AI technology is a prerequisite for compliance with the ethical standards of the academic world.
Dear Mr. Vinh T. Nguyen, all objects and all activities always have some negative effect; it is only its magnitude, very small or very large, that varies, so judging it is quite subjective and depends on the perception of the person making the judgment. Regards.
The potential negative effects of new AI technologies need to be mentioned so that all possible scenarios are covered. How can a system become intelligent without scenarios for its potential negative effects being considered?