Alright, AI is indeed a powerful tool and has become part of our daily routines. However, not all AI-generated responses are trustworthy — it's often necessary to verify their accuracy. From my experience, AI tends to generate answers based on accumulated data and past interactions, which can sometimes limit the reliability of its responses. So, in most cases, I wouldn’t say I trust AI more than humans. Kwan Hong Tan
That's a fair and thoughtful point. While AI can provide speed and breadth in processing information, its responses do rely heavily on the data it's been trained on — which may include biases or outdated content. Trust in AI should be cautious and context-dependent. Ideally, AI serves as a tool to support human judgment, not replace it. Verifying its output, especially for critical decisions, is not just wise — it's essential.
Trust is often considered the glue of a healthy society. We need others and others need us. As artificial intelligence (AI) rapidly integrates into our lives, it becomes an integral part of our social reality. Consequently, the concept of trustworthy AI has emerged as a critical consideration when assessing AI’s role in society and the implications of its implementation.
Article: Human trust in AI: a relationship beyond reliance
Thank you for highlighting this crucial issue. I completely agree that trust is foundational to both interpersonal and societal cohesion. As AI becomes more embedded in our decision-making ecosystems, the question isn't just whether we can rely on AI, but whether we can truly trust it in ways that reflect our human values and expectations.
Your article's emphasis on the relational nature of trust in AI, beyond mere functional reliability, is timely and thought-provoking. Trust in AI should encompass dimensions such as transparency, explainability, alignment with ethical norms, and responsiveness to human context. The idea that AI systems must earn trust through their behavior, much like humans do, shifts the paradigm from technological capability to relational accountability.
I'm especially interested in how this trust can be institutionally safeguarded across different cultures and sectors. As AI becomes not just a tool but a social actor of sorts, interdisciplinary collaboration between technologists, ethicists, sociologists, and policymakers becomes essential. Thank you for stimulating such an important conversation.
Yes, I do trust AI more than humans in certain well-defined tasks — particularly those that involve processing vast amounts of data, recognizing complex patterns, or executing repetitive operations without fatigue or bias. For example, in medical imaging, AI can detect subtle anomalies in thousands of radiographs or MRIs faster and sometimes more accurately than a human radiologist, especially for rare or early-stage conditions. Similarly, in language translation, AI tools can now offer rapid and reasonably accurate translations across hundreds of languages, something unthinkable at human scale in real time. AI also excels in data analysis, such as identifying fraud in millions of financial transactions or predicting equipment failure in industrial systems through anomaly detection. These are domains where human attention would falter due to volume or monotony, whereas AI remains consistent and efficient. However, trust in AI should always be task-specific and contextual. While I may trust an AI to sort images or transcribe speech, I would never trust it uncritically in moral, clinical, or legal decision-making, where human judgment, empathy, and contextual understanding are irreplaceable. In short: for tasks of calculation, classification, or correlation, AI often outperforms. But for tasks requiring meaning, ethics, or accountability, humans must remain at the center.
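As a minimal illustration of the anomaly-detection use case mentioned above (a sketch under simple assumptions, not a production fraud system — the function name and threshold are my own choices), a basic z-score filter can flag transactions that deviate sharply from the norm:

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Return indices of transaction amounts whose z-score exceeds the threshold."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        # All amounts identical: nothing stands out.
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

# A single large payment among many routine ones is flagged:
outliers = flag_anomalies([10.0] * 50 + [10000.0])
```

Real fraud-detection systems use far richer features and models, but the principle is the same: the machine screens the volume, and the flagged cases are exactly where human judgment should enter.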
I agree with my colleagues: when processes are fairly mechanical, I do trust AI, but I need to see the actions being performed so I can check the procedures and improve them where possible, while preserving critical thinking and ethical considerations.
Thank you for your thoughtful comment. I fully agree—trusting AI in mechanical or routine processes makes sense, but human oversight remains essential. Observing the execution not only allows us to verify accuracy but also offers opportunities for procedural improvement. Most importantly, it ensures that critical thinking and ethical judgment are continuously applied—areas where human discernment still has no substitute.
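The oversight pattern described above can be sketched as a simple human-in-the-loop triage: auto-accept high-confidence model outputs and queue the rest for a person to review. The function, tuple format, and threshold below are illustrative assumptions, not a reference to any specific system:

```python
def triage(predictions, confidence_threshold=0.9):
    """Split model outputs into auto-accepted and human-review queues.

    predictions: list of (item_id, label, confidence) tuples.
    Returns (auto_accepted, needs_review), each a list of (item_id, label).
    """
    auto_accepted, needs_review = [], []
    for item_id, label, confidence in predictions:
        if confidence >= confidence_threshold:
            auto_accepted.append((item_id, label))
        else:
            # Low-confidence cases go to a human reviewer.
            needs_review.append((item_id, label))
    return auto_accepted, needs_review
```

The design point is that the threshold encodes *where* trust ends: everything below it is routed to exactly the kind of human scrutiny the comment calls for.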
The AI currently available for use will not become an equal conversational partner or advisor to a human, since the principles of its operation differ from the principles by which human consciousness functions. Apparently, only the next generation of AI, built on logical computation and the manipulation of meanings, will be capable of interacting with humans as an equal.
You've pointed out an important idea. Indeed, current AI systems are based on statistical patterns, not true understanding or meaning-making. Their responses are the result of processing large datasets, not conscious reflection. What you mention about the next generation of AI—grounded in logic and semantic reasoning—sounds like a necessary step toward more equal interaction. Perhaps only when AI begins not just to process information but to understand context and intention will it be able to serve as a full-fledged interlocutor or advisor.
I reported some new approaches to designing next-generation AI at the BICA conference in 2022; they are published in the proceedings of that conference.
Human trust operates on many levels, and trust in artificial intelligence does not yet live up to this noble human trait. Current systems fall short of even the first level of trust, because their programs rest on synchronous training that lacks emotion and temperament.
When my concerns are all about speed, I can trust AI more than humans. However, experience shows that trusting AI without verification will get you into trouble. Remember, AI can be biased by the data it was trained on and its past interactions. For heavy-duty computation, AI will be your saviour, provided you cross-check the results. Humans are generally better at decision-making, especially when emotion is involved.
Very interesting! Thank you for sharing. It would be valuable to review your work — new approaches to AI construction, especially within the BICA context, play an important role in advancing more cognitively-oriented systems. If possible, please share a link or the title of your publication — I’d be glad to explore it.
You're absolutely right — human trust is deeply layered, shaped by emotion, intention, and shared experience. Current AI, despite its impressive capabilities, lacks these fundamental qualities. Its training is statistical, not empathetic; reactive, not intentional. Until AI can engage with nuance, unpredictability, and emotional resonance — not just pattern recognition — it will remain far from earning the kind of trust we naturally extend to other human beings.
I completely agree — it's about knowing when and how to trust. For speed and processing large datasets, AI is unmatched. But when nuance, emotion, or moral judgment are involved, human decision-making still holds the upper hand. Trusting AI without verification can indeed backfire, especially given potential biases in training data. The ideal approach is collaborative: use AI for its strengths, but keep human oversight where context, empathy, and ethical discernment matter most.
Vladimir Zhulego, Vadim Ushakov, Artem Balyakin, Olga Chernavskaya, "On the Possibility of Ontology Map Constructing on the Cerebral Cortex," 2022 Annual International Conference on Brain-Inspired Cognitive Architectures for Artificial Intelligence: The 13th Annual Meeting of the BICA Society. Procedia Computer Science, vol. 213, pp. 692-609, S1877-0509(22)01809-9.
In the near future, we plan to publish several more articles developing this approach.