How can companies, enterprises and institutions reduce the risk of leakage of sensitive data that their employees enter into ChatGPT?

How can companies, enterprises and institutions reduce the risk of leakage of sensitive data that their employees enter into ChatGPT or other intelligent chatbots equipped with generative artificial intelligence technology in an attempt to make their work easier?

Despite training and regularly updated internal rules in many companies and enterprises on the proper use of intelligent chatbots, such as ChatGPT made available online by OpenAI and the similar applications that a growing number of technology companies are releasing on the Internet, there are still situations in which careless employees enter sensitive data of their employers into these online tools. In such a situation there is a considerable risk that data and information entered into ChatGPT, Copilot or another such chatbot will later surface in a reply, report, essay or article that the application generates on the smartphone or computer of another user. In this way, another Internet user may, accidentally or through a deliberate search for specific data, come into possession of information that is critical for a business entity, public institution or financial institution, such as confidential strategic plans: information of great value to competitors or to the intelligence services of other countries. Incidents of this kind have already occurred in companies with highly recognizable brands in specific product and service markets.

Such situations clearly show that it is necessary to improve internal procedures for data and information protection, the efficiency of data protection systems, early warning systems that signal a growing risk of losing key company data, and systems for managing the risk of sensitive data leakage and of cyberattacks on internal company information systems. In parallel with improving these systems, internal regulations on the correct use of chatbots available on the Internet should be updated on an ongoing basis, in line with the scale of the risk and with the new technologies being implemented in the business entity. Training should accompany these measures, so that employees learn both the new opportunities and the risks arising from applications and tools based on generative artificial intelligence technology.

Another solution to this problem is to ban employees outright from using intelligent chatbots available on the Internet. A company that chooses this path will be forced to create its own internal equivalents: applications and intelligent chatbots that are not connected to the Internet and operate solely as integral modules of the company's internal information systems. Such a solution will probably require significant financial outlays on building these IT solutions, and for many small companies this financial barrier may prove too high. On the other hand, if internal IT systems equipped with the company's own intelligent chatbot become an important element of competitive advantage over key direct competitors, those outlays will likely be treated as financial resources allocated to investment and development projects that are important for the future of the company.
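One practical building block for the early warning and data protection layer described above is a prompt-screening filter that inspects what an employee is about to send to an external chatbot and blocks or flags anything matching known patterns of sensitive data. The following minimal sketch in Python is only an illustration: the pattern set, the screen_prompt function and the project codenames are hypothetical placeholders, and a production deployment would rely on a dedicated data loss prevention (DLP) solution rather than a handful of regular expressions.

import re

# Hypothetical patterns for data that must never leave the company network.
# A real deployment would use a dedicated DLP engine with far richer rules.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?: ?\d{4}){4,7}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
    "project_codename": re.compile(r"\bProject (?:Aurora|Borealis)\b"),  # hypothetical codenames
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names of the rules that matched)."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (not hits, hits)

if __name__ == "__main__":
    allowed, hits = screen_prompt("Summarize the CONFIDENTIAL Q3 roadmap for Project Aurora.")
    if not allowed:
        # Block the request and alert the security team instead of
        # forwarding the text to the external chatbot.
        print(f"Prompt blocked, matched rules: {hits}")

Deployed at a network proxy or inside an approved client application, such a filter can both stop individual incidents and feed the risk management system with statistics on how often employees attempt to paste sensitive material into external tools.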
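For the stricter variant discussed above, in which publicly available chatbots are banned outright and replaced with an internal equivalent, client code can be pointed at a self-hosted model running entirely inside the company network. The sketch below again makes illustrative assumptions: the endpoint address llm.internal.example and the request and response shapes are hypothetical, standing in for whatever interface the company's internal service actually exposes.

import json
import urllib.request

# Hypothetical address of a self-hosted model inside the company network;
# prompts sent here never leave the internal infrastructure.
INTERNAL_CHAT_ENDPOINT = "http://llm.internal.example:8080/v1/chat"

def ask_internal_chatbot(prompt: str) -> str:
    """Send a prompt to the company's internal chatbot and return its reply."""
    payload = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode("utf-8")
    request = urllib.request.Request(
        INTERNAL_CHAT_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        body = json.load(response)
    # Assumed response shape: {"reply": "..."}; adapt to the real service.
    return body["reply"]

Combining the two sketches, the screening filter can sit in front of the internal endpoint as well, so that policy violations are caught even when no data can physically leave the network.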

The key opportunities and threats related to the development of artificial intelligence technology are described in my article below:

OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT

In view of the above, I address the following question to the esteemed community of scientists and researchers:

How can companies, enterprises and institutions reduce the risk of leakage of sensitive data that their employees have previously entered into ChatGPT or other intelligent chatbots equipped with generative artificial intelligence technology in an attempt to make their work easier?

What do you think about this topic?

What is your opinion on this issue?

Please answer,

I invite everyone to join the discussion,

Thank you very much,

Best regards,

Dariusz Prokopowicz

The above text is entirely my own work, written on the basis of my research.

In writing this text, I did not use other sources or automatic text-generation systems.

Copyright by Dariusz Prokopowicz
