I think the privacy and security of AI users acquiring information will be similar to that of any other commercial communication software. I assume there will be special areas of AI related to military, national security, and similar information that will require special access credentials. Other than that, AI should be wide open, much like Wikipedia.
I can try to answer this question from an industry perspective. Right now, enterprises tend to see generative AI tools like ChatGPT as more foe than friend, because it is unclear whether, and how, engagement with these tools and the data submitted to them could be used against the organization. The main concern is the accidental submission of an organization's sensitive data by its employees or vendors.
While I expect OpenAI and other vendors are doing their due diligence and providing guardrails around security and privacy, I think enterprises, and even more so government/federal organizations, will need enterprise/government versions where the model and the data sit within the firewall. That said, limiting the data sets this way may defeat some of the benefit and reduce the power of generative AI that we are all seeing right now.
I understand the AWS/Azure/GCPs of the world already have, or are working on, enterprise-grade services around generative AI, and we need to explore and see how it evolves. It is an exciting time for sure, and a challenging one for the cyber community as well. But along similar lines to cloud, generative AI is here to stay, and we need to figure out how best to adopt it with the guardrails necessary to protect our sensitive data.
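To make the idea of a guardrail concrete, here is a minimal Python sketch of one such control: redacting sensitive patterns from a prompt before it leaves the host, bound for a hypothetical inside-the-firewall model endpoint. The URL, the patterns, and the payload shape are all assumptions for illustration, not any vendor's actual API.

```python
import json
import re
import urllib.request

# Hypothetical internal endpoint -- stands in for an inside-the-firewall
# model deployment; not a real product URL.
INTERNAL_LLM_URL = "http://llm.internal.example.corp/v1/generate"

# Illustrative patterns only; a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before it leaves the host."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def ask_internal_model(prompt: str) -> str:
    """Redact first, then send to the (hypothetical) internal endpoint."""
    payload = json.dumps({"prompt": redact(prompt)}).encode("utf-8")
    req = urllib.request.Request(
        INTERNAL_LLM_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]

if __name__ == "__main__":
    print(redact(
        "Customer 123-45-6789 emailed jane@corp.example about key sk-abcdef1234567890"
    ))
```

A real deployment would of course use a vetted DLP product and a much broader policy; the point is only that the check happens before the data crosses the boundary.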
Hello there, I’ll have a stab at this, and I hope you find it helpful.
I’ll start with the scope of the question itself. Whenever anyone asks me “how secure is…?”, my first answer is “secure from what?”
Security as we understand it in computing is a fluid, relative and moving target. Data may be secure from one type of actor but not from a power cut, for example. As Snowden demonstrated, data security often clashes with capitalist ideals of cost, which lead organisations to outsource trusted functions to contractors who don’t share the ethos of their employers.
There’s good, quick and cheap - you get to pick two… quick and cheap therefore means…
Given OpenAI’s desire to protect their most valuable asset, their reputation, I would assume they have a relatively mature policy set and implementation that uses all of this season’s gadgety tech buzzwords: a zero-trust, role-based access control model, step-up authentication and authorisation, quantum-safe crypto probably got sold to them too, that kind of thing.
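To unpack a couple of those buzzwords, here is a toy Python sketch of role-based access control combined with step-up authentication for sensitive actions. The roles, actions, and session model are all hypothetical, purely to illustrate the pattern, not a claim about OpenAI’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical role-to-permission mapping; real systems derive this
# from policy, not a hard-coded dict.
ROLE_PERMISSIONS = {
    "viewer": {"read_logs"},
    "engineer": {"read_logs", "deploy_model"},
    "admin": {"read_logs", "deploy_model", "export_training_data"},
}

# Actions risky enough to demand a second, stronger factor even for
# an already-authenticated session (step-up authentication).
STEP_UP_REQUIRED = {"deploy_model", "export_training_data"}

@dataclass
class Session:
    user: str
    role: str
    mfa_verified: bool = False  # has this session passed a step-up check?

def authorize(session: Session, action: str) -> bool:
    """Allow the action only if the role grants it and, where needed,
    the session has completed step-up authentication."""
    if action not in ROLE_PERMISSIONS.get(session.role, set()):
        return False
    if action in STEP_UP_REQUIRED and not session.mfa_verified:
        raise PermissionError(f"step-up authentication required for {action!r}")
    return True

if __name__ == "__main__":
    s = Session(user="alice", role="engineer")
    print(authorize(s, "read_logs"))      # True: role grants it, no step-up needed
    try:
        authorize(s, "deploy_model")      # raises: step-up required
    except PermissionError as e:
        print(e)
    s.mfa_verified = True
    print(authorize(s, "deploy_model"))   # True after step-up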
Whether or not they test anything in production (as opposed to a test rig) is another question. According to the well-respected Verizon Data Breach Investigations Report (DBIR), some 80% of *all* breaches could have been blocked or prevented by a product or technology the victim already owned but had either misconfigured or switched off due to “operational speed of business” excuses.
I haven’t perused this year’s, but it’ll be similar; they’re all here: https://www.verizon.com/business/resources/reports/dbir/
Anyway, I suddenly thought “why don’t we ask it itself?” So ChatGPT 3.5 (the free one 😉) had this to say…
“OpenAI takes user data privacy and security seriously. As of my last knowledge update in September 2021, OpenAI retains user interactions with the ChatGPT app for a period of 30 days for the purpose of improving the model. However, OpenAI no longer uses this data to fine-tune the model.
To maintain privacy and security, OpenAI employs various measures including encryption and access controls. Conversations you have with ChatGPT are designed to be secure, and OpenAI aims to minimize the risk of unauthorized access or breaches.
For the most accurate and up-to-date information about OpenAI's data handling and security practices, I recommend checking OpenAI's official documentation or privacy policy, as there might have been updates since my last knowledge update.”
Data Privacy: OpenAI is committed to protecting user privacy. It has put in place policies and procedures to ensure that user data is anonymized and carefully handled during the training process.
Access Controls: Access to the data and model is restricted to authorized personnel only, and strict access controls are in place to prevent unauthorized access.
Data Retention: OpenAI retains data for a limited time and follows data retention and deletion policies to minimize the storage of user data.
Security Measures: OpenAI employs security measures to protect against unauthorized access, data breaches, and other security threats.
Regular Audits: OpenAI conducts security audits and assessments to identify and address potential vulnerabilities.
Compliance: OpenAI aims to comply with relevant data protection and privacy regulations, such as GDPR in Europe.
It's important to note that while OpenAI takes significant steps to secure data, the security of data transmission and handling also depends on the users. When interacting with ChatGPT or any online service, users should exercise caution and avoid sharing sensitive personal information.
Since data security practices may evolve over time, it's advisable to check OpenAI's latest policies and security practices on their official website or through their documentation for the most up-to-date information.