The ChatGPT iPhone app has major privacy issues you need to know about

OpenAI, the company behind ChatGPT, has released an official ChatGPT iPhone app, bringing its artificial intelligence chatbot to mobile. The mobile version has already become one of the most popular free applications on the App Store. Before you dive into the app, however, be wary of getting too comfortable with the bot and jeopardizing your privacy.

For a while now, several ‘imposter’ applications have been drifting around app stores, attempting to cash in on the generative AI boom, so it makes sense that OpenAI wanted to get its official app out there.

The official app is free (the paid version of ChatGPT is supported but not required). This is a significant advantage, since many AI chatbot apps charge a weekly subscription fee, making them prohibitively expensive, if not outright frauds. You can also use speech-to-text to ‘speak’ to ChatGPT, which makes sense for a conversational AI tool.

The iOS app does come with a clear trade-off that users should be aware of. Most of us know that ChatGPT occasionally makes things up, so there’s plenty of room for improvement in its responses – but when you open the app on your phone, you get a telling warning about sharing personal information, because “Anonymized chats may be reviewed by our AI trainer to improve our systems.”

OpenAI’s privacy policy states that when you “use our services, we may collect personal information contained in the input, file uploads, or feedback you provide.” Suppose you ask ChatGPT questions that include personal information (read: facts about you that you’d rather not share with a living soul). Those questions will be submitted to OpenAI and may be read by a human reviewer. That is significant for your privacy.

Why does this matter?

According to the company, conversations are anonymized before anyone reads them, but this anonymization strips identifying information from the chat’s metadata, not from the content of your prompts. So whether you use ChatGPT for anger management, as a safe space to vent, to seek advice, or to edit and polish personal documents and messages, all of that is being transmitted to – and may be read by – people at OpenAI.
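To make the distinction concrete, here is a minimal Python sketch of what metadata-level anonymization can look like. The record format, field names, and the anonymize function are purely hypothetical – this is not OpenAI’s actual pipeline – but it illustrates why scrubbing metadata does nothing to the personal details you typed into the prompt itself.

```python
# Toy illustration (hypothetical format, not OpenAI's real pipeline):
# "anonymizing" a chat record here means dropping account-level metadata,
# while the prompt text the user typed is passed through untouched.

from copy import deepcopy

chat_record = {
    "metadata": {
        "user_id": "u_8f3k2",            # account identifier
        "email": "jane@example.com",     # contact details
        "device": "iPhone 14, iOS 16.5",
    },
    "messages": [
        {
            "role": "user",
            "content": "My name is Jane Doe and my landlord at 12 Elm St "
                       "is threatening eviction. What should I say to him?",
        },
    ],
}

def anonymize(record: dict) -> dict:
    """Strip identifying metadata fields; leave message content as-is."""
    cleaned = deepcopy(record)
    cleaned["metadata"] = {"device": cleaned["metadata"]["device"]}  # drop user_id, email
    return cleaned

reviewed_copy = anonymize(chat_record)
print(reviewed_copy["messages"][0]["content"])
# Prints the full prompt, personal details included – still readable by a human reviewer.
```

In other words, even in this toy version, the only thing removed is who the account belongs to; whatever you confided in the prompt remains in plain text.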

So you have no way of knowing whether OpenAI is reading your conversations, and there is no way to opt out. The company obviously cannot read every conversation from every user, but it is something you should bear in mind when you use the app.

Now that they can access the bot on their phones (rather than just their laptops), users are more likely to pull it up throughout the day, asking it questions that come up with friends or family or referencing the things they see and do. It’s a very different experience from sitting down and playing with ChatGPT on your laptop – and it raises the probability of people disclosing more personal information than they intend to.

Of course, we’re not suggesting that ChatGPT is spying on you or harvesting your information for sinister motives, but it’s prudent to be careful about what you put into your conversations with the bot. Artificial intelligence is still a developing technology that should be used with care until we’ve all become used to having these chatbots around. If OpenAI’s own chief executive is advocating for controls on his product, the rest of us should tread carefully.
