Launched by OpenAI late last year, the AI-powered chat platform ChatGPT has become a hot topic in the IT world because of its ability to compose speeches and tell stories with striking accuracy in a matter of seconds.
However, the platform, which has been trained on huge amounts of data to enable it to interact with users, is not only impressive but also concerning.
Scientists and information technology experts have taken a closer look at ChatGPT and warned of several serious problems, including gaps in data protection and data security, as well as the spread of hate speech and fake news.
“With all this hype … this system is yet to be examined critically,” says Ruth Stock-Homburg, founder of the “Leap in Time Lab” research laboratory and professor of business administration at the Technical University of Darmstadt.
Four Main Issues
Experts pointed to four major vulnerabilities in ChatGPT:
Possibility of Manipulation
The system supports an extremely wide range of applications. In an experiment at the Technical University of Darmstadt, researchers spent seven weeks sending thousands of questions to ChatGPT and found that it could be manipulated, according to Stock-Homburg.
Sources Are Hard to Trace
“Tracing the sources of information or content in the system is very hard,” says Stock-Homburg. For instance, if you ask the system a question containing criminal content, the rules and security mechanisms meant to block such requests can be easily circumvented, according to Sven Schultze, a doctoral student at TU Darmstadt and an expert in speech AI.
With a slightly different approach, the software will show you how to write a fraudulent email, or offer three variants of how scammers can carry out the grandchild trick.
ChatGPT can also provide instructions for burglary, including how to use weapons or physical violence if residents are encountered. The sources of this criminal content cannot be easily traced through the platform.
Data Security and Data Protection
Privacy concerns about data security and data protection accompany every new technology, and ChatGPT is no exception.
“What you can say is that ChatGPT collects, stores and processes a wide range of data from its users in order to train the model accordingly,” says Christian Holthaus, a certified data protection specialist based in Frankfurt.
However, Holthaus explained that the problem is that all of the servers are in the USA, meaning data from users in every country in the world is stored and processed outside European jurisdiction.
“That’s the real problem if you don’t manage to establish the technology in Europe or have your own,” says Holthaus. He does not expect a data protection-compliant solution in the foreseeable future.
An Immature System
ChatGPT is still an immature system in the development stage. While it is currently popular with individual users, it is not yet suitable for the business sector or areas involving sensitive data, Stock-Homburg says: “We have no idea how to deal with this thing because it is not yet mature.”