When OpenAI launched ChatGPT, it became an instant viral phenomenon.
OpenAI's servers buckled under the load as users of every kind swarmed the tool. Writers asked for creative story prompts, students asked for essays and homework answers, and content creators asked for scripts for their social media posts. Much of that explosive growth can be attributed to the free tier: just about anyone can create an OpenAI account and start chatting immediately. Companies now look to ChatGPT and similar large language models (LLMs) as the future of technology, but few are weighing the dangers and shortcomings.
Every year, the Consumer Electronics Show (CES) in Las Vegas shows off the newest tech coming to market. Each show tends to gravitate toward a theme, like blockchain and cryptocurrencies last January. This year, however, AI was the key buzzword. Smart mirrors, dog collars, nail polishers, live-translating earpieces, and even a product pitched as a smartphone replacement are just some of the CES products featuring generative AI. OpenAI's marketing has clearly begun to pay off, given how many businesses are already adopting its chatbot.
So why is too much reliance on AI dangerous? Large language models such as ChatGPT don't really know anything. In general, an LLM works by repeatedly predicting the next word in a sequence until a full response is generated. Though these models are trained on real material such as studies, books, and articles, they don't store exact details. Responses routinely contain made-up facts, known as hallucinations; in law, for example, LLMs have cited court cases that never existed. They are also very easy to manipulate, since their directive is to be helpful. When OpenAI implemented filters to block profanity and dangerous instructions, users found bypasses such as the "grandma method," in which the AI is asked to role-play a user's grandmother telling a bedtime story about building dangerous weapons.
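To make that concrete, here is a deliberately tiny sketch of next-word prediction in Python. The hard-coded probability table is purely hypothetical, standing in for the billions of statistics a real LLM learns from its training data, but the loop mirrors the basic idea: pick a likely next word, append it, repeat.

```python
import random

# A toy "language model": for each word, the probability of each possible
# next word. This table is hard-coded and purely illustrative; a real LLM
# learns billions of such statistics from its training data.
NEXT_WORD = {
    "the":     {"cat": 0.5, "dog": 0.5},
    "cat":     {"sat": 0.7, "ran": 0.3},
    "dog":     {"ran": 0.6, "sat": 0.4},
    "sat":     {"quietly": 1.0},
    "ran":     {"away": 1.0},
    "quietly": {"<end>": 1.0},
    "away":    {"<end>": 1.0},
}

def generate(prompt: str, max_words: int = 10) -> str:
    """Extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        choices = NEXT_WORD.get(words[-1])
        if choices is None:
            break
        # Sample in proportion to probability: the model picks what is
        # *likely* to come next, not what is *true*.
        next_word = random.choices(list(choices), weights=list(choices.values()))[0]
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat quietly"
```

Nothing in this loop checks facts: a fluent but false continuation is just as reachable as a true one, which is exactly how hallucinations arise.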
Despite these clear problems, OpenAI keeps growing its base of large corporate customers looking to replace employees. One notable recent example was a Chevy dealership that put an AI assistant on its website. A customer instructed the chatbot to agree with everything he said and to end each reply by declaring the agreement legally binding, then asked it to sell him a car for $1. The chatbot complied. The customer never actually got the car, but the stunt exposed a vulnerability that could have caused major losses for the dealership.
The technology behind LLM chatbots is not ready for professional deployment, yet stories like these show that companies are using it to replace real jobs anyway.