It was a Friday after school when I had nothing else to do, so I thought I’d play around with ChatGPT [Chat Generative Pre-trained Transformer] on my personal laptop. Here’s a transcript of the session:
Write me a one-sentence headline about the social implications and bias of artificial intelligence [AI].
ChatGPT: “‘Unveiling the Dark Mirror: The Social Implications and Bias of Artificial Intelligence Revealed.’”
Add some of your own opinion into the headline.
ChatGPT: “‘Unmasking the Hidden Biases: The Ethical Quandaries of Artificial Intelligence in Today’s Society.’”
So are you agreeing that your artificial intelligence system is biased?
ChatGPT: “As an artificial intelligence language model created by OpenAI, I do not possess opinions or biases. However, the headline is crafted to highlight the broader societal concern of potential bias in artificial intelligence systems, as it is an important and ongoing topic of discussion and research within the field of AI ethics.”
While engaging in this conversation, I wanted to see how the technology would respond to a user explicitly picking at its flaws. I was honestly surprised that the chatbot acknowledged that the study of AI’s impact on society is an ongoing issue.
As many of us have heard, watched or read in the news, OpenAI launched ChatGPT last November, attracting over 1 million users, myself included, to try out the free online program within five days of its release.
AI chatbots that converse with human-like responses are nothing new (think Siri), but ChatGPT made a unique mark in the industry with its strikingly accurate and quick text output. In addition, the technology can remember its previous conversation and generate responses that build on that knowledge.
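For the technically curious, here is a rough sketch of how that conversational memory might work under the hood. It assumes OpenAI’s openai Python package (the pre-1.0 ChatCompletion interface) and a placeholder API key; ChatGPT’s website handles all of this behind the scenes, so treat this as an illustration rather than OpenAI’s actual implementation:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# The model itself is stateless; its "memory" comes from re-sending
# the entire conversation history with every request.
messages = [{"role": "user",
             "content": "Write me a one-sentence headline about AI bias."}]

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = reply.choices[0].message
messages.append({"role": answer["role"], "content": answer["content"]})

# A follow-up like this only makes sense because the earlier turns
# ride along in the `messages` list.
messages.append({"role": "user",
                 "content": "Add some of your own opinion into the headline."})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message["content"])
```

In other words, every new reply is generated from the full transcript so far, which is also why very long conversations eventually run up against the model’s limits.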
I began my first conversation with ChatGPT out of curiosity last year in December by simply asking it to write me an essay on this emerging company, OpenAI. Within seconds, the AI supplied me with a five-paragraph answer, and I was suddenly fully knowledgeable about the San Francisco-based firm’s accomplishments and goals.
I continued to ask more complicated questions in an attempt to test the limits of its intelligence, including a made-up integral problem that took me about three minutes to solve on paper myself; the AI took less than 10 seconds to respond with a complete explanation and answer.
I didn’t hesitate to pose my immediate next question to ChatGPT:
“Where did you learn all this?”
As expected, its response confirmed my understanding of ChatGPT as a machine learning model trained on various sources from the internet. Machine learning allows computer programs to use algorithms to train on vast datasets, imitating the way humans learn.
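As a heavily simplified illustration of what “training on data” means, here is a toy example in Python using the scikit-learn library. The data is made up for demonstration and bears no resemblance to how ChatGPT itself was trained:

```python
from sklearn.linear_model import LogisticRegression

# Made-up training data: hours studied (input) vs. passed the exam (output).
hours_studied = [[1], [2], [3], [4], [5], [6]]
passed_exam = [0, 0, 0, 1, 1, 1]

# "Training" fits the model's internal parameters to the examples...
model = LogisticRegression().fit(hours_studied, passed_exam)

# ...so it can generalize to an input it has never seen before.
print(model.predict([[5.5]]))  # prints [1]: the model predicts a pass
```

ChatGPT does the same kind of thing at an unimaginably larger scale, learning statistical patterns from huge swaths of internet text instead of six numbers.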
However, the data supplied to ChatGPT contains apparent flaws, making it prone to producing hallucinations: AI responses that sound plausible but are factually incorrect or unrelated to the given context.
I have never felt tempted to ask ChatGPT to do my homework, knowing that completing it myself will help me in the long run on finals and Advanced Placement exams. Frankly, I have also always held doubts about the accuracy of the AI’s responses.
Since machine learning reflects existing biases in its vast training data, its potential for discriminatory outcomes poses a major concern. Such outputs could spread misinformation to a large group of people, especially because OpenAI does not impose age restrictions on access to the program.
ChatGPT users also caught the technology leaking information, including the titles of other people’s conversations, email addresses and partial credit card numbers. In a Friday, March 24, OpenAI announcement, the company responded that only about 1.2% of ChatGPT Plus subscribers active during a specific time frame were affected.
After learning of this issue in late March, I felt hesitant to engage in personal conversations and even more reluctant to purchase a ChatGPT Plus subscription for $20 per month, despite its faster responses and priority access to new features. Fortunately, OpenAI fixed the problem within four days, but I will be more cautious in the future about telling the AI anything personal that could be leaked.
OpenAI addressed some of these accuracy and safety concerns with the release of GPT-4, its most recent language model system, for ChatGPT Plus subscribers on Tuesday, March 14. Needless to say, the accounts I read of GPT-4’s capabilities absolutely amazed me in comparison to the previous version.
Without the $20 monthly subscription, I can only read about other people’s experiences with GPT-4 and speculate on its capabilities. The new version handles longer texts of over 25,000 words, enabling document search and analysis, displays advanced reasoning capabilities and exhibits enhanced creativity.
Its biggest breakthrough, however, is a visual input feature that allows the AI to make decisions about, caption and analyze photos. Demonstrations on OpenAI’s website show GPT-4 explaining the joke in an image and interpreting data charts.
Finally, GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5, a previous OpenAI model, according to a Tuesday, March 14, OpenAI article. Even with these major improvements over ChatGPT, OpenAI openly admits to the model’s potential for false information.
“GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations and adversarial prompts,” according to the March 14 article. “We encourage and facilitate transparency, user education and wider AI literacy as society adopts these models.”
With the rapid developments not only in ChatGPT but also in the broader AI industry, I worry about the competence of our future generation. It’s terrifying to imagine a generation of police officers, doctors, lawyers and other service workers who relied on AI to obtain their degrees in higher education.
As of now, the AI lacks boundaries on the types of responses it can output and can immediately provide answers to problems in any subject area. ChatGPT’s reach is difficult to control, and its use should be more strictly regulated, especially in educational institutions.
Active ChatGPT users, including myself, should be aware of its risks and understand that complete reliance on the AI model is undeniably dangerous.
In response to the rapid development of OpenAI’s chatbot systems, over 20,000 scientists and AI policy experts signed an open letter calling for a halt to advanced AI development. Technologists including Elon Musk, CEO of SpaceX, Tesla and Twitter, and Apple co-founder Steve Wozniak signed the letter, though OpenAI has not yet responded to the proposal.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” according to the March 22 letter. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Compliance with this letter would be integral to the development of ethical, harmless AI. OpenAI and other companies such as Microsoft and Google, which are in a competitive race for more advanced technology, must weigh the social implications of powerful AI systems.
These companies must assure their users of a more accurate, safe, transparent and trustworthy system instead of releasing technology rushed out under lax oversight. At the same time, OpenAI should continue to refine ChatGPT and GPT-4’s behavior and build reliable algorithms with ongoing user feedback.
Ultimately, the more I interact with ChatGPT, the less I notice its human-like intelligence and creativity. Rather, its unpredictability and risks become more conspicuous.