A group of faculty and students from different colleges at the University of Tennessee, Knoxville, recently released a guidance document on artificial intelligence (AI) text generation titled "How does GPT work? Understanding Generative AI Models." Their goal is to help UT's students, faculty, staff and the public develop a more informed understanding of AI and of ChatGPT's functions.
ChatGPT: Q&A
Drawing on information from their paper, ChuanRen Liu, an associate professor in the Department of Business Analytics and Statistics at UT’s Haslam College of Business, and Harshvardhan, a business analytics doctoral candidate, recently answered questions about GPT’s capabilities.
To start your analysis, your team asked ChatGPT to define ChatGPT. It replied: “ChatGPT is a large language model trained by OpenAI, based on the GPT (Generative Pre-trained Transformer) architecture. It uses deep learning techniques to generate human-like responses to text-based inputs, such as chat messages, emails or social media posts. ChatGPT has been trained on a vast amount of text data and is capable of understanding natural language and context, allowing it to provide accurate and coherent responses to a wide range of questions and topics. Its capabilities make it a useful tool for various applications, such as chatbots, customer service, and language translation.” That seems accurate, but it uses several terms, like “GPT (Generative Pre-trained Transformer) architecture,” which readers probably don’t encounter every day. Can you simplify it?
Harshvardhan: Yes. ChatGPT is a computer program created by OpenAI, an artificial intelligence research lab. It is good at understanding and replying to messages like texts, emails or social media posts using language that sounds like it comes from a person. It learns from lots of written text and can understand what people mean when they talk or write. This makes it helpful for many applications, like chatting, helping customers and translating languages.
Liu: Also, in our paper, we include appendices that give explanations of technical terms like those above, which are frequently used in reference to generative AI models. We hope this will give readers a clearer understanding of how ChatGPT works.
In your paper, you note that ChatGPT can perform human-like undertakings, such as solving quadratic equations and giving definitions of morality plays. It appears this technology covers an incredible array of human knowledge. What should college faculty and students take away from this?
Harshvardhan: This technology is capable of many feats, including writing computer code and essays. However, it "hallucinates" facts and gives wrong information while presenting it with complete confidence. In such situations, the user has an added responsibility to verify the information GPT produces. ChatGPT also has a hard limit on its information: it has been trained only on data through September 2021. Given these limitations, the primary concern is the accuracy of its information.
Liu: ChatGPT seems good at generating ideas; it's good at brainstorming because there are no obviously wrong ideas in brainstorming, right? However, because ChatGPT is ultimately an AI system trained on historical data, it can only recombine ideas it has already seen, so relying on it too heavily can hinder creative progress. If users just rely on it for brainstorming, it would be as if AI were dictating the ideas and we humans simply implement them.
Warnings about the dangers of AI have been in the news lately. If a person asks you if we should be worried about ChatGPT taking over the world, how would you respond?
Harshvardhan: Such discussions are unfounded. There are certainly going to be systematic changes, but the fears are highly exaggerated. AI systems, including ChatGPT, are controlled by humans and have profound limitations. If you are worried about it, spend some time working with ChatGPT, and you will see this is so.
Liu: As a tool to assist humans, generative AI models raise many issues to consider. They would certainly make humans more productive than before, and that could have several impacts. Work in some occupations that takes 10 people to perform today may soon require only five. If every worker is twice as productive, the resulting economy could grow by a factor of two, if not more. At the same time, many workers will be displaced, although fears of technology taking jobs from people have been around since at least the Luddites. We might also end up with more free time, although studies have shown that, historically, this has not been the case.
What are some ways technology like ChatGPT is being used that readers might expect to encounter in everyday life? Are things like Alexa and Siri similar to ChatGPT?
Liu: There is really no other general-purpose AI technology like ChatGPT. Alexa, Siri and "smart homes" aren't very smart compared with the new large language models (LLMs). There is a critical difference between those "rule-based" systems and generative transformers, which understand the complexity of the user's question and generate a thoughtful reply.
Harshvardhan: GPT will be more visible when Microsoft launches its Office Copilot or Google launches generative AI in its systems, or when more technology companies integrate such systems.
So, ChatGPT isn’t going to take over the world. But its use has staggering implications. What applications do you think it could be most successful with in the future? Also, where do you see the biggest areas of concern in its future applications?
Harshvardhan: ChatGPT has a promising future in various applications, including customer service, education, language translation, content creation and personal assistance. As AI technology advances, it can significantly improve these sectors by providing personalized experiences, breaking down language barriers and streamlining tasks. However, it’s essential to balance the potential benefits with the need to address potential risks and concerns.
Liu: Some of the concerns related to ChatGPT and AI applications include misinformation, malicious use, built-in bias, privacy issues, job displacement and social and economic disparities. To ensure the ethical use of AI, developers, policymakers and stakeholders must collaborate on responsible AI development, regulation and implementation. This collaborative approach will help minimize risks and maximize the positive impact of AI on society.
Want to Know More About ChatGPT?
For a more detailed breakdown of how ChatGPT works and its potential uses and drawbacks, see the full paper “How does GPT work? Understanding Generative AI Models” by Harshvardhan and his co-authors, Sally Corran Harris, distinguished lecturer and the associate director of Undergraduate Studies in UT’s Department of English; Dania Bilal, professor of information sciences at UT; Lena Shoemaker, writer and student in UT’s Department of English; and Alexander Yu, student in UT’s Department of Electrical Engineering and Computer Science.
To explore ChatGPT firsthand, visit OpenAI's blog.
About the Department of Business Analytics and Statistics at UT’s Haslam College of Business
The Department of Business Analytics and Statistics’ mission is to create knowledge through research and to disseminate that knowledge through its degree programs. The faculty uses the results of its application-focused research to educate students on how to effect positive change within organizations by emphasizing soft skills, such as communication and team building, alongside the targeted and effective use of analytics. The department’s continually evolving curriculum draws upon state-of-the-art theoretical and practical content from the fields of statistics, machine learning and operations research.
—
CONTACT:
Scott McNutt, business writer/publicist, rmcnutt4@utk.edu