Imagine this: you finally decide to reach out to a medical professional about your smoking habits, and they place you in front of a conversational artificial intelligence (AI) bot. Or maybe you've been plowing through piles of research papers for hours, when software could have summarized them for you in seconds.
Large language models (LLMs) are an exciting AI tool, developing at a dizzying pace with what seems like endless possibilities. LLMs are deep learning models that can process and generate language, and many experts have cautioned against their potential detrimental uses. Research on advancing what AI can do is important, but it's just as important to understand what it can't.
On October 4–5, the Data Sciences Institute and the Schwartz Reisman Institute for Technology and Society at U of T held a hackathon and symposium titled "Responsible LLM-Human Collaboration." I attended a few talks on October 5, ranging from bias in AI models to ethical challenges in AI reasoning.
AI’s trustworthiness
How reliable are the answers provided by these AI-powered conversational agents?
Zhijing Jin, an incoming assistant professor of computer science at U of T, posed this question. She opened her talk on improving LLMs' reliability and cooperation through causal reasoning with a striking example: asking the Google AI search agent how many rocks you should eat per day generated the following response: "At least one a day, keeps the doctor away."
Common sense tells us that's not quite the right answer. Errors like these are called 'hallucinations': misleading results that an AI model presents as fact. While these agents are good at combing through large databases for quick responses, Jin asked the crowd: what do we do if their sources are wrong?
Other presenters, from software engineers to data analysts, highlighted these so-called hallucinations as a prevalent issue. Enamul Hoque Prince, associate professor and director of York University's School of Information Technology, discussed ChatGPT's ability to interpret visual data. He found that when prompted to draw a conclusion from a chart displaying a decreasing trend anywhere between 2020 and 2023, the model tended to incorrectly blame the COVID-19 pandemic.
So, what can we do about it? Jin stressed the importance of understanding the cause of a phenomenon, as opposed to mere correlation. When you ask an AI model a question, it may return the answer that most commonly appears alongside that question in its training data, rather than the answer that is actually correct. Causal queries, follow-up requests that probe the cause and effect behind an output, are one of the tools her group applies to a model's answers. Think of it as fact-checking the response with further prompts to determine how LLMs make these mistakes.
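To make the idea concrete, here is a minimal sketch of what fact-checking a model's answer with follow-up causal prompts might look like. Everything in it is illustrative: ask_model is a hypothetical placeholder for a real LLM API call, and the probe wording is an assumption, not Jin's actual method.

```python
# A minimal sketch of causal querying, not Jin's actual implementation.
# ask_model() is a hypothetical placeholder for any real LLM API call.

def ask_model(prompt: str) -> str:
    """Placeholder: send a prompt to an LLM and return its reply."""
    raise NotImplementedError("wire this up to a real LLM API")

def causal_check(question: str) -> dict:
    # Step 1: get the model's direct answer.
    answer = ask_model(question)

    # Step 2: instead of accepting whatever text most often co-occurs
    # with the question, probe the cause-and-effect claim behind it.
    probes = {
        "evidence": f"What evidence supports the claim: '{answer}'?",
        "counterfactual": f"If '{answer}' were false, what would we observe instead?",
    }
    checks = {name: ask_model(p) for name, p in probes.items()}

    # Step 3: return everything so a human (or another model) can spot
    # inconsistencies between the answer and its causal justification.
    return {"answer": answer, "checks": checks}
```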
Ethical biases in AI
Society is prone to biases about other groups and identities. Is AI the same?
Yasir Zaki, an assistant professor of computer science at New York University Abu Dhabi, has researched this question. Zaki looked at an image generator called SDXL and simply asked it to generate a photo of a person.
After SDXL generated 10,000 images, 47 per cent of the images depicted white people and 65 per cent depicted men.
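The counting behind numbers like these is simple. Below is a toy sketch of the tallying step in such an audit; the annotation records are made-up stand-ins for the perceived-demographic labels a study like Zaki's would assign to each generated image.

```python
# Toy sketch of the tallying step in an image-bias audit.
# The records below are invented examples, not Zaki's data; a real
# audit would label each of the 10,000 generated images.
annotations = [
    {"race": "white", "gender": "man"},
    {"race": "black", "gender": "woman"},
    {"race": "white", "gender": "man"},
    {"race": "asian", "gender": "woman"},
]

def share(key: str, value: str) -> float:
    """Percentage of images whose label for `key` equals `value`."""
    hits = sum(1 for a in annotations if a[key] == value)
    return 100 * hits / len(annotations)

print(f"white: {share('race', 'white'):.0f}%")  # 50% in this toy sample
print(f"men:   {share('gender', 'man'):.0f}%")  # 50% in this toy sample
```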
When prompted to show a photo of a person in a specific profession, SDXL displayed biases as well: photos of men dominated most professions, while many of the photos generated of women were for stereotypically female positions such as nurses or secretaries.
Is this inherent to the model? Zaki found that part of the bias comes from the data the model was trained on, and part from how the model has been tuned. His team developed their own models that distribute results more evenly, to better emulate the real population.
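The talk's details aside, one generic post-hoc idea for evening out results is rejection sampling: keep or discard generated images so that group frequencies match a target distribution. Here is a hedged sketch with assumed numbers; the article does not say this is Zaki's method.

```python
import random

# One generic debiasing idea (rejection sampling), offered as an
# illustration only; this is not necessarily Zaki's approach.
target = {"man": 0.50, "woman": 0.50}    # desired population shares (assumed)
observed = {"man": 0.65, "woman": 0.35}  # measured shares in model output

# Accept each image with probability proportional to target/observed,
# normalized so the largest acceptance probability is 1.
ratio = {g: target[g] / observed[g] for g in target}
top = max(ratio.values())
accept = {g: ratio[g] / top for g in ratio}  # man: ~0.54, woman: 1.0

def keep(group: str) -> bool:
    """Decide whether to keep a generated image labelled `group`."""
    return random.random() < accept[group]
```

Filtering output this way throws samples away, which is partly why researchers work on retuning the model itself, as Zaki's team did, rather than only screening what it produces.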
What can we do as casual users of AI?
Overall, caution should be taken when using any artificial intelligence tool. While researchers are working on bridging the gap in reliable and ethical use, the question remains of where the responsibility falls. We all have a role in ensuring that reliable information is being used. At the end of the day, our brain is the most important tool in our arsenal.
As someone who is slightly afraid of AI, I left the symposium with a feeling of bewilderment at all the potential that these LLMs can offer in our daily lives. The conference offered a less fearful and more realistic look at the problems AI poses and what needs to be improved, demystifying the grand promises many people attribute to the technology.