On the sunlit morning of May 7, the Schwartz Reisman Institute for Technology and Society’s annual Absolutely Interdisciplinary conference began inside the brand-new Schwartz Reisman Innovation Campus at U of T. The highly anticipated academic event connected researchers and industry experts of varying specializations with the public to discuss key findings, problems, and the next steps in artificial intelligence (AI) research.
Peter Railton, a philosopher at the University of Michigan, and Gillian Hadfield, an economist at Johns Hopkins University, opened the day with a session entitled “A world of natural and artificial agents in a shared environment,” discussing the common ground between ethical competence and normativity.
Can AI models be competent?
The Merriam-Webster dictionary defines competence as “the quality or state of having sufficient knowledge, judgment, skill, or strength.” For example, for a human being to have a favourite novel, they must have previously acquired the competence to understand the language that the novel is written in.
Competence is also a characteristic of many AI models, particularly AI agents. In this context, an “agent” refers to an entity that has “the capacity to act.”
According to technology company International Business Machines (IBM), an AI agent is not limited by a static dataset, as it can “obtain up-to-date information” on the fly. In contrast, models like AI research organization OpenAI’s ChatGPT respond to inputs based on a constrained, time-bound dataset.
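To make that distinction concrete, here is a minimal, hypothetical sketch in Python. The functions and data are invented purely for illustration and do not reflect how IBM, OpenAI, or anyone else actually builds these systems: the “static” model can only repeat what was frozen into its dataset, while the “agent” consults an outside source before answering.

```python
from datetime import date

# Hypothetical contrast between a "static" model and an "agent."
# Everything here is invented for illustration only.

FROZEN_KNOWLEDGE = {"latest_known_year": 2023}  # fixed when the dataset was assembled

def static_model_answer(question: str) -> str:
    """Answers only from what was baked into the training snapshot."""
    return (
        "As of my training data, the latest year I know about is "
        f"{FROZEN_KNOWLEDGE['latest_known_year']}."
    )

def look_up_current_year() -> int:
    """Stand-in for an external tool an agent might call (a search, an API, a database)."""
    return date.today().year

def agent_answer(question: str) -> str:
    """Consults a live source before answering instead of relying on a frozen snapshot."""
    return f"I just checked: the current year is {look_up_current_year()}."

print(static_model_answer("What year is it?"))
print(agent_answer("What year is it?"))
```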
Railton’s examples of specific AI competencies included generating natural language and images, interpreting images, and participating in strategic gameplay. He attributed the emergence of these competencies to training. In the context of AI models, training often entails a back-and-forth interaction between human trainers and the model to identify associations between words, which over time allows the model to fine-tune its ability to use language when interacting with a user.
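As a rough illustration of the kind of feedback loop Railton described, the toy sketch below has a “model” choose among a few canned replies, receive a score from a simulated human rater, and shift its preferences toward replies that earn positive feedback. This is a deliberately simplified assumption, not how any production chatbot is trained; the candidate replies, the rater, and the update rule are all made up for the example.

```python
import random

# Toy sketch of learning from human feedback. The "model" picks a reply,
# a simulated rater scores it, and the model's preference weights drift
# toward replies that earn positive feedback. Illustration only.

CANDIDATES = [
    "Sure, here is a summary.",
    "idk lol",
    "Certainly! Here is a concise summary of the text.",
]

weights = {reply: 0.0 for reply in CANDIDATES}

def simulated_human_feedback(reply: str) -> float:
    """Stand-in for a human trainer: rewards informative replies."""
    return 1.0 if "summary" in reply.lower() else -1.0

def choose_reply() -> str:
    """Mostly pick the highest-weighted reply, but occasionally explore."""
    if random.random() < 0.2:
        return random.choice(CANDIDATES)
    return max(CANDIDATES, key=lambda r: weights[r])

LEARNING_RATE = 0.1
for _ in range(200):
    reply = choose_reply()
    reward = simulated_human_feedback(reply)
    # Nudge the chosen reply's weight toward the feedback it received.
    weights[reply] += LEARNING_RATE * (reward - weights[reply])

for reply, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{weight:+.2f}  {reply}")
```

After a couple of hundred rounds of feedback, the well-rated replies end up with positive weights and the poorly rated one sinks, which is the basic dynamic the session gestured at, writ very small.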
Railton suggested that if these competencies have successfully emerged in AI models, it might also be possible for ethical competence — the capacity to act on principles of ethics — to emerge in a similar way. Ethics — a field of thought that concerns itself with acting morally among other agents — is relevant to conversations about digital privacy, security, and safety. If AI models can acquire ethical competence, they could theoretically help safeguard against the frequent concerns of data misuse and data breaches in the digital landscape.
Railton explained that building ethical competence is further complicated by the question of whose actions need to be regulated, controlled, or limited, and by whom, especially when both human and AI agents can play active roles in perpetuating these risks.
This principal problem can be broken down into two main sub-problems: algorithmic bias, the tendency for some algorithms to disproportionately represent, misclassify, or omit critical demographic data; and AI disclosing individuals’ and corporations’ private information to other parties without consent. Grounding models in clear ethical standards for interaction is a step toward a safer environment for both natural and artificial agents.
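As a concrete, entirely fabricated illustration of the first sub-problem, the snippet below audits a toy set of predictions and finds that one demographic group is misclassified far more often than another. A gap like this is one simple, measurable signal of algorithmic bias; the records themselves are made up for the example.

```python
# Fabricated predictions used only to illustrate a per-group error-rate audit.
records = [
    # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

def error_rate(group: str) -> float:
    """Share of records in `group` where the prediction disagrees with the truth."""
    rows = [(true, pred) for g, true, pred in records if g == group]
    return sum(true != pred for true, pred in rows) / len(rows)

for group in ("group_a", "group_b"):
    print(f"{group}: error rate = {error_rate(group):.0%}")
# Here group_b is misclassified half the time while group_a never is,
# the kind of disparity a bias audit is meant to surface.
```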
A model for AI governance
Hadfield’s research seeks to constructively address questions arising in the field of normativity. Normativity is the practice of labelling actions in a society as “okay” and “not okay,” producing a set of guidelines for behaviour known as norms.
The field of normativity conceptualizes this act of asking whether an action is acceptable as a distinct mode of operation for agents. Balance within a group of agents is achieved when they have coordinated around what is considered “okay” and “not okay.” This builds what Hadfield called “a shared classification institution”: a shared understanding of how to behave correctly in a society that enforces these norms, which agents must follow in order to experience social acceptance.
Hadfield added that achieving balanced society-wide relations might involve including what she called “silly rules” — rules that don’t result in any direct reward for following them. Silly rules can provide additional context, helping an AI agent more easily infer how to participate in its society and act according to its norms.
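One way to see why rules with no direct payoff could still be informative is with a small simulation. The sketch below is my own toy illustration, not Hadfield’s actual model: a newcomer watches a community for a few rounds and estimates how reliably rule-breaking gets punished. With more rules on the books, silly ones included, there are more enforcement events to observe, so the newcomer’s estimate of whether the society really enforces its norms settles down faster.

```python
import random

# Toy illustration (assumed numbers throughout): more rules means more chances
# to observe enforcement, so a newcomer's estimate of the punishment rate is
# less noisy after the same number of rounds of observation.

def observe_community(num_rules: int, rounds: int, enforcement_rate: float = 0.8) -> float:
    """Return a newcomer's estimate of how often rule violations get punished."""
    violations = 0
    punished = 0
    for _ in range(rounds):
        for _ in range(num_rules):
            if random.random() < 0.3:                    # someone breaks this rule
                violations += 1
                if random.random() < enforcement_rate:   # the group punishes it
                    punished += 1
    return punished / violations if violations else 0.0

random.seed(0)
for num_rules in (1, 5, 20):  # one "important" rule vs. the same rule plus silly ones
    estimates = [observe_community(num_rules, rounds=10) for _ in range(1000)]
    mean = sum(estimates) / len(estimates)
    spread = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"{num_rules:2d} rules: estimated enforcement rate = {mean:.2f} (spread {spread:.2f})")
```

Run this and the spread of the newcomer’s estimate shrinks as the number of rules grows, which is the rough intuition behind silly rules providing “additional context.”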
During the Q&A session, Hadfield commented on the dangers of assuming that AI agents will always exercise sound ethical judgment. Railton suggested that agents should not assume in advance what is ethical. Instead, he proposed incorporating human feedback to help agents develop the self-awareness needed to adopt a proper moral stance. As such, AI agents may always need to rely on human beings for direction as they develop ethical judgment.
Despite the ongoing anxiety surrounding the unprecedented advancements in AI technology, this session offered a hopeful glimpse into a future where AI and humans could co-exist peacefully.