The Values & Preferences in Reasoning Symposium (VPR) 2026, hosted at the Artificial Intelligence and Simulation of Behaviour (AISB) 2026 Convention, is calling for papers that aim to improve reasoning with values and preferences in AI agents. Authors of accepted papers will give a 30-minute presentation, and their work will be published in the convention proceedings.
AI agents are becoming increasingly autonomous and proactive. The reasoning of these agents should take values and norms into consideration, and they should be able to adapt that reasoning to different situations and contexts.
Values, typically moral values, can be used to inform the reasoning and behaviour of AI agents, whether that reasoning and behaviour relates to the values of a single agent or to values held between agents and organisations. Recent advances in reasoning with values have seen the use of various frameworks that, in principle, aim to describe how values can inform reasoning towards a conclusion or behaviour towards a goal.
In this symposium, we’re interested in how these frameworks operate, and how they can contribute to the reasoning and behaviour of AI agents using ethical, moral, or deontological values. More specifically, we’re interested in the definition of, representation of, relationships between, and reasoning with three fundamental concepts: values, preferences, and goals. Values are taken to be ‘abstract principles that guide behaviour’, such as equality, autonomy, and fairness; preferences are taken to be mechanisms for determining which among a set of choices is ‘more important’ than the others; and goals describe a ‘state of affairs’ that is desirable for an agent.
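As a purely illustrative sketch of how these three concepts might fit together, the following Python snippet encodes values, a preference as an ordering over values, and goals that promote values. All class names, fields, and the preferred_goal function are assumptions introduced here for exposition, not a formalism prescribed by the symposium.

```python
# A minimal, purely illustrative sketch of the three concepts discussed above.
# Every name and structure here is an assumption made for exposition; it is not
# a prescribed formalism for the symposium.
from dataclasses import dataclass


@dataclass(frozen=True)
class Value:
    """An abstract principle that guides behaviour, e.g. 'equality' or 'autonomy'."""
    name: str


@dataclass(frozen=True)
class Goal:
    """A desirable state of affairs, annotated with the values it promotes."""
    description: str
    promotes: frozenset  # the Values this goal promotes


def preferred_goal(goals, value_order):
    """Pick the goal that promotes the most-preferred value.

    `value_order` is a preference: a list of Values from most to least
    important.  Goals are ranked by the best (lowest-index) value they
    promote; this is one simple way a preference over values can resolve
    a conflict between goals.
    """
    def rank(goal):
        indices = [value_order.index(v) for v in goal.promotes if v in value_order]
        return min(indices) if indices else len(value_order)

    return min(goals, key=rank)


if __name__ == "__main__":
    equality, autonomy, fairness = Value("equality"), Value("autonomy"), Value("fairness")

    goals = [
        Goal("redistribute resources evenly", frozenset({equality, fairness})),
        Goal("let each agent decide for itself", frozenset({autonomy})),
    ]

    # Two agents holding the same values but different preferences reach different goals.
    print(preferred_goal(goals, [autonomy, equality, fairness]).description)
    # -> let each agent decide for itself
    print(preferred_goal(goals, [equality, autonomy, fairness]).description)
    # -> redistribute resources evenly
```

In this toy setting, the same set of goals is resolved differently by agents that hold the same values but order them differently, which is one way the distinction between values and preferences can matter.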
However, questions arise:
Why have both values and preferences rather than one or the other?
What are the relationships of values and preferences to goals?
Topics of interest include, but are not limited to:
The representation of values, preferences, and goals
The relationships between values, preferences, and goals, e.g., between values and preferences or values/preferences and goals
The similarities or differences between values and preferences
Argument, reasoning, or decision making when values are in conflict
Alignment and misalignment of values/preferences
Changes in values and preferences according to context, enabling dynamic and reactive reasoning
Software and programming libraries for the development of value-based or preference-based agents
Open-source datasets or simulations to allow for the comparison of different methodologies
Applications of value-based or preference-based reasoning
Important dates:
17 February 2026: Extended abstract submission (1500-2000 words)
17 March 2026: Reviews & acceptance released
28 April 2026: Full paper submission
1-2 April 2026: AISB 2026 Convention
For more information (such as the formatting instructions), please see the Call for Papers page.
Organisers:
Jay Paul Morgan, j.p.morgan@swansea.ac.uk
Adam Wyner, a.z.wyner@swansea.ac.uk
The full list of programme committee members can be found on the Organisers page.
The AISB 2026 Convention will be held at the University of Sussex, Brighton. More information can be found on the Venue page.