Call for papers
With AI's impact on our everyday lives becoming increasingly tangible, the need for ethical and trustworthy AI has gained strong recognition from governments, industry, the general public, and academics alike, as evidenced by the numerous ethical guidelines and requirements that have emerged. The stated goal of the EU strategy is to develop AI in such a way that applications using AI are reliable, robust, explainable, ethically guided and hence trustworthy. Ensuring AI is trustworthy and reliable requires ensuring that it fulfils human needs and respects human values. To achieve this, we need software systems that reason about human values and norms, implement those values through norms, and ensure that behaviour is aligned with those values and norms. We argue that just as values guide our own morality, values can guide the morality of software agents and systems, bringing machine morality closer to reality. The result would be value-aware systems that take value-aligned decisions, interpret human behaviour in terms of values, and enrich human reasoning by enhancing humans' value awareness.
Today there is a growing wealth of work in the field of AI on accounting for human values and working towards value-aligned behaviour: how AI can learn human values, how individual values can be aggregated at the group level, how arguments that explicitly reference values can be made, how decision making can be value-driven, and how norms can be selected to maximise value alignment. VALE 2023 intends to bring together research on value engineering and foster in-depth discussion of the topic.
Topic Areas:
The relevant topics include, but are not limited to:
Value and norm representation
Value and norm learning
Value and norm agreement
Value and norm conflict resolution
Value-driven argumentation and negotiation
Value-driven decision making
Value-driven system design
Value-alignment
Value-driven explainability
Legal questions in value and norm enforcement
Important Dates
(All times Anywhere on Earth (AoE), UTC-12)
Paper submission deadline: 25 July 2023 (extended from 30 June 2023)
Notification of acceptance: 01 September 2023
Camera-ready submission deadline: 10 September 2023
VALE workshop: 30 September 2023
Formatting & Submission
Papers should be formatted according to the ECAI 2023 formatting style. An Overleaf template is available for authors.
All papers must be written in English and submitted in PDF format. They should not exceed 7 pages (not including references).
Reviewing is single-blind, meaning that papers should list the authors' names.
Papers should be submitted via the ChairingTool submission site.
Workshop Attendance
Submission of a paper is regarded as a commitment that, should the paper be accepted, at least one of the authors will attend the workshop in person to present their work.
Proceedings
Proceedings will be made available on preprint servers (such as arXiv) before the conference.
VALE 2023 also plans to have the best papers published in an AI journal relevant to the topic of value engineering in AI. More details will follow.
Ethics Policy
VALE follows ECAI 2023's ethics policy:
Reported research should avoid harm, be honest and trustworthy, be fair and non-discriminatory, and respect privacy and intellectual property. The ACM and AAAI codes of ethics, as well as the European Code of Conduct for Research Integrity, provide guidelines that we wish to promote.
When relevant, authors may include, in the main body of their paper or on the reference page, an ethics statement that addresses both ethical issues regarding the research being reported and the broader ethical impact of the work.
Reviewers will be asked to flag violations of ethical and/or fairness considerations. Such flagged submissions will be reviewed by an ethics advisor. Authors may be required to revise their submission to include discussion of possible ethical concerns and their mitigation.