Achieving our vision of citizen-centric AI systems requires several novel advances in artificial intelligence.
First, to safeguard the privacy of individuals, new approaches to understanding the constraints and preferences of citizens are needed. These approaches will be distributed in nature: rather than depending on the collection of detailed data from individuals, they will allow citizens to manage and retain their own data. To achieve this, we will develop intelligent software agents that act on behalf of each citizen, storing personal data locally and communicating only limited information to others when necessary.
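The principle behind such agents can be sketched in a few lines. This is a minimal illustration, not an implementation of the project's architecture: the `CitizenAgent` class, its data format, and the particular summary shared are all hypothetical assumptions chosen to show the pattern of keeping raw data local while exposing only a coarse aggregate.

```python
from statistics import mean

class CitizenAgent:
    """Hypothetical personal agent: raw data never leaves the agent;
    only a coarse, pre-agreed summary is shared on request."""

    def __init__(self, readings):
        self._readings = list(readings)  # stored locally, never transmitted

    def share_summary(self):
        # Communicate only limited information: a single aggregate
        # value rather than the underlying individual readings.
        return {"mean_usage": round(mean(self._readings), 1)}

agent = CitizenAgent([3.2, 4.1, 2.8])
summary = agent.share_summary()
```

In a full system the shared summary would itself be subject to negotiated constraints (for example, differential-privacy noise), but the division of labour is the same: the citizen's agent holds the data; other parties see only what the citizen permits.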
Second, to incentivise positive behaviour modifications and discourage exploitation, we will draw on the field of mechanism design to model how self-interested decision-makers behave in strategic settings and how beneficial actions can be incentivised. A particular challenge will be to deal with limited information, uncertainty about preferences, and a constantly changing environment that requires incentives to be adapted dynamically via appropriate learning mechanisms.
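As a concrete instance of what mechanism design offers, consider the classic Vickrey (second-price) auction, in which truthfully reporting one's value is a dominant strategy. This sketch is purely illustrative of incentive compatibility and makes no claim about the mechanisms the project will ultimately deploy; the bidder names and values are assumptions.

```python
def second_price_auction(bids):
    """Vickrey auction: the highest bidder wins but pays the
    second-highest bid, so overstating or understating one's true
    value can never improve the outcome for the bidder."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

winner, price = second_price_auction({"alice": 10.0, "bob": 7.0, "carol": 4.0})
```

The challenge noted above is precisely that such static guarantees break down under limited information and changing preferences, which is why the project pairs mechanism design with learning.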
Finally, to enable an inclusive feedback loop involving citizens and other stakeholders, new interaction mechanisms are needed that can provide explanations for actions as well as information about whether the system is making fair decisions. While there is a wealth of emerging work on explainability and fairness in AI, it typically deals with simple one-shot problems. In contrast, we will consider more realistic and complex sequential settings, where actions have long-term consequences, including for fairness, that may not be immediately apparent.
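One reason sequential settings are harder is that unfairness can accumulate gradually across many individually unremarkable decisions. A minimal way to surface this is to track cumulative per-group outcome rates over a decision sequence; the group labels and decision log below are illustrative assumptions, not project data.

```python
from collections import defaultdict

def acceptance_rates(decisions):
    """Cumulative per-group acceptance rates over a sequence of
    (group, accepted) decisions, making visible disparities that
    no single decision reveals on its own."""
    accepted = defaultdict(int)
    total = defaultdict(int)
    for group, accept in decisions:
        total[group] += 1
        accepted[group] += int(accept)
    return {g: accepted[g] / total[g] for g in total}

log = [("A", True), ("A", True), ("B", False), ("A", False), ("B", True)]
rates = acceptance_rates(log)
```

Auditing fairness in the settings we target would go further, accounting for how today's decisions change tomorrow's population of applicants, but even this simple running audit illustrates why one-shot fairness metrics are insufficient.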