Citizen-Centric Artificial Intelligence Systems Workshop 2025
Tue Aug 19 2025
Category: Uncategorised
On January 30th, 2025, the CCAIS team hosted its annual workshop, continuing its efforts to engage stakeholders of various backgrounds in the research and development of artificial intelligence (AI) technologies. This is a key component of the CCAIS approach to ensure that AI efforts remain focused on benefiting citizen end users.
The workshop focused on two main themes: the sustainability of current AI systems, and future visions for AI agents. Each topic was explored by an invited keynote speaker, followed by interactive roundtable discussions in which the audience tackled related questions in small groups. The CCAIS team also wove snapshots of its own research into the day through lightning talks and a poster and demo session.
Let’s dive in!
Reception & Introduction

The workshop was introduced by Sebastian Stein, the lead researcher in the CCAIS project. The introductory presentation highlighted the key dilemma that CCAIS aims to address: AI applications have massive potential for benefiting humans, but how do we ensure that benefit is realised?
This mission is of crucial importance, given that the public’s perception of AI tends to be rather negative despite recent industry pushes for the technology. AI development must therefore be informed by what the public needs and wants from these systems – AI needs to be both socially beneficial and acceptable.
Lightning Talks

Ensuring that AI development focuses on benefiting citizens involves a surprising number of fields. From ethical alignment to the cuteness levels of embodied agents, the research possibilities are rich and varied. And so are the members of the CCAIS team, who each introduced their research in one-minute lightning talks:
- Connor Watson: Long-term AI Building Design System Energy Cost, Air Quality, and Thermal Comfort Modelling (slide)
- Bruno Arcanjo: Nudge: Personalized AI for Bettering Human Habits; EVtonomy – Personalized Route Planner for Electric Vehicle (slides, poster)
- Jim Dilkes: Trustworthy Human-AI Cooperation (slide)
- Beining Zhang: Multi-Party Negotiation (slide)
- Jan Buermann: Incentivising Efficient Citizen-Centric Energy Usage (slide)
- Ezhilarasi Periyathambi: FEVER Project; AI for Smart Energy Systems (slides, poster)
- Fariba Dehghan: AI-Supported Decision Optimisation for Grid Independent EV Charging Stations (AIDOC) (slide)
- Behrad Koohy: Artificial Intelligence and Mechanism Design for Routing of Connected and Autonomous Vehicles (slide, poster)
- Jayati Deshmukh: Ethical Alignment in Citizen-Centric AI; Serious Game for Ethical Preference Elicitation (slides, poster)
- Sarah Kiden: Trustworthy AI Systems / Stakeholder Engagement (slide)
- Zhaoxing Li: Citizen-Centric Multiagent Systems Based on LLMs; A Human-in-the-loop Multi-Robot Collaboration Framework Based on LLMs (slides, poster 1, poster 2, poster 3)
- Vahid Yazdanpanah: Computational Models for Responsibility and Trust (slide)

Researchers from beyond the CCAIS team also gave quick introductions to their work:
- Kyrill Potapov, UCL: PowerShift: Human-Centred AI for the Equitable Smart Grid
- Carolina Are, Northumbria: Platforms’ Governance of Nuanced Content
- Mouri Hoque Nadia, Southampton: Revolutionizing Deep-Sea Fishing with Autonomous Underwater Vehicles for Sustainable Marine Ecosystem
- Jessica Woodgate, Bristol: Operationalising Normative Ethics for Prosocial Decisions
AI Sustainability
AI as the term is used today mainly refers to deep learning (DL). OpenAI’s ChatGPT, perhaps the most popular of the current AI chatbots, is currently powered by the GPT-5 model: a massive DL model trained on a large portion of the text available on the internet. Despite the recent boom in AI, the foundational frameworks that make DL work so well have been around for more than a decade. What has enabled the recent explosion of AI capabilities is the development of incredibly powerful hardware, needed to train and serve these large models, together with the availability of massive amounts of text data on which to train them.
However, the increased capabilities of these large models come with shortcomings and concerns, one of them being the sustainability of these technologies. The aforementioned powerful hardware consumes an astounding amount of energy, requires vast quantities of water for cooling, and is built using scarce natural resources. Moreover, there is a human cost associated with labelling training data and fine-tuning the models, labour that is usually extremely poorly paid. Raising awareness of these problems, as well as of potential solutions, was the goal of the morning session.
Invited Keynote Speaker

Opening the AI sustainability topic, Cathleen Berger gave a brilliant presentation on the environmental costs of AI models and the lack of transparency from companies regarding these costs.
Cathleen highlighted just how obfuscated the real environmental impact of AI is, as companies do not provide resource consumption numbers specifically for AI applications. Moreover, while we can estimate some of the indirect effects associated with AI, a large portion of systemic effects on society remains impossible to calculate. In short, “AI’s impact is a corporate secret”.
Moreover, Cathleen pointed out how tech giants such as Google and Meta are pivoting to funding nuclear energy research as a solution to the rapidly increasing energy demands of massive AI models. At the same time, these companies keep failing to meet their carbon emission targets, reinforcing that big tech is not yet sustainable and that nuclear energy advances are a band-aid rather than a solution.
The conclusion left us with important questions: how can we ensure more people understand the environmental impact of AI? How do we inoculate ordinary citizens against the powerful narratives crafted by big tech? And, finally, do we need to change our approach to public security to allow for user-centric governance?
Discussion Session

Bruno Arcanjo followed with an overview talk highlighting some other aspects of the environmental costs of AI.
Starting on a positive note, Bruno mentioned how Google has used AI to improve the energy efficiency of its data centres – a clear example that AI can indeed help optimise energy usage. However, in recent years the trend has unfortunately been the opposite, with most tech companies increasing their energy and water usage.
Bruno also brought up the human costs of training these models: vast amounts of training data must be labelled by human workers for these models to be possible, and this labour is often severely underpaid.
The talk concluded with a plea to return AI to the benefit of humanity, from environmental progress to more Nobel Prize-winning advances in health research.
The room then broke into five discussion groups, each tackling a different question. By far the strongest conclusion was the need for AI regulation: policy should cover fair energy usage, responsibility for AI-made decisions, fairness for users, and data protection.

Poster Session
While the lightning talks gave an opportunity for researchers to quickly introduce their research, the poster session encouraged direct discussion with the audience.


AI Otherwise
In the afternoon, the workshop focused on other aspects of citizen-centric AI, such as expectations for human-machine interaction.
Invited Keynote Speaker

Joel Z. Leibo from Google DeepMind gave a fantastic presentation on his recent work on “AI appropriateness”. The talk, titled after the corresponding paper “A theory of appropriateness with applications to generative artificial intelligence”, investigated how humans navigate complex environments together with other humans while maintaining socially acceptable behaviour – that is, appropriateness.
The key feature of appropriateness is how context-dependent it is. Actions that are acceptable in one environment are not necessarily acceptable in another, even if the differences between the environments appear minor at first glance. Moreover, the same environment with different agents also changes which actions are deemed appropriate; the relationship between agents matters as much as the context in which they find themselves.
Joel put forward a theory of appropriateness in human society, explaining how it functions in practice and how it is biologically motivated. This analysis lays the groundwork for guidance on how we can responsibly develop and deploy generative AI technology that acts appropriately across diverse contexts.
Discussion Session

The presentation was followed by a talk by Sarah Kiden, a research fellow from the CCAIS team whose research focuses on AI governance and stakeholder engagement.
The theme of the discussion session was “Envisioning Trusted Personal Agents with End Users”. Groups were asked to interact with lo-fi artefacts created in previous workshops held as part of Sarah’s research. These artefacts embody the AI agents that participants envision for themselves in the future, each with distinct visual characteristics and functionality. The groups were then asked to place the artefacts on a timeline indicating the year by which they thought each agent could feasibly be deployed in practice.
Interestingly, many of the challenges raised were not strictly technological. For flying agents, the main concern was the surrounding infrastructure needed to support them, not the agents themselves. For fact-checking agents, control over what counts as a “fact”, and whom the agent should treat as authoritative, quickly became a serious problem. Responsibility was also a big dilemma – if your agent hurts someone, who is at fault?
Overall, the session was incredibly helpful in bringing to light AI-related concerns that are not well covered in highly technical research.
Panel Session

The final activity on the agenda was a panel session comprising Sebastian Stein, Paula Palade, Adrian James, Cathleen Berger, and Joel Z. Leibo, chaired by Vahid Yazdanpanah, giving the audience an opportunity to question the group of experts directly.
One key area of discussion was whether and how AI could help society move towards the ambitious Net Zero goal. While concerns about the high energy usage of current foundation models remained relevant, there was also a healthy amount of optimism. Seb argued that AI could help develop incentive systems that nudge individuals towards more environmentally sustainable behaviour. Moreover, he pointed out that such systems may not require the enormous models popular today, but could instead rely on smaller, more traditional machine learning techniques that need far fewer resources to build and run.

Paula shared similar optimism about AI in transportation. She highlighted how AI is a fundamental component of autonomous driving and how, in turn, autonomous vehicles can enable more efficient fuel consumption while simultaneously reducing road accidents. Furthermore, Paula spoke to how AI can bring mobility to currently disenfranchised citizens, improving access to various modes of transport.

Adrian pointed out that he sees potential for AI systems to improve manufacturing efficiency and reduce wasted resources in current processes. However, he was also quick to mention his fear that society is being overpromised on AI’s capabilities, and that we must remain grounded when assessing what the technology can achieve. This was clearly in line with Cathleen’s morning presentation, and she again emphasised that we need much-improved transparency from the developers of AI models to be able to assess their positive and negative impacts. Moreover, Joel raised an important consideration about the real benefits of improved efficiency: possible rebound effects. While an individual process might indeed become more efficient, this usually leads to increased usage of that process and thus to higher absolute resource usage. The implication seems to be that, if we want to consume fewer resources, we need a societal mindset change regarding our consumption habits.
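To make the rebound effect concrete, here is a minimal illustrative sketch (the numbers are entirely hypothetical, not figures quoted at the workshop): even a substantial per-use efficiency gain is wiped out if the cheaper process ends up being used correspondingly more.

```python
# Illustrative rebound-effect arithmetic. The numbers are hypothetical,
# not figures quoted at the workshop.

def total_consumption(resource_per_use: float, uses: float) -> float:
    """Absolute resource consumption of a process."""
    return resource_per_use * uses

before = total_consumption(resource_per_use=1.0, uses=100)  # 100.0
# The process becomes 25% more efficient per use, but the cheaper
# process is then used twice as often.
after = total_consumption(resource_per_use=0.75, uses=200)  # 150.0

print(f"before: {before}, after: {after}")
# Despite the efficiency gain, absolute consumption rises by 50%.
```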
Regarding the trustworthiness of AI systems, Joel discussed how we now have computer systems that can communicate in natural language, opening up interesting possibilities for modular systems. Each module in a system can communicate with another using language that humans can directly interpret, making it easier to understand each component’s actions with respect to the messages it receives. Nevertheless, the message-generation process within each module remains a black box, and further research in the area is required.
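As a loose illustration of this modular idea (a hypothetical sketch, not Joel’s or DeepMind’s actual design), one could imagine modules that exchange plain-text messages through a shared, human-readable log:

```python
# Hypothetical sketch of modules exchanging human-readable messages.
# In a real system each module would wrap a language model; the point
# here is only that every inter-module message can be logged and audited.

from typing import Callable

message_log: list[tuple[str, str, str]] = []  # (sender, receiver, message)

def send(sender: str, receiver: str, message: str,
         handler: Callable[[str], str]) -> str:
    """Deliver a natural-language message and record both it and the reply."""
    message_log.append((sender, receiver, message))
    reply = handler(message)
    message_log.append((receiver, sender, reply))
    return reply

# A stand-in "planner" module; a real one would be a black-box model,
# but its inputs and outputs would remain directly interpretable.
def planner(request: str) -> str:
    return f"Plan for '{request}': 1) gather sources 2) summarise findings"

send("user_proxy", "planner", "brief me on AI energy usage", planner)
for sender, receiver, text in message_log:
    print(f"{sender} -> {receiver}: {text}")
```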
Closely related to trust is the topic of truth. Chatbots are powered by large language models trained on vast amounts of data, mostly collected from the internet, and what they consider to be true depends entirely on the training process. This is problematic given how biased sources of information tend to be. The design choices behind these models have a significant impact on what they output when asked about sensitive subjects, and currently it is entirely up to corporations to make these decisions. How do we ensure that these chatbots, which millions of people interact with, remain truthful and factual? Should corporations be held accountable for what their chatbots produce? Or is it up to users to conduct their own due diligence? These are clearly questions to ask when designing AI governance policies.
Conclusions & Future Outlooks

This workshop brought together researchers and stakeholders from various backgrounds to discuss the implications of an AI-powered world – positive and negative. Clearly, there is much progress to be made in ensuring that AI technologies are developed for the benefit of citizens, in an environmentally sustainable fashion. Regulation was by far the most cited potential solution, albeit with concerns about hampering innovation and about government bodies’ limited literacy in the topic. Nevertheless, participants also showed great enthusiasm about the possibilities that AI systems bring to the table, from automating dull work to enhancing the human experience with highly customisable agents. The key takeaway seems to be that the future is indeed bright if we manage to steer the technology in the right direction.