Increasing AI transparency, explainability, interpretability, and energy-efficiency

Increasing AI transparency, explainability, interpretability, and energy-efficiency is an Interest Group of Shef.AI, the multidisciplinary community for AI researchers at TUoS, supported by the Centre for Machine Intelligence.


Introduction

As AI tools become more widely adopted, the ethical considerations involved in their creation and deployment become increasingly important. This interest group focuses on how AI can be made more explainable, transparent, interpretable and sustainable. Our members come from a range of backgrounds across the university, from those with expertise in developing explainable and efficient AI systems to those who want to learn more about the field. We have recently added a sub-group on the 'Theory of AI'.

Research areas

  • Techniques for developing explainability, transparency and interpretability of AI systems
  • Methodologies for increasing the sustainable creation and use of AI
  • Resource-constrained systems
  • The theory of AI

Aims of the interest group

  • To bring together experts in explainable and sustainable AI with researchers wanting to learn more about it.
  • To promote the ethical considerations relevant to AI across different use cases.
  • To create a community knowledge base of case studies and best practices for explainable, transparent and efficient AI.

Contacts

IG lead:

Matthew Ellis <m.o.ellis@sheffield.ac.uk>

Co-leads:

Jonni Virtema <j.t.virtema@sheffield.ac.uk>

Gavin Boyce <g.boyce@sheffield.ac.uk>

Maria-Cruz Villa-Uriol <m.villa-uriol@sheffield.ac.uk>


Join this interest group

Sign up to this interest group using 
