Our core values:
Common understanding of definitions and areas of Responsible AI
The community must operate with common, well-understood terms and refer to a shared ontology to avoid misinterpretation and to ensure holistic representation of the different subjects and risks across the domains of knowledge in Responsible AI.
Openness and inclusivity for all domains in Responsible AI
The community is open to leaders and practitioners from the different domains and principles of Responsible AI1. We welcome everyone who is willing to operationalise the responsible use of AI.
From comprehensiveness to simplicity
We embrace the complexity and challenges of the field of Responsible AI in order to simplify its operationalisation wherever possible. We aim to demystify responsible AI for those who find it too complex or not actionable. We appreciate experts who can make complex topics understandable, even to beginners in the field.
“By-design” thinking
Our goal is to introduce best practices into AI use cases as early as possible, to avoid retrofitting an extensive set of risk mitigations or reworking later down the value chain.
Member collaboration for community values over individual agendas
This value highlights a commitment to mutual support and collective problem-solving in a psychologically safe and trusted space. We may recognise companies by name, but only on the basis of their contribution to the community’s values.
Constant learning, curiosity and critical thinking
As responsible AI is still a nascent and rapidly evolving field, we strive to learn constantly, stay curious, and, where necessary, adapt our thinking to accommodate new developments. We apply critical thinking where possible to avoid overreliance on AI outcomes.
Documented and openly shared learning and practice
Each community meeting produces transparent outcomes to be shared with other community members and practitioners. We aim to openly publish all community meetings, decisions, and any material produced. The community produces and publishes material in an accessible manner, based on community contributions.
Active contribution over discussions without outcomes
Our goal is to contribute to the operationalisation of Responsible AI practices, so that fundamental Responsible AI principles are integrated into real-life solutions and a responsible AI mindset is advocated both within and outside the community. We encourage members to take the lead and foster a community built on trust and fair representation.
We welcome questions and challenges – nobody knows “it” all
We believe Responsible AI succeeds best in an environment that is not afraid to seek clarity and raise questions about whatever is on members’ minds. Challenges are welcomed as intellectual exercises in understanding different points of view, which together yield better cumulative outcomes and learning experiences for all.
Embrace value-driven principles for business
We believe that the integration of Responsible AI, and in particular the appropriate prioritisation of human values, leads to longer-term value for business as well. We embrace equal representation of people and consider the impact of technological decisions.
1 Principles of Responsible AI: accountability; fairness and bias; transparency and explainability; human agency and oversight; technical robustness and safety; privacy and data governance; security; societal and environmental wellbeing
Learn more about our guiding principles