AI Ethics, Usage & Governance Policy
Purpose
Artificial Intelligence (AI) is already reshaping how Gravity Global works, enabling greater creativity, unlocking efficiencies, and delivering stronger results for our clients. Our aim is to embrace these opportunities responsibly and safely, with clear governance in place. This policy sets out our approach to the ethical use of AI, our risk management framework, and the standards we uphold in relation to privacy, security, sustainability, and compliance.
Summary of Our Commitment:
Gravity Global will leverage AI to:
- Enhance creativity and operational efficiency
- Deliver better outcomes for clients
- Safeguard privacy, security, and IP
- Operate sustainably and responsibly
1. Our Principles for AI Usage
Gravity Global’s AI strategy and usage are underpinned by the following core principles:
Accountability
We remain accountable for how we use AI and are committed to ensuring all AI tools and outputs are subject to appropriate human oversight and governance. Where issues arise from our use of AI, we will take appropriate and proportionate steps to address them in line with our legal obligations, contractual commitments, and ethical standards.
Fairness
We actively seek to prevent bias in AI applications and ensure that outcomes are fair, ethical, and aligned with our values.
Privacy and Security
We uphold the highest standards of privacy, data protection, and security in our use of AI. AI tools are assessed to ensure compliance with relevant laws and our own governance standards. Specifically, we do not use third-party AI tools that share our inputs or outputs, or allow them to be used for model training.
Sustainability
We recognise AI’s environmental impact and favour solutions and partners aligned with our sustainability goals.
Human Oversight
AI is a tool to enhance human creativity and insight, not a replacement. All AI-generated outputs are subject to human review and validation.
2. Safety and Governance
We take a structured and proactive approach to AI governance and safety:
- Oversight is provided by a cross-functional governance group, the AI Council, comprising the CTO, Chief Innovation Officer, and Head of AI, which reports to the Executive Board
- The AI Council oversees the application, adoption, and risks of AI within Gravity Global, including compliance with emerging regulation (e.g., the EU AI Act) and evolving IP law
- We maintain a due diligence process for third-party AI tools and AI components of existing tools, assessing legal, security, ethical, and data risks before adoption
- We actively monitor the regulatory landscape and review our processes to ensure continued compliance
Training & Awareness:
All employees receive training on the responsible use of AI, covering privacy, security, intellectual property, bias, and data protection. Ongoing guidance ensures our teams are equipped to use AI effectively and ethically.
Prohibited Uses:
AI must not be used in any way that:
- Violates laws or regulations
- Infringes on data privacy, confidentiality, or intellectual property
- Produces outputs intended to mislead, deceive, or discriminate
3. Privacy and Security in AI Usage
We maintain robust privacy and security governance to protect our clients, suppliers, and business:
- Our AI Council regularly reviews privacy and security risks relating to AI and data ethics
- We adhere to our Data Protection Policy
- We work in close partnership with client privacy teams to ensure alignment on data governance and risk management
4. AI and Sustainability
Gravity Global is committed to minimising the environmental impact of AI through:
- Working with partners aligned to sustainability best practices in AI
- Using AI to drive efficiency gains that reduce overall resource usage across our operations