Ethics & Policies
Legislation is currently being formulated in the EU and US, but we believe everyone should use AI tools responsibly, ethically and legally. The risks of misinformation, bias and malicious use are real, and we all need to guard against them in our daily work.
To maintain public trust and ensure the responsible use of generative AI tools, a good start is to think about the “FASTER” principles:
- Fair: ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations
- Accountable: take responsibility for the content generated by these tools. This includes making sure it is factual, legal, ethical, and compliant with the terms of use
- Secure: ensure that the infrastructure and tools are appropriate and that privacy and personal information are protected
- Transparent: identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; document decisions and be able to provide explanations if tools are used to support decision-making (see the sketch after this list for one way to label generated content)
- Educated: learn about the strengths, limitations and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs
- Relevant: make sure the use of generative AI tools supports user and organisational needs and contributes to improved outcomes for citizens; identify appropriate tools for the task; AI tools aren’t the best choice in every situation
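As one concrete illustration of the “Transparent” principle, here is a minimal sketch in Python. It is illustrative only: `generate_reply` is a hypothetical stand-in for whichever model API your organisation uses, and the disclosure wording is an example, not a prescribed standard.

```python
DISCLOSURE = "This message was drafted with the help of a generative AI tool."


def generate_reply(prompt: str) -> str:
    """Hypothetical stand-in for whichever model API your organisation uses."""
    return f"(model output for: {prompt!r})"


def labelled_reply(prompt: str) -> str:
    """Attach a visible disclosure so AI-produced content is always identified."""
    return f"{generate_reply(prompt)}\n\n{DISCLOSURE}"


print(labelled_reply("Summarise our housing campaign in two sentences"))
```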
In collaboration with Campaign Lab, Brand Response has come up with some further suggestions on this topic.
Make sure your organisation has policies around how AI is used and developed
To provide a firm foundation, begin by creating AI policies. These documents, meant for staff, volunteers and other stakeholders, should be crafted in consultation with AI experts and your legal advisors, and should be in line with your organisation’s mission, editorial standards and ethics.
Organisations are increasingly recognising the need to establish policies around the use of AI. Since many individuals and teams are already experimenting with and deploying AI tools, it is important to acknowledge this trend and introduce structures that steer such experimentation in the right direction. Providing guidance is a crucial part of this process.
Beyond the immediate operational considerations, there are pressing issues related to security, privacy, and wider ethical choices that need to be addressed. It's essential to strike a balance where individuals feel both protected by these policies and empowered to innovate, ensuring that the transformative potential of AI is harnessed responsibly and effectively.
Who should policies be for?
- Staff: Staff may already be using these tools quietly in their day-to-day work, so it is important to give them clear guidance on limiting the sensitivity of the data they put into these models. You should also seek to encourage and empower them: many staff feel wary of these new tools and fearful about how their jobs could change, and putting them in the driving seat to own these changes needs to be a key part of any policy.
- Volunteers and campaigners: Volunteers may already be trying out these tools, and this technology has the capacity to greatly increase their impact and ability to deliver. Clear guidelines are important here to protect the institution’s reputation: if something does go wrong, you can show that clear guidance was provided.
- Partners: Partners you are working with may already be using this technology. If they are delivering work for you, it is important that there is some transparency about how they are using it, and that they adhere to your ethical guidelines and expectations.
- Citizens: In a public-facing organisation that is accountable to citizens, it is important to be transparent about how AI will be used. This matters for giving guarantees around data security and privacy, but also for safeguarding the reputation of the institution and modelling good practice and governance on this issue within civil society.
Examples of how to do it well
A number of organisations have developed policies on how they will use AI. These vary in complexity and form: some are ethical statements to guide the use of AI, while others are lists of guidelines or practical frameworks. Ideally, a policy combines clear ethical guidelines with a practical framework.
An organisation such as the European Parliament should ideally create a policy with all these elements:
- Internal guidelines for staff around use of AI (e.g. ChatGPT)
- Internal guidelines for using AI in product or project development
- Policies for the use of AI by volunteers, partners and suppliers
- Public statement about organisation’s position on use of AI
A selection of example policies:
- LLM policy for campaigners (Code Hope Labs) (Campaigns)
- Guidance to civil servants on use of generative AI (UK Government)
- GenAI policy (The Guardian) (News)
- Sample corporate policy for GenAI, SOCITM (Company/Commercial)
- A matrix of ~40 public process frameworks for Responsible AI, Center for Security & Emerging Technology (Frameworks)
Key questions to ask when formulating policy
- Transparency - does my organisation need to be transparent about our use of AI?
- Purpose - why will we use AI? Which uses do we consider ethical?
- Privacy & security - what data will we put into these models? (One way to strip personal data before it reaches a model is sketched after this list.)
- Fact checking - how will we make sure we are checking the accuracy of what is produced?
- Human in the loop - how do we make sure there is a human in the loop? Will we commit to not fully automating all processes? What’s the impact on jobs in my organisation?
- Infrastructure - how are we thinking about the risks of building on unstable tech? What will be our thresholds for stability? When should we develop workflows that depend on this tech?
- GDPR and data privacy - get expert advice on this, especially if you are handling personal data in your interactions with AI.
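To make the privacy and human-in-the-loop questions concrete, here is a minimal sketch in Python. It is illustrative only: the regex-based redaction is deliberately crude (a real deployment would use a vetted PII-detection tool and legal review, not regexes alone), and `call_llm` is a hypothetical stand-in for whichever model API your organisation uses.

```python
import re

# Crude, illustrative patterns only -- a real deployment should rely on a
# vetted PII-detection library and legal review, not regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b"),
}


def redact_pii(text: str) -> str:
    """Replace anything matching a PII pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever model API your organisation uses."""
    return f"(model output for: {prompt!r})"


def draft_with_sign_off(raw_prompt: str):
    """Redact personal data, query the model, then require human approval."""
    draft = call_llm(redact_pii(raw_prompt))
    print(f"Draft:\n{draft}\n")
    approved = input("Approve this draft? [y/N] ").strip().lower() == "y"
    return draft if approved else None  # nothing goes out without a human decision


if __name__ == "__main__":
    draft_with_sign_off("Reply to jane.doe@example.org, who called from 07700 900123")
```

The point of the sketch is the order of operations: personal data is stripped before anything leaves your systems, and a named human explicitly approves the output before it is used.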