Code of Ethics and Practical Rules for use of Artificial Intelligence Tools
Published: Jul 24, 2023 · Reading time: 7 minutes
We are seeing rapid developments in deploying advanced artificial intelligence (AI) tools and want to take an ethical and practical approach as an organisation.
Although machine learning, applied statistics, and other elements of AI have long been used in translators, voice assistants, advertising systems, photo galleries and spam filters, they have recently experienced a boom, with large language models and image generators appearing in many publicly available applications.
Below, we set out general principles and practical rules on how (not) to use these publicly available services, to ensure the security of our data and transparency in our work.
General principles
- We are accountable and transparent
- We respect human rights
- We educate and seek to ensure that AI benefits society as a whole
- We care about security and data protection
Practical Rules for using AI tools
- Working with large language models (LLMs - text generators and editors, chatbots etc.)
- Use of image, video and audio generators
- Use of other AI tools
- Working with organisation data
We will revise and refine the following code and rules as needs and developments arise.
General principles
The basic premise for our use of AI tools is to develop and extend human capabilities to benefit freedom and society as a whole and to secure individual rights.
We are accountable and transparent
- The responsibility for using AI tools and products lies with the human, e.g. specific employees and their supervisors.
- We use AI tools transparently. Their use must be transparent to beneficiaries, donors, partners, communication counterparts, and other stakeholders. For example, we never retroactively edit existing photos, videos or audio, or generate them in a way that could be confused with genuine photographs, videos or sound recordings, except in the exceptional cases described below.
- The use of AI tools is justified to improve the quality of our work. We will not generate more content at the expense of quality.
- We need to be able to explain the use of AI to ensure we are in control of our own decision-making and can properly interpret the results.
We respect human rights
- We use AI tools in accordance with the Universal Declaration of Human Rights. We take care to avoid discrimination against groups and individuals based on race, origin, gender, religion, sexual orientation, or political opinion.
- We ensure our use of AI preserves equality and human dignity and does not exploit individuals' or groups' human and psychological vulnerabilities nor create information and power imbalances against stakeholders.
- We consider social and environmental impacts. We examine and evaluate whether the tools, resources, and processes we use have a broader negative impact on society and the environment.
We educate and seek to ensure that AI benefits society as a whole
- In education, we strive to ensure that AI benefits all children. We inform and educate educators, students and pupils so that they understand how it works, how to use its tools, and their limitations and risks.
- We strive to make access to AI, information about the benefits and risks, and knowledge of use available to the most vulnerable so that all of society can benefit.
- We use AI tools to extend the accessibility of our services and information to people with hearing, visual, intellectual, and other disabilities.
We care about security and data protection
- We protect personal data and other confidential information so that our use of AI tools complies with GDPR and applicable data protection and security laws and regulations, and so that the data cannot be misused by a third party.
Practical Rules for using AI tools
Working with large language models (LLMs - text generators and editors, chatbots etc.)
- We can use public tools like ChatGPT, Bing, Bard, etc., for inspiration, text suggestions, or stylistic editing. They can also be used for translation, summarisation, synonym and technical term searches, tone editing, shortening, subtitle suggestions and other content work. Large language models and specialised LLMs can also solve mathematical and logical problems, create and edit tables, categorise data, find Excel functions, etc.
- However, we do not rely on these tools to discover or verify factual information. Double-check all text and information AI generates, as models sometimes invent things convincingly.
- When using publicly available LLMs, we protect personal data and sensitive internal information. We never enter the personal data of clients or partners, or any other sensitive internal information, such as the contents of internal financial reports or grant applications.
- We do not generate and use AI-generated text mechanically and without reason. We don't want to increase production at the cost of quality, nor replace originality with mediocrity. AI tools should help us improve the quality of our texts; the results should always be better than if we had written them ourselves.
- The employee is always responsible for texts created with the help of AI and signs them in their own name.
- We keep children safe and always check the guidelines of the particular AI system we intend to use. According to ChatGPT's rules, for example, children under the age of 13 are not allowed to use the app on their own, even with the consent of a legal guardian. Children 13 and older can use ChatGPT with a legal guardian's consent, and supervision is still recommended.
Use of image, video and audio generators
- We never generate images, videos or audio that could be construed as documentation of our activities or give a false impression of actual events or our work. Audiovisual material in our outputs is not just an illustration of our activities but gives these activities credibility that we must not compromise.
- We never retroactively edit existing photos, videos or audio using AI tools, except for technical and stylistic modifications that do not affect the content, or, in justified cases, to anonymise persons or places for security reasons. In such cases, we transparently disclose the use of AI alongside the image, video or audio.
- We never generate images, videos or audio that could be confused with the medium of photography, video, or actual sound recording. An acceptable exception is when we want to illustrate the capabilities of AI tools for educational purposes, in which case we transparently include information about its use.
- We can use image, video or sound generators to create drawings and illustrations, especially in artwork that mimics handmade or digital art styles and in graphic design. In such cases, we must ascertain that we are entitled to use them in this way and disclose their use transparently, along with the author's name.
Use of other AI tools
- Following the Code and the conditions above, we can use many AI tools to improve and streamline our work. For example, there are tools for creating presentations, transcribing spoken word to text, and tools for evaluation, automation, programming, working with databases, personalisation or fighting misinformation.
- We weigh the advisability and risks against the organisation's values and overall benefit when using AI tools.
Working with organisation data
- We always consider the level of risk when using a specific AI tool.
- We take particular care to protect personal data and any internal or confidential information. We never enter personal client or partner data, internal financial data, or any other sensitive information into tools not explicitly approved for use within the organisation.
- We only enter content into publicly accessible AI tools that we would be comfortable posting anywhere on the Internet, so that we do not leak internal information carrying reputational or security risks. This also applies to apps or services connected to with a private account.
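The rules above can be partly supported by tooling. As a minimal sketch, the following hypothetical Python helper (not part of any existing organisational system) masks common personal-data patterns before text is pasted into a public AI tool. Pattern-based masking is only a safety net for obvious identifiers such as email addresses and phone numbers; a human must still review everything that leaves the organisation.

```python
import re

# Illustrative patterns only: real personal data takes many more forms
# (names, addresses, IDs) that simple regexes cannot reliably catch.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

draft = "Contact Jana at jana.novak@example.org or +420 123 456 789."
print(redact(draft))
# -> Contact Jana at [EMAIL] or [PHONE].
```

A filter like this could be run over any draft before it is submitted to a public LLM, but it does not replace the rule itself: content that should not leave the organisation should not be entered at all.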