Organizational Integrity and Artificial Intelligence Policy
350 New Hampshire values integrity, honesty, and transparency in our work. It is important for us to consider the ethical and environmental impacts of artificial intelligence (AI) in the work we do. Here are some of our concerns:
Algorithmic bias: AI systems have the potential to perpetuate harmful biases.
Privacy and data protection: AI systems learn from personal data, raising concerns under privacy laws and regulations. AI is increasingly used for surveillance in systems that uphold white supremacy and cause harm.
Ethical concerns: artists, writers, and others who create materials for entertainment, information, and other purposes put significant time into their work. AI harms artists’ well-being and has implications for the future of creative work.
Environmental concerns: generative AI has a massive impact on the environment. Data centers consume enormous amounts of electricity, water, and other resources.
Social implications: AI has a massive impact on society, employment, and equity.
Workers’ rights: Data centers currently rely on workers who are exploited and underpaid. Big tech companies misrepresent working conditions, punish workers for speaking about those conditions, and use contracting workarounds to underpay the people who keep systems running and label media for AI training.
Commitment to authenticity: Our volunteers and supporters deserve original work from us and should be able to trust that our communications come from scientific research, trusted sources, and knowledge gained from our experience in community with them.
Definitions
Generative AI: a form of artificial intelligence that creates new text, data, images, and other content in response to prompts from the user.
Examples: OpenAI’s ChatGPT, Coca-Cola’s “Create Real Magic,” Jasper, GitHub Copilot, Google Bard
Any tool that creates brand-new content from a prompt is usually generative AI.
Predictive AI: a form of artificial intelligence that uses existing data to forecast or predict outcomes. This form of AI uses smaller amounts of data and has a smaller impact than generative AI.
Examples: spell check, Siri/Alexa/Google Assistant, smart thermostats, fraud detection, Google Translate, spam filters, weather prediction
Predictive AI looks for patterns and trends and cannot generate new material. It uses the data given to it by the user or the platform (think of GPS guidance figuring out the fastest route).
Generative AI and predictive AI are added to more digital tools every year, and staff are not expected to avoid them completely. Predictive AI tools are acceptable for staff to use.
350 New Hampshire staff should never use generative AI to:
Fill out applications
Write emails
Write op-eds or media materials
Create images for social media or art displays
Answer questions whose answers you could find with a regular search engine or by asking a friend
Generate campaign plans, scripts, or any other written materials
350NH asks our staff team to make a conscientious choice not to use generative AI in the places where they are able to make that choice. Staff are advised to turn off AI functions in tools wherever possible. We encourage conversations about the impacts of AI and ask that volunteers and partner organizations avoid generative AI use in our work together.
350NH expects job applicants and candidates for office to use their own voice and opinions when filling out job applications, endorsement questionnaires, or other forms with 350NH.
We work for the betterment of society and for a just and sustainable future for all. We want to have genuine conversations with our community members that are not shielded by AI-generated responses.