Our Responsible AI Policy
Mightybytes uses this policy to govern more responsible AI use, both in our internal operations and in our work on behalf of clients and other stakeholders.
In tandem with a company Code of Ethics and our B Corp certification, Mightybytes publishes a series of publicly available governance documents outlining company policies and practices. To see a full list of these documents, visit our B Corp hub.
This document reflects company policies for using AI-enabled tools on behalf of our clients or in our own internal operations. If you find them useful, feel free to incorporate them into your own projects or operations.
Responsible AI Principles
Mightybytes follows relevant Corporate Digital Responsibility (CDR) principles and the World Wide Web Consortium’s Web Sustainability Guidelines to reduce risk and improve the social, environmental, and economic impacts of our digital projects and operations.
When considering AI use for a project, program, or deliverable, we ask the following questions based on CDR Principles:
- Purpose and trust: How might we build trust with customers, employees, and other stakeholders through more responsible AI use?
- Fair and equitable access: Can we ensure that diverse stakeholder groups are represented in the creative process and able to easily access informational outputs?
- Promote societal well-being: How will we protect data privacy, empower users, and prevent harm in AI and related projects?
- Consider economic and societal impact: Have we evaluated the broader effects of AI-related project decisions on society and the economy? What will we do to reduce the potential for unintended consequences?
- Accelerate impact economy progress: Have we first reviewed and considered AI-related products and services that are ecologically and societally beneficial?
- Support a sustainable planet: Have we identified ways to minimize risk, improve resilience, and reduce the environmental footprint of AI-related technology on a project?
- Reduce tech’s impact on climate and environment: What supply chain decisions will minimize the negative consequences of AI-related digital infrastructure and operations? How might we find more responsible partners?
Risk Reduction & Compliance
AI tools carry risks, including environmental degradation, data privacy violations, human rights issues, algorithmic bias, and potential reputational harm from misinformation caused by AI ‘hallucinations’. Mightybytes makes every effort to protect stakeholder data, comply with applicable regulations, and promote practices that reduce AI-related risk wherever possible.
What are the risks of Artificial Intelligence?
For a full breakdown of known AI risks that inform this policy, read our post Is Ethical and Sustainable AI Possible?
Stakeholder Well-Being
- If possible, choose tools or suppliers that publicly state how they address issues important to stakeholders, such as data privacy, misinformation, renewable energy use, and worker exploitation.
- Communicate transparently about known risks to stakeholder well-being when using AI on projects. Help AI stakeholders understand what those risks are and how to identify them.
- Always strive to understand when AI is the appropriate choice to help data stakeholders meet their needs and when it is not.
- Create clear practices to vet AI-produced data for truth and accuracy, and require human oversight of AI processes.
Privacy
- Review AI tools with data stakeholders. Collect privacy policies in a single place for easy review.
- Never use client information in AI tools without informed consent. Email is an appropriate channel to receive this consent.
- Get informed consent when using an AI note taker in meetings, and reference the note taker’s privacy policy with a link at the outset of each meeting.
Risk Management
- To reduce the risk of algorithmic bias, give preference to safer, more inclusive models trained on data representing diverse perspectives.
- AI-produced content must be rigorously fact-checked by humans, with sources cited, whether it is delivered to a client or used on an internal project.
- AI practices will follow existing and emerging legislative guidance in the relevant jurisdictions. Similarly, Mightybytes will actively support regulations that require increased transparency from AI companies, especially when it comes to climate and environmental disclosures.
- Third-party suppliers can account for the majority of an organization’s Scope 3 emissions. Where possible, suppliers that power their tools with renewable energy will be given preference over those that rely on fossil fuels. We acknowledge that the lack of environmental disclosures from AI providers often makes tracking the environmental impact of these tools difficult.
Our Responsible AI Practices
When using AI for projects, Mightybytes follows the practices below to the best of our abilities.
Responsible Strategy
- We strive to use AI in a well-defined context where it will improve the quality of relevant interactions and enhance humanity. We will always ask whether AI is the right tool for the job.
- We commit to quality control and human oversight as a strategic approach. If we can’t deliver quality results using AI tools, are they worth using?
- We prioritize more responsible, accessible, and sustainable tooling whenever possible.
- To reduce risk, we don’t create an account with any AI tool before thoroughly vetting its public-facing policies and reviewing its licensing agreements. We will share third-party supplier policies at the outset of any project.
- Free tools can be trialed without client or company data. All accounts should have two-factor authentication. Unused accounts will be deleted.
Responsible Design & Development
- We will consider whether AI-enabled tools can effectively illustrate concepts, summarize ideas, prototype features, or communicate value in design and development processes, workshops, etc. (see quality control and human oversight above).
- When using AI tools in design and development processes, we will always communicate potential risks or the possibility of unintended consequences to project stakeholders.
- We prioritize clean, performant, secure, accessible, bug-free, semantic code on every project.
- We organize AI projects for clients to track usage and keep stakeholder data in a single safe, secure place so it can easily be deleted when necessary.
Responsible Marketing
- With appropriate informed consent from relevant stakeholders, we use AI on tasks it is best suited for, such as summarizing, creating outlines, or taking notes.
- While AI might support content workflows, final draft content delivered by Mightybytes should always be written by humans in our own voice and tone (see editorial guidelines and publishing standards) or following client specifications.
- We will take the necessary steps to fact-check any claims made by AI tools.
- We focus on E-E-A-T principles (Experience, Expertise, Authoritativeness, Trustworthiness) in all our content.
AI Tooling
Mightybytes vets the AI tools we use, prioritizing those that offer transparent public data disclosures and clear social and environmental policies. Security, data privacy, accessibility, and sustainability top this list. In the absence of such disclosures, we vet AI tools on a case-by-case basis.
Privacy Policies for AI-Enabled Tools
- GreenPT: Terms of Service
- Zoom: Privacy Statement
- Adobe Creative Cloud: Privacy Policy
- LinkedIn: Privacy Policy
- MSTY: Privacy Policy
- SEMRush: Privacy Policy
- Asana: Privacy Policy
- ChatGPT: Privacy Policy
- Slack: Privacy Policy
- Google Analytics, Ads, and related products: Privacy Policy
- Lyssna: Privacy Policy
Inappropriate Uses of AI
Mightybytes deems the following uses of AI inappropriate because they increase the potential for risk or unintended consequences.
- Mightybytes never uses AI to write content wholesale. Every piece of content Mightybytes publishes is reviewed by humans to ensure accuracy and adherence to E-E-A-T principles.
- Uploading client data to AI tools without explicit permission is prohibited.
- Using AI for research without fact-checking the results is unacceptable.
- Using AI for programming tasks without thorough testing and quality control is similarly unacceptable.
- Generating images, videos, or other media for publication (versus testing or design reviews) with AI tools that train their models on copyrighted material or otherwise ignore intellectual property rights is problematic.
Education & Training
Finally, as with all new policies and practices we recommend, Mightybytes provides documentation and educational materials to help clients and other stakeholders better understand potential risks and responsible use.
Mightybytes does this to raise awareness, share information transparently, and educate the public and our stakeholders as part of our Education Impact Business Model. To that end, if you would like training or consulting on any of the practices described in this policy, please get in touch.
Resources & References:
The resources below have inspired or otherwise informed this policy.
- Wholegrain Digital’s Guide to Ethical Use of AI
- Open Source Responsible AI Toolkits
- Towards Data Science’s Critical Tools for Ethical/Sustainable AI
- Safer AI’s rating system
- GadellNet’s Building Ethical AI Practices
- Marit Digital’s AI Policy
- Enkrypt AI’s Safety Leaderboard for auditing potential model risk
- Hugging Face’s AI Energy Score for models