LunarLab understands that AI systems have the potential to improve the user experience, but that, like all technology, they also carry a risk of causing harm. As a company, we recognize the risks associated with using these technologies. To mitigate these risks, we will follow the policy guidelines outlined below whenever AI systems are used in any capacity at the company.

Policy

The following policy applies to all generative AI tools and large language models.

  1. Transparency: We will be transparent about the use of AI in our design process. Whenever AI is used in the course of project work, we will proactively inform our clients. This includes how AI was used in the project, how we maintained human oversight, what input data and prompts were used, and the potential risks associated with AI.
  2. Data Privacy: LunarLab will keep all customer data and proprietary company data strictly private. Customer data or proprietary company data should never be added to an AI system or used in a prompt for any reason.
  3. Human Oversight: We will ensure that humans have oversight and control over any AI systems that are used. We will maintain human oversight throughout the design process and ensure that AI systems are not making decisions that should be made by humans. We will never include AI-generated content in our deliverables without comprehensive, intentional human oversight.
  4. Human Voices First: We will never use AI to substitute for or replace human voices or insights. Validation, user testing, and other types of research should always be conducted with real humans rather than with AI-generated personas. Any AI use should center the needs of real humans.
  5. Ethical Use: We will only use AI systems for purposes that we feel are ethical, and that align with our values and those of our clients. We will not use AI to manipulate or deceive people, or in any other way that could cause harm. We will also consider the broader societal impacts of AI use. Any AI used by LunarLab should be used with the goal of benefiting the company; its employees, contractors, or vendors; clients; stakeholders; end users; or society in general.
  6. Customer Guidance: LunarLab employees are not AI experts. However, if LunarLab is made aware that a client plans to use AI in their systems, LunarLab employees should provide guidance on ethical AI use to the best of our ability. When designing these systems, we will work to ensure that they are free from bias, manipulation, or opportunities for discrimination or harm. We will recommend that each system be tested ethically with diverse user groups to identify problems before the product launches. Although we cannot identify all potential problems and we won't get it right every time, we will provide the best guidance we can.
  7. Continual Improvement: We will continually assess our Responsible AI Use policy and seek ways to improve our AI practices. We will work with our employees to identify areas where we can act more responsibly, and implement changes accordingly.
  8. Regular Review: We will review and assess our Responsible AI Use policy on an annual basis to ensure that it remains current and effective.

Employees may contact Elizabeth Anderson ([email protected]) for any additional questions or concerns about the use of AI at LunarLab.