Generative AI Acceptable Use Policy

Purpose

This policy sets out the standards and guidelines for acceptable use of generative AI tools, ensuring responsible, ethical, and secure practices within Deft.

Scope

This policy applies to all Deft employees, contractors, and partners using generative AI tools in connection with Deft business activities.

Policy Standards

1. Approved Tools

  • Only use generative AI platforms or tools approved by Deft IT and Security teams.

  • Notify IT of any new generative AI tools under consideration for approval.

2. Data Privacy and Security

  • No Confidential Information: Do not input confidential, sensitive, or personal data into generative AI systems unless explicitly authorized.

  • Anonymized Inputs: Anonymize or sanitize data inputs before using them for experimentation or analysis; a minimal sanitization sketch follows this list.

  • Encryption: Ensure that all data exchanged with AI tools is encrypted in transit using TLS 1.2 or higher; an example of enforcing this from client code also follows below.
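
The following is a minimal sanitization sketch in Python, illustrating the "Anonymized Inputs" item above. The redaction patterns, the sanitize_prompt helper, and the sample text are illustrative assumptions rather than a mandated implementation; the authoritative redaction rules are defined by the Deft Security team.

```python
import re

# Illustrative redaction patterns only; authoritative sanitization rules
# are defined by the Deft Security team.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d(?:[\s.-]?\d){6,13}"),
}

def sanitize_prompt(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens
    before the text is sent to a generative AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(sanitize_prompt(
    "Contact Jane at jane.doe@example.com or +1 555 010 1234 about the renewal."
))
# Prints: Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED] about the renewal.
```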
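
The sketch below, again in Python and using only the standard library, shows one way to satisfy the "Encryption" item by refusing connections that cannot negotiate TLS 1.2 or higher. The endpoint URL and request body are placeholders, not a real Deft-approved service.

```python
import ssl
import urllib.request

# Build an SSL context that rejects anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

request = urllib.request.Request(
    "https://api.example-approved-ai-tool.com/v1/generate",  # placeholder URL
    data=b'{"prompt": "sanitized text only"}',
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The request fails up front if the server cannot meet the TLS 1.2+ requirement.
with urllib.request.urlopen(request, context=context) as response:
    print(response.status)
```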

3. Intellectual Property

  • Verify the intellectual property ownership and usage rights of content generated by generative AI tools.

  • Clearly distinguish between AI-generated and human-created content where required.

4. Ethical Use

  • Avoid creating content that is misleading, biased, discriminatory, offensive, or inappropriate.

  • Regularly review generated outputs for accuracy, appropriateness, and alignment with Deft’s ethical standards.

5. Governance and Monitoring

  • Report all incidents of misuse, data leaks, or policy violations immediately to the Deft security team.

  • Regular audits will be conducted to ensure compliance with this policy.

Responsibilities

  • Employees and Users: Comply with all aspects of this policy.

  • Managers: Provide oversight, ensure compliance, and facilitate training as required.

  • Security and Compliance Teams: Maintain this policy, approve tools, and manage incidents.

Enforcement

Failure to comply with this policy may result in disciplinary action, including suspension of tool access or other measures as deemed appropriate by Deft management.

Updates

This policy will be reviewed and updated regularly to reflect best practices and emerging risks associated with generative AI technologies.

Policy Version: 1.1
Date: January 2025
Policy Owner: Ian Crocombe