AI Usage Policy

There are, understandably, many questions around privacy and security when it comes to generative AI.

This video is a breakdown of OpenAI's Privacy Policy by a UK-based lawyer. He also explores how the model is trained and explains the regulations and audits that rigorously enforce the Privacy Policy. Please note that this review covers "Plus" accounts. The data control settings he refers to are off by default on the "Teams" and "Enterprise" plans, which offer even higher levels of control and security.

His summary starts at 7 minutes 20 seconds.

Simpler With AI's AI Usage Policy


1. Introduction

Purpose:
The Simplerwith.ai AI Usage Policy outlines the ethical and responsible use of third-party AI technologies, including generative AI tools such as ChatGPT, within the company. This policy serves as a guideline for all employees, contractors, and partners to ensure that AI is integrated into our operations and client services in a manner that aligns with our core values: simplicity, innovation, impact, and continuous support.

As a company specializing in AI training and consultancy, we recognize the importance of setting a high standard for AI use. Our mission to simplify AI adoption for businesses depends on our ability to provide reliable, transparent, and ethical AI solutions. This policy is designed to safeguard data privacy, promote fairness, and uphold accountability in all AI-related activities, both internally and in client-facing projects.

By adhering to this policy, Simplerwith.ai aims to build trust with our clients and partners, mitigate risks associated with AI misuse, and stay compliant with relevant legal and regulatory standards, including the UK’s Data Protection Act 2018, the GDPR, and other applicable UK and international AI regulations such as the AI Safety Act.

2. Scope

This AI Usage Policy applies to all employees, contractors, and third-party vendors who engage with AI technologies as part of their work at Simplerwith.ai. It encompasses both internal usage of AI tools and their application in client-facing projects.

Internal Use:

This policy governs the responsible use of AI tools, including generative AI platforms like ChatGPT, used in daily operations, training programs, and consultancy services. The focus is on ensuring data privacy, security, and fairness while maximizing productivity.

Client-Facing Use:

This policy ensures that AI tools used during consultancy engagements are deployed transparently and ethically. AI-driven outputs and recommendations must be communicated openly, be free from bias, and serve the best interests of the client. Explicit client consent will be obtained before applying AI-driven solutions that could impact decision-making, and clients retain the right to contest AI-generated decisions under GDPR.

Third-Party Vendors:

All external AI vendors will be evaluated individually to confirm their alignment with Simplerwith.ai’s ethical standards, particularly regarding privacy, fairness, and transparency. Data Protection Impact Assessments (DPIAs) will be conducted for any high-risk AI systems provided by third-party vendors, ensuring compliance with GDPR and other relevant legal frameworks.

3. Principles

3.1. Ethical AI Use

AI tools must align with Simplerwith.ai’s mission to empower businesses through transparent and effective solutions, ensuring the protection of human rights, privacy, and client autonomy.

3.2. Transparency

Employees must clearly communicate the role of AI in decision-making, ensuring clients understand the capabilities and limitations of AI tools. Any biases or limitations in AI models must be disclosed, including the steps taken to mitigate these risks.

3.3. Accountability

The AI Governance Committee will oversee the application of AI, with regular audits to ensure compliance. Employees are individually responsible for their ethical use of AI tools and must immediately report any issues with AI outputs that could negatively impact clients or business processes. A whistleblowing mechanism, including an external reporting option, will be available to address concerns about unethical AI use.

3.4. Fairness and Non-Discrimination

AI tools will be assessed regularly through fairness audits to prevent the perpetuation of biases. Metrics that assess bias across protected characteristics (e.g., gender, race, disability) will be used to ensure AI-driven decisions are equitable. Simplerwith.ai complies with the Equality Act 2010, ensuring that AI solutions serve all clients fairly.
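
As an illustration of the kind of metric such a fairness audit might draw on (a minimal sketch only: the demographic parity measure, the 0.8 review threshold, and the group labels below are illustrative assumptions, not a prescribed method), positive-outcome rates can be compared across protected groups in a few lines of Python:

from collections import defaultdict

def demographic_parity_ratio(decisions, groups):
    """Compare positive-outcome rates across protected groups.
    decisions: 0/1 outcomes from an AI-assisted process.
    groups: group labels (e.g. self-reported gender), aligned with decisions.
    Returns the lowest group rate divided by the highest; values well below
    1.0 suggest outcomes are unevenly distributed and warrant review."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Illustrative check: escalate if the ratio falls below an agreed threshold (assumed 0.8 here).
if demographic_parity_ratio([1, 0, 1, 1, 0, 0], ["A", "A", "A", "B", "B", "B"]) < 0.8:
    print("Potential disparity detected; escalate to the AI Governance Committee.")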

3.5. Security and Privacy

Simplerwith.ai will ensure that all AI applications comply with the UK’s Data Protection Act 2018 and GDPR. Strong measures will be taken to secure sensitive data and protect client privacy, including regular impact assessments and data minimization practices.
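
By way of illustration only (a minimal sketch: the regular-expression patterns, placeholder labels, and example contact details are assumptions, not an approved redaction implementation), data minimization in practice might include replacing obvious personal identifiers before any text is submitted to a third-party AI tool:

import re

# Assumed, non-exhaustive patterns for common identifiers; a production redaction
# step would need broader coverage and sign-off from the Compliance or Legal Lead.
REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{3}\s?\d{3}\b"),
}

def minimize(text: str) -> str:
    """Swap obvious personal identifiers for placeholders before the text
    leaves the company, supporting the data minimization practice above."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(minimize("Contact jane.doe@example.com or 07700 900123 about the draft report."))
# -> "Contact [EMAIL] or [PHONE] about the draft report."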

4. Implementation

4.1. Governance Structure

The AI Governance Committee will oversee the use of AI across the company, ensuring alignment with ethical, legal, and business standards.

Composition and Roles:

The AI Governance Committee will be composed of key members from within the company, ensuring that individuals with relevant expertise are included. Roles typically include:

  • Chairperson: Responsible for leading the governance efforts and overseeing AI use.
  • Compliance or Legal Lead: Monitors legal and regulatory compliance, such as GDPR.
  • Technical Expert: Evaluates the technical aspects of AI tools, including data privacy and security.

If certain expertise is not available in-house, external experts will be consulted to ensure the company has access to the necessary skills, particularly in areas like ethics, data privacy, or technical risk assessment.

Meeting Frequency and Decision-Making:

  • The committee will meet annually to review AI usage, assess risks, and ensure compliance with ethical and legal standards.
  • Additional meetings may be scheduled if urgent concerns arise, such as AI-related issues or significant regulatory updates.
  • Decision-Making Process: Decisions will be made through consensus where possible. If consensus cannot be reached, the Chairperson will have the final say.

Authority and Powers:

  • The committee has the authority to veto or pause AI projects if non-compliance, ethical risks, or legal violations are identified. This ensures that AI tools are used responsibly and in accordance with company values and regulations.
  • The committee can also mandate corrective actions, including modifying AI usage, conducting further risk assessments, or adjusting workflows to mitigate risks.

4.2. Training and Education

Mandatory, regular training will be provided on ethical AI use, covering topics like data security, bias detection, and transparency. Employees will be required to complete annual certifications to demonstrate up-to-date knowledge of ethical AI usage, data protection, and regulatory requirements. This certification process will ensure that all personnel remain informed about emerging risks and developments in AI technology and regulation.

4.3. Vendor Management

AI vendors will be assessed on a case-by-case basis to ensure they meet Simplerwith.ai’s standards for security, fairness, and transparency. Vendor contracts will clearly outline expectations for ethical AI use, compliance with data protection laws, and adherence to GDPR standards.

5. Compliance and Monitoring

5.1. Regular Audits

The AI Governance Committee will conduct regular audits to ensure adherence to data privacy, fairness, and transparency standards. Fairness audits will include specific assessments of AI-driven decisions to prevent bias, ensuring compliance with the Equality Act 2010 and GDPR.

5.2. Reporting Mechanisms

Employees can report AI-related concerns directly to the AI Governance Committee. Reports will be treated confidentially, with an external whistleblowing option available.

5.3. Accountability for Violations

Any violations of the policy may result in further training, suspension of AI tool access, or termination of employment or contracts, depending on the severity. Legal liability for AI-related harms will be reviewed by the AI Governance Committee to determine responsibility for damages caused by improper AI use.

6. Review and Updates

6.1. Policy Review Frequency

The AI policy will be reviewed annually and updated as needed, particularly in response to technological or regulatory changes.

6.2. Involving Stakeholders in the Review Process

Feedback from employees, clients, and vendors will be considered in the review process to ensure the policy is practical and effective.

6.3. Communicating Updates

All updates to the policy will be communicated through company-wide announcements, and relevant training will be provided if needed.

7. Contact Information

For questions or concerns related to the AI Usage Policy, please contact:

  • AI Governance Committee
  • Email: governance@simplerwith.ai
  • Phone: 01273 011 482