
The Future Fox: Responsible AI Policy

Summary


This document outlines The Future Fox's Responsible AI Policy, detailing how we use AI in our community engagement and consultation software, ConsultAI and PlaceBuilder. The document explains the scope of AI applications, data handling practices, guiding principles, and customer guidance, including limitations and recommended practices. The Future Fox is working toward compliance with ISO 42001:2023 and encourages customers to declare AI use in reports for transparency. We set out our commitment to transparency, privacy, fairness, and accountability in AI usage.

 

Our Commitment


The Future Fox is at the forefront of providing artificial intelligence (AI) powered community engagement and consultation software. We believe in the power of AI to transform community engagement and consultation for the better. We also recognise the responsibility that comes with using AI. That’s why we’ve established a robust AI Policy aligned with ISO 42001:2023, the international standard for trustworthy AI management.

 

Scope of AI usage


The Future Fox uses a combination of custom algorithms, Natural Language Processing (NLP) techniques, and generative AI or Large Language Models (LLMs) in our software to analyse consultation comments from the public and stakeholders, producing actionable insights and reports for our customers. This level of automation significantly reduces manual workload while providing consistent and structured results. The functionalities in our software that use AI are:

 

Topic detection and extraction

  • Automatically determines topics in the dataset of comments.

 

Topic and sentiment classification

  • Automatically assigns topics or categories to free-text comments.

  • Identifies the overall sentiment (e.g., positive, negative, neutral) and can detect emotional tone to aid deeper understanding.

  • Extracts statements relevant to specific topics.

 

Summarisation

  • Generates concise, plain-language summaries of comments and data segments, in various report formats, which customers can review and edit before sharing with stakeholders.

 

Document transcription 

  • Optical Character Recognition (OCR) for automatic transcription of documents in PDF and JPEG formats.

 

Data storage, hosting and processing


All data is hosted on The Future Fox’s secure AWS environment in the UK for storage and core processing. The Future Fox’s AI systems use third-party model providers and data subprocessors under commercial licence only, each holding appropriate and recognised data protection, information security, and AI management credentials. Our subprocessors are:

  • Amazon Web Services (UK) - servers, data storage, Textract (OCR document transcription service)

  • OpenAI (US) - only comment data (e.g. suggestions, preferences, representations), which may at times include personal data, is shared with this subprocessor.

  • Cloudflare (US) - content delivery

  • Microsoft 365/PowerBI (UK) - administration

 

When our AI features rely on OpenAI’s large language models (LLMs), data is transferred temporarily for inference under OpenAI’s commercial terms. OpenAI may retain the content of these requests for up to 30 days for abuse monitoring purposes, but none of it is used to train OpenAI’s models. After this short retention window, the data is deleted. While we separate personal data from comment data before AI processing, some comments may inadvertently include personal data. Our approach ensures that any personal data in comments, if present, remains protected at all times.

 

When our AI features rely on AWS Textract, all data is processed within our secure UK environment, and none is used to train AWS’s models.

 

We work closely with our customers’ Data Protection teams to ensure our architectures meet their compliance requirements, including data minimisation and lawful processing principles. Where appropriate, we can also configure self-hosted models owned and managed by the customer.

 

Guiding principles of our usage of AI 


Our use of AI is grounded in a commitment to responsible innovation, in line with our company mission and culture. We strive to ensure that our AI solutions serve the best interests of both our customers and the communities they engage. The following principles define how we apply, manage, and continually refine our AI capabilities.

 

Transparency

  • We let you know when AI is being used in our products, including our ConsultAI platform.

  • We provide our customers with information about how our AI features operate and how data is treated throughout the lifecycle.

 

Privacy and Data Protection

  • We safeguard personal information and comply with all relevant data protection laws, primarily the UK GDPR.

  • We include privacy-by-design features, such as separating personal data before AI processing.

  • Your data is not used to train AI models.

  • Your data is retained for as long as is necessary to fulfil our services to you and will be deleted from our systems and all of our appointed subprocessors in accordance with our offboarding policy.

 

Fairness and bias mitigation

  • We regularly test our AI models for biases and continually refine them to reduce unfair or unintended outcomes.

  • We empower customers to flag and correct any classifications they believe are inaccurate or biased.

 

Accountability and human oversight

  • We designate clear roles for those who develop and operate our AI tools.

  • Our team, not algorithms, makes the final decisions, particularly for high-impact outcomes affecting people or communities.

  • We build with a Human-in-the-Loop foundation: ConsultAI supplements, rather than replaces, the expertise of our customers. We encourage human review and interpretation of AI outputs in ConsultAI through the design of our software and support.

 

Security and reliability

  • We follow strict security protocols to protect our AI systems and your data.

  • Every AI component undergoes testing to ensure accuracy and reliability before deployment.

 

Ongoing compliance, continuous improvement


The Future Fox is developing its AI management system for compliance with ISO 42001 by the end of 2025, following BSI training in February 2025. We continuously update our AI approach in light of new insights, regulations, and guidance and stay abreast of industry best practices and frameworks. We welcome discussion on the responsible use of AI with our customers and their evolving needs.

 

Guidance for customers


We recommend that you declare the use of AI when generating your reports, to support transparency and trust.

 

Example disclaimer: This report was produced with assistance from The Future Fox’s ConsultAI software, which classifies and summarises consultation responses. This helped us rapidly analyse large volumes of feedback, save significant resources, and meet tight timescales in order to progress the project. We reviewed and validated the AI-generated insights, and the final interpretations and decisions remain ours. We ensured that personal data was protected and never used to train AI models or retained by third parties.

 

Known limitations and recommended customer practices


While we invest in rigorous testing to ensure our AI features perform reliably and accurately, including manual inspections of every dataset, some limitations are currently inherent to automated analysis and the use of generative AI:

  • Model accuracy: Although we achieve and monitor high accuracy in classifying and summarising comments, no model is perfect.

  • Contextual nuances: AI may misunderstand slang, technical jargon, cultural references, and languages other than English. 

  • Potential bias: Machine learning models can inadvertently reflect biases present in training data. We continuously monitor and mitigate this risk.

  • Hallucinations or overreach: AI systems may generate plausible but factually incorrect statements in summaries, though we actively work to mitigate such occurrences.

 

What we recommend:

  • Explain AI use: If you embed ConsultAI’s summarised or classified outputs into official documentation, clearly note that an AI tool was used. An example disclaimer is provided above.

  • Review outputs: Always review and validate the AI’s findings. 

  • Provide feedback: If you spot inaccuracies or biases, report them to us so we can review, learn and improve.

  • Statutory consultations: If using ConsultAI for official processes (e.g., local plan representations), confirm your compliance with relevant regulatory guidance, such as PINS guidance on using AI in the planning system. 

  • Ask us: If you are unsure, please contact us for assistance.

 

Questions or concerns?


If you have questions about how AI is used at The Future Fox or in ConsultAI, please contact our team at hello@thefuturefox.com. We welcome feedback on how we can enhance our AI system’s fairness, transparency, and overall efficacy.

 

Please note this summary provides an overview of The Future Fox’s AI commitments. For detailed information about our AI development, security measures, and data safeguards, please refer to our full AI Policy or reach out to us directly.
