Responsible AI and Ethics Commitment

Effective Date: 02 June 2025

1. Introduction

At Kami, our mission is to empower students and teachers worldwide with innovative tools that transform the learning experience. The integration of powerful AI technologies, specifically Google’s Gemini models, via our secure backend integration platform, represents a significant step forward in achieving this mission.

This Responsible AI and Ethics Policy outlines our commitment to developing and deploying these AI technologies in a manner that is safe, fair, transparent, and accountable. It serves as a guiding framework for all our stakeholders, from our internal development teams to the educators and students who use our products. Our approach is founded on the core principle that AI should augment the capabilities of teachers and enrich the learning journey for students, always prioritizing human well-being and ethical considerations.

This is a living webpage. As AI technology evolves and we learn from our users and the broader community, we will continuously review and update our practices to meet the highest ethical standards.

2. Governance

Effective governance is the bedrock of responsible AI. We have established a clear framework to ensure our AI systems are developed and managed with oversight, intention, and a deep sense of responsibility.

  • 2.1 Principles and Values: Our AI development and deployment are guided by the following core principles:

  • Student and Teacher Centricity: AI will be used to support pedagogical goals, enhance learning outcomes, and reduce teacher workload. It is a tool to empower, not replace, the human element in education.
  • Fairness and Equity: We are committed to identifying and mitigating bias in our AI features. We build upon the principles outlined in Google’s Responsible AI Practices and strive to provide inclusive and equitable opportunities for all learners.

  • Transparency: We believe in being open and honest about how we use AI. We will strive to make our AI systems’ operations and limitations understandable to our users.

  • Accountability: We take responsibility for the AI systems we deploy. We have established clear lines of ownership and mechanisms for redress.
  • Privacy and Security by Design: Protecting user data is paramount. Our AI systems are architected from the ground up to safeguard privacy and security, as detailed in our Privacy Policy.


  • 2.2 Roles and Responsibilities:

  • AI Ethics Review: A cross-functional team comprising senior leadership from product, engineering, and data privacy is responsible for overseeing the implementation of this policy, reviewing new AI features, and guiding the company’s overall AI strategy.
  • Development Teams: Our engineers and product managers are responsible for implementing this policy in the design, development, and testing of all AI features. This includes conducting bias assessments and ensuring adherence to data privacy protocols.
  • Users (Teachers): We empower educators to use AI tools responsibly within their classrooms, providing them with guidance and context on the capabilities and limitations of the technology.


  • 2.3 Risk Management: We have implemented a continuous AI risk management process:

  • Impact Assessments: Before deploying any new AI feature, we conduct a thorough assessment to identify potential risks, including data privacy vulnerabilities, security threats, and potential negative impacts on student learning or well-being.
  • Mitigation Strategies: We develop and implement a clear mitigation strategy for each identified risk. For instance, our architectural decision not to submit PII and to process data only after conversion to PDF is a key mitigation for privacy risks.
  • Architectural Safeguards: We deliberately abstract the AI interaction layer. Neither students nor teachers directly interact with or control the underlying prompt engine from the front-end. All requests are managed and sanitized through our secure backend API, which is designed to constrain inputs and structure outputs. This architectural choice significantly minimizes risks such as prompt injection, misuse, and exposure to unintended AI behavior.
  • Regular Audits: We conduct regular internal and third-party audits of our AI systems to ensure they are functioning as intended and continue to align with our ethical principles.
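The abstraction described above can be illustrated with a minimal sketch. This is a hypothetical example, not Kami’s actual implementation: the task names, `build_request` function, and size limit are illustrative assumptions. The key idea is that the front-end selects from an allowlist of task types and never supplies prompt text, so the prompt engine stays server-controlled.

```python
# Hypothetical sketch of a backend layer that constrains AI inputs.
# TASKS and build_request are illustrative names, not a real API.

TASKS = {
    "summarize": "Summarize the following document for a student audience.",
    "quiz": "Write five comprehension questions about the following document.",
}

def build_request(task: str, document_text: str) -> dict:
    """Map an allowlisted task name to a server-side prompt.

    The client chooses only the task name; the instruction text is fixed
    on the server, which reduces the prompt-injection surface, and the
    document input is bounded in size.
    """
    if task not in TASKS:
        raise ValueError(f"unsupported task: {task!r}")
    return {
        "system": TASKS[task],             # fixed, server-controlled instruction
        "content": document_text[:20000],  # bounded input size
    }
```

Because every request passes through a mapping like this, an unrecognized or attacker-crafted task is rejected before any model call is made.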


3. Design and Development

Our ethical principles are embedded in every stage of our product design and development lifecycle.

  • 3.1 Human-Centered Design:

  • Teacher in the Loop: Our AI features are designed to keep the teacher in control. The AI provides suggestions, automates mundane tasks, and offers insights, but the final pedagogical decisions rest with the educator.
  • Inclusive Design: We actively seek input from a diverse range of educators and students during the design process to ensure our AI tools are accessible, intuitive, and meet the needs of users from all backgrounds.
  • User Autonomy and Control: We empower our users with ultimate control over their experience. The use of AI features is optional. We provide granular controls for administrators at the school or district level, as well as for individual teachers, to disable all AI functionality. This ensures that the decision to use AI rests entirely with our educational partners and their users.


  • 3.2 Data Management:

  • No PII in AI Processing: As a strict policy, no Personally Identifiable Information (PII) is submitted to our AI models. We have controls in place that act as a layer of abstraction to protect user data.
  • Purpose Limitation: We do not use any customer data to fine-tune or improve our AI models. Any data used for quality assurance or testing is anonymized, aggregated, and used for the sole purpose of enhancing the safety, accuracy, and performance of the educational feature.
  • Data Security: We adhere to the highest standards of data security, employing robust encryption and access control measures to protect all data within the Kami ecosystem.
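A pre-processing pass of the kind the PII policy above implies can be sketched as follows. This is an illustrative example only: the patterns and the `redact` function are assumptions, and a production system would rely on a vetted PII-detection service rather than simple regular expressions.

```python
import re

# Hypothetical sketch: strip common PII patterns before text leaves
# the trusted boundary. Patterns are illustrative, not exhaustive.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders
    so that only de-identified text reaches downstream processing."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```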


  • 3.3 Algorithmic Transparency and Explainability: We recognize that the inner workings of Large Language Models (LLMs) can be complex. While perfect explainability is an ongoing challenge in the AI field, we are committed to providing transparency.

  • Clear Explanations: We will provide users with clear, age-appropriate explanations of what our AI features do and how they work.
  • Limitations and Confidence: Where applicable, we will indicate the limitations of the AI’s output and provide context to help users interpret the information correctly.


4. Testing and Deployment

We are committed to a rigorous testing and monitoring regime to ensure our AI systems are safe, reliable, and perform as expected.

  • 4.1 Testing and Validation:

  • Bias Testing: We employ a variety of techniques to test for and identify potential biases in the new Gemini models we adopt, where possible across different customer groups and learning contexts.
  • Performance and Reliability: Our AI features undergo extensive testing in simulated and real-world educational settings to ensure they are accurate, reliable, and robust.


  • 4.2 Monitoring and Auditing:

  • Continuous Monitoring: Deployed AI systems are continuously monitored for performance degradation and the emergence of any unintended consequences.
  • Feedback Loops: We have established channels for teachers and students to provide feedback on the performance and behavior of our AI tools, which is a critical component of our monitoring process.


  • 4.3 Human Oversight:

  • Human oversight is a non-negotiable aspect of our AI deployment. Our teams regularly review the performance and impact of our AI systems.
  • We maintain the ability to intervene, override, or disable AI systems if they are found to be causing harm or performing outside of our established ethical guidelines.


5. Communication and Stakeholder Engagement

Building trust in AI requires open communication and active engagement with all stakeholders.

  • 5.1 Transparency with Users:

  • Clear Communication: We will always be transparent with our users when they are interacting with an AI-powered feature within Kami.
  • Educational Resources: We will provide clear documentation and tutorials to help teachers and students understand and make the most of our AI tools in a responsible manner.
  • Policy Updates: Our Terms of Service and Privacy Policy are regularly updated to reflect our use of AI and will always be readily accessible.


  • 5.2 Stakeholder Engagement:

  • We are committed to engaging in an ongoing dialogue with educators, students, parents, policymakers, and industry peers about the responsible use of AI in education.
  • We will actively participate in industry forums and working groups to help shape the standards and best practices for AI in the EdTech sector.


6. Accountability and Continual Improvement

We hold ourselves accountable for our AI systems and are dedicated to a cycle of continuous improvement.

  • 6.1 Accountability Mechanisms:

  • Clear Channels for Redress: We have established clear and accessible channels for users to report concerns, ask questions, or appeal decisions made by AI systems. Our support and privacy teams are trained to handle these inquiries with care and urgency.
  • Responsibility for Outcomes: We take responsibility for the tools we build. In the event of unintended negative outcomes caused by our AI features, we are committed to investigating the cause and taking appropriate remedial action.


  • 6.2 Continual Learning and Improvement:

  • This policy and our AI practices are not static. We are committed to a process of continual learning.
  • We will regularly review and update our AI systems, governance processes, and this policy based on user feedback, new research, technological advancements, and the evolving regulatory landscape.


For questions or feedback regarding this policy, please contact us at privacy@kamiapp.com or ethics@kamiapp.com.