Transparency and Accountability

We believe in Tell Sid being community-led. To support that, we provide transparency with each update as well as through an annual report.

Last Updated: 15 May 2025

Why we publish transparency updates

Trust is earned. Once a year we share clear data on how Tell Sid is used, what safety incidents occur, and any law-enforcement or government requests we receive. We also explain major feature changes and the methods behind our safety systems.


What we will report each year

  • Volume metrics (from testing) – types of conversations, average session length, median response time.

  • Safety interventions – number of blocked or safe-completed requests, broken down by category (child sexual abuse, self-harm, hate, extremism, illicit behaviour).

  • User reports – how many reports we received, how quickly we replied, outcomes.

  • Law-enforcement requests – count and broad nature of any legally binding demands.

  • Infrastructure uptime – percentage availability and any significant incidents.

  • Model or policy updates – summary of changes to the AI model, training data or moderation rules.


We do not reveal personal data or anything that could compromise user privacy or security.


Publication schedule

Annual Report – covers 15 May 2025 to 15 May 2026; published each July.

The first report will appear in July 2026 and will be linked on this page.


Methodology notes

Metrics are generated automatically from anonymised logs that retain no persistent identifiers. Safety-intervention counts come from Tell Sid’s rule-engine events plus OpenAI’s moderation responses. Numbers are independently reviewed by at least one external member of our Safety & Ethics Board before publication.
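
For illustration only, the sketch below shows the kind of aggregation this methodology implies: counting blocked or safe-completed requests per category from anonymised log events. The field names ("category", "action") and the example records are hypothetical and do not reflect Tell Sid's actual log schema or moderation pipeline.

    from collections import Counter

    # Hypothetical anonymised events: no user identifiers, only the
    # intervention category and the action the system took.
    anonymised_events = [
        {"category": "self-harm", "action": "safe_completion"},
        {"category": "hate", "action": "blocked"},
        {"category": "self-harm", "action": "blocked"},
    ]

    def count_interventions(events):
        """Return the number of blocked or safe-completed requests per category."""
        interventions = Counter()
        for event in events:
            if event["action"] in {"blocked", "safe_completion"}:
                interventions[event["category"]] += 1
        return dict(interventions)

    print(count_interventions(anonymised_events))
    # {'self-harm': 2, 'hate': 1}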


Request a deeper dive

Researchers, journalists or regulators who need additional context can email safety@insinto.ai. Where legally and commercially possible, we will share more detailed, aggregated data.


Change history

15 May 2025 – Initial framework published. No historical data yet.


Copyright © INSINTO LTD 2025. All rights reserved.
