AI data security

This page describes the Observe AI data security measures and practices.

Zero data retention (ZDR)

Q: Does the ZDR policy apply to the o11y AI Assistant and the MCP Server, or only to AI SRE?

A: Yes, the ZDR policy applies to the o11y AI Assistant and the MCP Server as well.

Data residency

Q: Where is data processed by OpenAI and Anthropic (US, EU)? Can we specify region?

A: Our AI features are currently available only in the US and use the US-based services of OpenAI and Anthropic. A specific processing region cannot currently be selected.

Encryption

Q: Is data encrypted in transit to AI providers? What encryption standards?

A: All requests to external providers are encrypted in transit using TLS v1.2+.
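As an illustrative sketch (not Observe's actual client code), enforcing the "TLS 1.2 or later" requirement on an outbound connection can be done with Python's standard `ssl` module; the function name here is hypothetical:

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Build an SSL context that refuses anything older than TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_tls_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # → True
```

Any handshake offering only TLS 1.0/1.1 fails against such a context, which is the behavior the policy above describes.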

Audit logs

Q: What audit capabilities exist for tracking AI queries and data access?

A: All AI queries and data access run in the user's context and are audited and attributed to that user per our standard audit capabilities.

Data retention

Q: What is the actual retention period for data sent to OpenAI/Anthropic despite ZDR claims?

A: All data is processed in memory and is not stored or logged.

Subprocessors

Q: Do OpenAI/Anthropic use additional subprocessors for model hosting?

A: Yes.

Opt-out

Q: Can AI features be disabled at workspace/organization level?

A: Yes.

MCP Server security

Q: What additional security controls exist for MCP Server given higher risk?

A: Several controls apply:

- Token-based authentication, per our standard for API access.
- RBAC scoped to the account of the user making the requests.
- Network isolation: the server operates in a restricted network environment, and traffic to and from the customer can be enforced over the existing PrivateLink connection.
- Full audit logging of any queries made on the user's behalf.
- Encryption of all traffic and activities, strictly enforced per the customer's and Observe's encryption policies.
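The token-based authentication described above can be sketched as a standard bearer-token request. This is a hypothetical illustration: the URL and token value are placeholders, not real Observe endpoints or credentials.

```python
import urllib.request

def build_request(url: str, token: str) -> urllib.request.Request:
    """Attach a bearer token so the server can enforce the caller's RBAC scope."""
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {token}"},
    )

# Placeholder endpoint and token for illustration only.
req = build_request("https://example.invalid/mcp", "user-scoped-token")
print(req.get_header("Authorization"))  # → Bearer user-scoped-token
```

Because the token identifies a specific user, the server can apply that user's RBAC permissions and write audit entries attributed to that user, matching the controls listed above.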

Prompt injection

Q: What protections exist against prompt injection attacks in AI SRE?

A: We use the prompt injection protection provided by the AI models we leverage for prompt processing (OpenAI, Anthropic Claude). Observe also puts additional protections in place to prevent misuse of the AI for non-SRE-related tasks.

All customer queries against their data in Observe run in the context of the user's Role-Based Access Control (RBAC) permissions and are thus limited to the scope and access granted to that user.
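The RBAC scoping above means that even if a prompt injection tried to pull in extra datasets, results are bounded by what the user's role grants. A minimal sketch, with hypothetical function and dataset names:

```python
def filter_to_user_scope(requested: set[str], allowed: set[str]) -> set[str]:
    """Return only the datasets the requesting user's role actually grants.

    `requested` is what the AI query asks for; `allowed` comes from the
    user's RBAC permissions. Anything outside the intersection is dropped.
    """
    return requested & allowed

# The user asks about logs and billing, but their role only covers logs and metrics.
print(filter_to_user_scope({"logs", "billing"}, {"logs", "metrics"}))  # → {'logs'}
```

The key design point is that the filter is applied server-side from the user's own permissions, not from anything in the prompt, so injected instructions cannot widen the scope.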

Model updates

Q: How are changes to underlying LLM models communicated and assessed?

A: Changes to AI models are covered in our release notes. Changes go through our standard development and testing lifecycle before release and may also include an early access period during which user feedback is collected to assess quality.

Model hosting

Q: If using Amazon Bedrock (in your AWS account) is required, can you configure your system accordingly?

A: We currently do not leverage Amazon Bedrock as a provider for our models. OpenAI, Anthropic (Claude), and potentially Snowflake Cortex are the providers we use at the current time.