Origin has announced the alpha launch of the world’s first confidential AI development environment designed to allow enterprises to use AI for software development without exposing proprietary code or sensitive data, according to an announcement shared with Finbold on April 7.
The platform is aimed at developers and engineering teams in sectors such as finance, healthcare, defense, and government, where handling sensitive data and intellectual property presents heightened risks.
Cryptographic environment aims to protect code and data in AI workflows
According to the company, the rapid adoption of AI coding tools has raised concerns around how source code, prompts, and internal data are processed and stored.
The announcement cites industry data indicating that 79% of companies using AI lack visibility into how these systems handle data, which can introduce compliance and security risks.
Origin’s platform is designed to address these challenges by keeping development workflows within a cryptographically secured trusted execution environment (TEE).
Unlike conventional AI coding setups, the system is intended to keep sensitive information inside that protected environment throughout the development workflow.
The platform allows teams to designate projects for confidential compute and use TEE-enabled models for sensitive inference, with cryptographic attestation available to verify how those requests are handled. This enables developers to integrate AI into production workflows while maintaining control over sensitive material and generating records for compliance and security review.
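In general terms, attestation-based verification works by checking that a report signed by the secure environment matches the enclave measurement the client expects. The following is a minimal sketch of that idea using a hypothetical report format and an HMAC stand-in for the enclave's signature; it is not Origin's actual API:

```python
import hashlib
import hmac

def verify_attestation(report: dict, expected_measurement: str, signing_key: bytes) -> bool:
    """Illustrative check of a (hypothetical) attestation report:
    the signature must be valid and the enclave measurement must
    match the build we expect."""
    # Recompute the signature over the report body.
    body = f"{report['measurement']}|{report['nonce']}".encode()
    expected_sig = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False  # tampered report or wrong signing key
    # The measurement must match the known-good enclave image.
    return hmac.compare_digest(report["measurement"], expected_measurement)

# Demo values, purely for illustration.
key = b"demo-key"
measurement = hashlib.sha256(b"enclave-image-v1").hexdigest()
report = {
    "measurement": measurement,
    "nonce": "abc123",
    "signature": hmac.new(key, f"{measurement}|abc123".encode(), hashlib.sha256).hexdigest(),
}
print(verify_attestation(report, measurement, key))  # prints: True
```

Real TEE attestation (e.g. on Intel SGX or AMD SEV hardware) uses asymmetric signatures chained to the chip vendor's keys rather than a shared HMAC key, but the verification logic follows the same shape: validate the signature, then compare the measurement.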
Regulated industries face stricter data and compliance requirements
Ahmad Shadid, co-founder and CEO of Origin, said the need for stronger safeguards is particularly pronounced in regulated sectors such as finance and fintech, where organizations routinely handle highly sensitive data.
According to Shadid, this includes personally identifiable information such as names, addresses, and account details, as well as proprietary financial data like trading strategies, risk models, and client positions. Such data is often subject to strict regulatory frameworks or represents critical intellectual property.
He noted that even firms that do not process large volumes of personal data are required to demonstrate clear data governance, access controls, and auditability, with regulators expecting visibility into how data is stored, accessed, and used.
“The enterprises we are building for have watched AI coding tools get blocked at security review, not because the tools were bad, but because they could not answer the question their security teams were asking: can you prove our code was protected, not just tell us it was. That is the question Origin is built to answer,” Shadid said.
The system is powered by Origin’s proprietary OLLM AI Gateway, which provides visibility into inference activity while supporting both standard large language models and TEE-enabled models for higher-assurance use cases.
According to the release, the platform combines hardware-backed confidential computing, a privacy-focused approach to data retention, and exportable usage and security records. For confidential workflows, it generates cryptographic attestation tied to verified enclave execution, providing proof that sensitive processes were handled within a secure environment.
Origin said the platform is designed for organizations that require both development speed and strict security controls, particularly those operating in regulated industries or managing high-value intellectual property.