Data Privacy and Model Integrity at Cradle
Cradle
May 1, 2024
Our customers trust us with their sequence and assay data, and it's our responsibility to honor that trust with the utmost care and integrity. We want to take this opportunity to clarify our stance on how we handle the privacy of your data, especially regarding the training of our generative AI models. We will cover our safeguards for customer data and our approach to data security in a future blog post; stay tuned.
Model Weights as Customer Data: A Basic Principle
We treat model weights derived from customer data with the same level of confidentiality as the data itself. Our system ensures complete isolation of model weights for each customer. Access to insights, predictions, and intelligence derived from these models is strictly limited to accounts authorized by the customer. This principle is the basis of our commitment to data privacy and security.
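To make the isolation principle concrete, here is a minimal sketch of what per-customer weight isolation can look like. Every name in it (ModelStore, save, load) is an illustrative stand-in, not our production API: weights are partitioned by customer, reads are denied by default, and only accounts the customer has authorized can load a model.

```python
# Hypothetical sketch of per-customer model isolation; not Cradle's actual API.
from dataclasses import dataclass, field

@dataclass
class ModelStore:
    # weights and authorized accounts, both partitioned per customer
    _weights: dict[str, bytes] = field(default_factory=dict)
    _authorized: dict[str, set[str]] = field(default_factory=dict)

    def save(self, customer_id: str, weights: bytes, accounts: set[str]) -> None:
        self._weights[customer_id] = weights
        self._authorized[customer_id] = set(accounts)

    def load(self, customer_id: str, account: str) -> bytes:
        # deny by default: only accounts the customer authorized may read
        if account not in self._authorized.get(customer_id, set()):
            raise PermissionError(f"{account} may not access {customer_id}'s model")
        return self._weights[customer_id]

store = ModelStore()
store.save("acme-bio", b"<model weights>", accounts={"alice@acme-bio"})
store.load("acme-bio", "alice@acme-bio")   # ok
# store.load("acme-bio", "mallory@other")  # raises PermissionError
```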
Optimizing Algorithms, Not Sharing Weights
While we maintain multiple layers of security barriers between customers' model weights, we continually strive to enhance the performance, efficiency, and accuracy of our algorithms. The distinction between optimizing our algorithms and training models for our customers is subtle, yet important to highlight. When Cradle's machine learning team develops a new learning algorithm or improves an existing one, they benchmark the algorithm's performance on several public and private datasets (always subject to the data owner's permission) to ensure that the proposed changes outperform the current state of the art. This benchmarking and quality control process does not entail sharing or transferring any data or model weights between customers, in keeping with the principle stated above. In fact, once the benchmarking is finished, we explicitly destroy all models created during the process.
As an analogy, think of Cradle's platform as providing inflatable balloons (think untrained machine learning models) and customer data as the air used to inflate our balloons (think training our models). Cradle continuously develops new and better balloon materials, colors, and shapes to hold customer air, and evaluates these novel designs through a rigorous quality control process. It's important to note that inflated balloons are only ever accessed by the respective customer who provided the air. Balloons that are inflated for quality control purposes will be popped after quality has been assessed.
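For readers who prefer code to balloons, here is a minimal sketch of the benchmarking flow described above, using a toy model; all names (EphemeralModel, ephemeral, benchmark) are hypothetical stand-ins for internal tooling. The two properties that matter are visible in the structure: a dataset is used only with its owner's consent, and the weights trained for benchmarking are destroyed in a finally block, so only aggregate metrics survive.

```python
# Hypothetical sketch of a train-benchmark-destroy loop; illustrative only.
from contextlib import contextmanager
from statistics import mean

class EphemeralModel:
    """Toy stand-in for a trained model; 'training' just averages the data."""
    def __init__(self, train_split):
        self.weights = mean(train_split)
    def score(self, test_split):
        # higher is better: negative distance between 'weights' and test mean
        return -abs(self.weights - mean(test_split))
    def destroy(self):
        self.weights = None  # stand-in for deleting stored weights

@contextmanager
def ephemeral(train_split):
    model = EphemeralModel(train_split)
    try:
        yield model
    finally:
        model.destroy()  # runs even if evaluation raises

def benchmark(datasets):
    """datasets maps name -> (owner_consented, train_split, test_split)."""
    scores = {}
    for name, (consented, train, test) in datasets.items():
        if not consented:  # benchmark only with the data owner's permission
            continue
        with ephemeral(train) as model:
            scores[name] = model.score(test)
    return scores  # only aggregate metrics survive; no weights do

print(benchmark({
    "public-a": (True, [1.0, 2.0], [1.5]),
    "customer-b": (False, [3.0, 4.0], [3.5]),  # skipped: no consent
}))
```

Running the sketch prints a score for the consented dataset only; the other is skipped entirely, and neither model's weights outlive the with block, mirroring the popped balloons above.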
Transparent and Responsible AI Development
Our approach to AI development is founded on transparency and responsibility. We believe in keeping our customers informed about how their data is used and how we're advancing our generative AI capabilities in a responsible manner. By distinguishing between algorithm optimization and the confidentiality of model weights, we aim to provide our customers with the best of both worlds: unparalleled AI-driven protein designs with strong data privacy guarantees.