
ObfuscaTune Enables Secure Fine-Tuning of Proprietary LLMs

1 min read

🔒✨ ObfuscaTune enables secure fine-tuning of proprietary LLMs on private data. The research introduces ObfuscaTune, a novel method for offsite fine-tuning and inference of proprietary large language models (LLMs) on confidential datasets while preserving the privacy of both the model and the data. Running on a third-party cloud provider, the approach combines an effective obfuscation technique with confidential computing, placing only 5% of the model parameters in a trusted execution environment (TEE). The authors validate ObfuscaTune on GPT-2 models across four NLP benchmark datasets, showing that it preserves model utility. The study also emphasizes the importance of using random matrices with low condition numbers to minimize the errors introduced by obfuscation.
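The point about condition numbers can be illustrated with a minimal sketch. This is not the paper's actual scheme; it simply assumes an obfuscation of the form W' = W · R, where R is a random invertible matrix known only inside the TEE, and shows why a well-conditioned R (here, an orthogonal factor from a QR decomposition) keeps the round-trip error small. All names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(W, R):
    """Obfuscate W as W @ R, recover it with R's inverse, and measure
    the relative floating-point error introduced by the round trip."""
    W_obf = W @ R                      # what the untrusted host would see
    W_rec = W_obf @ np.linalg.inv(R)   # de-obfuscation (conceptually, inside the TEE)
    return np.linalg.norm(W - W_rec) / np.linalg.norm(W)

d = 512
W = rng.standard_normal((d, d)).astype(np.float32)  # stand-in for a weight matrix

# Plain Gaussian matrix: its condition number can be large, so inverting it
# amplifies rounding error during de-obfuscation.
R_bad = rng.standard_normal((d, d)).astype(np.float32)

# Orthogonal factor of a QR decomposition: condition number is exactly 1,
# so the obfuscate/de-obfuscate round trip stays numerically benign.
R_good, _ = np.linalg.qr(rng.standard_normal((d, d)).astype(np.float32))

print("cond(R_bad) :", np.linalg.cond(R_bad))
print("cond(R_good):", np.linalg.cond(R_good))
print("error with ill-conditioned R :", reconstruction_error(W, R_bad))
print("error with well-conditioned R:", reconstruction_error(W, R_good))
```

Running the sketch shows a noticeably larger reconstruction error for the ill-conditioned matrix, which is the intuition behind the paper's recommendation to use random matrices with low condition numbers.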
