Toward Efficient Privacy-Preserving Neural Networks with Homomorphic Encryption
Speaker(s): Luis Bernardo Pulido Gaytan, Andrei Tchernykh (chernykh@cicese.mx)
Neural Network (NN) modeling demands considerable computing power to perform internal calculations and to train on big data in a reasonable amount of time. In recent years, clouds have provided services that facilitate this process, but they introduce new security threats of data breaches. The inherently insecure nature of cloud environments imposes several restrictions related to the privacy of confidential data. Modern encryption techniques ensure security and are considered the best option to protect stored data and data in transit from unauthorized third parties. However, a decryption step is necessary whenever the data must be processed or analyzed, which falls back into the initial problem of data vulnerability. Efficient processing of sensitive data requires more than accurate prediction, analysis, or classification; it also requires adequate privacy handling. Under that premise, Homomorphic Encryption (HE), often considered the holy grail of cryptography, emerges as a promising mechanism for expanding the scope of public cloud services to the processing of highly confidential data. HE allows a non-trustworthy third-party resource to perform arithmetic addition and multiplication operations on encrypted information without disclosing the underlying confidential data. Nonetheless, implementing machine learning models such as NNs also requires non-arithmetic operations that homomorphic ciphers do not natively support, e.g., comparison and sign detection. Hence, the challenge consists of finding cryptographically compatible replacement functions that operate over encrypted data. In this talk, fundamental concepts of privacy-preserving NNs with HE will be presented, highlighting our research team's relevant research, development, and results in this domain. Finally, I will close the talk by discussing open challenges and opportunities for homomorphically evaluating NN models.
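To make the replacement-function idea concrete, the sketch below is a minimal illustration in plain NumPy, not the speakers' implementation: the ReLU activation, the degree-4 polynomial, and the [-4, 4] fitting interval are illustrative assumptions. It approximates a non-arithmetic activation with a low-degree polynomial, which can then be evaluated using only the additions and multiplications that HE schemes such as CKKS support on ciphertexts.

    import numpy as np

    # ReLU is a non-arithmetic operation: HE schemes cannot evaluate
    # the underlying comparison on ciphertexts directly.
    def relu(x):
        return np.maximum(x, 0.0)

    # Fit a low-degree polynomial to ReLU on a bounded interval
    # (degree and interval are illustrative choices, not the talk's).
    xs = np.linspace(-4.0, 4.0, 1000)
    coeffs = np.polyfit(xs, relu(xs), deg=4)

    def poly_relu(x):
        # Horner evaluation: only additions and multiplications,
        # exactly the operations HE can apply to encrypted data.
        result = np.zeros_like(x)
        for c in coeffs:
            result = result * x + c
        return result

    x = np.array([-2.0, 0.0, 2.0])
    print("exact ReLU:      ", relu(x))
    print("polynomial proxy:", poly_relu(x))

In practice, the approximation degree trades accuracy against multiplicative depth, which is the scarce resource in leveled HE schemes.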