UCLA researchers in the Department of Mathematics have developed a method to maintain data privacy.
Companies use machine learning (ML) algorithms to analyze their user base for information to improve targeted advertisements and customer tracking. However, because these models have many parameters relative to the accumulated data sets, the algorithms can memorize the training data, making it possible to recover sensitive user information and break privacy. Current methods to overcome this privacy issue, such as adding 'noise' (artificial perturbations to the data or training process), improve security but decrease data accuracy. Therefore, there is a need for improved ML algorithms that maintain user privacy without decreasing data analysis accuracy.
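As a point of reference, the sketch below illustrates the conventional noise-based approach mentioned above (per-example gradient clipping plus Gaussian noise during training, in the style of differentially private gradient descent). It is not the UCLA method described in this summary; the data, hyperparameters, and noise scale are illustrative assumptions chosen only to show why added noise degrades accuracy.

```python
# Minimal sketch of standard noise-based private training (assumed, illustrative).
# Larger noise_scale -> stronger privacy but lower model accuracy.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "user" data: 200 examples, 5 features, binary labels.
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(5)
clip_norm = 1.0      # bound on each example's gradient contribution
noise_scale = 1.0    # controls the privacy/accuracy trade-off
lr = 0.1

for step in range(500):
    # Per-example logistic-loss gradients.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grads = (p - y)[:, None] * X                      # shape (n, d)

    # Clip each example's gradient so no single user dominates the update.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)

    # Average and add Gaussian noise; this step is what costs accuracy.
    g = grads.mean(axis=0)
    g += rng.normal(scale=noise_scale * clip_norm / len(X), size=g.shape)

    w -= lr * g

acc = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy with noisy updates: {acc:.2f}")
```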
UCLA researchers have developed an ML algorithm that produces models with improved data protection without decreasing user data accuracy. The algorithm reduces training and validation loss and improves the generalization of the trained private models. In testing, the algorithm produced models that were 10% more accurate than models created by existing methods while providing equal or better data privacy. Additionally, the method was easier to implement and added negligible computational power and memory cost compared to existing methods.
The method has been developed and tested.
Digital Privacy, Advertisement, Data Safety, Algorithms, Data Sets, Internet Security