Local differential privacy (LDP) is an approach to data privacy that mitigates the concern that data fusion and analysis techniques can expose individuals to attacks and disclosures.[1] LDP has been widely adopted to alleviate contemporary privacy concerns in the era of big data.[2][3] In 2003, Alexandre V. Evfimievski, Johannes Gehrke, and Ramakrishnan Srikant[4] gave a definition equivalent to local differential privacy.
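For concreteness, the now-standard formulation (stated here as a reference point, not quoted verbatim from the cited paper) is the following: a randomized algorithm $\mathcal{A}$ satisfies $\varepsilon$-local differential privacy if, for every pair of possible inputs $x, x'$ and every possible output $y$,

$$\Pr[\mathcal{A}(x) = y] \le e^{\varepsilon} \, \Pr[\mathcal{A}(x') = y].$$

Intuitively, no single observed response can reveal much about which input produced it, since any two inputs are nearly equally likely to have generated it.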
The prototypical example of a mechanism with local differential privacy is the randomized response survey technique proposed by Stanley L. Warner in 1965.[6] Warner's innovation was the introduction of the “untrusted curator” model, where the entity collecting the data may not be trustworthy.
Before users' responses are sent to the curator, the answers are randomized in a controlled manner, guaranteeing differential privacy while still allowing valid population-wide statistical inferences.
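The randomization step can be sketched as follows (a minimal illustration, not Warner's exact survey protocol; the function names and the truth probability `p` are choices made here for the sketch). With probability `p` the respondent reports their true answer and otherwise the opposite, which satisfies ε-local differential privacy for ε = ln(p / (1 − p)):

```python
import math
import random

def randomized_response(true_answer: bool, p: float = 0.75) -> bool:
    """Report the true answer with probability p, else the flipped answer."""
    return true_answer if random.random() < p else not true_answer

def privacy_budget(p: float) -> float:
    """Epsilon achieved by this mechanism: ln(p / (1 - p))."""
    return math.log(p / (1 - p))

def estimate_true_proportion(responses: list[bool], p: float = 0.75) -> float:
    """Debias the observed 'yes' rate to estimate the population proportion."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p)) / (2 * p - 1)
```

Because each report is randomized before it leaves the respondent, the curator never sees a raw answer, yet the debiasing step still recovers an unbiased population-level estimate.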
The era of big data exhibits a high demand for machine learning services that provide privacy protection for users.
Demand for such services has pushed research into algorithmic paradigms that provably satisfy specific privacy requirements.
Anomaly detection is formally defined as the process of identifying unexpected items or events in data sets.
The rise of social networking in the current era has led to many potential concerns related to information privacy.
As more and more users rely on social networks, they are often threatened by privacy breaches, unauthorized access to personal information, and leakage of sensitive data.
The experimental results demonstrate that the proposed method achieves high data utility while improving privacy preservation.
Furthermore, local differential privacy sanitized data are suitable for use in subsequent analyses, such as anomaly detection.
Blockchains implement distributed, secured, and shared ledgers used to record and track data within a decentralized network, and they have successfully replaced certain prior systems of economic transactions within and between organizations.
Recent smartphones, for example, use facial recognition to unlock the user's phone as well as to authorize payments with their credit card.
In his academic article, Chamikara proposes a privacy-preserving technique for “controlled information release”, which disguises an original face image and prevents leakage of the biometric features while still allowing a person to be identified.
In a study sponsored by the Andalusian Research Institute in Data Science and Computational Intelligence, researchers developed Sherpa.ai FL, an open-research unified FL and DP framework that aims to foster the research and development of AI services at the edge and to preserve data privacy. The characteristics of FL and DP tested and summarized in the study suggest that the two techniques are good candidates for supporting AI services at the edge while preserving data privacy.
The performance evaluation in their study shows that the proposal incurs less communication overhead than existing data aggregation models.
However, vehicles’ data privacy is argued to be a major barrier to the application and development of the Internet of Vehicles (IoV), and has therefore attracted wide attention.[16] The topic of spam phone calls has become increasingly relevant; although such calls are a growing nuisance in the digital world, researchers have been exploring solutions to minimize the issue.
Furthermore, a number of commercial smartphone apps that promise to block spam phone calls have been created, but they come with a subtle cost.
The researcher Hu proposes a personalized differential privacy protection method that addresses the problem that adding independent, uncorrelated noise with the same degree of scrambling for every user results in low privacy protection and poor data availability.
Other formal definitions of local differential privacy concern algorithms that take all users' data as input and output a collection of all responses (such as the definition in Raef Bassily, Kobbi Nissim, Uri Stemmer and Abhradeep Guha Thakurta's 2017 paper[19]).