If we only protect privacy against weak adversaries, a smarter adversary will still be able to breach individuals' privacy in practice. For example, an adversary who knows that it is flu season, and that the hospital therefore likely holds many elderly patients with flu symptoms, can use that knowledge to infer sensitive information about individuals.
On the other hand, if we protect against extremely strong adversaries, then our published table is close to useless.
That is why we would like to protect against realistic adversaries in the middle ground. So what do "realistic" adversaries look like?
In our VLDB paper, we introduce e-privacy, which protects against adversaries who have collected some statistics about the population and use them to form beliefs about individuals. An adversary can have two kinds of knowledge:
- Knowledge about the general population. Adversaries obtain this knowledge from external sources, for example from published survey data or from other datasets they have access to.
- Knowledge about specific individuals in the table. Adversaries obtain this knowledge as in prior work, for example by knowing facts about their friends or neighbors.
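To make the first kind of knowledge concrete, here is a toy sketch (not the formal model from our paper) of how an adversary might combine a population-level prior with the counts in a published, generalized table to form a belief about one individual's sensitive value. The function name and the flu-season numbers are illustrative assumptions, not taken from the paper.

```python
def adversary_belief(group_counts, prior):
    """Toy posterior over sensitive values for a target known to be
    in a given generalized group of the published table.

    group_counts: counts of each sensitive value in the target's group.
    prior: the adversary's population-level prior over sensitive values.
    """
    # Weight the published counts by the adversary's prior, then renormalize.
    weighted = {v: group_counts.get(v, 0) * prior.get(v, 0.0) for v in prior}
    total = sum(weighted.values())
    if total == 0:
        return dict(prior)  # the table is uninformative; fall back to the prior
    return {v: w / total for v, w in weighted.items()}

# Flu-season prior: the adversary believes elderly patients are likely to
# have the flu. The published group contains 3 flu cases and 1 other case.
belief = adversary_belief({"flu": 3, "other": 1}, {"flu": 0.4, "other": 0.6})
```

With these numbers the adversary's belief that the target has the flu rises to 2/3, illustrating how background statistics sharpen inferences drawn from a published table.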
In the paper we explain how to generalize a table so that publishing it protects against these adversaries. This was impossible when defending against the super-strong adversaries; against realistic adversaries, we can determine which tables are safe to publish.
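For readers unfamiliar with generalization, the following is a minimal sketch of the basic idea: coarsening a quasi-identifier (here, exact ages into ranges) so that individuals blend into larger groups. The function and the bucket width are hypothetical; the paper's algorithm chooses generalizations with the adversary model in mind.

```python
def generalize_ages(records, width=10):
    """Replace each record's exact age with an age range of the given width.

    records: list of dicts, each with an integer "age" field.
    Returns new records; the originals are left unmodified.
    """
    generalized = []
    for rec in records:
        lo = (rec["age"] // width) * width          # lower bound of the bucket
        generalized.append(dict(rec, age=f"{lo}-{lo + width - 1}"))
    return generalized

# Two distinct elderly patients become indistinguishable by age.
table = generalize_ages([{"age": 67, "disease": "flu"},
                         {"age": 62, "disease": "flu"}])
```

After generalization both records carry the age "60-69", so age alone no longer singles out either patient.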
We believe that e-privacy, with its defense against realistic adversaries, is very useful in practice, and we hope you will try it out on your own data. We are eager to hear about your experience!
For more details on the framework and an experimental evaluation of the information content of data published under e-privacy, please refer to our paper. If you want to know more about our projects on data privacy, you can find more information here.