I’ve often defended Google’s thirst to know things about people with a butler analogy. Good software should, like a butler, try hard to understand your preferences and act on them for you without your even noticing. That means learning and remembering things you’ve done in the past, and basing recommendations on them.
Like a butler, you want your tools to work intelligently based on context and history, and Google is without doubt one of those tools; for many of us, it is the most important single tool in our computing lives. The catch, of course, is that a butler has strong incentives to keep your private information private.
Google’s incentives run at least partially the other way: it has strong incentives to mine that data extensively, to share it with others, and to collect far more than most people might think useful, in the name of being the ultimate butler. These incentives lead to risks: incentives to share with third parties you might not trust; risks that the data might be subpoenaed; risks that it might leak to Google employees or even outside Google; risks that effective advertising might use such information to manipulate your political views. On balance, most of us will look at these issues and decide that we’re OK with Google knowing these things, because the risks are remote and the benefits tangible. So we acknowledge there is a tension between privacy and functionality, and move on.