By Joel Hruska
Update: Evernote
has reversed course on its previous plan to make machine learning an
opt-out system. Instead, the company will allow customers to opt in to
having their notes used for machine learning. “[W]e will make
machine learning technologies available to our users, but no employees
will be reading note content as part of this process unless users opt
in,” Evernote wrote in a new blog post. “We will invite Evernote
customers to help us build a better product by joining the program.”
Original story below:
Earlier this week, Evernote changed its
privacy policy and set off a storm of user concern in the process. The
company has made two major updates to its policy, though one of them
was apparently a clarification of a pre-existing stance rather than a
new announcement. First, Evernote announced that beginning on January
23, 2017, customer data will be used to train machine learning
algorithms that Evernote believes will enhance its service. Second,
Evernote added a clause to its
privacy policy notifying customers that their data could be accessed by Evernote employees under certain circumstances.
Evernote’s
privacy policy
is actually one of the best I’ve ever seen in terms of layout and
structure, with clear, precise language and minimal use of weasel
words. What alarmed privacy advocates was a specific change to the
section on when Evernote employees are allowed to access user data.
We’ve captured both the older privacy policy and the new version to
highlight the difference. First, here’s the old one, from December 1,
2016:
Here’s the new edition:
The difference here lies in the vague
phrase “improve the service.” Not only could this be stretched to cover
just about anything, but Evernote’s simultaneous announcement that it would
use customer data for machine learning also rubbed many people the wrong way.
Since then, the company has backpedaled frantically. CEO Chris O’Neill
has written a
lengthy missive
attempting to explain and qualify some of the changes to Evernote’s
policy, telling people that the handful of engineers allowed to see user
data are carefully vetted and hand-selected, and that the machine
learning tests Evernote intends to conduct are something consumers
can opt out of.
There are a few different ways to read this
situation. On the one hand, Evernote’s privacy policy is well-written
and concise, the CEO has laid out the precise circumstances under which
anyone is allowed to read notes you store on the service, and there
doesn’t seem to be any nefarious plan at work here. On the other hand, Evernote
is exploiting its relationship with its customers. No one signed up for
Evernote because they wanted to help train a machine learning system,
yet the company has made the feature opt-out, rather than opt-in. If you
don’t change your settings, your data will be used.
There’s a persistent willingness to treat
information about our lives as just “data,” even though that data is
increasingly used to make decisions about what kinds of
products and services
are marketed to us. Banks and other financial institutions have actively
explored using Facebook data to calculate what kind of borrower you are
likely to be and whether or not you’ll pay them back. Over in the UK,
one company openly advertises itself as using this information to spy on
potential renters. Police departments have signed agreements with
license plate reader companies in order to avoid data retention time
limits.
The question isn’t whether these kinds of
actions are legal, or even whether Evernote itself has some nefarious
master plan (yes, they are, and no, it doesn’t). The question is: What
kind of society are we creating by training people to treat their
personal data as a commodity to be readily handed over to half a hundred
services? I don’t pretend to have the answers. But I fear we don’t
spend half enough time, as a society, considering the questions.