
Can Search Get Too Personal?

The following article caught my attention this afternoon:

The UX Of Ethics: Should Google Tell You If You Have Cancer?

“If I’m on a park bench, and I’m next to someone, and I hear them talking about symptoms of cancer, am I obligated to turn around and tell them they might have cancer?”

… artificial intelligence products are rapidly approaching the same diagnostic power. Google Search can already predict coming flu trends with some level of success. It’s not hard to imagine a system that can track my searches over a year—an ache, a cough, a rash—recognizing a cascade of symptoms that point to a disease with surprising accuracy.

source: http://www.fastcodesign.com/3058943/the-ux-of-ethics-should-google-tell-you-if-you-have-cancer

I have not often considered the ethical implications of enterprise search.  The article above reminded me that the information people type into those small search boxes can be extremely sensitive and revealing.  Do you recall the 2006 release of anonymized search queries by AOL?  Some very simple detective work made it possible to identify real people and real situations, even with usernames removed.  For example, a vanity search for someone’s own name followed later in the day by a search for a rare medical condition or a financial problem.  And it got more personal and embarrassing from there.
Our search queries leave behind more crumbs than my children.  And seriously, that’s a lot of crumbs.  We search for directions to and from our own homes.  We seek answers to life’s ups and downs – medical, financial, relationships, gossip, etc.  To the operator of a search application, this data can be incredibly useful and dangerously powerful.  Is it possible, as the article above suggests, to engineer search analytics to the point that we learn something about the user before they realize it themselves?  And if so, is it wise, or even ethical, to communicate that back to them?

“Dave, I noticed you have been searching for documents about employee resignation letters.  Should I track this topic for you?”

This might sound like science fiction, but I argue it is closer than we think.  Almost all major search engines now include algorithms that can adjust search results based on a user’s previous behavior on the site – typically to serve them the most relevant content or the items they are most likely to purchase.  But every action has a reaction.  If an algorithm moves some results higher, others will fall below the fold.  Could my past search behavior make it harder, or impossible, to find something I urgently need in the future?  Could a search engine [un]intentionally bury information I need to complete a critical report for my boss because it wasn’t something I typically search for?  Could it be prudent to restrict the use of these types of features on sites that recommend access to medical care or help you make financial decisions?  Is there a Do No Harm directive for search engines?
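To make the burying effect concrete, here is a minimal sketch of profile-based re-ranking in Python. The scoring scheme, field names, and boost value are my own illustrative assumptions, not any particular engine’s algorithm: results that match topics a user has searched before get a boost, and everything else slides down the page.

```python
# Minimal sketch of profile-based re-ranking (hypothetical scoring scheme):
# results matching a user's past search topics get boosted, which necessarily
# pushes unfamiliar-but-relevant results further down the list.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    topics: set         # topics the document is about
    base_score: float   # relevance score from the core ranking function

def personalize(results, user_topic_history, boost=0.5):
    """Re-rank by base relevance plus a boost for topics the user has searched before."""
    def score(r):
        overlap = len(r.topics & user_topic_history)
        return r.base_score + boost * overlap
    return sorted(results, key=score, reverse=True)

results = [
    Result("Quarterly compliance report", {"compliance", "finance"}, 0.80),
    Result("Team lunch menu", {"food"}, 0.75),
    Result("Product roadmap", {"product"}, 0.70),
]
history = {"food", "product"}  # what this user usually searches for
for r in personalize(results, history):
    print(r.title)
```

In this toy example the compliance report the user urgently needs today lands at the bottom, simply because it doesn’t resemble anything they have searched for before.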
Going further, who has access to all of this sensitive search data?  The implications of biasing search results for different users or drawing radical conclusions using artificial intelligence are clear.  But the risk of search data falling into malicious or criminal hands is even greater.  Imagine someone figuring out what brand of home alarm system you were searching for on Amazon, or that you were searching for a book on dealing with depression. Or what about privacy at work?  Does the IT administrator have an obligation to report sensitive queries to management?  Is it ethical to mine intranet search queries for sentiment analysis, looking for disgruntled or vulnerable employees?  I am aware that there are data privacy laws at play in the workplace, but in my experience, search logs are rarely protected.
Until the robots completely take over, I suggest giving some thought to the ethics and privacy implications of search applications.  Weigh the negative or harmful possibilities.  Make sure query logs are adequately secured.  And if you offer machine-generated recommendations or information, consider the implications of false or misleading intelligence.
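On the query-log point, here is a minimal sketch of the kind of sanitization I have in mind before logs are shared or analyzed. The field names, keyed-hash scheme, and timestamp coarsening are illustrative assumptions, not the format of any specific product.

```python
# Minimal sketch of reducing the sensitivity of a query log before analysis
# (record layout and salting scheme are illustrative assumptions): pseudonymize
# the user, coarsen the timestamp, and never ship raw identifiers alongside
# raw query text.

import hashlib
import hmac

SECRET_SALT = b"rotate-me-regularly"  # keep this out of source control in practice

def pseudonymize(user_id: str) -> str:
    """One-way keyed hash so analysts can group queries without seeing who asked them."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def sanitize(record: dict) -> dict:
    return {
        "user": pseudonymize(record["user_id"]),
        "day": record["timestamp"][:10],   # keep the date, drop hour and minute
        "query": record["query"],
    }

raw = {"user_id": "dbowman", "timestamp": "2016-05-12T14:03:22",
       "query": "employee resignation letter"}
print(sanitize(raw))
```

As the AOL release demonstrated, pseudonymizing the user is not enough on its own – the query text itself can identify a person – so sanitization like this needs to be paired with strict controls on who can read the logs at all.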
 


Chad Johnson

Chad is a Principal of Search and Knowledge Discovery at Perficient. He was previously the Director of Perficient's national Google for Work practice.
