The human search engine

I’m working my way through danah boyd’s recent book, It’s Complicated: The Social Lives of Networked Teens, and I’m really enjoying it. It describes the Internet as it actually feels: situated somewhere between our worst fears and our highest aspirations for technology. Framing youth through their use of social media also serves to foreground broader dynamics affecting the lives of young people.

In challenging the idea that youth have universally high levels of literacy and agency with technology, boyd observes that both youth and adults often have skewed notions of trust around Internet information sources. A simplified version of the observation is this: information users demonize Wikipedia articles and deify Google search results.

One reason for this trust in Google’s results over information in Wikipedia, boyd says, is the idea that an algorithm lacks the bias of human authors or editors. Young people often lack an educational background that helps them understand how bias also exists in software, and that ultimately builds the skills to critically consume any Internet information, regardless of its source.

Living with two teenagers, I often see how research projects fail to engage or excite, and how choosing sources feels like a process guided by confusing, unfounded rules rather than critical thinking.

This made me wonder: what would an activity look like that helps participants think critically about information on the Internet and better understand the technology that delivers it? I sketched out the idea below, which I call “The Human Search Engine”.

The human search engine

Shout out: this is largely inspired by FreeGeek Chicago’s The Human Internet activity.

The idea of an algorithm is explained, possibly by having one participant or group of participants control the motion of a volunteer, robot-style.

Collectively, the group brainstorms categories of good information and bad information, listing these where everyone can see them.

Participants break into smaller groups. Each group is given or asked to find 5-10 pieces of media that would match a given search query, ideally about a topic of their choosing.

Participants then rank the pieces of information from highest to lowest quality. They must then consolidate their reasoning into an “algorithm” that would generalize the ranking of results in their search engine (a sketch of what this might look like in code follows the activity steps).

Finally, participants reconvene and are given another group’s search query and results to pass through their ranking algorithm. They rank the results and then share how their algorithm works.
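
To make the “algorithm” step concrete, here is a minimal sketch, in Python, of what one group’s ranking rules might look like once written down. The signals (whether a result cites sources, names an author, carries ads) and the weights are hypothetical examples of my own, not anything from boyd’s book; the point is that the ranking is only as “unbiased” as the judgments encoded in those numbers.

```python
# A toy sketch (my own illustration) of one group's ranking "algorithm"
# once their reasoning is written down as rules. The signals and weights
# below are hypothetical examples.

# Each search result is described by a few signals the group agreed to look for.
results = [
    {"title": "Peer-reviewed journal article", "cites_sources": True,  "names_author": True,  "has_ads": False},
    {"title": "Anonymous forum post",          "cites_sources": False, "names_author": False, "has_ads": True},
    {"title": "Newspaper explainer",           "cites_sources": True,  "names_author": True,  "has_ads": True},
]

# The "algorithm" is just the group's judgments turned into numbers.
# Changing these weights changes the ranking, which is where bias creeps in.
weights = {"cites_sources": 3, "names_author": 2, "has_ads": -1}

def score(result):
    # Add up the weight of every signal that is present in this result.
    return sum(weight for signal, weight in weights.items() if result[signal])

# Highest score first, i.e. the "best" result at the top of the page.
for result in sorted(results, key=score, reverse=True):
    print(score(result), result["title"])
```

Swapping the weights, or adding a signal like “agrees with my existing view”, is a quick way into the tweaks below.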

Tweaks/variations

  • What practices could be used to game the search engine algorithm and elevate low-quality information in the ranking?
  • How would you design an algorithm to censor certain kinds of information?