Sarah Hartman-Caverly MS(LIS), MSIS


Thought Experiments

"Your transgressions, they will follow you forever"

Posted by smhartman on November 13, 2017 at 10:20 AM

"Your transgressions, they will follow you forever — it is really a permanent record of your so-called trustworthiness. So your behavior could impact your children or your grandchildren for decades to come. There seems to be no limits, there seems to be no boundaries, as to how far this can go....

"It's really easy to point our finger at China without stopping and actually saying, "well how far is this culture of surveillance from the West?" It sounds like completely nightmarish territory that the West would never descend into, in terms of using these trust algorithms that are unfairly reductive about people. But then when you really look into the amount of data that companies are collecting, and how they're using that data to get a complete picture of how we behave, where we are at any given time, what our political views are — we're not that far off. It's just the government doesn't own that data."

Amulya Shankar and Rachel Botsman, "What's your citizen 'trust score'? China moves to rate its 1.3 billion citizens," 9 Nov. 2017. Read more via

"A handful of people... will steer what a billion people are thinking today."

Posted by smhartman on October 12, 2017 at 2:10 PM

Interesting insights from Silicon Valley developers on the implications of social media and smart technology for autonomy and intellectual freedom, including:

  • How the "attention economy" reengineers the Internet to manufacture consent
  • How lack of age diversity in product engineering roles contributed to the failure to consider the unintended consequences of design choices
  • How design choices reflect the 'habit-forming' intent of Silicon Valley
  • The role of user choice and agency - and how these are undermined by product design
  • How technology companies exploit users' social and psychological vulnerabilities
  • The implications of 'continuous partial attention' for our ability to think, reason, and make decisions with intent - and what this means for democratic self-governance
  • The human existential threat of attention control technology
Paul Lewis, "'Our minds can be hijacked': The tech insiders who fear a smartphone dystopia."  Read more via The Guardian.

"The Coming Software Apocalypse"

Posted by smhartman on September 29, 2017 at 10:55 AM

"The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination."

James Somers, "The Coming Software Apocalypse."  Read more via

"[Arguments] are civilization"

Posted by smhartman on September 28, 2017 at 11:00 AM

"You can't lose an argument, because if you're proven wrong, you've got the truth, which is more valuable than whatever you had before."

Hear more from Stefan Molyneux via The Alex Jones Channel (YouTube).

"Information literacy is not the antidote to fake news, because the institutions for teaching it can't be trusted either"

Posted by smhartman on September 28, 2017 at 9:15 AM

On hegemony, reason, and the trust deficit between the public and the academy:

"[Walter] Benjamin ends The Work of Art in the Age of Mechanical Reproduction by arguing that 'fascism attempts to organize the newly created proletarian masses without affecting the property structure which the masses strive to eliminate. Fascism sees its salvation in giving these masses not their right, but instead a chance to express themselves.' This recasts social media in a more sinister light. Fascism is on the rise not because students can’t tell fake news from the slanted news promulgated by hegemonic interests. Rather, fascism is resurgent because freedom of expression has turned out to have little to do with what we can create and much more to do with how much we can consume."

Rolin Moe, "All I Know Is What's On the Internet."  Read more at

"Will humans have the wisdom to manage artificial intelligence effectively?"

Posted by smhartman on September 12, 2017 at 12:50 AM

“Everything I love about civilization is the product of intelligence,” [says Max Tegmark of MIT]. “If we can amplify our own intelligence with AI, we have the potential to solve all of the terrible problems we’re stumped by today and create a future where humanity can flourish like never before. Or we can screw up like never before because of poor planning. I would really like to see us get this done right.”

Read more via