Sarfraz Manzoor writes an article on technology in education, well worth reading, in The Guardian. Steiner schools eschew the use of tablets and computers in the classroom, and of course this turns out not to be the handicap many assume it will be. In fact many readers will be even more surprised to see a number of studies noting that reduced use of technology in classrooms often correlates with more learning. Andreas Schleicher, whose keynote I saw at the Global Education & Skills Forum, is quoted talking about how the OECD's international analyses have not yet shown technology to improve learning. This ought to be a wake-up call: we cannot simply invent new technology and throw it at education. We need to be more ethnographic, more aware of just what good learning means, and we need to co-design changes to the learning environment with the very important stakeholders we keep ignoring: teachers, students and parents. Good article, Manzoor.
Always nice to find a use of the word trump with a small t. In The Guardian, Alex Hern writes a short article from Google I/O about Android's intention to reduce the use of passwords by fusing multiple forms of behavioral evidence: how you type and swipe, what you look like, where you are, how you sound, et cetera, to create a "Trust API" that determines how likely it is that you are, in fact, the device's owner. The choice to call this "Trust" is itself noteworthy: another very human, sociological term turned on its head through techno-semantic inflation. But it is also worth noting that while the trope is trust, the means is always-on surveillance: the ability of interactive devices around us to measure every aspect of our behavior. Always.
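To make the mechanism concrete, here is a minimal sketch of what multi-signal trust scoring could look like. Everything here is an illustrative assumption: the signal names, weights, and threshold are invented for this example and have nothing to do with Google's actual Trust API internals, which are not public.

```python
# Hypothetical sketch of multi-signal trust scoring. Signal names,
# weights, and thresholds are illustrative assumptions only, not
# Google's actual Trust API design.

SIGNAL_WEIGHTS = {
    "typing_rhythm": 0.25,   # how you type and swipe
    "face_match": 0.30,      # what you look like
    "location_match": 0.20,  # where you are
    "voice_match": 0.25,     # how you sound
}

def trust_score(signals: dict) -> float:
    """Combine per-signal confidences (each 0.0-1.0) into a single
    weighted score estimating that the user is the device's owner.
    Missing signals contribute zero confidence."""
    total = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                for name in SIGNAL_WEIGHTS)
    return round(total, 3)

def allow_access(signals: dict, threshold: float = 0.7) -> bool:
    """Apps would pick their own bar: a banking app might demand a
    much higher threshold than a game."""
    return trust_score(signals) >= threshold
```

Note what the sketch makes plain: for the score to exist at all, every one of those signals must be measured continuously, which is exactly the always-on surveillance the entry describes.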
In this month’s The Atlantic, Matthew Shaer pens an important article, A Reasonable Doubt, about the technology and sociology of DNA matching and its use in jurisprudence. One of the recurring themes we often consider vis-à-vis Robot Futures is how power relationships are influenced by technological progress, and there are multiple layers to consider in the case of DNA testing, all borne out by this article. First, there is the initial presumption of infallibility many of us have because of our pro-technology bias. Of course we discover that the black and white is in fact grey (thanks, Stephen Crane) and that the technology of DNA matching, given a mixed sample, is so incredibly qualitative that ten labs can provide ten answers. Depressing, but eye-opening. But then there is the forward technology march: let’s solve the problem by removing humans from the equation! Shaer goes on to describe a Pittsburgh startup that uses a trade-secret-protected algorithm to match DNA samples automatically, without depending on human judgment. Much as war-fighting robots are supposed to avoid human ethical shortcomings by avoiding human decision-making, so this DNA matcher is seen in a dozen states as the new gold standard because there are fewer places for a lab worker to be directly involved. But, ironically, the algorithm is hidden behind the veil of a trade secret, so our ability to truly audit the process goes missing, and our faith in technology is only further amped. We need a new field, a Sociology of Technology, similar to VTSS but more short-term; we need to understand just how we identify shortcomings in technology-human systems, and then how our techno-optimist solutions often drive the invention of yet newer problems. Cognitive Tutors, AI assistants, automated DNA matching: all of them are so much more nuanced in their systems-level effects than we make them out to be.
Huffpost ran a brief interview and story on how speaking to ever more sophisticated AI systems may change just who we are, written by Andy Campbell: Talking to Our Computers is Changing Who We Are.
The last chapter of Robot Futures talks about the concept of technology as a tool for community empowerment; at the CREATE Lab we developed one specific example of such a system for a coke plant near Pittsburgh, and the Allegheny County Clean Air Now newsletter has a wonderfully detailed article on the positive social impact of such empowerment technology in a very real case.
It is so rare for a prediction quoted in the paper to actually get tested…so here is a lucky break in HowStuffWorks on robot waiters in restaurants.
A close reading of Oculus’ terms of service shows that they can fuse the information they collect about you with other information about you that they purchase from third parties, all to target custom marketing at you. And they explicitly state that they will then also collect (and use arbitrarily) information about just how you respond to said marketing! A good article about this, with examples from their Terms of Service, is in ExtremeTech.
We have just announced a program that provides an example of technology directed specifically toward citizen empowerment: a national library program for indoor air quality. The press release has just gone public.
Here is an article just out from Huffpost Tech’s Damon Beres, following an interview with me about the concept of mediocracy: The Future of Voting Could Be a Dystopian Nightmare.
Thanks to Samuel Gibbs of The Guardian, now we know just what it takes to reverse the trend of human underemployment and the shift from human labor to robot capital. Mercedes has a new car that is so difficult to assemble that the company has reversed the trend, replacing robots with people. And they admit in the article that this new approach in fact safeguards human jobs. What a concept! To find out just how tire valve caps play a part in this, read the original article: Mercedes-Benz swaps robots for people on its assembly lines.