RATING: 6/10 … READ: June 14, 2013
While many are looking to the internet to solve all our problems, this book takes the opposite view and criticizes internet solutionism. There is much good criticism here, but the book as a whole feels more like a rant than a cohesive argument. Worth checking out.
Should we introduce game incentives into a process that has previously worked through appeals to one’s duties and obligations?
It very well may be that, by optimizing our behavior locally (i.e., getting people to recycle with the help of games and increased peer surveillance), we’ll end up with suboptimal behavior globally, that is, once the right incentives are missing in one simple environment, we might no longer want to perform our civic duties elsewhere.
Education is not the transmission of information or ideas. Education is the training needed to make use of information and ideas.
“A cook,” he wrote in another essay, “is not a man who first has a vision of a pie and then tries to make it; he is a man skilled in cookery, and both his projects and his achievements spring from that skill.”
Here is modernity in a nutshell: We are left with possibly better food but without the joy of cooking.
“Internet-centrism” rests on a set of beliefs, chief of which is the firm conviction that we are living through unique, revolutionary times, in which the previous truths no longer hold, everything is undergoing profound change, and the need to “fix things” runs as high as ever.
“The past is not transformed into the ‘modern world’ at any single moment: we should never be surprised to find that seventeenth-century scientific practitioners often had about them as much of the ancient as the modern.”
Shirky, who also works as a consultant, knows the mantra of his trade: every crisis is to be recast as an opportunity. Thus, we hear that “nothing will work, but everything might. Now is the time for experiments, lots and lots of experiments.” This, however, is Shirky the good cop—the one who thinks resistance is not futile. Shirky the bad cop, however, is not so sure and often succumbs to a weird form of digital fatalism, which borders on digital defeatism: “There is never going to be a moment when we as a society ask ourselves, Do we want this? Do we want the changes that the new flood of production and access and spread of information is going to bring about?” For Shirky the bad cop, everything has already been determined by the information gods; all we can do is accept the inevitable and enjoy the revolutionary ride.
Methodologically, Wu’s treatment of information industries is very close to Eisenstein’s treatment of print culture: he starts by simply projecting the qualities he associates with “the Internet” back into the past and assuming that the industries and technologies he studies have a nature, a fixed set of qualities and propensities, then proceeds to celebrate selectively those examples that support those qualities and discard those that don’t. So Wu starts with the hunch that the openness of “the Internet” is under threat, travels back in history to find trends that suggest all information industries have experienced similar pressures, and returns to the present to announce that history reveals that openness is indeed under threat on “the Internet.”
This is the other, darker side of epochalism: while new solutions are generated because we think that we are living in unique and exceptional times and anything Internet-incompatible ought to be swept away, we also believe that whatever problems “the Internet” presents ought to be dealt with in a manner that won’t affect “the Internet.”
Thus, writes Heald, “the ‘right’ varieties of transparency are valued because they are believed to contribute, for example, to effective, accountable, and legitimate government and to promoting fairness in society.” This means, among other things, that there are also “wrong” varieties of transparency, which might lead to populism, thwart deliberation, and increase discrimination. It’s hard to believe that when Vladimir Putin orders workers to install Web cams at polling stations across Russia, his invocation of transparency rhetoric serves functions other than legitimizing his own stay in power by pretending that Russian elections are even more democratic and transparent than those of Russia’s Western critics (the trick here, of course, is to find ways to rig the elections while on camera—not exactly a very challenging task for Russian bureaucrats).
James Madison voiced that concern in the context of deliberations at the 1787 American Constitutional Convention, writing that “had the members committed themselves publicly at first, they would have afterwards supposed consistency required them to maintain their ground, whereas by secret discussion no man felt himself obliged to retain his opinions any longer than he was satisfied of their propriety and truth, and was open to the force of argument.”
In O’Neill’s view, fostering trust is a much more important public objective than fostering transparency, and if the latter undermines the former, perhaps we should curb our enthusiasm about what the world of networks and databases has to offer.
Thus, write Stealth Democracy’s authors, “members would be doing something much more beneficial to the greater good by remaining in their offices or committee rooms, meeting with constituents, studying, or discussing issues with fellow committee members. But the pressures of publicity force them to dash off to vote on every non-issue, no matter how foregone the conclusion.”
Jean-Jacques Rousseau, in Discourse on Inequality (1754), was already complaining that “books and auditing of accounts, instead of exposing frauds, only conceal them; for prudence is never so ready to conceive new precautions as knavery is to elude them.”
In his final book, the late Tony Judt spoke of the dismal “discursive shift . . . towards economics” that had taken over the public debate in the late 1970s. “Intellectuals don’t ask if something is right or wrong, but whether a policy is efficient or inefficient. They don’t ask if a measure is good or bad, but whether or not it improves productivity,” lamented Judt. He continued, “The reason they do this is not necessarily because they are uninterested in society, but because they have come to assume, rather uncritically, that the point of economic policy is to generate resources.”
Reductionism in itself is not bad and can even be intellectually liberating—as long as we find a way to remind ourselves constantly about what is being reduced and what parts of reality are being shed off in order to zoom in on a particular indicator or model of politics.
Does “open government” refer to making train schedules and city maps more accessible? Or does it refer to publishing data that could embarrass politicians and end careers?
if we don’t subject highly ambiguous terms like “open government” to closer scrutiny, if we don’t cleanse them of Internet-centrism and the double meanings it generates, we might unwillingly allow some governments to claim progress where there is none, while stalling more important reforms.
“It’s much cooler (and frankly less politically controversial) for any government to put government health databases online . . . than it is for the same government to provide greater transparency around the financing of political parties in the country.” As long as the open-government solutionists are so preoccupied with the means—with the quality of standards and databases—and not the actual content that these standards and databases seek to disseminate, little progress will occur.
While better crime statistics might help some people avoid buying properties in dodgy neighborhoods, they would also make it harder for other people to sell those properties. As a result, those who already live in these dodgy neighborhoods might be less willing to report crimes in the first place. In fact, in a 2011 survey by an insurance company, 11 percent of respondents claimed to have seen an incident but chose not to report it, worried that higher crime statistics for their neighborhood would significantly reduce the value of their properties.
as is typical of solutionism, neither Miller nor Johnson displays any basic understanding of the intricacies of the political process, reducing it to the only variable under solutionists’ control: votes. Neither of them mentions that the legislative process also involves discussion, bargaining, compromise, and deliberation; voting is just the final stage of a much longer sequence of events, which, for the most part, remains conventional and predominantly invisible (unless, of course, something goes wrong and media get hold of the story).
As political theorist Bernard Crick once wrote, “Boredom with established truths is the great enemy of the free man.”
Polish dissident Adam Michnik was onto something when he defined democracy as “eternal imperfection, a mixture of sinfulness, saintliness, and monkey business.” Try marketing a hair dryer with that slogan.
“The fundamental danger is that consumerism may foster privatized and resentful citizens whose expectations of government can never be met, and cannot develop the concern for the public good that must be the foundation of democratic engagement and support for public services.”
As Flinders points out, treating citizens as consumers leads them to think that politics can deliver the same “standards of service that they would commonly expect from the private sector . . . [which] is the political equivalent of suicide.”
Politicians used to be shamed with their unflattering attendance statistics; soon, they will be confronted with various “truthfulness” indexes based on everything they have ever said.
Can we really trust PolitiFact’s decision to label something “mostly false” when perhaps it should be “mostly true”?
“The paradox of liberal democracy is that it encourages hypocrisy because the politics of persuasion require . . . a certain amount of dissimulation on the part of all speakers. On the other hand, the structure of open political competition exaggerates the importance and the prevalence of hypocrisy because it is the vice of which all parties can, and do, accuse each other. It is not at all clear that zealous candor would serve liberal politics particularly well.”
it’s not that there’s more hypocrisy today; it’s just that, with twenty-four-hour political exposure in the media, it’s much easier to find.
defining a policy in rather ambiguous, vague terms might help politicians to garner support from many different quarters; precision might come later on. “‘Defending American interests’ is an ambiguous idea around which everyone unites,” she notes. Ambiguity makes it possible to actually get things done, giving politicians some breathing space to work on a problem without getting distracted by the attention of the media and the public.
As Hazem Kandil, a political sociologist and expert on Egypt, noted in Dissent more than a year after the Arab Spring, “as long as revolutionaries cannot organize their ranks and encourage their fellow citizens to make difficult choices, take risks, and accept short-term instability, then there is little hope that the people themselves will be able to turn their gallant uprising into a complete revolution.”
hoping that distributing tablets or e-readers might solve Africa’s problems with illiteracy is one thing, but actually getting governments to commit to choosing readers over textbooks or building new schools or hiring more teachers is quite another. The former is a problem of application; the latter, of allocation. No amount of technology will solve the latter problem, since this debate is animated by very different ideas about what teaching is and how government funds should be distributed—ideas that have little to do with the technology in question.
Thus, Noveck complains that “most of the work at the intersection of technology and democracy has focused on how to create demographically representative conversations. The focus is on deliberation, not collaboration; on talk instead of action; on information, not decision-making.”
As two scholars of technocracy observe, its fundamental assumption “is that disagreements occur not because people are bound to differ but because they are misinformed.”
What hasn’t changed since Crick wrote his critique of technological thinking is the fact that fixing politics without first getting a thorough understanding of what it is and what it is for is still a very dangerous undertaking. Or, to put it bluntly, it’s never been cheaper to act on one’s stupidity.
the idea of equality on which Google search is based is quite shallow: yes, everyone can vote with “links”—but those who have the resources to generate more links, perhaps by paying influential sites to link to them, or to game the system through search engine optimization, have much more power than those who don’t. It’s anything but “one person, one vote.” At best, this is more of an oligarchy than a democracy. Besides, Google’s ranking algorithm considers at least two hundred other factors—for example, the loading speed of the website—in addition to how many other sites link to a particular page.
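The “links as votes” idea this passage criticizes is the core of PageRank. A minimal sketch below (a toy power-iteration, not Google’s actual system, which as noted weighs hundreds of additional signals) shows how a page that attracts, or buys, more inbound links accumulates more “voting” power than one that doesn’t:

```python
# Toy PageRank: links act as weighted "votes," so pages that attract
# more inbound links end up with disproportionately more ranking power.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            if not outs:
                continue  # dangling pages leak rank; fine for a sketch
            share = damping * rank[page] / len(outs)
            for target in outs:
                new[target] += share
        rank = new
    return rank

# Three small sites all "vote" for site A; B gets a single vote from A.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A"]})
assert ranks["A"] > ranks["B"] > ranks["C"]  # more links, more power
```

The toy model makes the book’s point concrete: whoever can manufacture inbound links (C and D voting for A) shifts the “election,” which is exactly why “one person, one vote” is the wrong analogy.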
Google’s Kafkaesque reading of democracy goes something like this: you enter a booth to cast a vote only to discover that the electoral commission is also going to consider your fashion taste, your accent, the weather outside, and many other factors—of which, predictably, you cannot be informed.
“Algorithms may bring us new artists, but because they build their judgment on what was popular in the past, we will likely end up with some of the same kind of forgettable pop we already have. It’s a clear foible of the technology that all these years of so-so music are included in its analysis.”
Yes, Google’s self-driving cars would make driving easier and perhaps even cut the number of deaths on the road, but a reasonable transportation system ought to pursue many other objectives. Would self-driving cars result in inferior public transportation as more people took up driving? Would they lead to even greater suburban sprawl as people, no longer having to drive, could do e-mail during their commute and thus would tolerate spending more time in the car?
If the goal is to get consumers to go places and fill their stomachs with pleasant food between uploading photos to Instagram and posting updates to Twitter, then Yelp is perfect. But if one views cooking as an art that has its own standards of excellence and its own intellectual and artisan tradition, if one grants that cuisine also has a mission to educate and provoke, then Yelp perhaps falls short of the mark.
If greater participation in culture through digital technologies and the network structures in which they are embedded favors the market, discourages artistic innovation, or is bought at the expense of critical reflection on art, on what grounds can that be considered democratic? If, on the other hand, democracy means the expansion of opportunities for deliberation, for publicness, or for genuine diversity, the current situation falls short.
Adorno once remarked, “without expertise, without a habitual knowledge of the familiar, the new that is taking shape can hardly be understood.”
Writer Daniel Mendelsohn gets to the very heart of the difference when he writes that “all criticism is based on that equation: KNOWLEDGE + TASTE = MEANINGFUL JUDGMENT.”
Kelly offers the best of all possible worlds: technology is both what we make of it and an autonomous force with its own wants and desires, largely independent of humans. Kelly’s thought is full of such doublespeak, by which we are simultaneously promised control over technology and assured that we need no such control because it’s too late.
There are few empirically rigorous studies of Moore’s law, but Finnish innovation scholar Ilkka Tuomi has done perhaps the most impressive work, digging up industry data, calculating actual growth rates, and tracking various expressions and references to Moore’s law in the media. Tuomi’s conclusion? “Strictly speaking there is no such Law. Most discussions that quote Moore’s Law are historically inaccurate and extend its scope far beyond available empirical evidence,” he writes. Furthermore, notes Tuomi, “sociologically Moore’s Law is a fascinating case of how myths are manufactured in the modern society and how such myths rapidly propagate into scientific articles, speeches of leading industrialists, and government policy reports around the world.”
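Tuomi’s exercise of checking the “law” against actual numbers is easy to reproduce in miniature. Given two published transistor counts (the figures below are the commonly cited approximations, not audited data), the implied doubling time follows from simple logarithms, and it comes out closer to two years than to the “18 months” version of the slogan:

```python
import math

# Approximate transistor counts as commonly cited; treat as rough figures.
chips = {
    "Intel 4004":  (1971, 2_300),
    "Pentium":     (1993, 3_100_000),
    "Core 2 Duo":  (2006, 291_000_000),
}

def doubling_time(y0, n0, y1, n1):
    """Years for the count to double, inferred from two data points."""
    return (y1 - y0) * math.log(2) / math.log(n1 / n0)

y0, n0 = chips["Intel 4004"]
y1, n1 = chips["Core 2 Duo"]
print(f"Implied doubling time: {doubling_time(y0, n0, y1, n1):.2f} years")
```

Depending on which endpoints you pick, the answer shifts, which is precisely Tuomi’s point: the “law” is a moving target that gets retrofitted to whatever the data happen to show.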
That this “what technology wants” kind of discourse allows technology companies to present their business strategies as a natural unfolding of history is not something we should treat lightly. Technology wants nothing—and neither does “the Internet.”
Datasexuals are to Silicon Valley what hipsters are to Brooklyn: both are ubiquitous and, after a certain point, annoying.
“Assuming that one estimated the value of a piece of music according to how much of it could be counted, calculated, and expressed in formulas: how absurd would such a ‘scientific’ estimation of music be! What would one have comprehended, understood, grasped of it? Nothing, really nothing of what is ‘music’ in it!” he wrote.
Any learning enterprise that begins with the assumption that ideas have a bottom line will succeed in churning out the next generation of Bain consultants, but will it produce any talented essayists?
As literary critic Terry Eagleton once put it, “Being human . . . is something you have to get good at, like playing the tuba or tolerating bores at sherry parties.” Remove the bores and replace the tuba with a self-tracking app, and you shrink the space in which our humanity can emerge.
The Quantified Self movement, in its current form, is madly devoted to articulating facts—that’s what numbers are good for—but it still has no way of generating narratives out of them. In fact, it might even block the formation of narratives, as self-trackers gain too much respect for the numbers and forget that other ways of telling the story—and generating action out of it—are possible.
For Proust, the key to describing reality, both past and present, was not seeking more data but putting our imaginations to good use by connecting our senses with our memories (this, in part, explains why Proust thought that the novel was much better positioned to do this job than cinema or photography).
Sometimes three photos can evoke stronger memories than two hundred hours of footage. As philosopher Björn Krondorfer points out, “In an age of memory inflation, the archiving and memorializing itself can be seen as permission to forget.”
The triumph of psychology over philosophy is not limited to industrial design; policy designers and social engineers have succumbed to this trend as well—all in the name of science, for psychology and neuroscience are presumed to be more scientific than philosophy simply because they run experiments and tests. But the fact that matters of morality do not lend themselves to easy measurement does not mean we should disregard such concerns and recast them in neuroscientific and psychological terms. Nowhere is this tendency more evident than in discussions of willpower, in which once highly complex and painful decisions about right and wrong are now recast as instances of strong or weak will—which we can address by managing our willpower reserves carefully, much as we do our bank accounts.
Tierney and Baumeister’s promises do sound very sweet: “Instead of paying doctors and hospitals to repair your body, you can monitor yourself to avoid illness. Instead of heeding marketers’ offerings of fast foods and instant pleasures, you can set up your life so that you’re bombarded with messages promoting health and conscientiousness.” Here is the mind-set of an atomized consumer who couldn’t care less about health-care reform but is only preoccupied with maximizing his or her own well-being. Presumably, those who cannot afford self-tracking devices or don’t want to self-track due to privacy concerns will be dismissed as unsophisticated technophobes. This is reminiscent of Bogost’s shit-crayons metaphor: yes, some of us might find ingenious engineering solutions to resist insidious marketing, but in all this celebration of modern technology, shouldn’t we also do something about the marketing itself? Why force consumers to monitor themselves and hone their willpower techniques if we can make it harder for food companies to sell unhealthy food or target children? Instead, political action all but disappears; rather than reforming the system, we just tinker with ourselves and tend to our reservoirs of willpower the way Swiss bankers tend to their vaults.
The growing appeal of self-tracking, nudges, gamification, and even situational crime prevention and digital preemption can only be understood in the broader intellectual context of the last few decades. As already noted, the sad reality is that philosophy, with its preoccupation with virtue and the good life, has been all but defeated by psychology, neuroscience, economics (of the rational-choice variety), and their various combinations, like behavioral economics. Hence, instead of investigating and scrutinizing the motivations for our actions, trying to separate the good ones from the bad, policymakers fixate on giving us the right incentives or removing the option to do the wrong thing altogether. Better safe than sorry, as the saying goes.
Constructing a world preoccupied only with the most efficient outcomes—rather than with the processes through which those outcomes are achieved—is not likely to make them aware of the depth of human passion, dignity, and respect. We don’t earn our dignity by collecting badges; we do it by behaving in a dignified manner, often in situations in which we have other options. Tinker with this spiritual pasture, and those options might go away—along with the very possibility of dignity.
To quote Bernard Crick once again, “Boredom with established truths is a great enemy of free men.” Perhaps it wouldn’t be such a bad thing for our newly empowered geeks and engineers to recognize that there are good reasons not to run our politics as a startup; that our politicians face competing demands and that the quest to eradicate lies and hypocrisy may do more harm than good; that there are good reasons to value subjective but high-quality criticism, even if it doesn’t stem from the “wisdom of crowds”; that the dream of flawless communication across nations may not only be unachievable but also undesirable; that humans are complex and occasionally irrational creatures who care about why they do certain things as much as they care about what it is they are doing; that numbers often tell us less than we think and quantification as such might actually thwart reforms.