Digital Digs (Alex Reid)

digital rhetoric and professional communication

the broken fun of the humanities

4 December, 2018 - 14:49

The moral of this story is probably that some Chronicle of Higher Ed clickbait articles are too absurd to pass by, in this case, Timothy Aubry’s “Should Studying Literature be Fun?” I find this to be such a bizarre question and ultimately I’m unsure what it has to do with the concerns of the article itself.

Aubry observes, “So much of academic life seems colored by high-stakes political struggles.” Huh. Not sure, but you’ve got to love the passive voice there. Who is doing this coloring, I wonder? This, it seems from my reading of the essay, is part of the “not fun” portion of academic life and studying literature (we can discuss disambiguating those two some other time). Here though it’s the decades-long history of canon-busting, recovering voices, and incorporating new cultural perspectives that is familiar fare, or as Aubry terms it, “The urge to dethrone literary heroes on the basis of their bad politics.” What is apparently lost (or wait, maybe not) is an opportunity for an aesthetic appreciation of literature. He notes the (again familiar) graduate school rite of passage in which one learns to abandon (or at least not vocalize) one’s love of literature: “It wasn’t that professors spent much time debunking aesthetic judgment. Those battles had already been fought and won. It was just that certain questions to do with beauty or pleasure almost never arose; you learned not to ask them the same way you learned to stop liking bands like Coldplay.”

These are all familiar stories to me about grad school and English Studies. (Don’t worry, rhetoric has parallel processes to those of literary studies.) You can decide on their veracity for yourself.

My thought though was that I wasn’t really sure what any of that had to do with experiencing fun. I’ve witnessed glee in the critical evisceration of authors, scholars, fellow faculty and/or students. Plenty of people appear to love a good public pillorying on social media, in some online magazine, or maybe at a conference. And I don’t mean that as a negative judgment. My point is just that, from what I can tell, people enjoy these activities. On the other hand, I’m not sure that aesthetic appreciation is inherently fun. I’m not saying it couldn’t be fun for some people. I’m just saying I don’t think it’s intrinsically more enjoyable than a good expression of righteous indignation and anger. 

Now that said, I do recognize that there’s always been some odd pseudo-(?), Neo-(?), post-(?) Puritanical urge to insist that none of this critical/political stuff is fun and certainly none of it is done for personal enjoyment! Maybe that’s some version of the mommy/daddy “this hurts me more than it hurts you” (no, it doesn’t). Or an ethical/rhetorical warning that (to appear) to enjoy doling out judgment and punishment undermines its moral foundation: sober as a judge as the saying goes. Or perhaps, as Aubry suggests, a way of indicating the seriousness of our academic work.

So Aubry ends with what I’d consider a commonplace. Specifically, he takes what he presents at first as an either/or (politics or aesthetics) and tries to turn it into a both/and.

Moreover, to struggle against inequity and discrimination, it is important not only to stop celebrating those bad modes of writing that denigrate particular groups, but also to work to spread the opportunity to have good, fulfilling aesthetic experiences as widely as possible — even when those experiences contribute nothing to the improvement of society other than themselves. To affirm literature’s aesthetic value is to argue that it does something more than serve as an instrument for a particular politics, that the experiences it fosters are worth pursuing not only because they reaffirm our political views or further our ideological aims, but because they represent a mode of fulfillment — a quickening of our perceptions, a dilation of our temporal experiences, a revitalization of our thought and feeling — unavailable elsewhere.

In short, there’s gotta be some overlap in that good politics/good aesthetics Venn diagram, right? I don’t know. You could ask Plato or maybe enjoy some good Socialist Realist theater.

But let me end on some fun. The work we do should be fun. Not all the time of course but I’m going to go out on a limb and say, on balance, at the end of the day, if you don’t enjoy the work you’re doing then maybe you should consider doing something else (or at least working somewhere else). I know that can be easier said than done for a variety of personal/unique reasons. But as general career advice and even more generally as a way of defining the work undertaken by humanities faculty: yes, you should be able to enjoy it.

Hell, work/life is hard enough as it is without insisting that you shouldn’t enjoy it whenever it’s possible to. What a weird idea. But the fact that this whole “no fun” notion is all too familiar is just another odd broken thing about the humanities or maybe academia.

Categories: Author Blogs

fake news and the distribution of critical thinking

12 November, 2018 - 14:09

Wired published an article a few days back based on this research from the journal Cognition. As the Wired article’s title suggests, if you want to be resistant to fake news then “don’t be lazy.” Basically this particular study indicates that people who exhibit critical thinking skills are more resistant to fake news than those who do not, regardless of ideological bent and regardless of whether the fake news favors them ideologically. Here’s the abstract to that article:

Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.
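The abstract’s core claims are correlational. As a rough illustration of what that means (the numbers below are invented for the sketch, not the study’s data), here is the shape of a negative CRT/fake-accuracy correlation alongside a positive CRT/real-accuracy correlation:

```python
# Minimal sketch of the correlational analysis the abstract describes.
# All data here are hypothetical; the real study used N = 3446 Mechanical
# Turk workers' CRT scores and headline-accuracy ratings.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical participants: CRT score (0-7) and mean perceived accuracy
# of fake vs. real headlines on a 1-4 scale.
crt_scores    = [0, 1, 2, 3, 4, 5, 6, 7]
fake_accuracy = [3.5, 3.2, 3.0, 2.6, 2.4, 2.1, 1.9, 1.6]
real_accuracy = [2.2, 2.4, 2.5, 2.7, 2.9, 3.0, 3.2, 3.3]

# Negative: more analytic reflection, less belief in fake headlines.
print(pearson_r(crt_scores, fake_accuracy))
# Positive: more analytic reflection, better ratings of real headlines.
print(pearson_r(crt_scores, real_accuracy))
```

The sign of each coefficient, not its magnitude, is what carries the abstract’s argument: reflection tracks discernment regardless of which direction the headline leans.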

It’s worth noting that these findings are somewhat inconsistent with other research (like this) which suggests that even when people demonstrate critical literacy/numeracy they tend to “use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks.”

One thing that lies outside the scope of either of these pieces of research is how one acquires a capacity for the kind of critical-analytical thinking described here. Our received notion is that it is either innate (some people are just smarter than others) or learned. However, even in the learned case, our typical sense is that it’s kind of a one-shot deal or inoculation. E.g., you can learn critical thinking in high school and/or college, and once you have it you pretty much don’t lose it. However, we all know that isn’t true. If you’re drunk or tired or angry or excited or even just distracted, these “higher reasoning” skills suffer.

And I think I’ve just described the mental states of a significant portion of social media users while they’re on social media.

Both the Wired article and the researchers it cites moralize this situation by accusing those who fall prey to fake news of laziness (which is a mortal sin after all). Maybe. But that judgment fails to account for the media-ecological conditions of social media, specifically its ubiquity/pervasiveness. It fails to account for its intentional design as an intrusive and addictive technology. So I say “maybe,” as we can certainly ask more of one another when it comes to sharing fake news, and I think most people have become more skeptical in the last few years with regard to what they read online.

On the other hand, if one thinks about cognition as a distributed phenomenon then one would want to account for the media-ecological conditions that made social media such fertile ground for fake news and then ask how we might change those conditions. Clearly some of that is happening as social media corporations begin to own some modicum of responsibility here in terms of trying to detect and stop the spread of fake news. But I wonder if other strategies might not be possible. Namely, if we can design social media, smartphones, and related tech to incite our interactions with them, then can we also design them to facilitate a critical-analytical orientation? I’m not sure. It’s quite possible that those are irreconcilable intentions: simultaneously spurring our desire to engage while also encouraging a more deliberative approach to that engagement. For example, we might just decide “I don’t want to go on Facebook, Twitter, etc. right now because that’s too much work.”

Part of that challenge too is the see-saw of content. Just to give a quick example. I took a look at the first ten posts in my FB feed. 4 were personal updates. 3 were colleagues talking about their classes, asking advice, etc. 3 were articles shared, of which two were political news (one from USA Today and the other from Washington Post). I’m sure you get something similar, by which I mean that your rhetorical relationship to the author of the post and/or the content is shifting: family and old friends, work colleagues, neighbors, etc. and humorous videos/memes, personal news with varying emotional registers, interesting stories, advertising, political commentary, and news. You wouldn’t want to take the same critical-analytical orientation to each of these.

I’m just spitballing here but maybe we’d prefer to not have all this stuff in a single stream. Maybe with some intelligent digital assistant support we could split it up, so that when I’m interested in political news (and up for the responsibility of being a critical-analytical reader), I can dive into that feed, but that I’m not expected to be at my level best every time I idly turn to Facebook.
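To make that spitballing slightly more concrete, here is a deliberately crude sketch of feed-splitting. Everything in it is hypothetical: the stream names and keyword lists are invented, and a real assistant would use a trained classifier rather than keyword matching. The point is only the shape of the idea, routing posts into streams with different attentional demands.

```python
# Hypothetical sketch: split a single social feed into separate streams so
# that political news can be read deliberately while personal updates stay
# low-stakes. Stream names and keywords are invented for illustration.

STREAM_KEYWORDS = {
    "political news": ["election", "senate", "policy", "congress"],
    "work": ["syllabus", "class", "conference", "grading"],
    # "personal" is the fallback stream, so it needs no keywords.
}

def route_post(text):
    """Assign a post to the first stream whose keywords appear in it."""
    lowered = text.lower()
    for stream, keywords in STREAM_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return stream
    return "personal"

posts = [
    "New piece on the senate runoff from the Post",
    "Any advice on structuring a syllabus for spring?",
    "Our dog learned a new trick today!",
]
for post in posts:
    print(route_post(post), "<-", post)
```

The design choice that matters is the fallback: anything the router can’t confidently classify lands in the low-stakes stream, which matches the idea that you shouldn’t be expected to be at your critical-analytical best by default.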




the challenges of reading Latour

5 November, 2018 - 14:32

A couple of Latour-related articles have been going around lately, particularly this article in the NY Times and more recently this critical piece by Alex Galloway, at least partly occasioned by the Times article. Galloway’s rejection of Latour (and of Deleuzian new materialism in general, if one reads his other works) comes down to the infelicity of this kind of thinking for his political project. That is, in my view, an ideological objection. And I don’t have any problem with that. Well, let me rephrase that. I don’t have any problem with people–academics or otherwise–having a goal and selecting the best tools for achieving that goal.

That said, at the end I think the only conclusion you can draw is that Latour doesn’t share Galloway’s political commitments, is not seeking to carry out Galloway’s political objectives through his research, and that therefore Galloway believes his work has little or no merit.

I will leave it up to you to determine whether or not you find that piece of news useful.

In passing though, I will point out what strike me as some misreadings of Latour. Galloway writes,

Latour very clearly enacts a “reticular decision” of economic exchange in which markets and networks are sufficient to describe any situation whatsoever. And thus to avoid these Latourian difficulties one might “degrow” this particular reticular decision — so engorged, so sufficient — refusing to decide in favor of the network, and ultimately discovering the network’s generic insufficiency. Latour does the reverse. Networks overflow with sufficient capacity.

I see this as a key point in Galloway’s critique as this notion of a reticular fallacy is something he has turned to before. As is suggested here, the reticular fallacy has to do with seeing everything as rhizomatic or networked or horizontal, plus assuming such structures are intrinsically better, freer, more just, or some such. I completely agree that it would be an error to see everything that way or assume there’s something necessarily better.

But I am confused as to how one sees that in Latour. Take for example, the concept of plasma as discussed in Reassembling the Social

plasma, namely that which is not yet formatted, not yet measured, not yet socialized, not yet engaged in metrological chains, and not yet covered, surveyed, mobilized, or subjectified. How big is it? Take a map of London and imagine that the social world visited so far occupies no more room than the subway. The plasma would be the rest of London, all its buildings, inhabitants, climates, plants, cats, palaces, horse guards. (244)

To be clear, one can be critical of plasma also, but it strikes me that networks are like the subway system: they are hardly capacious at all, despite Galloway’s assertion. And if plasma seems like a fairly minor point in Latour’s work, then one might try reading An Inquiry into Modes of Existence, which begins with networks as one of fifteen modes, a number he does not claim to be exhaustive. Really, Galloway’s point is that he believes Latour’s way of thinking is not progressive, that it merely reiterates an existing perspective, when “The goal of critical thinking, indeed the very definition of thought in the broadest sense, is to establish a relationship of the two vis-a-vis its object, a relation of difference, distinction, decision, opposition.”

I can agree with that, but it’s that same value that is the basis of my dissatisfaction with Galloway’s argument. While he argues that Latour’s thought creates no difference or distinction in relation to its object of study, my complaint with Galloway is that he never really enters into a relationship with his object of study, having already predetermined his opposition. Perhaps that is just his rhetorical style. Maybe somewhere along the way, in the distant past, he engaged with Latour’s work in a way that was open to its possibilities. However, reading this, you’d wonder how far along Galloway got before he came to this judgment, or whether he arrived at the text with this judgment in hand. And I don’t really care if the latter was the case. Most people are true believers of one sort or another. He already knows what the world is, how it can change, and how it should change. In that light the purpose of humanities scholarship can only be a political-rhetorical one: to persuade people to accept one’s beliefs and take up one’s cause.

The error one can find in Latourian-Deleuzian thinking comes when it is used in this same way, as if networks, rhizomes, becomings, etc. represent a teleology, as if we’d all be better off as nomads, schizos, or something. That would be a reticular fallacy as Galloway might put it. However I wouldn’t attribute such claims to either Latour or Deleuze themselves.

Latour’s methods might only be useful to people who do not believe they know how some part of the world works before they examine it and/or who are uncertain about how to act next. Even then, it’s quite possible that you won’t find Latour’s methods all that useful–if they don’t create more understanding and, more importantly, if they don’t expand your capacity to act effectively in the world.




why we can’t have nice things

4 September, 2018 - 11:25

It’s that old saying but one that might cut in two directions. Yes, “we” can’t have nice things because “you” are always ruining them with your irresponsible behavior, lack of class, etc. But possibly we also can’t have nice things because we’re always getting crap shoved in front of us. Or both. Facebook is case in point. Sure it’s a cesspool because of the way people behave, but it’s also crap in and of itself. Why can’t we have a better way to live online? And why don’t we live that better way?

n+1 has a piece apropos of this topic (h/t to Casey for pointing this out, via Fb of course). It focuses on the treacherous minefield (are there other kinds of minefields?) that is the social media environment surrounding op-ed writing–in online journals like their own but principally in mainstream media, specifically the NY Times and Washington Post. There’s a range of concerns and complaints here. Authors and editors write/publish works knowing they’ll be re-litigating them on Twitter. And readers have it no better. “In the not so distant past, we could sit with an article and decide for ourselves, in something resembling isolation, whether it made any sense or not. Now the frantic give-and-take leaves us with little sovereignty over our own opinions.” Surely I am far from the only one who encounters something shared in social media and the ensuing “conversation” and thinks “I have something to say about that, but why bother?”

In the few days since this piece was posted there’s been a whole story about the New Yorker Festival announcing Steve Bannon as a headliner, a bunch of other celebrities dropping out, a flurry of social media complaints, Bannon being dropped, and resulting analysis over whether or not that’s the right decision. My wife turns on MSNBC this morning and the pundit crowd is tut-tutting the decision, trotting out the typical argument about how these ideas need to be dragged into the light of day and debated in the public square where they will wilt. How naive is that? As if they aren’t doing that every day already on Morning Joe with their collection of refugees from the GOP. As if the Times and the Post don’t have their own cadres of neocon pundits.

It’s a peculiar, though founding, fantasy of the US that at their core people are the same, they are kind, they are rational, they have a “strong moral compass,” and so on.

But here’s the thing. At their core, people are pretty stupid. I don’t mean most people are stupid or people are stupid these days. I don’t mean people who don’t resemble me are stupid. I mean we are all stupid in the sense that as individuals, as independent entities, to the extent that we can be independent (try going it on your own without oxygen), we lack the cognitive resources to make the kinds of judgments necessary for democratic participation, especially in the very complex global present.

I mean this in roughly the same way as Nick Bostrom does when he observes, “Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.” To put it more generously, I mean it in the way Katherine Hayles does when she takes a cue from Edwin Hutchins’ theory of distributed cognition and writes “Modern humans are capable of more sophisticated cognition than cavemen not because moderns are smarter, Hutchins concludes, but because they have constructed smarter environments in which to work.” Of course this is also Bostrom’s observation (and fear): that these smarter environments are becoming too smart for their own, or at least our own, good. But that’s a different subject.

It is in this context that one might be tempted by Nick Carr’s “google is making us stupid” claim, but really my assertion is simply that we don’t need any help being stupid. Instead we might want to ask what it means to suggest a la Hayles that we are “capable of more sophisticated cognition.” Can we be more precise about the nature of those capacities? In what ways are our environments smarter? What does smarter mean?

Empirically, we have access to a tremendous amount of media/data. In a digital context, media are data and data are mediated; it’s the resampling of the McLuhan maxim. We also have unprecedented capacities for communication. The choke point in this system is human consciousness, so of course we need to build smarter environments that can swim up the media/data stream and handle that firehose for us. So the first problem is that the environment turns out not to be that smart. As you know, people connecting to Google, Facebook, Twitter, Reddit, etc. are not really demonstrating much capability for “sophisticated cognition,” at least not by any sense of that term I can conjure. To the contrary, wading into this media/data stream seems to reinforce poor reasoning and bad information. Maybe it’s confirmation bias or the Dunning-Kruger effect. IDK.

The other part of this is communication. I am reminded of a line. I’m probably misremembering it but my memory is that it comes from Virilio’s Art of the Motor. It has something to do with how when train travel became available in Europe, the belief in France was that it would reduce wars on the continent by making it possible for people to travel and improve understanding among nations. Meanwhile in Germany the realization was that trains would make moving troops and supplies to the front more efficient. The arrival of the Internet, especially the social media that have made billions of humans into online participants, was similarly meant to foster mutual understanding among people around the world… I don’t think I need to say more about that, do you?

So maybe this post appears to be moving toward a conclusion that this stuff is bad, but it’s not. I’m not in the business of making judgments like that. I am, however, in the business of evaluating the rhetorical effects of digital media. Give a proto-human a bone club and he’ll bash his neighbor’s skull in (a la 2001, which I just recently saw again). Give the same, slightly more evolved human a web connection and he’ll join in conspiracy theories about how those bones were put there to test our faith in a young Earth. That’s what humans are: just not very smart. It’s not really a fixable situation. But the situation isn’t hopeless. We actually have managed to disabuse ourselves of bad ideas in the past; it could happen again.

But you can’t really change people’s minds by talking to them. You change people’s minds by changing the environment in which they think, the distributed part of their distributed cognition. Not understanding this is a common error. The idea of a public debate is that a critical mass of people are there so that when the audience is persuaded the whole community is shifted. But that doesn’t happen anymore. It certainly isn’t happening in your social media feed.

