Feed aggregator

the usual waffle on the plight of English Studies (holiday edition)

Digital Digs (Alex Reid) - 11 December, 2017 - 09:34

Many of the challenges we face in English at UB are not unique. In particular, we share them with other public research university English departments that need to think about phd programs. The academic job market is awful, which raises all manner of questions about doctoral education. The STEM and business orientation of our undergrads has meant weak/declining enrollments on that end, both in terms of majors and simple numbers of students in seats. In between, we have an MA program that serves the traditional function of preparing students to enter phd programs: a mission that makes even less sense in this climate than enrolling phd students.

Each of these has been a known problem for more than a decade. We had a national overproduction of phds before the recession knocked the bottom out of the job market, cutting the number of tenure-track positions by two-thirds. Concerns about undergrad education go back farther than that. While much of the focus is on doctoral education, for me the analysis has to start at the undergrad level.

This is my sense of undergraduate attitudes toward English. Yes, they are concerned an English degree won’t lead anywhere career-wise. However, statistically that’s not true, and many such potential English students are in majors like psychology or communications that don’t lead anywhere in particular either. More importantly, many students don’t know what they want to do anyway (not that there’s anything wrong with that). A more likely reason for not choosing English is not seeing the relevance of the coursework. Fair enough, I guess. Maybe communications or psychology are more scientific and/or focused on contemporary concerns. However, I’m willing to bet that there’s plenty of grumbling among students in those majors about why they’re being asked to learn X or Y.  I think the primary problem English has is that students have an antipathy for the general activity of English classes: reading books, talking about them, and then writing essays. Yes, there is a small group of students who highly value these activities. The emphasis there is on small. As such, fewer students are interested in taking even a single English class as an elective or to meet a Gen Ed req.

To rebuild these enrollments we need to communicate the relevance of our curriculum to students, and we need to shift the activity, the experience, of the classroom.

If we can rebuild enrollments, then the job market should stabilize. While we are doing this, we also need to revise the size and curriculum of doctoral programs so that we're graduating an appropriate number of phds and preparing them to work in these shifting conditions. We also need to rethink MA programs so that they are valuable on their own. They will need to be more professionally oriented and not focused only on the profession of English professor.

These are things I've said before. And out of professional respect, I've basically always left it by saying that literary scholars need to figure out how to put their expertise to work in this context. But I'll take it a little further today. Here is the basic relevance of literary studies as I see it.

  1. It provides an understanding of cultural differences in communication.
  2. It develops skills of rhetorical and poetic analysis (i.e., reading skills).
  3. It offers historical contexts for communication practices.

The first part of this is moving beyond the strictly "literary." Fortunately that is something we already see, though less at the undergrad level than elsewhere. The first thoughts students have about English cannot be that it's about literature or literary history. Until we change that, we're screwed. Courses with a primarily historical orientation, including histories of rhetoric, should probably comprise no more than 20% of the curriculum. Notice I said primarily. Generally speaking, any course whose primary focus is some explicit contemporary concern or practice might benefit from some historical context.

That proposal might be felt as a horrible wound by many literary scholars. However, I don't think I would be asking anything more of them than I do of myself or my own field. In such a curriculum, what I would do, what I really already do, is primarily teach students how to communicate in a digital media ecology. That's related to my research, but it's an adaptation of my research to serve student interests and needs. If I were teaching courses analogous to what literary scholars typically do, I would be teaching digital-rhetorical theory and examining the shift in rhetorical practices over the last 50 years or so. That would be like the survey course. I would teach a 400-level course that focused on Web 1.0 in the 1990s: the rhetorical practices of frames, image maps, and early hyper-linking, or the emergence of desktop publishing. I could teach similar courses focused on the rhetorical history of social media in the 2000s or mobile media since the iPhone or video games in any of the last five decades, etc. Those courses might even enroll better than some conventional advanced literature courses focused on a single author, a literary movement, or an area of critical theory. But really, what would be the point? The real question for me is how to use my disciplinary expertise to benefit students. So maybe some history of how PowerPoint developed would be helpful for students trying to understand why it operates as it does, has particular limits, and tends to push users toward questionable rhetorical practices at times, but the relevant point is that they are learning how to communicate using slides. So it makes much more sense to create a course on visual communication practices than one on the theory of visual rhetoric, the history of PowerPoint, or, god forbid, a new materialist rhetorical analysis of how digital machines participate in the construction of thought and agency.

To be clear, that last one is where my research lies, and I don't imagine I'll ever teach such a course, even at the doctoral level. I can't imagine UB ever having enough scaffolding in rhetoric and media theory for students to have the preparation where a course like that really makes sense. Oh, I point to such things all the time, offer thumbnail explanations of them, and so on in a grad class—because I have to give students some context for where I'm coming from, and they certainly know enough to know such things are out there. And I am totally fine with that reality. I don't need to teach that course. One benefit is that I never have to think about the student who has trained in my field and whose dissertation I've chaired trying to get a job somewhere as a digital rhetorician. Instead, I teach more introductory courses in media theory and courses in pedagogy. Soon I'll be teaching some grad-level professional-technical communication courses, but those won't be aimed at phd students trying to get academic gigs in those fields. They'll be focused on using disciplinary knowledge to help students develop communication skills for other career aspirations.

Again, I’m really only suggesting that literary scholars will need to think about the relationship between their scholarship and the curriculum in a way that is analogous to what I’ve always done, what I think many rhetoricians have long done.

Maybe if we had an undergrad program where students could clearly see how they would become better communicators and better prepared, in a broad sense, for a swath of careers, that would help. More importantly, if the activities of those courses were clearly focused on doing and making and interacting with the world, with creating things that had the potential to be valuable, then maybe students would want to do them.

Then again, maybe not. But it’s the best shot we’ve got as far as I can see. Oh, and happy holidays.

Categories: Author Blogs

An alternative to plumbing the depths of fascist souls

Digital Digs (Alex Reid) - 26 November, 2017 - 22:31

Many have noted their displeasure/anger with two recent NY Times pieces, both by Richard Fausset: the first is a piece of reporting about a particular Nazi/white nationalist, Tony Hovater, and the second is what I think one would call a reflective op-ed follow-up to that story. The displeasure/anger stems from the way in which the pieces normalize white supremacy. I won't get into that here, though it's fairly self-evident now that, even in the most generous reading, those pieces were ineffective, since it's hard to imagine their rhetorical aim was to create the conversation that has resulted. You can read the stories for yourself or read this satire of the whole business in The Atlantic. I am interested though in the ostensible impulse behind these stories–to understand why seemingly "normal" Americans become white supremacists. Why do we have this impulse? Would it be a useful question to answer if one could answer it?

At one point in the second piece, in which Fausset reflects on his frustration at not being able to answer the question of why Hovater became a Nazi, he writes the following:

I was thinking about an album I grew up with by the Minutemen, the Southern California punk group, and its brilliantly koanic title: “What Makes a Man Start Fires?” To me, that question embodies what good journalism should strive for, as well as the limits of the enterprise. Sometimes all we can bring you is the words of the police spokesman, the suspect’s picture from a high school yearbook, the acrid stench of the burned woods. Sometimes a soul, and its shape, remain obscure to both writer and reader.

Maybe. But the smartass answer to the koan is “a matchstick.” The resulting insight is that the problem may be that you’re looking in the wrong direction when you try to look for someone’s “soul.” To put it in lawyerly terms, you’re assuming facts that are not in evidence. Toss out the soul hypothesis, and this project might become easier. And in tossing out the soul, I don’t mean just the religious notion but also the entire concept of an internally consistent psyche. This isn’t about someone’s soul. It’s about the operation of the social assemblages we populate.

Demographically, there are a lot of white guys like Hovater–married, blue collar, living in small-town America surrounded by farms, and driving to a nearby small city to enjoy Applebee's, Wal-Mart, and the other roadside attractions of contemporary corporate culture. Of course one doesn't have to be in that demo to feel dissatisfied with one's life, to be angry or scared, to feel existential angst, or to become a hate- and rage-filled bastard. Maybe it isn't so surprising that Jack and Diane end up building a Nazi website in the spare bedroom of their little pink house. None of this is really surprising. As some critics of Fausset's article point out, all the article manages to demonstrate is the banality of evil, and we already know about that. But think about it this way…

fascism is inseparable from a proliferation of molecular focuses in interaction, which skip from point to point, before beginning to resonate together in the National Socialist State. Rural fascism and city or neighborhood fascism, youth fascism and war veteran’s fascism, fascism of the Left and fascism of the Right, fascism of the couple, family, school, and office: every fascism is defined by a micro-black hole that stands on its own and communicates with the others, before resonating in a great, generalized central black hole. (A Thousand Plateaus, 208)

My sense of this is that we’re surrounded by, soaking in, microfascisms, banal evils. There’s not much point in asking how microfascism arises. The question is about resonance.

No one needs to be told that the internet, social media in particular, has operated as a tool for building virtual communities among ideological extremists, fascist or otherwise. Sharing news, rhetorical strategies, political tactics, and more immediately dangerous information, as well as serving as a platform for logistics and organizing, are obvious uses of the web for political extremists. Also not surprising is that the web serves as a medium for attacking one's enemies. However, we also ought to be able to recognize the deterritorializing and decoding effects of digital communication. I'm not going to go through the Deleuzian chapter and verse here, but the result, which is fairly easy to observe, is the intensification/purification of an ideological line of flight that would be unlikely to arise (or at least would not so easily arise) in face-to-face, territorial communication.

One doesn’t need to be a fascist to experience this, btw. All you have to ask is whether or not you are a more ideologically extreme/pure version of yourself online than in other aspects of your life. Or you might ask if the online communities in which you participate demand more pure ideological expressions than you might otherwise give. I think in many cases, this does happen. Rather than such investigations serving as an excuse for fascism (society made me do it), the point is to stop trying to peer into the soul of the fascist as if his secrets can be found there. What we need to understand isn’t in the hearts of people like Hovater. It’s in the mechanisms that turn those microfascist tendencies into a political movement.

Categories: Author Blogs

carving cognition at its joints

Digital Digs (Alex Reid) - 31 October, 2017 - 12:43

I've started reading Katherine Hayles' Unthought: The Power of the Cognitive Nonconscious. I have to say that I recognize (and am sympathetic toward) the difficult gyrations this topic demands in the humanities as one is called upon to establish various boundaries. In the first chapter, she creates a three-step pyramid composed of (from top to bottom) conscious/unconscious (that's one step), nonconscious cognition, and material processes. In the prologue, she makes certain to differentiate herself from those who might argue for vitalism or panpsychism:

One contribution of this study is to propose a definition for cognition that applies to technical systems as well as biological life-forms. At the same time, the definition also excludes material processes such as tsunamis, glaciers, sandstorms, etc. The distinguishing characteristics, as explained in chapter 1, center on interpretation and choice—cognitive activities that both biological life-forms and technical systems enact, but material processes do not. A tsunami, for example, cannot choose to crash against a cliff rather than a crowded beach. (3)

She then goes on to differentiate herself from those who argue over the human/nonhuman binary and observes that "It is fashionable nowadays to talk about a human/nonhuman binary, often in discourses that want to emphasize the agency and importance of nonhuman species and material forces" but that "there is something weird about this binary." Instead she prefers "cognizers versus noncognizers. On one side are humans and all other biological life forms, as well as many technical systems; on the other, material processes and inanimate objects" (30). For Hayles, this difference ultimately boils down to choice: cognizers have it, and noncognizers don't. All of this work seems oddly deflated when she then writes, "The better formulation, in my view, is not a binary at all but interpenetration, continual and pervasive interactions that flow through, within, and beyond the humans, nonhumans, cognizers, noncognizers, and material processes that make up our world" (32-33). Still, there is this matter of "choice" to deal with, which Hayles defines as an ability to interpret information rather than as "free will." For example, an autonomous automobile interprets information and makes decisions about how to drive. This makes the car a cognizer, unlike the tsunami.
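By way of illustration (my own toy, not Hayles'), here is a minimal sketch of what "interpretation and choice" look like at this deliberately low bar: a hypothetical route selector that converts sensor readings into an interpretation (a cost per route) and then into a choice (the cheapest route). The routes, readings, and weights are all invented for the example.

```python
# A toy "cognizer" in Hayles' minimal sense: it interprets inputs and chooses
# among routes. Nothing here resembles free will; different inputs simply
# yield different interpretations and therefore different choices.
# (Routes, readings, and weights are invented for illustration.)

def interpret(sensor_readings):
    """Turn raw readings into an interpretation: a cost score per route."""
    scores = {}
    for route, reading in sensor_readings.items():
        # Interpretation: weigh congestion, distance, and hazards into one score.
        scores[route] = (reading["congestion"] * 2.0
                         + reading["distance_km"] * 0.5
                         + reading["hazards"] * 5.0)
    return scores

def choose(scores):
    """The 'choice': select the route with the lowest interpreted cost."""
    return min(scores, key=scores.get)

if __name__ == "__main__":
    readings = {
        "highway":  {"congestion": 0.8, "distance_km": 12, "hazards": 0},
        "downtown": {"congestion": 0.3, "distance_km": 9,  "hazards": 1},
        "beach_rd": {"congestion": 0.1, "distance_km": 15, "hazards": 0},
    }
    print(choose(interpret(readings)))  # change the readings and the "choice" changes
```

The point of the toy is only that, at this level, a "choice" is a function of inputs, which seems to be exactly the deflationary sense Hayles intends.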

ahhhh, decisions, decisions.

I'm going to have to see how this all plays out in Unthought, but for now I'm guessing that Hayles would agree with the following. First, that cognizers arise from material processes/noncognizers… i.e., that we are not ontologically separate even though we have different capacities. Second, that whatever loose bonds tie together a human, a worm, and an autonomous drone as cognizers, the material processes and cognitive capacities associated with each have little, if anything, to do with one another. That is, until they encounter one another: when the drone sets its sights on the human or when the human figures out the worm is good for his/her garden soil, then there are obvious associations.

My inclination is more toward the "continual and pervasive interactions" approach. I don't think there is a class of cognizing objects (human, biological, and/or technological) that is distinct (except in an abstract/conceptual way) from a class of noncognizing objects. E.g., if we say a drone is a cognizer, then what about when it's turned off? Is an unconscious human a cognizer? I certainly agree that a tsunami does not exhibit the kinds of cognitive capacities we observe in worms or iPhones. That said, I'm not too concerned with establishing an absolute, ontological boundary between that which cognizes and that which does not. Instead, in conceiving of cognition as a capacity I mean to suggest that thinking arises in an encounter among objects. As DeLanda points out, a knife has a capacity to cut, but that capacity might never be realized unless it encounters another object that can be cut by the knife and a third object capable of wielding it. From a broad perspective, capacities for thought arise from encounters within material processes. Among biological objects those are evolutionary processes, or at least they start as evolutionary processes. Later they become social and technological processes. Of course Hayles is coming from the other direction, imagining an audience that will object to her expansion of cognition rather than one that will question its limits.

I should note, as an aside, that I find Hayles' chapter on new materialism somewhat mystifying, except as a somewhat typical example of straw man argumentation at work. First, new materialism is so broad and discontinuous that the notion that one can make sweeping claims about it is just irresponsible. Second, making claims about Deleuzian scholarship is probably even worse on the same grounds. One can have a conversation (not here) about whether Hayles' particular critiques of the particular texts she chooses to treat are valid. My concern here is with the broader claims. For example, she writes, "Despite their considerable promises, the new materialisms also have significant limitations. Conspicuously absent from their considerations are consciousness and cognition, presumably because of the concern that if they were introduced, it would be all too easy to slip into received ideas and lose the radical edge that the focus on materiality provides" (65-66). This is a claim that is fairly central to the argument she wishes to make about new materialism and the (corrective) relationship of her work to it. From my perspective, the assertion that consciousness and cognition are "conspicuously absent" from new materialism is just plain wrong. I mean, if you want to disagree with the treatment of these topics, that's one thing, but saying that treatment doesn't exist just strikes me as oddly (suspiciously, cagily) ill-informed for someone like Hayles.

But I want to get back to her connection of cognition with interpretation and choice. It is an admittedly low bar in terms of choice. E.g., the autonomous car "chooses" the safest, most efficient route to its destination. It's not an expression of will or freedom. If there were different inputs, they would result in different interpretations, which would lead to different choices. It's difficult to narrate human choices any differently. Conventionally, we call a choice any conscious cognitive experience in which we feel/believe we have options to choose among. In my view, choices emerge from relatively indeterminate encounters. As we know from experience, rhetorical situations are a common example for humans. Should we speak or not? What should we say? Often there are many different ways one might say roughly the same thing, and it can be hard to know how an audience will react to a rhetorical act. However, I would say that the capacity for thinking and choosing emerges in this rhetorical situation. Part of it has to do with the specific objects. Not every object can read Hayles' book. If it weren't in English, then I couldn't read it. Not every English-literate person has the disciplinary background to read this book. Not every disciplinary reader has the capacity to blog about it. Not every reader who might have sufficient access to and ability with technology to be able to blog has a long-standing blogging practice that would make writing a post about reading Hayles' book a likely option. Still, in that context, I might have decided not to write a blog post (or write it and not publish it), and I could have written something different. In fact I did write some other things that I later deleted; that's part of the process.

But to what extent are those choices coming from me? I won’t deny the experience of making choices. Of deciding right now that this post is really too long already, but that I still need to wrap it up. I guess what it comes down to is that I am very intrigued by Hayles’ concept of nonconscious cognition, but I find the way she sets it up to be very odd.


Categories: Author Blogs

planning for future miseries

Digital Digs (Alex Reid) - 24 October, 2017 - 13:33

I've been reading Adam Greenfield's Radical Technologies as I'm teaching it this week, but watching Bourdain's Parts Unknown episode this weekend about Pittsburgh also has me thinking along the lines of Greenfield's concerns. I selected a post title that sounds like it might be the name of a lost album recorded by The Smiths because I couldn't help developing an affective orientation toward Greenfield that is similar to the way The Smiths make me feel. That is, when Morrissey sings "I wear black on the outside because black is how I feel on the inside" or "Please, please, please, let me get what I want. Lord knows it would be the first time," I'm pretty sure he's actually unhappy, but there's also–at least for me–some irony, some gallows humor maybe, in the hyperbole. In other words, the angst-ridden, nihilistic youth is a part one plays, and in the spirit of knowing we were all young once, I have some experience with this. The similarly black-wearing cultural critic, garbed in what Ian Bogost once termed "the turtleneck hairshirt," is a related figure, maybe a few years older than Morrissey's youth (and again one with which I have some personal past familiarity). And Greenfield reminds me of that, of how critique–however trenchant its observations can be–can only ever move in one direction.

There’s a moment of self-reflection where he writes “as someone profoundly skeptical of the claims that are so often and so breathlessly made about technology” (201) as if to say, “I am like you, gentle reader. I acknowledge the breathlessness of this book I am writing and understand your skepticism.” But then that moment is quickly gone as he asserts that what he has “seen in the course of research for this book has convinced [him] that automation is an existential mid-term threat” (202). Much like the girlfriend in a coma, “I know, I know, it’s really serious.”

But let me detour. Two stories. First, I have this memory of an episode of All in the Family where one of Edith's friends gets a job working with Archie at the warehouse. She's unloading pallets from a truck with a forklift and observes that if they just built a ramp, she could drive the pallets right into the warehouse. Archie responds that it is his job to lift the boxes off their pallets and into the building. In short, he's a ramp, a simple machine. It's an old memory. I might have the details wrong. Point is, it's an old story.

Second story, the aforementioned Parts Unknown Pittsburgh episode. As you may know, Pittsburgh is often hailed as a model comeback city–making the switch from the failed steel industry to a hotspot for emerging tech companies like Uber. Pittsburgh didn't lose its steel jobs to automation. It lost them to globalization 40-50 years ago. So the current automation of the steel industry isn't so much a worry in western PA. Bourdain focuses on the old southern and central European immigrant communities that have been there for a century, the African-American community, and the failing economies of nearby steel towns. The overarching theme of the episode is that the economic excitement of a revitalized Pittsburgh, centered around universities like Pitt and Carnegie Mellon, isn't impacting the lives of these people. The obvious subtext is that many of these people are the Trump voters who swung PA and put him in office.

This has been an underlying subject of my research… we don't know how to live in the digital world; we need to invent ways to do so. Automation, in all of its forms, is among the most compelling ethical and social challenges. It will impact professorial careers if/when a combination of video lectures, "gamified" interactive learning modules, crowdsourced discussion forums, and algorithmic "robot" graders replaces teachers. But there are many careers that will go first, such as long-haul truck driving: a focus of Greenfield's.

The obvious, simplest answer is one that proves upon further investigation to be infuriatingly complex. It's possible, as Greenfield does again and again in each chapter, to trace the mechanisms by which technological development intersects with market capitalism to serve the objective of maximizing the efficient concentration of wealth and profit by drawing upon an ever-expanding cultural sphere. The obvious answer is that we have to choose a different set of values upon which to organize ourselves. I.e., we have to say that we aren't just going to make choices that maximize economic productivity or make the Dow Jones go higher. We have this odd notion that if we take away the ability to concentrate capital in these fashions, people will lose the drive to work hard, discover, create, and innovate. It's such an odd argument, of course, because the people who are actually doing this work aren't the ones getting the benefits of being in the 1%.

However I don’t think that’s the real challenge. The real challenge is that we are faced with actually choosing how we might organize ourselves and recognizing it as a choice while we are making it. That is, if we began by following the commandments of gods in a neolithic era and moved into following the dictates of human nature and the market in the modern-industrial age, what now? We can argue–in critical fashion–that we were always already inventing the logics and values governing our societies, that there wasn’t actually a god telling us what to do, but that’s not how those societies were themselves organized. Now we are faced with having consciously to make up our own rules. If we live in a world where work–or at least full time work–may not be necessary by the prior dictates of the free market, but we wish to work in ways we find meaningful anyway, how does that happen? If we can collect massive amounts of personal and civic data on human activity and use it to organize our social lives, do we want to do so? And to what extent? How do we voluntarily choose to look the other way? To be less efficient and/or possibly less just (for some qualitative values of justice)? And how do we adjudicate among the many different ways we might answer these questions? What knowledge do we require to make these decisions?

In my part of the mediascape, I more often encounter the critical correctives to the smug, overconfident claims and vaporware of technoculture than I do the claims themselves. Whether it's Bourdain or Greenfield, I take those voices seriously. I don't doubt the concerns that drive the critiques. But I also wonder where they lead. Yes, if we are in the process of building future cities and shiny new economies made of green information technologies, then let's be honest about them and let's not leave people behind. Greenfield concludes by calling on his fellow travelers: "people with left politics of any stripe absolutely cannot allow their eyes to glaze over when the topic of conversation turns to technology, or in any way cede this terrain to its existing inhabitants, for to do so is to surrender the commanding heights of the contemporary situation" (314). Personally, I think it might be helpful if "people with left politics of any stripe" stopped scheduling circular firing squads to attend. The question I am still left with is what happens after the critique? Or is critique just a literati version of doomsday prepping? I.e., something that can only be followed/concluded by apocalypse.

Categories: Author Blogs

partisan politics and the rhetorical capacities of media ecologies

Digital Digs (Alex Reid) - 10 October, 2017 - 09:48

Here’s an idea I’m thinking about developing into something article-length if I can find the right angle. It’s certainly been on my mind a fair bit. Basically it’s about the role of emerging media in the articulation of political identities and communities. At that level, it’s a longstanding topic. I mean we regularly talk about the role of the printing press in the formation of democracies, mass media and fascist/nationalist identities, the “culture industry,” “manufacturing consent,” and so on. The Pew Research Center recently published a report on the increasingly divided, partisan views of Americans. As a CNN article about the report opines,

There are lots of reasons to explain this increased polarization in the country. Self-sorting means we tend to live around people who agree with us all the time. The fracturing of the mainstream media has allowed people to only consume news and information that comports with their pre-existing beliefs. There’s also been a rise in tribalism — using the party you belong to to define not only how you see yourself but also how you see every issue — in the last decade-plus.

The Pew Report happens to report data going back to 1994. While one wouldn't want to mistake that fact as a suggestion that something actually starts in 1994 (besides the data gathering), the report does chart a slow movement toward increased partisanship that really takes off in 2004 (particularly in terms of how political party identification predicts political views). The CNN article suggests one possible cause. Another related cause might be the introduction of social media, which facilitates the kind of tribalism mentioned. One might equally point to any number of historical events, starting with the war on terror and our invasion of Iraq, as topics that divided Americans along political lines.

It has to be said that it's not atypical for Americans to be divided. It's rare for a president to get 55% of the vote, and in a country where many voters don't vote, that means even the most popular of presidential candidates probably inspired no more than a third of eligible voters to vote for him/her. The 1860s and the 1960s are only the most obvious of the many decades when the country was more turbulent, and indeed more violent, than it is now. So I don't think the point is that we are more divided now than ever but rather that our division operates in new ways.

Furthermore, it's fairly obvious (to me anyway) that these partisan maps oversimplify the fractured nature of American politics. The Clinton campaign was unable to build/sustain a coalition of voters on the left. The Republicans don't appear to be any better off in terms of coalition-building. As such, if one were to look at the role of social media in the formation of political identities, it couldn't be that it serves simply to intensify Republican-Democrat divisions but also to intensify divisions within those populations. This Pew Report doesn't seek to explore that. It asks people's views on statements like "The government should do more to help the needy." That's fine as far as it goes, but we don't really coalesce around our dis/agreement with that statement. Instead we fracture over our identification of the needy and what should be done, so that some of the more intense disagreements are among people who would answer that question the same way.

To backtrack for a moment… if one wants to make the hypothesis/interpretive claim that social media facilitates the fragmentation of social identity, then presumably one would have to make the correlative claim that prior media forms served to homogenize social identities. Here one might be verging on some classic Deleuzian business about the shift from macrosocial identities in a disciplinary society to micropolitical identities in a control society. This is evident in the Trump campaign strategy, where analytics led to the micro-targeting of political messages to social media users and groups. The Russians apparently pursued a similar strategy in their efforts to affect the election.

So, in broad strokes, that’s the situation that interests me. The next question is what part a new materialist digital rhetoric (NMDR, just for the sake of my fingers in this post) might play in investigating it. Or put in more pragmatic, personal terms, how might I put my expertise to work here? In brief terms, my NMDR (there could be/are other ones) describes how capacities for rhetorical action arise among humans and nonhumans. To a certain degree I actively and consciously self-select my political associations; e.g., I consciously friend, like, share, retweet, post, comment, etc. One might seek to account for the other actors in those decision-making processes, but at minimum they pass through my conscious awareness. Then there are the data gathering and algorithmic processes of those sites that analyze my participation and make guesses about me. They are performing their own rhetorical activities of audience analysis and persuasion (though they can be overtaken, tricked, or abused by other interests as we see with the Russians and the whole fake news business in general). And there is the entire network of human and nonhuman actors that produces social media as something with which I might engage. What would Fb have been without smartphones and 4G networks?
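As a caricature of the algorithmic side of that loop (my own sketch, not any platform's actual system), the "guesses" can be as simple as tallying the topic tags of the posts a user interacts with and then routing messages tagged with those topics back at them. Every name and data point below is invented.

```python
# A deliberately crude caricature of algorithmic audience analysis: infer a
# user's salient topics from their interactions, then select which messages
# to show them. Real platforms are vastly more elaborate; the point is only
# the shape of the loop (participation -> inference -> targeted persuasion).
from collections import Counter

def infer_interests(interactions, top_n=2):
    """Guess a user's interests from the topic tags of posts they engaged with."""
    tally = Counter(tag for post in interactions for tag in post["tags"])
    return [tag for tag, _ in tally.most_common(top_n)]

def target_messages(messages, interests):
    """Select messages whose tags overlap the inferred interests."""
    return [m for m in messages if set(m["tags"]) & set(interests)]

if __name__ == "__main__":
    # Invented interaction history and message pool.
    interactions = [
        {"tags": ["tax-policy", "rural-issues"]},
        {"tags": ["tax-policy", "immigration"]},
        {"tags": ["local-news"]},
    ]
    messages = [
        {"id": 1, "tags": ["tax-policy"]},
        {"id": 2, "tags": ["urban-transit"]},
        {"id": 3, "tags": ["rural-issues", "immigration"]},
    ]
    interests = infer_interests(interactions)
    print(interests, [m["id"] for m in target_messages(messages, interests)])
```

Even at this cartoon scale, the loop performs its own audience analysis and its own persuasion, which is the sense in which I want to treat it as a rhetorical actor.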

If rhetorical action (and agency and cognition in general) are emergent relational capacities, then one cannot understand political identity in America without examining the combined role of social media and mobile technology. I’m not saying that it’s more than just a part of the puzzle, but I think it’s a significant part. At the very least, it’s a part that I am prepared to study. At the very least, I think we can agree that digital media ecologies–human and nonhuman–participated in the outcome of our election and many other political conversations. Not determined but also not neutrally mediated or transmitted. A better understanding of their rhetorical operation would seem useful regardless of one’s politics.

Categories: Author Blogs

Blade Runner 2049 and electrate film criticism

Digital Digs (Alex Reid) - 9 October, 2017 - 14:04

Blade Runner 2049 is a film that has generated some divided criticism. To borrow from the comedian Mitch Hedberg's story about his experiences in a band: "Some people loved us. Some people hated us. Some people thought we were ok." And really, what more is there to say about aesthetic judgment after the fact? Describing the moment of aesthetic experience, however, is something else. You watch a movie and you feel a range of things. Maybe some nameable emotion through an identification with a character. You also feel excited or bored or tired or interested or confused in some holistic way in response to the film, but not only to the film: also to your own body and the rest of the world around you.

Evaluative and analytic genres and tools offer a means to capture these thoughts and feelings. Shall we talk about plot and character? Or cinematography, sound, and special effects? The scriptwriting? The directing? The acting? Shall we read this symptomatically in terms of contemporary ideological concerns? Or perhaps in the context of the history of filmmaking or science fiction? Pick one or more. Why did you make that choice? Did such tools and choices shape your experience of the film? Probably, in some ways, here or there. It would be unusual not to ask as one watches a film, "What is this film doing?" "What sense can I make of it?" Of course then we aren't exactly watching the film. We are watching ourselves watch the film. An activity that is open to an endless series of refractions, as any analysis or evaluation can itself be analyzed or evaluated. One might fall into a wormhole of in-folding analysis and hardly experience the film at all. Indeed, the interpretive view insists that the film can never be directly experienced; all that one can really experience is one's interpretation of the film. An assertion that puts me in mind of an Emo Phillips joke quoted by Katherine Hayles in How We Became Posthuman: "'I used to think the brain was the most wonderful organ in the body,' he says. 'But then I thought, who's telling me this?'"

In a different ontological formation, though, the film and I are not separated by an uncrossable barrier but rather share the same messy material space. We're really not that different. At one point Ulmer writes, "Part of the point is that technics precedes 'humanity,' that a certain animal became human, fulfilled its potentiality, through the prosthesis of tools. With the Industrial Revolution (which is to say, since the inception of electracy), the dominant power in this relationship is on the side of machines. It has been said, in fact, that humans are the sex organs of machines." Or as DeLanda puts it, from the perspective of some future robot historian, "the role of humans would be seen as little more than that of industrious insects pollinating an independent species of machine-flowers that simply did not possess its own reproductive organs during a segment of its evolution." In such formulae, analysis and evaluation are probably little more than the memetic-genetic material for future nonhuman generations: grist for the data mills.

In other words, I understand why you… why we… want to analyze and evaluate films and other media, but we make a significant anthropocentric error if we believe these things are for us. Of course we can, and do, make use of them, and I'm not arguing that we shouldn't. However, if we have (or some of us have) slowly managed to get our heads around the idea that the universe might not exist for us and the planet is not here as fuel for our destiny, then can we make the tougher step to recognize the same thing about technology, media, language, and art? It's tougher because we might reasonably say that we have made those things with purposes in mind… but we realize that's only part of the story, right?

If there can be a robot historian, can there be a robot film critic? And what would it say about Blade Runner 2049? Following the common thematic tropes this and other sci-fi films present, and given the uncanny encounters with our own mechanistic and programmed operation, might we wonder if we are always already robot film critics? Or do we assert, like Deckard, "I know what's real!" And what would that be?


Categories: Author Blogs

(not) being a gun-man

Digital Digs (Alex Reid) - 4 October, 2017 - 13:29

One of the more well-known/cited passages of Latour’s work is on the subject of gun control and the quip “guns don’t kill people; people kill people.” In recognizable Latourian fashion, he argues that agency (and responsibility) arises across a network of actants. This is not an argument about legal responsibility, which is a different “mode of existence” as Latour would later put it. Instead it is an approach to describing what is happening. If human subjectivity emerges through exposure, as an exteriority as well as interiority, then the person with the gun, the gun-man, is an emergent subject. And it is not just the gun and it is not just the one person or even just the people who have guns in their homes.

By way of recognizing that the US is multicultural, we can see that there is a particular culture (or set of cultures) where owning and using guns are an integral part of the formation of subjectivities and the perpetuation of cultural institutions, practices, and values. Here are some statistics on gun ownership from the Pew Research Center. And here are some more. When one looks at the demographics in the links above, one can see a few trends. Gun owners tend to be white, not live in the Northeast, live in rural areas, and vote Republican. Is anyone surprised by that? Not all gun owners, of course, but it is quite evident that the cultural-political discourse on guns is primarily shaped by this demographic. Over time gun ownership has become a partisan subject, with 62% of gun owners voting for Trump. As the WAPO article linked here shows, gun owners have long voted Republican (though given that they are primarily white and male, it's hard to tell whether that's correlation or causation), but the gap has widened since 2008.

As Joslyn and Haider-Markel (authors of the WAPO article) conclude, just as gun ownership has become a partisan right issue,

not owning guns has also become a politicized identity, with gun-control groups expecting candidates to take particular positions. Sizable majorities of non-gun owners consistently vote for Democratic candidates, expanding during the Obama years – which, clearly, helps expand the “gun gap.”

In our highly polarized and partisan climate, gun-rights groups increasingly advocate owning guns to stay safe, while gun-control groups advocate regulation and restriction for the same reason. Watch for the “gun gap” to continue to expand and become ideologically even more rigid.

Clearly, the issue of safety raises the question of what is being saved.

For the partisan gun owners these authors describe, gun ownership is an integral part of identity. Indeed, the prospect of losing those guns is equivalent to the prospect of having their culture and self-identity attacked. I would suggest that for this population, this community, the whole mainstream discourse about gun control makes little sense. For them gun control cannot be about safety; for them, limiting their access to guns is the opposite of being safe. There are a thousand everyday things in your house that could kill you, and a gun is just one of them. They say owning a gun makes them safer, and I would not dispute the legitimacy of that claim on its own terms. And perhaps they have good reason to feel that their culture is threatened (though maybe not by the kind of intruder who breaks into one's home). Equally, those who are not gun owners express the exact opposite view.

For gun owners–and  for many on the other side of the argument–conflicts over gun control are primarily a struggle over the kind of people, the kind of nation, we want to be. You could imagine that gun control is a kind of symbolic argument, a stand-in for a larger set of differences, but I think it’s more than that.  To return to Latour (somewhat), these are ontological questions. People with guns are different from people without guns; cultures with widespread gun ownership are different from those where guns are far less common. This is not technological determinism. That’s not how Latour works. It’s guns-plus: guns plus many other actors, but guns are a kind of lynchpin.

I am not a gun-man. I've never owned a gun. I've never held a gun. I do not believe doing so would make me safer. I do not want to participate in the culture of gun ownership, and I do not share in the partisan politics it represents. Inasmuch as we live in a partisan and agonistic political environment, I will likely pursue my own political views as others do the same. For me, gun ownership and control is just one of many partisan differences. We can undertake efforts at rational argumentation, believing such approaches can lead to compromise and some resolution. And hopefully small steps can be agreed upon. Obviously no one wants another horrific mass shooting, but, for good or bad, that's not really the issue here; framing it that way mistakes the nature of the deeper divide at stake.

I think a better description of recent political history would recognize that over the last few decades cultural differences have been identified, activated, and intensified as political differences. Those cultural differences now signify something that they didn't for previous generations. Some of the shift is the result of political strategy, and some is likely a product of larger cultural-historical forces not easily attributed to individuals or political parties. Maybe it's fair to say that in the past there was an opposite strategy in place, one that sought to prevent cultural differences from organizing into strong political struggles. Can we unwind those intensifications? Do we want to? Would it be ethical to? Or are increasingly hostile partisan conflicts necessary?

Categories: Author Blogs

Invention, curriculum, and digital humanities

Digital Digs (Alex Reid) - 2 October, 2017 - 11:15

In the humanities' ongoing struggle to find their way back to wherever the students are (or lead the students back from wherever they are), one of the more written-about tactics involves the digital humanities. Basically the premise is that many students are STEM-focused, so connecting with more technical matters is a way to bridge to students' existing academic pursuits, and other students, who either are in the humanities (or arts or social sciences) or might choose that path, might appreciate a pathway to developing technical expertise they would not otherwise acquire.

For example, Carnegie Mellon recently announced a minor along these lines, called Humanities Analytics. As they put it, the minor "will provide technical training to humanities students — e.g. classes like "Machine Learning in Practice"— and humanistic training to technical students — e.g. "Intro to Critical Reading"— in the growing field of digital humanities." They have an interesting strategy for approaching this in terms of curricular design. In short, there are three required courses and three electives. If one is in the humanities, then one takes technical electives. If one is not in the humanities, then one takes humanities electives. The link above is just to a press release, so it's light on details. I am interested to see how it works out pragmatically. I'm thinking that if a similar minor were delivered here, students in the minor would find themselves in English elective classes with students not in the minor, and those courses might not be particularly focused on digital methods (or perhaps not even mention digital methods). So that would be a problem. I'm guessing CMU has a different kind of faculty from ours. I'm less certain how it would work on the technical end, where the courses sound mostly like Computer Science to me. A reasonable question to ask is what kinds of technical capacities one would develop from taking 3-4 courses (assuming one is starting from square one with no programming experience, high school math, and no background in statistics). That's not to say I don't think it's a good idea. I actually do. It's a way of getting students introduced to how these disciplines can speak to one another. For that matter, it's a way of getting faculty introduced to this idea.

It does spark my own thinking about such matters, which likely tends toward the overly ambitious. Now I see two ways to view "digital humanities." The first is as a scholarly specialization that employs digital technologies to undertake the study of traditional objects within a discipline (e.g., in literary studies the study of literature by means ranging from creating digital archives to data analytics and beyond). I view that as something other people do. The second perspective sees digital humanities as the coming together of humanities disciplines with digital technologies and cultures, resulting certainly in the transformation of the former and hopefully a transformation (or at least better/new understanding) of the latter. And that is very much where I live, professionally speaking. It is with the second view in mind that I think about the design of an undergraduate major.

At UB, a Computer Science degree requires around 80 credits, about half of which are required CS courses. I'm guessing that's typical. Our English degree is 30 credits, which is also fairly typical. So I'm wondering if we could create a DH degree that was 50-60 credits of CS, math, and other technical courses and 30 credits in the humanities. Or maybe it wouldn't need to be quite that CS-heavy. A minor in CS at UB is 22 credits, but to that you'd probably want to add some math/stats courses, so maybe 30 credits on the STEM side and 30 on the humanities side. Either way, I would think those humanities courses would necessarily address technical-professional communication, data visualization, digital composing, etc., but would also have space for courses that address subjects from the other, narrower version of DH (e.g., a course analyzing a large corpus of literary texts) and courses focusing specifically on political, historical, and ethical issues. Probably those courses wouldn't all be in English; they could be history, philosophy, communications, art, media study, etc. I am very wary of the notion of dividing classes into the "practical" and the "theoretical," which I think serves no one. But I do think we could divide classes into four broad categories (which isn't to say there wouldn't be some overlap):

  • Programming
  • Computer science and mathematics
  • Media production (including writing)
  • History and culture

I think we’d still struggle at UB with getting courses in that last category that really spoke to the others, but we could probably offer one or two per semester. That might be enough.
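To make the narrower, corpus-analytics strand concrete, the "course analyzing a large corpus of literary texts" mentioned above might begin with something as modest as the sketch below: word frequencies and a crude type-token ratio across a folder of plain-text files. The folder path is hypothetical, and the sketch is only meant to suggest the entry-level scale of the technical work, not an actual assignment.

```python
# A minimal corpus sketch: word frequencies and type-token ratio for a folder
# of plain-text files. The path below is hypothetical; any folder of .txt
# files would do. Standard library only, so nothing extra to install.
import re
from collections import Counter
from pathlib import Path

CORPUS_DIR = Path("corpus")  # hypothetical folder of .txt files

def tokenize(text):
    """Lowercase and pull out word-like tokens; crude but fine for a first pass."""
    return re.findall(r"[a-z']+", text.lower())

def corpus_stats(corpus_dir):
    counts = Counter()
    for path in sorted(corpus_dir.glob("*.txt")):
        counts.update(tokenize(path.read_text(encoding="utf-8", errors="ignore")))
    tokens = sum(counts.values())
    return counts, (len(counts) / tokens if tokens else 0.0)

if __name__ == "__main__":
    counts, ttr = corpus_stats(CORPUS_DIR)
    print("most common:", counts.most_common(20))
    print("type-token ratio:", round(ttr, 3))
```

Nothing in it requires more than an introductory programming course, which is roughly the question raised above about what three or four technical courses can realistically provide.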

I guess the real question is whether or not there'd be students attracted to a curriculum like this. I'm guessing there would be. At UB anyway, there seems to be a fairly large cohort of students who go into CS and then figure out along the way that it isn't really for them. On the flipside, there are students in and around the humanities (and in communications and psychology, which tend to be default majors for some students) who don't really know what they want to do and sense they probably should be more technically literate than they are. They could just minor in CS, but those courses can feel so disconnected from their other work. This would be a way of addressing that. It's true these students wouldn't have the technical expertise of someone who is double-majoring in CS and math. Probably they aren't going to get jobs as programmers at Google or whatever. Presumably they've figured out that's not what they want to do (or they would be CS majors). On the other hand, they will develop abilities in rhetoric, cultural analysis, aesthetics, and particular kinds of research. They'll also acquire an understanding of culture and history and discourses for addressing political and ethical matters. Ideally they will embody the humanities' foray into the digital. I'm thinking such students will be as well prepared as conventional non-STEM majors (or even much better prepared) for many of the paths they might follow out of college.


Categories: Author Blogs

humanities, universities and sustainability

Digital Digs (Alex Reid) - 28 September, 2017 - 12:06

It's that time of year, when enrollments have been counted and academic job postings have begun to appear, that those in the humanities–though certainly not only the humanities–turn their minds to an uncertain future. A recent article in Inside Higher Ed carries on this tradition, comparing the shrinking tenure-track job market to job losses in the Rust Belt. At UB, TAs continue their protest for improved pay, many of them also worried about what they will face after graduation, while the school notes that people with PhDs earn 72% more than those with undergrad degrees… though as the commercial warns, past performance is no guarantee of future earnings. The author of the Inside Higher Ed article remarks on his journey from a Yale PhD in Classics to a career in technology and marketing. (It's good to hear those Ivy League grads are finding a way to land on their feet!)

In my thinking about these matters, the focus is on sustainability. Shrinking–or at least consistently low–undergraduate enrollments, growing–or consistently large–graduate programs, and stagnating tenure-track job markets do not make a recipe for sustainability. Obviously sustainability is a difficult mark to reach, in part because one has to ask what one wishes to sustain. On one level, these are necessarily local matters, as what will work for one department on one campus will not work for another. However, there is also a degree to which we share a collective fate. Some might view sustainability from a conservative perspective, meaning that what we are seeking to sustain is a particular tradition of intellectual-scholarly-disciplinary knowledge and culture. From this perspective one might say that it doesn't matter how small a discipline becomes as long as we sustain those traditions (which is not the same as desiring that the discipline shrink, of course). A different, more progressive perspective (if progressive is the right word, not sure) would emphasize the material strength and presence of the discipline, even if that meant abandoning traditions (which is not the same as seeking the destruction of those traditions). In the latter approach the question is how a department evolves from its current state in a way that makes it materially stronger, which probably means one or more of the following:

  • increasing the number of majors (and student enrollment in general)
  • becoming more integral to general education
  • increasing success with graduate programs (which has to do with things like time to degree and job placement)
  • improving scholarly productivity.

And these advances may or may not come at the expense of disciplinary or departmental traditions, though, that said, achieving them almost certainly requires figuring out ways to leverage one's existing strengths.

As I've written about in other recent posts, this semester I've come back to teaching undergraduates in the classroom. One thing that hasn't changed, from my perspective, is that while students are concerned about the careers they will pursue after graduation, those plans are often fairly nebulous. This seems entirely reasonable to me. I think of my own 18-year-old daughter in her second year at Pitt; she's a computer science and math major. While she has plans and intellectual interests, I don't think she has a particular career in mind. There are many opportunities that might arise from her studies. My impression is that a good number of students have a similar perspective. They want to understand the value of the courses they're taking, but I don't think there needs to be a direct correlation between the curriculum and a job activity in order for students to view a course as valuable.

In English we often cite central curricular principles around the activities of reading and writing. I think it's fair to say as a general rule that regardless of whether one is in a class in creative writing, literary studies, rhetoric, or one of the other sub-disciplines of English Studies, reading and writing are regular activities. Though of course such activities are quite common across classes on a campus, English Studies is nearly unique in the attention it pays to those activities not only in the classroom but in its research. From this, one might logically conclude that the discipline houses campus expertise in such matters.

And it does, to a certain extent.

What is trickier–and this brings me back to the question of sustainability–is where the limits of that expertise lie. Poets, novelists, literary critics, rhetoricians, and such have never really undertaken to be experts in literate practices in general. Instead, our expertise lies in specific literate practices–those of the writers and discourse communities we study and those of our own disciplinary-scholarly genres and communities. Rhetoric is the most expansive of these terms and could–in theory–include all literate practices, but any given scholar has a particular focus, and the discipline as a whole clusters around certain foci (e.g., composition studies and college student writers in first-year composition classes). At best one can say that rhetoric provides methods that are broadly applicable to the study of virtually any literate practice and might be used to assist one in adapting to new rhetorical situations. (That is, they might be the basis for both declarative and procedural knowledge.) However, even if one accepts that argument, that's a long, long way from the baseline claim of saying "take classes in our disciplines and you will learn how to read and write": a claim that is either deliberately misleading, while technically true (you will learn to read and write in a particular way), or demonstrates a serious misunderstanding of how reading and writing function. And I'm not sure which is the more charitable interpretation.

I’m not sure how we get from where we are to a more sustainable yet still recognizable version of ourselves. Our arguments often focus on insisting that others value us more for what we already do. That strikes me as more of the other strategy, the one aimed at our not changing. However, if we take ourselves at our word and say that our principles really are focused on reading and writing, then the question we might ask is: what do expert readers/writers do? What capacities do they have, as readers and writers, that set them apart? What makes them valuable? If the Greeks trained citizens to argue in the agora, what’s our version of that? Or rather, what are our many versions of that?

Categories: Author Blogs

Spending one’s time in the tech comm classroom

Digital Digs (Alex Reid) - 11 September, 2017 - 10:53

As I’ve been writing about recently, I’m teaching an undergrad tech comm class for the first time in a long time. We’re now a couple weeks in, and here’s my primary observation. It’s probably fairly obvious, and it applies not only to teaching technical writing but to almost any writing-focused class.

There really isn’t any time to focus on matters that are not directly related to the task at hand. 

Fundamentally I think of writing-focused classes as learning through practice/experience. I don’t think that’s radical. There are some observations I can make from my experience (and through some readings) that can make the task at hand a little easier and help students navigate past some of the more predictable pitfalls, but mostly it’s about doing.

For example, over the last week or so we’ve been working on the first assignment, which is making an infographic. Wednesday is our workshop day, and it’s due in a week. Basically this is what we’ve done, class by class:

  1. Looked at some infographics and tried to figure out what the genre was.
  2. Experimented with some freely available tools and discussed what topics we might select.
  3. Did some short readings from Slide:ology that discussed different strategies for the visual arrangement of information and ways to display data graphically.
  4. Discussed the sources that we’re planning to use and our initial plans for how we’re going to turn them into infographics.

Typically we spend 10 minutes or so with some in-class writing, work in small groups for about 15-20 minutes on a task extending from that writing, and then end with some class discussion. It’s a 50-minute class.

In the context of this work we discuss issues of accessibility, of cultural differences, of ethical, social, and political concerns. We take up rhetorical analysis in introductory ways to think critically about the information we are using and the way it is presented to us, and we give some thought to how that might inform our own compositional decisions. But all of that is folded into the activity of the projects themselves. The students develop declarative knowledge (i.e., what they know about rhetoric or technical writing) through procedural knowledge (i.e., the development of know-how). I don’t really think of this as a “process approach” as stereotypically defined, which, for good or bad, often devolves into teaching declarative knowledge, i.e., aiming for students to know about the writing process.

Instead, for me, having a practice/experience-based approach means relying on the experience of doing (and reflecting) as the primary means by which learning occurs. The consequence is that the classroom activity is aimed at always pushing that doing forward by being a time/place where the doing happens.

Am I anticipating scintillating infographics? No. The constraints of these free tools are fairly significant, and the students have little or no prior experience with doing this. The aim of the assignment (as part of the larger aims of the course) is twofold. First, through this experience students will hopefully gain an introductory understanding of the rhetorical and compositional challenges of visual communication: What questions need to be asked and answered? What processes are involved? What tools would one need to use? Second, again hopefully, students will find themselves on a path that they might choose to pursue. Maybe their infographics are not going to go viral; maybe they aren’t even as good as some of the others in the class. But they will have an idea of the path, at least the general direction, they’d need to take to improve and a solid notion of the next steps. In other words, infographics shift from being something they generally give little thought to (even though they probably see them often) to something about which they have some know-how, a genre that is available to them, at least provisionally.

I know a lot of writing courses are thematically and topically structured. I’ve taught that way myself, sometimes as a product of programmatic requirements. While I’m not in the business here of telling people how they should teach, what I’m experiencing right now, at least, is a sense of how hard it would be to help students work through this infographic assignment while also spending several classes critiquing infographics, discussing readings that critique the increasing role infographics play in journalism, government, and so on, or more generally studying the cultural-rhetorical dimensions of visual communication. Those are all crucial topics, each worthy of its own course. I’d be happy to teach a humanities general education course on visual rhetoric or to offer courses in a professional writing major along these lines. So this is really about a choice I’m making to make this course about composing these various things and seeing what we can learn from the doing.

Categories: Author Blogs

teaching technical communication again for the first time

Digital Digs (Alex Reid) - 1 September, 2017 - 15:21

As I’ve recounted here, for the last seven years I served as WPA in my department. As a result I was working almost exclusively with graduate students, teaching undergrads only during the summer, and even then the course was online. So this fall finds me back in the classroom with undergrads for the first time since the Spring 2010 semester. I’ve been told the nature of undergrads has changed a great deal since then. I guess I’ll find out. After the first week, though, I’m not sure I see a great difference. Perhaps it’s because my own kids are 16 and 18, so I have some idea of “kids these days.”

The bigger change for me is that I’m teaching technical communication, which I have taught many times over my career but not since before the release of the iPhone. In some respects, teaching a 200-level gen ed tech comm class is not that different. It’s still about process; about audience, purpose, and genre; and about learning to work collaboratively. But in many other respects the content has changed significantly. The other course I’m teaching this semester is a grad seminar on advanced writing pedagogy that’s focused specifically on technical-professional writing pedagogy, so I’ve been thinking about this question from that angle as well.

These are now all familiar considerations of both digital rhetoric and technical communication: the proliferation of data; the explosion of options for media and interactivity; the shifting rhetorical nature of collaboration and community via online networks; the implications of mobility for data gathering, use, and interaction; and the growing capacity of machine intelligence and agency. Quite obviously this is not all turning out well for us: worries about attention and cognition; fake news, etc.; the social media community shit show; privacy and surveillance; cybersecurity black swans; the uncertain future of work, of community, of nation. One might say there’s a fair amount to discuss that goes beyond how to write a clear set of instructions (or whatever one might imagine as the most tepid interpretation of technical communication). That said, there can be a lot at stake in clear instructions.

The current situation on the Gulf Coast is so emblematic of this. I don’t want to get into it right now because there are people in danger. However, years ago I did a little presentation about Superstorm Sandy and new materialism, which basically asked: what is Sandy saying to us? That is, it treated Sandy as a rhetorical performance, a kind of Latourian moment where the storm was a constituent in the parliament of things. So much data, so much capacity for analysis, so many avenues for discussion and collaboration, so many design tools and options, and some series of activities that result. Technical communication is all mixed up in that.

Getting down to brass tacks, the first assignment in the tech comm class is to create an infographic. Could one devote an entire tech comm class to visual data representation? Of course. This is really about calling some attention to this growing if not ubiquitous genre, getting some taste of what might be involved in making one, and thinking about audience, purpose, and genre once again. You can go look at the rest of the syllabus if you like. I’ll probably be writing more about it as the semester moves along. In some respects it seems like business as usual: a proposal, some instructions, etc. On the other hand, while the genres are abstractly familiar, it seems to me they’ve moved around quite a bit. Now we’re doing instructions on Instructables.com. They’re user-generated content rather than corporate documents. They involve taking pics or videos with your smartphone. They’re accessed by all different kinds of devices. They call upon a whole new maker community, as well as many traditional hobbies from cooking to gardening. In a techno-cultural, ideological context where the state macro-infrastructure is increasingly uninterested in supporting citizens while individuals and small communities hypothetically have unprecedented access to data and industrial capacities, do instructions become political action?

It all seems a little vertiginous to step into after a decade, but I hope to get my bearings.

Categories: Author Blogs