Showing posts with label infoecology. Show all posts

Monday, July 9, 2018

The funny side of systematic literature reviews

I find myself thinking about systematic literature reviews these days. It is an unexpected thing to be randomly thinking about, to be sure, so I guess that means I'm officially an academic now. My habitus is augmented.

The quickest way to introduce systematic literature reviews is through a detour to unsystematic literature reviews. The unsystematic approach is easy to grasp: you simply grab hold of any books or articles that seem relevant and start reading. At the other end of the reading process, you know more than you did before. This is generally a good way to go about learning (especially if you have a nice local library to draw from), and should not be underestimated.

It is not, however, systematic.

The lack of systematicity is something of a problem, though. Not to the learning process, mind, but to the performative aspect of being an academic. It is not cool or hip to say that you've read a lot of books and keep tabs on new articles in your field, and thus know a thing or two. This is not the image of a structured, rigorous and disciplined scientific mind that academia wants to project (both to itself and to the public), so something has to be done. A system has to be created, to let everyone involved claim that they followed proper procedure and did not leave things to chance. Thus, systematic literature reviews.

Depending on where you are in the process, the systematic approach can take many guises. If you are just learning about science and scientific literature, having a system in place to guide you through the reading is immensely helpful. It gives permission to look at a search result of 2931 articles and cut it down to a more manageable number. If it is a robust system, it specifies that search engines giveth what you asketh, and that you probably should be more specific in your search. Moreover, knowing which questions to ask the articles beforehand gives a structure to the reading, and allows for paying closer attention to the important parts. And so on, through all the steps. Having a template to follow answers a lot of questions, even if you find yourself deviating from it.
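The winnowing step described above can be sketched in miniature. This is purely illustrative: the articles and the inclusion criteria (publication year, peer review) are invented for the example, not taken from any actual review protocol.

```python
# Illustrative sketch: cutting a search result down with explicit,
# pre-declared inclusion criteria, as a systematic review protocol might.
# All articles and criteria here are hypothetical.

articles = [
    {"title": "Rhetoric of crowdfunding", "year": 2016, "peer_reviewed": True},
    {"title": "Blog post on funding", "year": 2017, "peer_reviewed": False},
    {"title": "Early survey of patronage", "year": 2004, "peer_reviewed": True},
]

def include(article):
    # Criteria decided *before* the reading starts: recency and peer review.
    return article["year"] >= 2010 and article["peer_reviewed"]

shortlist = [a for a in articles if include(a)]
print([a["title"] for a in shortlist])
```

The point the sketch makes is the same as the text's: the filter is explicit and repeatable, so anyone can verify which texts were excluded and why, whatever one thinks of the criteria themselves.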

When you've been at being an academic for a while, the presence of an adopted system can shield you from the burden of overreading. There are always more books and articles than can be readily read, and every text ever written can be criticized on the basis of not taking something into account. By using the system, the age-old question of "why did you choose to include these texts but not these other texts" can finally be put to rest. The systematic literature review lightens the load by defining exactly which texts are relevant and which are not. And thus, the rigorous and disciplined reading can commence, conscience clear.

Next up the abstraction ladder, we find another use of these systematic reviews. When research has to be summarized and administrated, it simply will not do to go with something as unscientific as a gut feeling. The scientists involved might know what's what, but this intricate insider knowledge is not easily translated into something outsiders can partake of. Outsiders, such as the non-scientist bureaucrats put in place to administrate the funding mechanisms that determine which research efforts are awarded grants and which are not. By strategically employing review systems that include desired results (and exclude undesired results), funding can be directed in certain directions under the guise of impartial systematicity. Administrators (or their superiors) can claim all the academic benefits of rigorously following the method laid out for all to see, while at the same time subtly steering research efforts without having to be explicit about it. It is systematic, disciplined and impartial, whilst also being ruthlessly political.

The key takeaway here is not that systematic literature reviews are bad (problematic, maybe, but not bad). Rather, it is a reminder that the presence of a system does not in itself guarantee a robust outcome. Like all methodologies, there are strengths and weaknesses to consider in each particular case, some more obvious than others. When a systematic review finds that only articles published by (say) economists are relevant to a particular issue, despite decades of scholarly publishing on the subject in other disciplines, the issue is not a lack of systematicity, but too much of it. A flawless execution of review methodology does not preclude asking what is up with such unrepresentative results.

I find it amusing that strategic and rhetorical dimensions of academia are obscured by reference to systematicity and specialized vocabulary (the terminology surrounding systematic literature reviews is something to behold). Not least because academics are the very people best positioned to problematize the living bejeezus out of just these kinds of subtle processes.

It's funny that way.

Tuesday, April 10, 2018

Musing on being less on Twitter

Over the recent months, I have found myself looking less and less at Twitter. This manifests itself in many forms, the most dramatic being that I nowadays only occasionally turn on my middle monitor, whose main use is to display a never-ending, live-updating stream of tweets flowing like a less stylized version of the Matrix. The monitor just stands there, a black mirror in portrait mode.

The strange thing is that Janetter - my ancient Twitter client that new users can not run due to long-forgotten arbitrary API limits - still runs, in preparation for the ever rarer occasions when I turn the monitor on just to see the flow of tweets again. As if closing the program would be some kind of definitive gesture, irrevocable once performed.

Less strange is that I find my thinking has changed. This is to be expected - as Byung-Chul Han noted, it is difficult to focus during a noisy party. But it is also more subtle than simply having less input to process. I find that I direct myself towards different company. Even if I were to think about something that happened to be trending on Twitter right this very instant, it would be from a different starting point, with different aims.

"Company" is the key term here, I suspect. Booth uses it to muse on the fact that we spend time in someone's company when we read their words, and conversely become company as others read ours. The quality of our company, both reading and writing, in many ways shapes who we are, and who we try to be. Good company inspires upwards, while bad company keeps you down.

In more Twitter-related terms, this manifests as an implicit demand to become company to those we follow and those who follow us. As we think through the issues introduced and reiterated by those in our timelines, we ever so gradually come to feel the pressure to add our own thoughts to the flow. After seeing fifteen tweets about something, it becomes almost a knee-jerk reaction to write a sixteenth. Even if we only just heard about something mere minutes ago, we feel compelled to have said something about it.

This dynamic creates a very specific and other-directed way of thinking. You build up a sensitivity to trends and keywords, and act on what you see. Others see this as well, and react to your reactions; the fact that you both see and react to the same things creates an immense sense of community; it is sometimes referred to as social media validation. It is company, good or bad.

This thinking is like riding a bike, though. True, once learned, you do not forget it. But if you've not ridden a bike in a while, there is a strong possibility that the muscles used to pedal things forward have become less muscular than you remember, and thus the going is slower than it used to be. You still know what to look for - the trends, the keywords, the subtweets - but it is an effort to care. An uphill effort, to combine metaphors.

Thus, on the ever rarer occasions when I power up my middle monitor, I see what is going on and how it unfolds. The impulse to contribute to the goings on and insert myself into the company, however, is not strong enough for me to do it as often and as energetically as I used to. I'm simply not in that frame of mind any more. My thoughts and words are directed elsewhere.

It is only prudent that I mention this somewhere. For future reference.

Thursday, March 22, 2018

That thesis I wrote about Patreon

I wrote a thesis about Patreon.

There are different ways of going about writing a thesis about Patreon. An intuitive approach would be an instrumental, goal-oriented investigation as to which strategies work and which do not. The findings of such an investigation could then be distilled into a simple list of do's and don'ts, which readers could implement in short order and (probably, maybe, hopefully) generate more revenue.

I did not write that kind of thesis. If you came here looking for simple, straightforward advice about how to run your Patreon page, then this wall of text is not for you. (Neither are the posts about my other theses, for that matter, despite them all relating to each other in interesting ways.)

What I did was seemingly simple. I asked a straightforward question, and saw where it took me. The question was thus: what is Patreon, and what does being on it do to you?

As with all straightforward questions, the answer turns out to be everything but clear cut and easy to summarize. In order to answer it, we have to answer a couple of sub-questions first, just to make sure everyone is on the same page.

Seeing as this was a thesis in Rhetoric (Americans call it Composition and/or Speech; the discipline has different names depending on where you happen to be geographically), the first of these sub-questions is what we mean by "rhetoric". To summarize hundreds of years of back and forths, there are two main answers to this question. The first is the (neo-)Aristotelian answer that it is the art of finding the best possible means of convincing someone in a particular situation. In this case, rhetoric would be a set of strategies for maximizing Patreon donations, with varying degrees of excellence in execution. The other answer looks at the situation as a whole and asks what it implies for those who participate in it, and if things could be done differently. In this case, rhetoric consists of analyzing what it means to have a Patreon page, which implicit assumptions inform interactions on this page, and how these assumptions might lead to outcomes that were neither expected nor beneficial for the participants.

As you might have gleaned from the gist of things, my thesis fell firmly into the latter category. Hence the lack of simple, straightforward advice in list form.

We need to keep the different kinds of rhetoric in mind, as the difference between them tells us something about what goes on with regards to Patreon. Specifically, we shall look at the concept of "ethos" and how it plays out differently in the two paradigms.

In the (neo-)Aristotelian framework, ethos is a means of persuasion. The word "ethos" connotes everything that is related to the person doing the talking, and how these aspects of self are being used to convince the audience to do something. In this case, the "something" is donating. There are many possible means, depending on who is doing the asking for donations. For instance, various ailments or difficulties can be leveraged to generate sympathy, which creates a willingness to donate. Similarly, skills can be leveraged to show how donations go towards new projects (e.g. donate so I can afford to make a new movie or whatever). Or a common goal can be invoked, along with a more or less defined correlation between donating and achieving this goal (e.g. most fundraisers and charity drives). And so on and so forth. In short, ethos is a means to an end, and it is used as such.

In the more modern framework, "ethos" is more akin to "ethics", in that it connotes a way of being in the world. It is not as directly interested in solving the problem at hand, as it is in understanding the communicative process in a wider context. For instance, it does not see communication in terms of problems to solve (in this case, how to get people to donate), but rather as a series of interactions which generate certain expectations on future interactions. It also emphasizes the role of choice on the part of the person doing the communication - they can choose to present themselves this way or that, and they do so on the basis of available knowledge and ethical propensities. A person does not present themselves in a certain way only in order to solve a problem, but also as a way of being in the world. A Patreon page is not just an invitation to donate - it is also a declaration: this is who I am and what I do.

This might seem like a subtle difference, and it is. Thus, an example is in order, to put the two perspectives in perspective.

Let's say we have a rhetor without any particular political opinions one way or the other. One day, he (let's make it a he) stumbles upon an alt-right blog, and notices two things. First, that it gets a lot of donations. Second, that it is very formulaic and uncreative, and mostly posts the same things over and over and over again with minor variations. Based on these two observations, he decides to hack the process and start his own blog in a similar vein. Not because he agrees with the opinions expressed, but because it seems an easier way to get an income than doing more labor-intensive work. After a while, his low-effort blog gets noticed by the true believers, and the donation money starts to pour in. Seeing as it works, he puts a little more effort into it, and eventually finds himself being a part of this political ecology. Not because he believes in what he writes, but because the donation money keeps coming his way.

Seen through the (neo-)Aristotelian framework, he has solved the problem. By presenting himself as someone who holds these particular beliefs, he manages to persuade his audience to donate money. He has succeeded with what he set out to do, and his audience is happy to see him keep at it.

As you might imagine, the modern framework is less than sympathetic to this course of action. For one, he uses his powers of rhetoric to exploit those who are vulnerable to this kind of industrially produced propaganda, in a sense preying on the weak. For another, his participation in this political milieu reinforces its message and makes it a more prevalent presence in the online spaces he frequents; there is strength in numbers, and he now numbers among them. Moreover, this is not the best use of his rhetorical skills, and he could contribute better things to the world than a low-effort repetition of insincerely held opinions.

In the former case, our fictive rhetor makes good use of ethos, as he manages to present himself as a fellow extremist, thus getting his audience to donate. In the latter, he fails his ethical obligation to be a good person whose presence in the world makes a positive difference when all is said and done. He has not been good company.

If you have read this far, you might have thought that we have moved rather orthogonally with regards to what Patreon is and how being on it affects its users. But I reckon you also understand why simply asking what to do in order to make donations happen is insufficient in order to understand what is going on. It is more than merely a quest to maximize the monthly donations, and the analysis has to widen in order to take all the relevant aspects into account.

With this in mind, we can pose the question of what Patreon is. In the simplest terms possible, it is a web site that allows people to ask for money from other people. Patreon also provides an economic infrastructure for getting said donations from here to there. Anyone can create a Patreon page and ask for donations on it. Moreover, they can present themselves in whatever terms they like in order to make these donations happen. This is, in short, it.

(To be sure, there are certain limitations as to who is allowed on the site, mostly relating to contradictory US social values. In order to keep things brief, I'm going to gloss over this fact with the quickness.)

This presents us with an interesting rhetorical situation. On the one hand, Patreon users are free to define themselves however they like, applying every bit of autonomy and rhetorical prowess they can muster. On the other hand, the very act of being on Patreon is a message in and of itself. Patreon exists to facilitate donations, and anyone who has a page is asking for such donations - even if they do not write anything on their page at all. There is communication going on between the lines whether the user acknowledges it or not. At the end of the day, a Patreon page is a Patreon page.

During the course of my thesis writing, I identified three strategies (broadly defined) for writing a Patreon page. Here, I present them in descending order of popularity.

The most common strategy is to describe what happens when someone donates. This is heavily encouraged through the system of rewards and goals; if an individual donates x amount of dollars, they get a reward, and if the accumulated donations reach a certain level, some action which could not previously be performed will now be performed. In this way, the relationship between the parties involved is well defined: everyone knows what will happen, and donors can weigh their options before choosing a course of action.
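As a toy sketch of this first strategy (the tier levels and rewards here are invented, and this is not how Patreon itself computes anything), the well-defined relationship between pledge and reward can be expressed as a simple lookup:

```python
# Hypothetical reward tiers for the first strategy: every pledge level
# has a well-defined reward, so donors know exactly what they get.
tiers = [
    (1, "patron-only feed access"),
    (5, "monthly behind-the-scenes post"),
    (25, "name in the credits"),
]

def reward_for(pledge):
    """Return the highest reward a monthly pledge qualifies for, if any."""
    earned = None
    for level, reward in tiers:  # tiers listed in ascending order
        if pledge >= level:
            earned = reward
    return earned

print(reward_for(10))
print(reward_for(0))
```

The sketch also makes the text's later point visible: each tier is a standing promise, and every new patron adds to the obligations the creator has contracted into.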

Another common strategy is to not have rewards, but to frame donations as encouragements to keep whatever activity is at hand going. The donation becomes its own reward, as it were. There are still overall goals (e.g. at x amount of total donations there will be an upgrade of recording equipment) but individuals are not rewarded above and beyond knowing that the thing they enjoy can keep doing its thing.

A less common strategy is to flat out not reward donations at all, but accept them nevertheless. This might be done for tax reasons (some jurisdictions exempt gifts from taxation, and explicitly not giving anything in return qualifies the exchange as a gift rather than a business transaction). They might also do it to avoid getting into a situation where gratitude is required (those who choose to donate even though they know they will receive nothing in return know that this is not a purchase). Or it might simply be because the user simply can't be bothered to think of something to write. There are no goals, no rewards, but the option to donate is open nevertheless.

It would seem at first glance that this last strategy is counter to the whole concept of having a donation page. But - as we saw earlier - simply having a Patreon page is a message in and of itself, and sometimes this is enough to get the point across.

All of these strategies deal with the tension between freedom and autonomy. Freedom means doing what you want to do, while autonomy means defining your own laws (or, in this case, your own goals). The tension comes into being whenever you want to do something that requires more effort than simply doing it. For instance, reading a book requires that you keep reading until you've read all you decided to read. At any point you are free to stop reading, but if you want to finish the reading, you have to make the decision to limit your range of options until it is completed. If you set a goal for yourself, you also have to discipline yourself until the goal is achieved.

The tension here is that both freedom and autonomy are limitations of each other. The defining characteristic of autonomy is that you choose your own rules and goals. Once you set upon the path of realizing the chosen course of action, however, you must limit yourself to doing the things that lead to attaining the goal. Not because someone else tells you to, but because this is what you decided to do. Whether it happens to be reading a book, finishing an education, or performing some other feat, the dynamic remains the same: once your decision has been made, you have to stick to it. Even if you at times feel like doing something else.

An example of this (to stick with the literary theme) is writing a book. The only way to finish it is to sit down and write. It might be tempting to go outside to enjoy the nice weather, or binge watch all seasons of Buffy, or go hang out with friends. At all points in time, you are free to go do these things. But if you ever want to finish that book - the goal that you, by your own volition, set for yourself - you have to set these freedoms aside and focus upon the task of writing.

Looking back on the three strategies outlined above, we can see how the tension plays out in each of them. The third strategy - that of not rewarding patrons - maximizes the amount of freedom in the relationship between parties. No reward is given, no reward is expected, and donations keep happening in so far as the donors find it in their interest to continue. The creator, for their part, can choose whichever creative direction they desire, unburdened by expectations and obligations. What you see is 100% what you get, take it or leave it.

This can be contrasted with the first strategy, that of giving specific rewards to everyone who donates a particular amount of money. Here, autonomy is maximized, in so far as the creator can choose which rewards are awarded at which levels of donation. However, over time, this might lead to the creators finding themselves spending more time than initially expected making sure that donors get their just rewards. Making a donation is, in a sense, to enter into a contract, and it is up to the creators to live up to their part of the bargain. The freedom of the present is bound by autonomy expressed in the past. (Whether this is a productive relationship between creators and donors, or an inescapable iron cage where next month's rent depends on cranking out yet another unit, is always a contextual question.)

The middle strategy is, of course, a combination of the two. A degree of freedom is maintained, but if donations reach a certain level, something will happen. This something, while it is not a reward or contract in the same way as we saw above, is still a promise, and as such brings with it the obligation to fulfill it. (If nothing else, it looks - and sounds - bad if the audio recording equipment has not been upgraded for months and months after reaching the goal.)

I should stress that there is nothing inherently wrong with aiming for either autonomy or freedom in these matters. The point of this wall of text is not to say that you should do either instead of the other. Rather, the point - the thesis, as it were - is that you ought to make an informed choice when you create a Patreon page, and write it in such a way that you can live with who you potentially become. Giving lots of rewards is labor-intensive, but it is also an efficient strategy to get those donations to happen. Conversely, you might find that your creative efforts are hampered by the amount of extra effort you have accidentally committed yourself to. It all depends on who you are and what you are about.

Seen in this light, we are rapidly approaching an answer to the question of what Patreon is and how it affects its users. Moreover, we are able to ask new and interesting questions with regards to the ethos/ethics of online donation services. Given that Patreon users are free to define themselves and what they do (and for how much money this will be done), the tension between freedom and autonomy becomes front and center. Having a Patreon page becomes not only a way of asking for money, it also becomes an act of self-definition: this is who I am and what I do.

So. Donate to my Patreon, maybe?

Monday, December 18, 2017

Spoiler warnings and you

There is a new Star Wars on the loose. With it comes the division of the entire human race into two categories, as radical as it is universal: those who have seen it, and those who have not seen it. The gulf between these categories is immense and absolute; there are no in-betweens.

Except, of course, in the case of spoilers.

Given that I at one point was a media studies major, spoilers are utterly irrelevant to me. There is no "right" way to consume media - there are only degrees and ways of paying attention. Knowing how something ends prior to seeing it does nothing to change the experience. Everything important lies in how the medium is being used (and sometimes abused). The narrative aspects are a part of that, but there are many other parts of equal importance, and a movie is at all times the interplay between all of its parts.

To be sure, there are movies that rely heavily on surprise endings. Good ones are described in terms of subverting expectations; bad ones in terms of deus ex machinas. If a movie does not hold water despite the surprise being foretold, it is not spoiled - it was always-already a bad movie. We do not rewatch favorite movies because they surprise us, but because they are good company. If a movie is bad company, it will be thus even if someone already told you the butler did it. The goodness and badness lies not in you having knowledge - it lies on the level of production, geometry and acting.

Very few viewers found themselves disliking the recent remake of Murder on the Orient Express because they knew how it ended. The enjoyment and/or dissatisfaction lies elsewhere.

Looking around on social media tells me that this is not a view widely held. There are people posting spoilers, people yelling at the aforementioned group for posting spoilers, and people decrying the posting of spoilers in general. It is something of a trending topic, especially in relation to the new Star Wars movie. Posting spoilers is framed as akin to murdering the movie, a sin beyond the pale. Friendships have ended over it.

It is interesting to note this difference in perspectives on media. On the one hand, there is the view that spoilers are irrelevant. On the other, the view that spoilers are everything. Both are valid experiences of being human. The fact that both views can coexist and seldom interact with each other tells us something about this world we live in.

I do not know exactly what it tells us. But it would be nice if someone posted a spoiler of it.

Saturday, November 11, 2017

Small logistics

There are a large number of small things that are easy to learn, yet which at the same time are utterly impossible to figure out. If someone shows them to you, they look like the easiest thing in the world, but if you have to speedlearn them on your own, difficulties ensue.

A dramatic example of this is a young man finding himself in the situation of having to unclasp a bra. It is a very small thing indeed, and the logistics involved can be performed without much thought, and yet. Difficulties ensue. Possibly also a non-zero amount of fumbling.

Similar (possibly, but not always, less dramatic) instances of small logistics occur just about everywhere, most of them having become so routine it takes an act of effort to notice them. Computer interfaces, what to say when ordering fast food, the art of performing an academic citation - these are all instances of small logistics where the knowing of how to get it done has merged into the back of one's mind. Once upon a time you had to learn these things, before they became obvious.

It pays off to pay attention to these things. Not only do you become aware of what you are (quite literally) doing, but you also gain the opportunity to think about other ways of doing these very things. And, if you notice someone not quite knowing how to move things along, you gain insight into just what they need to learn for future reference.

It's the little things, as the saying goes.

Saturday, October 14, 2017

The application of memories

Sometimes, you stumble upon a song you haven't heard in a while, and go "oh yeah, I remember this, this exists". It sparks a memory of times past, and of the emotional equilibrium (or lack thereof) that went along with them. It might be a strong memory, or a passing one. Either way, the memory chord is struck.

Most of the time, nothing much comes of it. You just remember the memory, and then move on. It is the way of things. The world is big and contains many memories.

Sometimes, you stumble upon a song from an artist you only ever heard the one song from. Out of curiosity, you decide to check if there were any other songs made back in the days, and if they are anything like what you've heard so far. After some listening, you discover that there are and that they aren't. In fact, the rest of the artist's production is nothing like that one song; it is an unexplored field of newness that awaits personal discovery.

At times, this is how new favorite artists are found.

To be sure, this process has been made simpler through systems of file sharing - whether they be Spotify or discography torrents. Any time you remember something, the option is always there to round up everything this person has ever done and peruse it. All that is needed is a memory, and a name.

It is one of those things that is easy to take for granted. But it is useful, nonetheless.

Monday, September 11, 2017

What Mastodon needs (and then some)

Mastodon participation requires non-trivial levels of literacy.

I need you to look at this statement. It is not a condemnation, it is not an accusation; it is merely a statement of fact. An important statement.

If we look closely at the statement, we see that it includes five components, which can be parsed thusly:
Mastodon
participation
requires
non-trivial levels [of]
literacy

Depending on which part we choose to emphasize, the statement will take on different implications. I suspect that the most immediate reading is to emphasize the "non-trivial levels", and hurry to the conclusion that we need to take action to lower these barriers to participation. While this is by no means a wrong conclusion - removing barriers to participation is seldom wrong - it is not the only conclusion.

Let's look at the statement as a whole. What does it mean that Mastodon participation requires non-trivial levels of literacy?

It means that you have to be able to read, and be able to read well, in order to get things done. Not only do you need to be able to look at words and know what they mean - you have to be able to look at who is saying them, when and why, and from all this contextual information piece together what is going on. Above all this, you have to navigate the situation - both as it stands at any particular moment, and in a more general overarching sense - in order to figure out how to appropriately respond to what's going on.

Not to put too fine a point on it: this is a non-trivial amount of literacy.

Depending on where we place our emphasis, we end up with different questions and different calls to action. What does it mean for Mastodon to require something? What even is Mastodon, and who gets to define it? What does participation mean, and how do we organize it? Does literacy include the capacity to code?

This post is not meant to answer any questions, or even to pose them in anything resembling a comprehensive fashion. Rather, it serves as something to anchor your thoughts on as the Mastodon project toots forward. And as a reminder that:

Mastodon participation requires non-trivial levels of literacy.

Thursday, August 31, 2017

Discursive notches

There is a strange process afoot, which I suspect is easier to describe than to explain. In its most basic form, it goes something like this:

Someone has an online presence, most commonly in the form of a content creator. They describe themselves as rational, skeptical and free-thinking, often with an undertone of anti-authoritarianism. They position themselves in opposition to conservatives on a number of issues, for instance when it comes to the role of religion in politics. Their god-terms (to wit) are science, rationality and skepticism, with the corresponding devil-terms of religion and tradition.

Fast-forward a couple of years, and things have radically changed. While there might be lingering traces of skeptical roots, the overall tone and messaging have changed. If the tone was polemical before, it has now intensified and become increasingly specific. The prior focus on denouncing anti-scientific sentiments has been replaced with denouncing leftist SJW feminists, wherever these may be found. Similarly, the notion of free-thinking has been replaced with what can only be called a liturgy: there are a number of stock phrases that are used almost verbatim by members of the community.

The transition from the one type of person to the other seems contingent to me. Out of all the possible developmental paths things could have taken, this one underwent the formality of actually happening. Things could have been different, but they are not.

The question posed by this state of things is: why? What led these self-avowed critical thinkers to join the relentless chant against the so-called SJWs?

A less obvious question is why those who, today, display interest in the skeptical line of thinking tend to follow the same trajectory as those who did years ago. What compels them to undertake the same journey, even though the present-day discourse bears little resemblance to the source material? What discursive notches are at play?

There is a strange process afoot, which I suspect is easier to describe than to explain. A first step in explaining it is to notice it.

Saturday, August 19, 2017

Intersectional lines of flight

In the most recent anomaly, I use the concept of international supply chains to illustrate the possibilities of intersectional analyses. It is both a joke and an illustration: a joke in that it is not a concept you would expect to see in a text on intersectionality, and an illustration in that there is no real reason why it could not be included in an intersectional analysis. One would have to make a case for including it, but that goes for every other methodological aspect as well, so it is not unique in that regard.

There are always more potential analyses than actualized ones. This is due to the fact that it is easier to come up with ideas than to go through the months-long, painstaking process of gathering and processing the data. There really is nothing stopping anyone from saying "hey, we should analyze x in the light of y" - the only effort involved is to have the idea in the first place. And ideas are plentiful.

If you've read your Feyerabend, you can have ungodly amounts of fun generating ideas for potential analyses about the most counterintuitive objects from the most unexpected of angles. Indeed, if you've read your Giddens, you have seen it in action; that famous introduction sure is effective in showing how coffee is not just a beverage but also a social institution, a major economic commodity, a marker of social status, and a whole host of other things condensed (and percolated) into one singular thing. There are no real limits to how many approaches you can use - in theory and in mind.

In practice, limits abound. Some limits are related to energy - you only have so much of it. Some limits are related to genres and conventions - you are expected to follow the written and unwritten rules for how to go about things. Some limits are related to empirical applicability - some approaches simply will not work.

The first kind of limit is absolute. The second one is negotiable.

Among those who for whatever reason oppose the notion of intersectionality, it is common to make reference to the third kind of limit. "Atoms do not have genders", they might say, implying that an intersectional analysis of physics is impossible. More specifically, they imply that the objective (and thus scientific) ontic universe cannot be understood using the methods and concepts of the social sciences, and that true scientists should be left alone to pursue their important work unperturbed.

They are usually perturbed when an intersectional analysis points out that 'objectivity' is a gendered concept with roots in imperialist colonial practices, and thus cannot be used uncritically to convey what they want to convey. The fact that this is a successful application of intersectional analysis is shoved aside by the assertion that no, it isn't.

Thus, we find ourselves back at the second kind of limit. Genres and conventions.

If you read enough about intersectionality, you will eventually come across appeals to include animals in the overall roster of categories. In its mildest forms, this pans out as arguments to strengthen animal protection laws; if it is unethical to let humans suffer, then surely it is unethical to let other forms of life suffer, too. In more radical forms, we find militant veganism (though, to be sure, it is likely militant vegans found their way to where they are by other routes than methodological considerations). Somewhere between these positions, there is a point where it becomes unstrategic to include animals in your analysis.

It is not difficult to come up with intersectional analyses which include animals. For instance: there is a class (or, perhaps more fittingly, caste) system in place with regards to animals. Some animals (dogs, cats) are pets, and kept around the house. Some animals are slaves to be exploited to the fullest extent of their biology (mutated, deformed fowl who live their life in dark factories). Some animals are poached for their alleged medicinal properties (tigers, elephants). Some animals are national symbols (bald eagles). I probably do not need to flesh out the differences to successfully convey that there is something to be learnt by performing an analysis along these lines. Or that international supply chains might be involved somehow.

But.

It is unstrategic to perform such analyses. They do not get funded, for one. They also do not tend to be read with a sense of delighted gratitude; more often than not they are dismissed as prattling sentimental nonsense, along with their authors. There are limits to what a serious participant of contemporary discourse can say, and it is solid strategy to be aware of these limits.

Indeed, these very limits are rewarding to perform an intersectional analysis of. I would go so far as to say it is a good idea.

Friday, May 19, 2017

My computer broke down, can you learn it?

With the recent update to Windows being in the news (in no small part thanks to a computer-eating virus which eats non-updated versions), I've been thinking about how knowledge is situated. Which might seem like a strange connection to make, until you are confronted with this question:

"My computer broke down, can you fix it?"

This is a very common situation to find oneself in, especially if one has acquired a reputation for being able to fix computers. (Even if it only came about from helping someone change the background image that one time.) The knowledge required to navigate this situation is not, however, primarily related to computers. Mostly, it comes down to knowing the asker, their general level of computer literacy and the problems they've asked you to fix in the past. It is a very particular skill set, and over time you develop it through use and abuse.

The aforementioned recent update seems to have crash-landed a fair number of systems, if anecdotal evidence is anything to go by. This made me think about whether I could fix my system if it went down as well, and after poking around for a bit (and making an extra backup of all the things for good measure), I figured that I probably could, given time.

If someone were to ask me to fix the very same problem on their system, I probably couldn't. Not because of my admittedly limited skill in these matters, but because of the different situations in which the problem is situated. If it's just me and my broken computer, then I can take my time, tinker with it, fiddle with the knobs and overall do things that are not directly goal-oriented but which nevertheless get to the point eventually. It'd be a learning experience, albeit a terrifying one.

If it's not just me, then a whole host of other constraints and situationally specific conditions apply. For one thing, the asker might not have the patience for me learning on the job; they might want the situation dealt with and gone, and me taking my time is the opposite of that. There's also the added element of risk - tinkering is never 100% safe, and accidentally making the problem worse is equally the opposite of the solution. Being risk-averse is good, but it is also slow (yes, even slower), which overall is not conducive to getting things done in a brisk manner.

The point here is not that computers are fragile (though they are), but that knowing something rarely is a yes/no proposition. Oftentimes, we know something sufficiently well that if we were to try it out on our own we'd probably turn out all right, more or less. More often than not, the things we know stem from some first attempt that went in an orthogonal direction from well, but which nevertheless sparked the learning process that led us to where we are. We tinker, we fiddle, and eventually we figure things out.

Though, to be sure, having someone around who you can ask about these things as you go along learning speeds things up immensely.

Do be kind to their patient hearts.

Monday, April 3, 2017

Automated anti-content

So I was thinking about bots in microblogs today, and it occurred to me that they have the potential of being pure anti-content. A realization which, when stated in these terms, raises two questions. The first is "microblog, really?", and the second is "what is this anti-content you speak of?".

To answer the first question: yup, really. It's faster than describing a subset of social media platforms defined by short messages that are visible for a short period of time, mainly in the form of scrolling down the screen in real time. Gotta go fast.

The second question is more interesting. "Content" is a word that describes some kind of stuff, in general. It doesn't really matter what it is - as long as it is something and can fit into a defined media for a defined period of time, it is content. A person screaming into a mic for twenty minutes is content. It is as generic as it gets.

Anti-content, then. It is not generic, but it is also not original. An example would be the UTC time bot, which tweets the correct (albeit non-UTC) time once an hour. Another example is the TootBot, which toots every fifteen minutes. It is not content, but it is definitely something. You are not going to enthusiastically wake your friends in the middle of the night to tell them about the latest UTC update (though you might wake them about the toot bot), but you are going to notice them when they make their predictable rounds yet again.
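The mechanics of such a bot are almost comically simple, which is part of the point of anti-content. Here is a minimal sketch in Python; the `post_status` callable is a hypothetical stand-in for whatever client library the actual platform provides, and the message format is my own invention rather than what any real time bot posts:

```python
import time
from datetime import datetime, timezone


def build_time_toot(now=None):
    """Format the current UTC time as a short, predictable message."""
    now = now or datetime.now(timezone.utc)
    return f"The time is {now.strftime('%H:%M')} UTC."


def run_bot(post_status, interval_seconds=3600):
    """Post the time once an hour, forever.

    post_status is any callable that delivers a string to the platform;
    it stands in for a real client library call.
    """
    while True:
        post_status(build_time_toot())
        time.sleep(interval_seconds)
```

The whole thing is a clock attached to a loop: no message varies in any way a reader would find surprising, which is precisely what makes it familiar rather than interesting.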

Anti-content is not content. But it is familiar.

The thing about humans is that they like what is familiar. It provides a fixed point of reference, a stable framework to build upon, and - not to be underestimated - something to talk about. Stuff happens, stuff changes, but you can rely on these things to remain the same. And because they remain as they are, they can be used to anchor things which have yet to become and/or need a boost to get going.

Or, to rephrase: you can refer to them without accidentally starting a fight. After all, they have nothing to say worth screaming about. They are anti-content. And they are a part of the community, albeit a very small part. They say very little, but every time you see them, they remind you of the familiar.

And now that you have read this, you will never look upon these automated little bots the same way again. Enjoy!

Wednesday, February 22, 2017

Postmodernism, a primer

There has been a lot of talk about postmodernism lately, and the only thing larger than the distaste for it is the confusion about what it actually is. While it might be tempting to label this as a postmodern state of things, it's not. It's just confused, and confusion is not postmodernism. The latter might lead to the former, but that is the extent of the connection between the two.

If you've ever read a textbook that in some way deals with postmodernism, then you've probably encountered the introductory statement that the word consists of two parts - post and modernism. Post- as a prefix means that whatever it is affixed to happened in the past. When it is affixed to modernism, we get a word that means "the stuff that happened after modernism". Modernism came first, then postmodernism - in that order.

There are two main reasons for including introductory remarks of this kind. The first is that it has become tradition and convention at this point, and it's easier to latch on to what has already been established than to be creative. The second is that you cannot treat postmodernism as an entity unto itself - it has to be understood in relation to what came before. If you do not understand modernity, you will not understand postmodernity. The one came from the other, and it could not have happened in any other way.

It is vitally important to underscore this intimate relationship. It is a historical progression which is not merely chronological - the tendencies and practices set in motion in the modern time period kept going in the postmodern time period. They are linked, similar and connected.

The modern project was (and is) one of enlightened critical thinking. Traditional institutions, mainly those of monarchies and churches, were no longer to be seen as the absolute authorities when it came to the truth. Instead of relying on ancient authorities (or very present authorities, as it were), the moderns wanted to rely on science and reason.

An example of this shift from ancient authority to a more modern way of thinking is Galileo and the notion that the Earth goes around the sun. Using the tools at hand, Galileo figured out that Earth is not the center of the solar system. The traditional authorities, who held that the Earth was in fact the center, did not agree, and much ado was made about it. In the end, you know how it all turned out.

This ambition to test things by means of science and reason wasn't limited to one person and one particular way of looking at things. Over time, it became the default mode for everything - everything could be questioned, measured, re-examined and put to the test. Those things that were found to not hold up to the standards of scientific testing were thrown out, and those things that did hold up were expanded upon.

The scientific implications of this are fairly obvious: you can get a whole lot more done if you are allowed to freely use the scientific method, without having to make sure everything you find corresponds to what the authorities want you to say. Science builds on science alone, and its findings are all the more robust for it.

The social implications, however, are less straightforward. If long-held beliefs about the cosmos as a whole could be questioned and challenged, then so could long-held beliefs about things of a smaller and more private nature. If the church was wrong about the Earth being at the center of the solar system, then it might also be wrong about marriage, sexuality, and other social institutions. Everything is up for questioning. Everything.

This process of questioning everything kept going, and over time more and more things that were once taken for granted were put to the task of defending themselves. Everything that was once solid melted away, and what came instead was something completely different. Where once kings and bishops ruled, there are now scientists and bureaucrats. And marketers.

Mind you, this is all part of modernity. This is the part that came before postmodernism became a thing. Postmodernism is what happened after this process had been around for a while and become the status quo.

The thing about questioning everything is that you can't really keep doing it forever. At some point, you arrive at the conclusion that some questions have been answered once and for all, and thus that there is no need to go back to them. You begin to take things for granted, and enshrine them as the way things are supposed to be. There are other, more important things to do than reinventing the wheel. There is an order to things and a tradition to consider, both of which are as they should be. The product of modernity is a new range of authorities which dictate what is to be taken for granted and what is to be questioned.

Postmodernism is a return to the very modern urge to question everything and make present institutions answer for themselves. It is, in essence, a return to the modern impulse to trust reason and science rather than tradition or authority - even if these very same traditions and authorities have used reason and science in the process of becoming what they are. But instead of asking whether the Earth revolves around the sun or not, it asks: why do we do the things we do the way we do them, and might there not be a better way to go about it?

Postmodernism happened after the modern project. Post-modernism. But it is still very modern. It is modernity turned upon itself.

If you, after having read this, are slightly more confused about postmodernism, then that is good. It will have primed you for this next statement:

Academics stopped talking about postmodernism some decades ago, and are baffled at its return to fame in news and popular culture.

As final words, I say only this: its resurgence is not postmodern. It is merely confusing.

Thursday, December 22, 2016

Dress for the future you want, not the one you foresee

An understated aspect of my Discursive Anomalies is that they are not one-off affairs. I carry them with me, and try them out on things I encounter. They are, in a way, a toolbox. The nature of these tools or what situations they are meant to improve is as of yet unknown, and that is part of the point. When the time comes, the tools will be there.

Lately, I have been thinking back on the post about Jonathon Green's depiction of the 60s counterculture of Great Britain. It accomplishes something it really ought not to accomplish: by describing many contemporary constituent parts of a time period, without really piecing them together, it conveys a better sense of the times than a more integrated approach would. It is all nows: one now after another, juxtaposed in such a manner as to bring context through sheer numbers. It is not a point of view, but you end up with one nevertheless.

It is all very backwards, and all very straightforward. Integrated and holistic points of view are artifacts of hindsight, not readily available to those living in the moment. In the moment, there are only constituent parts, which disappear when we find ourselves with something more interesting to do.

I wonder what a similar depiction of our time would look like. What the distinguishing characteristics and vital constituent parts will turn out to be.

I suspect it would be a mixture of things we take for granted and things we cannot see due to being too close to them. The Trump election would most likely warrant a mention, alongside some massive landslide of a long-term change that happens on the other side of the world we have yet to see the ramifications of. The rattling of sabers on both sides of the old Cold War will probably be discussed as an ambient factor, but the real background tune of the future has every probability of being recorded in a suburb of an African town whose name we will never know. Perhaps meme culture will be a thing; perhaps it turns out a revived ancient tribal practice performs the same functions with far greater efficiency, sneaking in from the periphery.

History has a way of becoming those things that happened alongside those other things we paid attention to.

This state of things is a hopeful one. It implies that the world is not limited to what can be seen in the news. It also implies, through the same logic, that there are still surprises left in the world, ready to strike from so far out of left field that they cannot but be discursively anomalous.

It implies that we could be the one causing these unforeseen consequences, by engaging in some fit of passion that in hindsight turned out to be more important than we could have imagined.

That is a good future. We should prepare for it.

Sunday, October 9, 2016

Horse notes

I'm writing a thesis on horse_ebooks (because of course that's what I'd be doing), and one of the possible avenues of approach I'm investigating is Bakhtin's notion of genre. Because you are interested in the Horse, I'm going to share a few notes on this notion with you. To further a common understanding of the situation.

A classic understanding of communication and utterances is that someone wants to say something. They have some inner thought or emotion they wish to express, and in order to express it they turn to language. Using their understanding of grammar and their available vocabulary, they endeavor to produce some discourse that will hopefully convey the message to the listener. It's a directed process, from one self to another.

Bakhtin is not a fan of this classic understanding. Rather, he proposes we understand communication in terms of genres. While it is true that communication takes place between individuals, it's not a question of one person talking directly to another person. Instead, it is a question of a person in a particular situation talking to others who are also in the same particular situation, and this situation has distinct and non-subtle effects on how the things being said are interpreted. The situation is as much a part of the communicative effort as the individuals in it.

A trivial example of this is a wedding ceremony. Everyone gathered has a certain understanding of what is going on, and unless something out of the ordinary happens, the situation will unfold as expected. Everyone knows the genre of wedding ceremonies, and this knowledge informs how those present understand what is occurring there and then. And, conversely, that it'd be weird if someone would act in a manner not in accordance with this genre.

Someone suddenly standing up and giving a rousing oration on the need to lower import taxes would be extremely out of place, and possibly cause a minor scandal. Whether or not there's actually a need to lower these taxes is beside the point - there's a wedding going on, after all.

This kind of situational awareness is not unique to weddings, to be sure. It goes for all social situations, in general. However, there are only a certain number of such situations, and most of them tend to resemble each other over time. They become genres, albeit informal ones, and the understanding of those present informs what can be said in future such situations. If you are able to mobilize an understanding of the relevant genres, you will be able to make things happen in future situations pertaining to them.

The next time you hear someone relate an anecdote about someone acting strange at work, they are giving an account of someone not understanding the genres at work. There is a certain expectation of how people ought to behave, and someone didn't act in accordance with these expectations. To amusing or confounding effect.

I imagine you might be thinking to yourself - how does this relate to the Horse? Which is both an understandable and a crucial question.

Remember how Bakhtin wasn't a fan of the classic understanding of communication? How it's not about one person saying something in a void, but rather a process of shared understanding of specific situations?

This becomes relevant in the context of the Horse, as it becomes meaningless to analyze it in terms of semantics and intention. It does not try to convey some sort of message, and decoding what it might be intending to communicate is a pointless exercise. It is communication without a subject, as it were.

Yet, it has over a hundred thousand followers. Clearly, it accomplished something with its tweets. And my hunch is that Bakhtin's notion of genre as social expectations might help uncover what this is.

Friday, September 9, 2016

Against content

Containerization is one of the forgotten obvious aspects of modern life. Containers are everywhere, and many modern cities have huge areas dedicated to their loading and unloading. Containers move hither and thither, but unless you are actively working with logistics, you are not likely to think about them other than as something that has always been there. Indeed, it would be very strange and disconcerting if they weren't there - the rhythm and ambiance of daily city life would be perturbed without them at the periphery of perception. Containers are as much a staple as the goods they contain, as it were.

Like malls, if you've seen one, you've seen them all. One container is identical to any other container, except for the occasional external markings of corporate ownership. They are all the same size, weigh the same and handle the same. Which is the point. No matter where you are, you can pack things in containers and transport them anywhere else in the world. Wherever you go, there will be infrastructure ready to accept and process your container - since they are all identical.

The point of this standardization is to make it easier to move things around. Since the containers are all the same, it doesn't matter what happens to be inside them. Writ large, this means that the various trucks, trains and airplanes used to move things can be designed to move a certain number of containers, and set in motion once they have loaded the desired number. Moving any one container is the same as moving any other, and large amounts of content can be moved efficiently as it can all be processed through the same system, rather than in parallel systems that all move differently. One size fits all.

It might be surprising to find out that the process of containerization began rather recently, and that harbors, airports and train stations used to have trained crews on hand to load and unload different cargoes in the manners that suited them. Furniture had to be handled in a different manner than, say, foodstuffs, and each category of things had to have specialized infrastructure and institutionalized knowledge sets in place in order to be processed properly and efficiently. Which, as you might imagine, is more resource- and labor-intensive than having an all-encompassing system being able to process all the things.

This before-time is still in living memory, and there are plenty of stories of logistical mishaps to be told from those days. You have but to know whom to ask.

The reason for this text coming into being is not, however, the fascinating global process of logistical standardization in and of itself. Rather, it's how this same process has begun to happen in a more metaphorical way in the present. It can all be summed up in one singular word, and you will understand the significance of the above paragraphs once you see it:

Content.

The notion of content is problematic, to say the least. It assumes that all mediated things are, in some fashion, identical, and that the particulars of any given media artifact do not matter. Writing, movies, computer games, music - it's all content. In the standardized world of content delivery, it's all the same. All of human culture has been reduced to one singular ubiquitous gray goo, and the point of it all is not to distinguish one artifact from another, but to keep consumers busy with enough content to maintain a satisfactory profit margin.

This is a rather nihilistic view of culture, and if you spend too much time with it you end up thinking of your creative processes as content creation. You're not writing to express ideas or influence people; you're writing to give readers enough content to keep reading. You're not making music that will move souls and provide katharsis for a new age; you're filling out the minutes until you have enough content. You're not creating anything in particular, but rather a sustained generalized discursive noise that will keep your audience content - if you'll pardon the pun.

This is not to say that there aren't uses for such lines of thinking. Some things become easier to do once you realize that most of it is content - for example functional writing such as journalism or graduate theses. These things become less cumbersome to do once you realize that it's not about you, and that the main thing is getting words on a page. But it shouldn't be your only line of thinking about your creative processes, or even the main one. You're not doing what you do because you have to, but because you want to.

Content can be created by pressing record and screaming into a microphone for three hours. If we follow the logic of containerization of culture and ideas, we end up in a place where there really is no point to go those extra miles in order to say something in particular. When the aim is to fill out empty containers with content, anything goes. And it goes with expedient efficiency.

You're not a content creator. You're a writer, artist, game maker, musician - you're doing things in order to express something that wouldn't be expressed if it weren't for you. You're contributing to this world. You're a context creator.

What you do matters.

Keep at it.

Sunday, June 12, 2016

Ordinary people Twitter is not a slur

Recently, I've begun to think more and more about this phrase. Ordinary people twitter is not a slur. As a slogan, it lacks the necessary brutal directness of impact. As a subtle statement, it creeps up on you and surprises you when you least expect it.

In order to understand this phrase, it is necessary to understand what this mythical "ordinary people twitter" is. Who are these people, and what do they want?

Thing is. It is more of a negative identity than anything else. That is to say, it's easier to say what it is not in order to gradually approach an understanding of it, than to approach it head on with a declarative statement such as "ordinary people twitter is".

Ordinary people twitter does not have an emergency strategy for when hundreds of angry young men emerge in your mentions and threaten to spill over to your friends and family. It does not have such strategies for the very good reason that it does not need them. The thought of needing these strategies is absurd on the face of it - yet there it is.

But what do you do when your notifications are all about how much people hate you? What do you do when your family texts you to say that strange people are calling them? What do you do when they are outside your home, after learning the address from a public posting?

If you scratch your head in confusion at these questions and their relevance to twitter - congratulations. You are a solid member of ordinary people twitter.

The phrase "ordinary people twitter is not a slur" is a very nostalgic statement. It reminds us of a time when you could post that you were getting a sandwich and get at most one fav. Nothing much happened, and that was okay. You posted ordinary things about ordinary things, and that was that. Life moved on.

Being a member of ordinary people twitter is not a bad thing. It is a good thing.

But it suggests the extraordinary nature of the situation the rest of us find ourselves in. The extraordinary twitter. Those who keep a watchful eye open for people acting in explicit bad faith, and live with the awareness that twitter admins - working on the assumption that ordinary people twitter is the only twitter - won't do much of anything to help when the raging hordes come hording.

This is not a healthy state of mind. And it is not a healthy state of things. For anyone involved.

Thursday, June 2, 2016

The information complexity of bee sexuality

Recently I began to see people ambiently talking about bee sexuality. Which, as you might imagine, made me go wtf, until I stumbled upon the context (apparently, worker bees are all female, and Bee Movie got it wrong). Upon finding this out, the wtf factor disappeared, and so did my interest in the matter. But it did get me thinking about information processing.

Information processing happens in iteration cycles. The information differs from case to case, but the general process is the same every time, with up to five stages if the information is complex enough.

The first stage is the wtf stage. You have encountered something, and have no reference points for what it might be. The thing just exists, an intrusion into your ordinary mode of understanding the world around you. There are things that make sense, and there are things that do not. This thing is clearly in the latter category.

The second stage is the huh stage. You've been given or acquired some context to the thing, and started to make sense of it. You still don't understand it, but whenever you encounter it again, you can confidently go "huh, I've seen this before".

The third stage is the exploratory stage. You've begun to understand the thing, and are exploring the possibilities afforded by it. Thoughts that follow the lines of "if, then" are starting to enter your head, and you try it out just to see if the thens then. Just to see if you've actually understood the thing, and to satisfy your emerging curiosity.

The fourth stage is the experimental stage. You've grasped the thing, and now try to relate it to other things previously grasped. Using your accumulated body of knowledge, you try to find where the thing belongs and where it does not, and where it would produce interesting results if introduced. Some of your experiments will succeed, others will fail, some will fail spectacularly.

The fifth stage is the meh stage. You've understood the thing, done the thing, done the permutations of the thing, and know where to apply it to best effect. In short, you're rather bored with it, and can do it in your sleep or mindless working hours if called upon to do so.

Of course, this is not a thing that happens once and then never again. It happens all the time, all around us. Different people are at different stages, and that which engenders a wtf reaction in one person is a meh to another person. Nothing is static - everything is constantly processed.

The things to look out for are the iteration cycles. While these stages are pretty agnostic to the online/offline divide, the online has the advantage of faster iteration cycles. Things can go from wtf to meh faster than you think, and more things can undergo this transition in parallel than you imagine. Which means that, left to its own devices, the online can produce some spectacularly fast mehs, and generate demand for very particular wtfs that seem very far from the offline experience.

So the next time you stumble upon discussions of bee sexuality, remember this post. Introducing it to the context might produce some interesting results.

Tuesday, January 12, 2016

Noted

Sometimes, big things happen. Things of the nature of big celebrities dying. These things naturally generate a lot of attention, buzz and discussion - they wouldn't be big otherwise.

This sudden burst of activity tends to generate a secondary burst of activity. That is to say, comments on how there seems to be a lot of attention given to that big thing that's going on. Sometimes with a connotation that this is attention poorly spent, and that there are other things in the world to attend to.

This in turn spawns a tertiary burst of activity, noting that the secondary burst is if anything even worse than the first, as it gives those involved even more excuses to spend even more time and attention on the inciting incident. Which is the opposite of what they want.

As you might have already begun to suspect, this is an inherently iterative process, and has the potential to become a discursive perpetual motion machine.

The only winning move might be not to play.

Take note.

Monday, September 14, 2015

Metasocial by the numbers

I recently realized that I didn't have someone's phone number. Which, as I pondered the realization, made me think about the interplay of change and inertia. Of how as things change, they yet somehow remain themselves.

On the one hand, it would be a trivial thing to simply conjure it up using search engines. I have enough circumstantial information to narrow down the results enough to glean the signal from the noise (probably to such an accuracy that there will only be one search result). This, however, would be slightly stalky, and should I proceed along this course and send a text down the line, it might well be perceived as such.

On the other hand, simply asking them (or any of our mutual friends) about it might be construed and misconstrued in any number of ways. As you might well imagine.

This is interesting, as it shows that information is not socially neutral. Moreover, the way information is acquired is anything but socially neutral. It never was, to be sure, but it is even less so now. Even though information wants to be free, acting on it is not.

Plus ça change, plus c’est la même chose.

In thinking about this, I also realized that this line of inquiry is more interesting than actually, you know, getting a hold of that number. Which only goes to show that attention is socially awkward.

Tuesday, August 25, 2015

War and not-quite peace

Most of us have been forced to realize that war is something huge and hard to grasp. Not least due to the plethora of movies to be found on the subject. Some of them try to depict war as something glorious and heroic, making heroes of young men, giving even the scrappiest of lads a chance to perform great feats in the name of the State or the Nation (or Virtue, or whatever). Other movies give a different account, less optimistic and more realistic: open wounds, maggots, death, decay, psychic trauma, extreme stress, exhaustion, uncertainty as to whether there will be a tomorrow, the constant presence of very lethal and very mobile things with every intention of closing the distance, and, worst of all, bad food.

If I had such a bent, I'd quip about this second image being unnervingly close to everyday life. However, that would be making light of the topic, and that simply would not do.

Even everyday life is described in different ways, in movies and in other media. Just as in the depiction of war, there are cheerful rosy accounts insisting that there is goodness and beauty to be found even in the smallest of God's creations, should we just keep our eyes open. And, conversely, the opposite: the depiction of the relentlessly grey Monday mornings with their disdifferentiation of past, present and future into a brutal-eternal now, where the prospects of anything ever changing are engineered out of the realm of possibility. (That is, until the protagonist meets a manic pixie dream girl who changes everything. But still.)

If I at this point would say that there is an inherent similarity between war and everyday life, I'd be somewhat disingenuous. The fact that both war and everyday life have been depicted in similar ways at different times doesn't say anything about either war or everyday life, and to compare these discourses is more of a discourse analysis than a comparison proper.

There are thinkers who have endeavored to connect war and everyday life in a more concrete way. Paul Virilio writes in War and cinema about how the technologies we use to communicate with each other - radio, cell phones, internet, the works - were invented by the military for military use in wartime. Which might seem a simple restatement that war is the mother of all invention, and that when inventions have been invented it's hard to uninvent them, and that they might as well be put to use by those who need them. Which, to be sure, isn't much to phone home or tweet about. Virilio's twist on this is that the military paradigms that these inventions were first used in - the paradigms of war - slowly but surely are bleeding over into everyday life and civil society, along with the inventions themselves.

Before we continue this line of thinking, let's turn to Clausewitz. In his monumental book On war, he discusses just about everything there is to discuss about war. He gives us the rather counterintuitive definition of war as the continuation of politics through other means. This might seem odd, but think about how wars happen. It's not because two people hate each other - the hate tends to be a product of the war, rather than the other way around. Rather, war happens when a state sees something it wants in/of another state, and uses the military to procure these things. Politics through other means, as it were.

He then continues to differentiate between two types of objectives, present in every war. The first kind is the political objective, which is to say what the government of the attacking country (and the defending country, to be sure) wants to achieve, whatever it might be. The second is the military objective, which is what the military needs to conquer in order to secure the political goal (fortifications, strategic locations, transport networks, supply lines etc). If these objectives coincide, as when the objective is the annexation of territory, then achieving the one achieves the other. If the political goal is more diffuse, then the relation between political and military objectives is less clear. In either case, the projection of military force is a means to an end rather than an end unto itself.

The overall military objective, in any war, is to destroy the enemy. Or, rather, to destroy their capacity to enact and project resistance. If the enemy has been effectively and totally incapacitated (ponder this word), then they have lost. By definition, there is nothing they can do. When this happens, the only option is to surrender and give in to the political demands, whatever those might be.

We see here how politics and resistance are intrinsically linked to war as such. Politics is to want something, and if this want is to procure something someone else has, then this someone else can either mobilize a resistance to this will, or give in to it. There is no third option.

Returning to Virilio. The military inventions, first used in wars and related situations, have ever so slowly found their way into civil society. And along with them, the paradigm that necessitated and facilitated their use. The military has an endemic interest in keeping its troops ready to either attack or defend, to either overcome resistance or mobilize it. The ability to quickly and effectively organize large numbers of people has always been a key military interest, and is a critical component of every hostile situation.

As these military technologies have become civilian, military thinking has as well. We all carry cell phones, and we have grown accustomed to changing our plans whilst out in the field. Or out on the town. We do it more or less automatically these days, and feel strangely incapacitated whenever we - for any reason - can't do it. When our phone runs out of battery, it's not just our phone that's lacking in functionality. A part of who we see ourselves as is no longer operational. Which, incidentally, is how we see people who for whatever reason choose to live outside our infotechnobubble - nonfunctional people.

The thought of organizing ourselves in collective and efficient resistances (plural) became manifest in the wake of Gategate. [A Swedish 2010 event concerning two policemen demanding that a recording of them acting objectionably be deleted. The recording was subsequently recovered and spread far and wide through social media. The name denotes the fact that this took place near the gates to the Stockholm subway, which is to say a literal gate.] Before this, the possibility of quickly organizing a response to government abuse was latent, dormant; after, it became something of a civic duty to document and signal boost such abuses whenever they occurred. Whilst it is not always clear what these resistances attack or defend, the imperative still weighs heavily upon us in our daily lives.

Now, this is not to say that this is necessarily a negative thing. Being able to resist the inherent totalitarian tendencies of the state is a prerequisite for a functioning democracy. Being able to quickly organize a meeting on short notice in order to discuss what is to be done in a crisis situation can save lives. But it is worth pondering that while it is true that this country [Sweden] hasn't been in a formal state of war for over two hundred years, there is still a constant presence of war in our everyday lives. Not as a heroic adventure or a sudden onset of post-traumatic panic, but as an ambivalent gray something which is neither this nor that. It's in the air, but it's not something we usually think about. It's just there, waiting, a latent possibility inherent in being. The constant readiness to mobilize. At a moment's notice.

It is sometimes said that these new communication technologies have changed our lives beyond recognition. I think we've barely even begun to scratch the surface of this statement. Or even begun to suspect how many kilometers this surface extends. Old virtues become incommensurate with new realities; old imperatives subsumed by new ones. "Be a good person", they used to say. "Be able to resist", we now say. But resist whom, in whose name? What new objectives are posited by our politically mobilized selves and communities?

What even is this new everyday life we are suddenly living?

Originally published February 23, 2010