Thursday, July 27, 2017

In the mood for some discourse

The two most recent discursive anomalies share a theme. That theme is, somewhat unexpectedly, mood. Or, put another way: the way reading a particular text makes you feel, and how that feeling affects your thoughts.

In case you are reading in the future, the two anomalies in question are the ones about Hyde and Booth. Since texts are always retroactively present, you can sneak over to read them without missing a beat. Go on. These words will still be here.

Mood is an underrated concept. Sometimes it is dismissed outright, as part of the overall category of 'feelings'. At other times, it is seen as a distraction from the main point of interest, e.g. 'not being in the mood', 'being in a bad mood'. There is a tendency to see mood as something that happens beside the point - as if reality carries on without you while you are distracted by these irrelevant moods of yours.

Besides being both rude and bordering on gaslighting, these takes have the additional drawback of being wrong.

Booth is perhaps most explicit in his discussion of moods. One of his premises is that the reason you keep reading a particular text - a romance novel, a cartoon, a crime novel - is that you want more of whatever it is you are reading. The point is not to see if the lovers stick together, what the punchline might be or whodunnit, but to extend the present experience of reading, whatever it might be. The act of reading the text puts you into a certain (albeit at times intangible) mood, and it is this mood that fiction provides. Far from being a side point, mood is for Booth the express purpose of reading. And, by extension, of writing: to create an artifact in the world that conveys the kind of mood the author is interested in conveying, and thus to create an opportunity to explore this mood - both by experiencing it through reading, and by the creative act of criticism.

If you are a podcast listener, you might have experienced a peculiar kind of sensation: that of listening to people talk about something you are utterly uninterested in, yet finding the discussion itself fascinating and worthwhile. This is the mood Booth writes about; the state of mind the act of partaking of something puts you in, regardless of what the subject matter happens to be.

When Booth says that books are friends, this is what he means. You can pick them off the shelves and read for a while, and be comforted by their company; they raise your mood, as friends are wont to do. His approach to criticism is this: if what you have written can provide good company, then it has merit, and writing should strive to attain such merit. To be good company.

Hyde approaches the same theme from another angle, that of rhetoric and philosophy. Moods are not just something that happens while reading, but are the guiding principle behind our thoughts and actions. If we like the places we inhabit - dwell, to use his word - we will act towards them in certain ways, presumably with the intention to preserve and decorate these places. If we do not like them, the mood will be different, and our actions will follow suit. Mood is what motivates us: thus understanding mood means understanding ourselves and our place in the world.

The punk aesthetic can be understood in this light. It defines itself against the status quo and seeks to rebel against it. The point is to be something different than what is on offer by the powers that be. The fact that it is seen as ugly and vulgar by those who are attuned to the mood of the times is one of punk's express aesthetic purposes, and only adds to the appeal of those who share the sentiment.

Hyde maintains that seeing mood as a guiding principle places a certain ethical responsibility on us as discursive actors in the world. When we write something, we do not simply convey a certain number of facts in a certain order and with a certain degree of accuracy - we also convey a mood. More so when engaging in public speaking, as our presence defines the mood in the room with regard to the subject matter discussed. What we say and how we say it matters, and it falls upon us to think about our impact on those who listen.

Taken together, these two variations on the theme of mood give us a foundation on which to build further thinking about critical reading and writing. At its most basic, it allows us to ask what mood a particular artifact puts us in or is written to foster. It also allows us to reflect on our own writing, and ask ourselves if we convey the appropriate mood alongside what we want to say. At its most simple, thinking about moods this way asks us to pay attention, and to act on what we see.

More indirectly, the notion of mood gives us an opening to understand why certain people like certain works or genres. There is no shortage of writers and podcasters who do little else but repackage things that have already been said elsewhere, but who add the element of mood. Being able to understand that it is this mood that draws their audience allows us to understand why they do what they do - 'they' being both audience and authors.

A benign example is why readers like the rapt wittiness of someone like Jane Austen; the way she depicts social interactions and relations is a very distinct kind of mood indeed. On a less pleasant note, many partake of racist media just for the sake of the mood therein: hearing someone else talk about the negroes and their decadent ways gives permission to maintain that mood and mode of thinking. Keeping mood in mind allows us to understand - and critique - these things in a more interesting way.

Closer to home, it also opens the door to understanding home decoration. The point is not simply to look good, but also to suggest a certain mood. A sidenote, to be sure, but one that hints at the general applicability of these things.

I suspect that both works discussed above might be slightly obscure to the general reader. Booth published The Company We Keep in 1988, and Hyde's anthology The Ethos of Rhetoric came out in 2004. I also suspect that, should you have stumbled upon these books in the wild, you might not have found them particularly interesting - they are both, in a way, intended for specialized audiences. While the point of writing discursive anomalies about a particular thing is to encourage readers to pick up these things and read for themselves, in this case the point is more to convey the general mood of these two books. To introduce you to a concept you might otherwise miss.

But, then again: that is the point of most writing about writing. -

Monday, July 24, 2017

Human-level intelligences and you

There has been much ado over the years about computers becoming as intelligent as humans. Several goals have been set up and surpassed, and for each feat of computer engineering we have learnt that intelligence is a slippery thing that requires ever more refined metrics to accurately measure. Beating a human in chess was once thought a hard thing to do, but then we built a computer that could do it - and very little besides. It is a very narrowly defined skill being put to the test, and it turns out intelligence is not the key factor that determines victory or defeat.

Fast forward a bit, and we have computers giving trivia nerds a run for their money in Jeopardy. Turns out intelligence isn't the defining factor here either, on both sides. For computers, it's all a matter of being able to crawl through large amounts of available data fast enough to generate a sentence. For humans, it's a matter of having encountered something in the past and being able to recount it in a timely fashion. Similar tasks, indeed, but neither requires intelligence. Either the sorting algorithm is optimized enough to get the processing done on time, or it is not. Either you remember that character from that one soap opera you saw years and years ago, or you do not.

The win condition is clearly defined, but the path to fulfilling it does not require intelligence proper. It can go either way, based on what basically amounts to a coin toss, and however you want to go about defining intelligence, that probably is not it.

The question of computers becoming as intelligent as humans has ever so gradually been replaced with an understanding that computers do not have to be. In the case of chess, a specialized dumb computer gets the work done; the same goes for other tasks, with similar degrees of dumb specialization. Get the dumb computer to do it really really well, and the job gets done.

If all you need is a hammer, build a good one.
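To make the hammer metaphor concrete, here is a minimal sketch of what such dumb specialization might look like in code. The question table and the matching rule are invented for illustration; the point is only that the program either finds a stored answer or it does not, and nothing resembling intelligence enters into it.

```python
# A minimal sketch of "dumb specialization": a program that wins its narrow
# game (answering known trivia quickly) without anything resembling
# intelligence. The questions and answers are invented for illustration.

TRIVIA = {
    "capital of france": "Paris",
    "author of dune": "Frank Herbert",
    "year of the moon landing": "1969",
}

def answer(question: str) -> str:
    """Return a stored answer if the question matches, otherwise give up."""
    key = question.strip().lower().rstrip("?")
    # No reasoning happens here: either the key is in the table or it is not.
    return TRIVIA.get(key, "No idea.")

if __name__ == "__main__":
    print(answer("Capital of France?"))  # Paris
    print(answer("Meaning of life?"))    # No idea.
```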

A more interesting (and more unsettling) question is when a human becomes as intelligent as a human. This might seem somewhat tautological: 1 = 1, after all. Humans are human. But humans have this peculiar quality of being made, not born. As creatures of culture, we have to learn the proper ways to go about living, being and doing. And - more to the point - we can fail to learn these things.

Just what "these things" are is a matter of some debate, and has shifted over the years. A quick way to gauge where the standards are at any moment in time would be to look at the national curricula for the educational system of wherever you happen to be, and analyze what is given importance and what is not. There are always some things given more attention than others, some aspects promoted above others. And, at the core, some things are deemed to be of such importance that all citizens need to know them. Some minimum of knowledge to be had by all. Some minimum level of intelligence.

And there are always a number of citizens who do not qualify. Who are not, for any given definition of intelligence, up to it.

When does a human become as intelligent as a human?

Friday, July 14, 2017

Some words on media permanence

It is a strange thing about media artifacts that some of them age well, while others do not. Some can be forgotten for decades, only to find a new audience willing and able to engage with them. Others can not be revived as easily, and are thus consigned to reside only in the memories of those who were there at the time.

To be sure, this applies to things that are not media artifacts, too. Things happen, and after they have happened you were either there or you were not, and your memories of the event are shaped accordingly. It is a very important aspect of the human condition.

But the point of media artifacts is that it is possible to return to them at a later date. They are supposed to have some sort of permanence - it is a key feature. Books remain as written, pictures as pictured, movies as directed. It would be a substantial design flaw if these things did not last.

Though, then again, some things do not last. Books fall apart, movies fade, hard drives crash. Entropy is not kind to supposedly eternal things. Look upon these works, ye mighty.

But. All these things aside: some media artifacts age well, and some do not. Some can be readily introduced to new audiences, while others remain indecipherable mysteries even upon close encounter. There is a difference, and it is very distinct from the question of whether or not we're trying to jam a VHS tape into a Betamax player.

This difference can be clearly seen if we contrast Deep Space Nine and Babylon 5. Despite being from roughly the same time, belonging to the same genre and sharing a non-insignificant portion of plot elements, one of these television series is instantly accessible to contemporary audiences while the other is not. Though it pains me to say, it takes a non-trivial effort on the part of those who are not nostalgically attached to Babylon 5 to view it with contemporary eyes. A certain sensibility has been lost, and the gloriously cheesy CGI effects turn into obstacles to further viewing. Surmountable obstacles, but obstacles nonetheless.

The same goes for computer games. I imagine that, should we use the Civilization series as a benchmark, there would be different cutoff points for different audiences. For my part, the first iteration is unplayable, and I suspect many of my younger peers would balk at Civilization 2. I also fear that 3 or even 4 might be too steep a learning curve for those who were not there to remember them. Not because the games are inherently impossible to play, but because the contemporary frameworks for how games are supposed to work (and how intuitive user interfaces are supposed to be) have shifted between then and now.

A certain sensibility has been lost.

It would be a mistake to label this development as either good or bad. The young ones have not destroyed theater by their use of the lyre, despite all accounts to the contrary. These changes are simply things that have happened, and have to be understood as such. Moreover, it is something to take into account as yet another generation grows up in a society overflowing with media artifacts, old and new.

Some of these artifacts will constitute shared experiences, while others will not. Such is the way of these things.

Saturday, July 8, 2017

Care for future history

These are strange times.

Since you're living in these times, the above statement is probably not a surprise to you. In fact, it might very well be the least surprising statement of our time. Especially if you happen to have a presence on twitter, and even more so if this presence is in the parts where the statement "this is not normal" is commonplace, or where a certain president makes his rounds. The two are related, in that the former refers to the latter: it is a reminder and an incantation to ensure that you do not get used to these strange new times and start to see them as normal.

These times are not normal. These times are strange.

In the future, there will doubtless be summaries and retrospectives of these times. More than likely, these will be written with academic rigor, historical nuance and critical stringency. Even more likely, all the effort put into making them so will be rendered moot by this simple counterquestion:

Surely, it wasn't that strange?

We can see this future approaching. Less strange times will come, and frames of reference will be desensitized to the strangeness of our time. In a future where it is not common for presidents to tweet at the television as if encountering the subject matter for the first time, the claim that there once was such a president will be extraordinary.

Surely, it wasn't that strange?

It behooves us - we who live in these strange times - to leave behind cultural artifacts that underline and underscore just how strange these times were. Small nuggets of contemporaneity that give credence to the strangeness we ever so gradually come to take for granted. Give the future a clear indication that, yep, there is a before and an after, but not yet, and we knew it.

It is the implicit challenge of our time.

Better get to it.

Sunday, May 21, 2017

Concerning the Dark Souls of US presidencies

It has been said that the current president is the Dark Souls of US presidencies. Which, to be sure, has a certain ring to it, but it lacks the virtue of truth. Let's explore the issue for a spell.

Dark Souls is a series of games built around the notion of gradual player progression. The games might seem hard at first, but if you stick with them you learn how to overcome that difficulty and become good at what the games ask you to do. The difficulty is not mechanical - the challenges do not require superhuman reflexes or superior skills to overcome - but rather psychological. By failing, again and again, the player gradually learns what needs to be learnt. The reward for this application of patience is the opportunity to excel whenever new situations arise that require the very thing just learnt. It is the player leveling up, rather than the player character.

Meanwhile, in the background of all this character development, a world and its long tragic backstory are ever so subtly unfolding. It is not a simple backstory, where this happened after that, but a series of subtle implications of social relations and emotional states of mind. Complex social processes led to cascading catastrophic outcomes which in turn sparked other social processes which -

It is a deep and complex backstory, and for the sake of brevity, it will all be ignored. Suffice to say that much of it is left unsaid, and that the player will have to piece it together from archeological fragments, old legends and features of geography.

From this description alone, you might see what I'm getting at. Gradual self-improvement through patience, slowly unfolding understanding of past events through contextual knowledge, and the characterization of subtle states of mind - none of these things are applicable to the current president, even with excessive use of shoehorns or cherrypickers.

There probably is a past president that would live up to the title of the Dark Souls of US presidencies. But that is a topic for another cycle.

Friday, May 19, 2017

My computer broke down, can you learn it?

With the recent update to Windows being in the news (in no small part thanks to a computer-eating virus which eats non-updated versions), I've been thinking about how knowledge is situated. Which might seem like a strange connection to make, until you are confronted with this question:

"My computer broke down, can you fix it?"

This is a very common situation to find oneself in, especially if one has acquired a reputation for being able to fix computers. (Even if it only came about from helping someone change the background image that one time.) The knowledge required to navigate this situation is not, however, primarily related to computers. Mostly, it comes down to knowing the asker, their general level of computer literacy and the problems they've asked you to fix in the past. It is a very particular skill set, and over time you develop it through use and abuse.

The aforementioned recent update seems to have crash-landed a fair number of systems, if anecdotal evidence is anything to go by. This made me think about whether I could fix my system if it went down as well, and after poking around for a bit (and making an extra backup of all the things for good measure), I figured that I probably could, given time.

If someone were to ask me to fix the very same problem on their system, I probably couldn't. Not because of my admittedly limited skill in these matters, but because of the different situations in which the problem is situated. If it's just me and my broken computer, then I can take my time, tinker with it, fiddle with the knobs and overall do things that are not directly goal-oriented but which nevertheless get to the point eventually. It'd be a learning experience, albeit a terrifying one.

If it's not just me, then a whole host of other constraints and situationally specific conditions apply. For one thing, the asker might not have the patience with me learning on the job; they might want the situation dealt with and gone, and me taking my time is the opposite of that. There's also the added element of risk - tinkering is never 100% safe, and accidentally making the problem worse is equally the opposite of the solution. Being risk-averse is good, but it is also slow (yes, even slower), which overall is not conducive to getting things done in a brisk manner.

The point here is not that computers are fragile (though they are), but that knowing something rarely is a yes/no proposition. Oftentimes, we know something sufficiently well that if we were to try it out on our own we'd probably turn out all right, more or less. More often than not, the things we know stem from some first attempt that went in an orthogonal direction from well, but which nevertheless sparked the learning process that led us to where we are. We tinker, we fiddle, and eventually we figure things out.

Though, to be sure, having someone around who you can ask about these things as you go along learning speeds things up immensely.

Do be kind to their patient hearts.

Monday, April 3, 2017

Automated anti-content

So I was thinking about bots in microblogs today, and it occurred to me that they have the potential of being pure anti-content. A realization which, when stated in these terms, raises two questions. The first is "microblog, really?", and the second is "what is this anti-content you speak of?".

To answer the first question: yup, really. It's faster than describing a subset of social media that is defined by short messages visible for a short period of time, mainly in the form of scrolling down the screen in real time. Gotta go fast.

The second question is more interesting. "Content" is a word that describes some kind of stuff, in general. It doesn't really matter what it is - as long as it is something and can fit into a defined medium for a defined period of time, it is content. A person screaming into a mic for twenty minutes is content. It is as generic as it gets.

Anti-content, then. It is not generic, but it is also not original. An example would be the UTC time bot, which tweets the correct (albeit non-UTC) time once an hour. Another example is the TootBot, which toots every fifteen minutes. It is not content, but it is definitely something. You are not going to enthusiastically wake your friends in the middle of the night to tell them about the latest UTC update (though you might wake them about the toot bot), but you are going to notice them when they make their predictable rounds yet again.
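For the curious, a bot of this kind is about as simple as software gets. The sketch below is illustrative rather than a client for any real service - the post() function is a stand-in for whatever API a given microblog happens to expose - but it captures the entire ambition of an anti-content bot: announce the time, wait an hour, repeat.

```python
# A minimal sketch of an "anti-content" bot: it says nothing new, it merely
# announces the time at a fixed interval. post() is a placeholder for a real
# service's API call; everything here is for illustration only.

import time
from datetime import datetime, timezone

POST_INTERVAL_SECONDS = 60 * 60  # once an hour, like the time bot above

def post(message: str) -> None:
    # Placeholder: a real bot would call the microblog's API here.
    print(f"[posted] {message}")

def run_forever() -> None:
    while True:
        now = datetime.now(timezone.utc)
        post(f"The time is {now:%H:%M} UTC. That is all.")
        time.sleep(POST_INTERVAL_SECONDS)

if __name__ == "__main__":
    run_forever()
```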

Anti-content is not content. But it is familiar.

The thing about humans is that they like what is familiar. It provides a fixed point of reference, a stable framework to build upon, and - not to be underestimated - something to talk about. Stuff happens, stuff changes, but you can rely on these things to remain the same. And because they remain as they are, they can be used to anchor things which have yet to become and/or need a boost to get going.

Or, to rephrase: you can refer to them without accidentally starting a fight. After all, they have nothing to say worth screaming about. They are anti-content. And they are a part of the community, albeit a very small part. They say very little, but every time you see them, they remind you of the familiar.

And now that you have read this, you will never look upon these automated little bots the same way again. Enjoy!

Tuesday, March 28, 2017

Free speech vs rational debate

An interesting thing about the most vocal defenders of free speech at all costs is that they often conflate free speech and rational debate. Which is a strange thing to do - if you argue something with loudness and extreme forwardness, the least that could be expected from you is that you know what you are on about. Yet, somehow, free speech maximalists often show a brutal lack of understanding of the difference between rational debate and free speech.

To illustrate the difference, I shall describe a case where it is not rational to engage in public debate, and where the debate itself has detrimental effects on the society within which it takes place. The debate in question is whether it is the right course of action to exterminate a specific group of people.

For those who belong to this specific group, it is not rational to participate in such debates. The most immediate reason is that you might lose. No matter how unlikely, the mere possibility of losing is reason enough to stay clear of such debates. To the proponents of the extermination policy, your participation in the debate is an additional justification for their point of view. "They can't even defend themselves!" they'd claim, and then move from word to action. Perhaps not immediately, but eventually the final day would come.

The tragic part is that you would lose even if you won. If you won, it would most likely be because you gave reasons for why your extermination is a bad idea. These reasons might be good in and of themselves, but there would be a finite amount of them, and with enough journalistic efficiency these reasons could be summarized into a list. From the very moment the debate ended, this list would constitute the reasons society abstains from exterminating you.

The existence of such a list would constitute an opening for those who favor your extermination. One by one, the proponents could work to undermine these reasons, until they are no longer seen as sufficient reasons for abstaining. The debate would reopen, and you would find yourself in a weaker position than last time around. You would yet again have to defend your right to exist, and you would have to do it using an ever shrinking range of possible arguments in your favor.

Needless to say, this process would continue until there are no reasons left. And then the proponents of your extermination would have won.

This is detrimental not only to the group targeted for extermination, but also to society as a whole. With each round of these debates, society would slip one step closer to enacting genocidal policies. Which, to any decent and moral person, is not a desirable outcome.

The rational thing to do in order to avoid such an outcome is to simply not have these debates. Exorcise them from public discourse, and keep them outside the realm of possible topics. Do not entertain the thoughts, shun those who persist in proposing them, ban them from polite conversation. Keep the opinion marginalized. No good outcome can come from having these debates, and thus the rational thing to do is to simply not have them.

Free speech maximalists want to have these debates anyway, in the name of free speech. But they conflate free speech with rational debate, and as you have seen, there is a very concrete case where these two things are mutually exclusive. If they are to be honest with themselves, they will eventually have to make a choice between one and the other.

If you began reading this post with the opinion that we should have these debates anyway, and still hold that opinion, then I want you to be fully aware of what you are proposing. I fully trust that you will, in your own time and on your own terms, make the rational choice.

Monday, March 6, 2017

What cyborg Harry Potter can teach us about teaching

After revisiting the summary of my master's thesis, I realized that it is rather German. That is to say, it goes on at length to establish some general principle, but then doesn't bother to give examples of how this principle is realized. Which is a style of writing well suited for some purposes, but, let's face it, is also rather annoying. So let's contextualize this general principle for a spell, by relating fan fiction to the subject of history.

The general principle is that people learn by doing things that they are interested in doing. This happens automatically, without the addition of directed conscious effort. When someone does something, the doing of that thing places them in situations and frames of mind which facilitate the process of learning, and the more doing that takes place, the more learning subsequently follows. Being interested brings with it the propensity to do more of it, and to pay attention whilst doing it. It is, in most cases, a self-perpetuating process.

This is rather straightforward, and the biggest drawback with this line of thinking is that it takes too many words to convey, given how straightforward it is. You begin reading, work through the verbiage, and then conclude at the end that it would have been sufficient to just say "you learn by doing". Which is true, but it also goes to show how much effort you have to put in to convey something straightforward. In retrospect, it is obvious, but you have to go through the process before it becomes retrospectively obvious.

Thus, we have what we need to get to work: the general principle of learning by doing, and the notion of retroactive obviousness. Let's move on to fan fiction and the subject of history. Specifically, let's move on to how the notion of 'canon' relates to the teaching of history.

Canon, in the context of fan fiction, denotes a particular set of works which can be considered official or true (as far as fictional depictions are true). In the case of, say, Harry Potter, the books written by Rowling are canonical, and the specific words found within these books carry significance in that they are the source material from which all knowledge of the fictional universe is garnered. Any further discussion about the Harry Potter universe will have to take these books as written, and conform to the limits imposed by Rowling having written them in a specific way instead of another.

Or, to put it another way: it is canonical that Harry Potter is a wizard that attended Hogwarts, a school for magically proficient youngsters. It is, however, not canon that Harry at a young age underwent a series of radical medical procedures which replaced everything but his visible exterior with cybernetic machinery, and that he is a robot that passes for a human child. The former is canon, the latter I just made up. Those who want to talk about what happened in the narrative universe of Harry Potter have to stick to what actually happened in the narrative - which is to say, the source material, as written.

Any particular work of fan fiction set in a particular narrative universe has to be related to the source material, in various ways. The fan work has to cohere with the source material (i.e. be about wizard Harry rather than cyborg Harry), and it has to cohere enough that assumptions from/about the source material carry over to the fan work. The closer to the source material a fan work manages to cohere, the more interesting things it has to say about the canonical narrative universe.

This introduces an element of evaluation to the act of reading fan fiction (and even more so to writing it). The act of reading also becomes an act of comparing - does the fan work cohere with the source material, and if there are inconsistencies, where are they? A critical reader can move back and forth between the different texts to find out whether they cohere, contradict or - more interestingly - pose further questions about the source material that are revealed through the act of writing the particular work in question.

Whether or not a reader actually makes the effort to make such comparisons depends entirely upon their level of interest. But, as we stated at the top of this post, people do the things they are interested in, and it is by doing the things they are interested in that they end up learning what they actually learn.

Thus, those who are interested in fan fiction about Harry Potter will eventually learn the skills associated with comparing a fan work with canonical works, by virtue of following their interest. They will find out which works are considered canonical, which works are not canonical and which works occupy ambiguous gray areas between these two poles. Or how to handle situations where canonical works disagree - such as when the books and their movie adaptations contradict each other. Which canonical authority has preference?

If you are a teacher of history, then these are the very questions you wish your students to engage with. Not about Harry Potter, mind, but about the general validity of narratives told about the past. Which works are canonical, which are not, and what do you do with all the gray sources in between? Which statements about the past can be substantiated with references to the source material, and which are but speculation? How do you position yourself as a critical reader with regards to the source material at hand? What do you do when you encounter a text about a historical equivalent of cyborg Harry? These are questions that practitioners of fan fiction engage with, albeit not always explicitly.

The pedagogical challenge that follows from the general principle that learning follows from doing what you are interested in, is to identify what students are interested in and which skill sets they have developed during the course of following their interests. By doing this, a teacher can utilize the retroactive obviousness inherent in applying what a student already knows to new situations. Rather than restarting from square one, we do something more interesting.

Fortunately, everyone is interested in something.  But that goes without saying.

Obviously.

Sunday, February 26, 2017

Roundabout canons

Every academic discipline has a canon. That is to say, a series of texts that most of those who are active in the field have read, or at least have some sort of working understanding of. The exact composition of these texts varies from field to field (and over time), but at any given moment you can be sure that there is a set of books most practitioners within a particular field of knowledge know about. The canon as a general category, whilst undefined in its particulars, still exists.

It is markedly more defined at local levels. It is especially defined at local sites of education, where there are syllabi that explicitly specify which texts are included in the mandatory coursework. Teachers are expected to know these texts well enough to teach them, and students are expected to read them well enough to mobilize some parts of their content through some sort of practice. Such as writing an essay on just what the mandatory texts have to say.

Invariably, there will be some students who are just not feeling it when it comes to going through the academic motions. Invariably, these students will turn to the internet for an easy way out. Invariably, some of these students will yoink a text from the internet and turn it in as if it were their own.

Thing is. If the texts and/or the subject matter remains the same over the years, patterns will emerge. Students will be faced with the same task of producing some work on a topic, and they will conduct the same web searches year after year. And, if general laziness is a constant, they will find the same first-page results and turn them in, unaware of their participation in an ever more established tradition. [A fun sidenote: I have a few blog posts which receive a boost in traffic two times a year, which coincides very closely with when their subject matter is taught at my local university.]

What I wonder is - how many times does a particular web-copied text need to be turned in before those in charge of grading start to recognize it? Or, phrased another way: how many iterations does it take for these easy-to-find texts to become part of the local canon?
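Purely as an illustration, recognizing such repeats could in principle be mechanized: normalize each submitted text, hash it, and count how often the same fingerprint recurs across the years. The submissions below are invented, and real matching would of course need to be fuzzier than exact hashes, but the sketch shows how quickly a recurring text becomes visible.

```python
# A toy sketch of how a recurring web-copied essay might become recognizable:
# normalize each submission, hash it, and count repeats. The submission texts
# are invented for illustration.

import hashlib
from collections import Counter

def fingerprint(text: str) -> str:
    """Reduce a submission to a hash that ignores case and spacing."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

submissions = [
    "Foucault argues that power is everywhere...",
    "foucault argues that   power is everywhere...",  # same essay, sloppier copy
    "An original reflection on discipline and punishment.",
]

counts = Counter(fingerprint(s) for s in submissions)
for text in submissions:
    if counts[fingerprint(text)] > 1:
        print("Seen this one before:", text[:40])
```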

A canon is wider than merely those lists found in official documents, such as syllabi. Informal inclusion is a very real phenomenon, and when a particular text keeps showing up again and again and again -

Now there is food for thought.

Wednesday, February 22, 2017

Postmodernism, a primer

There has been a lot of talk about postmodernism lately, and the only thing larger than the distaste for it is the confusion about what it actually is. While it might be tempting to label this as a postmodern state of things, it's not. It's just confused, and confusion is not postmodernism. The latter might lead to the former, but that is the extent of the connection between the two.

If you've ever read a textbook that in some way deals with postmodernism, then you've probably encountered the introductory statement that the word consists of two parts - post and modernism. Post- as a prefix means that whatever it is fixed to happened in the past. When it is fixed to modernism, we get a word that means "the stuff that happened after modernism". Modernism came first, then postmodernism - in that order.

There are two main reasons for including introductory remarks of this kind. The first is that it has become tradition and convention at this point, and it's easier to latch on to what has already been established than to be creative. The second is that you cannot treat postmodernism as an entity unto itself - it has to be understood in relation to what came before. If you do not understand modernity, you will not understand postmodernity. The one came from the other, and it could not have happened in any other way.

It is vitally important to underscore this intimate relationship. It is a historical progression which is not merely chronological - the tendencies and practices set in motion in the modern time period kept going in the postmodern time period. They are linked, similar and connected.

The modern project was (and is) one of enlightened critical thinking. Traditional institutions, mainly those of monarchies and churches, were no longer to be seen as the absolute authorities when it came to the truth. Instead of relying on ancient authorities (or very present authorities, as it were), the moderns wanted to rely on science and reason.

An example of this shift from ancient authority to a more modern way of thinking is Galileo and the notion that the Earth goes around the sun. Using the tools at hand, Galileo figured out that Earth is not the center of the solar system. The traditional authorities, who held that the Earth was in fact the center, did not agree, and much ado was made about it. In the end, you know how it all turned out.

This ambition to test things by means of science and reason wasn't limited to one person and one particular way of looking at things. Over time, it became the default mode for everything - everything could be questioned, measured, re-examined and put to the test. Those things that were found to not hold up to the standards of scientific testing were thrown out, and those things that did hold up were expanded upon.

The scientific implications of this are fairly obvious: you can get a whole lot more done if you are allowed to freely use the scientific method, without having to make sure everything you find corresponds to what the authorities want you to say. Science builds on science alone, and its findings are all the more robust for it.

The social implications, however, are less straightforward. If long-held beliefs about the cosmos as a whole could be questioned and challenged, then so could long-held beliefs about things of a smaller and more private nature. If the church was wrong about the Earth being at the center of the solar system, then it might also be wrong about marriage, sexuality, and other social institutions. Everything is up for questioning. Everything.

This process of questioning everything kept going, and over time more and more things that were once taken for granted were put to the task of defending themselves. Everything that was once solid melted away, and what came instead was something completely different. Where once kings and bishops ruled, there are now scientists and bureaucrats. And marketers.

Mind you, this is all part of modernity. This is the part that came before postmodernism became a thing. Postmodernism is what happened after this process had been around for a while and become the status quo.

The thing about questioning everything is that you can't really keep doing it forever. At some point, you arrive at the conclusion that some questions have been answered once and for all, and thus that there is no need to go back to them. You begin to take things for granted, and enshrine them as the way things are supposed to be. There are other, more important things to do than reinventing the wheel. There is an order to things and a tradition to consider, both of which are as they should be. The product of modernity is a new range of authorities which dictate what is to be taken for granted and what is to be questioned.

Postmodernism is a return to the very modern urge to question everything and make present institutions answer for themselves. It is, in essence, a return to the modern impulse to trust reason and science rather than tradition or authority - even if these very same traditions and authorities have used reason and science in the process of becoming what they are. But instead of asking whether the Earth revolves around the sun or not, it asks: why do we do the things we do the way we do them, and might there not be a better way to go about it?

Postmodernism happened after the modern project. Post-modernism. But it is still very modern. It is modernity turned upon itself.

If you, after having read this, are slightly more confused about postmodernism, then that is good. It will have primed you for this next statement:

Academics stopped talking about postmodernism some decades ago, and are baffled at its return to fame in news and popular culture.

As final words, I say only this: its resurgence is not postmodern. It is merely confusing. -

Friday, February 17, 2017

All is good that is good

It is often said that it is impossible to argue about taste. De gustibus non est disputandum. Some people like some things, other people like other things, and no amount of arguing is going to change this one indisputable state of things. This is where it is at, and thus here we are.

Nevertheless, we often find ourselves in situations where we want to convey why we like something. In matters of literal taste, the argument is simple: just present the person we want to convince with a tasting of the good stuff, and let the taste buds do their thing. Either we succeed or we do not; the outcome depends entirely on factors outside our control. Regardless of outcome, the attempt was made.

When it comes to more abstract things, such as music or writing, a similar approach is also available. Give someone a tasting of the music and writing, and see how they react. Either they get it, and your work is done, or they don't get it, and -

It is possible you at this point want to argue why that thing you like is good. Why the poem your friend is utterly indifferent to is actually amazing, why that song owns the sky and everything below it - why they should like it, too.

This situation presents something of a problem. If you really really like something, then its awesomeness is so self-evident and obvious that it is difficult to find some means of reducing it to mere words or communicative motions. No discursive gesture would convey just how good it is, and attempts to convey it anyway often stray into unrelated territories, causing confusion or disagreement. Which, one might reasonably assume, is the opposite of what you wanted to accomplish.

A first move from here might be to simply state that you like the thing. This may or may not be useful information to the other person - it all depends on your particular relationship and suchlike. But it provides a baseline for further attempts to convey the goodness.

A second move might be to say that someone else likes the thing. Preferably, this third person is someone you both like and acknowledge as someone whose opinion matters. If they like it, then there's got to be something to it, right?

A third move might be to make a more generalized claim about mass (or niche) appeal. If it's famous, then it must be good, or it wouldn't be famous; if it's niche, then it must also be good, as it is an expression of the virtues of the niche.

As lines of argument go, these are rather flawed. But they are also very common. They are human.

Thing is. Giving reasons for why things are good or bad is hard. There are no readily available frameworks for it, and those frameworks that do exist require a non-trivial amount of effort to get into. Most of them hide behind camouflage strategies such as the name "literary critique", and get progressively more invisible from there.

Maybe the proper thing to do is to cut our friends some slack. Give them the benefit of the doubt when their eyes get that enthusiastic gleam. -

Wednesday, February 8, 2017

A thought

The strange thing about thoughts is that most of them are irrelevant. You think them, they flow through the mechanisms of cognition, and then nothing. Nothing comes of it. In the grand scheme of things, whatever thought happened in those irrelevant moments could be replaced by any other thought, and nothing would have changed. Thoughts occupy time, and that is about all they do.

Except, of course, when they do more than that.

Thing is. Most thoughts are never recorded. They happen, take place, and are gone. Some of them are important, some are irrelevant, some would make a difference if only they were jotted down somewhere.

But we never get around to thinking we ought to record them. And then they are gone.

Just thought I'd remind you that you still have the option.

Saturday, January 28, 2017

That thesis I wrote about fan fiction and computer games: a summary

As you might recall, I recently wrote a summary of my bachelor's thesis in education, in which I abstracted the nuts and bolts and told you the good stuff that came out of the process. Since then, I managed to write a follow-up master's thesis about fan fiction and computer games, and what follows below is a similar summary. But before we get to the fan fiction and computer games, some general introductory remarks about education are in order.

Schools aim to teach things. This is not just a tautology, but also a mission statement. Every school has a set of explicit goals of what students are to learn during the course of their education. The curriculum states these goals and the way to reach them, often in explicit terms.

Students are graded depending on how well they have achieved these goals. If you ask a lay person about education, they will usually arrive at some sort of correlation between performance and grades. They will also, with varying degrees of explicitness, place the burden of effort upon the students. A well-performing student gets good grades, while a poor-performing student gets bad grades. It is up to the student to make the effort, achieve and perform - school is but the arena wherein such feats are to be accomplished. On graduation day, the student receives a paper which states in objective terms what they know and do not know, and to what degree they know what they know. [I use the word 'student' throughout this post, even though it is framed mostly in terms of pupils. I mention this here to preempt confusion.]

What is in a curriculum matters, as does the way it is taught, and the reasons it is included in the catalogue of things to know. A student becomes what he or she does, shaped by the manner in which these things are done. The way in which something is taught shapes both what is learnt and what kind of person emerges after the process of education is completed.

The core statement of my master's thesis was, as it so often is in the field of pedagogy, taken from Dewey. Loosely paraphrased, it is as follows: education is only effective in so far as it relates to the interests of those doing the learning. Learning is not a guaranteed outcome of partaking in an educational situation; especially not if what counts as "learning" is "absorbing the intended subject matter at hand in this particular learning situation". Though - and Dewey would back me up here - it is very possible that a student might learn that a particular teacher's voice is particularly conducive to falling asleep, and that this might be strategically used for restorative purposes.

We have now introduced the core concepts: educational goals, educational evaluation, the organization of the educational process, and student interest. We are almost ready to get to the fan fiction and computer games part. But first, some more discussion about goals and evaluation.

A common way to test what students know and do not know is, as you might have suspected, tests. The specifics vary, but the general principle is to sit students down in a room and subject them to a number of questions. If they manage to produce answers to these questions, a good grade is given; if they do not, a bad grade is given. This is thought to be a fair and proper way to evaluate what students know or do not know, and thus it is widely used as a basis for determining whether students have in fact achieved the goals of their education or not.

Thing is. These tests only measure whether or not a particular student has mastered the art of responding to test questions. They do not measure competencies outside the scope of the questions asked and the genre of answers deemed appropriate to those questions. They most certainly do not measure whether a student is interested in the narrow and specialized discipline of providing appropriate answers to evaluative tests.

This opens up the possibility that a student might possess the desired skill or knowledge (as expressed in the goals of their particular educational setting), but not the will, desire or capability to express it in the form the test demands. If they find the test boring, they might just outright refuse to participate. If they find the test to be an insult to their intelligence, they might produce answers that deviate from the desired form. Or, if they do not understand the questions, they might simply not know what to do.

If the aim of education is to teach a particular skill, then the evaluation of whether a particular student has acquired this skill or not needs to take into account other expressions than test results alone. There are many ways to the same goal, and educational settings have a tendency to delegitimize those ways that are not explicitly stated in the educational process. And this is where we get to the fan fiction and computer games.

My thesis looked specifically at literacy and the goals associated with the teaching of it. While the specifics of what "literacy" means vary from place to place, the general gist of it tends to be to read a text and act on what is found within. In the case of fiction, it tends to be something along the lines of relating what happens in the narrative to other happenings, be it in the real or the narrative world. In the case of non-fiction, it tends to be along the lines of finding useful information and implementing it in some way. In both cases, reading comprehension is at the fore - if the student can relate the content of the text to other things, then they have displayed literacy.

Fan fiction is a clear example of this. If the educational goal is to teach a kid how to write, then it does not matter if the thing they are writing is fan fiction. The skills they acquire in the process of writing about Harry Potter are the same as when they write about anything else, by virtue of writing being writing. Moreover, as they become more proficient in what they do, they acquire other skills as well: referencing the source material, using it in a faithful way, understanding the limitations imposed by writing in the Harry Potter setting, etc. The more time they spend reading and writing fan fiction, the more time they spend reading and writing - which is an explicit goal in the education of literacy.

The same goes for computer games. Given games of enough complexity, there will come a time when it is necessary to consult a wiki. Whether it be to see how to complete a particular quest, accomplish a particular goal or master a particular mechanic, eventually the wiki will become a reference point. If the educational goal is to teach how to use reference material, then such a natural leap to using reference material is paydirt. It is the desired result.

The point here is not to let kids write fan fiction or play computer games (though it could be). The point is that these are but two examples of ways to reach the stated goals of literacy education - to read, write, and to use various forms of source material for instrumental reasons. The key is to look at what the students are interested in, and then to look at what they do when they do what they are interested in. If they write fan fiction and discuss it with their (online or offline) friends, then this is a great starting point for further literacy education. Similarly, if they frequently alt-tab to a wiki to accomplish a certain goal within a game, then this facility to use text-based resources can be expanded upon. While they do not know that they are learning how to read, write and find useful information, you as a teacher know, and you can use this as a starting point to get them to their intended destination.

The key is to let interest guide the way. Learning happens by doing what you are interested in, and the more avenues you have to act on that interest, the more learning has the potential to happen. It is up to schools to find ways to channel existing interest into learning/doing: by setting up discussions of this week's fan fic output, by gently mentioning that the library has books on the subject that cover things not in the wiki, etc. And, eventually, when their interest in a particular book or game has faded, the opportunity arises to introduce new topics of interest to similarly improve the desired skillsets.

An important aspect of focusing on the interests of the students is to recognize that they do not do what they do in order to attain the stated goals of the syllabi that apply to them. They engage in their interests in the social contexts these interests find themselves in - fan fiction communities, gaming forums, dedicated wikis and so on. These places do not necessarily have the same priorities as the educational settings the students find themselves in. They are different sites of knowledge, which follow different situational rules and might have different ways to go about the same activity. A fan fiction community is understandably focused on producing good fiction, with "good" defined by the genre of, indeed, fan fiction. Fiction written in a school setting is expected to conform to different norms and standards. Even though the activity is the same (writing), the social context differs in such a way that being able to perform in one context does not automatically translate into an ability to perform in the other.

A metapoint is that kids will do what they are interested in doing anyway, regardless of whether these interests are actualized in a school setting or not. Kids are always interested in things, but these things might not be on the curriculum in a form easily translated to the context of their interests. More importantly, there is the larger question of whether they are allowed to express what they know or not. A student who can navigate the subtle genres of fan fiction with ease (and who enjoys literary nuances which require years of practice to appreciate) might simply not give a flying fig about Hemingway, and thus flunk the test on him - and be graded accordingly.

If the goal is to teach literacy, then encourage interests that lead to literacy. Mutatis mutandis for other subjects - find what the students are interested in, and proceed from there. Then allow for expressions of mastery that are not explicitly made to be easily quantifiable. Standardized tests make it easy to compare tests results, but they have a hard time measuring non-standardized knowledge. If such tests are the only allowed means of expressing mastery in a subject, then schools institutionally and needlessly bar many students from expressing their actually existing knowledge in a socially recognized manner.

In the end, it all comes down to one thing: whether the goal is for students to learn, or for them to perform well on tests.

The difference is not subtle.

Tuesday, January 24, 2017

Future history, juxtaposed

Recently, my local university library switched cataloguing systems, from our local Swedish SAB system to the Dewey decimal system. However, the transition was not made all at once, but rather is being made one section at a time. New books are entered into the new system, while old books remain as they are, with some migration from old to new according to rules mostly inscrutable to us students and patrons.

This means that there are two sections for every subject, one old and one new. Interestingly enough, the difference between the two is distinct enough to tell a tale of its own. The old sections mainly contain classics, postmodernism and cyberoptimism. The new sections are, as you might expect, up to date.

Walking from the old sections to the new is akin to walking from the past to the future. However, the future is a very particular future, with a very particular set of events that shaped how things came to pass. We know these events, as we lived through them and have them in living memory. We remember what we did on 9/11 when we heard the news about the twin towers; we remember the aftermath. These things are reflected in the titles of the new shelves, as well they should be.

The old shelves, however, tell a different story. The classics are timeless, and point towards some universal truth or other. The postmodernists do their darnedest to deconstruct settled notions of universality and truth, so as to open up the space to actualizing new universals and new truths - those of our own making, as it were, rather than those we happened to inherit. The cyberoptimists are all enthused about the coming of the computers, and what it could, would, should mean in terms of a better future.

The future was up for grabs, and it was up to people like you and me to make the effort to make it a place worth living.

I suspect the library at some point will complete the transition from the old system to the new one, and that this inadvertent contrast between what was, what could be and what is will become but a memory. But for a little while longer, it will remain possible to observe the difference by physically moving around. Future history, juxtaposed.

It behooves us to notice these things.

Monday, January 23, 2017

Rationally debating nazis is bad

During the course of the discussions surrounding the punching or non-punching of nazis, many references have been made to the notion of rational debate. Words are stronger than fists, as it were, and it is better to use words whenever possible. Especially when considered in the long term - sticks and stones ain't got nothing on the longevity of words.

I do not think the proponents of rational debate understand what it is they are proposing.

To put it in its most brutal form: waking up to a world where parliament is engaged in rational debate regarding the extermination of the Jews would be a nightmare. Especially if they applied all the tools of rationality - weighing the benefits to the costs, comparing different methods of achieving the goal, searching the remaining nazi records for useful information on practical implementation.

These are not things you want to see rationally debated. You want them as far away from the range of available topics of conversation as possible. Ideally, you want the topic so far removed from consideration that even thinking about it becomes an exercise in speculative fiction. It is not a topic for discussion.

Imagine, however, that it was a topic for discussion. Every day. All the time. To such a degree that when you try to go about your business, you are approached by someone who very politely asks if you have considered exterminating the Jews today. They have a pamphlet, you see, and a book of reasons why today is the day to start thinking that, yeah, maybe there are too many Jews around. Maybe they actually are a problem that needs to be solved, once and for all. After all, you've been hearing about it for so long, there might just be something to it.

If you at this point think to yourself that there is no set of circumstances which would make you consider the extermination of Jews to be a good idea on rational grounds, then you have fully understood why rational debate is not an option. The notion of rational debate assumes that the topic at hand is in the best interest of those who participate in the discussion, and if it is in your best interest to see Jews exterminated -

Well then, my friend, I have some bad news for you.

Sunday, January 22, 2017

Punching nazis is good

A strange discussion is taking place at the moment. An anonymous gentleman punched a very public nazi, and the altercation was captured on tape. The video subsequently went viral, and a great many opponents of nazism celebrated the event. Moreover, they encouraged people to do more of it. To, in no uncertain terms, punch more nazis.

Make no mistake. Punching nazis is good. Do more of it.

This part is not strange. Up to this point, things are rather straightforward. Punch, video, viral. This needs no explanation. However, it needs to be mentioned in order to cement your understanding of the strange part.

The strange part is that there seems to be a not insubstantial number of individuals who disagree with the sentiment that punching nazis is a public good that should be encouraged. Who, upon encountering statements in support of further punching, instantly and with vigor object that violence is not an acceptable response to the situation. Following from this, they object that it is equally unacceptable to encourage enthusiastic punching of additional nazis that happen to be within reach.

Why this sudden urge to defend the nazis? What gives?

The short answer is that decent people oppose violence, and thus do not want to encourage it. Punching a nazi is an act of violence, and thus they do not want to encourage such actions. It is a simple principle, and they act on it. It is, in short, the decent thing to do.

Thing is. The very existence of organized nazis who act in public is an act of violence in and of itself. Nazism as an ideology has a very clearly defined goal, and that goal is to make the lives of inferior races a living hell up until the point where state policies can be enacted to systematically eradicate these races. This is the explicit goal, and every ounce of influence accumulated by those who follow this ideology will be used to further this goal. The inferior races are to be purged, to create living space for the master race. Compromise is not an option.

This is what they want. This is what they say they want. This is why they put pictures of literally Hitler on their propaganda material. This is not in any way a secret.

Allowing nazis to go about their business undisturbed has the unintended consequence of allowing them to go about their business. They can hold meetings, distribute propaganda material and recruit more followers. They can go through all the steps required to get from here to their goal, undisturbed. Left to their own devices, they can get shit done.

It might be argued that it would be more prudent to try to reason with them. That words are better than fists. That reasoned debate still has a role to play in this.

My counterargument is that history happened, and we recorded it. It is very possible to find out what the nazis did. The cultural production of whole generations went into hammering in the importance of never allowing what they did happen again. Books, movies, monuments, essays, plays, songs, poems - those who want to know have it within their reach to find out. There is no excuse for not knowing.

Those who, in spite of this, come across the nazi point of view and think it agreeable, have already discarded the lessons of history. They know what they are doing, but they are doing it anyway. Telling them what they already know will not change their minds.

Punching them, however, has the effect of stopping what they are doing. It's hard to organize the second Holocaust when being punched.

And that is the point.

Monday, January 9, 2017

The proper amount of attention

You know those local artists who dedicate their lives to their art? Who work ceaselessly on their projects, utilizing all the time and resources at their disposal in a single-minded pursuit of doing the work. Who, at the end of the day, look back on what they have accomplished and deep within their souls know that the world needs more of it. Who live and breathe art as surely as everyone else breathes air.

You know the kind. Those who never actually succeed in getting anywhere, and at most impact their most immediate neighbors. Those who get chosen when the producers of local television scrape the barrel for what to feature next, and whose daytime programs have audiences counted in double digits.

You have seen those shows. Right before you zap over to another channel, because who cares, right?

I want you to have this category of people as a template. They work hard, are earnest in their efforts, and get absolutely no recognition for it. They are, for all intents and purposes, literal nobodies.

This is the template for how to treat neo-nazis (and their alt right alter egos). This is the proper amount of attention to give them. This is the baseline.

If you are in tune with the current zeitgeist, you might instinctively think that there is some element of free speech at play here. Resist this instinct - the biggest problem with neo-nazis is not that people have not heard their arguments. On the contrary: an entire generation across a whole continent got to hear those arguments point blank, and wrote extensively about why that particular ideology is a bad idea all around. The message has been heard; spreading it even further would not bring any additional insight into the present condition.

Ask instead why the issue of free speech is actualized at the mention of neo-nazis rather than at the mention of the ineffectual local artist. Examine the assumptions at work in that line of thinking, and put some critical distance between it and yourself.

Then go find one of those local artists. Chances are they actually do have useful insights into the present condition, waiting for someone to notice them.